A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
1 INTRODUCTION. The inference-time computational demands of deep neural networks (DNNs) are increasing, owing to the "going deeper" (Szegedy et al., 2015) strategy for improving accuracy: as a DNN gets deeper, it progressively gains the ability to learn higher-level, complex representations. This strategy has enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012) and speech recognition (Hinton et al., 2012), at the price of costly inferences. For instance, at 4x the inference cost, a 56-layer ResNet (He et al., 2016) improved the Top-1 accuracy on ImageNet by 19% over the 8-layer AlexNet. This trend continued with the 57-layer state-of-the-art EfficientNet (Tan & Le, 2019): it improved the accuracy by 10% over ResNet, with 9x costlier inferences. The accuracy improvements stem from the fact that deeper networks fix the mistakes of shallow ones (Huang et al., 2018). This implies that some samples, which are already correctly classified by shallow networks, do not necessitate the extra complexity. This observation has motivated research on input-adaptive mechanisms, in particular multi-exit architectures (Teerapittayanon et al., 2016; Huang et al., 2018; Kaya et al., 2019; Hu et al., 2020). Multi-exit architectures save computation by making input-specific decisions about bypassing the remaining layers once the model becomes confident, and are orthogonal to techniques that achieve savings by permanently modifying the model (Li et al., 2016; Banner et al., 2018; Han et al., 2015; Taylor et al., 2018). Figure 1 illustrates how a multi-exit model (Kaya et al., 2019), based on a standard VGG-16 architecture, correctly classifies a selection of test images from 'Tiny ImageNet' before the final layer.
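The early-exit mechanism described above can be sketched in a few lines. This is a minimal illustration under our own assumptions: `blocks` and `exits` are hypothetical stand-ins for a model's internal layers and side classifiers, and the confidence-threshold stopping rule is one common criterion, not necessarily the one used by every architecture cited here.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_exit_inference(x, blocks, exits, threshold=0.9):
    """Run blocks sequentially; stop at the first internal classifier
    whose top-1 confidence clears the threshold.
    Returns (predicted_class, exit_index)."""
    h = x
    for i, (block, exit_clf) in enumerate(zip(blocks, exits)):
        h = block(h)
        probs = softmax(exit_clf(h))
        if probs.max() >= threshold:      # confident enough: exit early
            return int(probs.argmax()), i
    return int(probs.argmax()), len(blocks) - 1   # fell through to the last exit
```

On a "typical" sample the first side classifier is already confident and the remaining blocks are skipped, which is exactly the saving a slowdown attack tries to negate.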
We see that more typical samples, which have more supporting examples in the training set, require less depth and, therefore, less computation. It is unknown whether the computational savings provided by multi-exit architectures are robust against adversarial pressure. Prior research showed that DNNs are vulnerable to a wide range of attacks involving imperceptible input perturbations (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2016; Hu et al., 2020). Considering that a multi-exit model provides no computational savings on a worst-case input, we ask: Can the savings from multi-exit models be maliciously negated by input perturbations? As some natural inputs do require the full depth of the model, it may be possible to craft adversarial examples that delay the correct decision; it is unclear, however, how many inputs can be delayed with imperceptible perturbations. Furthermore, it is unknown whether universal versions of these adversarial examples exist, whether the examples transfer across multi-exit architectures and datasets, or whether existing defenses (e.g., adversarial training) are effective against slowdown attacks. Threat Model. We consider a new threat against DNNs, analogous to the denial-of-service (DoS) attacks that have been plaguing the Internet for decades. By imperceptibly perturbing the input to trigger this worst-case behavior, the adversary aims to slow down the inferences and increase the cost of using the DNN. This is an important threat for many practical applications that impose strict limits on the responsiveness and resource usage of DNN models (e.g., in the Internet-of-Things (Taylor et al., 2018)), because the adversary could push the victim outside these limits. For example, against a commercial image classification system, such as Clarifai.com, a slowdown attack might waste valuable computational resources.
Against a model partitioning scheme, such as Big-Little (De Coninck et al., 2015), it might introduce network latency by forcing excessive transmissions between local and remote models. A slowdown attack aims to force the victim to do more work than the adversary, e.g., by amplifying the latency needed to process the sample or by crafting reusable perturbations. The adversary may have to achieve this with incomplete information about the targeted multi-exit architecture, the victim's training data, or the classification task (see discussion in Appendix A). Our Contributions. To the best of our knowledge, we conduct the first study of the robustness of multi-exit architectures against adversarial slowdowns. To this end, we find that examples crafted by prior evasion attacks (Madry et al., 2017; Hu et al., 2020) fail to bypass the victim model's early exits, and we show that an adversary can adapt such attacks to the goal of model slowdown by modifying the objective function. We call the resulting attack DeepSloth. We also propose an efficacy metric for comparing slowdowns across different multi-exit architectures. We experiment with three generic multi-exit DNNs (based on VGG-16, ResNet-56, and MobileNet) (Kaya et al., 2019) and a specially designed multi-exit architecture, MSDNets (Huang et al., 2018), on two popular image classification benchmarks (CIFAR-10 and Tiny ImageNet). We find that DeepSloth reduces the efficacy of multi-exit DNNs by 90-100%, i.e., the perturbations render nearly all early exits ineffective. In a scenario typical for IoT deployments, where the model is partitioned between edge devices and the cloud, our attack amplifies the latency by 1.5-5x, negating the benefits of model partitioning. We also show that it is possible to craft a universal DeepSloth perturbation, which can slow down the model on either all inputs or a class of inputs.
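The idea of adapting an evasion attack's objective to slowdown can be made concrete with a small sketch. This is not the exact DeepSloth loss, only a minimal stand-in capturing the intuition: instead of maximizing error at the final layer, the attacker pushes every exit's output toward the uniform distribution so that no confidence-based stopping criterion fires. The `grad_fn` in the PGD-style step is an assumed black box, supplied in practice by an autodiff framework.

```python
import numpy as np

def slowdown_loss(exit_probs):
    """Sum over all exits of KL(p || uniform): minimizing this drives
    every internal classifier toward maximum uncertainty, so that a
    confidence-based early exit never triggers."""
    total = 0.0
    for p in exit_probs:
        u = 1.0 / p.size                       # uniform probability
        total += float(np.sum(p * (np.log(p + 1e-12) - np.log(u))))
    return total

def pgd_slowdown_step(x, grad_fn, x_orig, eps=8/255, alpha=2/255):
    """One L-inf PGD step descending the slowdown loss; grad_fn(x) is
    assumed to return d slowdown_loss / dx for the victim model."""
    x_adv = x - alpha * np.sign(grad_fn(x))
    return np.clip(x_adv, x_orig - eps, x_orig + eps)   # stay in the eps-ball
```

A confident exit (e.g., probabilities [0.99, 0.01]) incurs a high loss, while a maximally uncertain exit incurs near-zero loss, so iterating the step delays the sample past every side branch.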
While more constrained, this attack still reduces the efficacy by 5-45%. Further, we observe that DeepSloth can be effective in some black-box scenarios, where the attacker has limited knowledge about the victim. Finally, we show that a standard defense against adversarial samples, adversarial training, is inadequate against slowdowns. Our results suggest that further research will be required to protect multi-exit architectures against this emerging security threat. 2 RELATED WORK. Adversarial Examples and Defenses. Prior work on adversarial examples has shown that DNNs are vulnerable to test-time input perturbations (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2017; Carlini & Wagner, 2017; Madry et al., 2018). An adversary who wants to maximize a model's error on specific test-time samples can introduce human-imperceptible perturbations to these samples. Moreover, an adversary can also exploit a surrogate model for launching the attack and still hurt an unknown victim (Athalye et al., 2018; Tramèr et al., 2017b; Inkawhich et al., 2019). This transferability leads to adversarial examples in more practical black-box scenarios. Although many defenses (Kurakin et al., 2016; Xu et al., 2017; Song et al., 2018; Liao et al., 2018; Lecuyer et al., 2019) have been proposed against this threat, adversarial training (AT) has become the frontrunner (Madry et al., 2018). In Sec 5, we evaluate the vulnerability of multi-exit DNNs to adversarial slowdowns in white-box and black-box scenarios. In Sec 6, we show that standard AT and its simple adaptation to our perturbations are not sufficient for preventing slowdown attacks. Efficient Input-Adaptive Inference. Recent input-adaptive DNN architectures have brought two seemingly distant goals closer: achieving both high predictive quality and computational efficiency.
There are two types of input-adaptive DNNs: adaptive neural networks (AdNNs) and multi-exit architectures. During inference, AdNNs (Wang et al., 2018; Figurnov et al., 2017) dynamically skip a certain part of the model to reduce the number of computations. This mechanism can be used only for ResNet-based architectures, as they facilitate skipping within a network. On the other hand, multi-exit architectures (Teerapittayanon et al., 2016; Huang et al., 2018; Kaya et al., 2019) introduce multiple side branches, or early exits, to a model. During inference on an input sample, these models can preemptively stop the computation altogether once the stopping criteria are met at one of the branches. Kaya et al. (2019) have also identified that standard, non-adaptive DNNs are susceptible to overthinking, i.e., their inability to stop computation leads to inefficient inferences on many inputs. Haque et al. (2020) presented attacks specifically designed to reduce the energy efficiency of AdNNs using adversarial input perturbations. Our work, however, studies a new threat model in which an adversary causes slowdowns on multi-exit architectures. By imperceptibly perturbing the inputs, our attacker can (i) introduce network latency to an infrastructure that utilizes multi-exit architectures and (ii) waste the victim's computational resources. To quantify this vulnerability, we define a new metric to measure the impact of adversarial input perturbations on different multi-exit architectures (Sec 3). In Sec 5, we also study practical attack scenarios and the transferability of adversarial input perturbations crafted by our attacker. Moreover, we discuss potential defense mechanisms against this vulnerability by proposing a simple adaptation of adversarial training (Sec 6). To the best of our knowledge, our work is the first systematic study of this new vulnerability. Model Partitioning.
Model partitioning has been proposed to bring DNNs to resource-constrained devices (De Coninck et al., 2015; Taylor et al., 2018). These schemes split a multi-exit model into sequential components and deploy them on separate endpoints, e.g., a small, local on-device part and a large, cloud-based part. For bringing DNNs to the Internet of Things (IoT), partitioning is instrumental, as it reduces the transmissions between endpoints, a major bottleneck. In Sec 5.1, in a partitioning scenario, we show that our attack can force excessive transmissions. 3 EXPERIMENTAL SETUP. Datasets. We use two datasets: CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Tiny). For testing the cross-domain transferability of our attacks, we use the CIFAR-100 dataset. Architectures and Hyper-parameters. To demonstrate that the vulnerability to adversarial slowdowns is common among multi-exit architectures, we experiment on two recent techniques: Shallow-Deep Networks (SDNs) (Kaya et al., 2019) and MSDNets (Huang et al., 2018). These architectures were designed for different purposes: SDNs are generic and can convert any DNN into a multi-exit model, while MSDNets are custom-designed for efficiency. We evaluate an MSDNet architecture (6 exits) and three SDN architectures, based on VGG-16 (Simonyan & Zisserman, 2014) (14 exits), ResNet-56 (He et al., 2016) (27 exits), and MobileNet (Howard et al., 2017) (14 exits). Metrics. We define the early-exit capability (EEC) curve of a multi-exit model to indicate the fraction of the test samples that exit early at a specific fraction of the model's full inference cost. Figure 2 shows the EEC curves of our SDNs on Tiny ImageNet, assuming that the computation stops when there is a correct classification at an exit point. For example, the VGG-16-based SDN model can correctly classify ~50% of the samples using ~50% of its full cost.
Note that this stopping criterion is impractical; in Sec 4, we discuss practical ones. We define the early-exit efficacy, or efficacy in short, to quantify a model's ability to utilize its exit points. The efficacy of a multi-exit model is the area under its EEC curve, estimated via the trapezoidal rule. A model's efficacy is close to the ideal value of 1 when the computation stops very early for most input samples; models that do not use their early exits have 0 efficacy. A model with low efficacy generally exhibits a higher latency; in a partitioned model, low efficacy will cause more input transmissions to the cloud, and the latency is further amplified by the network round trips. A multi-exit model's efficacy and accuracy are dictated by its stopping criteria, which we discuss in the next section. As for the classification performance, we report the Top-1 accuracy on the test data.
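As a concrete illustration, the efficacy metric just defined, the area under the EEC curve estimated with the trapezoidal rule, can be computed as follows; the variable names are ours, not the paper's.

```python
import numpy as np

def early_exit_efficacy(cost_fracs, cum_exit_fracs):
    """Efficacy = area under the early-exit capability (EEC) curve.
    cost_fracs[i]: fraction of the full inference cost at exit i;
    cum_exit_fracs[i]: cumulative fraction of test samples that have
    stopped by that exit. Trapezoidal rule over the sorted points."""
    x = np.asarray(cost_fracs, dtype=float)
    y = np.asarray(cum_exit_fracs, dtype=float)
    order = np.argsort(x)
    x, y = x[order], y[order]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
```

A model whose samples all exit at the first, nearly free exit approaches efficacy 1; a model that never exits early has efficacy 0, matching the extremes described above.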
This paper studies adversarial attacks and defenses for adaptive multi-exit networks. Adaptive multi-exit networks are, by themselves, a fairly new and under-studied topic, let alone the adversarial study on top of them. The paper proposes a simple-yet-effective DeepSloth attack based on a layer-wise loss function. It also proposes an efficacy metric for better evaluation. The experiments are conducted on four multi-exit networks and two datasets: CIFAR-10 and Tiny ImageNet. The appendix additionally evaluates attacks under different norms. The results on white-box and black-box attacks demonstrate the effectiveness: DeepSloth not only hurts the accuracy (also achieved by baselines) but also hurts the efficacy (only achieved by DeepSloth). The authors further consider two practical settings: 1) model partitioning in an IoT scenario, and 2) a universal attack across a dataset. At the end, the authors adapt AT to multi-exit networks and demonstrate that AT is effective in general and that combining it with DeepSloth perturbations further improves robustness.
Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction
1 INTRODUCTION. Stochastic gradient Monte Carlo methods (Welling & Teh, 2011; Chen et al., 2014; Li et al., 2016) are the gold standard for Bayesian inference in deep learning due to their theoretical guarantees in uncertainty quantification (Vollmer et al., 2016; Chen et al., 2015) and non-convex optimization (Zhang et al., 2017). However, despite their scalability with respect to the data size, their mixing rates are often extremely slow for complex deep neural networks with rugged energy landscapes (Li et al., 2018). To speed up the convergence, several techniques have been proposed in the literature to accelerate the exploration of multiple modes on the energy landscape, for example, dynamic temperatures (Ye et al., 2017) and cyclic learning rates (Zhang et al., 2020), to name a few. However, such strategies only explore contiguously a limited region around a few informative modes. Inspired by the successes of replica exchange, also known as parallel tempering, in traditional Monte Carlo methods (Swendsen & Wang, 1986; Earl & Deem, 2005), reSGLD (Deng et al., 2020) uses multiple processes based on stochastic gradient Langevin dynamics (SGLD), where interactions between different SGLD chains are conducted in a manner that encourages large jumps. In addition to the ideal utilization of parallel computation, the resulting process is able to jump to more informative modes for more robust uncertainty quantification. However, the noisy energy estimators in mini-batch settings lead to a large bias in the naïve swaps, and a large correction is required to reduce the bias, which yields few effective swaps and insignificant accelerations. Therefore, reducing the variance of the noisy energy estimators becomes essential for speeding up the convergence. A long-standing technique for variance reduction is the control variates method.
The key to reducing the variance is to properly design correlated control variates so as to counteract some of the noise. In this direction, Dubey et al. (2016) and Xu et al. (2018) proposed to update the control variate periodically for the stochastic gradient estimators, and Baker et al. (2019) studied the construction of control variates using local modes. Despite the advantages in near-convex problems, a natural discrepancy between theory (Chatterji et al., 2018; Xu et al., 2018; Zou et al., 2019b) and practice (He et al., 2016; Devlin et al., 2019) is whether we should avoid the gradient noise in non-convex problems. To fill in this gap, we focus only on the variance reduction of the noisy energy estimators to exploit the theoretical accelerations, and no longer consider the variance reduction of the noisy gradients, so that the empirical experience from stochastic gradient descent with momentum (M-SGD) can be naturally imported. In this paper we propose the variance-reduced replica exchange stochastic gradient Langevin dynamics (VR-reSGLD) algorithm to accelerate convergence by reducing the variance of the noisy energy estimators. This algorithm not only shows the potential of exponential acceleration via much more effective swaps in the non-asymptotic analysis but also demonstrates remarkable performance in practical tasks where a limited time budget is available, while other methods (Xu et al., 2018; Zou et al., 2019a) may only work well when the dynamics are sufficiently mixed and the discretization error becomes a major component. Moreover, the existing discretization error bounds for Langevin-based Markov jump processes (Chen et al., 2019; Deng et al., 2020; Futami et al., 2020) depend exponentially on time due to the limitation of Grönwall's inequality. To avoid such a crude estimate, we consider the generalized Girsanov theorem and a change of Poisson measure.
As a result, we obtain a much tighter discretization error that depends only polynomially on time. Empirically, we test the algorithm through extensive experiments and achieve state-of-the-art performance in both optimization and uncertainty estimates. 2 PRELIMINARIES. A common problem in Bayesian inference is simulation from a posterior $P(\beta|X) \propto P(\beta)\prod_{i=1}^N P(x_i|\beta)$, where $P(\beta)$ is a proper prior, $\prod_{i=1}^N P(x_i|\beta)$ is the likelihood function, and $N$ is the number of data points. When $N$ is large, standard Langevin dynamics is too costly in evaluating the gradients. To tackle this issue, stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) was proposed to make the algorithm scalable by approximating the gradient through a mini-batch $B$ of size $n$:
$$\beta_k = \beta_{k-1} - \eta_k \frac{N}{n}\sum_{i\in B_k} \nabla L(x_i|\beta_{k-1}) + \sqrt{2\eta_k \tau}\,\xi_k, \qquad (1)$$
where $\beta_k \in \mathbb{R}^d$, $\tau$ denotes the temperature, $\eta_k$ is the learning rate at iteration $k$, $\xi_k$ is a standard Gaussian vector, and $L(\cdot) := -\log P(\beta|X)$ is the energy function. SGLD is known to converge weakly to the stationary Gibbs measure $\pi_\tau(\beta) \propto \exp(-L(\beta)/\tau)$ as $\eta_k$ decays to 0 (Teh et al., 2016). The temperature $\tau$ is the key to accelerating the computations in multi-modal distributions. On the one hand, a high temperature flattens the Gibbs distribution $\exp(-L(\beta)/\tau)$ (see the red curve in Fig. 1(a)) and accelerates mixing by facilitating exploration of the whole domain, but the resulting distribution becomes much less concentrated around the global optima. On the other hand, a low temperature exploits the local region rapidly; however, it may cause the particles to stick in a local region for an exponentially long time, as shown in the blue curve in Fig. 1(a, b). To bridge the gap between global exploration and local exploitation, Deng et al.
(2020) proposed the replica exchange SGLD algorithm (reSGLD), which consists of a low-temperature SGLD to encourage exploitation and a high-temperature SGLD to support exploration:
$$\beta^{(1)}_k = \beta^{(1)}_{k-1} - \eta_k \frac{N}{n}\sum_{i\in B_k} \nabla L(x_i|\beta^{(1)}_{k-1}) + \sqrt{2\eta_k \tau^{(1)}}\,\xi^{(1)}_k$$
$$\beta^{(2)}_k = \beta^{(2)}_{k-1} - \eta_k \frac{N}{n}\sum_{i\in B_k} \nabla L(x_i|\beta^{(2)}_{k-1}) + \sqrt{2\eta_k \tau^{(2)}}\,\xi^{(2)}_k,$$
where the invariant measure is known to be $\pi(\beta^{(1)}, \beta^{(2)}) \propto \exp\big(-\frac{L(\beta^{(1)})}{\tau^{(1)}} - \frac{L(\beta^{(2)})}{\tau^{(2)}}\big)$ as $\eta_k \to 0$ and $\tau^{(1)} < \tau^{(2)}$. Moreover, the two processes may swap positions to allow tunneling between different modes. To avoid inducing a large bias in mini-batch settings, a corrected swapping rate $\widehat{S}$ is developed:
$$\widehat{S} = \exp\left\{\Big(\frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}}\Big)\Big(\frac{N}{n}\sum_{i\in B_k} L(x_i|\beta^{(1)}_k) - \frac{N}{n}\sum_{i\in B_k} L(x_i|\beta^{(2)}_k) - \Big(\frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}}\Big)\frac{\hat{\sigma}^2}{F}\Big)\right\},$$
where $\hat{\sigma}^2$ is an estimator of the variance of $\frac{N}{n}\sum_{i\in B_k} L(x_i|\beta^{(1)}_k) - \frac{N}{n}\sum_{i\in B_k} L(x_i|\beta^{(2)}_k)$ and $F$ is the correction factor to balance between acceleration and bias. In other words, the parameters switch positions from $(\beta^{(1)}_k, \beta^{(2)}_k)$ to $(\beta^{(2)}_k, \beta^{(1)}_k)$ with probability $r(1\wedge \widehat{S})\eta_k$, where the constant $r$ is the swapping intensity and can be set to $\frac{1}{\eta_k}$ for simplicity. From a probabilistic point of view, reSGLD is a discretization scheme of the replica exchange Langevin diffusion (reLD) in mini-batch settings. Given a smooth test function $f$ and a swapping-rate function $S$, the infinitesimal generator $\mathcal{L}_S$ associated with the continuous-time reLD follows
$$\mathcal{L}_S f(\beta^{(1)}, \beta^{(2)}) = -\langle \nabla_{\beta^{(1)}} f(\beta^{(1)}, \beta^{(2)}), \nabla L(\beta^{(1)})\rangle - \langle \nabla_{\beta^{(2)}} f(\beta^{(1)}, \beta^{(2)}), \nabla L(\beta^{(2)})\rangle + \tau^{(1)}\Delta_{\beta^{(1)}} f(\beta^{(1)}, \beta^{(2)}) + \tau^{(2)}\Delta_{\beta^{(2)}} f(\beta^{(1)}, \beta^{(2)}) + rS(\beta^{(1)}, \beta^{(2)})\cdot\big(f(\beta^{(2)}, \beta^{(1)}) - f(\beta^{(1)}, \beta^{(2)})\big),$$
where the last term arises from swaps and $\Delta_{\beta^{(\cdot)}}$ is the Laplace operator with respect to $\beta^{(\cdot)}$.
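The corrected swap just described can be sketched in code. This is a toy illustration under our own assumptions, not the authors' implementation: `e1` and `e2` stand for the mini-batch energy estimates (N/n) * sum_i L(x_i | beta) of the two chains, and the sigma2_hat / F term is the bias correction for the noisy energies.

```python
import numpy as np

def corrected_swap(beta1, beta2, e1, e2, tau1, tau2, sigma2_hat, F, rng):
    """One reSGLD swap attempt with the bias-corrected rate.
    Returns the (possibly exchanged) chain positions and a swap flag."""
    d = 1.0 / tau1 - 1.0 / tau2               # tau1 < tau2, so d > 0
    log_S = d * (e1 - e2 - d * sigma2_hat / F)
    # Accept with probability min(1, S); compare in log space to
    # avoid overflowing exp for large positive log_S.
    if log_S >= 0 or rng.random() < np.exp(log_S):
        return beta2, beta1, True             # chains exchange positions
    return beta1, beta2, False
```

A swap is likely when the low-temperature chain sits at much higher energy than the high-temperature one; a larger noise estimate `sigma2_hat` shrinks the rate, trading acceleration for reduced bias.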
Note that the infinitesimal generator is closely related to Dirichlet forms in characterizing the evolution of a stochastic process. By standard calculations in Markov semigroups (Chen et al., 2019), the Dirichlet form $\mathcal{E}_S$ associated with the infinitesimal generator $\mathcal{L}_S$ follows
$$\mathcal{E}_S(f) = \underbrace{\int \Big(\tau^{(1)}\|\nabla_{\beta^{(1)}} f(\beta^{(1)}, \beta^{(2)})\|^2 + \tau^{(2)}\|\nabla_{\beta^{(2)}} f(\beta^{(1)}, \beta^{(2)})\|^2\Big)\,d\pi(\beta^{(1)}, \beta^{(2)})}_{\text{vanilla term } \mathcal{E}(f)} + \underbrace{\frac{r}{2}\int S(\beta^{(1)}, \beta^{(2)})\cdot\big(f(\beta^{(2)}, \beta^{(1)}) - f(\beta^{(1)}, \beta^{(2)})\big)^2\,d\pi(\beta^{(1)}, \beta^{(2)})}_{\text{acceleration term}}, \qquad (2)$$
which leads to a strictly positive acceleration under mild conditions and is crucial for the exponentially accelerated convergence in the $W_2$ distance (see Fig. 1(c)). However, the acceleration depends on the swapping-rate function $S$ and becomes much smaller given a noisy estimate of $\frac{N}{n}\sum_{i\in B} L(x_i|\beta)$, due to the demand for large corrections to reduce the bias. 3 VARIANCE REDUCTION IN REPLICA EXCHANGE STOCHASTIC GRADIENT LANGEVIN DYNAMICS. The desire to obtain more effective swaps and larger accelerations drives us to design more efficient energy estimators. A naïve idea would be to apply a large batch size $n$, which reduces the variance of the noisy energy estimator proportionally. However, this comes with significantly increased memory overhead and computation, and is therefore inappropriate for big data problems. A natural idea for proposing more effective swaps is to reduce the variance of the noisy energy estimator $L(B|\beta^{(h)}) = \frac{N}{n}\sum_{i\in B} L(x_i|\beta^{(h)})$ for $h \in \{1, 2\}$. Considering an unbiased estimator $L(B|\hat{\beta}^{(h)})$ for $\sum_{i=1}^N L(x_i|\hat{\beta}^{(h)})$ and a constant $c$, we see that a new estimator $\widetilde{L}(B|\beta^{(h)})$, which follows
$$\widetilde{L}(B|\beta^{(h)}) = L(B|\beta^{(h)}) + c\Big(L(B|\hat{\beta}^{(h)}) - \sum_{i=1}^N L(x_i|\hat{\beta}^{(h)})\Big), \qquad (3)$$
is still an unbiased estimator for $\sum_{i=1}^N L(x_i|\beta^{(h)})$.
By decomposing the variance, we have
$$\mathrm{Var}\big(\widetilde{L}(B|\beta^{(h)})\big) = \mathrm{Var}\big(L(B|\beta^{(h)})\big) + c^2\,\mathrm{Var}\big(L(B|\hat{\beta}^{(h)})\big) + 2c\,\mathrm{Cov}\big(L(B|\beta^{(h)}), L(B|\hat{\beta}^{(h)})\big).$$
In this case, $\mathrm{Var}(\widetilde{L}(B|\beta^{(h)}))$ achieves the minimum variance $(1-\rho^2)\,\mathrm{Var}(L(B|\beta^{(h)}))$ at $c^\star := -\frac{\mathrm{Cov}(L(B|\beta^{(h)}),\, L(B|\hat{\beta}^{(h)}))}{\mathrm{Var}(L(B|\hat{\beta}^{(h)}))}$, where $\mathrm{Cov}(\cdot, \cdot)$ denotes the covariance and $\rho$ is the correlation coefficient of $L(B|\beta^{(h)})$ and $L(B|\hat{\beta}^{(h)})$. To propose a correlated control variate, we follow Johnson & Zhang (2013) and update $\hat{\beta}^{(h)} = \beta^{(h)}_{m\lfloor k/m\rfloor}$ every $m$ iterations. Moreover, the optimal $c^\star$ is often unknown in practice. To handle this issue, a well-known solution (Johnson & Zhang, 2013) is to fix $c = -1$ given a high correlation $|\rho|$ of the estimators; we then present the VR-reSGLD algorithm in Algorithm 1. Since the exact variance for correcting the stochastic swapping rate is unknown and even time-varying, we follow Deng et al. (2020) and propose to use stochastic approximation (Robbins & Monro, 1951) to adaptively update the unknown variance. Variants of VR-reSGLD. The number of iterations $m$ between updates of the control variate $\hat{\beta}^{(h)}$ gives rise to a trade-off between computation and variance reduction. A small $m$ introduces a highly correlated control variate at the cost of expensive computations; a large $m$, however, may yield a less correlated control variate, and setting $c = -1$ fails to reduce the variance. In the spirit of the adaptive variance in Deng et al. (2020) for estimating the unknown variance, we explore the idea of an adaptive coefficient $\tilde{c}_k = (1-\gamma_k)\tilde{c}_{k-m} + \gamma_k c_k$ such that the unknown optimal $c^\star$ is well approximated. We present the adaptive VR-reSGLD in Algorithm 2 in Appendix E.2 and show empirically later that the adaptive VR-reSGLD leads to a significant improvement over VR-reSGLD for less correlated estimators.
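The variance decomposition above can be checked numerically. The toy example below (our construction, not the paper's) estimates E[X^2] for X ~ N(1, 1) and uses X itself, whose mean is known exactly, as the correlated control variate with the optimal coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)

# Plain Monte Carlo estimator: samples of X^2 for X ~ N(1, 1), E[X^2] = 2.
x = rng.normal(1.0, 1.0, 100_000)
plain = x ** 2

# Control variate: X with known mean E[X] = 1. Optimal coefficient
# c* = -Cov(X^2, X) / Var(X), mirroring the decomposition above.
c_star = -np.cov(plain, x)[0, 1] / np.var(x)

# Adjusted samples: still unbiased for E[X^2], but lower variance.
adjusted = plain + c_star * (x - 1.0)
```

Here Cov(X^2, X) = 2 and Var(X) = 1, so c* is about -2 and the sample variance drops from about 6 to about 2, i.e., by the factor 1 - rho^2 from the decomposition.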
A parallel line of research exploits the SAGA algorithm (Defazio et al., 2014) for variance reduction. Despite its highly effective variance reduction (Chatterji et al., 2018), the SAGA type of sampling algorithm requires excessive memory storage of $O(Nd)$, which is too costly for big data problems. Therefore, we leave the study of a lightweight SAGA algorithm, inspired by Harikandeh et al. (2015) and Zhou et al. (2019), for future work. Related work. Although our VR-reSGLD is, in spirit, similar to VR-SGLD (Dubey et al., 2016; Xu et al., 2018), it differs from VR-SGLD in two aspects. First, VR-SGLD conducts variance reduction on the gradient and only shows promise for nearly log-concave distributions or when the Markov process is sufficiently converged; our VR-reSGLD, however, solely focuses on the variance reduction of the energy estimator to propose more effective swaps, and therefore we can import the empirical experience in hyper-parameter tuning from M-SGD to our proposed algorithm. Second, VR-SGLD does not accelerate the continuous-time Markov process but only focuses on reducing the discretization error; VR-reSGLD possesses a larger acceleration term in the Dirichlet form (2) and shows the potential to exponentially speed up the convergence of the continuous-time process in the early stage, in addition to the improvement on the discretization error. In other words, our algorithm is not only theoretically sound but also more empirically appealing for a wide variety of problems in non-convex learning. Algorithm 1: Variance-reduced replica exchange stochastic gradient Langevin dynamics (VR-reSGLD). The learning rate and temperature can be set to be dynamic to speed up the computations. A larger smoothing factor $\gamma$ captures the trend better but becomes less robust. $T$ is the thinning factor to avoid a cumbersome system.
Input: the initial parameters $\beta^{(1)}_0$ and $\beta^{(2)}_0$, learning rate $\eta$, temperatures $\tau^{(1)}$ and $\tau^{(2)}$, correction factor $F$, and smoothing factor $\gamma$.
repeat
  Parallel sampling. Randomly pick a mini-batch set $B_k$ of size $n$:
  $$\beta^{(h)}_k = \beta^{(h)}_{k-1} - \eta\frac{N}{n}\sum_{i\in B_k}\nabla L(x_i|\beta^{(h)}_{k-1}) + \sqrt{2\eta\tau^{(h)}}\,\xi^{(h)}_k, \quad \text{for } h\in\{1,2\}. \qquad (4)$$
  Variance-reduced energy estimators. Update $\hat{L}^{(h)} = \sum_{i=1}^N L\big(x_i\,\big|\,\beta^{(h)}_{m\lfloor k/m\rfloor}\big)$ every $m$ iterations:
  $$\widetilde{L}(B_k|\beta^{(h)}_k) = \frac{N}{n}\sum_{i\in B_k}\Big[L(x_i|\beta^{(h)}_k) - L\big(x_i\,\big|\,\beta^{(h)}_{m\lfloor k/m\rfloor}\big)\Big] + \hat{L}^{(h)}, \quad \text{for } h\in\{1,2\}. \qquad (5)$$
  if $k \bmod m = 0$ then
    Update $\tilde{\sigma}^2_k = (1-\gamma)\tilde{\sigma}^2_{k-m} + \gamma\sigma^2_k$, where $\sigma^2_k$ is an estimate of $\mathrm{Var}\big(\widetilde{L}(B_k|\beta^{(1)}_k) - \widetilde{L}(B_k|\beta^{(2)}_k)\big)$.
  end if
  Bias-reduced swaps. Swap $\beta^{(1)}_{k+1}$ and $\beta^{(2)}_{k+1}$ if $u < \widetilde{S}_{\eta,m,n}$, where $u \sim \mathrm{Unif}[0,1]$ and
  $$\widetilde{S}_{\eta,m,n} = \exp\left\{\Big(\frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}}\Big)\Big(\widetilde{L}(B_{k+1}|\beta^{(1)}_{k+1}) - \widetilde{L}(B_{k+1}|\beta^{(2)}_{k+1}) - \frac{1}{F}\Big(\frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}}\Big)\tilde{\sigma}^2_{m\lfloor k/m\rfloor}\Big)\right\}. \qquad (6)$$
until $k = k_{\max}$.
Output: the low-temperature process $\{\beta^{(1)}_{iT}\}_{i=1}^{\lfloor k_{\max}/T\rfloor}$, where $T$ is the thinning factor.
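Algorithm 1 can be sketched end-to-end on a toy problem. The code below is a simplified illustration under our own assumptions, not the authors' implementation: a one-dimensional Gaussian energy, fixed hyper-parameters, and a crude squared-difference proxy for the variance estimate fed into the stochastic approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x_i ~ N(3, 1); energy L(beta) = sum_i (x_i - beta)^2 / 2.
N, n, mu_true = 1000, 100, 3.0
X = rng.normal(mu_true, 1.0, N)

def grad_batch(beta, idx):    # (N/n) * sum of per-sample gradients
    return (N / n) * np.sum(beta - X[idx])

def energy_batch(beta, idx):  # mini-batch energy estimate
    return (N / n) * 0.5 * np.sum((X[idx] - beta) ** 2)

def energy_full(beta):
    return 0.5 * np.sum((X - beta) ** 2)

eta, tau, m, F, gamma = 1e-4, (1.0, 10.0), 50, 1.0, 0.3
beta = [0.0, 0.0]                        # low- and high-temperature chains
anchor = [0.0, 0.0]                      # control-variate anchors
L_full = [energy_full(0.0), energy_full(0.0)]
sigma2 = 0.0
d = 1.0 / tau[0] - 1.0 / tau[1]

for k in range(500):
    idx = rng.choice(N, n, replace=False)
    # Parallel sampling, eq. (4)
    for h in range(2):
        beta[h] += (-eta * grad_batch(beta[h], idx)
                    + np.sqrt(2 * eta * tau[h]) * rng.normal())
    # Refresh control variates every m iterations
    if k % m == 0:
        for h in range(2):
            anchor[h] = beta[h]
            L_full[h] = energy_full(beta[h])
    # Variance-reduced energy estimators, eq. (5) with c = -1
    Lt = [energy_batch(beta[h], idx) - energy_batch(anchor[h], idx)
          + L_full[h] for h in range(2)]
    # Stochastic approximation of the variance (crude proxy for sigma^2_k)
    if k % m == 0:
        sigma2 = (1 - gamma) * sigma2 + gamma * (Lt[0] - Lt[1]) ** 2
    # Bias-reduced swap, eq. (6); compare in log space to avoid overflow
    log_S = d * (Lt[0] - Lt[1] - d * sigma2 / F)
    if log_S >= 0 or rng.random() < np.exp(log_S):
        beta[0], beta[1] = beta[1], beta[0]
```

After the run, the low-temperature chain concentrates near the data mean, while the corrected swaps let it borrow the high-temperature chain's exploration early on.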
The paper presents VR-reSGLD, a method to accelerate replica exchange stochastic gradient Langevin dynamics (reSGLD), which has been proposed recently to tackle non-convex learning problems. reSGLD suffers from two major sources of error resulting in low swapping rates: mini-batch noise and the discretization error of Langevin diffusion. The idea of the paper is to use control variates to reduce the variance of the energy estimators and thereby improve the swapping rate (which should lead to accelerated convergence). Unlike previous modifications of SGLD, the variance reduction proposed in the paper aims at improving the energy estimators rather than the gradient estimators. The paper presents non-asymptotic results backing the intended acceleration of the Markov jump process. Numerical experiments illustrate the performance gain achieved by VR-reSGLD. These tests include a one-dimensional example (learning the component mean of a bimodal mixture of Gaussians) and Bayesian training of DNNs on CIFAR image data.
Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction
1 INTRODUCTION . Stochastic gradient Monte Carlo methods ( Welling & Teh , 2011 ; Chen et al. , 2014 ; Li et al. , 2016 ) are the golden standard for Bayesian inference in deep learning due to their theoretical guarantees in uncertainty quantification ( Vollmer et al. , 2016 ; Chen et al. , 2015 ) and non-convex optimization ( Zhang et al. , 2017 ) . However , despite their scalability with respect to the data size , their mixing rates are often extremely slow for complex deep neural networks with rugged energy landscapes ( Li et al. , 2018 ) . To speed up the convergence , several techniques have been proposed in the literature in order to accelerate their exploration of multiple modes on the energy landscape , for example , dynamic temperatures ( Ye et al. , 2017 ) and cyclic learning rates ( Zhang et al. , 2020 ) , to name a few . However , such strategies only explore contiguously a limited region around a few informative modes . Inspired by the successes of replica exchange , also known as parallel tempering , in traditional Monte Carlo methods ( Swendsen & Wang , 1986 ; Earl & Deem , 2005 ) , reSGLD ( Deng et al. , ∗Equal contribution 2020 ) uses multiple processes based on stochastic gradient Langevin dynamics ( SGLD ) where interactions between different SGLD chains are conducted in a manner that encourages large jumps . In addition to the ideal utilization of parallel computation , the resulting process is able to jump to more informative modes for more robust uncertainty quantification . However , the noisy energy estimators in mini-batch settings lead to a large bias in the naı̈ve swaps , and a large correction is required to reduce the bias , which yields few effective swaps and insignificant accelerations . Therefore , how to reduce the variance of noisy energy estimators becomes essential in speeding up the convergence . A long standing technique for variance reduction is the control variates method . 
The key to reducing the variance is to properly design correlated control variates so as to counteract some noise . Towards this direction , Dubey et al . ( 2016 ) ; Xu et al . ( 2018 ) proposed to update the control variate periodically for the stochastic gradient estimators and Baker et al . ( 2019 ) studied the construction of control variates using local modes . Despite the advantages in near-convex problems , a natural discrepancy between theory ( Chatterji et al. , 2018 ; Xu et al. , 2018 ; Zou et al. , 2019b ) and practice ( He et al. , 2016 ; Devlin et al. , 2019 ) is whether we should avoid the gradient noise in non-convex problems . To fill in the gap , we only focus on the variance reduction of noisy energy estimators to exploit the theoretical accelerations but no longer consider the variance reduction of the noisy gradients so that the empirical experience from stochastic gradient descents with momentum ( M-SGD ) can be naturally imported . In this paper we propose the variance-reduced replica exchange stochastic gradient Langevin dynamics ( VR-reSGLD ) algorithm to accelerate convergence by reducing the variance of the noisy energy estimators . This algorithm not only shows the potential of exponential acceleration via much more effective swaps in the non-asymptotic analysis but also demonstrates remarkable performance in practical tasks where a limited time is required ; while others ( Xu et al. , 2018 ; Zou et al. , 2019a ) may only work well when the dynamics is sufficiently mixed and the discretization error becomes a major component . Moreover , the existing discretization error of the Langevin-based Markov jump processes ( Chen et al. , 2019 ; Deng et al. , 2020 ; Futami et al. , 2020 ) is exponentially dependent on time due to the limitation of Grönwall ’ s inequality . To avoid such a crude estimate , we consider the generalized Girsanov theorem and a change of Poisson measure . 
As a result , we obtain a much tighter discretization error that is only polynomially dependent on time . Empirically , we test the algorithm through extensive experiments and achieve state-of-the-art performance in both optimization and uncertainty estimates . 2 PRELIMINARIES . A common problem in Bayesian inference is the simulation from a posterior $P(\beta \mid X) \propto P(\beta) \prod_{i=1}^N P(x_i \mid \beta)$ , where $P(\beta)$ is a proper prior , $\prod_{i=1}^N P(x_i \mid \beta)$ is the likelihood function and $N$ is the number of data points . When $N$ is large , the standard Langevin dynamics is too costly in evaluating the gradients . To tackle this issue , stochastic gradient Langevin dynamics ( SGLD ) ( Welling & Teh , 2011 ) was proposed to make the algorithm scalable by approximating the gradient through a mini-batch of data $B$ of size $n$ such that
$$\beta_k = \beta_{k-1} - \eta_k \frac{N}{n} \sum_{i \in B_k} \nabla L(x_i \mid \beta_{k-1}) + \sqrt{2 \eta_k \tau}\, \xi_k , \qquad (1)$$
where $\beta_k \in \mathbb{R}^d$ , $\tau$ denotes the temperature , $\eta_k$ is the learning rate at iteration $k$ , $\xi_k$ is a standard Gaussian vector , and $L(\cdot) := -\log P(\beta \mid X)$ is the energy function . SGLD is known to converge weakly to a stationary Gibbs measure $\pi_\tau(\beta) \propto \exp(-L(\beta)/\tau)$ as $\eta_k$ decays to $0$ ( Teh et al. , 2016 ) . The temperature $\tau$ is the key to accelerating the computations in multi-modal distributions . On the one hand , a high temperature flattens the Gibbs distribution $\exp(-L(\beta)/\tau)$ ( see the red curve in Fig.1 ( a ) ) and accelerates mixing by facilitating exploration of the whole domain , but the resulting distribution becomes much less concentrated around the global optima . On the other hand , a low temperature exploits the local region rapidly ; however , it may cause the particles to get stuck in a local region for an exponentially long time , as shown in the blue curve in Fig.1 ( a , b ) . To bridge the gap between global exploration and local exploitation , Deng et al .
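As a concrete illustration of update (1), the sketch below implements a single SGLD step for a generic per-sample energy. The function name and the toy quadratic energy in the usage example are our own illustrative choices, not part of the paper.

```python
import numpy as np

def sgld_step(beta, grad_fn, data, batch_idx, eta, tau, rng):
    """One SGLD update of eq. (1):
    beta <- beta - eta * (N/n) * sum_{i in B} grad L(x_i|beta) + sqrt(2*eta*tau) * xi.

    grad_fn(x, beta) returns the per-sample energy gradient; the factor N/n
    rescales the mini-batch gradient into an unbiased full-data estimate.
    """
    N, n = len(data), len(batch_idx)
    grad = (N / n) * sum(grad_fn(data[i], beta) for i in batch_idx)
    noise = np.sqrt(2.0 * eta * tau) * rng.standard_normal(np.shape(beta))
    return beta - eta * grad + noise
```

With `tau = 0` the recursion reduces to mini-batch gradient descent on the energy, which gives a quick sanity check that the drift term is scaled correctly.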
( 2020 ) proposed the replica exchange SGLD algorithm ( reSGLD ) , which consists of a low-temperature SGLD chain to encourage exploitation and a high-temperature SGLD chain to support exploration :
$$\beta^{(1)}_k = \beta^{(1)}_{k-1} - \eta_k \frac{N}{n} \sum_{i \in B_k} \nabla L(x_i \mid \beta^{(1)}_{k-1}) + \sqrt{2 \eta_k \tau^{(1)}}\, \xi^{(1)}_k$$
$$\beta^{(2)}_k = \beta^{(2)}_{k-1} - \eta_k \frac{N}{n} \sum_{i \in B_k} \nabla L(x_i \mid \beta^{(2)}_{k-1}) + \sqrt{2 \eta_k \tau^{(2)}}\, \xi^{(2)}_k ,$$
where the invariant measure is known to be $\pi(\beta^{(1)}, \beta^{(2)}) \propto \exp\big( -\frac{L(\beta^{(1)})}{\tau^{(1)}} - \frac{L(\beta^{(2)})}{\tau^{(2)}} \big)$ as $\eta_k \to 0$ and $\tau^{(1)} < \tau^{(2)}$ . Moreover , the two processes may swap positions to allow tunneling between different modes . To avoid inducing a large bias in mini-batch settings , a corrected swapping rate $\hat S$ is developed such that
$$\hat S = \exp\Big\{ \Big( \frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}} \Big) \Big( \frac{N}{n} \sum_{i \in B_k} L(x_i \mid \beta^{(1)}_k) - \frac{N}{n} \sum_{i \in B_k} L(x_i \mid \beta^{(2)}_k) - \Big( \frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}} \Big) \frac{\hat\sigma^2}{F} \Big) \Big\} ,$$
where $\hat\sigma^2$ is an estimator of the variance of $\frac{N}{n} \sum_{i \in B_k} L(x_i \mid \beta^{(1)}_k) - \frac{N}{n} \sum_{i \in B_k} L(x_i \mid \beta^{(2)}_k)$ and $F$ is the correction factor to balance between acceleration and bias . In other words , the parameters switch positions from $(\beta^{(1)}_k, \beta^{(2)}_k)$ to $(\beta^{(2)}_k, \beta^{(1)}_k)$ with probability $r (1 \wedge \hat S)\, \eta_k$ , where the constant $r$ is the swapping intensity and can be set to $\frac{1}{\eta_k}$ for simplicity . From a probabilistic point of view , reSGLD is a discretization scheme of the replica exchange Langevin diffusion ( reLD ) in mini-batch settings . Given a smooth test function $f$ and a swapping-rate function $S$ , the infinitesimal generator $\mathcal{L}_S$ associated with the continuous-time reLD follows
$$\mathcal{L}_S f(\beta^{(1)}, \beta^{(2)}) = - \langle \nabla_{\beta^{(1)}} f(\beta^{(1)}, \beta^{(2)}) , \nabla L(\beta^{(1)}) \rangle - \langle \nabla_{\beta^{(2)}} f(\beta^{(1)}, \beta^{(2)}) , \nabla L(\beta^{(2)}) \rangle + \tau^{(1)} \Delta_{\beta^{(1)}} f(\beta^{(1)}, \beta^{(2)}) + \tau^{(2)} \Delta_{\beta^{(2)}} f(\beta^{(1)}, \beta^{(2)}) + r S(\beta^{(1)}, \beta^{(2)}) \cdot \big( f(\beta^{(2)}, \beta^{(1)}) - f(\beta^{(1)}, \beta^{(2)}) \big) ,$$
where the last term arises from swaps and $\Delta_{\beta^{(\cdot)}}$ is the Laplace operator with respect to $\beta^{(\cdot)}$ .
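To make the corrected swap test concrete, here is a minimal sketch of the decision rule described above. The helper name and argument layout are our own; `E1` and `E2` stand for the two noisy mini-batch energy estimates, and the probability is clamped to 1 since r(1 ∧ Ŝ)η with the default r = 1/η reduces to min(1, Ŝ).

```python
import numpy as np

def corrected_swap_prob(E1, E2, tau1, tau2, var_hat, F, eta, r=None):
    """Bias-corrected swap probability for reSGLD.

    The quadratic correction term dtau * var_hat / F compensates the bias
    that exponentiating a noisy energy difference would otherwise induce.
    With the default intensity r = 1/eta the probability is min(1, S_hat).
    """
    dtau = 1.0 / tau1 - 1.0 / tau2
    S_hat = np.exp(dtau * (E1 - E2 - dtau * var_hat / F))
    r = 1.0 / eta if r is None else r
    return min(1.0, r * min(1.0, S_hat) * eta)
```

A chain whose (corrected) energy is clearly higher than its colder partner's swaps almost surely, while the reverse ordering makes swaps exponentially rare, which is exactly the tunneling behavior intended.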
Note that the infinitesimal generator is closely related to Dirichlet forms in characterizing the evolution of a stochastic process . By standard calculations in Markov semigroups ( Chen et al. , 2019 ) , the Dirichlet form $\mathcal{E}_S$ associated with the infinitesimal generator $\mathcal{L}_S$ follows
$$\mathcal{E}_S(f) = \underbrace{\int \Big( \tau^{(1)} \|\nabla_{\beta^{(1)}} f(\beta^{(1)}, \beta^{(2)})\|^2 + \tau^{(2)} \|\nabla_{\beta^{(2)}} f(\beta^{(1)}, \beta^{(2)})\|^2 \Big)\, d\pi(\beta^{(1)}, \beta^{(2)})}_{\text{vanilla term } \mathcal{E}(f)} + \underbrace{\frac{r}{2} \int S(\beta^{(1)}, \beta^{(2)}) \cdot \big( f(\beta^{(2)}, \beta^{(1)}) - f(\beta^{(1)}, \beta^{(2)}) \big)^2\, d\pi(\beta^{(1)}, \beta^{(2)})}_{\text{acceleration term}} , \qquad (2)$$
which leads to a strictly positive acceleration under mild conditions and is crucial for the exponentially accelerated convergence in the $W_2$ distance ( see Fig.1 ( c ) ) . However , the acceleration depends on the swapping-rate function $S$ and becomes much smaller given a noisy estimate of $\frac{N}{n} \sum_{i \in B} L(x_i \mid \beta)$ due to the demand of large corrections to reduce the bias . 3 VARIANCE REDUCTION IN REPLICA EXCHANGE STOCHASTIC GRADIENT LANGEVIN DYNAMICS . The desire to obtain more effective swaps and larger accelerations drives us to design more efficient energy estimators . A naïve idea would be to apply a large batch size $n$ , which reduces the variance of the noisy energy estimator proportionally . However , this comes with significantly increased memory overhead and computations and is therefore inappropriate for big-data problems . A natural idea to propose more effective swaps is to reduce the variance of the noisy energy estimator $L(B \mid \beta^{(h)}) = \frac{N}{n} \sum_{i \in B} L(x_i \mid \beta^{(h)})$ for $h \in \{1, 2\}$ . Considering an unbiased estimator $L(B \mid \hat\beta^{(h)})$ for $\sum_{i=1}^N L(x_i \mid \hat\beta^{(h)})$ and a constant $c$ , we see that a new estimator $\tilde L(B \mid \beta^{(h)})$ , which follows
$$\tilde L(B \mid \beta^{(h)}) = L(B \mid \beta^{(h)}) + c \Big( L(B \mid \hat\beta^{(h)}) - \sum_{i=1}^N L(x_i \mid \hat\beta^{(h)}) \Big) , \qquad (3)$$
is still an unbiased estimator for $\sum_{i=1}^N L(x_i \mid \beta^{(h)})$ .
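Estimator (3) with the common choice c = −1 can be sketched as follows; the function name and the toy quadratic per-sample energy in the check below are our own illustrative assumptions.

```python
import numpy as np

def cv_energy_estimate(beta, beta_anchor, batch_idx, per_sample_energy,
                       data, full_energy_anchor):
    """Control-variate energy estimator, eq. (3) with c = -1:

    L~(B|beta) = (N/n) * sum_{i in B} [L(x_i|beta) - L(x_i|anchor)]
                 + sum_{i=1}^N L(x_i|anchor).

    It is unbiased for the full-data energy, and its variance is small
    whenever beta stays close to the anchor point.
    """
    N, n = len(data), len(batch_idx)
    diff = sum(per_sample_energy(data[i], beta)
               - per_sample_energy(data[i], beta_anchor)
               for i in batch_idx)
    return (N / n) * diff + full_energy_anchor
```

Averaging the estimator over all size-one mini-batches recovers the exact full-data energy, which verifies unbiasedness directly.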
By decomposing the variance , we have
$$\mathrm{Var}\big(\tilde L(B \mid \beta^{(h)})\big) = \mathrm{Var}\big(L(B \mid \beta^{(h)})\big) + c^2\, \mathrm{Var}\big(L(B \mid \hat\beta^{(h)})\big) + 2c\, \mathrm{Cov}\big(L(B \mid \beta^{(h)}) , L(B \mid \hat\beta^{(h)})\big) .$$
In such a case , $\mathrm{Var}(\tilde L(B \mid \beta^{(h)}))$ achieves the minimum variance $(1 - \rho^2)\, \mathrm{Var}(L(B \mid \beta^{(h)}))$ given
$$c^\star := - \frac{\mathrm{Cov}\big(L(B \mid \beta^{(h)}) , L(B \mid \hat\beta^{(h)})\big)}{\mathrm{Var}\big(L(B \mid \hat\beta^{(h)})\big)} ,$$
where $\mathrm{Cov}(\cdot, \cdot)$ denotes the covariance and $\rho$ is the correlation coefficient of $L(B \mid \beta^{(h)})$ and $L(B \mid \hat\beta^{(h)})$ . To propose a correlated control variate , we follow Johnson & Zhang ( 2013 ) and update $\hat\beta^{(h)} = \beta^{(h)}_{m \lfloor k/m \rfloor}$ every $m$ iterations . Moreover , the optimal $c^\star$ is often unknown in practice . To handle this issue , a well-known solution ( Johnson & Zhang , 2013 ) is to fix $c = -1$ given a high correlation $|\rho|$ of the estimators , and we can then present the VR-reSGLD algorithm in Algorithm 1 . Since the exact variance for correcting the stochastic swapping rate is unknown and even time-varying , we follow Deng et al . ( 2020 ) and propose to use stochastic approximation ( Robbins & Monro , 1951 ) to adaptively update the unknown variance . Variants of VR-reSGLD The number of iterations $m$ to update the control variate $\hat\beta^{(h)}$ gives rise to a trade-off between computations and variance reduction . A small $m$ introduces a highly correlated control variate at the cost of expensive computations ; a large $m$ , however , may yield a less correlated control variate , and setting $c = -1$ then fails to reduce the variance . In the spirit of the adaptive variance in Deng et al . ( 2020 ) to estimate the unknown variance , we explore the idea of an adaptive coefficient $\tilde c_k = (1 - \gamma_k)\, \tilde c_{k-m} + \gamma_k c_k$ such that the unknown optimal $c^\star$ is well approximated . We present the adaptive VR-reSGLD in Algorithm 2 in Appendix E.2 and show empirically later that the adaptive VR-reSGLD leads to a significant improvement over VR-reSGLD for the less correlated estimators .
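A quick numerical check of the optimal coefficient c⋆ and the stochastic-approximation update of the adaptive coefficient; the function names are ours, and the Gaussian samples only stand in for correlated energy estimates.

```python
import numpy as np

def optimal_cv_coefficient(L_beta, L_anchor):
    """Sample estimate of c* = -Cov(L(B|beta), L(B|anchor)) / Var(L(B|anchor)).
    With this c the combined estimator attains variance (1 - rho^2) * Var(L(B|beta))."""
    cov = np.cov(L_beta, L_anchor, ddof=1)  # 2x2 covariance matrix
    return -cov[0, 1] / cov[1, 1]

def adaptive_c(c_prev, c_new, gamma):
    """Stochastic-approximation update c~_k = (1 - gamma) c~_{k-m} + gamma c_k."""
    return (1.0 - gamma) * c_prev + gamma * c_new
```

For highly correlated estimators the fitted coefficient is close to −1, which is why fixing c = −1 is a sensible default, and the combined estimator's variance collapses to roughly (1 − ρ²) of the original.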
A parallel line of research is to exploit the SAGA algorithm ( Defazio et al. , 2014 ) in the study of variance reduction . Despite having the most effective performance in variance reduction ( Chatterji et al. , 2018 ) , SAGA-type sampling algorithms require an excessive memory storage of $O(Nd)$ , which is too costly for big-data problems . Therefore , we leave the study of a lightweight SAGA algorithm inspired by Harikandeh et al . ( 2015 ) ; Zhou et al . ( 2019 ) for future work . Related work Although our VR-reSGLD is , in spirit , similar to VR-SGLD ( Dubey et al. , 2016 ; Xu et al. , 2018 ) , it differs from VR-SGLD in two aspects : First , VR-SGLD conducts variance reduction on the gradient and only shows promise for nearly log-concave distributions or when the Markov process has sufficiently converged ; our VR-reSGLD , however , solely focuses on the variance reduction of the energy estimator to propose more effective swaps , and therefore we can import the empirical experience in hyper-parameter tuning from M-SGD to our proposed algorithm . Second , VR-SGLD does not accelerate the continuous-time Markov process but only focuses on reducing the discretization error ; VR-reSGLD possesses a larger acceleration term in the Dirichlet form ( 2 ) and shows a potential for exponentially speeding up the convergence of the continuous-time process in the early stage , in addition to the improvement in the discretization error . In other words , our algorithm is not only theoretically sound but also more empirically appealing for a wide variety of problems in non-convex learning . Algorithm 1 Variance-reduced replica exchange stochastic gradient Langevin dynamics ( VR-reSGLD ) . The learning rate and temperature can be made dynamic to speed up the computations . A larger smoothing factor $\gamma$ captures the trend better but becomes less robust . $T$ is the thinning factor to avoid a cumbersome system .
Input : the initial parameters $\beta^{(1)}_0$ and $\beta^{(2)}_0$ , learning rate $\eta$ , temperatures $\tau^{(1)}$ and $\tau^{(2)}$ , correction factor $F$ and smoothing factor $\gamma$ .
repeat
Parallel sampling : randomly pick a mini-batch set $B_k$ of size $n$ .
$$\beta^{(h)}_k = \beta^{(h)}_{k-1} - \eta \frac{N}{n} \sum_{i \in B_k} \nabla L(x_i \mid \beta^{(h)}_{k-1}) + \sqrt{2 \eta \tau^{(h)}}\, \xi^{(h)}_k , \quad \text{for } h \in \{1, 2\} . \qquad (4)$$
Variance-reduced energy estimators : update $\hat L^{(h)} = \sum_{i=1}^N L\big(x_i \mid \beta^{(h)}_{m \lfloor k/m \rfloor}\big)$ every $m$ iterations .
$$\tilde L(B_k \mid \beta^{(h)}_k) = \frac{N}{n} \sum_{i \in B_k} \Big[ L(x_i \mid \beta^{(h)}_k) - L\big(x_i \mid \beta^{(h)}_{m \lfloor k/m \rfloor}\big) \Big] + \hat L^{(h)} , \quad \text{for } h \in \{1, 2\} . \qquad (5)$$
if $k \bmod m = 0$ then update $\tilde\sigma^2_k = (1 - \gamma)\, \tilde\sigma^2_{k-m} + \gamma \sigma^2_k$ , where $\sigma^2_k$ is an estimate for $\mathrm{Var}\big( \tilde L(B_k \mid \beta^{(1)}_k) - \tilde L(B_k \mid \beta^{(2)}_k) \big)$ . end if
Bias-reduced swaps : swap $\beta^{(1)}_{k+1}$ and $\beta^{(2)}_{k+1}$ if $u < \tilde S_{\eta, m, n}$ , where $u \sim \mathrm{Unif}\,[0, 1]$ and $\tilde S_{\eta, m, n}$ follows
$$\tilde S_{\eta, m, n} = \exp\Big\{ \Big( \frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}} \Big) \Big( \tilde L(B_{k+1} \mid \beta^{(1)}_{k+1}) - \tilde L(B_{k+1} \mid \beta^{(2)}_{k+1}) - \frac{1}{F} \Big( \frac{1}{\tau^{(1)}} - \frac{1}{\tau^{(2)}} \Big) \tilde\sigma^2_{m \lfloor k/m \rfloor} \Big) \Big\} . \qquad (6)$$
until $k = k_{\max}$ .
Output : the low-temperature process $\{ \beta^{(1)}_{iT} \}_{i=1}^{\lfloor k_{\max} / T \rfloor}$ , where $T$ is the thinning factor .
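Putting eqs. (4)–(6) together, here is a minimal runnable sketch of the Algorithm 1 loop on a toy Gaussian energy L(x_i|β) = ½(β − x_i)². The energy model, the chain initializations, the exponent clipping, and the window-based variance estimate are our own illustrative choices (thinning and the intensity r are folded into the defaults), so this is a sketch of the structure, not the authors' implementation.

```python
import numpy as np

def vr_resgld_toy(data, steps=500, n=10, eta=1e-3, taus=(0.01, 1.0),
                  m=20, F=1.0, gamma=0.3, seed=0):
    """Toy VR-reSGLD: parallel SGLD (4), control-variate energies with c = -1 (5),
    and bias-reduced swaps with a smoothed variance estimate (6)."""
    rng = np.random.default_rng(seed)
    N = len(data)
    beta = np.array([5.0, -5.0])          # (low-temp, high-temp) chains
    anchor, full_anchor = beta.copy(), None
    sigma2_t, diffs, trace = 0.0, [], []
    full_energy = lambda b: 0.5 * np.sum((b - data) ** 2)
    for k in range(steps):
        idx = rng.choice(N, size=n, replace=False)
        for h in (0, 1):                  # eq. (4): one SGLD step per chain
            grad = (N / n) * np.sum(beta[h] - data[idx])
            beta[h] += -eta * grad + np.sqrt(2 * eta * taus[h]) * rng.standard_normal()
        if k % m == 0:                    # refresh the control variate anchor
            anchor = beta.copy()
            full_anchor = np.array([full_energy(b) for b in anchor])
            if diffs:                     # smoothed variance of the energy difference
                sigma2_t = (1 - gamma) * sigma2_t + gamma * np.var(diffs)
                diffs = []
        Et = np.array([(N / n) * np.sum(0.5 * (beta[h] - data[idx]) ** 2
                                        - 0.5 * (anchor[h] - data[idx]) ** 2)
                       + full_anchor[h] for h in (0, 1)])   # eq. (5)
        diffs.append(Et[0] - Et[1])
        dtau = 1 / taus[0] - 1 / taus[1]
        S = np.exp(np.clip(dtau * (Et[0] - Et[1] - dtau * sigma2_t / F), -50, 50))
        if rng.uniform() < min(1.0, S):   # eq. (6): bias-reduced swap
            beta = beta[::-1].copy()
        trace.append(beta[0])
    return np.array(trace)
```

On this quadratic energy the posterior mode is the sample mean, so the low-temperature trace should settle near it despite the swaps.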
The authors propose a variant of Replica Exchange Stochastic Gradient Langevin Dynamics (reSGLD) for non-log-concave sampling that uses a variance reduction technique on the estimation of the swapping rate. Assuming that the log-density is a finite sum, the authors apply classical variance reduction techniques to the energy estimator necessary to compute the swapping rate. They show that applying this technique yields a higher swapping frequency and a faster convergence rate for both the continuous-time SDE and its discretization scheme. Finally, the authors perform numerical experiments on both synthetic and real-world data, and show that variance reduction indeed reduces the variance of the energy estimator by several orders of magnitude, hence inducing faster convergence.
Sobolev Training for the Neural Network Solutions of PDEs
1 INTRODUCTION . Deep learning has achieved remarkable success in many scientific fields , including computer vision and natural language processing . In addition to engineering , deep learning has been successfully applied to the field of scientific computing . Particularly , the use of neural networks for the numerical integration of partial differential equations ( PDEs ) has emerged as an important new application of deep learning . Being a universal approximator ( Cybenko , 1989 ; Hornik et al. , 1989 ; Li , 1996 ) , a neural network can approximate solutions of complex PDEs . To find the neural network solution of a PDE , a neural network is trained on a domain wherein the PDE is defined . Training a neural network comprises the following : feeding the input data through a forward pass and minimizing a predefined loss function with respect to the network parameters through a backward pass . In the traditional supervised learning setting , the loss function is designed to guide the neural network to generate the same output as the target data for the given input data . However , when solving PDEs using neural networks , the target values that correspond to the analytic solution are not available . One possible way to guide the neural network to produce the same output as the solution of the PDE is to penalize the neural network to satisfy the PDE itself ( Sirignano & Spiliopoulos , 2018 ; Berg & Nyström , 2018 ; Raissi et al. , 2019 ; Hwang et al. , 2020 ) . Unlike the traditional mesh-based schemes , including the finite difference method ( FDM ) and the finite element method ( FEM ) , neural networks are inherently mesh-free function-approximators . Advantageously , as mesh-free function-approximators , neural networks can avoid the curse of dimensionality ( Sirignano & Spiliopoulos , 2018 ) and approximate the solutions of PDEs on complex geometries ( Berg & Nyström , 2018 ) . Recently , Hwang et al .
( 2020 ) showed that neural networks could approximate the solutions of kinetic Fokker–Planck equations under not only various kinds of kinetic boundary conditions but also several irregular initial conditions . Moreover , they showed that the neural networks automatically approximate the macroscopic physical quantities including the kinetic energy , the entropy , the free energy , and the asymptotic behavior of the solutions . Further issues including the inverse problem were investigated by Raissi et al . ( 2019 ) ; Jo et al . ( 2020 ) . Although the neural network approach can be used to solve several complex PDEs in various kinds of settings , it requires relatively high computational cost compared to the traditional mesh-based schemes in general . To resolve this issue , we propose a novel loss function using Sobolev norms in this paper . Inspired by a recent study that incorporated derivative information for the training of neural networks ( Czarnecki et al. , 2017 ) , we develop a loss function that efficiently guides neural networks to find the solutions of PDEs . We prove that the H1 and H2 norms of the approximation errors converge to zero as our loss functions tend to zero for the 1-D Heat equation , the 1-D viscous Burgers equation , and the 1-D kinetic Fokker–Planck equation . Moreover , we show via several simulation results that the number of epochs to achieve a certain accuracy is significantly reduced as the order of derivatives in the loss function gets higher , provided that the solution is smooth . This study might pave the way for overcoming the issue of high computational cost when solving PDEs using neural networks . The main contributions of this work are threefold : 1 ) We introduce novel loss functions that enable the Sobolev Training of neural networks for solving PDEs . 
2 ) We prove that the proposed loss functions guarantee the convergence of neural networks in the corresponding Sobolev spaces although it is not a supervised learning task . 3 ) We empirically demonstrate the effect of Sobolev Training for several regression problems and the improved performances of our loss functions in solving several PDEs including the heat equation , Burgers ’ equation , the Fokker–Planck equation , and the high-dimensional Poisson equation . 2 RELATED WORKS . Training neural networks to approximate the solutions of PDEs has been intensively studied over the past decades . For example , Lagaris et al . ( 1998 ; 2000 ) used neural networks to solve Ordinary Differential Equations ( ODEs ) and PDEs on a predefined set of grid points . Subsequently , Sirignano & Spiliopoulos ( 2018 ) proposed a method to solve high-dimensional PDEs by approximating the solution using a neural network . They focused on the fact that the traditional finite mesh-based scheme becomes computationally intractable when the dimension becomes high . However , because neural networks are mesh-free function-approximators , they can solve high-dimensional PDEs by incorporating mini-batch sampling . Furthermore , the authors showed the convergence of the neural network to the solution of quasilinear parabolic PDEs under certain conditions . Recently , Raissi et al . ( 2019 ) reported that one can use observed data to solve PDEs using physics-informed neural networks ( PINNs ) . Notably , PINNs can solve a supervised regression problem on observed data while satisfying any physical properties given by nonlinear PDEs . A significant advantage of PINNs is that the data-driven discovery of PDEs , also called the inverse problem , is possible with a small change in the code . The authors provided several numerical simulations for various types of nonlinear PDEs including the Navier–Stokes equation and Burgers ’ equation .
The first theoretical justification for PINNs was provided by Shin et al . ( 2020 ) , who showed that a sequence of neural networks converges to the solutions of linear elliptic and parabolic PDEs in the L2 sense as the number of observed data increases . There also exists a study aiming to enhance the convergence of PINNs ( van der Meer et al. , 2020 ) . Additionally , several works relate deep neural networks to PDEs , but not through the direct approximation of the solutions of PDEs . For instance , Long et al . ( 2018 ) attempted to discover the hidden physics model from data by learning differential operators . A fast , iterative PDE-solver was proposed by learning to modify each iteration of the existing solver ( Hsieh et al. , 2019 ) . A deep backward stochastic differential equation ( BSDE ) solver was proposed and investigated in Weinan et al . ( 2017 ) ; Han et al . ( 2018 ) for solving high-dimensional parabolic PDEs by reformulating them using BSDE . The main strategy of the present study is to leverage derivative information while solving PDEs via neural networks . The authors of Czarnecki et al . ( 2017 ) first proposed Sobolev Training , which uses derivative information of the target function when training a neural network by slightly modifying the loss function . They showed that Sobolev Training had lower sample complexity than regular training , and therefore it is highly efficient in many applicable fields , such as regression and policy distillation problems . We adapt the concept of Sobolev Training to develop a loss function for the efficient training of a neural network for solving PDEs . 3 LOSS FUNCTION .
We consider the following Cauchy problem of PDEs :
$$P u = f , \quad (t, x) \in [0, T] \times \Omega , \qquad (3.1)$$
$$I u = g , \quad (t, x) \in \{0\} \times \Omega , \qquad (3.2)$$
$$B u = h , \quad (t, x) \in [0, T] \times \partial\Omega , \qquad (3.3)$$
where $P$ denotes a differential operator ; $I$ and $B$ denote the initial and boundary operators , respectively ; and $f$ , $g$ , and $h$ denote the inhomogeneous term , and the initial and boundary data , respectively . In most studies that reported the neural network solutions of PDEs , a neural network was trained on uniformly sampled grid points $\{ (t_i, x_j) \}_{i,j=1}^{N_t, N_x} \subset [0, T] \times \Omega$ , which were completely determined before training . One of the most intuitive ways to make the neural network satisfy PDEs ( 3.1 ) – ( 3.3 ) is to minimize the following loss functional :
$$\mathrm{Loss}(u_{nn}; p) = \| P u_{nn} - f \|^p_{L^p([0,T] \times \Omega)} + \| I u_{nn} - g \|^p_{L^p(\Omega)} + \| B u_{nn} - h \|^p_{L^p([0,T] \times \partial\Omega)} ,$$
where $u_{nn}$ denotes the neural network and $p = 1$ or $2$ , as these have been the most commonly used exponents in regression problems in previous studies . Evidently , an analytic solution $u$ satisfies $\mathrm{Loss}(u) = 0$ , and thus one can conceptualize a neural network that makes $\mathrm{Loss}(u_{nn}) = 0$ a possible solution of PDEs ( 3.1 ) – ( 3.3 ) . This statement is in fact proved for second-order parabolic equations with the Dirichlet boundary condition in Jo et al . ( 2020 ) , and for the Fokker–Planck equation with inflow and specular reflective boundary conditions in Hwang et al . ( 2020 ) . Both proofs are based on the following inequality :
$$\| u - u_{nn} \|_{L^\infty(0, T; L^2(\Omega))} \le C\, \mathrm{Loss}(u_{nn}; 2) ,$$
for some constant $C$ , which states that minimizing the loss functional implies minimizing the approximation error . The main concept behind Sobolev Training is to minimize the error between the output and the target function , and that between the derivatives of the output and those of the target function .
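As an illustration of the L² loss functional above (with p = 2), the sketch below evaluates it for the 1-D heat equation u_t = u_xx on (0, 1), using central finite differences in place of the automatic differentiation a real implementation would use; the function names are ours, and the boundary term is omitted for brevity.

```python
import numpy as np

def heat_residual(u, t, x, dt=1e-3, dx=1e-3):
    """P u = u_t - u_xx via central differences (stand-in for autograd)."""
    u_t = (u(t + dt, x) - u(t - dt, x)) / (2 * dt)
    u_xx = (u(t, x + dx) - 2 * u(t, x) + u(t, x - dx)) / dx ** 2
    return u_t - u_xx

def l2_pde_loss(u, g, t_pts, x_pts, x0_pts, T=1.0, L=1.0):
    """Monte-Carlo L2 loss: interior PDE residual + initial-condition mismatch,
    weighted by the measures of [0, T] and Omega = (0, L)."""
    ge = T * L * np.mean([heat_residual(u, t, x) ** 2
                          for t, x in zip(t_pts, x_pts)])
    ic = L * np.mean([(u(0.0, x) - g(x)) ** 2 for x in x0_pts])
    return ge + ic
```

An exact solution such as u(t, x) = e^{−π²t} sin(πx) drives the loss to (numerically) zero, while a function ignoring the time decay is heavily penalized, which is the zero-loss-implies-solution property the inequality above formalizes.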
However , unlike the traditional supervised regression problem , neither the target function nor its derivative is provided when solving PDEs via neural networks . Thus , a special treatment is required to apply Sobolev Training for solving PDEs using neural networks . In this and the following sections , we propose several loss functions and prove that they guarantee the convergence of the neural network to the solution of a given PDE in the corresponding Sobolev space . Therefore , the proposed loss functions play similar roles to those in Sobolev Training . We define the loss function that depends on the Sobolev norm $W^{k,p}$ as follows :
$$\mathrm{Loss}_{GE}(u_{nn}; k, p, l, q) = \Big\| \, \| P(u_{nn}(t, \cdot)) - f(t, \cdot) \|^q_{W^{l,q}(\Omega)} \Big\|^p_{W^{k,p}([0,T])} , \qquad (3.4)$$
$$\mathrm{Loss}_{IC}(u_{nn}; l, q) = \| I u_{nn}(t, x) - g(x) \|^q_{W^{l,q}(\Omega)} , \qquad (3.5)$$
$$\mathrm{Loss}_{BC}(u_{nn}; k, p, l, q) = \Big\| \, \| B u_{nn}(t, \cdot) - h(t, \cdot) \|^q_{W^{l,q}(\partial\Omega)} \Big\|^p_{W^{k,p}([0,T])} . \qquad (3.6)$$
Remark 3.1 . Here , $\mathrm{Loss}^{(0)}_{TOTAL}(u_{nn}) = \mathrm{Loss}_{GE}(u_{nn}; 0, 2, 0, 2) + \mathrm{Loss}_{IC}(u_{nn}; 0, 2) + \mathrm{Loss}_{BC}(u_{nn}; 0, 2, 0, 2)$ coincides with the traditional $L^2$ loss function employed by Sirignano & Spiliopoulos ( 2018 ) ; Berg & Nyström ( 2018 ) ; Raissi et al . ( 2019 ) ; Hwang et al . ( 2020 ) . When we train a neural network , the loss functions ( 3.4 ) – ( 3.6 ) are computed by Monte-Carlo approximation . Because the grid points are uniformly sampled , the loss functions are approximated as follows :
$$\mathrm{Loss}_{GE}(u_{nn}; k, p, l, q) \approx \frac{T |\Omega|}{N_t N_x} \sum_{|\beta| \le k} \sum_{i=1}^{N_t} \Bigg| \frac{d^\beta}{dt^\beta} \sum_{|\alpha| \le l} \sum_{j=1}^{N_x} \big| D^\alpha P(u_{nn}(t_i, x_j)) - D^\alpha f(t_i, x_j) \big|^q \Bigg|^p ,$$
$$\mathrm{Loss}_{IC}(u_{nn}; l, q) \approx \frac{|\Omega|}{N_x} \sum_{|\alpha| \le l} \sum_{j=1}^{N_x} \big| D^\alpha u_{nn}(0, x_j) - D^\alpha g(x_j) \big|^q ,$$
$$\mathrm{Loss}_{BC}(u_{nn}; k, p, l, q) \approx \frac{T |\partial\Omega|}{N_t N_B} \sum_{|\beta| \le k} \sum_{i=1}^{N_t} \Bigg| \frac{d^\beta}{dt^\beta} \sum_{|\alpha| \le l} \sum_{x_j \in \partial\Omega} \big| D^\alpha u_{nn}(t_i, x_j) - D^\alpha h(t_i, x_j) \big|^q \Bigg|^p ,$$
where $\alpha$ and $\beta$ denote the conventional multi-indexes , and $D$ denotes the spatial derivatives .
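To see what the extra derivative terms buy, here is a sketch of Loss_IC with l = 1, q = 2 (an H¹ match of the initial condition), again with central finite differences standing in for autograd; the function names and the high-frequency perturbation in the check are our own illustration.

```python
import numpy as np

def sobolev_ic_loss(u, g, x_pts, dx=1e-3, L=1.0):
    """Loss_IC with l = 1, q = 2: penalize both the value error u(0,.) - g
    and the first spatial-derivative error, i.e. an H^1 mismatch."""
    def d_dx(fn, x):
        return (fn(x + dx) - fn(x - dx)) / (2 * dx)
    u0 = lambda x: u(0.0, x)
    val = np.mean([(u0(x) - g(x)) ** 2 for x in x_pts])
    der = np.mean([(d_dx(u0, x) - d_dx(g, x)) ** 2 for x in x_pts])
    return L * (val + der)
```

A small but rapidly oscillating error is nearly invisible to the plain L² term yet dominates the derivative term, which is precisely the kind of discrepancy the Sobolev loss is designed to penalize.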
The idea of using neural networks to approximate the solutions of PDEs is very interesting, especially in the high-dimensional setting where classical approaches fail to scale. Although there have been many efforts in this direction, there are still open avenues to explore. One of the most important aspects is the choice of loss function to guide the training of the neural network. The paper's aim is to address this issue by proposing a Sobolev norm as the loss function instead of the commonly used $L^2$-norm. The Sobolev norm includes additional terms involving derivatives of the error. The main claim is that with the inclusion of these additional terms, the convergence of the neural network training becomes faster. This is the basic premise of the paper.
Sobolev Training for the Neural Network Solutions of PDEs
1 INTRODUCTION . Deep learning has achieved remarkable success in many scientific fields , including computer vision and natural language processing . In addition to engineering , deep learning has been successfully applied to the field of scientific computing . Particularly , the use of neural networks for the numerical integration of partial differential equations ( PDEs ) has emerged as a new important application of the deep learning . Being a universal approximator ( Cybenko , 1989 ; Hornik et al. , 1989 ; Li , 1996 ) , a neural network can approximate solutions of complex PDEs . To find the neural network solution of a PDE , a neural network is trained on a domain wherein the PDE is defined . Training a neural network comprises the following : feeding the input data through forward pass and minimizing a predefined loss function with respect to the network parameters through backward pass . In the traditional supervised learning setting , the loss function is designed to guide the neural network to generate the same output as the target data for the given input data . However , while solving PDEs using neural networks , the target values that correspond to the analytic solution are not available . One possible way to guide the neural network to produce the same output as the solution of the PDE is to penalize the neural network to satisfy the PDE itself ( Sirignano & Spiliopoulos , 2018 ; Berg & Nyström , 2018 ; Raissi et al. , 2019 ; Hwang et al. , 2020 ) . Unlike the traditional mesh-based schemes including the finite difference method ( FDM ) and the finite element method ( FEM ) , neural networks are inherently mesh-free function-approximators . Advantageously , as mesh-free function-approximators , neural networks can avoid the curse of dimensionality ( Sirignano & Spiliopoulos , 2018 ) and approximate the solutions of PDEs on complex geometries ( Berg & Nyström , 2018 ) . Recently , Hwang et al . 
( 2020 ) showed that neural networks could approximate the solutions of kinetic Fokker–Planck equations under not only various kinds of kinetic boundary conditions but also several irregular initial conditions . Moreover , they showed that the neural networks automatically approximate the macroscopic physical quantities including the kinetic energy , the entropy , the free energy , and the asymptotic behavior of the solutions . Further issues including the inverse problem were investigated by Raissi et al . ( 2019 ) ; Jo et al . ( 2020 ) . Although the neural network approach can be used to solve several complex PDEs in various kinds of settings , it requires relatively high computational cost compared to the traditional mesh-based schemes in general . To resolve this issue , we propose a novel loss function using Sobolev norms in this paper . Inspired by a recent study that incorporated derivative information for the training of neural networks ( Czarnecki et al. , 2017 ) , we develop a loss function that efficiently guides neural networks to find the solutions of PDEs . We prove that the H1 and H2 norms of the approximation errors converge to zero as our loss functions tend to zero for the 1-D Heat equation , the 1-D viscous Burgers equation , and the 1-D kinetic Fokker–Planck equation . Moreover , we show via several simulation results that the number of epochs to achieve a certain accuracy is significantly reduced as the order of derivatives in the loss function gets higher , provided that the solution is smooth . This study might pave the way for overcoming the issue of high computational cost when solving PDEs using neural networks . The main contributions of this work are threefold : 1 ) We introduce novel loss functions that enable the Sobolev Training of neural networks for solving PDEs . 
2 ) We prove that the proposed loss functions guarantee the convergence of neuarl networks in the corresponding Sobolev spaces although it is not a supervised learning task . 3 ) We empirically demonstrate the effect of Sobolev Training for several regression problems and the improved performances of our loss functions in solving several PDEs including the heat equation , Burgers ’ equation , the Fokker–Planck equation , and high-dimensional Poisson equation . 2 RELATED WORKS . Training neural networks to approximate the solutions of PDEs has been intensively studied over the past decades . For example , Lagaris et al . ( 1998 ; 2000 ) used neural networks to solve Ordinary Differential Equations ( ODEs ) and PDEs on a predefined set of grid points . Subsequently , Sirignano & Spiliopoulos ( 2018 ) proposed a method to solve high-dimensional PDEs by approximating the solution using a neural network . They focused on the fact that the traditional finite mesh-based scheme becomes computationally intractable when the dimension becomes high . However , because neural networks are mesh-free function-approximators , they can solve high-dimensional PDEs by incorporating mini-batch sampling . Furthermore , the authors showed the convergence of the neural network to the solution of quasilinear parabolic PDEs under certain conditions . Recently , Raissi et al . ( 2019 ) reported that one can use observed data to solve PDEs using physicsinformed neural networks ( PINNs ) . Notably , PINNs can solve a supervised regression problem on observed data while satisfying any physical properties given by nonlinear PDEs . A significant advantage of PINNs is that the data-driven discovery of PDEs , also called the inverse problem , is possible with a small change in the code . The authors provided several numerical simulations for various types of nonlinear PDEs including the Navier–Stokes equation and Burgers ’ equation . 
The first theoretical justification for PINNs was provided by Shin et al . ( 2020 ) , who showed that a sequence of neural networks converges to the solutions of linear elliptic and parabolic PDEs in L2 sense as the number of observed data increases . There also exists a study aiming to enhance the convergence of PINNs ( van der Meer et al. , 2020 ) . Additionally , several works related deep neural networks with PDEs but not by the direct approximation of the solutions of PDEs . For instance , Long et al . ( 2018 ) attempted to discover the hidden physics model from data by learning differential operators . A fast , iterative PDE-solver was proposed by learning to modify each iteration of the existing solver ( Hsieh et al. , 2019 ) . A deep backward stochastic differential equation ( BSDE ) solver was proposed and investigated in Weinan et al . ( 2017 ) ; Han et al . ( 2018 ) for solving high-dimensional parabolic PDEs by reformulating them using BSDE . The main strategy of the present study is to leverage derivative information while solving PDEs via neural networks . The authors of Czarnecki et al . ( 2017 ) first proposed Sobolev Training that uses derivative information of the target function when training a neural network by slightly modifying the loss function . They showed that Sobolev Training had lower sample complexity than regular training , and therefore it is highly efficient in many applicable fields , such as regression and policy distillation problems . We appropriate the concept of Sobolev Training to develop a loss function for the efficient training of a neural network for solving PDEs . 3 LOSS FUNCTION . 
We consider the following Cauchy problem of PDEs : Pu = f , ( t , x ) ∈ [ 0 , T ] × Ω , ( 3.1 ) Iu = g , ( t , x ) ∈ { 0 } × Ω , ( 3.2 ) Bu = h , ( t , x ) ∈ [ 0 , T ] × ∂Ω , ( 3.3 ) where P denotes a differential operator ; I and B denote the initial and boundary operators , respectively ; f , g , and h denote the inhomogeneous term , and initial and boundary data , respectively . In most studies that reported the neural network solutions of PDEs , a neural network was trained on uniformly sampled grid points { ( ti , xj ) } Nt , Nxi , j=1 ∈ [ 0 , T ] × Ω , which were completely determined before training . One of the most intuitive ways to make the neural network satisfy PDEs ( 3.1 ) – ( 3.3 ) is to minimize the following loss functional : Loss ( unn ; p ) = ‖Punn − f‖pLp ( [ 0 , T ] ×Ω ) + ‖Iunn − g‖ p Lp ( Ω ) + ‖Bunn − h‖ p Lp ( [ 0 , T ] ×∂Ω ) , where unn denotes the neural network and p = 1 or 2 , as they have been the most commonly used exponents in regression problems in previous studies . Evidently , an analytic solution u satisfies Loss ( u ) = 0 , and thus one can conceptualize a neural network that makes Loss ( unn ) = 0 a possible solution of PDEs ( 3.1 ) – ( 3.3 ) . This statement is in fact proved for second-order parabolic equations with the Dirichlet boundary condition in Jo et al . ( 2020 ) , and for the Fokker–Planck equation with inflow and specular reflective boundary conditions in Hwang et al . ( 2020 ) . Both the proofs are based on the following inequality : ‖u− unn‖L∞ ( 0 , T ; L2 ( Ω ) ) ≤ CLoss ( unn ; 2 ) , for some constant C , which states that minimizing the loss functional implies minimizing the approximation error . The main concept behind Sobolev Training is to minimize the error between the output and the target function , and that between the derivatives of the output and those of the target function . 
However, unlike the traditional supervised regression problem, neither the target function nor its derivatives are provided when solving PDEs via neural networks. Thus, special treatment is required to apply Sobolev Training to solving PDEs using neural networks. In this and the following sections, we propose several loss functions and prove that they guarantee the convergence of the neural network to the solution of a given PDE in the corresponding Sobolev space. Therefore, the proposed loss functions play roles similar to those in Sobolev Training. We define the loss functions that depend on the Sobolev norm $W^{k,p}$ as follows:

$$\mathrm{Loss}_{GE}(u_{nn}; k, p, l, q) = \Big\| \, \|P(u_{nn}(t, \cdot)) - f(t, \cdot)\|^q_{W^{l,q}(\Omega)} \Big\|^p_{W^{k,p}([0,T])}, \qquad (3.4)$$
$$\mathrm{Loss}_{IC}(u_{nn}; l, q) = \|Iu_{nn}(t, x) - g(x)\|^q_{W^{l,q}(\Omega)}, \qquad (3.5)$$
$$\mathrm{Loss}_{BC}(u_{nn}; k, p, l, q) = \Big\| \, \|Bu_{nn}(t, \cdot) - h(t, \cdot)\|^q_{W^{l,q}(\partial\Omega)} \Big\|^p_{W^{k,p}([0,T])}. \qquad (3.6)$$

Remark 3.1. Here, $\mathrm{Loss}^{(0)}_{TOTAL}(u_{nn}) = \mathrm{Loss}_{GE}(u_{nn}; 0, 2, 0, 2) + \mathrm{Loss}_{IC}(u_{nn}; 0, 2) + \mathrm{Loss}_{BC}(u_{nn}; 0, 2, 0, 2)$ coincides with the traditional $L^2$ loss function employed by Sirignano & Spiliopoulos (2018); Berg & Nyström (2018); Raissi et al. (2019); Hwang et al. (2020).

When we train a neural network, the loss functions (3.4)–(3.6) are computed by Monte-Carlo approximation. Because the grid points are uniformly sampled, the loss functions are approximated as follows:

$$\mathrm{Loss}_{GE}(u_{nn}; k, p, l, q) \approx \frac{T|\Omega|}{N_t N_x} \sum_{|\beta| \le k} \sum_{i=1}^{N_t} \Bigg| \frac{d^\beta}{dt^\beta} \sum_{|\alpha| \le l} \sum_{j=1}^{N_x} |D^\alpha P(u_{nn}(t_i, x_j)) - D^\alpha f(t_i, x_j)|^q \Bigg|^p,$$
$$\mathrm{Loss}_{IC}(u_{nn}; l, q) \approx \frac{|\Omega|}{N_x} \sum_{|\alpha| \le l} \sum_{j=1}^{N_x} |D^\alpha u_{nn}(0, x_j) - D^\alpha g(x_j)|^q,$$
$$\mathrm{Loss}_{BC}(u_{nn}; k, p, l, q) \approx \frac{T|\partial\Omega|}{N_t N_B} \sum_{|\beta| \le k} \sum_{i=1}^{N_t} \Bigg| \frac{d^\beta}{dt^\beta} \sum_{|\alpha| \le l} \sum_{x_j \in \partial\Omega} |D^\alpha u_{nn}(t_i, x_j) - D^\alpha h(t_i, x_j)|^q \Bigg|^p,$$

where $\alpha$ and $\beta$ denote the conventional multi-indices, and $D$ denotes the spatial derivatives.
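The effect of the derivative terms in the governing-equation loss can be sketched in the same finite-difference setting (again our illustration, assuming the 1-D heat equation): for k = 0, p = q = 2 and l = 1, the term penalizes the squared L² norms of both the PDE residual and its first spatial derivative.

```python
import numpy as np

def sobolev_ge_loss(u, f, l=1, T=1.0, L=np.pi, n=2000, h=1e-3, seed=0):
    """Monte-Carlo estimate of LossGE(u; k=0, p=2, l, q=2) for u_t - u_xx = f:
    the squared L2 norm of the residual plus, when l >= 1, the squared L2
    norm of its first spatial derivative (the |alpha| = 1 Sobolev term)."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0, T, n)
    x = rng.uniform(2 * h, L - 2 * h, n)   # keep room for nested differences

    def residual(t, x):
        u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
        u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2
        return u_t - u_xx - f(t, x)

    loss = T * L * np.mean(residual(t, x) ** 2)             # |alpha| = 0 term
    if l >= 1:
        d_res = (residual(t, x + h) - residual(t, x - h)) / (2 * h)
        loss += T * L * np.mean(d_res ** 2)                 # |alpha| = 1 term
    return loss

exact = lambda t, x: np.exp(-t) * np.sin(x)   # solves u_t = u_xx
print(sobolev_ge_loss(exact, lambda t, x: 0.0))
print(sobolev_ge_loss(lambda t, x: np.cos(x), lambda t, x: 0.0))
```

The exact solution drives both terms to (numerically) zero, while a non-solution is penalized by the residual and its derivative alike.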
Sobolev training of neural networks, which augments the standard loss function with terms that penalize discrepancies between the derivatives of the network and target functions, has been shown empirically to improve data-efficiency. Intuitively, one would expect that it also aids generalization in settings where the target function is sufficiently smooth. This manuscript proposes augmenting the loss functions used to represent the solutions of partial differential equations with terms penalizing the Sobolev norm of the solution, its initial condition, and the boundary condition. The motivation for this approach is clear, because data-efficiency is of the utmost importance in PDE learning problems, where the data can be very difficult to access.
Mixed-Features Vectors and Subspace Splitting
Motivated by metagenomics, recommender systems, dictionary learning, and related problems, this paper introduces subspace splitting (SS): the task of clustering the entries of what we call a mixed-features vector, that is, a vector whose subsets of coordinates agree with a collection of subspaces. We derive precise identifiability conditions under which SS is well-posed, thus providing the first fundamental theory for this problem. We also propose the first three practical SS algorithms, each with advantages and disadvantages: a random sampling method, a projection-based greedy heuristic, and an alternating Lloyd-type algorithm; all allow noise, outliers, and missing data. Our extensive experiments outline the performance of our algorithms, and in the absence of other SS algorithms, for reference we compare against methods for tightly related problems, like robust matched subspace detection and maximum feasible subsystem, which are special simpler cases of SS.

1 INTRODUCTION.

As the reach of data science expands, and as we continuously improve our sensing, storage, and computing capabilities, data in virtually all fields of science keeps becoming increasingly high-dimensional. For example, the CERN Large Hadron Collider currently “generates so much data that scientists must discard the overwhelming majority of it, hoping that they’ve not thrown away anything useful” [1], and the upcoming Square Kilometer Array is expected to produce 100 times that [2]. Fortunately, high-dimensional data often has an underlying low-dimensional structure. Inferring such structure not only cuts memory and computational burdens, but also reduces noise and improves learning and prediction. However, higher dimensionality not only increases computational requirements; it also augments the complexity of the data’s structure.
In light of this, several research lines have explored new low-dimensional models that best summarize data, going from principal component analysis (PCA) [3–11] and single subspaces [12–21] to unions of subspaces [22–40] and algebraic varieties [41]. This paper introduces mixed-features vectors (MFV’s): a new model that describes the underlying structure of data arising from several modern applications, which is not captured by existing low-dimensional models. The main idea is that each entry of an MFV comes from one out of several classes, and that the entries of the same class lie in an underlying subspace. In particular, MFV’s are motivated by metagenomics [42–46] and recommender systems [47–59]: in metagenomics, each gene segment comes from one of the several taxa present in a microbiome; in recommender systems, each rating may come from one of several users sharing the same account. However, MFV’s also have applications in robust estimation (e.g., robust PCA [3–11] and robust dictionary learning [60–67]), matrix completion [48–59], subspace clustering [22–39], and more.

This paper also introduces subspace splitting (SS): the task of clustering the entries of an MFV according to its underlying subspaces. SS is tightly related to other machine learning problems. In particular, SS can be thought of as a generalization of robust matched subspace detection (RMSD) [12–17] and maximum feasible subsystem (MAXFS) [68–74]. However, the added complexity of SS renders existing approaches for these problems inapplicable, which calls for specialized SS theory and methods. In this regard, (i) we derive precise identifiability conditions under which SS is well-posed, and (ii) we propose the first three SS algorithms.

2 PROBLEM STATEMENT AND FUNDAMENTAL THEORY.

Let U^1, ..., U^K be subspaces of R^d, and let Ω_0, Ω_1, ..., Ω_K denote a partition of [d] := {1, ..., d}.
For any subspace, matrix, or vector that is compatible with a set of indices Ω ⊂ [d], we will use the subscript Ω to denote its restriction to the coordinates/rows in Ω. For example, U^1_{Ω_K} ⊂ R^{|Ω_K|} denotes the restriction of U^1 to the coordinates in Ω_K. Define x ∈ R^d as the mixed-features vector (MFV) such that x_{Ω_k} ∈ U^k_{Ω_k} for each k = 1, ..., K, and the entries of x_{Ω_0} are outliers. Let ε ∈ R^d denote a noise vector with variance σ². Given U^1, ..., U^K, and an incomplete observation y_Ω = x_Ω + ε_Ω, the goal of subspace splitting (SS) is to determine the subsets Ω_1 ∩ Ω, ..., Ω_K ∩ Ω indicating the observed coordinates of y that match with each subspace.

Example 1. Consider the following setup, with 1-dimensional subspaces U^1, U^2 spanned by U^1, U^2:

U^1 = [1 1 1 1 1 1]^T,  U^2 = [1 2 3 4 5 6]^T,  x = [1/2 1/2 6 8 9 10]^T,
ε = [0.1 −0.1 −0.1 0.1 −0.1 0.1]^T,  y_Ω = [0.51 0.49 5.9 8.1 8.9 ·]^T.

It is easy to see that Ω_1 = {1, 2}, Ω_2 = {3, 4}, Ω_0 = {5}, because x_{Ω_1} = (1/2)U^1_{Ω_1} and x_{Ω_2} = 2U^2_{Ω_2}. The keen reader will immediately wonder: is there another partition {Ω′_1, ..., Ω′_K} different from {Ω_1, ..., Ω_K} such that x_{Ω′_k} ∈ U^k_{Ω′_k} for every k? In other words, is this problem well-posed, and if so, under what conditions? Our main theoretical result answers this question, showing that under the next assumptions, Ω_k can be recovered if and only if it has more elements than the dimension of U^k.

A1. Each U^k is drawn independently with respect to the uniform measure over the Grassmannian.
A2. Each x_{Ω_k} is drawn independently according to an absolutely continuous distribution with respect to the Lebesgue measure on U^k_{Ω_k}.

In words, A1 essentially requires that U^1, ..., U^K are in general position, with no particular relation to one another. Similarly, A2 requires that each piece of x is in general position over its corresponding piece of subspace.
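The membership claims in Example 1 can be verified numerically with a least-squares residual test; the helper below is our own illustration (with 0-based coordinate indices), not one of the paper's proposed algorithms.

```python
import numpy as np

def fits(U, x, idx, tol=1e-9):
    """True iff the entries x[idx] lie in the restriction of the subspace
    spanned by the columns of U to the coordinates idx."""
    A = U[idx, :]
    coef, *_ = np.linalg.lstsq(A, x[idx], rcond=None)
    return bool(np.linalg.norm(A @ coef - x[idx]) < tol)

# Example 1 without noise: U^1 = span(1,...,1)^T, U^2 = span(1,2,...,6)^T
U1 = np.ones((6, 1))
U2 = np.arange(1.0, 7.0).reshape(6, 1)
x = np.array([0.5, 0.5, 6.0, 8.0, 9.0, 10.0])

print(fits(U1, x, [0, 1]))     # x_{Omega_1} = (1/2) U^1_{Omega_1}
print(fits(U2, x, [2, 3]))     # x_{Omega_2} = 2 U^2_{Omega_2}
print(fits(U2, x, [2, 3, 4]))  # adding the outlier entry breaks the fit
```

In the noisy setting of the paper, the hard threshold `tol` would be replaced by a tolerance calibrated to the noise level σ.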
This type of genericity assumption is becoming increasingly common in compressed sensing, matrix completion, subspace clustering, tensor theory, and related problems [10, 21, 30, 31, 33–38, 41, 56–59]. All our statements hold with probability 1 with respect to the measures in A1 and A2. We point out that A1 and A2 do not imply coherence or affinity (other typical assumptions in related theory that quantify alignment with the canonical axes or between subspaces [3, 6, 11, 23, 26, 27, 48–50, 54, 55]), nor vice versa. For example, bounded coherence and affinity assumptions indeed allow subspaces perfectly aligned on some coordinates. However, they rule out cases that our assumptions allow, for example the non-zero-measure set of highly coherent or affine subspaces that are somewhat aligned with the canonical axes or with one another. To sum up, these assumptions are different, neither stronger nor weaker than the usual coherence and affinity assumptions. With this, we are ready to state our main theorem, showing that subspace splitting is possible if and only if x contains more than dim(U^k) entries of each subspace U^k.

Theorem 1. Suppose A1 and A2 hold. Given x and U^1, ..., U^K, one can identify Ω_1, ..., Ω_K if and only if |Ω_k| > dim(U^k) for every k.

Example 1 shows a case where the conditions of Theorem 1 are met (|Ω_1| = |Ω_2| = 2 > dim(U^1) = dim(U^2) = 1), and consequently subspace splitting is well-posed (there exists no partition other than the true {Ω_1, Ω_2} that splits x into U^1 and U^2). Conversely, the following example shows a case where the conditions of Theorem 1 are not satisfied, and at least some Ω_k is unidentifiable.

Example 2. Consider the following setting:

U^1 = [1 1 1 1]^T,  U^2 = [1 2 3 4]^T,  U^3 = [0 1 4 9]^T,  x = [1/2 1/2 6 3]^T.

Now |Ω_2| = |Ω_3| = 1 = dim(U^2) = dim(U^3). As Theorem 1 shows, there exist multiple ways to split x into U^1, U^2, and U^3.
Here the partitions could be {Ω_1, Ω_2, Ω_3} = {{1, 2}, {3}, {4}} or {{1, 2}, {4}, {3}}, and there is no way of telling which is the true partition from U^1, U^2, U^3, and x. In other words, Ω_2 and Ω_3 are unidentifiable.

Remark 1. We point out that the constructions in our examples are not generic (i.e., they were not constructed according to A1 and A2). We chose them for their simplicity, to build intuition and make a point. However, our results still apply to these constructions, showing that in addition to all generic cases, our theory also holds for some non-generic ones. An exact characterization of all the non-generic cases that our theory covers requires a careful study of notions of sketching and partial coordinate discrepancy [31], which is out of the scope of this paper.

The proof of Theorem 1 follows from the next two lemmas. Lemma 1 shows that any subset of r or fewer entries of any vector will always match with any r-dimensional subspace in general position. Lemma 2 shows that r + 1 entries of a vector won’t match with a random r-dimensional subspace by chance. In other words, r + 1 entries of a vector will match with an r-dimensional subspace if and only if those entries truly come from that r-dimensional subspace. Lemma 2 is effectively the key to Theorem 1, as it allows us to try all combinations of r + 1 entries that fit in an r-dimensional subspace, knowing that we will never get a false match. All proofs are in Appendix A.

Lemma 1. Suppose A1 holds. Let Ω ⊂ [d]. If |Ω| ≤ dim(U^k), then x_Ω ∈ U^k_Ω for every x ∈ R^d.

Lemma 2. Suppose A1 and A2 hold. Let Ω be an arbitrary subset of [d] with exactly dim(U^k) + 1 elements. Then x_Ω ∈ U^k_Ω if and only if Ω ⊂ Ω_k.
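Lemma 2 directly suggests a brute-force splitting procedure: test every (r+1)-subset of coordinates against each r-dimensional subspace and collect the coordinates of the subsets that fit, since no false matches can occur. The sketch below is our illustration of this identifiability argument (not one of the paper's three more efficient algorithms) and recovers the partition of Example 1 in the noiseless case.

```python
import numpy as np
from itertools import combinations

def split_by_search(x, subspaces, tol=1e-9):
    """Cluster coordinates of x by testing all (r+1)-subsets against each
    r-dimensional subspace (columns of each matrix in `subspaces`); by
    Lemma 2, a generic (r+1)-subset fits U^k only if it lies in Omega_k."""
    d = len(x)
    omegas = []
    for U in subspaces:
        r = U.shape[1]
        members = set()
        for idx in combinations(range(d), r + 1):
            A = U[list(idx), :]
            coef, *_ = np.linalg.lstsq(A, x[list(idx)], rcond=None)
            if np.linalg.norm(A @ coef - x[list(idx)]) < tol:
                members.update(idx)
        omegas.append(sorted(members))
    return omegas

# Example 1: recovers Omega_1 = {0, 1} and Omega_2 = {2, 3} (0-based);
# the remaining coordinates do not fit either subspace
U1 = np.ones((6, 1))
U2 = np.arange(1.0, 7.0).reshape(6, 1)
x = np.array([0.5, 0.5, 6.0, 8.0, 9.0, 10.0])
print(split_by_search(x, [U1, U2]))   # [[0, 1], [2, 3]]
```

The exhaustive search is combinatorial in d, which is precisely why the paper's practical algorithms (random sampling, greedy projection, Lloyd-type alternation) are needed.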
Corollary 1 extends these results to account for noise and missing data, replacing A1 and A2 with:

A1'. The coherence of U^k is upper bounded by µ, and the geodesic distance over the Grassmannian between U^k and U^ℓ is lower bounded by ϕ, for every k, ℓ = 1, ..., K.
A2'. The coherence of x_{Ω_k} is upper bounded by ν, and its norm is lower bounded by ψ, for every k = 1, ..., K.

Corollary 1. Suppose A1' and A2' hold with µ, ν < Cσ and ϕ, ψ > cσ for some constants C and c. Let y_Ω and U^1, ..., U^K be given. Suppose |Ω_k ∩ Ω| > dim(U^k) for every k. Then with probability decreasing in C and increasing in c, one can identify Ω_1 ∩ Ω, ..., Ω_K ∩ Ω.

To guarantee identifiability, Corollary 1 requires that subspaces and samples are sufficiently incoherent and separated to overcome the noise level σ. Notice that since there is now missing data (we observe y_Ω instead of x, as in Theorem 1), Corollary 1 requires that there are enough observed entries per subspace, i.e., that the |Ω_k ∩ Ω| are sufficiently large (rather than the |Ω_k|). Similarly, only the observed entries can be classified, so only the intersections Ω_k ∩ Ω are identifiable (rather than the Ω_k).
The paper introduces the problem of subspace splitting, in which an observed mixed-features vector is to be partitioned such that the identified partitions match with given subspaces. The main results of the paper lie in deriving sufficient and necessary conditions for identifiability of these partitions when the subspaces and the entries of the features are randomly positioned in the ambient dimension and the subspaces, respectively. The conditions simply require that there are more entries associated with each subspace than the dimension of the subspace. The paper also presents algorithms to perform the splitting.
The authors propose a method to perform subspace splitting. That is, the task of clustering the entries of an input vector into sets of coherent subspaces. The contribution of the work is two-fold: (1) the theoretical characterization of the problem, and its well-posedness, and (2) the presentation of three algorithms for tackling the problem of subspace splitting. Quantitative analysis of the performance of the three algorithms is provided by means of synthetic experiments.
Learning Chess Blindfolded
1 INTRODUCTION.

Recently, transformer-based language models such as GPT-3 have stretched notions of what is possible with the simple self-supervised objective of language modeling, becoming a fixture in state-of-the-art language technologies (Vaswani et al., 2017; Devlin et al., 2019; Brown et al., 2020). However, the black-box nature of these models, combined with the complexity of natural language, makes it challenging to measure how accurately they represent the world state underlying the text. Motivated by the above issues, we propose training transformer language models for the game of chess. Chess provides a simple, constrained, and deterministic domain where the exact world state is known. Also, chess games can be transcribed exactly and unambiguously using chess notations (Section 2). In fact, the form of chess notations allows us to probe our language models for aspects of the board state using simple prompts (Section 3).

Due to the simplicity and precision of chess, we can evaluate language model predictions at a more fine-grained level than merely comparing them to the ground truth. For example, even if the next-move prediction doesn’t match the ground-truth move, we can still evaluate whether the move is legal given the board state, and if it is illegal, we can determine the reason why (Appendix D). Moreover, since the state can be exactly modeled, we can evaluate models using counterfactual queries as well. The proposed evaluation sets and metrics are described in Section 5.3. A side benefit of working with chess is that we have access to nearly unlimited data that is coupled with the exact board state at every turn. This board state is a form of grounding for the move sequence, and allows us to compare training on move sequences alone to training with access to varying amounts of explicit state.
Thus, modeling chess using language models may have implications for the debate surrounding the ability of language models to capture meaning when only exposed to text (Bender & Koller, 2020). To test the impact of chess-board grounding on learnability and data efficiency, we can train language models with varying degrees of access to the board state (Section 4). Finally, while chess represents a controlled domain, it is by no means trivial for a language model. To illustrate the challenges of language modeling for chess, consider the left board shown in Figure 1b, where white is next to move. In order to generate a valid next move, the language model needs to (a) infer that it is white’s turn, (b) represent the locations of all pieces, both white and black, (c) select one of the white pieces that can be legally moved, and finally (d) make a legal move with the selected piece. Thus, a language model has to learn to track the board state, learn to generate moves according to the rules of chess, and on top of that learn chess strategies to predict the actual move.

We find that when given enough training data, transformers can learn to both track piece locations and predict legal moves at high levels of accuracy. However, when testing predictive ability for long move histories, when only given small training sets, or when the model has access to limited history (Appendix C.1), predictive ability suffers. These challenging settings can provide an interesting testbed for future development of language models, and moreover, because of the probing properties, errors can be diagnosed in great detail. In these more challenging settings, we show that providing parts of the board state (during training time only) can lead to significant improvements in accuracy.
Our results also provide some key insights on transformer language models: (i) They are robust to various ways of incorporating explicit supervision about the board state when given enough training data. (ii) In particular, they are robust to changes in the input distribution where additional tokens, related to the board state, are added to the input sequence only during training (Section 4.1). In contrast to LSTMs, transformers achieve this robustness even with smaller training sets (Appendix F). (iii) Model performance relies strongly on access to the whole sequence history, as performance drops when this access is limited to a fixed-size window of previous tokens (Appendix C.1).

To summarize, our contributions are to:
• Propose chess as a testbed for evaluating the world-state tracking capabilities of language models.
• Show that by selecting (and tweaking) the appropriate chess notation, we can probe a language model for aspects of the world state using simple prompts (Section 3).
• Propose a suite of probing tasks to evaluate language models for chess on world-state tracking (Section 5.3). These probing tasks go beyond simple exact match, use a more fine-grained evaluation, and allow for automated error analysis (Appendix D).
• Show that given enough training data, transformer language models can learn to track piece locations and predict legal moves at high levels of accuracy.
• Evaluate the effect of grounding by training and evaluating a spectrum of transformer language models with varying degrees of access to the world state. We find that grounding helps in challenging settings of our proposed probing tasks.
• Provide insights on transformer language models, such as their robustness to incorporating the world state in various ways, their dependence on access to the whole history, etc.

2 BACKGROUND.

Chess Preliminaries.
Figure 1a (source: https://en.wikipedia.org/wiki/File:SCD_algebraic_notation.svg) shows how squares are indicated in chess notations via a combination of lettered columns and numbered rows. Chess notations use this square-naming convention to denote the movement of pieces. As our notation, we choose Universal Chess Interface (UCI) notation, which combines the starting square and the destination square to represent a move (for more details, see https://en.wikipedia.org/wiki/Universal_Chess_Interface). The move in Figure 1b is represented as f1b5 in UCI, where f1 indicates the starting square and b5 denotes the ending square. While another notation, SAN, is the standard choice for gameplay, we prefer UCI (see Appendix A for our reasons for choosing UCI over SAN).

2.1 RELATED WORK.

Simulated Worlds and Grounding. There have been several prior efforts in relating simulated worlds to natural language. The bAbI framework simulates a world modeled via templates to generate question-answering (QA) tasks (Weston et al., 2015a). The recent TextWorld framework facilitates generating, training, and evaluating interactive text-based games (Côté et al., 2018). bAbI and TextWorld are similar to our work in the sense that the true world state is, by construction, available. The key difference with bAbI is that it provides explicit world-state supervision in the form of training data for QA. With TextWorld we differ in: (1) their objective (reward maximization vs. maximizing the probability of the next observation), (2) how they are ultimately evaluated (final reward vs. world-state tracking), and (3) whether we can directly probe the model’s knowledge of the entire state. The world models of Ha & Schmidhuber (2018) also maximize the probability of the next observation, but differ along the other two dimensions. Similarly, the work by Hermann et al. (2017) and Hill et al.
( 2017 ) on developing and using 3D world simulations for learning grounded language has only partial overlap with the objective function , and differ along the other two aspects . Our work is related to work on grounding in that we are interested in comparing model performance when it does not have access to grounding information to when it does ( Bruni et al. , 2014 ; Kiros et al. , 2014 ; Ororbia et al. , 2019 ) . However , unlike the work of Ororbia et al . ( 2019 ) , for instance , the goal is not to improve performance of language models using access to more of the world state , but to assess how much of this state has been recovered by the model from just learning the LM task . Cloze Tasks for Natural Language Models . There has been a plethora of prior work on developing and using cloze tasks for evaluating natural language models ( Hermann et al. , 2015 ; Hill et al. , 2016 ) . These cloze tasks can range from testing general text understanding ( Paperno et al. , 2016 ) to targeting particular aspects of natural language , such as commonsense/pragmatics ( Mostafazadeh et al. , 2016 ; Ettinger , 2020 ) , narrative understanding ( Mostafazadeh et al. , 2017 ) , factual knowledge ( Petroni et al. , 2019 ) , etc . Creating these tasks often requires human curation , 3 and the evaluation is typically limited to exact match . In contrast , we propose cloze tasks/prompts targeting the world state , which can be precisely automated for chess , and can be evaluated at a fine-grained level . Probing . One of the goals of this work is to probe the language model ’ s board state tracking capability . A typical solution used by prior work is to train a probing model on top of a pretrained model ( Ettinger et al. , 2016 ; Alain & Bengio , 2017 ; Adi et al. , 2017 ; Tenney et al. , 2019 ; Hewitt & Liang , 2019 ) . This setup is time-consuming as it requires training probing models for all tasks . 
Moreover , the complexity of the probing model can also affect the conclusions ( Pimentel et al. , 2020 ) . In our case , we show that by appropriate choice of notation , probing for board state can be accomplished via simple prompts ( Section 3 ) . Deep Learning for Chess . Deep networks have been used in prior work to predict/mimic the next move given the true game state David et al . ( 2016 ) ; Oshri & Khandwala ( 2015 ) . Using just self-play and the rules of chess , AlphaZero achieves superhuman performance starting from random play ( Silver et al. , 2018 ) . The focus of these work is the quality of game play given the true board state while we just use chess as a testbed for evaluating the tranformer LMs world state tracking capabilities . Recently there have been several work focusing on transformer language models for chess ( Presser & Branwen , 2020 ; Cheng , 2020 ; Noever et al. , 2020 ) . These work are similar to ours in the sense that input is limited to the move sequence and not the true board state , but the focus of these work is again the quality of game play rather than how well is the model aware of the underlying board state . 3Automated cloze tasks without any human filtering can end up with instances which even humans can ’ t answer ( Hill et al. , 2016 ) 3 LANGUAGE MODEL PROMPTS AS BOARD STATE PROBES . Chess notations provide a simple alternate solution of using language model prompts as board state probes . For example , the prompt “ e2e4 e7e5 g1f3 b8c6 d2d4 h7h6 f1 ” ( the underlined move sequence leads to the left board state in Figure 1b ) can be used for next-token prediction with a language model trained on UCI notation . The generated token can be interpreted as the ending square predicted for the bishop at f1 . This prediction can then be used to determine the level of board state awareness of the model . 
A prediction of g1 may indicate that the model does not recognize that the piece type at f1 is a bishop , as such a move is not possible for a bishop . If the model predicts g2 , it may mean that the model is not aware that another piece is currently located at g2 . UCI notation is sufficient for assessing ending square prediction . However , it does not allow us to test starting square prediction directly . That is , we would like to give a language model the prompt “ e2e4 e7e5 g1f3 b8c6 d2d4 h7h6 N ” , where N represents knight , and expect it to generate a valid starting position for a knight of the correct color . To allow testing the model with such prompts , we propose randomly including piece types in moves during training with some fixed probability p. We refer to this strategy as “ randomly annotated piece type ” ( RAP ) and use the nomenclature “ UCI + RAP p ” to indicate that with p % probability , piece type is part of the move notation during training . Note that for p = 0 , the notation reduces to UCI . When testing with these starting square prediction prompts , we only include piece type for the prompt , not for any moves in the history . Thus , using RAP during training allows us to probe , at test time , where the model thinks each piece is , given any game history ’ s prefix ; by simply providing the desired piece type ( e.g. , N ) the model outputs the predicted starting square for a piece of that type .
This paper explores learning chess from raw notation as a benchmark for the ability of language models to track world state. Chess is an interesting benchmark: a set of moves can be unambiguously linked to a world state, large amounts of data are available, and the model can easily be probed for its board-tracking abilities. The contributions of this paper are twofold: (i) introducing blindfolded chess as a benchmark for grounded language learning and world state tracking, as well as a suite of probing tasks to evaluate models; (ii) empirical evidence that transformer language models can learn both the rules of the game and to track board state.
Learning Chess Blindfolded
1 INTRODUCTION . Recently, transformer-based language models such as GPT-3 have stretched notions of what is possible with the simple self-supervised objective of language modeling, becoming a fixture in state-of-the-art language technologies (Vaswani et al., 2017; Devlin et al., 2019; Brown et al., 2020). However, the black-box nature of these models, combined with the complexity of natural language, makes it challenging to measure how accurately they represent the world state underlying the text. Motivated by these issues, we propose training transformer language models on the game of chess. Chess provides a simple, constrained, and deterministic domain where the exact world state is known. Also, chess games can be transcribed exactly and unambiguously using chess notations (Section 2). In fact, the form of chess notations allows us to probe our language models for aspects of the board state using simple prompts (Section 3). Due to the simplicity and precision of chess, we can evaluate language model predictions at a more fine-grained level than merely comparing them to the ground truth. For example, even if the next-move prediction doesn't match the ground-truth move, we can still evaluate whether the move is legal given the board state, and if it is illegal, we can determine why (Appendix D). Moreover, since the state can be modeled exactly, we can also evaluate models using counterfactual queries. The proposed evaluation sets and metrics are described in Section 5.3. A side benefit of working with chess is that we have access to nearly unlimited data coupled with the exact board state at every turn. This board state is a form of grounding for the move sequence and allows us to compare training on move sequences alone to training with access to varying amounts of explicit state.
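To make this fine-grained evaluation concrete, here is a minimal sketch (not the paper's actual evaluation code) that grades a predicted move at three levels rather than by exact match only. The set of legal moves is supplied directly for illustration; in practice it would be enumerated by a chess engine, and the moves listed are an illustrative subset for the position after 1.e4 e5 2.Nf3 Nc6 3.d4 h6.

```python
def grade_prediction(predicted, ground_truth, legal_moves):
    """Grade a model's predicted move beyond simple exact match.

    Returns "exact" if it matches the recorded move, "legal" if it is some
    other legal move in the position, and "illegal" otherwise.
    """
    if predicted == ground_truth:
        return "exact"
    if predicted in legal_moves:
        return "legal"
    return "illegal"

# A few legal white moves (UCI) after 1.e4 e5 2.Nf3 Nc6 3.d4 h6 --
# an illustrative subset, not the full legal-move list:
legal = {"f1b5", "f1c4", "f3e5", "d4e5", "b1c3"}
```

For instance, if the recorded move was f1b5, a prediction of f1c4 is graded "legal" (a reasonable but different move), while f1g2 is graded "illegal" (white's own pawn occupies g2).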
Thus, modeling chess with language models may have implications for the debate surrounding the ability of language models to capture meaning when exposed only to text (Bender & Koller, 2020). To test the impact of chess board grounding on learnability and data efficiency, we can train language models with varying degrees of access to the board state (Section 4). Finally, while chess represents a controlled domain, it is by no means trivial for a language model. To illustrate the challenges of language modeling for chess, consider the left board shown in Figure 1b, where white is next to move. In order to generate a valid next move, the language model needs to (a) infer that it is white's turn, (b) represent the locations of all pieces, both white and black, (c) select one of the white pieces that can be legally moved, and finally (d) make a legal move with the selected piece. Thus, a language model has to learn to track the board state, learn to generate moves according to the rules of chess, and, on top of that, learn chess strategies to predict the actual move. We find that, given enough training data, transformers can learn both to track piece locations and to predict legal moves at high levels of accuracy. However, predictive ability suffers for long move histories, for small training sets, and when the model has access to only limited history (Appendix C.1). These challenging settings can provide an interesting testbed for future development of language models, and, because of the probing properties, errors can be diagnosed in great detail. In these more challenging settings, we show that providing parts of the board state (during training time only) can lead to significant improvements in accuracy.
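Two of the sub-skills listed above, (a) inferring whose turn it is and (b) tracking piece locations, can be written down explicitly. The toy tracker below is purely illustrative (it is not how the language model represents state) and deliberately ignores special moves: castling's rook hop, en passant, and promotion.

```python
def side_to_move(moves):
    """(a) Infer the side to move from a UCI move list: white moves on even plies."""
    return "white" if len(moves) % 2 == 0 else "black"

def track_board(moves):
    """(b) Naively track piece locations as a square -> piece dict.

    Captures work by overwriting the destination square; castling,
    en passant, and promotion are not handled in this sketch.
    """
    board = {}
    back_rank = "RNBQKBNR"
    for i, file in enumerate("abcdefgh"):
        board[file + "1"] = back_rank[i]          # white pieces
        board[file + "2"] = "P"                   # white pawns
        board[file + "7"] = "p"                   # black pawns
        board[file + "8"] = back_rank[i].lower()  # black pieces
    for move in moves:
        start, end = move[:2], move[2:4]
        board[end] = board.pop(start)
    return board
```

After the moves e2e4, e7e5, g1f3, the tracker places a white knight on f3, leaves g1 empty, and reports that it is black's turn.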
Our results also provide some key insights on transformer language models: (i) They are robust to various ways of incorporating explicit supervision about the board state when given enough training data. (ii) In particular, they are robust to changes in input distribution where additional tokens related to the board state are added to the input sequence only during training (Section 4.1). In contrast to LSTMs, transformers achieve this robustness even with smaller training sets (Appendix F). (iii) Model performance relies strongly on access to the whole sequence history: performance drops when this access is limited to a fixed-size window of previous tokens (Appendix C.1). To summarize, our contributions are to: • Propose chess as a testbed for evaluating the world state tracking capabilities of language models. • Show that by selecting (and tweaking) the appropriate chess notation, we can probe a language model for aspects of the world state using simple prompts (Section 3). • Propose a suite of probing tasks to evaluate language models for chess on world state tracking (Section 5.3). These probing tasks go beyond simple exact match, use a more fine-grained evaluation, and allow for automated error analysis (Appendix D). • Show that, given enough training data, transformer language models can learn to track piece locations and predict legal moves at high levels of accuracy. • Evaluate the effect of grounding by training and evaluating a spectrum of transformer language models with varying degrees of access to the world state. We find that grounding helps in challenging settings of our proposed probing tasks. • Provide insights on transformer language models, such as their robustness to incorporating the world state in various ways and their dependence on access to the whole history. 2 BACKGROUND . Chess Preliminaries .
Figure 1a (source: https://en.wikipedia.org/wiki/File:SCD_algebraic_notation.svg) shows how squares are indicated in chess notations via a combination of lettered columns and numbered rows. Chess notations use this square-naming convention to denote the movement of pieces. As our notation, we choose Universal Chess Interface (UCI) notation, which combines the starting square and the destination square to represent a move (for more details, see https://en.wikipedia.org/wiki/Universal_Chess_Interface). The move in Figure 1b is represented as f1b5 in UCI, where f1 indicates the starting square and b5 denotes the ending square. While another notation, SAN, is the standard choice for gameplay, we prefer UCI (see Appendix A for our reasons for choosing UCI over SAN). 2.1 RELATED WORK . Simulated Worlds and Grounding . There have been several prior efforts relating simulated worlds to natural language. The bAbI framework simulates a world modeled via templates to generate question-answering (QA) tasks (Weston et al., 2015a). The recent TextWorld framework facilitates generating, training, and evaluating interactive text-based games (Côté et al., 2018). bAbI and TextWorld are similar to our work in the sense that the true world state is, by construction, available. The key difference from bAbI is that it provides explicit world-state supervision in the form of training data for QA. From TextWorld we differ in: (1) the objective (reward maximization vs. maximizing the probability of the next observation), (2) how models are ultimately evaluated (final reward vs. world state tracking), and (3) whether we can directly probe the model's knowledge of the entire state. The world models of Ha & Schmidhuber (2018) also maximize the probability of the next observation, but differ along the other two dimensions. Similarly, the work by Hermann et al. (2017) and Hill et al.
(2017) on developing and using 3D world simulations for learning grounded language has only partial overlap with our objective function, and differs along the other two aspects. Our work is related to work on grounding in that we are interested in comparing model performance when it does not have access to grounding information to when it does (Bruni et al., 2014; Kiros et al., 2014; Ororbia et al., 2019). However, unlike the work of Ororbia et al. (2019), for instance, the goal is not to improve the performance of language models using access to more of the world state, but to assess how much of this state has been recovered by the model from learning the LM task alone. Cloze Tasks for Natural Language Models . There has been a plethora of prior work on developing and using cloze tasks for evaluating natural language models (Hermann et al., 2015; Hill et al., 2016). These cloze tasks range from testing general text understanding (Paperno et al., 2016) to targeting particular aspects of natural language, such as commonsense/pragmatics (Mostafazadeh et al., 2016; Ettinger, 2020), narrative understanding (Mostafazadeh et al., 2017), factual knowledge (Petroni et al., 2019), etc. Creating these tasks often requires human curation, and the evaluation is typically limited to exact match. In contrast, we propose cloze tasks/prompts targeting the world state, which can be precisely automated for chess and evaluated at a fine-grained level. Probing . One of the goals of this work is to probe the language model's board state tracking capability. A typical solution used by prior work is to train a probing model on top of a pretrained model (Ettinger et al., 2016; Alain & Bengio, 2017; Adi et al., 2017; Tenney et al., 2019; Hewitt & Liang, 2019). This setup is time-consuming as it requires training probing models for all tasks.
Moreover, the complexity of the probing model can also affect the conclusions (Pimentel et al., 2020). In our case, we show that with an appropriate choice of notation, probing for board state can be accomplished via simple prompts (Section 3). Deep Learning for Chess . Deep networks have been used in prior work to predict/mimic the next move given the true game state (David et al., 2016; Oshri & Khandwala, 2015). Using just self-play and the rules of chess, AlphaZero achieves superhuman performance starting from random play (Silver et al., 2018). The focus of these works is the quality of game play given the true board state, whereas we use chess only as a testbed for evaluating the world state tracking capabilities of transformer LMs. Recently, there have been several works focusing on transformer language models for chess (Presser & Branwen, 2020; Cheng, 2020; Noever et al., 2020). These works are similar to ours in that the input is limited to the move sequence rather than the true board state, but their focus is again the quality of game play rather than how well the model is aware of the underlying board state. (Note that automated cloze tasks without any human filtering can end up with instances that even humans cannot answer (Hill et al., 2016).) 3 LANGUAGE MODEL PROMPTS AS BOARD STATE PROBES . Chess notations provide a simple alternative: using language model prompts as board state probes. For example, the prompt “e2e4 e7e5 g1f3 b8c6 d2d4 h7h6 f1” (the underlined move sequence leads to the left board state in Figure 1b) can be used for next-token prediction with a language model trained on UCI notation. The generated token can be interpreted as the predicted ending square for the bishop at f1. This prediction can then be used to determine the model's level of board state awareness.
A prediction of g1 may indicate that the model does not recognize that the piece type at f1 is a bishop , as such a move is not possible for a bishop . If the model predicts g2 , it may mean that the model is not aware that another piece is currently located at g2 . UCI notation is sufficient for assessing ending square prediction . However , it does not allow us to test starting square prediction directly . That is , we would like to give a language model the prompt “ e2e4 e7e5 g1f3 b8c6 d2d4 h7h6 N ” , where N represents knight , and expect it to generate a valid starting position for a knight of the correct color . To allow testing the model with such prompts , we propose randomly including piece types in moves during training with some fixed probability p. We refer to this strategy as “ randomly annotated piece type ” ( RAP ) and use the nomenclature “ UCI + RAP p ” to indicate that with p % probability , piece type is part of the move notation during training . Note that for p = 0 , the notation reduces to UCI . When testing with these starting square prediction prompts , we only include piece type for the prompt , not for any moves in the history . Thus , using RAP during training allows us to probe , at test time , where the model thinks each piece is , given any game history ’ s prefix ; by simply providing the desired piece type ( e.g. , N ) the model outputs the predicted starting square for a piece of that type .
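A minimal sketch of the RAP training-time augmentation just described. The piece letters and the handling of pawns (left unannotated here) are our assumptions, since the text specifies only that piece types are included with probability p; p is treated as a probability in [0, 1], with p = 0 recovering plain UCI.

```python
import random

def rap_augment(moves, piece_types, p, rng=None):
    """With probability p, prefix a move with its piece-type letter (RAP).

    moves: UCI moves, e.g. ["e2e4", "g1f3"]
    piece_types: parallel letters ("K", "Q", "R", "B", "N"), or "" for pawns,
                 which this sketch leaves unannotated (an assumption).
    p: annotation probability in [0, 1]; p = 0 reduces to plain UCI.
    """
    rng = rng or random.Random()
    out = []
    for move, piece in zip(moves, piece_types):
        if piece and rng.random() < p:
            out.append(piece + move)  # e.g. "g1f3" -> "Ng1f3"
        else:
            out.append(move)
    return out
```

At test time, per the scheme above, no moves in the history are annotated; only the prompt's final piece-type token (e.g., N) is supplied to elicit a starting square.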
This paper is an interesting exploratory study analyzing the ability of language models to track the state of a chessboard. The authors adopt a clever chess notation which allows them to probe the language model's state tracking ability by looking at its next-word prediction (akin to probes in [1]). Quite remarkably, language models finetuned on chess data store a very accurate state representation, and predict legal moves over 90% of the time even without a visual representation of the board.
L2E: Learning to Exploit Your Opponent
1 INTRODUCTION . One core research topic in modern artificial intelligence is creating agents that can interact effectively with their opponents in different scenarios. To achieve this goal, agents should have the ability to reason about their opponents' behaviors, goals, and beliefs. Opponent modeling, which constructs models of opponents in order to reason about them, has been extensively studied in past decades (Albrecht & Stone, 2018). In general, an opponent model is a function that takes some interaction history as input and predicts some property of interest of the opponent. Specifically, the interaction history may contain the past actions that the opponent took in various situations, and the properties of interest could be the actions the opponent may take in the future, the style of the opponent (e.g., “defensive”, “aggressive”), or its current goals. The resulting opponent model can inform the agent's decision-making by incorporating the model's predictions in its planning procedure to optimize its interactions with the opponent. Opponent modeling has already been used in many practical applications, such as dialogue systems (Grosz & Sidner, 1986), intelligent tutoring systems (McCalla et al., 2000), and security systems (Jarvis et al., 2005). Opponent modeling algorithms vary greatly in their underlying assumptions and methodology. For example, policy reconstruction based methods (Powers & Shoham, 2005; Banerjee & Sen, 2007) explicitly fit an opponent model to reflect the opponent's observed behaviors. Type reasoning based methods (Dekel et al., 2004; Nachbar, 2005) reuse pre-learned models of several known opponents by finding the one which most resembles the behavior of the current opponent. Classification based methods ( Huynh et al.
, 2006; Sukthankar & Sycara, 2007) build models that predict the play style of the opponent and employ the counter-strategy that is effective against that particular style. Some recent works combine opponent modeling with deep learning or reinforcement learning methods and propose many related algorithms (He et al., 2016; Foerster et al., 2018; Wen et al., 2018). Although these algorithms have achieved some success, they also have some obvious disadvantages. First, constructing accurate opponent models requires a lot of data, which is problematic since in most applications the agent does not have the time or opportunity to collect enough data about its opponent. Second, most of these algorithms perform well only when the opponents encountered during testing are similar to the ones used for training, and it is difficult for them to adapt quickly to opponents with new styles. More related work on opponent modeling is discussed in Appendix A.1. To overcome these shortcomings, in this work we propose a novel Learning to Exploit (L2E) framework for implicit opponent modeling, which has two desirable advantages. First, L2E does not build an explicit model of the opponent, so it does not require a large amount of interactive data and simultaneously eliminates modeling errors. Second, L2E can quickly adapt to new opponents with unknown styles using only a few interactions with them. The key idea underlying L2E is to train a base policy against various styles of opponents, using only a few interactions between them during training, such that it acquires the ability to exploit different opponents quickly. After training, the base policy can quickly adapt to new opponents using only a few interactions during testing. In effect, our L2E framework optimizes for a base policy that is easy and fast to adapt. It can be seen as a particular case of learning to learn, i.e., meta-learning (Finn et al., 2017).
Meta-learning algorithms (cf. Appendix A.2 for details), such as MAML (Finn et al., 2017), were initially designed for single-agent environments. They require manual design of training tasks, and the final performance largely depends on the user-specified training task distribution. The L2E framework is designed explicitly for multi-agent competitive environments and generates effective training tasks (opponents) automatically (cf. Appendix A.3 for details). Some recent works have also begun to use meta-learning for opponent modeling. Unlike these works, which either use meta-learning to predict the opponent's behaviors (Rabinowitz et al., 2018) or to handle the non-stationarity problem in multi-agent reinforcement learning (Al-Shedivat et al., 2018), we focus on how to improve the agent's ability to adapt quickly to unknown opponents. In our L2E framework, the base policy is explicitly trained such that a few interactions with a new opponent produce an opponent-specific policy that effectively exploits this opponent; that is, the base policy is broadly adaptive to many opponents. Specifically, if the base policy is modeled by a deep neural network, then the opponent-specific policy can be obtained by fine-tuning the parameters of the base policy's network using the new interaction data with the opponent. A critical step in L2E is how to generate effective opponents for training the base policy. Ideal training opponents should satisfy two desiderata. 1) The opponents need to be challenging enough (i.e., hard to exploit). By learning to exploit these challenging opponents, the base policy eliminates its weaknesses and learns a more robust strategy. 2) The opponents need to be sufficiently diverse. The more diverse the opponents during training, the stronger the base policy's generalization ability, and the more adaptable the base policy is to new opponents.
To this end, we propose a novel opponent strategy generation (OSG) algorithm, which can produce challenging and diverse opponents automatically. We use the idea of adversarial training to generate challenging opponents. Previous works have also obtained more robust policies through adversarial training and showed that it improves generalization (Pinto et al., 2017; Pattanaik et al., 2018). From the perspective of the base policy, given an opponent, the base policy first adjusts itself to obtain an adapted policy; the base policy is then optimized to maximize the rewards that the adapted policy gets when facing the opponent. Challenging opponents are then generated adversarially by minimizing the base policy's adaptability, i.e., by automatically generating opponents that are difficult to exploit. These hard-to-exploit opponents are trained such that even if the base policy adapts to them, the adapted base policy cannot take advantage of them. Besides, our OSG algorithm can further produce diverse training opponents with a novel diversity-regularized policy optimization procedure. Specifically, we use the Maximum Mean Discrepancy (MMD) metric (Gretton et al., 2007) to evaluate the differences between policies. The MMD metric is then incorporated as a regularization term into the policy optimization process to obtain a diverse set of opponent policies. By training with these challenging and diverse opponents, the robustness and generalization ability of our L2E framework can be significantly improved. To summarize, the main contributions of this work are four-fold: • We propose a novel Learning to Exploit (L2E) framework to exploit sub-optimal opponents without building explicit models of them. L2E can quickly adapt to a new opponent with an unknown style using only a few interactions. • We propose an adversarial training procedure to generate challenging opponents automatically.
These hard-to-exploit opponents help L2E eliminate its weaknesses and improve its robustness effectively. • We further propose a diversity-regularized policy optimization procedure to generate diverse opponents automatically. The generalization ability of L2E is improved significantly by training with these diverse opponents. • We conduct detailed experiments to evaluate the L2E framework in three different environments. The experimental results demonstrate that the base policy trained with L2E exploits a wide range of opponents more quickly than other algorithms. 2 METHOD . In this paper, we propose a novel L2E framework that endows agents with the ability to adapt to diverse opponents quickly. As shown in Fig. 1, L2E mainly consists of two modules: the base policy training part and the opponent strategy generation part. In the base policy training part, our goal is to find a base policy that, given an unknown opponent, can quickly adapt to it using only a few interactions. To this end, the base policy is trained to be able to adapt to many opponents. Specifically, given an opponent O, the base policy B first adjusts itself to obtain an adapted policy B′ using a small amount of interaction data between O and B; the base policy is then optimized to maximize the rewards that B′ gets when facing O. In other words, the base policy has learned how to adapt to its opponents and exploit them quickly. The opponent strategy generation part provides the base policy training part with challenging and diverse training opponents automatically. First, our proposed opponent strategy generation (OSG) algorithm can produce opponents that are difficult to exploit. Specifically, the base policy B first adjusts itself to obtain an adapted policy B′ using a small amount of interaction data between O and B; the opponent O is then optimized to minimize the rewards that B′ gets when facing O.
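The two OSG ingredients, adversarial hardness and MMD-based diversity, can be illustrated in a toy one-parameter-vector setting that is entirely our own construction (not the paper's environments): a quadratic loss stands in for the true game payoff, the hard-opponent step moves the opponent to increase the adapted policy's loss, and a biased squared-MMD estimate with an RBF kernel measures how different two opponents' sample sets are.

```python
import numpy as np

ALPHA = 0.1  # inner-adaptation step size of the base policy

def adapt(theta, o):
    """Base policy B adapts toward exploiting opponent o on a toy loss ||theta - o||^2."""
    return theta - ALPHA * 2.0 * (theta - o)

def hard_osg_step(theta, o, lr=0.05):
    """Hard-OSG sketch: move opponent o to increase the *adapted* policy's loss.

    With adapt() above, the adapted loss is (1 - 2*ALPHA)^2 * ||theta - o||^2,
    so one ascent step on it (closed-form gradient) pushes o away from theta.
    """
    grad_o = -2.0 * (1.0 - 2.0 * ALPHA) ** 2 * (theta - o)
    return o + lr * grad_o

def mmd2(X, Y, sigma=1.0):
    """Diverse-OSG ingredient: biased estimate of squared MMD (RBF kernel)
    between two sample sets, usable as a diversity regularizer."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

In this toy setting, one hard-OSG step provably increases the adapted policy's loss, and the squared MMD is zero for identical sample sets and grows as the two sets separate, which is the behavior a diversity regularizer needs.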
The resulting opponent O is hard to exploit since even if the base policy B adapts to O, the adapted policy B′ cannot take advantage of O. By training with these hard-to-exploit opponents, the base policy can eliminate its weaknesses and improve its robustness effectively. Second, our OSG algorithm can further produce diverse training opponents with a novel diversity-regularized policy optimization procedure. More specifically, we first formalize the difference between opponent policies as the difference between the distributions of trajectories induced by each policy. The difference between distributions can be evaluated with the Maximum Mean Discrepancy (MMD) metric (Gretton et al., 2007). MMD is then integrated as a regularization term in the policy optimization process to identify diverse opponent policies. By training with these diverse opponents, the base policy can improve its generalization ability significantly. Next, we introduce these two modules in detail. 2.1 BASE POLICY TRAINING . Our goal is to find a base policy B that can quickly adapt to an unknown opponent O by updating the parameters of B using only a few interactions between B and O. The key idea is to train the base policy B against many opponents to maximize its payoffs, using only a small amount of interaction data during training, such that it acquires the ability to exploit different opponents quickly. In effect, our L2E framework treats each opponent as a training example. After training, the resulting base policy B can quickly adapt to new, unknown opponents using only a few interactions. Without loss of generality, the base policy B is modeled in this work by a deep neural network, i.e., a parameterized function πθ with parameters θ. Similarly, the opponent O used for training is also a deep neural network πφ with parameters φ. We model the base policy as playing against an opponent in a two-player Markov game (Shapley, 1953).
This Markov game $M = (S, (A_B, A_O), T, (R_B, R_O))$ consists of the state space $S$, the action spaces $A_B$ and $A_O$, and a state transition function $T : S \times A_B \times A_O \to \Delta(S)$, where $\Delta(S)$ is a probability distribution on $S$. The reward function $R_i : S \times A_B \times A_O \times S \to \mathbb{R}$ for each player $i \in \{B, O\}$ depends on the current state, the next state, and both players' actions. Given a training opponent $O$ whose policy is known and fixed, this two-player Markov game $M$ reduces to a single-player Markov Decision Process (MDP), i.e., $M^O_B = (S, A_B, T^O_B, R^O_B)$. The state and action space of $M^O_B$ are the same as in $M$. The transition and reward functions have the opponent policy embedded: $T^O_B(s, a_B) = T(s, a_B, a_O)$ and $R^O_B(s, a_B, s') = R_B(s, a_B, a_O, s')$, where the opponent's action is sampled from its policy, $a_O \sim \pi_\phi(\cdot \mid s)$. Throughout the paper, $M^Y_X$ represents a single-player MDP reduced from a two-player Markov game between player $X$ and player $Y$; in this MDP, player $Y$ is fixed and can be regarded as part of the environment. Suppose a set of training opponents $\{O_i\}_{i=1}^N$ is given. For each training opponent $O_i$, an MDP $M^{O_i}_B$ can be constructed as described above. The base policy $B$, i.e., $\pi_\theta$, is allowed to query a limited number of sample trajectories $\tau$ to adapt to $O_i$. In our method, the adapted parameters $\theta_{O_i}$ of the base policy are computed using one or more gradient descent updates with the sample trajectories $\tau$. For example, when using one gradient update:
$\theta_{O_i} = \theta - \alpha \nabla_\theta \mathcal{L}^{O_i}_B(\pi_\theta)$, (1)
$\mathcal{L}^{O_i}_B(\pi_\theta) = -\mathbb{E}_{\tau \sim M^{O_i}_B}\big[\sum_t \gamma^t R^{O_i}_B(s^{(t)}, a^{(t)}_B, s^{(t+1)})\big]$. (2)
Here $\tau \sim M^{O_i}_B$ denotes that the trajectory $\tau = \{s^{(1)}, a^{(1)}_B, s^{(2)}, \ldots, s^{(t)}, a^{(t)}_B, s^{(t+1)}, \ldots\}$ is sampled from the MDP $M^{O_i}_B$, where $s^{(t+1)} \sim T^{O_i}_B(s^{(t)}, a^{(t)}_B)$ and $a^{(t)}_B \sim \pi_\theta(\cdot \mid s^{(t)})$. We use $B_{O_i}$ to denote the updated base policy, i.e., $\pi_{\theta_{O_i}}$.
BOi can be seen as an opponent-specific policy , which is updated from the base policy through fast adaptation . Our goal is to find a generalizable base policy whose opponent-specific policy BOi can exploit its opponent Oi as much as possible . To this end , we optimize the parameters θ of the base policy to maximize the rewards that BOi gets when interacting with Oi . More concretely , the learning to exploit objective function is defined as follows : min θ ∑N i=1 LOi BOi ( πθOi ) = min θ ∑N i=1 LOi BOi ( π θ−α∇θL Oi B ( πθ ) ) . ( 3 ) It is worth noting that the optimization is performed over the base policy ’ s parameters θ , whereas the objective is computed using the adapted based policy ’ s parameters θOi . The parameters θ of the base policy are updated as follows : θ = θ − β∇θ ∑N i=1 LOi BOi ( πθOi ) . ( 4 ) In effect , our L2E framework aims to find a base policy that can significantly exploit the opponent with only a few interactions with it ( i.e. , with a few gradient steps ) . The resulting base policy has learned how to adapt to different opponents and exploit them quickly . An overall description of the base policy training procedure is shown in Alg . 1 . The algorithm consists of three main steps . First , generating hard to exploit opponents through the Hard-OSG module . Second , generating diverse opponent policies through the Diverse-OSG module . Third , training the base policy with these opponents to obtain fast adaptability .
This paper proposes the Learning to Exploit (L2E) framework, which can quickly adapt to diverse opponents' unknown strategies. The main contributions of L2E include: 1. learning a base model via a MAML-like optimization (Finn et al., ICML-17) so that it adapts to a new opponent after a few learning iterations (Section 2.1), 2. the generation of hard-to-exploit opponents to robustly train the base model (Section 2.2), and 3. the generation of diverse opponent policies using the maximum mean discrepancy (MMD) metric (Section 2.3). Empirical results show that L2E can exploit diverse opponents in the Leduc poker, BigLeduc poker, and Grid Soccer domains.
L2E: Learning to Exploit Your Opponent
1 INTRODUCTION . One core research topic in modern artificial intelligence is creating agents that can interact effectively with their opponents in different scenarios . To achieve this goal , the agents should have the ability to reason about their opponents ’ behaviors , goals , and beliefs . Opponent modeling , which constructs the opponents ’ models to reason about them , has been extensively studied in past decades ( Albrecht & Stone , 2018 ) . In general , an opponent model is a function that takes some interaction history as its input and predicts some property of interest of the opponent . Specifically , the interaction history may contain the past actions that the opponent took in various situations , and the properties of interest could be the actions that the opponent may take in the future , the style of the opponent ( e.g. , “ defensive ” , “ aggressive ” ) , or its current goals . The resulting opponent model can inform the agent ’ s decision-making by incorporating the model ’ s predictions in its planning procedure to optimize its interactions with the opponent . Opponent modeling has already been used in many practical applications , such as dialogue systems ( Grosz & Sidner , 1986 ) , intelligent tutor systems ( McCalla et al. , 2000 ) , and security systems ( Jarvis et al. , 2005 ) . Many opponent modeling algorithms vary greatly in their underlying assumptions and methodology . For example , policy reconstruction based methods ( Powers & Shoham , 2005 ; Banerjee & Sen , 2007 ) explicitly fit an opponent model to reflect the opponent ’ s observed behaviors . Type reasoning based methods ( Dekel et al. , 2004 ; Nachbar , 2005 ) reuse pre-learned models of several known opponents by finding the one which most resembles the behavior of the current opponent . Classification based methods ( Huynh et al. 
, 2006; Sukthankar & Sycara, 2007) build models that predict the play style of the opponent and employ the counter-strategy that is effective against that particular style. Some recent works combine opponent modeling with deep learning or reinforcement learning methods and propose many related algorithms (He et al., 2016; Foerster et al., 2018; Wen et al., 2018). Although these algorithms have achieved some success, they also have some obvious disadvantages. First, constructing accurate opponent models requires a lot of data, which is problematic since in most applications the agent does not have the time or opportunity to collect enough data about its opponent. Second, most of these algorithms perform well only when the opponents during testing are similar to the ones used for training, and it is difficult for them to adapt quickly to opponents with new styles. More related works on opponent modeling are in Appendix A.1. To overcome these shortcomings, we propose a novel Learning to Exploit (L2E) framework for implicit opponent modeling, which has two desirable advantages. First, L2E does not build an explicit model of the opponent, so it does not require a large amount of interactive data and simultaneously eliminates modeling errors. Second, L2E can quickly adapt to new opponents with unknown styles using only a few interactions with them. The key idea underlying L2E is to train a base policy against various styles of opponents, using only a few interactions between them during training, such that it acquires the ability to exploit different opponents quickly. After training, the base policy can quickly adapt to new opponents using only a few interactions during testing. In effect, our L2E framework optimizes for a base policy that is easy and fast to adapt. It can be seen as a particular case of learning to learn, i.e., meta-learning (Finn et al., 2017).
The meta-learning algorithm (c.f. Appendix A.2 for details), such as MAML (Finn et al., 2017), is initially designed for single-agent environments. It requires manual design of training tasks, and the final performance largely depends on the user-specified training task distribution. The L2E framework is designed explicitly for multi-agent competitive environments and generates effective training tasks (opponents) automatically (c.f. Appendix A.3 for details). Some recent works have also initially used meta-learning for opponent modeling. Unlike these works, which either use meta-learning to predict the opponent's behaviors (Rabinowitz et al., 2018) or to handle the non-stationarity problem in multi-agent reinforcement learning (Al-Shedivat et al., 2018), we focus on how to improve the agent's ability to adapt to unknown opponents quickly. In our L2E framework, the base policy is explicitly trained such that a few interactions with a new opponent produce an opponent-specific policy that effectively exploits this opponent, i.e., the base policy is broadly adaptive to many opponents. Specifically, if the base policy is modeled by a deep neural network, then the opponent-specific policy can be obtained by fine-tuning the parameters of the base policy's network on the new interaction data with the opponent. A critical step in L2E is how to generate effective opponents to train the base policy. The ideal training opponents should satisfy the following two desiderata. 1) The opponents need to be challenging enough (i.e., hard to exploit). By learning to exploit these challenging opponents, the base policy eliminates its weaknesses and learns a more robust strategy. 2) The opponents need to have enough diversity. The more diverse the opponents during training, the stronger the base policy's generalization ability, and the more adaptable the base policy to new opponents.
To this end, we propose a novel opponent strategy generation (OSG) algorithm, which can produce challenging and diverse opponents automatically. We use the idea of adversarial training to generate challenging opponents. Some previous works have also obtained more robust policies through adversarial training and showed that it improves generalization (Pinto et al., 2017; Pattanaik et al., 2018). From the perspective of the base policy, given an opponent, the base policy first adjusts itself to obtain an adapted policy; the base policy is then optimized to maximize the rewards that the adapted policy gets when facing the opponent. The challenging opponents are in turn generated adversarially by minimizing the base policy's adaptability, which automatically yields difficult-to-exploit opponents. These hard-to-exploit opponents are trained such that even if the base policy adapts to them, the adapted base policy cannot take advantage of them. Besides, our OSG algorithm can further produce diverse training opponents with a novel diversity-regularized policy optimization procedure. Specifically, we use the Maximum Mean Discrepancy (MMD) metric (Gretton et al., 2007) to evaluate the differences between policies. The MMD metric is then incorporated as a regularization term into the policy optimization process to obtain a diverse set of opponent policies. By training with these challenging and diverse opponents, the robustness and generalization ability of our L2E framework can be significantly improved. To summarize, the main contributions of this work are fourfold: • We propose a novel learning to exploit (L2E) framework to exploit sub-optimal opponents without building explicit models of them. L2E can quickly adapt to a new opponent with an unknown style using only a few interactions. • We propose an adversarial training procedure to generate challenging opponents automatically.
These hard-to-exploit opponents help L2E eliminate its weaknesses and improve its robustness effectively. • We further propose a diversity-regularized policy optimization procedure to generate diverse opponents automatically. The generalization ability of L2E is improved significantly by training with these diverse opponents. • We conduct detailed experiments to evaluate the L2E framework in three different environments. The experimental results demonstrate that the base policy trained with L2E quickly exploits a wide range of opponents compared to other algorithms. 2 METHOD. In this paper, we propose a novel L2E framework that enables agents to adapt to diverse opponents quickly. As shown in Fig. 1, L2E mainly consists of two modules, i.e., the base policy training part and the opponent strategy generation part. In the base policy training part, our goal is to find a base policy that, given an unknown opponent, can quickly adapt to it using only a few interactions. To this end, the base policy is trained to be able to adapt to many opponents. Specifically, given an opponent O, the base policy B first adjusts itself to obtain an adapted policy B′ using a small amount of interaction data between O and B; the base policy is then optimized to maximize the rewards that B′ gets when facing O. In other words, the base policy has learned how to adapt to its opponents and exploit them quickly. The opponent strategy generation part provides the base policy training part with challenging and diverse training opponents automatically. First, our proposed opponent strategy generation (OSG) algorithm can produce difficult-to-exploit opponents. Specifically, the base policy B first adjusts itself to obtain an adapted policy B′ using a small amount of interaction data between O and B; the opponent O is then optimized to minimize the rewards that B′ gets when facing O.
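The adversarial loop just described (adapt B into B′, then push the opponent's parameters in the direction that hurts B′) can be sketched abstractly. Here `sample`, `grad_B`, and `grad_O` are hypothetical stubs standing in for trajectory collection and policy-gradient estimators; they are not functions from the paper.

```python
def hard_osg(theta, phi, sample, grad_B, grad_O, alpha=0.1, eta=0.01, steps=50):
    """Sketch of Hard-OSG: repeatedly adapt the base policy to the current
    opponent, then move the opponent's parameters phi so as to minimize the
    adapted policy's reward (i.e., ascend the adapted policy's loss)."""
    for _ in range(steps):
        tau = sample(theta, phi)                          # few interactions: B vs. O
        theta_adapt = theta - alpha * grad_B(theta, tau)  # B -> B' (fast adaptation)
        tau2 = sample(theta_adapt, phi)                   # interactions: B' vs. O
        phi = phi + eta * grad_O(phi, tau2)               # ascend B''s loss w.r.t. phi
    return phi
```

With real policies, `grad_O` would be a policy-gradient estimate of the base policy's loss with respect to the opponent's parameters, so the returned opponent is one that the adapted base policy struggles to exploit.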
The resulting opponent O is hard to exploit since even if the base policy B adapts to O, the adapted policy B′ cannot take advantage of O. By training with these hard-to-exploit opponents, the base policy can eliminate its weaknesses and improve its robustness effectively. Second, our OSG algorithm can further produce diverse training opponents with a novel diversity-regularized policy optimization procedure. More specifically, we first formalize the difference between opponent policies as the difference between the distributions of trajectories induced by each policy. The difference between distributions can be evaluated with the Maximum Mean Discrepancy (MMD) metric (Gretton et al., 2007). MMD is then integrated as a regularization term into the policy optimization process to identify various opponent policies. By training with these diverse opponents, the base policy can improve its generalization ability significantly. Next, we introduce these two modules in detail. 2.1 BASE POLICY TRAINING. Our goal is to find a base policy B that can quickly adapt to an unknown opponent O by updating the parameters of B using only a few interactions between B and O. The key idea is to train the base policy B against many opponents to maximize its payoffs while using only a small amount of interaction data during training, such that it acquires the ability to exploit different opponents quickly. In effect, our L2E framework treats each opponent as a training example. After training, the resulting base policy B can quickly adapt to new and unknown opponents using only a few interactions. Without loss of generality, the base policy B is modeled by a deep neural network in this work, i.e., a parameterized function πθ with parameters θ. Similarly, each training opponent O is also a deep neural network πφ with parameters φ. We model the base policy as playing against an opponent in a two-player Markov game (Shapley, 1953).
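The MMD regularizer used by Diverse-OSG above can be illustrated with a standard kernel two-sample estimate over trajectory features. The RBF kernel, its bandwidth, and the featurization of trajectories as fixed-length vectors are illustrative assumptions here, not the paper's exact choices.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between rows of X and rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between two samples of trajectory
    features; larger values mean more distinguishable policies."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())
```

A new opponent could then be optimized with a bonus proportional to its MMD distance from previously generated opponents, pushing it toward behavior the training set does not yet cover.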
This Markov game $M = (S, (A_B, A_O), T, (R_B, R_O))$ consists of the state space $S$, the action spaces $A_B$ and $A_O$, and a state transition function $T : S \times A_B \times A_O \to \Delta(S)$, where $\Delta(S)$ is a probability distribution on $S$. The reward function $R_i : S \times A_B \times A_O \times S \to \mathbb{R}$ for each player $i \in \{B, O\}$ depends on the current state, the next state, and both players' actions. Given a training opponent $O$ whose policy is known and fixed, this two-player Markov game $M$ reduces to a single-player Markov Decision Process (MDP), i.e., $M_B^O = (S, A_B, T_B^O, R_B^O)$. The state and action space of $M_B^O$ are the same as in $M$. The transition and reward functions have the opponent policy embedded: $T_B^O(s, a_B) = T(s, a_B, a_O)$, $R_B^O(s, a_B, s') = R_B(s, a_B, a_O, s')$, where the opponent's action is sampled from its policy $a_O \sim \pi_\phi(\cdot \mid s)$. Throughout the paper, $M_X^Y$ represents a single-player MDP reduced from a two-player Markov game between player $X$ and player $Y$; in this MDP, player $Y$ is fixed and can be regarded as part of the environment. Suppose a set of training opponents $\{O_i\}_{i=1}^N$ is given. For each training opponent $O_i$, an MDP $M_B^{O_i}$ can be constructed as described above. The base policy $B$, i.e., $\pi_\theta$, is allowed to query a limited number of sample trajectories $\tau$ to adapt to $O_i$. In our method, the adapted parameters $\theta_{O_i}$ of the base policy are computed using one or more gradient descent updates with the sample trajectories $\tau$. For example, when using one gradient update:
$$\theta_{O_i} = \theta - \alpha \nabla_\theta L_B^{O_i}(\pi_\theta), \quad (1)$$
$$L_B^{O_i}(\pi_\theta) = -\mathbb{E}_{\tau \sim M_B^{O_i}}\Big[\sum_t \gamma^t R_B^{O_i}\big(s^{(t)}, a_B^{(t)}, s^{(t+1)}\big)\Big]. \quad (2)$$
Here $\tau \sim M_B^{O_i}$ denotes that the trajectory $\tau = \{s^{(1)}, a_B^{(1)}, s^{(2)}, \ldots, s^{(t)}, a_B^{(t)}, s^{(t+1)}, \ldots\}$ is sampled from the MDP $M_B^{O_i}$, where $s^{(t+1)} \sim T_B^{O_i}(s^{(t)}, a_B^{(t)})$ and $a_B^{(t)} \sim \pi_\theta(\cdot \mid s^{(t)})$. We use $B^{O_i}$ to denote the updated base policy, i.e., $\pi_{\theta_{O_i}}$.
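Equations (1)-(2) amount to a single REINFORCE-style gradient step on trajectories collected against $O_i$. A minimal sketch with a tabular softmax policy (the tabular parameterization and the plain REINFORCE estimator are assumptions for illustration; the paper uses a neural network policy):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def inner_adapt(theta, trajectories, alpha=0.1, gamma=0.99):
    """One inner update (Eq. 1): theta' = theta - alpha * grad L, where L is
    the negative expected discounted return (Eq. 2), estimated by REINFORCE.
    theta[s] holds the action logits for state s; each trajectory is a list
    of (state, action, reward) tuples."""
    grad = np.zeros_like(theta)
    for traj in trajectories:
        G = sum(gamma ** t % 1 * 0 + gamma ** t * r for t, (_, _, r) in enumerate(traj))
        for s, a, _ in traj:
            p = softmax(theta[s])
            onehot = np.eye(theta.shape[1])[a]
            grad[s] += -G * (onehot - p)   # grad of L = -E[G] w.r.t. the logits
    grad /= len(trajectories)
    return theta - alpha * grad
```

Against an opponent under which action 0 is rewarded, a single step already shifts probability mass toward that action.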
$B^{O_i}$ can be seen as an opponent-specific policy, updated from the base policy through fast adaptation. Our goal is to find a generalizable base policy whose opponent-specific policy $B^{O_i}$ can exploit its opponent $O_i$ as much as possible. To this end, we optimize the parameters $\theta$ of the base policy to maximize the rewards that $B^{O_i}$ gets when interacting with $O_i$. More concretely, the learning-to-exploit objective function is defined as follows:
$$\min_\theta \sum_{i=1}^{N} L^{O_i}_{B^{O_i}}(\pi_{\theta_{O_i}}) = \min_\theta \sum_{i=1}^{N} L^{O_i}_{B^{O_i}}\big(\pi_{\theta - \alpha \nabla_\theta L^{O_i}_{B}(\pi_\theta)}\big). \quad (3)$$
It is worth noting that the optimization is performed over the base policy's parameters $\theta$, whereas the objective is computed using the adapted base policy's parameters $\theta_{O_i}$. The parameters $\theta$ of the base policy are updated as follows:
$$\theta \leftarrow \theta - \beta \nabla_\theta \sum_{i=1}^{N} L^{O_i}_{B^{O_i}}(\pi_{\theta_{O_i}}). \quad (4)$$
In effect, our L2E framework aims to find a base policy that can significantly exploit the opponent after only a few interactions with it (i.e., a few gradient steps). The resulting base policy has learned how to adapt to different opponents and exploit them quickly. An overall description of the base policy training procedure is shown in Alg. 1. The algorithm consists of three main steps. First, generating hard-to-exploit opponents through the Hard-OSG module. Second, generating diverse opponent policies through the Diverse-OSG module. Third, training the base policy with these opponents to obtain fast adaptability.
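The outer update (4) requires differentiating through the inner update, as in MAML. A common first-order approximation drops that second-order term and simply averages the post-adaptation gradients; the sketch below uses this first-order variant, with `sample` and `policy_grad` as hypothetical stubs for rollout collection and gradient estimation.

```python
import numpy as np

def meta_step(theta, opponents, sample, policy_grad, alpha=0.1, beta=0.01):
    """One outer update in the spirit of Eq. (4), first-order variant:
    adapt to each opponent (Eq. 1), re-sample with the adapted policy,
    and average the post-adaptation gradients into the base parameters."""
    meta_grad = np.zeros_like(theta)
    for opp in opponents:
        tau = sample(theta, opp)                           # few interactions with opp
        theta_i = theta - alpha * policy_grad(theta, tau)  # inner step (Eq. 1)
        tau2 = sample(theta_i, opp)                        # rollouts of the adapted policy
        meta_grad += policy_grad(theta_i, tau2)            # first-order outer gradient
    return theta - beta * meta_grad / len(opponents)
```

The exact second-order update would instead backpropagate through `theta_i`, which in practice is what automatic differentiation frameworks are used for.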
The authors propose an opponent-modeling framework for 1-vs-1 games called Learning to Exploit (L2E), which learns to exploit opponents through a few interactions with different opponents during training so that it can quickly adapt to new opponents with unknown styles during testing. In particular, the authors propose Opponent Strategy Generation (OSG), which automatically produces effective training opponents: adversarial training eliminates the strategy's own weaknesses, and diversity-regularized policy optimization improves the generalization ability of L2E. Experimental results on two poker games and one grid soccer game indicate that L2E quickly adapts to diverse styles of unknown opponents.
Multilayer Dense Connections for Hierarchical Concept Classification
1 INTRODUCTION. Classification is a core concept for numerous computer vision tasks. Given the convolutional features, different architectures classify either the image itself (He et al., 2015; Szegedy et al., 2016), the region/bounding boxes for object detection (He et al., 2017; Liu et al., 2015), or, at the granular level, pixels for scene segmentation (Chen et al., 2018). Although early image recognition works employed multilayer classification layers (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015), more recent models have all been using a single-layer dense connection (He et al., 2016; Szegedy et al., 2016) or convolutions (Lin et al., 2017). The vision community has invented a multitude of techniques to enhance the capacity of the feature computation layers (Xie et al., 2017; Huang et al., 2017; Hu et al., 2018; Dai et al., 2017; Chollet, 2016; Tan & Le, 2019). But the classification layer has mostly retained the form of a multinomial/softmax logistic regression performing a mapping from a set of inputs (images) to a set of categories/labels. As such, the final output of these networks does not furnish a comprehensive depiction of the input entity. In particular, when an existing CNN correctly identifies an image of an English Setter, the output does not convey that it is an instance of a dog, or, more extensively, a hunting dog, a domestic animal, and a living thing. It is rational to assume that convolutional layers construct some internal representation of the conceptual superclasses, e.g., dog, animal, etc., during training. We argue that, by appropriately harnessing such representation, one can retrieve a much broader description of the input image from a CNN than is supplied by a single-layer output. Extensive information about most categories is freely available in repositories such as WordNet (Fellbaum, 1998).
WordNet provides the hierarchical organization of category classes ( e.g. , English Setter ) and their conceptual superclasses ( e.g. , Hunting dog , Domestic animal , Living thing ) . However , a surprisingly limited number of CNNs utilize the concept hierarchy . The primary goal of almost all existing studies is to improve the category-wise classification performance by exploiting the conceptual relations , often via a separate tool . Deng et al . ( 2014 ) and Ding et al . ( 2015 ) apply graphical models to capture the interdependence among concept labels to improve category classification accuracy . Other works either do not clarify the semantic meaning of the ancestor concepts ( Yan et al. , 2015 ) or impose a level of complexity in the additional tool ( RNN ) that is perhaps unnecessary ( Hu et al. , 2016 ) . We have not found an existing ( deep learning ) model that attempts to predict both the finer categories and the chain of ancestor concepts for an input image by a single network . The classical hedging method ( Deng et al. , 2012 ) computes either the finer labels or one of its superclasses exclusively , but not both simultaneously . In this paper , we introduce a CNN to classify the category and the concept superclasses simultaneously . As illustrated in Figure 1 , in order to classify any category class ( e.g. , English Setter ) , our model is constrained to also predict the ancestor superclasses ( e.g. , Hunting dog , Domestic animal , Living thing ) in the same order as defined in a given ontology . We propose a configuration of multilayer dense connections to predict the category & concept superclasses as well as model their interrelations based on the ontology . We also propose a simple method to prune and rearrange the label hierarchy for efficient connectivity . 
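Constraining the model to predict a category's ancestors in ontology order presupposes a routine that reads the ancestor chain out of the hierarchy. A minimal sketch using a child-to-parent dict (the dict encoding is an assumption for illustration; the paper works from the WordNet ontology):

```python
def ancestor_chain(category, parent):
    """Return the ordered ancestor concepts of a leaf category, most
    general concept first, from a child -> parent mapping."""
    chain = []
    node = category
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain[::-1]   # root-first order, as the ontology defines it
```

For the English Setter example, the chain would read Living thing, then Domestic animal, then Hunting dog, matching the order in which the superclasses must be predicted.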
Capturing the hierarchical relationship within the CNN architecture itself enables us to train the model end-to-end (as opposed to attaching a separate tool) by applying existing optimization strategies for training deep networks1. We experimentally demonstrate that one can train the proposed architecture using standard optimization protocols to predict the concept classes with two popular CNN backbones, ResNet and InceptionV4, while maintaining their category-wise accuracy. The proposed multilayer connection is shown to further refine the learned representations of these backbone CNNs to yield better concept and category classification than 1) multinomial logistic regression and 2) other existing works (that apply separate mechanisms) on standard datasets and challenging images. Predicting coarser superclasses in addition to finer-level categories improves the interpretability of the classifier's performance. Even if an eagle is misclassified as a parrot, the capability of inferring that it is a bird, and not an artifact (e.g., a drone), may be beneficial in some applications (e.g., surveillance). More importantly, an object detector can enhance its capability on unseen categories by adopting the proposed classification scheme (as demonstrated in Section 4.4). For example, a movie/TV violence recognition/detection tool can recognize a piece of equipment as a 'weapon' concept class even if that particular weapon category was not in the training set. In visual question answering (VQA), encoding concept classes would expand the scope of query terms by allowing broader descriptions ('how many vehicles are present' in addition to 'how many buses', 'how many trucks', etc.; see Cao et al. (2018); Wang et al. (2016)). In Appendix G, we point out how our architecture can be extended to object detectors to compute the concept classes.
In addition, we allude to the potential applications of our model to capture label structures different from a concept graph, e.g., spatial or compositional dependence. 2 RELEVANT WORKS. Use of hierarchical classifiers can be traced back to the early works of Torralba et al. (2004); Wu et al. (2004); Fergus et al. (2010) that shared features for improved classification. Some studies claimed a hierarchical organization of categories resembles how the human cognitive system stores knowledge (Zhao et al., 2011), while others experimentally showed a correspondence between the structure of the semantic hierarchy and visual confusion between categories (Deng et al., 2010). Bengio et al. (2010); Deng et al. (2011) learn a label tree for efficient inference with low theoretical complexity and also suggest a label hierarchy is beneficial for datasets with tens of thousands of categories. Deng et al. (2012) aim to predict either a coarse-level concept or a fine-level category (but not both at the same time) given an initial classical classifier. Provided an initial classifier output, this method determines the category or the coarse concept node (exclusively) with max reward based on aggregated probabilities in a label hierarchy. The reported results suggest the prediction of superclasses comes at the expense of fine-level category classification failures. For CNN-based classification, Deng et al. (2014) modeled relationships such as subsumption, overlap, and exclusion among the categories via a CRF. [Footnote 1: One can also envision learning the multilayer connectivity structure from data by architecture learning techniques (Zoph et al., 2017; Pham et al., 2018).] Although the CRF parameters can be trained via gradient descent, the inference required a separate message-passing computation. The work of Ding et al. (2015) extended this model by utilizing probabilistic label relationships. The HDCNN framework (Yan et al.
, 2015) groups the finer classes into coarse concepts using the category labels. The framework comprises two modules for coarse and fine categories, where the coarse prediction modulates the layers for finer classification. However, this work does not describe the coarser concept classes in order to analyze whether they, or their descendant categories, have any semantic meaning (see Appendix H). Furthermore, the overlap among concept class descendants precludes a mechanism to predict the conceptual superclasses. Hu et al. (2016) present a structured inference model for hierarchical representation where the concepts representing scene attributes are predicted as indicator vectors of different lengths. A bidirectional message passing, inspired by bidirectional recurrent networks, establishes the relations among different levels of concepts. The model leads to a large number of inter- and intra-layer label interactions, some of which needed to be manually hard-coded to 0. Appendix A describes a few other studies that proposed to incorporate structural prediction techniques in their design (Guo et al., 2017; Liang, 2019). Unlike us, the primary objective of almost all the aforementioned papers is to improve category prediction performance by utilizing the superclass hierarchy as an auxiliary source of information or as an intermediate result. With the exception of Deng et al. (2012); Hu et al. (2016), none of these studies were designed for, or demonstrate their effectiveness in, concept prediction. The algorithms of Deng et al. (2014); Guo et al. (2017) require substantial modifications to be able to classify the concept classes. Furthermore, in contrast to ours, most of these studies use a separate technique/tool for modeling the conceptual relations that needs to be trained or applied separately with different mechanisms.
It is important to distinguish our work from the hyperbolic embedding studies ( Nickel & Kiela , 2017 ; Khrulkov et al. , 2020 ) . Khrulkov et al . ( 2020 ) attempt to compute an embedding that respects ancestral hierarchy in the hyperbolic space – which implies that , ideally , a more generic image would be closer to the center than the specific ones . However , the paper does not describe – and , it is neither obvious nor straightforward – how to determine the concept classes from these embedded points . These studies , therefore , are not suitable for a comparison with our method for concurrent prediction of category and concept classes . 3 PROPOSED METHOD . Given an input image , the goal of our proposed method is to determine its category ( leaf node in the hierarchy ) and a list of its concept superclasses ( i.e. , ancestors in the ontology ) . As an example , for an image of a Chimpanzee , the proposed algorithm produces predictions for 1 ) the category Chimpanzee , and 2 ) an ordered list of ancestor concepts : Living thing → Chordate → Mammal → Primate → Chimpanzee . We illustrate ( and experiment with ) the proposed model for image classification in this paper . Our CNN architecture is designed to encompass the chain of relationships among the category and the predecessor concepts in the dense layers . We utilize an existing label hierarchy/ontology to guide the design of the dense layers , but do not use the hierarchy in prediction . In order to maximize the information within an ontology and to reduce the number of variables in the dense layers , we condense the original label hierarchy as explained in Section 3.1 . In our design of multilayer dense connections , each concept is associated with a set of hidden nodes . These hidden nodes are in turn connected to those of its children ( both category and concept ) and the output prediction nodes . Section 3.2 elaborates these connections and the loss functions to train the network .
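The wiring just described, where each concept owns a set of hidden nodes fed by its parent's hidden nodes plus an output prediction unit, can be sketched as a recursive forward pass. The layer sizes, the ReLU/sigmoid choices, and the class names below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

class ConceptNode:
    """One concept in the ontology: a hidden layer fed by the parent's
    hidden state, and a sigmoid unit predicting the concept's presence."""
    def __init__(self, name, in_dim, hid_dim, rng):
        self.name = name
        self.W = rng.normal(scale=0.1, size=(in_dim, hid_dim))
        self.v = rng.normal(scale=0.1, size=hid_dim)
        self.children = []   # child ConceptNodes (sub-concepts or categories)

def forward(node, parent_h, out):
    """Recursively compute each concept's probability from the root down."""
    h = np.maximum(parent_h @ node.W, 0.0)            # this concept's hidden nodes
    out[node.name] = 1.0 / (1.0 + np.exp(-(h @ node.v)))
    for child in node.children:
        forward(child, h, out)                        # children see the parent's hidden state
    return out
```

Feeding the CNN's feature vector into the root then yields one probability per concept along every root-to-leaf path, which is what allows training the whole hierarchy end-to-end with per-node losses.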
The paper proposes a method to classify category classes along with their concept superclasses. The proposed method relies on an ontology, which the authors heuristically re-organize, essentially pruning nodes that have few descendants and large semantic overlap. The network proposed to model the ontology essentially consists of a learned multiplicative gate at each level of the ontology, with a standard cross-entropy loss over concepts and a regularizing term indicating whether the category is a descendant of the concept. The experimental results claim gains over some baselines, e.g., a combined accuracy of 69.05 vs. the chosen baseline's 66.15 on ImageNet12 for ResNet-50 features, at the cost of an 18.2% increase in parameters.
Multilayer Dense Connections for Hierarchical Concept Classification
1 INTRODUCTION . Classification is a core concept for numerous computer vision tasks . Given the convolutional features , different architectures classify either the image itself ( He et al. , 2015 ; Szegedy et al. , 2016 ) , the region/bounding boxes for object detection ( He et al. , 2017 ; Liu et al. , 2015 ) , or , at the granular level , pixels for scene segmentation ( Chen et al. , 2018 ) . Although early image recognition works employed multilayer classification layers ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2015 ) , the more recent models have all been using single layer dense connection ( He et al. , 2016 ; Szegedy et al. , 2016 ) or convolutions ( Lin et al. , 2017 ) . The vision community has invented a multitude of techniques to enhance the capacity of feature computation layers ( Xie et al. , 2017 ; Huang et al. , 2017 ; Hu et al. , 2018 ; Dai et al. , 2017 ; Chollet , 2016 ; Tan & Le , 2019 ) . But , the classification layer has mostly retained the form of a multinomial/softmax logistic regression performing a mapping from a set of inputs ( images ) to a set of categories/labels . As such , the final output of these networks do not furnish a comprehensive depiction about the input entity . In particular , when an existing CNN correctly identifies an image of an English Setter , it is not laid out in the output that it is an instance of a dog , or more extensively , a hunting dog , a domestic animal and a living thing . It is rational to assume that convolutional layers construct some internal representation of the conceptual superclasses , e.g. , dog , animal etc. , during training . We argue that , by appropriately harnessing such representation , one can retrieve a much broader description of the input image from a CNN than it is supplied by a single layer output . Extensive information about most categories are freely available in repositories such as WordNet ( Fellbaum , 1998 ) . 
WordNet provides the hierarchical organization of category classes ( e.g. , English Setter ) and their conceptual superclasses ( e.g. , Hunting dog , Domestic animal , Living thing ) . However , a surprisingly limited number of CNNs utilize the concept hierarchy . The primary goal of almost all existing studies is to improve category-wise classification performance by exploiting the conceptual relations , often via a separate tool . Deng et al . ( 2014 ) and Ding et al . ( 2015 ) apply graphical models to capture the interdependence among concept labels to improve category classification accuracy . Other works either do not clarify the semantic meaning of the ancestor concepts ( Yan et al. , 2015 ) or impose a level of complexity in the additional tool ( RNN ) that is perhaps unnecessary ( Hu et al. , 2016 ) . We have not found an existing ( deep learning ) model that attempts to predict both the finer categories and the chain of ancestor concepts for an input image with a single network . The classical hedging method ( Deng et al. , 2012 ) computes either the finer labels or one of their superclasses exclusively , but not both simultaneously . In this paper , we introduce a CNN to classify the category and the concept superclasses simultaneously . As illustrated in Figure 1 , in order to classify any category class ( e.g. , English Setter ) , our model is constrained to also predict the ancestor superclasses ( e.g. , Hunting dog , Domestic animal , Living thing ) in the same order as defined in a given ontology . We propose a configuration of multilayer dense connections to predict the category and concept superclasses as well as to model their interrelations based on the ontology . We also propose a simple method to prune and rearrange the label hierarchy for efficient connectivity .
Capturing the hierarchical relationship within the CNN architecture itself enables us to train the model end-to-end ( as opposed to attaching a separate tool ) by applying existing optimization strategies for training deep networks . We experimentally demonstrate that one can train the proposed architecture using standard optimization protocols to predict the concept classes with two popular CNN backbones , ResNet and InceptionV4 , while maintaining their category-wise accuracy . The proposed multilayer connection is shown to further refine the learned representations of these backbone CNNs to yield better concept and category classification than 1 ) multinomial logistic regression , and 2 ) other existing works ( that apply separate mechanisms ) on standard datasets and challenging images . Predicting coarser superclasses in addition to finer-level categories improves the interpretability of the classifier performance . Even if an eagle is misclassified as a parrot , the capability of inferring that it is a bird , and not an artifact ( e.g. , a drone ) , may be beneficial in some applications ( e.g. , surveillance ) . More importantly , an object detector can enhance its capability on unseen categories by adopting the proposed classification scheme ( as demonstrated in Section 4.4 ) . For example , a movie/TV violence recognition/detection tool can recognize a piece of equipment as a ' weapon ' concept class even if that particular weapon category was not in the training set . In visual question answering ( VQA ) , encoding concept classes would expand the scope of query terms by allowing broader descriptions ( ' how many vehicles are present ' in addition to ' how many buses ' , ' how many trucks ' etc . ; see Cao et al . ( 2018 ) ; Wang et al . ( 2016 ) ) . In Appendix G , we point out how our architecture can be extended to object detectors to compute the concept classes .
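One simple way to quantify the interpretability benefit noted above ( an eagle misclassified as a parrot still gets credit for being a bird ) is the longest-common-prefix overlap between the predicted and true concept chains . The metric and toy labels below are illustrative assumptions , not the paper 's evaluation protocol :

```python
def chain_overlap(pred_chain, true_chain):
    """Fraction of the true root-to-leaf chain that the prediction matches,
    measured as the longest common prefix (illustrative metric)."""
    match = 0
    for p, t in zip(pred_chain, true_chain):
        if p != t:
            break
        match += 1
    return match / len(true_chain)

true = ["Living thing", "Bird", "Eagle"]
pred = ["Living thing", "Bird", "Parrot"]  # wrong leaf, correct superclasses
print(chain_overlap(pred, true))  # 2/3 credit instead of 0
```

Under such a score , a flat classifier that confuses an eagle with a drone is penalized more heavily than one that confuses it with a parrot , which matches the intuition in the text .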
In addition , we allude to the potential applications of our model to capture label structures different from the concept graph , e.g. , spatial or compositional dependence . 2 RELEVANT WORKS . The use of hierarchical classifiers can be traced back to the early works of Torralba et al . ( 2004 ) ; Wu et al . ( 2004 ) ; Fergus et al . ( 2010 ) , which shared features for improved classification . Some studies claimed that a hierarchical organization of categories resembles how the human cognitive system stores knowledge ( Zhao et al. , 2011 ) , while others experimentally showed a correspondence between the structure of the semantic hierarchy and visual confusion between categories ( Deng et al. , 2010 ) . Bengio et al . ( 2010 ) ; Deng et al . ( 2011 ) learn a label tree for efficient inference with low theoretical complexity and also suggest a label hierarchy is beneficial for datasets with tens of thousands of categories . Deng et al . ( 2012 ) aim to predict either a coarse-level concept or a fine-level category ( but not both at the same time ) given an initial classical classifier . Provided an initial classifier output , this method determines the category or the coarse concept node ( exclusively ) with maximum reward based on aggregated probabilities in a label hierarchy . The reported results suggest the prediction of superclasses comes at the expense of fine-level category classification accuracy . ( One can also envision learning the multilayer connectivity structure from data by architecture learning techniques ( Zoph et al. , 2017 ; Pham et al. , 2018 ) . ) For CNN-based classification , Deng et al . ( 2014 ) modeled relationships such as subsumption , overlap and exclusion among the categories via a CRF . Although the CRF parameters can be trained via gradient descent , the inference required a separate message passing computation . The work of Ding et al . ( 2015 ) extended this model by utilizing probabilistic label relationships . The HDCNN framework ( Yan et al.
, 2015 ) groups the finer classes into coarse concepts using the category labels . The framework comprises two modules for coarse and fine categories , where the coarse prediction modulates the layers for finer classification . However , this work does not describe the coarser concept classes in order to analyze whether they , or their descendant categories , have any semantic meaning ( see Appendix H ) . Furthermore , the overlap among concept class descendants precludes a mechanism to predict the conceptual superclasses . Hu et al . ( 2016 ) present a structured inference model for hierarchical representation where the concepts representing scene attributes are predicted as indicator vectors of different lengths . A bidirectional message passing scheme , inspired by bidirectional recurrent networks , establishes the relations among different levels of concepts . The model leads to a large number of inter- and intra-layer label interactions , some of which needed to be manually hard-coded to 0 . Appendix A describes a few other studies that proposed to incorporate structural prediction techniques in their design ( Guo et al. , 2017 ; Liang , 2019 ) . In contrast to ours , the primary objective of almost all the aforementioned papers is to improve category prediction performance by utilizing the superclass hierarchy as an auxiliary source of information or as an intermediate result . With the exception of Deng et al . ( 2012 ) ; Hu et al . ( 2016 ) , none of these studies were designed for , or demonstrated their effectiveness in , concept prediction . The algorithms of Deng et al . ( 2014 ) ; Guo et al . ( 2017 ) would require substantial modifications to be able to classify the concept classes . Furthermore , in contrast to ours , most of these studies use a separate technique/tool for modeling the conceptual relations that needs to be trained or applied with a different mechanism .
It is important to distinguish our work from the hyperbolic embedding studies ( Nickel & Kiela , 2017 ; Khrulkov et al. , 2020 ) . Khrulkov et al . ( 2020 ) attempt to compute an embedding that respects ancestral hierarchy in the hyperbolic space – which implies that , ideally , a more generic image would be closer to the center than the specific ones . However , the paper does not describe – and , it is neither obvious nor straightforward – how to determine the concept classes from these embedded points . These studies , therefore , are not suitable for a comparison with our method for concurrent prediction of category and concept classes . 3 PROPOSED METHOD . Given an input image , the goal of our proposed method is to determine its category ( leaf node in the hierarchy ) and a list of its concept superclasses ( i.e. , ancestors in the ontology ) . As an example , for an image of a Chimpanzee , the proposed algorithm produces predictions for 1 ) the category Chimpanzee , and 2 ) an ordered list of ancestor concepts : Living thing → Chordate → Mammal → Primate → Chimpanzee . We illustrate ( and experiment with ) the proposed model for image classification in this paper . Our CNN architecture is designed to encompass the chain of relationships among the category and the predecessor concepts in the dense layers . We utilize an existing label hierarchy/ontology to guide the design of the dense layers , but do not use the hierarchy in prediction . In order to maximize the information within an ontology and to reduce the number of variables in the dense layers , we condense the original label hierarchy as explained in Section 3.1 . In our design of multilayer dense connections , each concept is associated with a set of hidden nodes . These hidden nodes are in turn connected to those of its children ( both category and concept ) and the output prediction nodes . Section 3.2 elaborates these connections and the loss functions to train the network .
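The prediction target described above ( a category together with its ordered chain of ancestor concepts ) can be sketched as follows ; the toy parent map and the helper function are illustrative assumptions , not the paper 's implementation :

```python
# Toy ontology as a child -> parent map (illustrative, not the paper's data).
PARENT = {
    "Chimpanzee": "Primate",
    "Primate": "Mammal",
    "Mammal": "Chordate",
    "Chordate": "Living thing",
    "Living thing": None,  # root of the hierarchy
}

def ancestor_chain(category, parent=PARENT):
    """Return the ordered list root -> ... -> category, i.e. the full chain
    the model is constrained to predict for an input of this category."""
    chain = []
    node = category
    while node is not None:
        chain.append(node)
        node = parent[node]
    return list(reversed(chain))

print(ancestor_chain("Chimpanzee"))
# -> ['Living thing', 'Chordate', 'Mammal', 'Primate', 'Chimpanzee']
```

In the proposed architecture this chain is not looked up at inference time ; each concept in it has its own group of hidden nodes and output nodes , and the ontology only guides how those groups are wired together .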
This paper designs a multilayer connection structure for neural networks, such that the connection architecture supports implementation of a hierarchical classification scheme within these layers. It applies this design to the task of hierarchical classification on ImageNet. Experiments compare results with those of Deng et al. (2012), as well as baseline flat classification models.
Deep Q Learning from Dynamic Demonstration with Behavioral Cloning
1 INTRODUCTION . Deep reinforcement learning ( DRL ) methods have made great progress ( Mnih et al. , 2013 ; 2015 ; Silver et al. , 2017 ) when applied to rule-based applications such as the game of Go ( Silver et al. , 2016 ) . However , due to the diversity and uncertainty of complex systems , it is difficult to establish a simulation environment that is consistent with the real-world system . Therefore , DRL algorithms usually fail when applied directly to many real-world scenarios . Meanwhile , a DRL model may produce an action sampled from a random policy when exploring the state-action space , but random actions are not allowed in many real-world circumstances . For example , in autonomous driving experiments ( Kiran et al. , 2020 ) , a random policy may contribute to traffic congestion or even road accidents . Therefore , adapting to complex situations is one of the most urgent challenges in applying DRL models to complicated decision-making tasks . It is noted that human experts have great advantages in learning efficiency and decision-making performance ( Tsividis et al. , 2017 ) . Incorporating expert knowledge is a potential solution to enhance the adaptability of DRL models for complex tasks ( Hester et al. , 2018 ; Matas et al. , 2018 ) . Nevertheless , the knowledge and experience of an expert are difficult to model and describe directly . One solution that has attracted increasing attention is to learn expert strategies indirectly from their decision trajectories , also known as demonstrations ( Schaal , 1997 ; Behbahani et al. , 2019 ; Ravichandar et al. , 2020 ) . In particular , deep Q learning from demonstrations ( DQfD ) is a representative algorithm that successfully combines DRL with demonstrations ( Hester et al. , 2018 ) : it combines the temporal difference ( TD ) error of the traditional DDQN algorithm with a supervised expert loss in a hybrid loss function .
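For reference , the hybrid loss that DQfD minimizes ( Hester et al. , 2018 ) combines the 1-step double-Q TD loss , an n-step TD loss , the large-margin supervised expert loss , and an L2 regularizer , with weights λ1 , λ2 , λ3 :

```latex
J(Q) = J_{DQ}(Q) + \lambda_1 J_n(Q) + \lambda_2 J_E(Q) + \lambda_3 J_{L2}(Q),
\qquad
J_E(Q) = \max_{a \in \mathcal{A}} \left[ Q(s, a) + l(a_E, a) \right] - Q(s, a_E),
```

where \( a_E \) is the expert action and \( l(a_E, a) \) is a margin function that is 0 when \( a = a_E \) and positive otherwise , so the expert loss pushes the Q-value of the expert action above all other actions by at least the margin .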
Through a specially designed large-margin supervised loss function ( Piot et al. , 2014a ; b ) , the DQfD method guides an agent to learn the expert 's knowledge by constantly steering the agent 's learned strategies closer to those represented by the demonstration . However , the DQfD model suffers from three major issues : ( 1 ) In the DQfD learning process , the trajectory data in the historical demonstration dataset is the only source of expert loss values ; the self-generated transitions of the trained agent contribute none . As a result , the DQfD algorithm relies solely on TD errors to improve the policy , and the demonstration lies idle whenever self-generated transitions are sampled from the experience replay buffer , which reduces the efficiency of utilizing demonstrations . ( 2 ) Static demonstrations are too limited to cover sufficient state-action space during agent training , especially when it is difficult or expensive to collect demonstrations in real-world applications . Also , as more newly generated transitions are added to the experience replay buffer , historical demonstrations make ever smaller contributions to policy improvement because their sampling probability keeps decreasing . ( 3 ) The DQfD algorithm requires the learned policy to approximate the demonstration but ignores the imperfection of historical demonstrations , even though imperfect demonstrations are common in real-world applications . A perfect demonstration always provides appropriate guidance , but an imperfect one is detrimental to policy improvement once the learned policy surpasses the policy represented by the demonstration . To address the above issues , we propose a novel deep Q learning from dynamic demonstration with behavioral cloning ( DQfDD-BC ) method . Fig . 1 illustrates the structure of the proposed approach .
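As a concrete sketch of the large-margin expert loss just mentioned , the following computes the loss for one state ; the Q-values and margin value below are illustrative , not taken from the paper :

```python
def large_margin_loss(q_values, expert_action, margin=0.8):
    """DQfD-style supervised loss: max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E),
    where l(a_E, a) = 0 if a == a_E and `margin` otherwise (Hester et al., 2018)."""
    best = max(q + (0.0 if a == expert_action else margin)
               for a, q in enumerate(q_values))
    return best - q_values[expert_action]

# When the expert action already dominates by more than the margin, the loss is 0,
# so the supervised term stops interfering with TD learning.
print(large_margin_loss([2.0, 0.5, 0.1], expert_action=0))  # 0.0
# Otherwise the agent is pushed to raise Q at the expert action.
print(large_margin_loss([0.5, 2.0, 0.1], expert_action=0))  # about 2.3
```

Note that in DQfD this loss is only triggered for transitions drawn from the static demonstration set , which is exactly the limitation issue ( 1 ) above describes .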
DQfDD-BC shares the same basic components as the DDQN algorithm ( Hasselt et al. , 2016 ) , including two Q networks and an experience replay buffer . It adds an imitation learning ( IL ) module ( Hussein et al. , 2017 ) and an alterable demonstration in the replay buffer . In addition , the loss value is calculated from both the output of the trained Q network and that of the BC model . Two main contributions are summarized as follows : ( 1 ) An IL model , namely behavioral cloning ( BC ) ( Torabi et al. , 2018 ; Bühler et al. , 2020 ) , is used to generate the expert loss so that all transitions in the experience replay buffer are utilized . It first attempts to extract the expert 's policy from the initial demonstration , allowing it to provide reasonable reference actions for newly generated states . During the self-learning process , the agent 's actions are compared with those generated by the BC model through a self-designed expert loss function . The inclusion of the BC model allows the knowledge in the demonstrations to be sufficiently utilized during training and enables the model to cope with states that the experts never encountered . The supervised learning process and the self-learning process promote each other : the supervised model provides a basic reference to guide model adjustment , while the self-learning process keeps improving and overcomes the limitations of the BC model and of suboptimal samples . ( 2 ) An automatic update mechanism is proposed to adaptively enhance the BC model . In particular , when the trained agent achieves a relatively high performance score , its newly generated transitions are used to fine-tune the BC model . This mechanism includes more high-quality transition samples to improve the demonstration and avoids the potential adverse impact of imperfect demonstrations .
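To illustrate how a BC module can supply reference actions for states the expert never visited , the sketch below substitutes a 1-nearest-neighbor lookup over demonstration states for the paper 's trained BC network ; the states , actions , and the lookup itself are illustrative assumptions :

```python
# Minimal stand-in for the BC module: the paper trains a neural network on the
# demonstration; here a 1-nearest-neighbor lookup over demo states plays that
# role so the idea fits in a few lines (all data below is illustrative).
demo = [((0.0, 0.0), 1), ((1.0, 1.0), 0), ((2.0, 0.5), 2)]  # (state, expert action)

def bc_action(state):
    """Reference action for any state, including ones absent from the demo."""
    dist = lambda s: sum((a - b) ** 2 for a, b in zip(s, state))
    return min(demo, key=lambda pair: dist(pair[0]))[1]

# A novel state still gets a reference action, which the expert loss can then
# compare against the agent's own action.
print(bc_action((0.2, 0.1)))  # nearest demo state is (0.0, 0.0) -> action 1
```

This is the key difference from plain DQfD : the expert loss can now be evaluated on every sampled transition , not only on those stored in the demonstration set .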
In this study , we evaluate the proposed DQfDD-BC method on several OpenAI Gym environments ( Brockman et al. , 2016 ) . For comparison , the DDQN and DQfD methods are used as baselines ( Hasselt et al. , 2016 ; Hester et al. , 2018 ) . The experiments demonstrate that DQfDD-BC surpasses all the baselines in terms of convergence speed as well as decision-making performance . The ablation experiments also show that both the proposed expert loss function with the BC model and the dynamic demonstration contribute significantly to the superior performance of the DQfDD-BC algorithm . 2 RELATED WORKS . In most cases , demonstrations come from humans or other kinds of " experts " , who can provide valuable information for hard problems such as robot control ( Schaal , 1997 ; Ravichandar et al. , 2020 ) and self-driving ( Bojarski et al. , 2016 ) . As an approach that can make use of demonstrations , behavioral cloning ( Torabi et al. , 2018 ) , a form of imitation learning ( Hussein et al. , 2017 ; Brown et al. , 2020 ) , has received extensive attention ( Zhang et al. , 2018 ; Bühler et al. , 2020 ) due to its advantages , such as fast learning speed , simplicity , and effective utilization of demonstrations . It constructs mappings from states to actions ( or distributions of actions ) in a supervised manner to model the policies represented by the demonstrations , and achieves the goal of learning from a demonstration by minimizing various supervised losses . In a study by NVIDIA , the model obtained interesting results by minimizing the mean squared error between the steering command output by the network and the command of a human driver ( Bojarski et al. , 2016 ) . ChauffeurNet further enhances the performance of imitation learning in autonomous driving by synthesizing the worst scenarios ( Bansal et al. , 2018 ) . DAgger ( Ross et al.
, 2011 ) was proposed to cope with the covariate shift problem : experts are required to respond to self-generated states , so that entirely new transitions are generated that expand the state-action space covered by the demonstration . Performance improvement can thus be achieved by continuously adding new samples during the learning process . However , the DAgger method requires an always-available expert to assist in labeling the data , which reduces its practicality . Deeply AggreVaTeD ( Sun et al. , 2017 ) enables DAgger to handle continuous action spaces by using deep neural networks , but it also inherits DAgger 's weakness . Another typical class of imitation learning algorithms is generative adversarial imitation learning ( GAIL ) ( Ho & Ermon , 2016 ; Wang et al. , 2019 ) . Instead of mapping from state to action directly , GAIL learns from the demonstrations by introducing a generator and a discriminator . The generator is used to generate state-action pairs and learn a policy , while the discriminator is used to distinguish whether a state-action pair comes from the expert or from the learned policy . The information contained in the demonstration is indirectly combined with policy improvement through the process of adversarial learning . GAIL shows good performance on high-dimensional continuous-control problems ( Kang et al. , 2018 ; Song et al. , 2018 ) . Policy optimization with demonstrations ( POfD ) ( Kang et al. , 2018 ) utilizes the demonstration through an adversarial learning method and addresses both sparse- and dense-reward environments . Recently , demonstrations have been leveraged to tutor DRL to achieve better performance . Since the DRL reward is immediate feedback from the environment for evaluating the performance of the policy , reshaping the reward function through the demonstration is an effective approach to improve DRL performance ( Brys et al. , 2015 ; Suay et al. , 2016 ) .
By transforming the reward function , these methods reduce the difficulty of agent training . Model-based DRL ( MBRL ) has shown great power in some difficult tasks ( Kaiser et al. , 2019 ) , and demonstrations have also been leveraged to deal with MBRL problems ( Lambert et al. , 2019 ; Thananjeyan et al. , 2020 ) . All of the above methods yielded impressive results , but most of them utilized demonstrations through reward shaping or by imitating the demonstration . Another idea is to learn from demonstrations directly to improve the policy : the transitions in the experience replay buffer and the demonstrations are both sampled by the DRL agent ( Hester et al. , 2018 ; Oh et al. , 2018 ; Xiong et al. , 2019 ) . These approaches put the demonstration into the experience replay buffer and sample from the hybrid buffer . The DQfD algorithm enhances the DDQN algorithm ( Hasselt et al. , 2016 ) by keeping the demonstration in the replay buffer at all times ; a supervised expert loss is generated whenever transitions from the demonstration are sampled to update the parameters of the neural networks . The self-imitation learning ( SIL ) method chooses different loss functions and adds new trajectories to the demonstration ; it updates the same neural network twice , with the A2C loss function and the SIL loss function ( Oh et al. , 2018 ) . The DDPGfD algorithm ( Vecerik et al. , 2017 ) is similar to DQfD in that both enhance their base algorithms through a hybrid loss function that includes an " expert loss " . The difference is that DDPGfD is based on the DDPG algorithm ( Lillicrap et al. , 2015 ) , which is designed to solve continuous-action-space problems . However , both DQfD and DDPGfD suffer from under-exploiting demonstration data ( Kang et al. , 2018 ) , and our DQfDD-BC method makes new progress on solving this problem .
This paper proposes integrating deep Q-learning from dynamic demonstrations with a behavioral cloning model (DQfDD-BC). Compared with DQfD, the proposed approach introduces a behavior cloning model, which was first pre-trained by leveraging historical demonstrations and then updated using generated dynamic demonstration. The BC model is used in the expert loss function, where the DRL model's actions are compared with those obtained from the BC model for policy improvement guidance. The experimental results in OpenAI Gym environments show that the proposed approach adapts well to different demonstrations' imperfection levels and accelerates the learning processes. The ablation study also indicates that the new method improves the learning convergence performance compared with the original DQfD model.
Deep Q Learning from Dynamic Demonstration with Behavioral Cloning
1 INTRODUCTION . Deep reinforcement learning ( DRL ) methods have made great progress ( Mnih et al. , 2013 ; 2015 ; Silver et al. , 2017 ) when applied in several rule-based applications such as the Go game ( Silver et al. , 2016 ) . However , due to the diversity and uncertainty of complex systems , the establishment of a simulation environment is difficult to be consistent with the real-world system . Therefore , DRL algorithms usually fail in a direct application to many real-world scenarios . Meanwhile , a DRL model may produce an action that is sampled from a random policy when exploring the state-action space . However , random actions are not allowed in many real-world circumstances . For example , in autonomous driving experiments ( Kiran et al. , 2020 ) , a random policy may contribute to traffic congestion , even road accidents . Therefore , fitting to complex situations becomes one of the most urgent tasks for applying a DRL model for complicated decision-making tasks . It is noted that human experts have great advantages in learning efficiency and decision-making performance ( Tsividis et al. , 2017 ) . Incorporating expert knowledge is a potential solution to enhance the adaptability of DRL models for complex tasks ( Hester et al. , 2018 ; Matas et al. , 2018 ) . Nevertheless , the knowledge and experience of an expert are difficult to be modeled and described directly . One solution , attracting more and more attention , is to indirectly learn expert strategies by learning their decision trajectories , also known as demonstrations ( Schaal , 1997 ; Behbahani et al. , 2019 ; Ravichandar et al. , 2020 ) . Particularly , deep Q learning from demonstrations ( DQfD ) is a typical algorithm that succeeded in combining DRL with demonstrations ( Hester et al. , 2018 ) , which combines the temporal difference ( TD ) error in the traditional DDQN algorithm with supervised expert loss by constructing a hybrid loss function . 
Through a specially designed large margin supervised loss function ( Piot et al. , 2014a ; b ) , the DQfD method can guide and assist an agent to learn the expert ’ s knowledge by constantly steering the agent learning strategies closer to those represented by the demonstration . However , the DQfD model suffers from three major issues : ( 1 ) In the DQfD learning process , the trajectory data in the historical demonstration dataset is the only source for contributing expert loss values , which does not include the self-generated transitions of the trained agents . As a result , the DQfD algorithm merely relies on TD errors to improve the policy but the demonstration is idle when the self-generated transitions are sampled from the experience replay buffer , which reduces the efficiency of utilizing demonstrations . ( 2 ) According to the learning mechanism , static demonstrations are too limited to cover sufficient state-action space during the agent training process , especially when it is difficult or expensive to collect demonstrations in real-world applications . Also , with more newly-generated transitions added into the experience replay buffer , historical demonstrations would make smaller contributions to the policy improvement as their sampling probability becomes lower and lower . ( 3 ) The DQfD algorithm requires the learned policy to approximate the demonstration but ignores the imperfection of historical demonstrations and it is an imperfect demonstration that commonly exists in real-world applications . Perfect demonstration can always provide an appropriate guide but the imperfect demonstration is detrimental to the improvement of the policy when the learned policy is superior to the policy represented by the imperfect demonstration . To address the above issues , we propose a novel deep Q learning from dynamic demonstration with the behavioral cloning ( DQfDD-BC ) method . Fig . 1 illustrates the structure of the proposed approach . 
DQfDD-BC shares the same basic components of the DDQN algorithm ( Hasselt et al. , 2016 ) , including two Q networks and an experience replay buffer . There are an additional imitation learning ( IL ) ( Hussein et al. , 2017 ) module and an alterable demonstration in the replay buffer . Besides , the loss value is calculated considering both the output of the trained Q network and the BC model . Two main contributions are summarized as follows : ( 1 ) An IL model , named behavioral cloning ( BC ) ( Torabi et al. , 2018 ; Bühler et al. , 2020 ) , is proposed to generate expert loss to utilize all the transitions in the experience replay buffer . It first attempts to extract the experts ’ policy from the initial demonstration and allows the agent to provide reasonable actions when facing newly generated states . During the self-learning process , the agent ’ s actions are compared with those generated by the BC model through a self-designed expert loss function . The inclusion of the BC model allows the knowledge in the demonstrations to be sufficiently utilized in the training process and enables the model to cope with the states which experts have not ever encountered . The supervised learning process and self-learning process can promote each other . The supervised model provides a basic reference to guide the model adjustment . Meanwhile , the self-learning process keeps improving and overcomes the limitation of the BC model and suboptimal samples during the learning process . ( 2 ) An automatic update mechanism is proposed to adaptively enhance the BC model . In particular , new transitions are generated by the trained agents to fine-tune the BC model if the model achieves a relatively high-performance score . Such a mechanism tries to include more high-quality transition samples to improve the demonstration and avoids potential adverse impacts caused by imperfect demonstrations . 
In this study , we evaluate the proposed DQfDD-BC method on several gym environments ( Brockman et al. , 2016 ) . For comparison purposes , the DDQN and DQfD methods are used as the baselines ( Hasselt et al. , 2016 ; Hester et al. , 2018 ) . The experiments clearly demonstrate that DQfDD-BC surpasses all the baselines in terms of convergence speed as well as decision-making performance . The ablation experiments also show that both the proposed expert loss function together with the BC model and dynamic demonstration contribute significantly to the performance superior to the DQfDD-BC algorithm . 2 RELATED WORKS . In most cases , demonstration comes from the human or other kinds of ” experts ” , who can provide valuable information for different hard problems , such as robot control ( Schaal , 1997 ; Ravichandar et al. , 2020 ) , self-driving ( Bojarski et al. , 2016 ) and so on . As a kind of approaches which can make use of the demonstration , behavioral cloning ( Torabi et al. , 2018 ) , one of imitation learning ( Hussein et al. , 2017 ; Brown et al. , 2020 ) , has received extensive attention ( Zhang et al. , 2018 ; Bühler et al. , 2020 ) due to its significant advantages , such as fast learning speed , simplicity , and effective utilization of demonstration and so on . It constructs mappings from states to actions ( or distributions of actions ) in a supervised manner to model the policies represented by the demonstrations and achieves the goal of learning from a demonstration by minimizing various supervised losses . In a study by NVIDIA , the model obtained interesting results by minimizing the mean squared error between the steering command output by the network and the command of a human driver ( Bojarski et al. , 2016 ) . Chauffeurnet further enhances the performance of imitation learning in autopilot by synthesizing the worst scenarios ( Bansal et al. , 2018 ) . DAgger ( Ross et al. 
, 2011 ) is proposed to cope with the covariate shift problem and experts are required to respond to self-generated states so that totally new transitions are generated and they can expand the state-action space covered by demonstration . The goal of performance improvement can be achieved by adding new samples continuously during the learning process . However , the DAgger method requires an always-available expert to assist in labeling the data , which reduces the practicality of the method . Deeply AggreVaTeD ( Sun et al. , 2017 ) enables the DAgger to handle continuous action spaces by using deep neural networks but the weakness of DAgger was also preserved . Another typical class of imitation learning algorithms is generative adversarial imitation learning ( GAIL ) ( Ho & Ermon , 2016 ; Wang et al. , 2019 ) . Instead of mapping from state to action directly , GAIL learns from the demonstrations by introducing the generator and discriminator . The generator is used to generate state-action pairs and learn a policy , while the discriminator is used to distinguish whether a state-action pair is from the expert or the learned policy . The information contained in the demonstration is indirectly combined with policy improvement through the process of adversarial learning . GAIL shows good performance on the high-dimensional continuous-control problem ( Kang et al. , 2018 ; Song et al. , 2018 ) . Policy optimization with demonstrations ( POfD ) ( Kang et al. , 2018 ) utilizes the demonstration by an adversarial learning method and has made efforts on sparse and dense reward environments . Recently , demonstrations have been leveraged to tutor DRL to achieve better performance . Since the DRL reward is immediate feedback from the environment for evaluating the performance of the policy , reshaping the reward function through the demonstration is an effective approach to improve DRL performance ( Brys et al. , 2015 ; Suay et al. , 2016 ) . 
By transforming the form of the reward function, the difficulty of agent training is reduced. Model-based DRL (MBRL) has shown great power on some difficult tasks (Kaiser et al., 2019), and demonstrations have also been leveraged for MBRL problems (Lambert et al., 2019; Thananjeyan et al., 2020). All of the above methods yield impressive results, but most of them exploit demonstrations through reward shaping or imitation. Another idea is to learn from demonstrations directly to improve the policy: the DRL agent samples both the transitions in the experience replay buffer and the demonstrations (Hester et al., 2018; Oh et al., 2018; Xiong et al., 2019). These approaches put the demonstrations into the experience replay buffer and sample from the resulting hybrid buffer. The DQfD algorithm enhances DDQN (Hasselt et al., 2016) by keeping the demonstrations in the replay buffer at all times; a designed supervised expert loss is applied whenever demonstration transitions are sampled to update the network parameters. The self-imitation learning (SIL) method chooses different loss functions and adds new trajectories to the demonstrations, updating the same neural network twice, with the A2C loss and the SIL loss (Oh et al., 2018). The DDPGfD algorithm (Vecerik et al., 2017) is similar to DQfD; both enhance the original algorithms through a hybrid loss function that includes an "expert loss". The difference is that DDPGfD builds on the DDPG algorithm (Lillicrap et al., 2015), which is designed for continuous action spaces. However, both DQfD and DDPGfD suffer from under-exploiting the demonstration data (Kang et al., 2018), and our DQfDD-BC method makes substantial progress on this problem.
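The supervised expert loss used by DQfD, mentioned above, is the large-margin classification loss J_E(Q) = max_a [Q(s, a) + l(a_E, a)] − Q(s, a_E), where l(a_E, a) is a positive margin when a ≠ a_E and 0 otherwise. A minimal sketch, in which the margin value and array shapes are illustrative assumptions:

```python
import numpy as np

def dqfd_margin_loss(q_values, expert_actions, margin=0.8):
    """Large-margin supervised loss from DQfD:
    J_E(Q) = max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E),
    where l(a_E, a) = margin for a != a_E and 0 otherwise.
    q_values: (batch, n_actions); expert_actions: (batch,) int indices."""
    batch = np.arange(len(expert_actions))
    penalty = np.full_like(q_values, margin)
    penalty[batch, expert_actions] = 0.0       # no margin on the expert's action
    best = (q_values + penalty).max(axis=1)    # max over margin-augmented Q-values
    return (best - q_values[batch, expert_actions]).mean()
```

The loss is zero when the expert's action already beats every other action by at least the margin, so minimizing it pushes demonstrated actions above the rest.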
This paper introduces a learning method that combines imitation learning and reinforcement learning, so that an autonomous learner can leverage prerecorded expert knowledge. In comparison to previous work, the model has an expert cost function that gives priority to expert behavior, using not only the expert demonstrations (as in DQfD) but also a model trained on those demonstrations with behavioral cloning, so that the expert policy can be evaluated in states that were not visited during the demonstrations. Additionally, new executions of the learner that achieve high performance are included in the buffer used for training the imitation policy, since they can be considered new, better demonstrations.
SP:183f775e3c461066fc88446883bb458fb8d6d607
Simple Augmentation Goes a Long Way: ADRL for DNN Quantization
1 INTRODUCTION . By reducing the number of bits needed to represent each model parameter of a Deep Neural Network (DNN), quantization (Lin et al., 2016; Park et al., 2017; Han et al., 2015; Zhou et al., 2018; Zhu et al., 2016; Hwang & Sung, 2014; Wu et al., 2016; Zhang et al., 2018; Köster et al., 2017; Ullrich et al., 2017; Hou & Kwok, 2018; Jacob et al., 2018) is an important way to reduce the size and improve the energy efficiency and speed of DNNs. Mixed precision quantization selects a proper bit-width for each layer of a DNN, offering more flexibility than fixed precision quantization. A major challenge in mixed precision quantization (Micikevicius et al., 2017; Cheng et al., 2018) is the configuration search problem, that is, how to find the appropriate bit-width for each DNN layer efficiently. The search space grows exponentially with the number of layers, and assessing each candidate configuration requires lengthy training and evaluation of the DNN. Research efforts have been devoted to mitigating this issue, to help better tap into the power of mixed precision quantization. Prior methods mainly fall into two categories: (i) automatic methods, such as reinforcement learning (RL) (Lou et al., 2019; Gong et al., 2019; Wang et al., 2018; Yazdanbakhsh et al., 2018; Cai et al., 2020) and neural architecture search (NAS) (Wu et al., 2018; Li et al., 2020), which learn from feedback signals and automatically determine the quantization configurations; (ii) heuristic methods that reduce the search space under the guidance of metrics such as weight loss or the Hessian spectrum (Dong et al., 2019; Wu et al., 2018; Zhou et al., 2018; Park et al., 2017) of each layer. Compared to heuristic methods, automatic methods, especially Deep Reinforcement Learning (DRL), require little human effort and give state-of-the-art performance (e.g.
, via the actor-critic setting (Whiteson et al., 2011; Zhang et al., 2016; Henderson et al., 2018; Wang et al., 2018)). (∗This work was done when Lin Ning was an intern at Alibaba Group. †Now at Google Research.) It however suffers from overestimation bias and high variance of the estimated value, and hence slow convergence and suboptimal results. The problem is fundamentally due to the poor function approximations given by the DRL agent, especially during the early stage of the DRL learning process (Thrun & Schwartz, 1993; Anschel et al., 2017; Fujimoto et al., 2018), when the neural networks used in the DRL are of low quality. This issue prevents DRL from serving as a scalable solution to DNN quantization as DNNs become deeper and more complex. This paper reports that simple augmentations can bring surprising improvements to DRL for DNN quantization. We introduce augmented DRL (ADRL) as a principled way to significantly magnify the potential of DRL for DNN quantization. The principle of ADRL is to augment the neural networks in DRL with a complementary scheme (called the augmentation scheme) that compensates for the weakness of the DRL policy approximator. Analytically, we prove the effects of such a method in reducing the variance and improving the convergence rate of DRL. Empirically, we exemplify ADRL with two example augmentation schemes and test on four popular DNNs. Comparisons with four prior DRL methods show that ADRL can shorten the quantization process by 4.5-64× while improving model accuracy substantially. It is worth mentioning that there is some prior work on increasing the scalability of DRL. Dulac-Arnold et al. (2016), for instance, address large discrete action spaces by embedding them into continuous spaces and using nearest neighbors to find the closest actions. Our focus is different: we aim to enhance the learning speed of DRL by augmenting the weak policy approximator with complementary schemes.
2 BACKGROUND . Deep Deterministic Policy Gradient (DDPG). A standard reinforcement learning system consists of an agent interacting with an environment E. At each time step t, the agent receives an observation xt, takes an action at, and then receives a reward rt. Modeled as a Markov decision process (MDP) with a state space S and an action space A, the agent's behavior is defined by a policy π : S → A. When the environment is only partially observed, a state is defined as the sequence of actions and observations st = (x1, a1, · · ·, at−1, xt). For DNN quantization, the environment is assumed to be fully observable (st = xt). The return from a state s at time t is the future discounted return Rt = Σ_{i=t}^{T} γ^{i−t} r(si, ai) with a discount factor γ. The goal of the agent is to learn a policy that maximizes the expected return from the start state, J(π) = E[R1 | π]. An RL agent in continuous action spaces can be trained through the actor-critic algorithm with the deep deterministic policy gradient (DDPG). The parameterized actor function µ(s | θ^µ) specifies the current policy and deterministically maps a state s to an action a. The critic network Q(s, a) estimates the action-value E[Rt | st = s, at = a, π]; it is parameterized by θ^Q and is learned via the Bellman equation, as in Q-learning. The critic is updated by minimizing the loss

L(θ^Q) = E[(yt − Q(st, at | θ^Q))^2], where yt = r(st, at) + γ Q(st+1, µ(st+1 | θ^µ) | θ^Q). (1)

The actor is updated by applying the chain rule to the expected return J with respect to its parameters:

∇_{θ^µ} J ≈ E[∇_a Q(s, a | θ^Q)|_{s=st, a=µ(st)} ∇_{θ^µ} µ(s | θ^µ)|_{s=st}]. (2)

DDPG for Mixed Precision Quantization. To apply DRL to mixed precision quantization, previous work, represented by HAQ (Wang et al., 2018), uses DDPG as the agent's learning policy.
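The critic target and loss of Eq. (1) can be made concrete with a toy numeric sketch. The linear "critic" and "actor" below are stand-ins for the actual neural networks, and all parameter values are illustrative assumptions; only the target/loss arithmetic mirrors the equation.

```python
import numpy as np

def critic(s, a, theta_q):
    """Toy linear critic Q(s, a) -- stand-in for the critic network."""
    return theta_q[0] * s + theta_q[1] * a

def actor(s, theta_mu):
    """Toy linear deterministic actor mu(s)."""
    return theta_mu * s

def critic_loss(batch, theta_q, theta_mu, gamma=0.99):
    """L(theta_Q) = E[(y_t - Q(s_t, a_t))^2], with the bootstrapped target
    y_t = r(s_t, a_t) + gamma * Q(s_{t+1}, mu(s_{t+1}))  (Eq. 1)."""
    s, a, r, s_next = batch
    y = r + gamma * critic(s_next, actor(s_next, theta_mu), theta_q)
    td_error = y - critic(s, a, theta_q)
    return np.mean(td_error ** 2)
```

In real DDPG the target uses separate, slowly updated target networks; that detail is omitted here to keep the correspondence with Eq. (1) visible.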
The environment is assumed to be fully observed, so st = xt, where the observation xt is defined as xt = (l, cin, cout, skernel, sstride, sfeat, nparams, idw, iw/a, at−1) for convolution layers and xt = (l, hin, hout, 1, 0, sfeat, nparams, 0, iw/a, at−1) for fully connected layers. Here, l denotes the layer index, cin and cout are the numbers of input and output channels of the convolution layer, skernel and sstride are the kernel size and stride of the convolution layer, hin and hout are the numbers of input and output hidden units of the fully connected layer, nparams is the number of parameters, idw and iw/a are binary indicators for depth-wise convolution and weight/activation, and at−1 is the action given by the agent at the previous step. At time step t−1, the agent gives an action at−1 for layer l−1, leading to an observation xt. Then the agent gives the action at for layer l at time step t, given xt. Following DDPG, the agent updates the actor and critic networks after each episode, which is a full pass over all the layers of the target neural network being quantized. The time step t and layer index l are thus interchangeable in this scenario. These systems use a continuous action space for precision prediction. At each time step t, the continuous action at is mapped to the discrete bit value bk for layer k using bk = round(bmin − 0.5 + ak × (bmax − bmin + 1)). The reward function is computed as R = λ × (accquant − accorigin). 3 AUGMENTED DEEP REINFORCEMENT LEARNING (ADRL) . This section explains our proposal, ADRL. It builds upon the default actor-critic algorithm and is trained with DDPG. In the original actor-critic algorithm, the policy approximator is the actor network, which generates one action and feeds it into the critic network. The essence of ADRL is to augment the policy approximator with a supplementary scheme.
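The continuous-to-discrete action mapping and reward described in the background section above can be sketched directly; the particular values of bmin, bmax, and λ below are illustrative assumptions, not the paper's settings.

```python
def action_to_bits(a, b_min=2, b_max=8):
    """Map a continuous action a in [0, 1] to a discrete bit-width via
    b = round(b_min - 0.5 + a * (b_max - b_min + 1)), clipped to range."""
    b = round(b_min - 0.5 + a * (b_max - b_min + 1))
    return max(b_min, min(b_max, b))

def reward(acc_quant, acc_origin, lam=0.1):
    """R = lambda * (acc_quant - acc_origin): negative when quantization
    hurts accuracy, zero or positive otherwise."""
    return lam * (acc_quant - acc_origin)
```

For example, with b_min=2 and b_max=8, actions near 0 map to 2 bits and actions near 1 map to 8 bits, so the whole bit range is reachable from the unit interval.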
The scheme may take an arbitrary form, constructed with domain knowledge or other mechanisms. It should boost the accuracy of the weak policy approximator, especially when it is not yet well trained, and hence help reduce the variance and accelerate the convergence of the DRL algorithm. 3.1 DESIGN . Components of ADRL. Figure 1 illustrates the general flow of ADRL with post augmentation. The augmented policy approximator consists of two main components: an expanded actor network and a refinement function. The expansion makes the actor network output multiple candidate actions rather than one; the refinement function derives the most promising candidate from those actions and feeds it to the critic network. The two components can be formally described as follows. Action generation. The expanded actor function, µ̂_{θ^µ̂} : S → R^{k×n}, maps a state from the state space R^m to k (k > 1) actions in a continuous action space R^n: Ak = µ̂(s | θ^µ̂), where Ak = [a1^T, · · ·, ak^T]^T. The expansion can be done by modifying the last layer of the actor network (Sec 3.3). The outputs of this actor function serve as the candidate actions for the refinement component. Action refinement. The action refinement function, g : R^{k×n} → R^n, derives a promising action from the candidate actions. A simple form of derivation is selection, that is, a∗ = argmax_{a ∈ Ak} Q(s, a). However, depending on how well the critic network is trained, the critic may not consistently give a good estimate of Q(s, a), especially at the early training stage. The augmented policy may instead use a Q-value indicator Q̃E(a) whose output depends only on the action and the properties of the environment. Hence, g(Ak) = argmax_{a ∈ Ak} Q̃E(a). The choice of Q̃E(a) also depends on the specific task ADRL is trying to solve.
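The refinement step g(Ak) = argmax over the candidates of the Q-value indicator is a one-liner in code. The indicator function passed in is task-specific (for quantization it could, for instance, score the accuracy obtained by quantizing with each candidate); the example indicator used below is purely hypothetical.

```python
import numpy as np

def refine(candidates, q_indicator):
    """Action refinement g: among the k candidate actions, pick the one
    maximizing the Q-value indicator Q~_E, which depends only on the
    action and the environment, not on the (possibly untrained) critic."""
    scores = [q_indicator(a) for a in candidates]
    return candidates[int(np.argmax(scores))]
```

Because the indicator bypasses the critic, refinement stays reliable in the early training stage, when the critic's Q-estimates are still poor.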
Combining the generation function and the refinement function, the augmented policy estimator can be represented as π_{θ^µ̂}(s) = g ◦ µ̂_{θ^µ̂}. Training with DDPG. We train the parameters of the actor and critic networks using DDPG. Although the augmented policy π_{θ^µ̂} consists of an actor network µ̂_{θ^µ̂} and a refinement function g, the training of the full policy follows the policy gradient of µ̂_{θ^µ̂}, because the effects of g are deterministic aspects of the environment E (Dulac-Arnold et al., 2015). The actor and the critic network are trained with variations of formulae (1) and (2):

L(θ^Q) = E[(yt − Q(st, µ̂(st | θ^µ̂) | θ^Q))^2], where yt = r(st, π_{θ^µ̂}(st)) + γ Q(st+1, µ̂(st+1 | θ^µ̂) | θ^Q),

∇_{θ^µ̂} J ≈ E[∇_{Ak} Q(s, Ak | θ^Q)|_{s=st, Ak=µ̂(st)} · ∇_{θ^µ̂} µ̂(s | θ^µ̂)|_{s=st}]. (3)

Algorithm 1 Augmented Policy
1: State s previously received from the environment E
2: Ak = µ̂(s | θ^µ̂) (generate k candidate actions)
3: a = g(Ak) (refine the choice of action with g(Ak) = argmax_{a ∈ Ak} Q̃E(a))
4: Apply a to the environment; receive r, s′
This paper describes an improved way to determine weight quantization bit-widths using reinforcement learning, by injecting model evaluation directly into action selection. Building upon a DRL setup where the action at each timestep selects a bit value for a layer, the method adds a "Q-value indicator" function Q̃ that selects among a set of candidate actions, filtering based on the model performance obtained by quantizing the layer at each level. This effectively forms a hybrid between DRL and greedy search, using a greedy criterion Q̃ to filter proposals made by the DRL agent. Experiments show very good performance, with similar or better quantization levels and accuracy than other DRL-based methods and much faster runtime.
SP:a84141dacf9260f8c5dede0959fd4f58f29a51dd
This paper studies DNN quantization using deep reinforcement learning. It proposes an augmented DRL that introduces a Q-value indicator to refine action selection. The proposed approach has been applied to several image classification baselines and compared with several recent DRL-based quantization approaches, achieving a similar compression rate without accuracy decrease. In addition, compared to previous methods, the learning speed is improved by 4.5-64×.
SP:a84141dacf9260f8c5dede0959fd4f58f29a51dd
Convergent Adaptive Gradient Methods in Decentralized Optimization
1 INTRODUCTION . Distributed training of machine learning models has drawn growing attention in the past few years due to its practical benefits and necessity. Given the evolution of the computing capabilities of CPUs and GPUs, computation time in distributed settings is gradually dominated by communication time in many circumstances (Chilimbi et al., 2014; McMahan et al., 2017). As a result, many recent works have focused on reducing the communication cost of distributed learning (Alistarh et al., 2017; Lin et al., 2018; Wangni et al., 2018; Stich et al., 2018; Wang et al., 2018; Tang et al., 2019). In the traditional parameter (central) server setting, where a parameter server manages communication in the whole network, many effective communication reductions have been proposed based on gradient compression (Aji & Heafield, 2017) and quantization (Chen et al., 2010; Ge et al., 2013; Jegou et al., 2010) techniques. Despite these techniques, the communication cost still usually scales linearly with the number of workers. Due to this limitation, and given the sheer number of decentralized devices, the decentralized training paradigm (Duchi et al., 2011b), where the parameter server is removed and each node communicates only with its neighbors, is drawing attention. It has been shown in Lian et al. (2017) that decentralized training algorithms can outperform parameter-server-based algorithms when the training bottleneck is the communication cost. The decentralized paradigm is also preferred when a central parameter server is not available. In light of recent advances in nonconvex optimization, an effective way to accelerate training is to use adaptive gradient methods such as AdaGrad (Duchi et al., 2011a), Adam (Kingma & Ba, 2015), or AMSGrad (Reddi et al., 2018).
Their popularity is due to their practical benefits in training neural networks, featuring faster convergence and easier parameter tuning compared with Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Despite the large body of work in the distributed optimization literature, few works have considered bringing adaptive gradient methods into distributed training, largely due to the lack of understanding of their convergence behavior. Notably, Reddi et al. (2020) develop the first decentralized Adam method for distributed optimization problems, with a direct application to federated learning. An inner loop computes mini-batch gradients on each node, and a global adaptive step updates the global parameter at each outer iteration. However, in the setting of our paper, nodes can only communicate with their neighbors on a fixed communication graph, while a server/worker communication is required in Reddi et al. (2020). Designing adaptive methods in such settings is highly non-trivial, due both to the already complex update rules and to the interaction between adaptive learning rates and decentralized communication protocols. This paper is an attempt at bridging the gap between both realms in nonconvex optimization. Our contributions are summarized as follows: • We investigate the possibility of using adaptive gradient methods in the decentralized training paradigm, where nodes have only a local view of the whole communication graph. We develop a general technique that converts an adaptive gradient method from a centralized method to its decentralized variant. • Using our proposed technique, we present a new decentralized optimization algorithm, called decentralized AMSGrad, as the decentralized counterpart of AMSGrad.
• We provide a theoretical verification interface, in Theorem 2, for analyzing the behavior of decentralized adaptive gradient methods obtained from our technique. We thus characterize the convergence rate of decentralized AMSGrad, which is, to the best of our knowledge, the first convergent decentralized adaptive gradient method. A novel ingredient of our framework is a mechanism to enforce consensus on the adaptive learning rates at different nodes. We show the importance of this consensus by proving a divergent problem instance for a recently proposed decentralized adaptive gradient method, DADAM (Nazari et al., 2019), a decentralized version of AMSGrad: though DADAM performs consensus on the model parameter, it lacks consensus principles on the adaptive learning rates. After presenting existing related work and important concepts of decentralized adaptive methods in Section 2, we develop in Section 3 our general framework for converting any adaptive gradient algorithm into its decentralized counterpart, along with a rigorous finite-time convergence analysis, concluded by illustrative examples of our framework's behavior in practice. Notations: x_{t,i} denotes variable x at node i and iteration t. ‖·‖_abs denotes the entry-wise L1 norm of a matrix, i.e., ‖A‖_abs = Σ_{i,j} |A_{i,j}|. We introduce notations used throughout the paper: for any t > 0, G_t := [g_{t,1}, g_{t,2}, · · ·, g_{t,N}] (where each g_{t,i} is a column vector), and similarly M_t := [m_{t,1}, · · ·, m_{t,N}], X_t := [x_{t,1}, · · ·, x_{t,N}], U_t := [u_{t,1}, · · ·, u_{t,N}], Ũ_t := [ũ_{t,1}, · · ·, ũ_{t,N}], V_t := [v_{t,1}, · · ·, v_{t,N}], V̂_t := [v̂_{t,1}, · · ·, v̂_{t,N}]. We also define the averages ∇f(X_t) := (1/N) Σ_{i=1}^N ∇f_i(x_{t,i}), X̄_t := (1/N) Σ_{i=1}^N x_{t,i}, Ū_t := (1/N) Σ_{i=1}^N u_{t,i}, and the analogous average of the ũ_{t,i}. 2 DECENTRALIZED ADAPTIVE TRAINING AND DIVERGENCE OF DADAM . 2.1 RELATED WORK .
Decentralized optimization: Traditional decentralized optimization methods include well-known algorithms such as ADMM (Boyd et al., 2011), Dual Averaging (Duchi et al., 2011b), and Distributed Subgradient Descent (Nedic & Ozdaglar, 2009). More recent algorithms include EXTRA (Shi et al., 2015), NEXT (Di Lorenzo & Scutari, 2016), Prox-PDA (Hong et al., 2017), GNSD (Lu et al., 2019), and Choco-SGD (Koloskova et al., 2019). While these algorithms are commonly used in applications other than deep learning, recent algorithmic advances in the machine learning community have shown that decentralized optimization can also be useful for training deep models such as neural networks. Lian et al. (2017) demonstrate that a stochastic version of Decentralized Subgradient Descent can outperform parameter-server-based algorithms when the communication cost is high. Tang et al. (2018) propose the D2 algorithm, improving the convergence rate over Stochastic Subgradient Descent. Assran et al. (2019) propose Stochastic Gradient Push, which is more robust to network failures when training neural networks. The study of decentralized training algorithms in the machine learning community is only in its initial stage. No existing work, to our knowledge, has seriously considered integrating adaptive gradient methods into decentralized learning. One noteworthy work (Nazari et al., 2019) proposes a decentralized version of AMSGrad (Reddi et al., 2018), which is proven to satisfy some non-standard regret bound. Adaptive gradient methods: Adaptive gradient methods have been popular in recent years due to their superior performance in training neural networks. The most commonly used adaptive methods include AdaGrad (Duchi et al., 2011a), Adam (Kingma & Ba, 2015), and their variants.
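As context for the adaptive methods just listed, the single-node AMSGrad update (Reddi et al., 2018), which this paper later decentralizes, can be sketched as follows; the hyperparameter values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def amsgrad_step(x, g, m, v, v_hat, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update: momentum m, second moment v, and a running
    maximum v_hat that keeps the effective learning rate non-increasing."""
    m = beta1 * m + (1 - beta1) * g           # momentum
    v = beta2 * v + (1 - beta2) * g ** 2      # adaptive second moment
    v_hat = np.maximum(v_hat, v)              # the key difference from Adam
    x = x - alpha * m / (np.sqrt(v_hat) + eps)
    return x, m, v, v_hat
```

The `v_hat` maximum is precisely the quantity on which, per this paper, the nodes of a decentralized variant must reach consensus; keeping it per-node is what allows DADAM-style divergence.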
Key features of such methods lie in the use of momentum and adaptive learning rates ( which means that the learning rate changes during the optimization and is anisotropic , i.e. , depends on the dimension ) . The method of reference , called Adam , has been analyzed in Reddi et al . ( 2018 ) , where the authors point out an error in previous convergence analyses . Since then , a variety of papers have been focusing on analyzing the convergence behavior of the numerous existing adaptive gradient methods . Ward et al . ( 2019 ) and Li & Orabona ( 2019 ) derive convergence guarantees for a variant of AdaGrad without coordinate-wise learning rates . Chen et al . ( 2019 ) analyze the convergence behavior of a broad class of algorithms including AMSGrad and AdaGrad . Zou & Shen ( 2018 ) provide a unified convergence analysis for AdaGrad with momentum . Notable recent works on adaptive gradient methods can be found in Agarwal et al . ( 2019 ) ; Luo et al . ( 2019 ) ; Zaheer et al . ( 2018 ) . 2.2 DECENTRALIZED OPTIMIZATION . In distributed optimization ( with N nodes ) , we aim at solving the following problem min_{x∈R^d} (1/N) ∑_{i=1}^N f_i(x) , ( 1 ) where x is the vector of parameters and f_i is only accessible by the ith node . Through the prism of empirical risk minimization procedures , f_i can be viewed as the average loss of the data samples located at node i , for all i ∈ [ N ] . Throughout the paper , we make the following mild assumptions required for analyzing the convergence behavior of the different decentralized optimization algorithms : A1 . For all i ∈ [ N ] , f_i is differentiable and the gradient is L-Lipschitz , i.e. , for all ( x , y ) ∈ R^d × R^d , ‖∇f_i(x) − ∇f_i(y)‖ ≤ L‖x − y‖ . A2 . We assume that , at iteration t , node i accesses a stochastic gradient g_{t,i} . The stochastic gradients and the gradients of f_i have bounded L∞ norms , i.e. , ‖g_{t,i}‖_∞ ≤ G_∞ , ‖∇f_i(x)‖_∞ ≤ G_∞ . A3 .
The gradient estimators are unbiased and each coordinate has bounded variance , i.e. , E[g_{t,i}] = ∇f_i(x_{t,i}) and E[([g_{t,i} − ∇f_i(x_{t,i})]_j)²] ≤ σ² , ∀t , i , j . Assumptions A1 and A3 are standard in the distributed optimization literature . A2 is slightly stronger than the traditional assumption that the estimator has bounded variance , but is commonly used for the analysis of adaptive gradient methods ( Chen et al. , 2019 ; Ward et al. , 2019 ) . Note that the bounded gradient estimator assumption in A2 implies the bounded variance assumption in A3 . In decentralized optimization , the nodes are connected as a graph and each node only communicates with its neighbors . In such a case , one usually constructs an N × N matrix W for information sharing when designing new algorithms . We denote λ_i to be its ith largest eigenvalue and define λ := max(|λ_2| , |λ_N|) . The matrix W cannot be arbitrary ; its required key properties are listed in the following assumption : A4 . The matrix W satisfies : ( I ) ∑_{j=1}^N W_{i,j} = 1 , ∑_{i=1}^N W_{i,j} = 1 , W_{i,j} ≥ 0 , ( II ) λ_1 = 1 , |λ_2| < 1 , |λ_N| < 1 and ( III ) W_{i,j} = 0 if node i and node j are not neighbors . We now present the failure to converge of the current decentralized adaptive method before introducing our proposed framework for general decentralized adaptive gradient methods . 2.3 DIVERGENCE OF DADAM
Algorithm 1 DADAM ( with N nodes )
1 : Input : α , initial points x_{1,i} , v̂_{0,i} = 1 , m_{0,i} = 0 and mixing matrix W
2 : for t = 1 , 2 , · · · , T do
3 : for all i ∈ [ N ] do in parallel
4 : g_{t,i} ← ∇f_i(x_{t,i}) + ξ_{t,i}
5 : m_{t,i} = β1 m_{t−1,i} + ( 1 − β1 ) g_{t,i}
6 : v_{t,i} = β2 v_{t−1,i} + ( 1 − β2 ) g²_{t,i}
7 : v̂_{t,i} = β3 v̂_{t−1,i} + ( 1 − β3 ) max( v̂_{t−1,i} , v_{t,i} )
8 : x_{t+1/2,i} = ∑_{j=1}^N W_{ij} x_{t,j}
9 : x_{t+1,i} = x_{t+1/2,i} − α m_{t,i}/√v̂_{t,i}
10 : end for
11 : end for
Recently , Nazari et al .
( 2019 ) initiated an attempt to bring adaptive gradient methods into decentralized optimization with Decentralized ADAM ( DADAM ) , shown in Algorithm 1 . DADAM is essentially a decentralized version of ADAM , and the key modification is the use of a consensus step on the optimization variable x to transmit information across the network , encouraging its convergence . The matrix W is a doubly stochastic matrix ( which satisfies A4 ) for achieving average consensus of x . Introducing such a mixing matrix is standard for decentralizing an algorithm , as in distributed gradient descent ( Nedic & Ozdaglar , 2009 ; Yuan et al. , 2016 ) . It is proven in Nazari et al . ( 2019 ) that DADAM admits a non-standard regret bound in the online setting . Nevertheless , whether the algorithm can converge to stationary points in standard offline settings such as training neural networks is still unknown . The next theorem shows that DADAM may fail to converge in the offline setting . Theorem 1 . There exists a problem satisfying A1-A4 where DADAM fails to converge to a stationary point with ∇f(X̄_t) = 0 . Proof . Consider a two-node setting with objective function f(x) = (1/2) ∑_{i=1}^2 f_i(x) and f_1(x) = 1[|x| ≤ 1] · 2x² + 1[|x| > 1] · (4|x| − 2) , f_2(x) = 1[|x−1| ≤ 1] · (x−1)² + 1[|x−1| > 1] · (2|x−1| − 1) . We set the mixing matrix W = [0.5 , 0.5 ; 0.5 , 0.5] . The optimal solution is x* = 1/3 . Both f_1 and f_2 are smooth and convex with bounded gradient norms 4 and 2 , respectively . We also have L = 4 ( defined in A1 ) . If we initialize with x_{1,1} = x_{1,2} = −1 and run DADAM with β1 = β2 = β3 = 0 and α ≤ 1 , we will get v̂_{1,1} = 16 and v̂_{1,2} = 4 . Since |g_{t,1}| ≤ 4 , |g_{t,2}| ≤ 2 due to bounded gradients , and ( v̂_{t,1} , v̂_{t,2} ) are non-decreasing , we have v̂_{t,1} = 16 , v̂_{t,2} = 4 , ∀t ≥ 1 . Thus , after t = 1 , DADAM is equivalent to running decentralized gradient descent ( DGD ) ( Yuan et al. , 2016 ) with a re-scaled f_1 and f_2 , i.e .
running DGD on f′(x) = ∑_{i=1}^2 f′_i(x) with f′_1(x) = 0.25 f_1(x) and f′_2(x) = 0.5 f_2(x) , whose unique optimum is x′ = 0.5 . Define x̄_t = (x_{t,1} + x_{t,2})/2 ; then by Th . 2 in Yuan et al . ( 2016 ) , when α < 1/4 we have f′(x̄_t) − f′(x′) = O(1/(αt)) . Since f′ has a unique optimum x′ , the above bound implies that x̄_t converges to x′ = 0.5 , at which the original objective has non-zero gradient : ∇f(0.5) = 0.5 . Theorem 1 shows that , even though DADAM is proven to satisfy some regret bounds ( Nazari et al. , 2019 ) , it can fail to converge to stationary points in the nonconvex offline setting ( common when training neural networks ) . We conjecture that this inconsistency in the convergence behavior of DADAM is due to the definition of the regret in Nazari et al . ( 2019 ) . The next section presents decentralized adaptive gradient methods that are guaranteed to converge to stationary points under the stated assumptions , and provides a finite-time characterization of that convergence , independent of the initialization .
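The counterexample above is easy to check numerically . The sketch below ( our own illustrative code , not from the paper ) runs DADAM with β1 = β2 = β3 = 0 on the two-node instance ; the node average settles at x′ = 0.5 rather than the true stationary point x* = 1/3 :

```python
import numpy as np

def f1_grad(x):
    # f1(x) = 2x^2 on |x| <= 1, 4|x| - 2 outside.
    return 4.0 * x if abs(x) <= 1 else 4.0 * np.sign(x)

def f2_grad(x):
    # f2(x) = (x-1)^2 on |x-1| <= 1, 2|x-1| - 1 outside.
    return 2.0 * (x - 1.0) if abs(x - 1.0) <= 1 else 2.0 * np.sign(x - 1.0)

def run_dadam(alpha=0.1, T=2000):
    W = np.array([[0.5, 0.5], [0.5, 0.5]])   # fully connected two-node mixing
    x = np.array([-1.0, -1.0])               # x_{1,1} = x_{1,2} = -1
    v_hat = np.ones(2)                       # v-hat_{0,i} = 1
    for _ in range(T):
        g = np.array([f1_grad(x[0]), f2_grad(x[1])])
        # With beta1 = beta2 = beta3 = 0: m = g, v = g^2, v-hat = max(v-hat, v),
        # which stays at (16, 4) forever after t = 1.
        v_hat = np.maximum(v_hat, g ** 2)
        x = W @ x - alpha * g / np.sqrt(v_hat)
    return x.mean()

x_bar = run_dadam()   # converges to 0.5, while the stationary point is x* = 1/3
```

Because the two nodes freeze different learning-rate scalings ( 1/4 vs. 1/2 ) , the fixed point of the averaged dynamics is exactly x′ = 0.5 for any small α , matching the proof .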
The paper introduces a decentralized framework for adaptive momentum-based gradient descent optimizers, such as ADAM. The proposed method is novel and is among the first works to consider a decentralized communication graph without a master node. The authors discover the divergent behavior of the recent DADAM method (Nazari et al., 2019) and propose a fix by adding a similar consensus step for the adaptive learning rates of the agents. The mathematical derivation seems to be correct to the best of my knowledge. Finally, the authors test their method on a simple CNN and show its superiority over DADAM in achieving performance close to that of the centralized method.
SP:63bd51b9796b118e53bf1bff71c405f61f210e9f
Convergent Adaptive Gradient Methods in Decentralized Optimization
1 INTRODUCTION . Distributed training of machine learning models has been drawing growing attention in the past few years due to its practical benefits and necessity . Given the evolution of computing capabilities of CPUs and GPUs , computation time in distributed settings is gradually dominated by the communication time in many circumstances ( Chilimbi et al. , 2014 ; McMahan et al. , 2017 ) . As a result , a large number of recent works have been focusing on reducing communication cost for distributed learning ( Alistarh et al. , 2017 ; Lin et al. , 2018 ; Wangni et al. , 2018 ; Stich et al. , 2018 ; Wang et al. , 2018 ; Tang et al. , 2019 ) . In the traditional parameter ( central ) server setting , where a parameter server is employed to manage communication in the whole network , many effective communication reductions have been proposed based on gradient compression ( Aji & Heafield , 2017 ) and quantization ( Chen et al. , 2010 ; Ge et al. , 2013 ; Jegou et al. , 2010 ) techniques . Despite these communication reduction techniques , the cost still usually scales linearly with the number of workers . Due to this limitation and the sheer number of decentralized devices , the decentralized training paradigm ( Duchi et al. , 2011b ) , where the parameter server is removed and each node only communicates with its neighbors , is drawing attention . It has been shown in Lian et al . ( 2017 ) that decentralized training algorithms can outperform parameter server-based algorithms when the training bottleneck is the communication cost . The decentralized paradigm is also preferred when a central parameter server is not available . In light of recent advances in nonconvex optimization , an effective way to accelerate training is by using adaptive gradient methods like AdaGrad ( Duchi et al. , 2011a ) , Adam ( Kingma & Ba , 2015 ) or AMSGrad ( Reddi et al. , 2018 ) .
Their popularity is due to their practical benefits in training neural networks , featuring faster convergence and ease of parameter tuning compared with Stochastic Gradient Descent ( SGD ) ( Robbins & Monro , 1951 ) . Despite a large number of studies within the distributed optimization literature , few works have considered bringing adaptive gradient methods into distributed training , largely due to the lack of understanding of their convergence behaviors . Notably , Reddi et al . ( 2020 ) develop the first decentralized ADAM method for distributed optimization problems with a direct application to federated learning . An inner loop is employed to compute mini-batch gradients on each node and a global adaptive step is applied to update the global parameter at each outer iteration . Yet , in the settings of our paper , nodes can only communicate with their neighbors on a fixed communication graph , while server/worker communication is required in Reddi et al . ( 2020 ) . Designing adaptive methods in such settings is highly non-trivial due to the already complex update rules and to the interaction between the effect of using adaptive learning rates and the decentralized communication protocols . This paper is an attempt at bridging the gap between both realms in nonconvex optimization . Our contributions are summarized as follows : • In this paper , we investigate the possibility of using adaptive gradient methods in the decentralized training paradigm , where nodes have only a local view of the whole communication graph . We develop a general technique that converts an adaptive gradient method from a centralized method to its decentralized variant . • By using our proposed technique , we present a new decentralized optimization algorithm , called decentralized AMSGrad , as the decentralized counterpart of AMSGrad .
• We provide a theoretical verification interface , in Theorem 2 , for analyzing the behavior of decentralized adaptive gradient methods obtained as a result of our technique . Thus , we characterize the convergence rate of decentralized AMSGrad , which is , to the best of our knowledge , the first convergent decentralized adaptive gradient method . A novel technique in our framework is a mechanism to enforce a consensus on adaptive learning rates at different nodes . We show the importance of consensus on adaptive learning rates by proving a divergent problem instance for a recently proposed decentralized adaptive gradient method , namely DADAM ( Nazari et al. , 2019 ) , a decentralized version of AMSGrad . Though consensus is performed on the model parameter , DADAM lacks consensus principles on adaptive learning rates . After having presented existing related work and important concepts of decentralized adaptive methods in Section 2 , we develop our general framework for converting any adaptive gradient algorithm into its decentralized counterpart , along with a rigorous finite-time convergence analysis , in Section 3 , concluded by some illustrative examples of our framework 's behavior in practice . Notations : x_{t,i} denotes variable x at node i and iteration t. ‖·‖_abs denotes the entry-wise L1 norm of a matrix , i.e. , ‖A‖_abs = ∑_{i,j} |A_{i,j}| . We introduce important notations used throughout the paper : for any t > 0 , G_t := [g_{t,1} , g_{t,2} , · · · , g_{t,N}] ( where g_{t,i} is a column vector ) , M_t := [m_{t,1} , · · · , m_{t,N}] , X_t := [x_{t,1} , · · · , x_{t,N}] , ∇f(X_t) := (1/N) ∑_{i=1}^N ∇f_i(x_{t,i}) , U_t := [u_{t,1} , · · · , u_{t,N}] , Ũ_t := [ũ_{t,1} , · · · , ũ_{t,N}] , V_t := [v_{t,1} , · · · , v_{t,N}] , V̂_t := [v̂_{t,1} , · · · , v̂_{t,N}] , X̄_t := (1/N) ∑_{i=1}^N x_{t,i} , Ū_t := (1/N) ∑_{i=1}^N u_{t,i} and Ũ̄_t := (1/N) ∑_{i=1}^N ũ_{t,i} . 2 DECENTRALIZED ADAPTIVE TRAINING AND DIVERGENCE OF DADAM . 2.1 RELATED WORK .
Decentralized optimization : Traditional decentralized optimization methods include well-known algorithms such as ADMM ( Boyd et al. , 2011 ) , Dual Averaging ( Duchi et al. , 2011b ) , and Distributed Subgradient Descent ( Nedic & Ozdaglar , 2009 ) . More recent algorithms include Extra ( Shi et al. , 2015 ) , Next ( Di Lorenzo & Scutari , 2016 ) , Prox-PDA ( Hong et al. , 2017 ) , GNSD ( Lu et al. , 2019 ) , and Choco-SGD ( Koloskova et al. , 2019 ) . While these algorithms are commonly used in applications other than deep learning , recent algorithmic advances in the machine learning community have shown that decentralized optimization can also be useful for training deep models such as neural networks . Lian et al . ( 2017 ) demonstrate that a stochastic version of Decentralized Subgradient Descent can outperform parameter server-based algorithms when the communication cost is high . Tang et al . ( 2018 ) propose the D2 algorithm , improving the convergence rate over Stochastic Subgradient Descent . Assran et al . ( 2019 ) propose Stochastic Gradient Push , which is more robust to network failures when training neural networks . The study of decentralized training algorithms in the machine learning community is only at its initial stage . No existing work , to our knowledge , has seriously considered integrating adaptive gradient methods in the setting of decentralized learning . One noteworthy work ( Nazari et al. , 2019 ) proposes a decentralized version of AMSGrad ( Reddi et al. , 2018 ) and proves that it satisfies a non-standard regret bound . Adaptive gradient methods : Adaptive gradient methods have been popular in recent years due to their superior performance in training neural networks . Most commonly used adaptive methods include AdaGrad ( Duchi et al. , 2011a ) or Adam ( Kingma & Ba , 2015 ) and their variants .
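The decentralized algorithms surveyed above all share the same skeleton that the adaptive methods of this paper build on : a consensus averaging step over the mixing matrix , followed by a local gradient step . A minimal sketch ( our own illustrative code ; function and variable names are not from the cited papers ) :

```python
import numpy as np

def decentralized_sgd_step(X, grads, W, alpha):
    """One round of decentralized stochastic (sub)gradient descent.

    X     : d x N array, column i holds node i's current parameters x_{t,i}.
    grads : d x N array, column i holds node i's stochastic gradient g_{t,i}.
    W     : N x N doubly stochastic mixing matrix.
    alpha : step size.
    """
    # Consensus step: node i averages its neighbours, sum_j W[i, j] * x_{t,j}.
    X_mixed = X @ W.T
    # Local step: each node descends along its own stochastic gradient.
    return X_mixed - alpha * grads
```

With a fully connected two-node network , W = [[0.5, 0.5], [0.5, 0.5]] , one step moves both nodes to their common average minus their local gradient steps .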
Key features of such methods lie in the use of momentum and adaptive learning rates ( which means that the learning rate changes during the optimization and is anisotropic , i.e. , depends on the dimension ) . The method of reference , called Adam , has been analyzed in Reddi et al . ( 2018 ) , where the authors point out an error in previous convergence analyses . Since then , a variety of papers have been focusing on analyzing the convergence behavior of the numerous existing adaptive gradient methods . Ward et al . ( 2019 ) and Li & Orabona ( 2019 ) derive convergence guarantees for a variant of AdaGrad without coordinate-wise learning rates . Chen et al . ( 2019 ) analyze the convergence behavior of a broad class of algorithms including AMSGrad and AdaGrad . Zou & Shen ( 2018 ) provide a unified convergence analysis for AdaGrad with momentum . Notable recent works on adaptive gradient methods can be found in Agarwal et al . ( 2019 ) ; Luo et al . ( 2019 ) ; Zaheer et al . ( 2018 ) . 2.2 DECENTRALIZED OPTIMIZATION . In distributed optimization ( with N nodes ) , we aim at solving the following problem min_{x∈R^d} (1/N) ∑_{i=1}^N f_i(x) , ( 1 ) where x is the vector of parameters and f_i is only accessible by the ith node . Through the prism of empirical risk minimization procedures , f_i can be viewed as the average loss of the data samples located at node i , for all i ∈ [ N ] . Throughout the paper , we make the following mild assumptions required for analyzing the convergence behavior of the different decentralized optimization algorithms : A1 . For all i ∈ [ N ] , f_i is differentiable and the gradient is L-Lipschitz , i.e. , for all ( x , y ) ∈ R^d × R^d , ‖∇f_i(x) − ∇f_i(y)‖ ≤ L‖x − y‖ . A2 . We assume that , at iteration t , node i accesses a stochastic gradient g_{t,i} . The stochastic gradients and the gradients of f_i have bounded L∞ norms , i.e. , ‖g_{t,i}‖_∞ ≤ G_∞ , ‖∇f_i(x)‖_∞ ≤ G_∞ . A3 .
The gradient estimators are unbiased and each coordinate has bounded variance , i.e. , E[g_{t,i}] = ∇f_i(x_{t,i}) and E[([g_{t,i} − ∇f_i(x_{t,i})]_j)²] ≤ σ² , ∀t , i , j . Assumptions A1 and A3 are standard in the distributed optimization literature . A2 is slightly stronger than the traditional assumption that the estimator has bounded variance , but is commonly used for the analysis of adaptive gradient methods ( Chen et al. , 2019 ; Ward et al. , 2019 ) . Note that the bounded gradient estimator assumption in A2 implies the bounded variance assumption in A3 . In decentralized optimization , the nodes are connected as a graph and each node only communicates with its neighbors . In such a case , one usually constructs an N × N matrix W for information sharing when designing new algorithms . We denote λ_i to be its ith largest eigenvalue and define λ := max(|λ_2| , |λ_N|) . The matrix W cannot be arbitrary ; its required key properties are listed in the following assumption : A4 . The matrix W satisfies : ( I ) ∑_{j=1}^N W_{i,j} = 1 , ∑_{i=1}^N W_{i,j} = 1 , W_{i,j} ≥ 0 , ( II ) λ_1 = 1 , |λ_2| < 1 , |λ_N| < 1 and ( III ) W_{i,j} = 0 if node i and node j are not neighbors . We now present the failure to converge of the current decentralized adaptive method before introducing our proposed framework for general decentralized adaptive gradient methods . 2.3 DIVERGENCE OF DADAM
Algorithm 1 DADAM ( with N nodes )
1 : Input : α , initial points x_{1,i} , v̂_{0,i} = 1 , m_{0,i} = 0 and mixing matrix W
2 : for t = 1 , 2 , · · · , T do
3 : for all i ∈ [ N ] do in parallel
4 : g_{t,i} ← ∇f_i(x_{t,i}) + ξ_{t,i}
5 : m_{t,i} = β1 m_{t−1,i} + ( 1 − β1 ) g_{t,i}
6 : v_{t,i} = β2 v_{t−1,i} + ( 1 − β2 ) g²_{t,i}
7 : v̂_{t,i} = β3 v̂_{t−1,i} + ( 1 − β3 ) max( v̂_{t−1,i} , v_{t,i} )
8 : x_{t+1/2,i} = ∑_{j=1}^N W_{ij} x_{t,j}
9 : x_{t+1,i} = x_{t+1/2,i} − α m_{t,i}/√v̂_{t,i}
10 : end for
11 : end for
Recently , Nazari et al .
( 2019 ) initiated an attempt to bring adaptive gradient methods into decentralized optimization with Decentralized ADAM ( DADAM ) , shown in Algorithm 1 . DADAM is essentially a decentralized version of ADAM , and the key modification is the use of a consensus step on the optimization variable x to transmit information across the network , encouraging its convergence . The matrix W is a doubly stochastic matrix ( which satisfies A4 ) for achieving average consensus of x . Introducing such a mixing matrix is standard for decentralizing an algorithm , as in distributed gradient descent ( Nedic & Ozdaglar , 2009 ; Yuan et al. , 2016 ) . It is proven in Nazari et al . ( 2019 ) that DADAM admits a non-standard regret bound in the online setting . Nevertheless , whether the algorithm can converge to stationary points in standard offline settings such as training neural networks is still unknown . The next theorem shows that DADAM may fail to converge in the offline setting . Theorem 1 . There exists a problem satisfying A1-A4 where DADAM fails to converge to a stationary point with ∇f(X̄_t) = 0 . Proof . Consider a two-node setting with objective function f(x) = (1/2) ∑_{i=1}^2 f_i(x) and f_1(x) = 1[|x| ≤ 1] · 2x² + 1[|x| > 1] · (4|x| − 2) , f_2(x) = 1[|x−1| ≤ 1] · (x−1)² + 1[|x−1| > 1] · (2|x−1| − 1) . We set the mixing matrix W = [0.5 , 0.5 ; 0.5 , 0.5] . The optimal solution is x* = 1/3 . Both f_1 and f_2 are smooth and convex with bounded gradient norms 4 and 2 , respectively . We also have L = 4 ( defined in A1 ) . If we initialize with x_{1,1} = x_{1,2} = −1 and run DADAM with β1 = β2 = β3 = 0 and α ≤ 1 , we will get v̂_{1,1} = 16 and v̂_{1,2} = 4 . Since |g_{t,1}| ≤ 4 , |g_{t,2}| ≤ 2 due to bounded gradients , and ( v̂_{t,1} , v̂_{t,2} ) are non-decreasing , we have v̂_{t,1} = 16 , v̂_{t,2} = 4 , ∀t ≥ 1 . Thus , after t = 1 , DADAM is equivalent to running decentralized gradient descent ( DGD ) ( Yuan et al. , 2016 ) with a re-scaled f_1 and f_2 , i.e .
running DGD on f′(x) = ∑_{i=1}^2 f′_i(x) with f′_1(x) = 0.25 f_1(x) and f′_2(x) = 0.5 f_2(x) , whose unique optimum is x′ = 0.5 . Define x̄_t = (x_{t,1} + x_{t,2})/2 ; then by Th . 2 in Yuan et al . ( 2016 ) , when α < 1/4 we have f′(x̄_t) − f′(x′) = O(1/(αt)) . Since f′ has a unique optimum x′ , the above bound implies that x̄_t converges to x′ = 0.5 , at which the original objective has non-zero gradient : ∇f(0.5) = 0.5 . Theorem 1 shows that , even though DADAM is proven to satisfy some regret bounds ( Nazari et al. , 2019 ) , it can fail to converge to stationary points in the nonconvex offline setting ( common when training neural networks ) . We conjecture that this inconsistency in the convergence behavior of DADAM is due to the definition of the regret in Nazari et al . ( 2019 ) . The next section presents decentralized adaptive gradient methods that are guaranteed to converge to stationary points under the stated assumptions , and provides a finite-time characterization of that convergence , independent of the initialization .
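Assumption A4 is mechanical to verify for a candidate mixing matrix . A small checker ( our own illustrative sketch , not from the paper ) that validates double stochasticity , the graph support , and the spectral condition , returning λ = max(|λ_2| , |λ_N|) :

```python
import numpy as np

def check_mixing_matrix(W, adjacency):
    """Check assumption A4 and return lambda = max(|lambda_2|, |lambda_N|).

    W         : N x N candidate mixing matrix.
    adjacency : N x N boolean matrix; True where i == j or i, j are neighbours.
    """
    assert np.allclose(W.sum(axis=0), 1.0) and np.allclose(W.sum(axis=1), 1.0)
    assert (W >= 0).all()                      # (I) doubly stochastic
    assert np.all(W[~adjacency] == 0)          # (III) respects the graph
    moduli = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    assert np.isclose(moduli[0], 1.0) and moduli[1] < 1.0   # (II)
    return moduli[1]

# Example: 4-node ring, self-weight 1/2 and 1/4 to each neighbour.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
adj = np.array([[abs(i - j) % 4 in (0, 1, 3) for j in range(4)] for i in range(4)])
lam = check_mixing_matrix(W, adj)   # 0.5 for this ring
```

A smaller λ means faster information mixing ; for the 4-node ring above the spectral gap 1 − λ = 0.5 .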
In this paper, the authors attempt to use adaptive gradient methods in the decentralized training paradigm. They develop a general framework to convert an adaptive gradient method from a centralized one to its decentralized variant. Specifically, they propose a decentralized AMSGrad algorithm. They also point out a divergence issue in an existing method and investigate the conditions that ensure convergence. Finally, they conduct some experiments to verify the performance of their algorithm.
SP:63bd51b9796b118e53bf1bff71c405f61f210e9f
Contextual Image Parsing via Panoptic Segment Sorting
Visual context is versatile and hard to describe or label precisely . We aim to leverage the densely labeled task , image parsing , a.k.a. panoptic segmentation , to learn a model that encodes and discovers object-centric context . Most existing approaches based on deep learning tackle image parsing via fusion of pixel-wise classification and instance masks from two sub-networks . Such approaches isolate things from stuff and fuse the semantic and instance masks at a later stage . To encode object-centric context inherently , we propose a metric learning framework , Panoptic Segment Sorting , that is directly trained with stuff and things jointly . Our key insight is to make the panoptic embeddings separate every instance so that the model automatically learns to leverage visual context , as many instances across different images appear similar . We show that the context of our model 's retrieved instances is relatively more consistent by 13.7 % , further demonstrating its ability to discover novel context in an unsupervised manner . Our overall framework also achieves competitive performance across standard panoptic segmentation metrics amongst the state-of-the-art methods on two large datasets , Cityscapes and PASCAL VOC . These promising results suggest that pixel-wise embeddings can not only inject new understanding into panoptic segmentation but also potentially serve other tasks such as modeling instance relationships . 1 INTRODUCTION . Visual context is versatile and hard to describe or label precisely , yet it is critical for humans ( Medin & Schaffer , 1978 ) to recognize objects quickly . More importantly , objects in different contexts carry different meanings . For example , to a driver , pedestrians walking in crosswalks should receive more attention than those on sidewalks .
However , it is almost impossible to categorize objects with different contexts , as the change can be subtle yet dramatic : A pedestrian is more likely in danger when walking in front of a car than beside one , even though both scenes contain a person and a car . We are thus motivated to propose a model that automatically encodes and discovers object visual context by leveraging a densely labeled task , panoptic segmentation . Panoptic segmentation ( Kirillov et al. , 2019b ) , a.k.a. image parsing ( Tu et al. , 2005 ) , is to segment an image into its constituent visual patterns with both semantic and instance labels . The major challenge lies in delineating different instances while associating them with semantic categories . For example , one has to segment two side-by-side cars apart while still being able to classify them as the same category . Most existing approaches ( Kirillov et al. , 2019a ; Xiong et al. , 2019 ; Yang et al. , 2019 ; Cheng et al. , 2020 ; Li et al. , 2020 ) tackle these two aspects via two sub-networks , instance and semantic segmentation branches . The advantage of such approaches is that each branch can cater to one aspect and achieve high performance . Additional modules for integrating things and stuff are needed to resolve the disagreements between the two branches . Yet for object visual context , things and stuff are two integral parts . Hence , we aim to propose a framework that unifies these two seemingly competing aspects and thus encodes visual context inherently . Our framework is inspired by the perceptual organization view ( Biederman , 1987 ) : Humans perceive a scene by breaking it down into visual groups and structures ; repeated structures are then associated for cognitive recognition . Our key insight is to separate everything first and group visually similar components later . The grouping takes place within an image and across images .
Within an image , visually similar segments are merged to form instances ; across images , visually familiar segments are associated to create semantics , as illustrated in Fig . 1 . We carry out this idea by building an end-to-end trained pixel-wise embedding framework . Each pixel in an image is mapped via a CNN to a feature in latent space , and nearby features indicate pixels belonging to the same instance . This framework is therefore a non-parametric model at the segment and instance levels , as its complexity scales with the number of segments and instances , i.e. , exemplars . Particularly , by forcing all the instances to separate , the model has to utilize all the possible visual and semantic information . The model thus learns to separate instances by not only their appearances but also their surroundings , or visual context . A major difference between our model and others is the metric learning perspective : Our model trains with a contrastive loss that captures pixel-to-segment relationships while others train with pixel-wise classification that predicts the category or instance directly . As a result , the learned panoptic embeddings can discover instances under similar context , as in the bottom row of Figure 1 . Specifically , we adapt the Segment Sorting approach ( Hwang et al. , 2019b ) to panoptic segmentation by sorting segments according to both their semantic and instance labels , hence dubbed Panoptic Segment Sorting ( PSS ) . Such trained pixel-wise embeddings thus encode both semantic and instance information . We then predict each segment 's semantic label by simply mapping and classifying its prototype feature with a softmax classifier . We also propose a corresponding clustering algorithm to merge segments into instances with a nearest neighbor criterion ( Sarfraz et al. , 2019 ) .
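The prototype-based semantic prediction described above reduces to averaging pixel embeddings per segment and classifying the result . A minimal sketch ( our own illustrative code ; the classifier weights and array shapes are placeholders , not the paper 's implementation ) :

```python
import numpy as np

def segment_prototypes(embeddings, segment_ids):
    """Average pixel-wise embeddings into one prototype per segment.

    embeddings  : (P, d) array, one d-dim embedding per pixel.
    segment_ids : (P,) integer segment index per pixel.
    Returns an (S, d) array of prototypes, ordered by np.unique(segment_ids).
    """
    segs = np.unique(segment_ids)
    return np.stack([embeddings[segment_ids == s].mean(axis=0) for s in segs])

def classify_prototypes(protos, weights):
    """Softmax-classify each segment prototype with a linear classifier."""
    logits = protos @ weights                       # (S, C)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)
```

Classifying one prototype per segment , rather than every pixel , is what makes the model non-parametric at the segment level : its cost scales with the number of exemplars , not the image resolution .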
To alleviate the problem of instances with various scales , we further equip our framework with hybrid scale exemplars during training and dynamic partitioning during inference . Finally , we facilitate the merging process with a seeding branch that predicts the center of each instance . As a result , we demonstrate that the contexts of instances retrieved by our panoptic embeddings are relatively more consistent by 13.7 % while achieving competitive performance amongst the state-of-the-art on two datasets , Cityscapes ( Cordts et al. , 2016 ) and PASCAL VOC ( Everingham et al. ) . These promising results suggest that Panoptic Segment Sorting or pixel-wise embeddings can not only inject new understanding into panoptic segmentation but potentially serve as a foundation for other tasks such as discovering novel contexts or modeling instance relationships . 2 RELATED WORK . Image parsing and panoptic segmentation . The task of image parsing is first introduced in Tu et al . ( 2005 ) , where they formulate the solution in a Bayesian framework and construct a parsing graph as output . Since then , a lot of work has attempted to solve holistic scene understanding ( Zhu & Mumford ( 2007 ) ; Malisiewicz & Efros ( 2008 ) ; Tighe & Lazebnik ( 2013 ) ; Rabinovich et al . ( 2007 ) ; Yao et al . ( 2012 ) ) . Recently , Kirillov et al . ( 2019b ) reintroduce image parsing in the context of deep learning with large-scale datasets and a new evaluation metric , renaming the task panoptic segmentation so as to unify the well-developed semantic and instance segmentation . Many research efforts ( Li et al. , 2018b ; Kirillov et al. , 2019a ; Xiong et al. , 2019 ; Porzi et al. , 2019 ; Yang et al. , 2019 ; Liu et al. , 2019 ; Li et al. , 2019 ; 2018a ; Gao et al. , 2019 ; Chen et al. , 2020 ; Wu et al. , 2020 ; Wang et al. , 2020 ; Li et al. , 2020 ) have followed quickly .
The common approaches embrace the concept of unifying instance and semantic segmentation by integrating the time-tested object proposal and segmentation framework popularized by Mask R-CNN ( He et al. , 2017 ) . Instance segmentation . This task is generally approached by two camps of solutions : top-down or bottom-up . The top-down approaches ( Dai et al. , 2016 ; Li et al. , 2017 ; Dai et al. , 2017 ; He et al. , 2017 ; Chen et al. , 2018a ; Liu et al. , 2018a ) adopt a two-stage framework where the bounding boxes are proposed by a detection network ( Ren et al. , 2015 ) and the segmentation masks are produced by an add-on head . The bottom-up approaches ( Carreira & Sminchisescu , 2011 ; Arbeláez et al. , 2014 ; Pinheiro et al. , 2015 ; 2016 ; Bai & Urtasun , 2017 ; Liu et al. , 2017a ; Kirillov et al. , 2017 ; Newell et al. , 2017 ; Fathi et al. , 2017 ; Kendall et al. , 2018 ; Liu et al. , 2018b ; Papandreou et al. , 2018 ; Zhou et al. , 2019 ) predict and encode pair-wise relationships in various forms and segment the instances accordingly . Instance context . Instance contexts and relationships are explored mainly to enhance detection performance . Earlier work ( Malisiewicz & Efros , 2009 ) models the appearances and 2D spatial context as a graph . Recently , researchers integrate graphs ( Chen et al. , 2018c ) or spatial memory ( Chen & Gupta , 2017 ) into the deep learning framework . The distinction of our work is that our model does not explicitly model contexts yet is able to discover novel contexts automatically . Semantic segmentation . Current state-of-the-art semantic segmentation approaches develop from fully convolutional networks ( Long et al. , 2015 ; Chen et al. , 2016 ) , with various innovations . Incorporating contextual information ( Ronneberger et al. , 2015 ; Yu & Koltun , 2016 ; Xie et al. , 2016 ; Zhao et al. , 2017 ; Chen et al. , 2017 ; 2018b ) , and encoding pair-wise relationships ( Zheng et al. , 2015 ; Bertasius et al.
, 2016 ; Liu et al. , 2017b ; Bertasius et al. , 2017 ; Maire et al. , 2016 ; Mostajabi et al. , 2018 ; Kong & Fowlkes , 2018 ; Ke et al. , 2018 ; Hwang et al. , 2019a ; b ) are the two major research lines . Non-parametric segmentation . Prior to deep learning 's emergence , non-parametric models ( Russell et al. , 2009 ; Tighe & Lazebnik , 2010 ; Liu et al. , 2011 ) usually use hand-crafted features with statistical models or graphical models to segment images with pixel-wise labels . Deep metric learning methods ( Fathi et al. , 2017 ; Neven et al. , 2019 ) for instance segmentation emphasize simplicity and fast computation . More recently , inspired by non-parametric models ( Wu et al. , 2018b ; a ) for image recognition , SegSort ( Hwang et al. , 2019b ) , upon which our work is built , captures pixel-to-segment relationships via pixel-wise embeddings , proposing the first deep non-parametric semantic segmentation in both supervised and unsupervised settings . 3 METHOD . Our end-to-end framework consists of a major SegSort branch and a seeding branch , both of which share one backbone network that generates multi-scale pixel-wise features . The SegSort branch outputs pixel-wise panoptic embeddings , which encode both semantic and instance information and are thus used to discover instance-centric context . The over-segmentations induced by the embeddings are then merged into instances and the segments are classified by a softmax classifier . The seeding branch predicts the centers of instances , which guide the merging process to reduce false positives . The overall framework is illustrated in Figure 2 . This section is organized as follows . We first briefly review the Segment Sorting framework for semantic segmentation in Sec . 3.1 . We then describe how to extend it for panoptic segmentation in Sec . 3.2 . In Sec . 3.3 , we further develop a dynamic partitioning mechanism to alleviate the problem of varying scales of instances .
Finally , we briefly describe the seeding branch in Sec . 3.4 , which helps decide the ownership of boundaries .
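The pixel-to-segment relationship captured by SegSort-style pixel-wise embeddings can be illustrated with a minimal NumPy sketch (our own simplification, not the authors' implementation): each segment is summarized by the normalized mean of its pixel embeddings, and a softmax contrastive loss encourages every pixel to be closest to its own segment's prototype. The function names, the temperature value, and the toy shapes are all assumptions for illustration.

```python
import numpy as np

def segment_prototypes(embeddings, seg_ids):
    """Mean-pool pixel embeddings within each segment, then L2-normalize."""
    protos = {}
    for sid in np.unique(seg_ids):
        p = embeddings[seg_ids == sid].mean(axis=0)
        protos[sid] = p / np.linalg.norm(p)
    return protos

def pixel_to_segment_loss(embeddings, seg_ids, temperature=0.1):
    """Softmax contrastive loss: each pixel should be nearest to the
    prototype of its own segment among all segment prototypes."""
    protos = segment_prototypes(embeddings, seg_ids)
    sids = sorted(protos)
    P = np.stack([protos[s] for s in sids])                 # (num_segments, dim)
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    logits = X @ P.T / temperature                          # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    target = np.array([sids.index(s) for s in seg_ids])
    return -np.log(probs[np.arange(len(seg_ids)), target] + 1e-12).mean()
```

The loss is low when pixels of the same segment cluster tightly around their prototype and segments stay apart, which is the behaviour the panoptic embeddings rely on.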
The paper presents a pixel-wise embedding strategy for panoptic segmentation, which aims to learn a pixel representation that encodes both semantic and instance information. To this end, the proposed method builds on top of the Segment Sorting approach and extends its contrastive loss to the instance level by utilizing panoptic supervision. To predict instance segmentation, the paper also designs a merging process to cluster the pixels into instances, which further employs an object center prediction module for localization and a dynamic partition strategy to cope with scale variation. This method is evaluated on two panoptic segmentation benchmarks, Cityscapes and PASCAL VOC 2012.
Contextual Image Parsing via Panoptic Segment Sorting
Visual context is versatile and hard to describe or label precisely . We aim to leverage the densely labeled task , image parsing , a.k.a. panoptic segmentation , to learn a model that encodes and discovers object-centric context . Most existing approaches based on deep learning tackle image parsing via fusion of pixel-wise classification and instance masks from two sub-networks . Such approaches isolate things from stuff and fuse the semantic and instance masks in the later stage . To encode object-centric context inherently , we propose a metric learning framework , Panoptic Segment Sorting , that is directly trained with stuff and things jointly . Our key insight is to make the panoptic embeddings separate every instance so that the model automatically learns to leverage visual context as many instances across different images appear similar . We show that the context of our model ’ s retrieved instances is relatively more consistent by 13.7 % , further demonstrating its ability to discover novel contexts without supervision . Our overall framework also achieves competitive performance across standard panoptic segmentation metrics amongst the state-of-the-art methods on two large datasets , Cityscapes and PASCAL VOC . These promising results suggest that pixel-wise embeddings can not only inject new understanding into panoptic segmentation but also potentially serve other tasks such as modeling instance relationships . 1 INTRODUCTION . Visual context is versatile and hard to describe or label precisely , yet it is critical for humans ( Medin & Schaffer , 1978 ) to recognize objects quickly . More importantly , objects in different contexts carry different meanings . For example , to a driver , pedestrians walking in crosswalks should receive more attention than those on sidewalks .
However , it is almost impossible to categorize objects with different contexts as the change can be subtle yet dramatic : A pedestrian is more likely to be in danger if walking in front of a car than by a car , where both a person and a car appear together . We are thus motivated to propose a model that automatically encodes and discovers object visual context by leveraging a densely labeled task , panoptic segmentation . Panoptic segmentation ( Kirillov et al. , 2019b ) , a.k.a. , image parsing ( Tu et al. , 2005 ) , is the task of segmenting an image into its constituent visual patterns with both semantic and instance labels . The major challenge lies in delineating different instances while associating them with semantic categories . For example , one has to segment two side-by-side cars apart while still being able to classify them as the same category . Most existing approaches ( Kirillov et al. , 2019a ; Xiong et al. , 2019 ; Yang et al. , 2019 ; Cheng et al. , 2020 ; Li et al. , 2020 ) tackle these two aspects via two sub-networks , instance and semantic segmentation branches . The advantage of such approaches is that each branch can cater to one aspect and achieve high performance . Additional modules for integrating things and stuff are needed to resolve the disagreements between the two branches . Yet for object visual context , things and stuff are two integral parts . Hence , we aim to propose a framework that unifies these two seemingly competing aspects and thus encodes visual context inherently . Our framework is inspired by the perceptual organization view ( Biederman , 1987 ) : Humans perceive a scene by breaking it down into visual groups and structures ; repeated structures are then associated for cognitive recognition . Our key insight is to separate everything first and group visually similar components later . The grouping takes place within an image and across images .
Within an image , visually similar segments are merged to form instances ; across images , visually familiar segments are associated to create semantics , as illustrated in Fig . 1 . We carry out this idea by building an end-to-end trained pixel-wise embedding framework . Each pixel in an image is mapped via a CNN to a feature in latent space , and nearby features indicate pixels belonging to the same instance . This framework is therefore a non-parametric model at the segment and instance levels as its complexity scales with the number of segments and instances , i.e. , exemplars . Particularly , by forcing all the instances to separate , the model has to utilize all the possible visual and semantic information . The model thus learns to separate instances by not only their appearances but also their surroundings , or visual context . A major difference between our model and others is the metric learning perspective : Our model trains with a contrastive loss that captures pixel-to-segment relationships while others train with pixel-wise classification that predicts category or instance directly . As a result , the learned panoptic embeddings can discover instances under similar context , as in Figure 1 bottom row . Specifically , we adapt the Segment Sorting approach ( Hwang et al. , 2019b ) to panoptic segmentation by sorting segments according to both their semantic and instance labels , hence dubbed Panoptic Segment Sorting ( PSS ) . Such trained pixel-wise embeddings thus encode both semantic and instance information . We then predict each segment ’ s semantic label by simply mapping and classifying its prototype feature with a softmax classifier . We also propose a corresponding clustering algorithm to merge segments into instances with a nearest neighbor criterion ( Sarfraz et al. , 2019 ) .
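The nearest-neighbor merging step can be sketched in the spirit of the first-neighbor clustering of Sarfraz et al. (2019): link each segment prototype to its single nearest neighbor in embedding space and take connected components of the resulting graph as instances. This toy version is our own simplification for illustration, not the paper's code.

```python
import numpy as np

def merge_by_first_neighbor(prototypes):
    """Link every segment to its nearest neighbour; connected components
    of the link graph become instance labels (first-neighbour criterion)."""
    n = len(prototypes)
    d = np.linalg.norm(prototypes[:, None] - prototypes[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nn = d.argmin(axis=1)                # each segment's first neighbour
    # union-find over the (i, nn[i]) links
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in enumerate(nn):
        parent[find(i)] = find(int(j))
    roots = [find(i) for i in range(n)]
    relabel = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]
```

Because every node links to exactly one neighbour, the number of resulting components (instances) needs no preset cluster count, which matches the non-parametric spirit of the framework.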
To alleviate the problem of instances with various scales , we further equip our framework with hybrid scale exemplars during training and dynamic partitioning during inference . Finally , we facilitate the merging process with a seeding branch that predicts the center of each instance . As a result , we demonstrate that the contexts of instances retrieved by our panoptic embeddings are relatively more consistent by 13.7 % while achieving competitive performance amongst the state-of-the-art on two datasets , Cityscapes ( Cordts et al. , 2016 ) and PASCAL VOC ( Everingham et al. ) . These promising results suggest that Panoptic Segment Sorting or pixel-wise embeddings can not only inject new understanding into panoptic segmentation but also potentially serve as a foundation for other tasks such as discovering novel contexts or modeling instance relationships . 2 RELATED WORK . Image parsing and panoptic segmentation . The task of image parsing is first introduced in Tu et al . ( 2005 ) , where they formulate the solution in a Bayesian framework and construct a parsing graph as output . Since then , a lot of work has attempted to solve holistic scene understanding ( Zhu & Mumford ( 2007 ) ; Malisiewicz & Efros ( 2008 ) ; Tighe & Lazebnik ( 2013 ) ; Rabinovich et al . ( 2007 ) ; Yao et al . ( 2012 ) ) . Recently , Kirillov et al . ( 2019b ) reintroduce image parsing in the context of deep learning with large-scale datasets and a new evaluation metric , renaming the task panoptic segmentation so as to unify the well-developed semantic and instance segmentation . Many research efforts ( Li et al. , 2018b ; Kirillov et al. , 2019a ; Xiong et al. , 2019 ; Porzi et al. , 2019 ; Yang et al. , 2019 ; Liu et al. , 2019 ; Li et al. , 2019 ; 2018a ; Gao et al. , 2019 ; Chen et al. , 2020 ; Wu et al. , 2020 ; Wang et al. , 2020 ; Li et al. , 2020 ) have followed quickly .
This paper adapts Segment Sorting to panoptic segmentation and proposes Panoptic Segment Sorting (PSS). The proposed method learns to sort segments according to both their semantic and instance labels. The semantic label is acquired by simply mapping and classifying a prototype feature, and instances are formed by a clustering algorithm. A seeding branch is further used to guide merging and avoid false positives.
Explainable Reinforcement Learning Through Goal-Based Interpretability
1 INTRODUCTION . Deep learning has had a huge impact on Reinforcement Learning , making it possible to solve certain problems for the first time , vastly improving performance in many old problems and often exceeding human performance in difficult tasks ( Schrittwieser et al. , 2019 ; Badia et al. , 2020 ) . These improvements come at a price , though : deep agents are black-boxes which are difficult to understand and their decisions are hard to explain due to the complexity and non-obvious behavior of neural networks . In safety-critical applications , it is often fundamental to check that certain properties are respected or to understand what the behavior of the agent will be ( García & Fernández , 2015 ; Bragg & Habli , 2018 ) . Simply observing the behavior of the agent is often not enough , since it might take its actions for the wrong reasons or it might have surprising behavior when faced with an unexpected state . Ideally , the agent would explain its behavior , which would allow for auditing , accountability , and safety-checking ( Puiutta & Veith , 2020 ) , unlocking the use of Reinforcement Learning systems in critical areas such as robotics , semi-autonomous driving , or industrial applications . We provide three contributions to make deep agents more interpretable . First , we develop a new type of explanation for the agent ’ s behavior . Imagine the following scenario : a robotic agent has to traverse a difficult terrain until it reaches a specific building , where it collects a reward . The agent decomposes its task into a series of goals ( for example , positions it has to reach ) and tries to reach these goals successively until it reaches the reward zone .
The agent is more interpretable since it explicitly produces the successive goals it is trying to accomplish : the current goal explains its short-term behavior ( the joint movements are done to reach the current goal position ) and the remaining goals help us understand the agent ’ s overall plan to solve the task and predict its future behavior . We call goal-based explanation or goal-based interpretability the use of a plan composed of a series of goals . Both model-based reinforcement learning ( Moerland et al. , 2020 ) and planning techniques ( Fox et al. , 2017 ) appear similar to goal-based explanations but there are important differences that make this technique novel . Goal-based explanations do not require learning a model of the environment ( neither the reward function nor the transition function ) , thus being compatible with both model-free and model-based reinforcement learning . Planning can be a useful explainability technique , but it has a few limitations : it typically requires knowing the end goals , it often can not be applied to complex Markov Decision Problems , and it may have difficulty handling very large or continuous action or state spaces . Our approach suffers from none of these limitations . Second , we develop a method to make the agent produce the goals that add interpretability . To do so , the agent is structured as a 2-level hierarchy of policies , with a goal-picking policy that produces goals and a goal-reaching policy that attempts to reach them . Goals are ( state , minimum desired reward ) pairs , meaning the goal-reaching policy has to reach a specific state in at most H steps and collect a minimum amount of reward along the way .
To create a goal-based explanation , the goal-picking policy is queried repeatedly : given the agent ’ s state s , we query for the current goal g1 = ( s1 , r1 ) ; we then assume the agent reaches the state s1 and query for the next goal g2 = ( s2 , r2 ) ; this process is repeated for a fixed number of steps per environment , though in future work more sophisticated algorithms to determine the adequate number of goals could be compared . Our third contribution is developing HAC-General , a new algorithm specifically designed to train goal-producing hierarchical agents . This algorithm builds upon the Hindsight Actor-Critic ( HAC ) algorithm ( Levy et al. , 2019 ) and makes it more widely applicable by not requiring the environment to provide an explicit end-goal . Instead of trying to reach the end-goal as fast as possible and ignoring the environment ’ s rewards , the HAC-General algorithm trains the agent to maximize the collected reward . Our extension tries to preserve the key property that makes the Hindsight Actor-Critic algorithm effective : having an effective strategy to deal with non-stationarity by giving the illusion that the policies in sub-levels are optimal . The HAC-General algorithm is also able to leverage a black-box expert to improve and speed up the training for the hierarchical agent . 2 BACKGROUND & RELATED WORK . 2.1 EXPLAINABLE REINFORCEMENT LEARNING . The Reinforcement Learning community has recognized the need for interpretable and explainable agents , and researchers have developed several methods to add explainability and interpretability . Puiutta & Veith ( 2020 ) survey explainability techniques ; we briefly describe some key methods . To add interpretability , saliency-map methods determine the importance of each input feature for the policy when it generates its output . Perturbation-based methods ( Greydanus et al. , 2018 ) measure importance by perturbing different parts of the input and measuring the change in the policy ’ s output .
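The repeated goal-picking queries described earlier in this section can be written as a short loop. This is a minimal sketch, not the HAC-General implementation: `goal_picking_policy` is a hypothetical callable standing in for the top-level policy, mapping a state to a (goal state, minimum reward) pair, and the loop optimistically assumes each goal state is reached.

```python
def goal_based_explanation(goal_picking_policy, state, num_goals=4):
    """Build a goal-based explanation: query the goal-picking policy,
    assume the goal state is reached, and query again, yielding the
    agent's plan as a list of (goal_state, min_reward) pairs."""
    plan = []
    for _ in range(num_goals):
        goal_state, min_reward = goal_picking_policy(state)
        plan.append((goal_state, min_reward))
        state = goal_state  # assume the goal-reaching policy succeeds
    return plan
```

The returned plan is the explanation itself: the first pair explains the agent's short-term behavior, and the rest expose its longer-term intent.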
The larger the change in output , the more important the feature ; the magnitude of the change quantifies the relative importance of features , making it possible to build the saliency map . In object-based saliency maps ( Iyer et al. , 2018 ) , in addition to measuring the importance of raw features , they also measure the importance of whole objects present in the image . The importance of each object is measured by masking it and measuring the change in the policy ’ s output . Thus , a higher-level object saliency map is created , which can be more easily interpreted by non-experts . Another approach is to distill the policy of the black-box agent into a simpler , more interpretable model while trying to preserve the behavior and performance of the black-box policy . Coppens et al . ( 2019 ) distill the black-box policy into a soft decision tree , a type of decision tree where the leaves output a static distribution over the actions and the inner nodes select the sub-branch using a logistic model . A different approach is taken by Liu et al . ( 2019 ) who distill the model into linear model U-Trees , a type of decision tree in which leaf nodes use a linear model to produce their output ( Q-values ) instead of outputting a constant value . Both types of decision trees are more interpretable since they follow clear and simpler rules to go down the tree and to pick the output value . 2.2 HIERARCHICAL REINFORCEMENT LEARNING . In Hierarchical Reinforcement Learning ( HRL ) , an agent is composed of a hierarchy of policies . The top layer decomposes the task into sub-tasks , the layer below decomposes sub-tasks into sub-subtasks , and so on until the lowest level receives a low-level task and attempts to solve it by interacting with the environment . Policies at higher layers learn to act at higher temporal and abstraction levels . A subtask φ^i can be defined in multiple ways , for example as simpler linearly solvable Markov Decision Problems ( Earle et al.
, 2018 ) or as a tuple ( P^i , C^i_comp , R^i ) where the subtask φ^i is eligible to start any time the precondition P^i is satisfied and is completed once the current state is part of the completion set C^i_comp , upon which it receives a reward r_t ∼ R^i ( Sohn et al. , 2020 ) . Our approach is based upon goal-oriented hierarchical reinforcement learning , where completing a task means reaching a goal , which typically is a state s that the agent must reach . Policies that receive a goal have only H steps to reach it instead of an unlimited time budget . The policy at the bottom of the hierarchy interacts with the environment while the other policies act by picking goals ( i.e . their actions are goals for the policy below them ) . In some problem settings , the reward must be maximized . However , in other settings , the agent receives a goal g_env from the environment which must be reached as fast as possible . In that setting , it is important to note that the agent ignores the rewards produced by the environment ; it only uses its internal reward scheme which gives the agent a small negative reward at each step , encouraging it to find short paths . While goal-oriented hierarchical reinforcement learning has a long history ( Dayan & Hinton , 1992 ) , there has been a resurgence in interest in recent years . Hierarchical-DQN ( Kulkarni et al. , 2016 ) combines hierarchical learning with deep learning for the first time ; Hierarchical Actor-Critic ( Levy et al. , 2017 ) improves performance by carefully setting up the hierarchy of actor-critic policies ; Deep Feudal Reinforcement Learning ( Vezhnevets et al. , 2017 ) use abstract goals in a latent space instead of an actual state in the real state space S. More recently , Hierarchical Learning with Off-Policy Correction ( Nachum et al. , 2018 ) tries to support off-policy learning even though all layers are constantly evolving by correcting the goals present in the transitions using a heuristic method .
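The goal-oriented two-level control loop described above, where the bottom policy gets at most H primitive steps per goal, can be sketched in a toy form. All callables here are hypothetical stand-ins (an integer-line "environment" and trivial policies), not components of any cited system.

```python
def hierarchical_rollout(env_step, state, goal_picking, goal_reaching,
                         H=5, num_goals=3):
    """Two-level rollout: the top policy picks a goal state, the bottom
    policy gets at most H primitive steps to reach it before the top
    policy is queried again."""
    trace = []
    for _ in range(num_goals):
        goal = goal_picking(state)
        for _ in range(H):
            action = goal_reaching(state, goal)
            state = env_step(state, action)
            trace.append(state)
            if state == goal:   # goal reached before the budget ran out
                break
    return state, trace
```

On an integer line where the top policy always asks for "three steps further" and the bottom policy moves one unit per step, two goals yield a final state of 6.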
While goals produced by feudal networks ( Vezhnevets et al. , 2017 ) might be more effective for training , they do not fit our interpretability objectives either since the goal space is a latent space that is not directly understandable to researchers and non-experts . 3 GENERALIZED HINDSIGHT ACTOR-CRITIC WITH TEACHER . Our work builds upon the Hindsight Actor-Critic ( Levy et al. , 2019 ) or HAC , a state-of-the-art algorithm to train hierarchical agents , which achieves excellent performance in some environments , beating the hierarchical agent algorithms mentioned before . We refer the interested reader to the original description of HAC ( Levy et al. , 2019 ) due to its non-trivial nature . HAC is designed for a specific setting : environments that provide an end goal and where the only objective is to reach the goal as fast as possible . This specialization leads to 2 limitations : ( 1 ) HAC requires a goal , making it incompatible with all environments which do not provide a goal for the agent and ( 2 ) HAC ignores the rewards given by the environment since it uses an internal reward scheme . This makes it inapplicable to most environments , in which rewards can be given anytime . To address these issues we generalize HAC , creating the HAC-General with Teacher algorithm , which does not require a goal and considers the rewards given by the environment . To avoid requiring a goal , the policy at the top of the hierarchy produces its output ( a shorter-term goal ) using only the state as input ( no end-goal in the input ) . To take into account the rewards , the objective of the goal-picking policy becomes picking goals such that the maximum amount of reward is collected during the episode . The objective of the policy at the bottom of the hierarchy ( the goal-reaching policy ) stays the same : reaching the short-term goal in at most H steps , ignoring environment rewards . 3.1 MAINTAINING THE OPTIMALITY ILLUSION TO ADDRESS NON-STATIONARITY .
These changes address the 2 limitations of HAC , but they break HAC ’ s technique to make training effective : addressing the non-stationarity problem , i.e . the problem that since all policies in the hierarchy train in parallel , each policy needs to continuously adapt to the changes in the policies below it in the hierarchy ( whose behavior it relies on ) , which makes training difficult and unstable . The insight of HAC is that if each policy were trained under the illusion that all the policies below it were stationary , then it would train faster and more efficiently . Since optimal policies are stationary , HAC attempts to give each policy the illusion that the policy below it is optimal and thus stationary . HAC carefully constructs 3 types of transitions to create this illusion , where a transition is a tuple of the form ( state , action , reward , next state , goal , discount ) . While the 3 types of transitions are detailed in Appendix A for space reasons , we summarize how HAC creates the illusion that the policy below is optimal and how HAC-General With Teacher preserves that illusion . We define some terminology : let π be the policy in the hierarchy for which we create the illusion . We call π the goal-picking policy since it produces goals and call the policy below it in the hierarchy the goal-reaching policy πbelow , since it attempts to reach the goals it receives from π. HAC . In HAC it is simple to give π the illusion that πbelow is optimal because rewards are only given when the goal-state is reached . As shown in Figure 2 , if the action of π is to pick the goal g and the goal-reaching policy πbelow fails to reach it and reaches state s instead , π ’ s action is replaced by the hindsight action s. The policy πbelow now appears optimal since π picked goal s and πbelow reached it . Problem . This technique breaks down when environment rewards matter and must be maximized , i.e . it breaks down for HAC-General with Teacher .
Replacing g by s is not enough anymore to give the illusion that the goal-reaching πbelow acted optimally : while πbelow reached state s , it might not have collected the maximum amount of reward possible . In other words , there might be an alternative path to the same final state where more reward would have been collected ( Figure 3 ) . Since in most environments , it is impractical or impossible to determine if the optimal path was taken ( or what the optimal path is ) , we can not guarantee that πbelow appears optimal . Solution . To address this issue , HAC-General uses a new definition of goals . The new goals have 2 components : a state s which must be reached and a minimum amount of reward r_min which must be collected . As shown in Figure 4 , if the original action/goal is ( s , r_min ) but the goal-reaching policy instead reaches s′ and collects r′ reward , then the goal-picking policy ’ s action is replaced by the hindsight action ( s′ , r_hindsight ) where r_hindsight ≤ r′ , creating again the optimality illusion . It is important to note that HAC-General creates the same 3 types of transitions as HAC ; the major change is the way goals are defined . Advantages . The goal-picking policy now has 2 mechanisms to maximize the reward it collects : ( 1 ) pick goal states s that lead to high rewards and ( 2 ) force the goal-reaching policy to take a high-reward path to s by making the minimum reward threshold r_min as high as possible . The second point makes it possible to achieve high reward in RL environments where the reward is also tied to the action , not just the state , since the goal-reaching policy will learn to pick actions that lead to the goal-state but also lead to high reward .
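The hindsight relabelling step can be sketched as follows. This is an illustrative simplification: the transition layout and field names are our own (the actual HAC-General transition tuple has more fields), and we pick the tightest valid threshold, r_hindsight equal to the reward actually collected.

```python
def hindsight_relabel(transition, reached_state, collected_reward):
    """Replace the goal-picking policy's original action (s_goal, r_min)
    with the hindsight action (s', r_hindsight), r_hindsight <= r', so
    the goal-reaching policy below appears to have acted optimally."""
    relabeled = dict(transition)                           # keep the original intact
    relabeled["action"] = (reached_state, collected_reward)  # tightest valid threshold
    return relabeled
```

With this relabelling, whatever state the lower policy reached and whatever reward it collected now look like exactly what the goal-picking policy asked for, restoring the optimality illusion.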
The paper proposes a hybrid imitation learning/reinforcement learning method for learning hierarchical policies where the top layer provides sub-goals and desired cumulative rewards and the bottom layer learns to meet these goals. The advantage of such a decomposition is interpretability of the learned policy. The algorithm is evaluated on MountainCar and LunarLander from OpenAI’s gym. The authors show that their imitation learning/RL scheme is able to solve both tasks while producing reasonable sub-goals.
Explainable Reinforcement Learning Through Goal-Based Interpretability
1 INTRODUCTION . Deep learning has had a huge impact on Reinforcement Learning , making it possible to solve certain problems for the first time , vastly improving performance in many old problems and often exceeding human performance in difficult tasks ( Schrittwieser et al. , 2019 ; Badia et al. , 2020 ) . These improvements come at a price though : deep agents are black-boxes which are difficult to understand and their decisions are hard to explain due to the complexity and non-obvious behavior of neural networks . In safety-critical applications , it is often fundamental to check that certain properties are respected or to understand what the behavior of the agent will be ( Garcı́a & Fernández , 2015 ; Bragg & Habli , 2018 ) . Simply observing the behavior of the agent is often not enough , since it might take its actions for the wrong reasons or it might have surprising behavior when faced with an unexpected state . Ideally , the agent would explain its behavior , which would allow for auditing , accountability , and safety-checking ( Puiutta & Veith , 2020 ) , unlocking the use of Reinforcement Learning systems in critical areas such as robotics , semi-autonomous driving , or industrial applications . We provide three contributions to make more interpretable deep agents . First , we develop a new type of explanation for the agent ’ s behavior . Imagine the following scenario : a robotic agent has to traverse a difficult terrain until it reaches a specific building , where it collects a reward . The agent decomposes its task into a series of goals ( for example , positions it has to reach ) and tries to reach these goals successively until it reaches the reward zone . 
The agent is more interpretable since it explicitly produces the successive goals it is trying to accomplish : the current goal explains its shortterm behavior ( the joint movements are done to reach the current goal position ) and the remaining goals help us understand the agent ’ s overall plan to solve the task and predict its future behavior . We call goal-based explanation or goal-based interpretability the use of plan composed by a series of goals . Both model-based reinforcement learning ( Moerland et al. , 2020 ) and planning techniques ( Fox et al. , 2017 ) appear similar to goal-based explanations but there are important differences that make this technique novel . Goal-based explanations do not require learning a model of the environment ( neither the reward function nor the transition function ) , thus being compatible with both model-free and model-based reinforcement learning . Planning can be a useful explainability technique , but it has a few limitations : it typically requires knowing the end goals , they often can not be applied to complex Markov Decision Problems and they may have difficulty handling very large or continuous action spaces or state spaces . Our approach suffers from none of these limitations . Second , we develop a method to make the agent produce the goals that add interpretability . To do so , the agent is structured as a 2-level hierarchy of policies , with a goal-picking policy that produces goals and a goal-reaching policy that attempts to reach them . Goals are ( state , minimum desired reward ) pairs , meaning the goal-reaching policy has to reach a specific state in at most H steps and collect a minimum amount of reward along the way . 
To create a goal-based explanation, the goal-picking policy is queried repeatedly: given the agent's state s, we query for the current goal g1 = (s1, r1); we then assume the agent reaches the state s1 and query for the next goal g2 = (s2, r2); this process is repeated for a fixed number of steps per environment, though in future work more sophisticated algorithms for determining the adequate number of goals could be compared. Our third contribution is developing HAC-General, a new algorithm specifically designed to train goal-producing hierarchical agents. This algorithm builds upon the Hindsight Actor-Critic (HAC) algorithm (Levy et al., 2019) and makes it more widely applicable by not requiring the environment to provide an explicit end-goal. Instead of trying to reach the end-goal as fast as possible and ignoring the environment's rewards, the HAC-General algorithm trains the agent to maximize the collected reward. Our extension tries to preserve the key property that makes the Hindsight Actor-Critic algorithm effective: having an effective strategy to deal with non-stationarity by giving the illusion that the policies in sub-levels are optimal. The HAC-General algorithm is also able to leverage a black-box expert to improve and speed up the training of the hierarchical agent. 2 BACKGROUND & RELATED WORK . 2.1 EXPLAINABLE REINFORCEMENT LEARNING . The Reinforcement Learning community has recognized the need for interpretable and explainable agents, and researchers have developed several methods to add explainability and interpretability. Puiutta & Veith (2020) survey explainability techniques; we briefly describe some key methods. To add interpretability, saliency-map methods determine the importance of each input feature for the policy when it generates its output. Perturbation-based methods (Greydanus et al., 2018) measure importance by perturbing different parts of the input and measuring the change in the policy's output.
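The explanation-generation procedure described at the start of this section, repeatedly querying the goal-picking policy while assuming each goal is reached, can be sketched as below. The `goal_picking_policy(state) -> (goal_state, min_reward)` interface and the toy corridor policy are assumptions for illustration:

```python
def explain(goal_picking_policy, start_state, n_goals):
    """Produce a goal-based explanation: a sequence of (state, min_reward)
    goals obtained by querying only the goal-picking policy, assuming at
    each step that the goal-reaching policy succeeds."""
    plan, state = [], start_state
    for _ in range(n_goals):
        goal_state, min_reward = goal_picking_policy(state)
        plan.append((goal_state, min_reward))
        state = goal_state  # assume the agent lands exactly on the goal
    return plan

# toy goal-picking policy on a 1-D corridor: always aim 2 cells ahead
toy_policy = lambda s: (s + 2, 1.0)
print(explain(toy_policy, 0, 3))  # [(2, 1.0), (4, 1.0), (6, 1.0)]
```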
The larger the change in output, the more important the feature; the magnitude of the change quantifies the relative importance of features, making it possible to build the saliency map. Object-based saliency maps (Iyer et al., 2018) additionally measure the importance of whole objects present in the image, not just raw features. The importance of each object is measured by masking it and measuring the change in the policy's output. This yields a higher-level object saliency map that non-experts can interpret more easily. Another approach is to distill the policy of the black-box agent into a simpler, more interpretable model while trying to preserve the behavior and performance of the black-box policy. Coppens et al. (2019) distill the black-box policy into a soft decision tree, a type of decision tree where the leaves output a static distribution over the actions and the inner nodes select the sub-branch using a logistic model. A different approach is taken by Liu et al. (2019), who distill the model into linear model U-Trees, a type of decision tree in which leaf nodes use a linear model to produce their output (Q-values) instead of outputting a constant value. Both types of decision trees are more interpretable since they follow clear, simple rules to descend the tree and to pick the output value. 2.2 HIERARCHICAL REINFORCEMENT LEARNING . In Hierarchical Reinforcement Learning (HRL), an agent is composed of a hierarchy of policies. The top layer decomposes the task into sub-tasks, the layer below decomposes sub-tasks into sub-sub-tasks, and so on, until the lowest level receives a low-level task and attempts to solve it by interacting with the environment. Policies at higher layers learn to act at higher temporal and abstraction levels. A subtask φ^i can be defined in multiple ways, for example as a simpler linearly solvable Markov Decision Problem (Earle et al.
, 2018) or as a tuple (P^i, C^i_comp, R^i), where the subtask φ^i is eligible to start any time the precondition P^i is satisfied, and it is completed once the current state is part of the completion set C^i_comp, upon which a reward r^t ∼ R^i is received (Sohn et al., 2020). Our approach builds on goal-oriented hierarchical reinforcement learning, where completing a task means reaching a goal, and a goal is typically a state s that the agent must reach. Policies that receive a goal have only H steps to reach it instead of an unlimited time budget. The policy at the bottom of the hierarchy interacts with the environment, while the other policies act by picking goals (i.e., their actions are goals for the policy below them). In some problem settings, the reward must be maximized. In other settings, the agent receives a goal g_env from the environment, which must be reached as fast as possible. In that setting, it is important to note that the agent ignores the rewards produced by the environment; it only uses its internal reward scheme, which gives the agent a small negative reward at each step, encouraging it to find short paths. While goal-oriented hierarchical reinforcement learning has a long history (Dayan & Hinton, 1992), there has been a resurgence of interest in recent years. Hierarchical-DQN (Kulkarni et al., 2016) combines hierarchical learning with deep learning for the first time; Hierarchical Actor-Critic (Levy et al., 2017) improves performance by carefully setting up the hierarchy of actor-critic policies; Deep Feudal Reinforcement Learning (Vezhnevets et al., 2017) uses abstract goals in a latent space instead of an actual state in the real state space S. More recently, Hierarchical Learning with Off-Policy Correction (Nachum et al., 2018) supports off-policy learning, even though all layers are constantly evolving, by correcting the goals present in the transitions using a heuristic method.
While goals produced by feudal networks (Vezhnevets et al., 2017) might be more effective for training, they do not fit our interpretability objectives either, since the goal space is a latent space that is not directly understandable to researchers and non-experts. 3 GENERALIZED HINDSIGHT ACTOR-CRITIC WITH TEACHER . Our work builds upon the Hindsight Actor-Critic (Levy et al., 2019), or HAC, a state-of-the-art algorithm for training hierarchical agents, which achieves excellent performance in some environments, outperforming the hierarchical agent algorithms mentioned above. We refer the interested reader to the original description of HAC (Levy et al., 2019) due to its non-trivial nature. HAC is designed for a specific setting: environments that provide an end goal and where the only objective is to reach that goal as fast as possible. This specialization leads to two limitations: (1) HAC requires a goal, making it incompatible with all environments that do not provide a goal for the agent, and (2) HAC ignores the rewards given by the environment, since it uses an internal reward scheme. This makes it inapplicable to most environments, in which rewards can be given at any time. To address these issues, we generalize HAC, creating the HAC-General with Teacher algorithm, which does not require a goal and which considers the reward given by the environment. To avoid requiring a goal, the policy at the top of the hierarchy produces its output (a shorter-term goal) using only the state as input (no end-goal in the input). To take the rewards into account, the objective of the goal-picking policy becomes picking goals such that the maximum amount of reward is collected during the episode. The objective of the policy at the bottom of the hierarchy (the goal-reaching policy) stays the same: reaching the short-term goal in at most H steps, ignoring environment rewards. 3.1 MAINTAINING THE OPTIMALITY ILLUSION TO ADDRESS NON-STATIONARITY .
These changes address the two limitations of HAC, but they break HAC's technique for making training effective: its handling of the non-stationarity problem. Since all policies in the hierarchy train in parallel, each policy must continuously adapt to changes in the policies below it (whose behavior it relies on), which makes training difficult and unstable. The insight of HAC is that if each policy trained under the illusion that all policies below it were stationary, it would train faster and more efficiently. Since optimal policies are stationary, HAC attempts to give each policy the illusion that the policy below it is optimal and thus stationary. HAC carefully constructs 3 types of transitions to create this illusion, where a transition is a tuple of the form (state, action, reward, next state, goal, discount). While the 3 types of transitions are detailed in Appendix A for space reasons, we summarize how HAC creates the illusion that the policy below is optimal and how HAC-General with Teacher preserves that illusion. We define some terminology: let π be the policy in the hierarchy for which we create the illusion. We call π the goal-picking policy, since it produces goals, and call the policy below it in the hierarchy the goal-reaching policy π_below, since it attempts to reach the goals it receives from π. HAC. In HAC it is simple to give π the illusion that π_below is optimal, because rewards are only given when the goal-state is reached. As shown in Figure 2, if the action of π is to pick the goal g and the goal-reaching policy π_below fails to reach it and reaches state s instead, π's action is replaced by the hindsight action s. The policy π_below now appears optimal, since π picked goal s and π_below reached it. Problem. This technique breaks down when environment rewards matter and must be maximized, i.e., it breaks down for HAC-General with Teacher.
Replacing g by s is no longer enough to give the illusion that the goal-reaching policy π_below acted optimally: while π_below reached state s, it might not have collected the maximum amount of reward possible. In other words, there might be an alternative path to the same final state along which more reward would have been collected (Figure 3). Since in most environments it is impractical or impossible to determine whether the optimal path was taken (or what the optimal path is), we cannot guarantee that π_below appears optimal. Solution. To address this issue, HAC-General uses a new definition of goals. The new goals have two components: a state s which must be reached and a minimum amount of reward r_min which must be collected. As shown in Figure 4, if the original action/goal is (s, r_min) but the goal-reaching policy instead reaches s′ and collects reward r′, then the goal-picking policy's action is replaced by the hindsight action (s′, r_hindsight), where r_hindsight ≤ r′, creating the optimality illusion again. It is important to note that HAC-General creates the same 3 types of transitions as HAC; the major change is the way goals are defined. Advantages. The goal-picking policy now has two mechanisms to maximize the reward it collects: (1) pick goal states s that lead to high rewards, and (2) force the goal-reaching policy to take a high-reward path to s by making the minimum reward threshold r_min as high as possible. The second point makes it possible to achieve high reward in RL environments where the reward is tied to the action, not just the state, since the goal-reaching policy will learn to pick actions that lead to the goal-state but also yield high reward.
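The hindsight relabeling step above can be sketched as follows. The transition layout (a dict with an `action` field holding a (goal_state, min_reward) pair) is an assumed representation for illustration, not HAC-General's actual data structure:

```python
def hindsight_relabel(transition, reached_state, collected_reward):
    """Sketch of HAC-General's hindsight action relabeling. If the
    goal-reaching policy ended in `reached_state` having collected
    `collected_reward`, replace the goal-picking policy's action so the
    sub-policy appears optimal in hindsight: the state it actually
    reached, paired with a reward bar it actually met."""
    relabeled = dict(transition)
    # r_hindsight <= r': any threshold at or below the collected reward
    r_hindsight = collected_reward
    relabeled["action"] = (reached_state, r_hindsight)
    return relabeled

t = {"state": 0, "action": (10, 5.0)}   # asked for state 10, >= 5.0 reward
t2 = hindsight_relabel(t, reached_state=7, collected_reward=3.5)
# t2["action"] == (7, 3.5): the goal-reaching policy now looks optimal
```

Choosing `r_hindsight` strictly below the collected reward would also preserve the illusion; using the collected reward itself is the tightest valid choice in this sketch.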
This paper proposes a hierarchical RL method in which the high-level controller produces a series of sub-goals in an open-loop fashion, which the low-level controller attempts to reach sequentially with the aim of maximizing task rewards. The agent is trained using an extension of the Hindsight Actor-Critic (HAC) algorithm. The algorithm also leverages a model-free flat policy trained on task rewards as an expert. The approach is evaluated on two tasks: Mountain Car and Lunar Lander.
SP:b669fac24df0cab2ca31e638ea1b336e5af40866
Learning Deeply Shared Filter Bases for Efficient ConvNets
1 INTRODUCTION . Modern networks such as ResNets usually contain massive numbers of identical convolution blocks, and recent analytic studies (Jastrzebski et al., 2018) show that these blocks perform similar iterative refinement rather than learning new features. Inspired by this massively repeated block structure of modern networks, recursive ConvNets that share weights across iterative blocks have been studied as a promising direction toward parameter-efficient ConvNets (Jastrzebski et al., 2018; Guo et al., 2019; Savarese & Maire, 2019). However, repetitive use of parameters across many convolution layers incurs several challenges that limit the performance of such recursive networks. First of all, deep sharing of parameters might result in vanishing and exploding gradients, problems often found in recurrent neural networks (RNNs) (Pascanu et al., 2013; Jastrzebski et al., 2018). Another challenge is that the overall representation power of the networks might be limited by using the same filters repeatedly for many convolution layers. To address the aforementioned challenges, in this paper we propose an effective and efficient parameter-sharing mechanism for modern ConvNets with many repetitive convolution blocks. In our work, convolution filters are decomposed into a fundamental and reusable unit, called a filter basis, and a layer-specific part, called coefficients. By sharing a filter basis, rather than whole convolution filters or a layer, we can impose two desirable properties on the shared parameters: (1) resilience against vanishing/exploding gradients, and (2) representational expressiveness of the individual layers sharing parameters. We first show theoretically that a shared filter basis can cause vanishing and exploding gradients, and that this problem can be controlled to a large extent by making filter bases orthogonal.
To enforce the orthogonality of filter bases, we propose an orthogonality regularization to train ConvNets with deeply shared filter bases. Our experimental results show that the proposed orthogonality regularization reduces redundancy not just in deeply shared filter bases, but also in non-shared parameters, resulting in better performance than over-parameterized counterpart networks. Next, we make convolution layers with shared parameters more expressive using a hybrid approach to sharing filter bases, in which a small number of layer-specific non-shared filter basis components are combined with shared filter basis components. With this hybrid scheme, the constructed filters can be positioned in different vector subspaces that reflect the peculiarities of individual convolution layers. We argue that these layer-specific variations contribute to increasing the representation power of the networks when a large portion of parameters is shared. Since our focus is not on pushing the state-of-the-art performance, we show the validity of our work using widely used ResNets as base models on image classification tasks with the CIFAR and ImageNet datasets. Our experimental results demonstrate that when each filter basis is shared by up to 10 convolution layers, our method consistently outperforms counterpart ConvNet models while reducing a significant amount of parameters and computational costs. For example, our method can save up to 63.8% of parameters and 33.4% of FLOPs, respectively, while achieving lower test errors than much deeper counterpart models. Our parameter-sharing structure and training mechanism can be applied to modern compact networks, such as MobileNets (Howard et al., 2017) and ShuffleNets (Zhang et al., 2018), with minor adaptations. Since these compact models already have decomposed convolution blocks, some parts of each block can be identified as a shareable filter basis and the rest as layer-specific parts.
In Experiments, we demonstrate that the compact MobileNetV2 can achieve a further 8-21% parameter savings with our scheme while retaining, or improving, the performance of the original models. 2 RELATED WORK . Recursive networks and parameter sharing: Recurrent neural networks (RNNs) (Graves et al., 2013) have been well studied for temporal and sequential data. As a generalization of RNNs, recursive variants of ConvNets are used extensively for visual tasks (Socher et al., 2011; Liang & Hu, 2015; Xingjian et al., 2015; Kim et al., 2016; Zamir et al., 2017). For instance, Eigen et al. (2014) explore recursive convolutional architectures that share filters across multiple convolution layers. They show that recurrence with deeper layers tends to increase performance. However, their recursive architecture shows worse performance than independent convolution layers due to overfitting. In most previous works, the filters themselves are shared across layers. In contrast, we propose to share filter bases, which are more fundamental and reusable building blocks for constructing layer-specific filters. More recently, Jastrzebski et al. (2018) show that the iterative refinement of features in ResNets suggests that deep networks can potentially leverage intensive parameter sharing. Guo et al. (2019) introduce a gate unit to determine whether to jump out of the recursive loop of convolution blocks to save computational resources. These works show that training recursive networks with naively shared blocks leads to poor performance due to exploding and vanishing gradients, as in RNNs (Pascanu et al., 2013; Vorontsov et al., 2017). To mitigate this problem, they suggest an unshared batch normalization strategy. In our work, we propose an orthogonality regularization of shared filter bases to further address this problem. Savarese & Maire (2019)'s work is also relevant to ours.
In their work, the parameters of the recurrent layers of ConvNets are generated by a linear combination of 1-2 parameter tensors from a global bank of templates. While similar in spirit, our work suggests more fine-grained filter bases as more desirable building blocks for effective parameter sharing, since filter bases can easily be combined with layer-specific non-shared components for better representation power. Our results show that these layer-specific non-shared components are critical to achieving high performance. Although they achieve about 60% parameter savings, their approach does not outperform counterpart models and incurs slight increases in computational costs due to the overhead of reparameterizing tensors from the templates. Model compression and efficient convolution block design: Reducing the storage and inference time of ConvNets has been an important research topic for both resource-constrained mobile/embedded systems and energy-hungry data centers. A number of techniques have been developed, such as filter pruning (LeCun et al., 1990; Polyak & Wolf, 2015; Li et al., 2017; He et al., 2017), low-rank factorization (Denton et al., 2014; Jaderberg et al., 2014), quantization (Han et al., 2016), and knowledge distillation (Hinton et al., 2015; Chen et al., 2017), to name a few. These compression techniques have been suggested as post-processing steps applied after initial training. Unfortunately, their accuracy is usually bounded by that of the approximated original models. By contrast, our models are trained from scratch, as in Ioannou et al. (2017)'s work, and our results show that parameter-sharing approaches can outperform the counterpart models, if a proper training method is combined, while achieving significant savings in parameters. Some compact networks such as ShuffleNet (Zhang et al., 2018) and MobileNet (Howard et al., 2017; Sandler et al.
, 2018) show that a delicately designed internal structure of convolution blocks achieves better performance with lower computational complexity. Our work can be applied to these compact networks since they already exploit decomposed convolutions for their efficient convolution blocks. For instance, since MobileNet's convolution blocks consist of pointwise-depthwise-pointwise convolution steps, the first two convolution steps can be considered a reusable filter basis and the last step can be considered layer-specific coefficients. In Experiments, we show that MobileNetV2 combined with our parameter-sharing scheme outperforms the original models while saving about 8-21% of parameters. 3 DEEP RECURSIVE SHARING OF A FILTER BASIS . In this section, we discuss how to decompose typical convolution layers into more recursive units, or filter bases, and remaining layer-specific parts. We also discuss how to train ConvNets effectively when filter bases are deeply shared by repetitive convolution layers. 3.1 FILTER BASES OF CONVOLUTION LAYERS . Consider a convolution layer with S input channels, T output channels, and a set of filters $W = \{W_t \in \mathbb{R}^{k \times k \times S}, t \in [1..T]\}$. Each filter $W_t$ can be decomposed using a lower-rank filter basis $W_{\text{basis}}$ and coefficients $\alpha$:
$$W_t = \sum_{r=1}^{R} \alpha_t^r W_{\text{basis}}^r, \qquad (1)$$
where $W_{\text{basis}} = \{W_{\text{basis}}^r \in \mathbb{R}^{k \times k \times S}, r \in [1..R]\}$ is a filter basis and $\alpha = \{\alpha_t^r \in \mathbb{R}, r \in [1..R], t \in [1..T]\}$ are scalar coefficients. In Equation 1, R is the rank of the basis. In a typical convolution layer, the output feature maps $V \in \mathbb{R}^{w \times h \times T}$, with channels $V_t$ for $t \in [1..T]$, are obtained by the convolution between the input feature maps $U \in \mathbb{R}^{w \times h \times S}$ and the filters $W_t$. With Equation 1, this convolution can be rewritten as follows:
$$V_t = U * W_t = U * \sum_{r=1}^{R} \alpha_t^r W_{\text{basis}}^r \qquad (2)$$
$$\;\; = \sum_{r=1}^{R} \alpha_t^r \,(U * W_{\text{basis}}^r), \quad t \in [1..T]. \qquad (3)$$
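As a sanity check on Equations 1-3, the following self-contained numpy sketch builds filters from a random basis and verifies that one direct convolution equals R basis convolutions followed by a 1×1 recombination. The array shapes and the naive convolution loop are illustrative choices, not the paper's implementation:

```python
import numpy as np

def conv2d(U, W):
    """Naive valid 2-D convolution of U (H, W_in, S) with filters W (T, k, k, S)."""
    T, k, _, S = W.shape
    H, Wd, _ = U.shape
    out = np.zeros((H - k + 1, Wd - k + 1, T))
    for t in range(T):
        for i in range(H - k + 1):
            for j in range(Wd - k + 1):
                out[i, j, t] = np.sum(U[i:i+k, j:j+k, :] * W[t])
    return out

rng = np.random.default_rng(0)
S, T, R, k = 3, 4, 2, 3                     # R < T: decomposition saves FLOPs
W_basis = rng.normal(size=(R, k, k, S))     # shared filter basis
alpha = rng.normal(size=(R, T))             # layer-specific coefficients
W = np.einsum("rt,rijs->tijs", alpha, W_basis)  # Eq. 1: W_t = sum_r alpha_t^r W^r

U = rng.normal(size=(6, 6, S))
direct = conv2d(U, W)                                 # Eq. 2: one k x k conv
V_basis = conv2d(U, W_basis)                          # R intermediate basis maps
two_step = np.einsum("rt,ijr->ijt", alpha, V_basis)   # Eq. 3: 1x1 recombination
assert np.allclose(direct, two_step)                  # linearity of convolution
```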
In Equation 3, the order of the convolution operation and the linear combination of the filter basis is swapped, using the linearity of the convolution operator. This result shows that a standard convolution layer can be replaced with two successive convolution layers, as shown in Figure 1-(b). The first decomposed convolution layer performs R convolutions between $W_{\text{basis}}^r$, $r \in [1..R]$, and the input feature maps U, generating an intermediate feature map basis $V_{\text{basis}} \in \mathbb{R}^{w \times h \times R}$. The second decomposed convolution layer performs point-wise 1×1 convolutions that linearly combine the R intermediate feature maps $V_{\text{basis}}$ to generate the output feature maps V. The computational complexity of the original convolution is $O(whk^2ST)$, while the decomposed operations take $O(wh(k^2SR + RT))$. As long as $R < T$, the decomposed convolution has lower computational complexity than the original convolution. Due to this computational efficiency, many compact networks such as MobileNets and ShuffleNets also have similar block structures of decomposed convolution layers. For instance, MobileNets have repetitive convolution blocks of pointwise-depthwise-pointwise convolutions. The filters in the first two steps can be considered a reusable filter basis and the remaining 1×1 filters in the last step can be considered layer-specific coefficients. 3.2 RECURSIVE SHARING OF A FILTER BASIS . In typical ConvNets, convolution layers have different filters W and, hence, each decomposed convolution layer has its own filter basis $W_{\text{basis}}$ and coefficients $\alpha$. In contrast, our primary goal in decomposing convolution layers is to share a single filter basis (or a small number of filter bases) across many recursive convolution layers. Unlike some previous works (Jastrzebski et al., 2018; Köpüklü et al.
, 2019), in which the convolution filters W themselves are shared recursively, we argue that a filter basis $W_{\text{basis}}$ is a more intrinsic and reusable building block that can be shared effectively, since a filter basis constitutes a subspace in which the high-dimensional filters of many convolution layers can be approximated. Though the components of a basis only need to be independent and span a vector subspace, some specific bases are more convenient and appropriate for specific purposes. For the purpose of sharing a filter basis, we need to find an optimal filter basis $W_{\text{basis}}$ that can expedite the training of the filters of the shared convolution layers. Although this optimization can be done with typical stochastic gradient descent (SGD), one problem is that exploding/vanishing gradients might prevent an efficient search of the optimization space. More formally, consider a series of N decomposed convolution layers in which a filter basis $W_{\text{basis}}$ is shared N times. Let $x^i$ be the input of the i-th convolution layer, and let $a^i$ be the output of the convolution of $x^{i-1}$ with the filter basis $W_{\text{basis}}$:
$$a^i(x^{i-1}) = W_{\text{basis}}^\top x^{i-1}. \qquad (4)$$
In Equation 4, $W_{\text{basis}} \in \mathbb{R}^{k^2 S \times R}$ is a reshaped filter basis that has the basis components as its columns. We assume that the input x is properly adapted (e.g., with im2col) to express convolutions as matrix-matrix multiplications. Since $W_{\text{basis}}$ is shared across N recursive convolution layers, the gradient of $W_{\text{basis}}$ for some loss function L is:
$$\frac{\partial L}{\partial W_{\text{basis}}} = \sum_{i=1}^{N} \frac{\partial L}{\partial a^N} \prod_{j=i}^{N-1} \left( \frac{\partial a^{j+1}}{\partial a^j} \right) \frac{\partial a^i}{\partial W_{\text{basis}}}, \qquad (5)$$
where
$$\frac{\partial a^{j+1}}{\partial a^j} = \frac{\partial a^{j+1}}{\partial x^j} \frac{\partial x^j}{\partial a^j} = W_{\text{basis}}^\top \frac{\partial x^j}{\partial a^j}. \qquad (6)$$
If we plug $W_{\text{basis}}^\top \frac{\partial x^j}{\partial a^j}$ from Equation 6 into Equation 5, we can see that $\prod \frac{\partial a^{j+1}}{\partial a^j}$ is the term that makes gradients unstable, since $W_{\text{basis}}$ is multiplied many times. These exploding/vanishing gradients can be controlled to a large extent by keeping $W_{\text{basis}}$ close to orthogonal (Vorontsov et al., 2017).
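The instability of repeatedly multiplying by a shared, non-orthogonal matrix, and the stability of an orthogonal one, can be checked numerically. This toy numpy demo uses a square random matrix as a stand-in for the reshaped basis (the sizes are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 16, 30                         # stand-in basis dimension, layers sharing it

A = rng.normal(size=(n, n))           # generic (non-orthogonal) shared matrix
Q, _ = np.linalg.qr(A)                # orthogonal matrix of the same size

norm_A = np.linalg.norm(np.linalg.matrix_power(A, N), 2)  # spectral norm of A^N
norm_Q = np.linalg.norm(np.linalg.matrix_power(Q, N), 2)  # spectral norm of Q^N
# repeated multiplication by A blows up (or decays), while Q^N stays at norm 1,
# mirroring why an orthogonal shared basis keeps signals and gradients stable
print(norm_A, norm_Q)
```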
For instance, if $W_{\text{basis}}$ admits an eigendecomposition, $[W_{\text{basis}}]^N$ can be rewritten as follows:
$$[W_{\text{basis}}]^N = [Q \Lambda Q^{-1}]^N = Q \Lambda^N Q^{-1}, \qquad (7)$$
where $\Lambda$ is a diagonal matrix with the eigenvalues placed on the diagonal and Q is a matrix composed of the corresponding eigenvectors. If $W_{\text{basis}}$ is orthogonal, $[W_{\text{basis}}]^N$ neither explodes nor vanishes, since all the eigenvalues of an orthogonal matrix have absolute value 1. Similarly, an orthogonal shared basis ensures that forward signals neither explode nor vanish. We also need to ensure that the norm of $\frac{\partial x^j}{\partial a^j}$ in Equation 5 is bounded (Pascanu et al., 2013) for stability during forward and backward passes. It is shown that batch normalization after the non-linear activation at each convolution layer ensures healthy norms (Ioffe & Szegedy, 2015; Guo et al., 2019; Jastrzebski et al., 2018). For training networks, the orthogonality of shared bases can be enforced with an orthogonality regularizer. For instance, when each residual block group of a ResNet shares a filter basis for its convolution layers, the objective function $L_R$ can be defined to have an orthogonality regularizer in addition to the original loss L:
$$L_R = L + \lambda \sum_{g}^{G} \left\| W_{\text{basis}}^{(g)\top} \cdot W_{\text{basis}}^{(g)} - I \right\|^2, \qquad (8)$$
where $W_{\text{basis}}^{(g)}$ is the shared filter basis for the g-th residual block group and $\lambda$ is a hyperparameter.
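A minimal numpy sketch of the regularizer in Equation 8, assuming each basis is already reshaped to a (k²S, R) matrix; the function name and the `lam` hyperparameter value are illustrative stand-ins:

```python
import numpy as np

def ortho_penalty(W_basis_list, lam=1e-3):
    """Orthogonality regularizer of Eq. 8 (sketch): for each shared,
    reshaped filter basis W of shape (k*k*S, R), penalize
    ||W^T W - I||^2 so the basis columns stay orthonormal."""
    penalty = 0.0
    for W in W_basis_list:
        G = W.T @ W                            # R x R Gram matrix
        penalty += np.sum((G - np.eye(G.shape[1])) ** 2)
    return lam * penalty

# an exactly orthonormal basis incurs (numerically) zero penalty
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(27, 4)))
assert np.isclose(ortho_penalty([Q]), 0.0)
# a random basis does not
assert ortho_penalty([np.random.default_rng(2).normal(size=(27, 4))]) > 0.0
```

In training, this penalty would simply be added to the task loss, matching the $L_R = L + \lambda(\cdot)$ form of Equation 8.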
This method proposes to decompose convolutional filters using a low-rank filter basis, so that the convolution operation in a layer consists of a shareable filter basis and non-shareable layer coefficients. This is developed to save computational costs whilst maintaining performance. To regularise against vanishing/exploding gradients and to promote more useful representations, the authors seek orthogonal filter bases. The filter basis is shared across layers, in contrast to other works that recursively share whole filters.
SP:50759dd814d98ba988b7cc423e3115d62e05db47
Learning Deeply Shared Filter Bases for Efficient ConvNets
1 INTRODUCTION . Modern networks such as ResNets usually have massive identical convolution blocks and recent analytic studies ( Jastrzebski et al. , 2018 ) show that these blocks perform similar iterative refinement rather than learning new features . Inspired by these massive identical block structure of modern networks , recursive ConvNets sharing weights across iterative blocks have been studied as a promising direction to parameter-efficient ConvNets ( Jastrzebski et al. , 2018 ; Guo et al. , 2019 ; Savarese & Maire , 2019 ) . However , repetitive use of parameters across many convolution layers incurs several challenges that limit the performance of such recursive networks . First of all , deep sharing of parameters might result in vanishing gradients and exploding gradients problems , which are often found in recurrent neural networks ( RNNs ) ( Pascanu et al. , 2013 ; Jastrzebski et al. , 2018 ) . Another challenge is that overall representation power of the networks might be limited by using same filters repeatedly for many convolution layers . To address aforementioned challenges , in this paper , we propose an effective and efficient parametersharing mechanism for modern ConvNets having many repetitive convolution blocks . In our work , convolution filters are decomposed into a fundamental and reusable unit , which is called a filter basis , and a layer-specific part , which is called coefficients . By sharing a filter basis , not whole convolution filters or a layer , we can impose two desirable properties on the shared parameters : ( 1 ) resilience against vanishing/exploding gradients , and ( 2 ) representational expressiveness of individual layers sharing parameters . We first show theoretically that a shared filter basis can cause vanishing gradients and exploding gradients problems , and this problem can be controlled to a large extent by making filter bases orthogonal . 
To enforce the orthogonality of filter bases , we propose an orthogonality regularization to train ConvNets having deeply shared filter bases . Our experimental results show that the proposed orthogonality regularization reduces the redundancy not just in deeply shared filter bases , but also in none-shared parameters , resulting in better performance than over-parameterized counterpart networks . Next , we make convolution layers with shared parameters more expressive using a hybrid approach to sharing filter bases , in which a small number of layer-specific non-shared filter basis components are combined with shared filter basis components . With this hybrid scheme , the constructed filters can be positioned in different vector subspaces that reflect the peculiarity of individual convolution layers . We argue that these layer-specific variations contribute to increasing the representation power of the networks when a large portion of parameters is shared . Since our focus is not on pushing the state-of-the-art performance , we show the validity of our work using widely-used ResNets as a base model on image classification tasks with CIFAR and ImageNet datasets . Our experimental results demonstrate that when each filter basis is shared by up to 10 convolution layers , our method consistently outperforms counterpart ConvNet models while reducing a significant amount of parameters and computational costs . For example , our method can save up to 63.8 % of parameters and 33.4 % of FLOPs , respectively , while achieving lower test errors than much deeper counterpart models . Our parameter sharing structure and training mechanism can be applied to modern compact networks , such as MobileNets ( Howard et al. , 2017 ) and ShuffleNets ( Zhang et al. , 2018 ) with minor adaptations . Since these compact models already have decomposed convolution blocks , some parts of each block can be identified as a shareable filter basis and the rest of the layer-specific parts . 
In Experiments , we demonstrate that compact MobileNetV2 can achieve further 8-21 % parameter savings with our scheme while retaining , or improving , the performance of the original models . 2 RELATED WORK . Recursive networks and parameter sharing : Recurrent neural networks ( RNNs ) ( Graves et al. , 2013 ) have been well-studied for temporal and sequential data . As a generalization of RNNs , recursive variants of ConvNets are used extensively for visual tasks ( Socher et al. , 2011 ; Liang & Hu , 2015 ; Xingjian et al. , 2015 ; Kim et al. , 2016 ; Zamir et al. , 2017 ) . For instance , Eigen et al . ( 2014 ) explore recursive convolutional architectures that share filters across multiple convolution layers . They show that recurrence with deeper layers tends to increase performance . However , their recursive architecture shows worse performance than independent convolution layers due to overfitting . In most previous works , filters themselves are shared across layers . In contrast , we propose to share filter bases that are more fundamental and reusable building blocks to construct layer-specific filters . More recently , Jastrzebski et al . ( 2018 ) show that iterative refinement of features in ResNets suggests that deep networks can potentially leverage intensive parameter sharing . Guo et al . ( 2019 ) introduce a gate unit to determine whether to jump out of the recursive loop of convolution blocks to save computational resources . These works show that training recursive networks with naively shared blocks leads to bad performance due to the problem of gradient explosion and vanish like RNN ( Pascanu et al. , 2013 ; Vorontsov et al. , 2017 ) . In order to mitigate the problem of gradient explosion and vanish , they suggest unshared batch normalization strategy . In our work , we propose an orthogonality regularization of shared filter bases to further address this problem . Savarese & Maire ( 2019 ) ’ s work is also relevant to our work . 
In their work, the parameters of recurrent layers of ConvNets are generated by a linear combination of 1-2 parameter tensors from a global bank of templates. Though similar in spirit, our work suggests more fine-grained filter bases as more desirable building blocks for effective parameter sharing, since filter bases can be easily combined with layer-specific non-shared components for better representation power. Our results show that these layer-specific non-shared components are critical for achieving high performance. Although they achieve about 60% parameter savings, their approach does not outperform counterpart models and incurs slight increases in computational costs due to the overhead of reparameterizing tensors from the templates.

Model compression and efficient convolution block design: Reducing the storage and inference time of ConvNets has been an important research topic for both resource-constrained mobile/embedded systems and energy-hungry data centers. A number of techniques have been developed, such as filter pruning (LeCun et al., 1990; Polyak & Wolf, 2015; Li et al., 2017; He et al., 2017), low-rank factorization (Denton et al., 2014; Jaderberg et al., 2014), quantization (Han et al., 2016), and knowledge distillation (Hinton et al., 2015; Chen et al., 2017), to name a few. These compression techniques have been suggested as post-processing steps applied after initial training. Unfortunately, their accuracy is usually bounded by that of the approximated original models. By contrast, our models are trained from scratch, as in Ioannou et al. (2017)'s work, and our results show that parameter-sharing approaches can outperform the counterpart models when combined with a proper training method, while achieving significant parameter savings. Some compact networks such as ShuffleNet (Zhang et al., 2018) and MobileNet (Howard et al., 2017; Sandler et al.
, 2018) show that a carefully designed internal structure of convolution blocks achieves better capability at lower computational complexity. Our work can be applied to these compact networks since they already exploit decomposed convolutions in their efficient convolution blocks. For instance, since MobileNet's convolution blocks consist of pointwise-depthwise-pointwise convolution steps, the first two steps can be considered a reusable filter basis and the last step layer-specific coefficients. In Experiments, we show that MobileNetV2 combined with our parameter-sharing scheme outperforms the original models while saving about 8-21% of parameters.

3 DEEP RECURSIVE SHARING OF A FILTER BASIS

In this section, we discuss how to decompose typical convolution layers into more recursive units, or filter bases, and remaining layer-specific parts. We also discuss how to train ConvNets effectively when filter bases are deeply shared by repetitive convolution layers.

3.1 FILTER BASES OF CONVOLUTION LAYERS

Consider a convolution layer with $S$ input channels, $T$ output channels, and a set of filters $W = \{W_t \in \mathbb{R}^{k\times k\times S},\ t \in [1..T]\}$. Each filter $W_t$ can be decomposed using a lower-rank filter basis $W_{\text{basis}}$ and coefficients $\alpha$:
$$W_t = \sum_{r=1}^{R} \alpha^r_t W^r_{\text{basis}}, \qquad (1)$$
where $W_{\text{basis}} = \{W^r_{\text{basis}} \in \mathbb{R}^{k\times k\times S},\ r \in [1..R]\}$ is a filter basis and $\alpha = \{\alpha^r_t \in \mathbb{R},\ r \in [1..R],\ t \in [1..T]\}$ are scalar coefficients. In Equation 1, $R$ is the rank of the basis. In a typical convolution layer, output feature maps $V_t$, $t \in [1..T]$ (stacked into $V \in \mathbb{R}^{w\times h\times T}$), are obtained by the convolution between input feature maps $U \in \mathbb{R}^{w\times h\times S}$ and the filters $W_t$, $t \in [1..T]$. With Equation 1, this convolution can be rewritten as
$$V_t = U * W_t = U * \sum_{r=1}^{R} \alpha^r_t W^r_{\text{basis}} \qquad (2)$$
$$= \sum_{r=1}^{R} \alpha^r_t \left(U * W^r_{\text{basis}}\right), \quad \text{where } t \in [1..T]. \qquad (3)$$
In Equation 3, the order of the convolution operation and the linear combination of the filter basis is exchanged, using the linearity of the convolution operator. This result shows that a standard convolution layer can be replaced with two successive convolution layers, as shown in Figure 1-(b). The first decomposed convolution layer performs $R$ convolutions between $W^r_{\text{basis}}$, $r \in [1..R]$, and the input feature maps $U$, generating an intermediate feature-map basis $V_{\text{basis}} \in \mathbb{R}^{w\times h\times R}$. The second decomposed convolution layer performs pointwise 1×1 convolutions that linearly combine the $R$ intermediate feature maps $V_{\text{basis}}$ to generate the output feature maps $V$. The computational complexity of the original convolution is $O(whk^2ST)$, while the decomposed operations take $O(wh(k^2SR + RT))$. As long as $R < T$, the decomposed convolution has lower computational complexity than the original convolution. Due to this computational efficiency, many compact networks such as MobileNets and ShuffleNets also have similar block structures of decomposed convolution layers. For instance, MobileNets have repetitive convolution blocks of pointwise-depthwise-pointwise convolutions. The filters in the first two steps can be considered a reusable filter basis, and the remaining 1×1 filters in the last step can be considered layer-specific coefficients.

3.2 RECURSIVE SHARING OF A FILTER BASIS

In typical ConvNets, convolution layers have different filters $W$ and, hence, each decomposed convolution layer has its own filter basis $W_{\text{basis}}$ and coefficients $\alpha$. In contrast, our primary goal in decomposing convolution layers is to share a single filter basis (or a small number of filter bases) across many recursive convolution layers. Unlike some previous works (Jastrzebski et al., 2018; Köpüklü et al.
, 2019), in which the convolution filters $W$ themselves are shared recursively, we argue that a filter basis $W_{\text{basis}}$ is a more intrinsic and reusable building block that can be shared effectively, since a filter basis constitutes a subspace in which high-dimensional filters across many convolution layers can be approximated. Though the components of a basis only need to be independent and span a vector subspace, some specific bases are more convenient and appropriate for specific purposes. For the purpose of sharing a filter basis, we need to find an optimal filter basis $W_{\text{basis}}$ that can expedite the training of filters of shared convolution layers. Although this optimization can be done with typical stochastic gradient descent (SGD), one problem is that exploding/vanishing gradients might prevent an efficient search of the optimization space. More formally, we consider a series of $N$ decomposed convolution layers, in which a filter basis $W_{\text{basis}}$ is shared $N$ times. Let $x^i$ be the input of the $i$-th convolution layer, and $a^{i+1}$ the output of the convolution of $x^i$ with the filter basis $W_{\text{basis}}$:
$$a^i(x^{i-1}) = W_{\text{basis}}^\top x^{i-1}. \qquad (4)$$
In Equation 4, $W_{\text{basis}} \in \mathbb{R}^{k^2 S\times R}$ is a reshaped filter basis that has the basis components as its columns. We assume that the input $x$ is properly adapted (e.g., with im2col) to express convolutions as a matrix-matrix multiplication. Since $W_{\text{basis}}$ is shared across $N$ recursive convolution layers, the gradient of $W_{\text{basis}}$ for some loss function $L$ is:
$$\frac{\partial L}{\partial W_{\text{basis}}} = \sum_{i=1}^{N} \frac{\partial L}{\partial a^N} \prod_{j=i}^{N-1} \left( \frac{\partial a^{j+1}}{\partial a^j} \right) \frac{\partial a^i}{\partial W_{\text{basis}}}, \qquad (5)$$
where
$$\frac{\partial a^{j+1}}{\partial a^j} = \frac{\partial a^{j+1}}{\partial x^j} \frac{\partial x^j}{\partial a^j} = W_{\text{basis}} \frac{\partial x^j}{\partial a^j}. \qquad (6)$$
If we plug $W_{\text{basis}} \frac{\partial x^j}{\partial a^j}$ from Equation 6 into Equation 5, we can see that $\prod \frac{\partial a^{j+1}}{\partial a^j}$ is the term that makes gradients unstable, since $W_{\text{basis}}$ is multiplied many times. These exploding/vanishing gradients can be controlled to a large extent by keeping $W_{\text{basis}}$ close to orthogonal (Vorontsov et al., 2017).
For instance, if $W_{\text{basis}}$ admits an eigendecomposition, $[W_{\text{basis}}]^N$ can be rewritten as
$$[W_{\text{basis}}]^N = [Q\Lambda Q^{-1}]^N = Q\Lambda^N Q^{-1}, \qquad (7)$$
where $\Lambda$ is a diagonal matrix with the eigenvalues on its diagonal and $Q$ is the matrix of corresponding eigenvectors. If $W_{\text{basis}}$ is orthogonal, $[W_{\text{basis}}]^N$ neither explodes nor vanishes, since all the eigenvalues of an orthogonal matrix have absolute value 1. Similarly, an orthogonal shared basis ensures that forward signals neither explode nor vanish. We also need to ensure that the norm of $\frac{\partial x^j}{\partial a^j}$ in Equation 5 is bounded (Pascanu et al., 2013) for stability during the forward and backward passes. It has been shown that batch normalization after the non-linear activation at each convolution layer ensures healthy norms (Ioffe & Szegedy, 2015; Guo et al., 2019; Jastrzebski et al., 2018). For training networks, the orthogonality of shared bases can be enforced with an orthogonality regularizer. For instance, when each residual block group of a ResNet shares a filter basis for its convolution layers, the objective function $L_R$ can be defined to have an orthogonality regularizer in addition to the original loss $L$:
$$L_R = L + \lambda \sum_{g=1}^{G} \left\| W^{(g)\top}_{\text{basis}} W^{(g)}_{\text{basis}} - I \right\|^2, \qquad (8)$$
where $W^{(g)}_{\text{basis}}$ is the shared filter basis for the $g$-th residual block group and $\lambda$ is a hyperparameter.
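The stability argument of Equation 7 and the regularizer of Equation 8 can be illustrated with toy 2×2 matrices: repeated multiplication by an orthogonal matrix (a rotation) preserves norms, while a slightly scaled version explodes, and only the latter is penalized by the $\|W^\top W - I\|^2$ term. All matrix values below are invented for illustration:

```python
import math

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def fro_norm(A):
    return math.sqrt(sum(v * v for row in A for v in row))

def mat_power(A, n):
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        P = matmul2(P, A)
    return P

def ortho_penalty(A):
    """||A^T A - I||^2, the per-basis term of the regularizer in Eq. (8)."""
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    G = matmul2(At, A)
    return sum((G[i][j] - (1.0 if i == j else 0.0)) ** 2
               for i in range(2) for j in range(2))

c, s = math.cos(0.5), math.sin(0.5)
rotation = [[c, -s], [s, c]]                         # orthogonal: |eigenvalues| = 1
scaled = [[1.2 * c, -1.2 * s], [1.2 * s, 1.2 * c]]   # |eigenvalues| = 1.2

N = 30
print(fro_norm(mat_power(rotation, N)))  # stays at sqrt(2): no explode/vanish
print(fro_norm(mat_power(scaled, N)))    # grows like 1.2^N: explodes
print(ortho_penalty(rotation))           # ~0: no regularization pressure
print(ortho_penalty(scaled))             # > 0: pushed toward orthogonality
```

This mirrors the paper's point: a shared basis kept near-orthogonal keeps the repeated product in Equation 5 well-conditioned over $N$ shared layers.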
This paper addresses the problem of obtaining more compact CNNs via a parameter-sharing method. The authors propose to represent each weight filter in a low-rank subspace (as a linear combination of a low-rank filter basis), plus a set of non-shared, per-layer low-rank basis components. In this way, the shared low-rank filter basis is reused across several layers, while the non-shared per-layer components are used to enhance the model's generalization ability. Experiments are performed on the CIFAR and ImageNet datasets, using some popular CNN structures for evaluation.
Improving Sampling Accuracy of Stochastic Gradient MCMC Methods via Non-uniform Subsampling of Gradients
1 INTRODUCTION

Many MCMC methods use physics-inspired evolution, such as Langevin dynamics (Brooks et al., 2011), to utilize gradient information for efficiently exploring posterior distributions over continuous parameter spaces. However, gradient-based MCMC methods are often limited by the computational cost of computing the gradient on large data sets. Motivated by the great success of stochastic gradient methods for optimization, stochastic gradient MCMC methods (SG-MCMC) for sampling have also been gaining increasing attention. When the accurate but expensive-to-evaluate batch gradient in an MCMC method is replaced by a computationally cheaper estimate based on a subset of the data, the method is turned into a stochastic gradient version. Classical examples include SG (overdamped) Langevin Dynamics (Welling & Teh, 2011) and SG Hamiltonian Monte Carlo (Chen et al., 2014), both designed for the scalability required by machine learning tasks. However, directly replacing the batch gradient with a (uniform) stochastic one, without additional mitigation, will generally cause an MCMC method to sample from a statistical distribution different from the target, because the transition kernel of the MCMC method is corrupted by the noise of the subsampled gradient. In general, the additional noise is tolerable if the learning rate/step size is tiny or decreasing. However, when large steps are used for better efficiency, the extra noise is non-negligible and undermines the performance of downstream applications such as Bayesian inference. In this paper, we present a state-dependent non-uniform SG-MCMC algorithm termed the Exponentially Weighted Stochastic Gradients method (EWSG), which continues the efforts of uniform SG-MCMC methods toward better scalability. Our approach is based on designing the transition kernel of an SG-MCMC method to match the transition kernel of a full-gradient-based MCMC method.
This matching leads to non-uniform (in fact, exponential) weights that aim at capturing the entire state-variable distribution of the full-gradient-based MCMC method, rather than just providing an unbiased gradient estimator or reducing its variance. When focusing on the variance, the advantage of EWSG is the following: recall that the stochasticity of an SG-MCMC method can be decomposed into the intrinsic randomness of MCMC and the randomness introduced by gradient subsampling; in conventional uniform subsampling treatments, the latter randomness is independent of the former, and thus when they are coupled together, the variances add up; EWSG, on the other hand, dynamically chooses the weight of each datum according to the current state of the MCMC, and thus the variances do not add up, due to dependence. However, the gained accuracy goes beyond reduced variance: EWSG, when converged, samples from a distribution close to the invariant distribution of the full-gradient MCMC method (which has no variance of the second type), because its transition kernel (of the corresponding Markov process) is close to that of the full-gradient MCMC method. This is how better sampling accuracy can be achieved. Our main demonstration of EWSG is based on 2nd-order Langevin equations (a.k.a. inertial, kinetic, or underdamped Langevin), although it works for other MCMC methods too (e.g., Sec. F, G). To concentrate on the role of non-uniform SG weights, we will work with constant step sizes only. The fact that EWSG has lower local variance than its uniform counterpart is rigorously shown in Theorem 3, and a global non-asymptotic analysis of EWSG is given in Theorem 4 to quantify its convergence properties and demonstrate the advantage over its uniform SG counterpart.
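The paper later notes that the state-dependent weights are maintained by an inner-loop Metropolis chain over the datum index, which never requires normalizing over the full data set. The sketch below illustrates that generic mechanism with a hypothetical weight function and invented toy gradient values (EWSG's actual exponential weights come from kernel matching and are not reproduced here); the chain's empirical index frequencies approach the normalized weights:

```python
import math
import random

random.seed(1)

# Invented per-datum gradients; stand-ins for n*grad(V_i)(theta).
grads = [(-2.0, 0.5), (1.5, -1.0), (0.3, 0.9), (-0.8, -0.4)]

def weight(i, theta):
    """Hypothetical unnormalized, state-dependent index weight w_i(theta)."""
    gx, gy = grads[i]
    return math.exp(-0.5 * ((gx - theta[0]) ** 2 + (gy - theta[1]) ** 2))

def metropolis_index_step(i, theta):
    """One inner-loop Metropolis update of the datum index i.

    Proposes j uniformly and accepts with min(1, w_j / w_i), so the chain's
    stationary law over indices is proportional to w_i(theta), without ever
    summing the weights over the whole data set.
    """
    j = random.randrange(len(grads))
    if random.random() < min(1.0, weight(j, theta) / weight(i, theta)):
        return j
    return i

theta = (0.0, 0.0)
i, counts = 0, [0] * len(grads)
for _ in range(20000):
    i = metropolis_index_step(i, theta)
    counts[i] += 1

z = sum(weight(k, theta) for k in range(len(grads)))
empirical = [c / 20000 for c in counts]
target = [weight(k, theta) / z for k in range(len(grads))]
print([round(e, 2) for e in empirical], [round(t, 2) for t in target])
```

In an SG-MCMC outer loop, each parameter update would then use the gradient of the currently held index, giving a non-uniform subsample at uniform-subsampling cost per step.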
A number of experiments on synthetic and real-world data sets, across downstream tasks including Bayesian logistic regression and Bayesian neural networks, are conducted to validate our theoretical results and demonstrate the effectiveness of EWSG. In addition to improved accuracy, the convergence speed was empirically observed, in a fair comparison setup based on the same data pass, to be comparable to that of its uniform counterpart when hyper-parameters are appropriately chosen. The convergence (per data pass) was also seen to be clearly faster than that of a classical Variance Reduction (VR) approach (note: for sampling, not optimization), and EWSG hence provides a useful alternative to VR. Additional theoretical investigation of EWSG's convergence speed is provided in Sec. I. Terminology-wise, $\nabla V$ will be called the full/batch gradient; $n\nabla V_I$ with random $I$ will be called a stochastic gradient (SG), and when $I$ is uniformly distributed it will be called uniform SG/subsampling, otherwise non-uniform. When uniform SG is used to approximate the batch gradient in underdamped Langevin, the method will be referred to as (vanilla) stochastic gradient underdamped Langevin dynamics (SGULD/SGHMC)¹, and it serves as a baseline in experiments.

2 RELATED WORK

Stochastic Gradient MCMC Methods Since the seminal work of SGLD (Welling & Teh, 2011), much progress (Ahn et al., 2012; Patterson & Teh, 2013) has been made in the field of SG-MCMC. Teh et al. (2016) theoretically justified the convergence of SGLD and offered practical guidance on tuning the step size. Li et al. (2016) introduced a preconditioner and improved the stability of SGLD. We also refer to Maclaurin & Adams (2015) and Fu & Zhang (2017), which will be discussed in Sec. 5. While these works were mostly based on 1st-order (overdamped) Langevin, other dynamics were considered too. For instance, Chen et al.
(2014) proposed SGHMC, which is closely related to 2nd-order Langevin dynamics (Bou-Rabee & Sanz-Serna, 2018; Bou-Rabee et al., 2018), and Ma et al. (2015) put it in a more general framework. 2nd-order Langevin was recently shown to be faster than the 1st-order version in appropriate setups (Cheng et al., 2018b;a) and began to gain more attention.

Variance Reduction For optimization, vanilla SG methods usually find approximate solutions quickly, but the convergence slows down when an accurate solution is needed (Bach, 2013; Johnson & Zhang, 2013). SAG (Schmidt et al., 2017) improved the convergence speed of stochastic gradient methods to linear, the same as gradient descent methods with full gradient, at the expense of a large memory overhead. SVRG (Johnson & Zhang, 2013) successfully reduced this memory overhead. SAGA (Defazio et al., 2014) further improved convergence speed over SAG and SVRG. [Footnote 1: SGULD is the same as the well-known SGHMC with $\hat{B} = 0$; see (Chen et al., 2014, Eq. (13) and Section 3.3) for details. To be consistent with existing literature, we will refer to SGULD as SGHMC in the sequel.] For sampling, Dubey et al. (2016) applied VR techniques to SGLD (see also Baker et al., 2019; Chatterji et al., 2018). However, many VR methods have a large memory overhead and/or periodically use the whole data set for gradient estimation calibration, and hence can be resource-demanding. EWSG is derived by matching transition kernels of MCMC methods and improves the accuracy of the entire distribution rather than just the variance. However, it does have a consequence of variance reduction and thus can be implicitly regarded as a VR method. When compared to the classic work on VR for SG-MCMC (Dubey et al., 2016), EWSG converges faster when the same amount of data pass is used, although its sampling accuracy is below that of VR for Gaussian targets (but well above vanilla SG; Sec. 5.1).
In this sense, EWSG and VR suit different application domains: EWSG can replace vanilla SG for tasks in which the priority is speed and then accuracy, as it keeps the speed but improves the accuracy; on the other hand, VR remains the heavy weapon for accuracy-demanding scenarios. Importantly, EWSG, as a generic way to improve SG-MCMC methods, can be combined with VR too (e.g., Sec. G); thus, they are not exclusive or competitors.

Importance Sampling (IS) IS employs non-uniform weights to improve SG methods for optimization. Traditional IS uses fixed weights that do not change along iterations, and the weight computation requires prior information about the gradient terms, e.g., Lipschitz constants of the gradient (Needell et al., 2014; Schmidt et al., 2015; Csiba & Richtárik, 2018), which are usually unknown or difficult to estimate. Adaptive IS was also proposed, in which the importance is re-evaluated at each iteration; its computation usually requires the entire data set per iteration and may also require information such as an upper bound on the gradient (Zhao & Zhang, 2015; Zhu, 2016). For sampling, it is not easy to combine IS with SG (Fu & Zhang, 2017); that paper is, to our knowledge, the closest to this goal and will be compared with in Sec. 5.3. EWSG can be viewed as a way to combine (adaptive) IS with SG for efficient sampling. It requires no oracle about the gradient, nor any evaluation over the full data set. Instead, an inner-loop Metropolis chain maintains a random index that approximates a state-dependent non-uniform distribution (i.e., the weights/importance).

3 UNDERDAMPED LANGEVIN: THE BACKGROUND OF AN MCMC METHOD
Underdamped Langevin Dynamics (ULD) is
$$\begin{cases} d\theta = r\,dt \\ dr = -\left(\nabla V(\theta) + \gamma r\right)dt + \sigma\,dW \end{cases} \qquad (1)$$
where $\theta, r \in \mathbb{R}^d$ are state and momentum variables, $V$ is a potential energy function which in our context (originating from cost minimization or Bayesian inference over many data) is the sum of many terms $V(\theta) = \sum_{i=1}^{n} V_i(\theta)$, $\gamma$ is a friction coefficient, $\sigma$ is the intrinsic noise amplitude, and $W$ is a standard $d$-dimensional Wiener process. Under mild assumptions on the potential $V$ (Pavliotis, 2014), Langevin dynamics admits a unique invariant distribution $\pi(\theta, r) \propto \exp\left(-\frac{1}{T}\left(V(\theta) + \frac{\|r\|^2}{2}\right)\right)$ and is in many cases geometrically ergodic. $T$ is the temperature of the system, determined via the fluctuation-dissipation theorem $\sigma^2 = 2\gamma T$ (Kubo, 1966). The main reason for considering ULD rather than the overdamped version is that ULD can converge faster than overdamped Langevin, in particular in high-dimensional spaces (e.g., Cheng et al. (2018b;a); Tao & Ohsawa (2020)). Like the overdamped version, numerical integrators for ULD that well capture the statistical properties of the continuous process have been extensively investigated (e.g., Roberts et al. (1996); Bou-Rabee & Owhadi (2010)), and both the overdamped and underdamped integrators are friendly to derivations that will allow us to obtain explicit expressions of the non-uniform weights.

4 MAIN WORK

4.1 AN ILLUSTRATION OF THE NON-OPTIMALITY OF UNIFORM SUBSAMPLING

In many applications, cases where the data size $n$ is larger than the dimension $d$ are not uncommon. In such cases, $\{\nabla V_i\}_{i=1,\dots,n} \subset \mathbb{R}^d$ are linearly dependent and hence it is likely that there exist probability distributions $\{p_i\}_{i=1,\dots,n}$ other than the uniform one for which the gradient estimate is unbiased. This opens up the door to developing non-uniform subsampling schemes (whose weights may be $\theta$-dependent), which can reduce the additional variance introduced while maintaining unbiasedness.
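Before quantifying subsampling choices, it may help to see the full-gradient baseline of Eq. (1) in code. The following is a minimal sketch (not the paper's integrator) that discretizes ULD with a simple Euler–Maruyama-style step for a toy quadratic potential $V(\theta) = \theta^2/2$ in one dimension, whose invariant $\theta$-marginal is $N(0, T)$; all step-size and run-length choices are arbitrary:

```python
import math
import random

random.seed(42)

# Toy target: V(theta) = theta^2 / 2, so the invariant theta-marginal is N(0, T).
def grad_V(theta):
    return theta

gamma, T = 1.0, 1.0
sigma = math.sqrt(2.0 * gamma * T)   # fluctuation-dissipation: sigma^2 = 2*gamma*T
h, n_steps, burn_in = 0.01, 100_000, 10_000

theta, r = 0.0, 0.0
samples = []
for k in range(n_steps):
    # One discretization step of Eq. (1):
    # d(theta) = r dt;  dr = -(grad V(theta) + gamma*r) dt + sigma dW
    theta += r * h
    r += -(grad_V(theta) + gamma * r) * h + sigma * math.sqrt(h) * random.gauss(0, 1)
    if k >= burn_in:
        samples.append(theta)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should be close to 0 and T = 1
```

A stochastic-gradient variant would replace `grad_V` with $n\nabla V_I$ for a (uniformly or non-uniformly) sampled index $I$, which is exactly where the choice of subsampling distribution enters.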
In fact, in a reasonable setup, it turns out that an optimal way of subsampling gradients is far from uniform:

Theorem 1. Suppose that, given $\theta \in \mathbb{R}^d$, the errors of the SG approximation $b_i = n\nabla V_i(\theta) - \nabla V(\theta)$, $1 \le i \le n$, are i.i.d. absolutely continuous random vectors with possibly-$\theta$-dependent density $p(x|\theta)$. Call $p \in \mathbb{R}^n$ a sparse vector if the number of non-zero entries in $p$ is no greater than $d+1$. Then, with probability 1, the optimal probability distribution $p^\star$ that is unbiased and minimizes the trace of the covariance of $n\nabla V_I(\theta)$, i.e., the $p^\star$ which solves
$$\min_{p}\ \operatorname{Tr}\left(\mathbb{E}_{I\sim p}\left[b_I b_I^\top\right]\right) \quad \text{s.t.} \quad \mathbb{E}_{I\sim p}\left[b_I\right] = 0, \qquad (2)$$
is a sparse vector.

Despite the sparsity of $p^\star$, which seemingly suggests that one only needs at most $d+1$ gradient terms per iteration when using SG methods, it is not practical, because $p^\star$ requires solving the linear programming problem (2) in Theorem 1, for which an entire data pass is needed. Nevertheless, this result shows that uniform SG can be far from optimal and motivates us to propose an exponentially weighted stochastic gradient method, which has reduced local variance with high probability and at the same time remains efficiently implementable without necessarily using all the data per parameter update.
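Theorem 1 can be made concrete in the simplest case $d = 1$, where a sparse optimum is supported on at most $d + 1 = 2$ indices. With invented error values $b_i$ (chosen so the uniform estimator is unbiased), enumerating all unbiased two-point distributions finds one with strictly lower variance than uniform subsampling — and, as the theorem warns, finding it required looking at all the $b_i$:

```python
from itertools import combinations

# Toy 1-D case (d = 1): per-datum SG errors b_i with full-gradient mean zero.
b = [-2.0, -2.0, 1.0, 3.0]          # sum(b) == 0, so uniform p is unbiased
n = len(b)

uniform_var = sum(x * x for x in b) / n   # E[b_I^2] under uniform subsampling

# Enumerate all unbiased distributions supported on d + 1 = 2 indices:
# p_i * b_i + p_j * b_j = 0 with p_i + p_j = 1 forces
# p_i = b_j / (b_j - b_i), which lies in (0, 1) only for opposite-sign pairs.
best_var, best_support = uniform_var, None
for i, j in combinations(range(n), 2):
    if b[i] * b[j] >= 0:
        continue                      # same sign: no unbiased 2-point solution
    p_i = b[j] / (b[j] - b[i])
    var = p_i * b[i] ** 2 + (1 - p_i) * b[j] ** 2
    if var < best_var:
        best_var, best_support = var, (i, j)

print(uniform_var, best_var, best_support)
```

Here the uniform estimator has variance 4.5, while the best two-point unbiased distribution achieves 2.0; which pair wins depends on the data, which is why $p^\star$ cannot be found without a full data pass.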
The paper proposes an alternative to the uniform sampling scheme used for constructing mini-batches in stochastic gradient sampling algorithms. The proposed scheme, called Exponentially Weighted Stochastic Gradients (EWSG), is devised such that its transition kernel matches that of the full-gradient MCMC method. The proposed scheme is shown to achieve better results than the uniform sampling one.
Improving Sampling Accuracy of Stochastic Gradient MCMC Methods via Non-uniform Subsampling of Gradients
1 INTRODUCTION . Many MCMC methods use physics-inspired evolution such as Langevin dynamics ( Brooks et al. , 2011 ) to utilize gradient information for exploring posterior distributions over continuous parameter space efficiently . However , gradient-based MCMC methods are often limited by the computational cost of computing the gradient on large data sets . Motivated by the great success of stochastic gradient methods for optimization , stochastic gradient MCMC methods ( SG-MCMC ) for sampling have also been gaining increasing attention . When the accurate but expensive-to-evaluate batch gradients in a MCMC method are replaced by computationally cheaper estimates based on a subset of the data , the method is turned to a stochastic gradient version . Classical examples include SG ( overdamped ) Langevin Dynamics ( Welling & Teh , 2011 ) and SG Hamiltonian Monte Carlo ( Chen et al. , 2014 ) , all of which were designed for scalability suitable for machine learning tasks . However , directly replacing the batch gradient by a ( uniform ) stochastic one without additional mitigation will generally cause a MCMC method to sample from a statistical distribution different from the target , because the transition kernel of the MCMC method gets corrupted by the noise of subsampled gradient . In general , the additional noise is tolerable if the learning rate/step size is tiny or decreasing . However , when large steps are used for better efficiency , the extra noise is nonnegligible and undermines the performance of downstream applications such as Bayesian inference . In this paper , we present a state-dependent non-uniform SG-MCMC algorithm termed Exponentially Weighted Stochastic Gradients method ( EWSG ) , which continues the efforts of uniform SG- MCMC methods for better scalability . Our approach is based on designing the transition kernel of a SG-MCMC method to match the transition kernel of a full-gradient-based MCMC method . 
This matching leads to non-uniform ( in fact , exponential ) weights that aim at capturing the entire statevariable distribution of the full-gradient-based MCMC method , rather than just providing unbiased gradient estimator or reducing its variance . When focusing on the variance , the advantage of EWSG is the following : recall the stochasticity of a SG-MCMC method can be decomposed into the intrinsic randomness of MCMC and the randomness introduced by gradient subsampling ; in conventional uniform subsampling treatments , the latter randomness is independent of the former , and thus when they are coupled together , variances add up ; EWSG , on the other hand , dynamically chooses the weight of each datum according to the current state of the MCMC , and thus the variances do not add up due to dependence . However , the gained accuracy is beyond reduced variance , as EWSG , when converged , samples from a distribution close to the invariant distribution of the full-gradient MCMC method ( which has no variance of the 2nd type ) , because its transition kernel ( of the corresponding Markov process ) is close to that of the full-gradient-MCMC method . This is how better sampling accuracy can be achieved . Our main demonstration of EWSG is based on 2nd-order Langevin equations ( a.k.a . inertial , kinetic , or underdamped Langevin ) , although it works for other MCMC methods too ( e.g. , Sec.F , G ) . To concentrate on the role of non-uniform SG weights , we will work with constant step sizes only . The fact that EWSG has locally reduced variance than its uniform counterpart is rigorously shown in Theorem 3 , and a global non-asymptotic analysis of EWSG is given in Theorem 4 to quantify its convergence properties and demonstrate the advantage over its uniform SG counterpart . 
A number of experiments on synthetic and real world data sets , across downstream tasks including Bayesian logistic regression and Bayesian neural networks , are conducted to validate our theoretical results and demonstrate the effectiveness of EWSG . In addition to improved accuracy , the convergence speed was empirically observed , in a fair comparison setup based on the same data pass , to be comparable to its uniform counterpart when hyper-parameters are appropriately chosen . The convergence ( per data pass ) was also seen to be clearly faster than a classical Variance Reduction ( VR ) approach ( note : for sampling , not optimization ) , and EWSG hence provides a useful alternative to VR . Additional theoretical investigation of EWSG convergence speed is provided in Sec . I. Terminology-wise , ∇V will be called the full/batch-gradient , n∇VI with random I will be called stochastic gradient ( SG ) , and when I is uniform distributed it will be called a uniform SG/subsampling , otherwise non-uniform . When uniform SG is used to approximate the batchgradient in underdamped Langevin , the method will be referred to as ( vanilla ) stochastic gradient underdamped Langevin dynamics ( SGULD/SGHMC ) 1 , and it serves as a baseline in experiments . 2 RELATED WORK . Stochastic Gradient MCMC Methods Since the seminal work of SGLD ( Welling & Teh , 2011 ) , much progress ( Ahn et al. , 2012 ; Patterson & Teh , 2013 ) has been made in the field of SG-MCMC . Teh et al . ( 2016 ) theoretically justified the convergence of SGLD and offered practical guidance on tuning step size . Li et al . ( 2016 ) introduced a preconditioner and improved stability of SGLD . We also refer to Maclaurin & Adams ( 2015 ) and Fu & Zhang ( 2017 ) which will be discussed in Sec.5 . While these work were mostly based on 1st-order ( overdamped ) Langevin , other dynamics were considered too . For instance , Chen et al . 
( 2014 ) proposed SGHMC , which is closely related to 2ndorder Langevin dynamics ( Bou-Rabee & Sanz-Serna , 2018 ; Bou-Rabee et al. , 2018 ) , and Ma et al . ( 2015 ) put it in a more general framework . 2nd-order Langevin was recently shown to be faster than the 1st-order version in appropriate setups ( Cheng et al. , 2018b ; a ) and began to gain more attention . Variance Reduction For optimization , vanilla SG methods usually find approximate solutions quickly but the convergence slows down when an accurate solution is needed ( Bach , 2013 ; Johnson & Zhang , 2013 ) . SAG ( Schmidt et al. , 2017 ) improved the convergence speed of stochastic gradient methods to linear , which is the same as gradient descent methods with full gradient , at the expense of large memory overhead . SVRG ( Johnson & Zhang , 2013 ) successfully reduced this memory overhead . SAGA ( Defazio et al. , 2014 ) furthers improved convergence speed over SAG and SVRG . For 1SGULD is the same as the well-known SGHMC with B̂ = 0 , see ( Chen et al. , 2014 , Eq ( 13 ) and section 3.3 ) for details . To be consistent with existing literature , we will refer SGULD as SGHMC in the sequel . sampling , Dubey et al . ( 2016 ) applied VR techniques to SGLD ( see also ( Baker et al. , 2019 ; Chatterji et al. , 2018 ) ) . However , many VR methods have large memory overhead and/or periodically use the whole data set for gradient estimation calibration , and hence can be resource-demanding . EWSG is derived based on matching transition kernels of MCMC and improves the accuracy of the entire distribution rather than just the variance . However , it does have a consequence of variance reduction and thus can be implicitly regarded as a VR method . When compared to the classic work on VR for SG-MCMC ( Dubey et al. , 2016 ) , EWSG converges faster when the same amount of data pass is used , although its sampling accuracy is below that of VR for Gaussian targets ( but well above vanilla SG ; Sec.5.1 ) . 
In this sense , EWSG and VR suit different application domains : EWSG can replace vanilla SG for tasks in which the priority is speed and then accuracy , as it keeps the speed but improves the accuracy ; on the other hand , VR remains to be the heavy weapon for accuracydemanding scenarios . Importantly , EWSG , as a generic way to improve SG-MCMC methods , can be combined with VR too ( e.g. , Sec.G ) ; thus , they are not exclusive or competitors . Importance Sampling ( IS ) IS employs nonuniform weights to improve SG methods for optimization . Traditional IS uses fixes weights that do not change along iterations , and the weight computation requires prior information of gradient terms , e.g. , Lipschitz constants of gradient ( Needell et al. , 2014 ; Schmidt et al. , 2015 ; Csiba & Richtárik , 2018 ) , which are usually unknown or difficult to estimate . Adaptive IS was also proposed in which the importance was re-evaluated at each iteration , whose computation usually required the entire data set per iteration and may also require information like the upper bound of gradient ( Zhao & Zhang , 2015 ; Zhu , 2016 ) . For sampling , it is not easy to combine IS with SG ( Fu & Zhang , 2017 ) ; the same paper is , to our knowledge , the closest to this goal and will be compared with in Sec.5.3 . EWSG can be viewed as a way to combine ( adaptive ) IS with SG for efficient sampling . It require no oracle about the gradient , nor any evaluation over the full data set . Instead , an inner-loop Metropolis chain maintains a random index that approximates a state-dependent non-uniform distribution ( i.e . the weights/importance ) . 3 UNDERDAMPED LANGEVIN : THE BACKGROUND OF A MCMC METHOD . 
Underdamped Langevin Dynamics ( ULD ) is
$$\begin{cases} d\theta = r\,dt \\ dr = -\big(\nabla V(\theta) + \gamma r\big)\,dt + \sigma\,dW \end{cases} \qquad (1)$$
where $\theta , r \in \mathbb{R}^d$ are state and momentum variables , $V$ is a potential energy function which in our context ( originating from cost minimization or Bayesian inference over many data ) is the sum of many terms , $V(\theta) = \sum_{i=1}^n V_i(\theta)$ , $\gamma$ is a friction coefficient , $\sigma$ is the intrinsic noise amplitude , and $W$ is a standard $d$-dimensional Wiener process . Under mild assumptions on the potential $V$ ( Pavliotis , 2014 ) , Langevin dynamics admits a unique invariant distribution $\pi(\theta, r) \propto \exp\!\big( -\tfrac{1}{T}\big( V(\theta) + \tfrac{\|r\|^2}{2} \big) \big)$ and is in many cases geometrically ergodic . $T$ is the temperature of the system , determined via the fluctuation-dissipation theorem $\sigma^2 = 2\gamma T$ ( Kubo , 1966 ) . The main reason for considering ULD rather than the overdamped version is that ULD can converge faster than overdamped Langevin , in particular in high-dimensional spaces ( e.g. , Cheng et al . ( 2018b ; a ) ; Tao & Ohsawa ( 2020 ) ) . Like the overdamped version , numerical integrators for ULD that capture the statistical properties of the continuous process well have been extensively investigated ( e.g. , Roberts et al . ( 1996 ) ; Bou-Rabee & Owhadi ( 2010 ) ) , and both the overdamped and underdamped integrators are friendly to the derivations that will allow us to obtain explicit expressions for the non-uniform weights . 4 MAIN WORK . 4.1 AN ILLUSTRATION OF THE NON-OPTIMALITY OF UNIFORM SUBSAMPLING . In many applications , it is not uncommon for the data size $n$ to be larger than the dimension $d$ . In such cases , $\{\nabla V_i\}_{i=1,2,\dots,n} \subset \mathbb{R}^d$ are linearly dependent , and hence it is likely that there exist probability distributions $\{p_i\}_{i=1,2,\dots,n}$ other than the uniform one for which the gradient estimate remains unbiased . This opens the door to developing non-uniform subsampling schemes ( whose weights may be $\theta$-dependent ) , which can help reduce the additional variance introduced while maintaining unbiasedness .
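For concreteness, a minimal Euler-type discretization of (1) with the full gradient (essentially the SGHMC update without subsampling) can be sketched as follows. The step size, friction, and one-dimensional target below are our own illustrative choices, not from the paper:

```python
import numpy as np

def uld_em(grad_V, h, gamma, T, n_steps, rng):
    """Euler-type discretization of underdamped Langevin dynamics (eq. 1):
    theta' = r,  r' = -(grad V(theta) + gamma * r) + noise,
    with the noise amplitude set by fluctuation-dissipation: sigma^2 = 2*gamma*T."""
    sigma = np.sqrt(2.0 * gamma * T)
    theta, r = 0.0, 0.0
    thetas = np.empty(n_steps)
    for k in range(n_steps):
        theta = theta + r * h
        r = r - (grad_V(theta) + gamma * r) * h + sigma * np.sqrt(h) * rng.standard_normal()
        thetas[k] = theta
    return thetas

# Target: V(theta) = theta^2 / 2 at T = 1, so the theta-marginal should be ~ N(0, 1).
rng = np.random.default_rng(0)
samples = uld_em(lambda th: th, h=0.05, gamma=2.0, T=1.0, n_steps=200_000, rng=rng)
print(np.var(samples[20_000:]))   # close to 1, up to O(h) discretization bias
```

Replacing `grad_V` with a minibatch estimate of $\nabla V$ turns this into the stochastic-gradient version whose subsampling distribution is the subject of the next section.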
In fact , in a reasonable setup , it turns out that an optimal way of subsampling gradients is far from uniform :

Theorem 1 . Suppose that , given $\theta \in \mathbb{R}^d$ , the errors of the SG approximation $b_i = n\nabla V_i(\theta) - \nabla V(\theta)$ , $1 \le i \le n$ , are i.i.d . absolutely continuous random vectors with possibly-$\theta$-dependent density $p(x|\theta)$ . Call $p \in \mathbb{R}^n$ a sparse vector if the number of non-zero entries in $p$ is no greater than $d+1$ . Then with probability 1 , the optimal probability distribution $p^\star$ that is unbiased and minimizes the trace of the covariance of $n\nabla V_I(\theta)$ , i.e. , the $p^\star$ which solves
$$\min_p \; \operatorname{Tr}\big( \mathbb{E}_{I\sim p}[ b_I b_I^\top ] \big) \quad \text{s.t.} \quad \mathbb{E}_{I\sim p}[ b_I ] = 0 , \qquad (2)$$
is a sparse vector .

Despite the sparsity of $p^\star$ , which seemingly suggests that one needs at most $d+1$ gradient terms per iteration when using SG methods , this is not practical , because $p^\star$ requires solving the linear programming problem ( 2 ) in Theorem 1 , for which an entire data pass is needed . Nevertheless , this result shows that uniform SG can be far from optimal , and it motivates us to propose an exponentially weighted stochastic gradient method , which has reduced local variance with high probability and at the same time remains efficiently implementable without necessarily using all the data per parameter update .
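Problem (2) is a linear program in $p$: the objective $\operatorname{Tr}(\mathbb{E}_{I\sim p}[b_I b_I^\top]) = \sum_i p_i \|b_i\|^2$ is linear and the unbiasedness and normalization constraints are linear equalities. The sparsity of an optimal basic solution can therefore be checked numerically. The sketch below is our illustration on synthetic errors $b_i$ (centered so that the uniform distribution is feasible), not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d = 50, 3
b = rng.standard_normal((n, d))   # stand-ins for the SG errors b_i
b -= b.mean(axis=0)               # center: the uniform p is then feasible (unbiased)

# min_p sum_i p_i ||b_i||^2  s.t.  sum_i p_i b_i = 0,  sum_i p_i = 1,  p >= 0
cost = (b ** 2).sum(axis=1)
A_eq = np.vstack([b.T, np.ones(n)])            # d + 1 equality constraints
b_eq = np.concatenate([np.zeros(d), [1.0]])
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

p_star = res.x
print(np.sum(p_star > 1e-9))          # a basic solution has at most d + 1 = 4 nonzeros
print(cost @ p_star <= cost.mean())   # never worse than uniform subsampling
```

The sparsity in Theorem 1 is exactly the standard fact that a vertex of this LP's feasible polytope has at most as many nonzero coordinates as there are equality constraints, here $d+1$.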
This paper proposes a non-uniform sampling method for stochastic gradient minibatches in SG-MCMC. By sampling the indices of the stochastic gradients according to a parameter-dependent (exponentially weighted) non-uniform distribution, the paper shows in Theorem 2 that the method exactly matches the transition kernel of full-batch gradient MCMC (for underdamped Langevin dynamics). Although this exponentially weighted distribution is intractable, the paper presents an approximate sampler in Algorithm 1 (which uses both a 1-step Metropolis-Hastings approximation and the deterministic approximation $r_{k+1} = r_k$). Finally, the paper compares Algorithm 1 with other SG-MCMC methods on a small synthetic Gaussian example, Bayesian logistic regression on the Covertype data, and a Bayesian neural network on MNIST.
NNGeometry: Easy and Fast Fisher Information Matrices and Neural Tangent Kernels in PyTorch
Practical and theoretical advances in deep learning have been accelerated by the development of an ecosystem of libraries allowing practitioners to focus on developing new techniques instead of spending weeks or months reinventing the wheel . In particular , automatic differentiation frameworks such as Theano ( Bergstra et al. , 2011 ) , Tensorflow ( Abadi et al. , 2016 ) or PyTorch ( Paszke et al. , 2019 ) have been the backbone of the leap in performance of the last decade 's increasingly deep neural networks , as they allow average gradients to be computed efficiently , as used in the stochastic gradient algorithm or variants thereof . While versatile in the neural networks that can be designed by varying the type and number of their layers , these frameworks are nonetheless specialized to the very task of computing average gradients , so more advanced techniques can be burdensome to implement . While the popularity of neural networks has grown thanks to their ever-improving performance , other techniques have emerged ; amongst them we highlight some involving Fisher Information Matrices ( FIM ) and Neural Tangent Kernels ( NTK ) . Approximate 2nd-order ( Schraudolph , 2002 ) or natural gradient techniques ( Amari , 1998 ) aim at accelerating training , elastic weight consolidation ( Kirkpatrick et al. , 2017 ) proposes to fight catastrophic forgetting in continual learning , and WoodFisher ( Singh & Alistarh , 2020 ) tackles the problem of network pruning so as to minimize the computational footprint while retaining prediction capability . These 3 methods all use the Fisher Information Matrix when formalizing the problem they aim to solve , but resort to different approximations when it comes to implementation . Similarly , following the work of Jacot et al . ( 2018 ) , a line of work studies the NTK either in its limiting infinite-width regime , or during training of actual finite-size networks .
All of these papers start by formalizing the problem at hand in a very concise mathematical formula , then face the experimental challenge that computing the FIM or NTK involves operations for which off-the-shelf automatic differentiation libraries are not well adapted . An even greater obstacle comes from the fact that these matrices scale with the number of parameters ( for the FIM ) or the number of examples in the training set ( for the empirical NTK ) . This is prohibitively large for modern neural networks involving millions of parameters or large datasets , a problem circumvented by a series of techniques to approximate the FIM ( Ollivier , 2015 ; Martens & Grosse , 2015 ; George et al. , 2018 ) . NNGeometry aims at making use of these approximations effortless , so as to accelerate the development or analysis of new techniques , allowing more time to be spent on the theory and less on fighting implementation bugs . NNGeometry 's interface is designed to be as close as possible to the math formulas . In summary , this paper and library contribute :
• We introduce NNGeometry by describing and motivating design choices :
– A unified interface for all FIM and NTK operations , regardless of how these are approximated .
– Implicit operations for the ability to scale to large networks .
• Using NNGeometry , we obtain new empirical insights on FIMs and NTKs :
– We compare different approximations in different scenarios .
– We scale some NTK evolution experiments to TinyImagenet .
1 PRELIMINARIES . 1.1 NETWORK LINEARIZATION . Neural networks are parametric functions $f(x, w) : \mathcal{X} \times \mathbb{R}^d \to \mathbb{R}^c$ , where $x \in \mathcal{X}$ are covariates from an input space and $w \in \mathbb{R}^d$ are the network 's parameters , arranged in layers composed of weight matrices and biases . The function returns a value in $\mathbb{R}^c$ , such as the $c$ scores in softmax classification , or $c$ real values in $c$-dimensional regression .
Neural networks are trained by iteratively adjusting their parameters $w^{(t+1)} \leftarrow w^{(t)} + \delta w^{(t)}$ , with steps $\delta w^{(t)}$ typically computed using the stochastic gradient algorithm or variants thereof , in order to minimize the empirical risk of a loss function . In machine learning , understanding and being able to control the properties of the solution obtained by an algorithm is of crucial interest , as it can provide generalization guarantees , or help design more efficient or accurate algorithms . Contrary to ( kernelized ) linear models , where closed-form expressions of the empirical risk minimizer exist , deep networks are non-linear functions , whose generalization properties and learning dynamics are not yet fully understood . Amongst the recent advances toward improving the theory is the study of the linearization ( in $w$ ) of the deep network function $f(x, w)$ :
$$f(x, w + \delta w) = f(x, w) + J(x, w)\,\delta w + o(\|\delta w\|)^1 \qquad (1)$$
where $J(x, w) = \frac{\partial f(x, w)}{\partial w}$ is the Jacobian with respect to the parameters $w$ , computed at $(w, x)$ , mapping changes $\delta w$ in parameter space to corresponding changes in output space via the identity $\delta f(x, w, \delta w) = J(x, w)\,\delta w$ . For tiny steps $\delta w$ we neglect the term $o(\|\delta w\|)$ , so $f$ is close to its linearization . This happens , for instance , at small step sizes , or in the large-width limit with the specific parameter initialization scheme proposed by Jacot et al . ( 2018 ) . ¹The Landau notation $o$ ( pronounced " little-o " ) denotes a function whose exact value is irrelevant , with the property that $\lim_{x\to 0} o(x)/x = 0$ , in other words one that is negligible compared to $x$ for small $x$ . 1.2 PARAMETER SPACE METRICS AND FISHER INFORMATION MATRIX . While neural networks are trained by tuning their parameters $w$ , the end goal of machine learning is not to find the best parameter values , but rather to find good functions , in a sense that depends on the task at hand .
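The linearization in equation 1 is easy to check numerically. Below is a self-contained NumPy sketch (a tiny hand-rolled network and a finite-difference Jacobian, both our own illustrative stand-ins, not NNGeometry code): the residual after subtracting $J(x, w)\,\delta w$ shrinks quadratically, i.e., it is $o(\|\delta w\|)$.

```python
import numpy as np

def f(x, w):
    """Tiny network f(x, w): w packs a 2x2 weight matrix and a 2-vector readout."""
    W1, w2 = w[:4].reshape(2, 2), w[4:]
    return np.tanh(W1 @ x) @ w2          # scalar output (c = 1)

def jacobian(x, w, eps=1e-6):
    """Central-difference Jacobian J(x, w) = df/dw, one entry per parameter."""
    J = np.empty_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        J[i] = (f(x, w + e) - f(x, w - e)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
x, w = rng.standard_normal(2), rng.standard_normal(6)
J = jacobian(x, w)
dw = 1e-3 * rng.standard_normal(6)
err = abs(f(x, w + dw) - (f(x, w) + J @ dw))
print(err)   # tiny: quadratic in ||dw||, hence o(||dw||)
```

Halving `dw` roughly quarters `err`, which is exactly the behavior the little-o term in equation 1 promises.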
For instance , different parameter values can represent the same function ( Dinh et al. , 2017 ) . Conversely , two parameter space steps $\delta w_1$ and $\delta w_2$ with the same Euclidean norm can produce very different changes in the function ( $\delta f(x, w, \delta w_1) \neq \delta f(x, w, \delta w_2)$ ) . In order to quantify changes of a function , one generally defines a distance² on the function space . Examples of such distances are the $L_k$-norms , Wasserstein distances , or the KL divergence used in information geometry . To each of these function space distances corresponds a parameter space metric . We continue our exposition by focusing on the KL divergence , which is closely related to the Fisher Information Matrix , but our library can be used for other function space distances . Suppose $f$ is interpreted as the log-probability of a density $p$ : $\log p(x, w) = f(x, w)$ . The KL divergence gives a sense of how much the probability distribution changes when adding a small increment $\delta w$ to the parameters of $f(x, w)$ . We can approximate it as :
$$\mathrm{KL}\big( p(x, w) \,\|\, p(x, w + \delta w) \big) = \int_{x\in\mathcal{X}} \log\left( \frac{p(x, w)}{p(x, w + \delta w)} \right) dp(x, w) \qquad (2)$$
$$= \frac{1}{2} \int_{x\in\mathcal{X}} \left( \frac{1}{p(x, w)}\, J(x, w)\,\delta w \right)^2 dp(x, w) + o(\|\delta w\|^2) \qquad (3)$$
We use this form ( derived in the appendix ) in order to emphasize how steps $\delta w$ in parameter space affect distances measured on the function space : equation 3 is the result of i ) taking a step $\delta w$ in parameter space ; ii ) multiplying by $J(x, w)$ to push the change to the function space ; iii ) weighting this function space change by $p(x, w)^{-1}$ ; iv ) squaring and summing . In particular , because of the properties of the KL divergence , no second derivative of $f$ is involved , even though equation 3 is equivalent to taking the 2nd-order Taylor series expansion of the KL divergence .
We can rewrite this more concisely as :
$$\mathrm{KL}\big( p(x, w) \,\|\, p(x, w + \delta w) \big) = \frac{1}{2}\, \delta w^\top F_w\, \delta w + o(\|\delta w\|^2) \qquad (4)$$
which uses the $d \times d$ FIM $F_w = \int_{x\in\mathcal{X}} \frac{1}{p(x, w)^2}\, J(x, w)^\top J(x, w)\, dp(x, w)$ . In particular , we can now define the norm $\|\delta w\|_{F_w}^2 = \delta w^\top F_w\, \delta w$ used in the natural gradient algorithm ( Amari ( 1998 ) ; also see Martens ( 2020 ) for a more thorough discussion of the FIM ) , in elastic weight consolidation ( Kirkpatrick et al. , 2017 ) , and in pruning ( Singh & Alistarh , 2020 ) . Other quantities share the same structure of a covariance of parameter space vectors , such as the covariance of loss gradients in TONGA ( Roux et al. , 2008 ) , the second moment of loss gradients³ ( Kunstner et al. , 2019 ; Thomas et al. , 2020 ) , or posterior covariances in Bayesian deep learning ( e.g. , in Maddox et al . ( 2019 ) ) . 1.3 NEURAL TANGENT KERNEL . Another very active line of research around the linearization of equation 1 takes inspiration from the rich literature on kernel methods by defining the neural tangent kernel ( NTK ) :
$$k_w(x, y) = J(x, w)\, J(y, w)^\top \qquad (5)$$
In the infinite-width limit , Jacot et al . ( 2018 ) have shown that the tangent kernel remains constant through training with gradient descent , which allows kernel learning theory to be applied directly to deep learning . While this regime is of theoretical interest , it arguably does not explain what happens at finite width , where the NTK evolves during training . While kernels are functions of the whole input space $\mathcal{X} \times \mathcal{X}$ , we often only have access to a limited number of samples in a dataset . We thus resort to using the kernel evaluated at points $x_i$ of a training or test set , called the Gram matrix $(K_w)_{ij} = k_w(x_i, x_j)$ . ²We here use the notion of distance informally . ³The second moment of loss gradients is sometimes called the empirical Fisher .
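Equation 4 can be verified numerically on the smallest possible example: a categorical distribution parameterized by softmax logits, whose FIM has the standard closed form $\operatorname{diag}(p) - p p^\top$. This toy check is ours, not part of NNGeometry:

```python
import numpy as np

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

rng = np.random.default_rng(0)
w = rng.standard_normal(3)
p = softmax(w)

# FIM of the softmax-categorical model: E_y[(e_y - p)(e_y - p)^T] = diag(p) - p p^T
F = np.diag(p) - np.outer(p, p)

dw = 1e-3 * rng.standard_normal(3)
q = softmax(w + dw)
kl = np.sum(p * np.log(p / q))      # exact KL(p_w || p_{w + dw})
quad = 0.5 * dw @ F @ dw            # second-order expansion via the FIM
print(kl, quad)                     # agree up to o(||dw||^2)
```

The two printed numbers match to several digits, illustrating that the FIM fully captures the local KL geometry without any second derivatives of $f$.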
Note that when the output space is multidimensional with dimension $c$ , $K_w$ is in fact a 4d tensor . 2 DESIGN AND IMPLEMENTATION . 2.1 DIFFICULTIES . Current deep learning frameworks such as PyTorch and Tensorflow are well adapted to neural network training , i.e. , computing average gradients over parameters , as used in optimizers such as Adam and others . However , when moving to more advanced algorithms or analysis techniques involving FIMs and NTKs , practitioners typically have to hack the framework 's internal mechanisms , which is time consuming , error prone , and results in each project having its own slightly different implementation of the very same technique . We here list the difficulties in computing FIMs and NTKs using current frameworks . Per-example gradients : FIMs and NTKs require per-example Jacobians $J(x_i, w)$ over a dataset $(x_i)_i$ . These can be obtained by looping through the examples , but at the cost of forgoing mini-batched operations , thus missing the benefit of using GPUs . NNGeometry 's Jacobian generator extensively uses efficient techniques such as that of Goodfellow ( 2015 ) . Memory usage and computational cost : a FIM is a $d \times d$ matrix , where $d$ is the total number of parameters . With a memory cost in $O(d^2)$ , this is prohibitively costly even for moderate-size networks , and typical linear algebra operations have a computational cost of either $O(d^2)$ ( e.g. , matrix-vector product ) or even $O(d^3)$ ( e.g. , matrix inverse ) . NNGeometry instead comes with recent , less memory-intensive approximations .
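The per-example gradient difficulty has a well-known fix for linear layers (Goodfellow, 2015): given the batch of layer inputs and the backpropagated output gradients, all per-example weight gradients are a single batched outer product. A NumPy sketch with made-up shapes (our illustration of the trick, not NNGeometry's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
B, d_in, d_out = 8, 5, 3
X = rng.standard_normal((B, d_in))    # batch of inputs to a linear layer y = W x
G = rng.standard_normal((B, d_out))   # per-example gradients of the loss w.r.t. y

# All B per-example weight gradients at once: grad_b = outer(G[b], X[b])
per_example = np.einsum('bo,bi->boi', G, X)        # shape (B, d_out, d_in)

# Reference: the naive per-example loop the text warns about
looped = np.stack([np.outer(G[b], X[b]) for b in range(B)])
print(np.allclose(per_example, looped))   # True
```

The batched version runs as one vectorized operation, which is what restores GPU efficiency compared to the per-example loop.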
This paper introduces a new PyTorch library for computing Fisher Information Matrices (FIM) and Neural Tangent Kernels (NTK) in deep learning, with applications ranging from Frobenius-norm regularization and second-order optimization to generalization analysis. The authors begin by providing background on Fisher matrices and the NTK and then present the main components of their proposed NNGeometry library (consisting of Layer Collection, Generator, and Concrete Representations modules). A brief experimental study is provided in the last part of the paper.
This paper describes a new PyTorch package, NNGeometry, for computing complicated neural network objects, such as the Fisher Information Matrix (FIM) and the Neural Tangent Kernel (NTK). The package uses an abstract representation to allow the user to implicitly choose between different approximations to these objects and automatically makes a bunch of efficient choices "under the hood" for the user. The paper concludes with a selection of experiments that compare these approximations as well as verify their validity against Monte-Carlo estimates of these matrices.
Distributional Sliced-Wasserstein and Applications to Generative Modeling
Sliced-Wasserstein distance ( SW ) and its variant , Max Sliced-Wasserstein distance ( Max-SW ) , have been widely used in recent years due to their fast computation and scalability even when the probability measures lie in a very high-dimensional space . However , SW requires many unnecessary projection samples to approximate its value , while Max-SW uses only the single most important projection , ignoring the information in other useful directions . To account for these weaknesses , we propose a novel distance , named Distributional Sliced-Wasserstein distance ( DSW ) , that finds an optimal distribution over projections that can balance between exploring distinctive projecting directions and the informativeness of the projections themselves . We show that DSW is a generalization of Max-SW , and that it can be computed efficiently by searching for the optimal push-forward measure over a set of probability measures on the unit sphere satisfying certain regularizing constraints that favor distinct directions . Finally , we conduct extensive experiments with large-scale datasets to demonstrate the favorable performance of the proposed distances over previous slice-based distances in generative modeling applications . 1 INTRODUCTION . Optimal transport ( OT ) is a classical problem in mathematics and operations research . Due to its appealing theoretical properties and flexibility in practical applications , it has recently become an important tool in the machine learning and statistics community ; see , for example , ( Courty et al. , 2017 ; Arjovsky et al. , 2017 ; Tolstikhin et al. , 2018 ; Gulrajani et al. , 2017 ) and references therein . The main usage of OT is to provide a distance , named the Wasserstein distance , to measure the discrepancy between two probability distributions . However , this distance suffers from expensive computational complexity , which is the main obstacle to using OT in practical applications .
There have been two main approaches to overcoming the high computational complexity : either approximate the value of OT , or apply OT adaptively to specific situations . The first approach was initiated by Cuturi ( 2013 ) , using an entropic regularizer to speed up the computation of OT ( Sinkhorn , 1967 ; Knight , 2008 ) . The entropic regularization approach has demonstrated its usefulness in several application domains ( Courty et al. , 2014 ; Genevay et al. , 2018 ; Bunne et al. , 2019 ) . Along this direction , several works proposed efficient algorithms for solving the entropic OT ( Altschuler et al. , 2017 ; Lin et al. , 2019b ; a ) as well as methods to stabilize these algorithms ( Chizat et al. , 2018 ; Peyré & Cuturi , 2019 ; Schmitzer , 2019 ) . However , these algorithms have complexities of order $O(k^2)$ , where $k$ is the number of supports . This is expensive when the OT must be computed repeatedly , especially when learning the data distribution . ∗The work was finished when Nhat Ho worked at VinAI Research in the summer of 2020 . The second approach , known as `` slicing '' , takes a rather different perspective . It leverages two key ideas : the closed-form expression of OT between two distributions in one-dimensional space , and the transformation of a distribution into a set of projected one-dimensional distributions by the Radon transform ( RT ) ( Helgason , 2010 ) . The most popular proposal in this direction is the Sliced-Wasserstein ( SW ) distance ( Bonneel et al. , 2015 ) , which samples projecting directions uniformly over the unit sphere in the data ambient space and takes the expectation of the resulting one-dimensional OT distances . The SW distance hence has a significantly lower computational cost than the original Wasserstein distance and is more scalable than the first approach .
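The SW recipe just described, uniform random directions plus the sorted one-dimensional closed form, fits in a few lines. This NumPy sketch is our own minimal implementation for equal-size empirical measures, illustrating the Monte Carlo estimator discussed in the rest of this section:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj, p=2, rng=None):
    """Monte Carlo estimate of SW_p between two equal-size empirical measures."""
    if rng is None:
        rng = np.random.default_rng()
    thetas = rng.standard_normal((n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)   # uniform on the sphere
    total = 0.0
    for theta in thetas:
        x1d, y1d = np.sort(X @ theta), np.sort(Y @ theta)     # 1-D closed form
        total += np.mean(np.abs(x1d - y1d) ** p)
    return (total / n_proj) ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
Y = rng.standard_normal((500, 10)) + 2.0    # second measure shifted by 2 everywhere
print(sliced_wasserstein(X, X, 100, rng=rng))   # 0.0: identical measures
print(sliced_wasserstein(X, Y, 100, rng=rng))   # clearly positive: shift detected
```

Each projection costs only a sort, $O(n \log n)$ in the number of support points, which is the computational advantage over solving the full $O(k^2)$ OT problem.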
Due to its solid statistical guarantees and efficient computation , the SW distance has been successfully applied to a variety of practical tasks ( Deshpande et al. , 2018 ; Liutkus et al. , 2019 ; Kolouri et al. , 2018 ; Wu et al. , 2019 ; Deshpande et al. , 2019 ) , where it has been shown to perform comparably to other distances and divergences between probability distributions . However , there is an inherent bottleneck in computing the SW distance : the expectation with respect to the uniform distribution over projections is intractable , so the Monte Carlo method is employed to approximate it . Nevertheless , drawing from a uniform distribution over directions in high dimensions can result in an overwhelming number of irrelevant directions , especially when the actual data lies on a low-dimensional manifold . Hence , SW typically needs a large number of samples to yield an accurate estimate of the discrepancy . At the other extreme , the Max Sliced-Wasserstein ( Max-SW ) distance ( Deshpande et al. , 2019 ) uses only the one most important direction to distinguish the probability distributions . However , other potentially relevant directions are ignored by Max-SW ; therefore , Max-SW can miss some important differences between two distributions in high dimensions . We note that the linear projections in the Radon transform can be replaced by non-linear projections , resulting in the generalized sliced-Wasserstein distance and its variants ( Beylkin , 1984 ; Kolouri et al. , 2019 ) . Apart from these main directions , there are also a few proposals that try either to modify them or to combine the advantages of the above-mentioned approaches . In particular , Paty & Cuturi ( 2019 ) extended the idea of the max-sliced distance to a max-subspace distance by finding an optimal orthogonal subspace .
However , this approach is computationally expensive , since it could not exploit the closed-form of the one-dimensional Wasserstein distance . Another approach named the Projected Wasserstein distance ( PWD ) , which was proposed in ( Rowland et al. , 2019 ) , uses sliced decomposition to find multiple one-dimension optimal transport maps . Then , it computes the average cost of those maps equally in the original dimension . Our contributions . Our paper also follows the slicing approach . However , we address key friction in this general line of work : how to obtain a relatively small number of slices simultaneously to maintain the computational efficiency , but at the same time , cover the major differences between two high-dimensional distributions . We take a probabilistic view of slicing by using a probability measure on the unit sphere to represent how important each direction is . From this viewpoint , SW uses the uniform distribution while Max-SW searches for the best delta-Dirac distribution over the projections , both can be considered as special cases . In this paper , we propose to search for an optimal distribution of important directions . We regularize this distribution such that it prefers directions that are far away from one another , hence encouraging an efficient exploration of the space of directions . In the case of no regularization , our proposed method recovers max- ( generalized ) SW as a special case . In summary , our main contributions are two-fold : 1 . First , we introduce a novel distance , named Distributional Sliced-Wasserstein distance ( DSW ) , to account for the issues of previous sliced distances . Our main idea is to search for not just a single most important projection , but an optimal distribution over projections that could balance between an expansion of the area around important projections and the informativeness of projections themselves , i.e. , how well they can distinguish the two target probability measures . 
We show that DSW is a proper metric in the probability space and possesses appealing statistical and computational properties as the previous sliced distances . 2 . Second , we apply the DSW distance to generative modeling tasks based on the generative adversarial framework . The extensive experiments on real and large-scale datasets show that DSW distance significantly outperforms the SW and Max-SW distances under similar computational time on these tasks . Furthermore , the DSW distance helps model distribution converge to the data distribution faster and provides more realistic generated images than the SW and Max-SW distances . Organization . The remainder of the paper is organized as follows . In Section 2 , we provide backgrounds for Wasserstein distance and its slice-based versions . In Section 3 , we propose distributional ( generalized ) sliced-Wasserstein distance and analyze some of its theoretical properties . Section 4 includes extensive experiment results followed by discussions in Section 5 . Finally , we defer the proofs of key results and extra materials in the Appendices . Notation . For any θ , θ′ ∈ Rd , cos ( θ , θ′ ) = θ > θ′ ‖θ‖‖θ′‖ , where ‖.‖ is ` 2 norm . For any d ≥ 2 , S d−1 denotes the unit sphere in d dimension in ` 2 norm . Furthermore , δ denotes the Dirac delta function , and 〈· , ·〉 is the Euclidean inner-product . For any p ≥ 1 , Lp ( Rd ) is the set of real-valued functions on Rd with finite p-th moment . 2 BACKGROUND . In this section , we provide necessary backgrounds for the ( generalized ) Radon transform , the Wasserstein , and sliced-Wasserstein distances . 2.1 WASSERSTEIN DISTANCE . We start with a formal definition of Wasserstein distance . For any p ≥ 1 , we define Pp ( Rd ) as the set of Borel probability measures with finite p-th moment defined on a given metric space ( Rd , ‖.‖ ) . For any probability measures µ , ν defined on X , Y ⊆ Rd , we denote their corresponding probability density functions as Iµ and Iν . 
The Wasserstein distance of order p between µ and ν is given by ( Villani , 2008 ; Peyré & Cuturi , 2019 ) : Wp ( µ , ν ) : = ( inf π∈Π ( µ , ν ) ∫ X×Y ‖x− y‖pdπ ( x , y ) ) 1 p , where Π ( µ , ν ) is a set of all transportation plans π such that the marginal distributions of π are µ and ν , respectively . In order to simplify the presentation , we abuse the notation by using both Wp ( µ , ν ) and Wp ( Iµ , Iν ) interchangeably for the Wasserstein distance between µ and ν . When µ and ν are one-dimension measures , the Wasserstein distance between µ and ν has a closed-form expression Wp ( µ , ν ) = ( ∫ 1 0 |F−1µ ( z ) − F−1ν ( z ) |pdz ) 1/p where Fµ and Fν are the cumulative distribution function ( CDF ) of Iµ and Iν , respectively . 2.2 ( GENERALIZED ) RADON TRANSFORMS . Now , we review ( generalized ) Radon transform maps , which are key to the notion of ( generalized ) sliced-Wasserstein distance and its variants . The Radon transform ( RT ) maps a function I ∈ L1 ( Rd ) to the space of functions defined over space of lines in Rd . In particular , for any t ∈ R and direction θ ∈ Sd−1 , the RT is defined as follows ( Helgason , 2010 ) : RI ( t , θ ) : = ∫ Rd I ( x ) δ ( t− 〈x , θ〉 ) dx . The generalized Radon transform ( GRT ) ( Beylkin , 1984 ) extends the original one from integration over hyperplanes of Rd to integration over hypersurfaces . In particular , it is defined as : GI ( t , θ ) : =∫ Rd I ( x ) δ ( t − g ( x , θ ) ) dx , where t ∈ R and θ ∈ Ωθ . Here , Ωθ is a compact subset of R d and g : Rd × Sd−1 7→ R is a defining function ( cf . Assumptions H1-H4 in ( Kolouri et al. , 2019 ) for the definition of defining function ) inducing the hypersurfaces . When g ( x , θ ) = 〈x , θ〉 and Ωθ = Sd−1 , the generalized Radon transform becomes the standard Radon transform .
The paper presents a novel variant of the Sliced-Wasserstein (SW) distance. Wasserstein distances have recently been used in a lot of machine learning problems. One of the major problems is that, in its primal form, the Wasserstein distance is computationally expensive. To alleviate this problem, a class of methods, called sliced methods, leverages the fact that the Wasserstein distance has a closed-form expression in 1D (which amounts to sorting the samples). It replaces the original Wasserstein distance by an expectation of 1D sub-problems over directions drawn uniformly on the unit hypersphere (akin to a Radon transform). Observing that not all directions are meaningful, the authors propose a variant of SW where the expectation over all directions is replaced by an expectation over a distribution of directions. The 'extent' of this distribution is controlled by an extra parameter. Interestingly, the authors show that this formulation is computationally tractable if one parametrizes this distribution by a measurable function, expressed as a neural network. This result is obtained by deriving the Lagrangian dual of the original problem. Comparisons with previous works are then given in two GAN scenarios: one on MNIST to explore the importance of the different parameters, and another on larger and more complicated datasets, where the FID score is reported.
SP:6fe5ce1a3c0f3a9a80bad30444dc2d51482b3b11
Distributional Sliced-Wasserstein and Applications to Generative Modeling
Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein distance (Max-SW), have been used widely in recent years due to their fast computation and scalability even when the probability measures lie in a very high-dimensional space. However, SW requires many unnecessary projection samples to approximate its value, while Max-SW uses only the single most important projection, ignoring the information of other useful directions. To account for these weaknesses, we propose a novel distance, named Distributional Sliced-Wasserstein distance (DSW), that finds an optimal distribution over projections that can balance between exploring distinctive projecting directions and the informativeness of the projections themselves. We show that DSW is a generalization of Max-SW, and it can be computed efficiently by searching for the optimal push-forward measure over a set of probability measures over the unit sphere satisfying certain regularizing constraints that favor distinct directions. Finally, we conduct extensive experiments with large-scale datasets to demonstrate the favorable performance of the proposed distances over the previous sliced-based distances in generative modeling applications. 1 INTRODUCTION. Optimal transport (OT) is a classical problem in mathematics and operations research. Due to its appealing theoretical properties and flexibility in practical applications, it has recently become an important tool in the machine learning and statistics community; see, for example, (Courty et al., 2017; Arjovsky et al., 2017; Tolstikhin et al., 2018; Gulrajani et al., 2017) and references therein. The main usage of OT is to provide a distance, named the Wasserstein distance, to measure the discrepancy between two probability distributions. However, that distance suffers from a high computational complexity, which is the main obstacle to using OT in practical applications.
There have been two main approaches to overcome this high computational complexity: either approximate the value of OT or apply the OT adaptively to specific situations. The first approach was initiated by Cuturi (2013), using an entropic regularizer to speed up the computation of the OT (Sinkhorn, 1967; Knight, 2008). The entropic regularization approach has demonstrated its usefulness in several application domains (Courty et al., 2014; Genevay et al., 2018; Bunne et al., 2019). Along this direction, several works proposed efficient algorithms for solving the entropic OT (Altschuler et al., 2017; Lin et al., 2019b;a) as well as methods to stabilize these algorithms (Chizat et al., 2018; Peyré & Cuturi, 2019; Schmitzer, 2019). However, these algorithms have complexities of order O(k^2), where k is the number of supports, which is expensive when we need to compute the OT repeatedly, especially when learning the data distribution. The second approach, known as "slicing", takes a rather different perspective. It leverages two key ideas: the closed-form expression of the OT for two distributions in one-dimensional space, and the transformation of a distribution into a set of projected one-dimensional distributions by the Radon transform (RT) (Helgason, 2010). The popular proposal along this direction is the Sliced-Wasserstein (SW) distance (Bonneel et al., 2015), which samples the projecting directions uniformly over the unit sphere in the data ambient space and takes the expectation of the resulting one-dimensional OT distances. The SW distance hence requires a significantly lower computation cost than the original Wasserstein distance and is more scalable than the first approach. (∗The work was finished when Nhat Ho worked at VinAI Research in the summer of 2020.)
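The entropic-regularization approach mentioned above can be sketched in a few lines. The following is a minimal illustration in the style of Cuturi (2013), not the authors' implementation; the histograms `a`, `b` and cost matrix `C` are made up for the example. Each Sinkhorn iteration multiplies by the k × k kernel matrix, which is the O(k^2) per-iteration cost the text refers to.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropic-regularized OT: alternately rescale rows and columns of the
    Gibbs kernel K = exp(-C/reg) so the plan's marginals match a and b.
    Returns the (regularized) transport cost and the plan."""
    K = np.exp(-C / reg)                  # Gibbs kernel, k x k
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                 # scale columns toward marginal b
        u = a / (K @ v)                   # scale rows toward marginal a
    P = u[:, None] * K * v[None, :]       # transport plan
    return np.sum(P * C), P

# Tiny example: the same 3-point histogram on the line, so the cost is ~0.
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2        # squared-distance cost matrix
a = np.array([0.5, 0.25, 0.25])
cost, P = sinkhorn(a, a, C)
```

Because the loop ends with a row update, the plan's row marginals match `a` exactly, while the column marginals converge as the iterations proceed.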
Due to its solid statistical guarantees and efficient computation, the SW distance has been successfully applied to a variety of practical tasks (Deshpande et al., 2018; Liutkus et al., 2019; Kolouri et al., 2018; Wu et al., 2019; Deshpande et al., 2019), where it has been shown to have comparable performance to other distances and divergences between probability distributions. However, there is an inevitable bottleneck in computing the SW distance. Specifically, the expectation with respect to the uniform distribution over projections in SW is intractable to compute; therefore, the Monte Carlo method is employed to approximate it. Nevertheless, drawing from a uniform distribution of directions in high dimensions can result in an overwhelming number of irrelevant directions, especially when the actual data lie on a low-dimensional manifold. Hence, SW typically needs a large number of samples to yield an accurate estimation of the discrepancy. Alternatively, at the other extreme, the Max Sliced-Wasserstein (Max-SW) distance (Deshpande et al., 2019) uses only the single most important direction to distinguish the probability distributions. However, other potentially relevant directions are ignored in Max-SW; therefore, Max-SW can miss some important differences between the two distributions in high dimensions. We note that the linear projections in the Radon transform can be replaced by non-linear projections, resulting in the generalized sliced-Wasserstein distance and its variants (Beylkin, 1984; Kolouri et al., 2019). Apart from these main directions, there are also a few proposals that try either to modify them or to combine the advantages of the above-mentioned approaches. In particular, Paty & Cuturi (2019) extended the idea of the max-sliced distance to a max-subspace distance by finding an optimal orthogonal subspace.
However, this approach is computationally expensive, since it cannot exploit the closed form of the one-dimensional Wasserstein distance. Another approach, the Projected Wasserstein distance (PWD) proposed in (Rowland et al., 2019), uses sliced decomposition to find multiple one-dimensional optimal transport maps, and then computes the average cost of those maps in the original dimension, weighting them equally. Our contributions. Our paper also follows the slicing approach. However, we address a key friction in this line of work: how to use a relatively small number of slices that simultaneously maintain computational efficiency and cover the major differences between two high-dimensional distributions. We take a probabilistic view of slicing by using a probability measure on the unit sphere to represent how important each direction is. From this viewpoint, SW uses the uniform distribution while Max-SW searches for the best delta-Dirac distribution over the projections; both can be considered special cases. In this paper, we propose to search for an optimal distribution of important directions. We regularize this distribution such that it prefers directions that are far away from one another, hence encouraging an efficient exploration of the space of directions. In the case of no regularization, our proposed method recovers max-(generalized) SW as a special case. In summary, our main contributions are two-fold: 1. First, we introduce a novel distance, named the Distributional Sliced-Wasserstein distance (DSW), to account for the issues of previous sliced distances. Our main idea is to search for not just a single most important projection, but an optimal distribution over projections that balances between an expansion of the area around important projections and the informativeness of the projections themselves, i.e., how well they can distinguish the two target probability measures.
We show that DSW is a proper metric on the probability space and possesses appealing statistical and computational properties similar to the previous sliced distances. 2. Second, we apply the DSW distance to generative modeling tasks based on the generative adversarial framework. Extensive experiments on real and large-scale datasets show that the DSW distance significantly outperforms the SW and Max-SW distances under similar computational time on these tasks. Furthermore, the DSW distance helps the model distribution converge to the data distribution faster and provides more realistic generated images than the SW and Max-SW distances. Organization. The remainder of the paper is organized as follows. In Section 2, we provide background on the Wasserstein distance and its slice-based versions. In Section 3, we propose the distributional (generalized) sliced-Wasserstein distance and analyze some of its theoretical properties. Section 4 includes extensive experimental results, followed by discussions in Section 5. Finally, we defer the proofs of key results and extra materials to the Appendices. Notation. For any θ, θ′ ∈ R^d, cos(θ, θ′) = θ⊤θ′ / (‖θ‖‖θ′‖), where ‖·‖ is the ℓ2 norm. For any d ≥ 2, S^{d−1} denotes the unit sphere of R^d in the ℓ2 norm. Furthermore, δ denotes the Dirac delta function, and ⟨·, ·⟩ is the Euclidean inner product. For any p ≥ 1, L^p(R^d) is the set of real-valued functions on R^d with finite p-th moment. 2 BACKGROUND. In this section, we provide the necessary background on the (generalized) Radon transform, the Wasserstein distance, and sliced-Wasserstein distances. 2.1 WASSERSTEIN DISTANCE. We start with a formal definition of the Wasserstein distance. For any p ≥ 1, we define P_p(R^d) as the set of Borel probability measures with finite p-th moment defined on a given metric space (R^d, ‖·‖). For any probability measures µ, ν defined on X, Y ⊆ R^d, we denote their corresponding probability density functions as I_µ and I_ν.
The Wasserstein distance of order p between µ and ν is given by (Villani, 2008; Peyré & Cuturi, 2019): W_p(µ, ν) := ( inf_{π ∈ Π(µ,ν)} ∫_{X×Y} ‖x − y‖^p dπ(x, y) )^{1/p}, where Π(µ, ν) is the set of all transportation plans π whose marginal distributions are µ and ν, respectively. In order to simplify the presentation, we abuse notation by using W_p(µ, ν) and W_p(I_µ, I_ν) interchangeably for the Wasserstein distance between µ and ν. When µ and ν are one-dimensional measures, the Wasserstein distance between them has the closed-form expression W_p(µ, ν) = ( ∫_0^1 |F_µ^{−1}(z) − F_ν^{−1}(z)|^p dz )^{1/p}, where F_µ and F_ν are the cumulative distribution functions (CDFs) of I_µ and I_ν, respectively. 2.2 (GENERALIZED) RADON TRANSFORMS. We now review the (generalized) Radon transform, which is key to the notion of the (generalized) sliced-Wasserstein distance and its variants. The Radon transform (RT) maps a function I ∈ L^1(R^d) to the space of functions defined over the space of lines in R^d. In particular, for any t ∈ R and direction θ ∈ S^{d−1}, the RT is defined as follows (Helgason, 2010): RI(t, θ) := ∫_{R^d} I(x) δ(t − ⟨x, θ⟩) dx. The generalized Radon transform (GRT) (Beylkin, 1984) extends the original one from integration over hyperplanes of R^d to integration over hypersurfaces. In particular, it is defined as GI(t, θ) := ∫_{R^d} I(x) δ(t − g(x, θ)) dx, where t ∈ R and θ ∈ Ω_θ. Here, Ω_θ is a compact subset of R^d and g : R^d × Ω_θ → R is a defining function inducing the hypersurfaces (cf. Assumptions H1–H4 in (Kolouri et al., 2019) for the definition of a defining function). When g(x, θ) = ⟨x, θ⟩ and Ω_θ = S^{d−1}, the generalized Radon transform becomes the standard Radon transform.
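For empirical measures, the 1-D closed form above reduces to sorting, which is exactly what makes slicing cheap. The sketch below shows this reduction and a Monte Carlo SW estimator built on it; it is a hedged illustration of the standard construction, not the paper's code, and the point clouds in the test are made up.

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """Closed-form 1-D Wasserstein-p between two empirical measures with the
    same number of atoms: the inverse CDFs are step functions, so the
    quantile integral reduces to sorting and pairing order statistics."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

def sliced_wasserstein(X, Y, n_proj=200, p=2, seed=0):
    """Monte Carlo SW estimate: project both point clouds onto random
    directions on S^{d-1} (an empirical stand-in for the Radon transform)
    and average the 1-D costs."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)    # uniform direction on the sphere
        total += wasserstein_1d(X @ theta, Y @ theta, p) ** p
    return (total / n_proj) ** (1.0 / p)
```

Each slice costs O(n log n) for the sort, in contrast to the O(k^2)-per-iteration entropic solvers discussed in the introduction.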
The paper describes a family of sliced Wasserstein divergences that maximize over the distribution of slices subject to constraints on the concentration of slices. Extremes of the family are the sliced Wasserstein and max-sliced Wasserstein distances. In between these extremes, the divergence is sensitive to informative discrepancies in multiple subspaces, while still leveraging the relatively fast computation of the Wasserstein distance in one dimension for empirical distributions. A dual formulation provides a variational approximation using a (possibly deep) neural network to instantiate the slicing distribution through a pushforward approach. Basic theory proves the divergence is a distance metric between measures. Extensive experiments show improvement over sliced and max-sliced Wasserstein distances and related projection-based approaches.
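The distributional-slicing idea summarized above can be caricatured with a finite set of direction particles instead of a network-parameterized pushforward: ascend the 1-D sliced cost per direction while penalizing pairwise cosine similarity so the directions spread out. This is our own simplification for illustration, not the paper's method; the gradient uses the locally fixed order-statistic pairing, and the regularizer constant is chosen loosely.

```python
import numpy as np

def dsw_particles(X, Y, n_dir=8, lam=1.0, lr=0.1, n_steps=200, seed=0):
    """Particle sketch of distributional slicing: keep n_dir unit directions,
    gradient-ascend the per-direction squared sliced cost, and repel the
    particles from one another via squared cosines.  lam=0 makes every
    particle converge to a max-sliced direction."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    T = rng.normal(size=(n_dir, d))
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    for _ in range(n_steps):
        grads = np.zeros_like(T)
        for m in range(n_dir):
            # 1-D OT pairing: sort both projections, pair order statistics.
            diff = X[np.argsort(X @ T[m])] - Y[np.argsort(Y @ T[m])]
            grads[m] = 2.0 / n * (diff @ T[m]) @ diff   # ascent direction
        G = T @ T.T                                     # pairwise cosines
        np.fill_diagonal(G, 0.0)
        grads -= lam * 2.0 * (G @ T) / n_dir            # spread the particles
        T += lr * grads
        T /= np.linalg.norm(T, axis=1, keepdims=True)   # back onto the sphere
    costs = [np.mean(((X[np.argsort(X @ t)] - Y[np.argsort(Y @ t)]) @ t) ** 2)
             for t in T]
    return T, float(np.mean(costs))
```

With `lam=0` the particles all collapse onto the single most discriminative direction (a Max-SW-like behavior); with `lam>0` they trade some per-direction cost for coverage of multiple directions.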
SP:6fe5ce1a3c0f3a9a80bad30444dc2d51482b3b11
Task-similarity Aware Meta-learning through Nonparametric Kernel Regression
1 INTRODUCTION. Meta-learning seeks to abstract a general learning rule applicable to a class of different learning problems or tasks, given knowledge of a set of training tasks from the class (Finn & Levine, 2018; Denevi et al., 2018; Hospedales et al., 2020; Grant et al., 2018; Yoon et al., 2018). The setting is such that the data available for solving each task is often severely limited, resulting in poor performance when the tasks are solved independently from each other. This also sets meta-learning apart from the transfer learning paradigm, where the focus is to transfer a well-trained network from an existing domain to another (Pan & Yang, 2010). While existing meta-learning approaches implicitly assume the tasks to be similar, it is generally unclear how this task-similarity could be quantified and used in the learning. As a result, most popular meta-learning approaches do not actively use the similarity/dissimilarity between the tasks, but rely on the availability of a huge number of tasks for their working. In many practical applications, the number of tasks could be limited and the tasks may not always be very similar. There might even be 'outlier tasks' or 'out-of-the-distribution tasks' that are less similar or dissimilar from the rest of the tasks. Our conjecture is that the explicit incorporation or awareness of task-similarity helps improve meta-learning performance in such task-limited and adverse settings. The goal of this paper is to test this hypothesis by developing a task-similarity aware meta-learning algorithm using nonparametric kernel regression. Specifically, our contribution is a novel meta-learning algorithm called Task-similarity Aware Nonparametric Meta-Learning (TANML) that: • Explicitly employs similarity across the tasks to fast-adapt the meta-information to a given task, by using nonparametric kernel regression.
• Models the parameters of a task as belonging to a reproducing kernel Hilbert space (RKHS), obtained by viewing the popular MAML and Meta-SGD meta-learning approaches through the lens of linear/kernel regression. • Uses task-descriptors through a kernel function to quantify task-similarity/dissimilarity. • Offers a general framework for incorporating task-similarity in the meta-learning process. Though we pursued the algorithm with a specific choice of task-descriptors, the proposed RKHS task-similarity aware framework can be extended to other formulations. We wish to emphasize that our goal is not to propose another meta-learning algorithm that outperforms the state-of-the-art, but rather to investigate if task-similarity can be explicitly incorporated and used to advantage in a meaningful manner. We show how this is achieved as a consequence of viewing the popular MAML and Meta-SGD formulations through the lens of nonparametric kernel regression. In order to keep the comparison fair, on an apples-to-apples level, we compare the performance of TANML with that of the MAML and Meta-SGD algorithms. 1.1 MATHEMATICAL OVERVIEW OF THE PROPOSED TASK-SIMILARITY AWARE FRAMEWORK. Given pairs of data (x_k, y_k) ∈ R^{n_x} × R^{n_y}, where k ∈ {1, 2, ..., K}, generated by an unknown data source, we are interested in learning a predictor f : R^{n_x} × R^D → R^{n_y}, (x, θ) ↦ f(x, θ), from the given data. For example, f(x, θ) could be a function defined by an artificial neural network (ANN). We collect the pairs of data in X = (x_1, x_2, ..., x_K) and Y = (y_1, y_2, ..., y_K) and define the loss function L : R^{Kn_x} × R^{Kn_y} × R^D → R, (X, Y, θ) ↦ L(X, Y, θ), which we then minimize with respect to θ. This constitutes the training of the predictor. In the case of an ANN, L(X, Y, θ) could be the mean-square loss or the cross-entropy function. The data X, Y used for training is referred to as the training data.
Let θ̂ denote the optimal value of θ obtained from training. Given a new x ∈ R^{n_x}, we use ŷ = f(x, θ̂) to predict y. The goodness of θ̂ is evaluated using y − ŷ on a sequence of pairs of new data called the test data X̄, Ȳ, defined similarly to X and Y but with K̄ data pairs. The training of the predictor for the given data source is defined as a task. Now, consider that we are interested in carrying out several such tasks for data coming from different but similar sources. Let X_i, Y_i, X̄_i, Ȳ_i, i = 1, ..., T_tr, denote the data from T_tr different data sources, defined similarly to X, Y, X̄, Ȳ above. We refer to the training of the predictor for the data X_i, Y_i, X̄_i, Ȳ_i as the ith training task, and θ_i is referred to as the parameter for that task. Meta-learning captures similarities across the tasks by learning a common θ̂ (which we denote by θ_0) from the data of these T_tr tasks (called the meta-training data), such that θ_0 can be quickly adapted to train a predictor for data from new and different but similar data sources. Depending on how θ is obtained from θ_0, various meta-learning algorithms exist (Denevi et al., 2018; Finn & Levine, 2018; Allen et al., 2019). The performance of the meta-learning algorithm is evaluated on previously unseen data from several other similar sources X_i^v, Y_i^v, X̄_i^v, Ȳ_i^v, i = 1, ..., T_v (called the meta-test data), defined similarly to X, Y, X̄, Ȳ; this constitutes the meta-test phase. The training of the predictor for the test data X_i^v, Y_i^v, X̄_i^v, Ȳ_i^v is referred to as the ith test task, and θ_i^v denotes the parameter for the ith test task. In the existing meta-learning approaches, both θ_i and θ_i^v are obtained by adapting θ_0 using the gradient of L(X_i, Y_i, θ) and L(X_i^v, Y_i^v, θ), respectively, evaluated at θ_0.
In our work, we propose a meta-learning algorithm where θ_i explicitly uses a similarity between the ith training task and all the training tasks. Similarly, the parameters θ_i^v for the test tasks also explicitly use a similarity between the ith test task and all the training tasks. As specified later, we define this task-similarity between two tasks through kernel regression, and our algorithm learns the kernel regression coefficients Ψ as meta-parameters in addition to θ_0. A motivating example. Let us now consider the specific loss function L(X, Y, θ) = Σ_{k=1}^K ‖y_k − f(x_k, θ)‖_2^2. Training tasks independently with limited training data will typically result in a predictor that overfits to X, Y, and generalizes poorly to X̄, Ȳ. MAML-type meta-learning approaches (Finn et al., 2017) solve this by inferring the information across tasks in the form of a good initialization θ_0, specialized/adapted to a new task using the adaptation function g_MAML : R^D × R^{Kn_x} × R^{Kn_y} → R^D defined as: g_MAML(θ_0, X, Y) := θ_0 − α ∇_{θ_0} L(X, Y, θ_0). The parameters for the training and test tasks are obtained through adaptation of θ_0 as θ_i = g_MAML(θ_0, X_i, Y_i), i = 1, ..., T_tr, and θ_i^v = g_MAML(θ_0, X_i^v, Y_i^v), i = 1, ..., T_v. The meta-parameter θ_0 is learnt by iteratively taking gradient-descent steps with respect to the test loss on the training tasks, given by Σ_{i=1}^{T_tr} L(X̄_i, Ȳ_i, g_MAML(θ_0, X_i, Y_i)). The parameters for a task are obtained directly from θ_0 and do not make use of any information from the other training tasks. As a result, the common θ_0 learnt during meta-training treats all tasks equally: the algorithm implicitly assumes similarity of all tasks, but is not able to discern or quantify the degree of similarity or dissimilarity among the tasks.
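The one-step MAML adaptation g_MAML(θ0, X, Y) = θ0 − α∇L(X, Y, θ0) is easy to write down concretely. The sketch below uses a linear predictor f(x, θ) = θᵀx with the squared-error loss from the motivating example as a stand-in for the ANN in the text; the data and step size in the test are made up.

```python
import numpy as np

def mse_loss_grad(theta, X, Y):
    """Gradient of L(X, Y, theta) = sum_k ||y_k - f(x_k, theta)||^2 for the
    linear predictor f(x, theta) = theta^T x (rows of X are the x_k)."""
    resid = X @ theta - Y
    return 2.0 * X.T @ resid

def g_maml(theta0, X, Y, alpha=0.01):
    """One-step MAML adaptation from the text:
    g_MAML(theta0, X, Y) = theta0 - alpha * grad L(X, Y, theta0)."""
    return theta0 - alpha * mse_loss_grad(theta0, X, Y)
```

The full MAML outer loop would differentiate the post-adaptation test loss with respect to θ0; this sketch shows only the inner adaptation that the TANML construction generalizes.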
In contrast, our algorithm involves an adaptation function g_TANML (to be defined later) that explicitly uses a notion of similarity between the tasks to predict the parameters for a task. As a result, we expect that our approach helps train predictors even when the data sources are not very similar to each other. In our numerical experiments in Section 4, we see that this is indeed the case with the sinusoidal function as the data source. 1.2 RELATED WORK. The structural characterization of tasks and the use of task-dependent knowledge has gained interest in meta-learning recently. Edwards & Storkey (2017) proposed a variational autoencoder based approach to generate task/dataset statistics used to measure similarity. Ruder & Plank (2017) considered domain similarity and diversity measures in the context of transfer learning. The study of how task properties affect catastrophic forgetting in continual learning was pursued by Nguyen et al. (2019). Lee et al. (2020) proposed a task-adaptive meta-learning approach for classification that adaptively balances meta-learning and task-specific learning differently for every task and class. Bayesian approaches have been proposed to capture the similarity across tasks in the form of task hyperpriors (Yoon et al., 2018; Finn et al., 2018; Grant et al., 2018; Rothfuss et al., 2020). Task-similarity defined through effective sample size has been used to develop a new off-policy algorithm for meta-reinforcement learning (Fakoor et al., 2020). It was shown by Oreshkin et al. (2018) that the performance of few-shot learning improves significantly with the use of task-dependent metrics. While the use of kernels or similarity metrics is not new in meta-learning, they are typically seen in the context of defining relations between the classes or samples within a given task (Qiao et al., 2018; Rusu et al., 2019; Vinyals et al., 2016; Snell et al., 2017; Oreshkin et al., 2018; Fortuin & Rätsch, 2019; Goo & Niekum, 2020). Qiao et al. (2018) use similarity metrics in the activation space to predict parameters for novel categories in few-shot learning with zero training. Information-theoretic ideas have also been used in the study of the topology and geometry of task spaces by Nguyen et al. (2019) and Achille et al. (2018). Achille et al. (2019) construct vector representations for tasks using partially trained probe networks, based on which task-similarity metrics are developed. Task descriptors have been of interest especially in vision-related tasks in the context of transfer learning (Zamir et al., 2018; Achille et al., 2019; Tran et al., 2019). Recently, neural tangent kernels have been proposed for asymptotic analysis of meta-learning for infinitely wide neural networks, by considering gradient-based kernels across tasks (Wang et al., 2020). The work of Wang et al. (2020) is the closest in spirit to ours in that they consider kernels across meta-learning tasks. However, the premise of their work is very different from ours. The aim of their work is to show how the global convergence behaviour of popular MAML-type task-specific adaptation can be asymptotically described using specific kernels, when every task involves training deep neural networks of asymptotically infinite or very large widths. Our premise, on the other hand, is entirely different: we consider a task-specific adaptation that actively employs similarity of tasks in the form of valid kernel functions, in order to improve meta-learning performance. Our work does not make assumptions on the nature of the kernel, the structure of the learner, or its dimensions.
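The flavor of kernel-based, task-similarity-aware adaptation can be sketched as follows. This is our own illustrative construction, not the paper's g_TANML: task descriptors are taken to be raw vectors, the kernel is a Gaussian RBF (both design choices), and the coefficient matrix `Psi` — which the paper would meta-learn — is simply given.

```python
import numpy as np

def gaussian_kernel(d1, d2, sigma=1.0):
    """RBF kernel on task descriptors; the descriptors themselves are a
    design choice, here just raw vectors."""
    return np.exp(-np.sum((d1 - d2) ** 2, axis=-1) / (2 * sigma ** 2))

def kernel_task_adaptation(desc, train_descs, Psi, theta0, sigma=1.0):
    """Task parameters as theta0 plus a kernel-weighted combination of
    per-training-task coefficient rows of Psi, so more-similar training
    tasks contribute more to the adapted parameters."""
    k = gaussian_kernel(desc[None, :], train_descs, sigma)  # (T_tr,) weights
    return theta0 + Psi.T @ k                               # (D,) parameters
```

When the query descriptor coincides with one training task's descriptor and the bandwidth is small, the kernel weights become nearly one-hot, so the adaptation reduces to that task's learned coefficient row, which is the qualitative behavior one wants from a similarity-aware rule.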
The paper introduces a meta-learning framework in which a kernel describing the similarity between tasks is used to construct an RKHS in which kernel regression is performed. The framework is instantiated in the form of an algorithm, TANML, which can be viewed as an extension of the popular Meta-SGD algorithm. Experiments on two regression tasks are presented to analyse the efficacy of the proposed method.
SP:df4c28b42c8505f42804ad298a1b51ebb060ea32
Task-similarity Aware Meta-learning through Nonparametric Kernel Regression
1 INTRODUCTION . Meta-learning seeks to abstract a general learning rule applicable to a class of different learning problems or tasks , given the knowledge of a set of training tasks from the class ( Finn & Levine , 2018 ; Denevi et al. , 2018 ; Hospedales et al. , 2020 ; Grant et al. , 2018 ; Yoon et al. , 2018 ) . The setting is such that the data available for solving each task is often severely limited , resulting in a poor performance when the tasks are solved independently from each other . This also sets meta-learning apart from the transfer learning paradigm where the focus is to transfer a well-trained network from existing domain to another ( Pan & Yang , 2010 ) . While existing meta-learning approaches implicitly assume the tasks as being similar , it is generally unclear how this task-similarity could be quantified and used in the learning . As a result , most popular meta-learning approaches do not actively use the similarity/dissimilarity between the tasks , but rely on availability of huge number of tasks for their working . In many practical applications , the number of tasks could be limited and the tasks may not always be very similar . There might even be ‘ outlier tasks ’ or ‘ out-of-the-distribution tasks ’ that are less similar or dissimilar from the rest of the tasks . Our conjecture is that the explicit incorporation or awareness of task-similarity helps improve meta-learning performance in such task-limited and adverse settings . The goal of this paper is to test this hypothesis by developing a task-similarity aware meta-learning algorithm using nonparametric kernel regression . Specifically , our contribution is a novel metalearning algorithm called the Task-similarity Aware Nonparametric Meta-Learning ( TANML ) that : • Explicitly employs similarity across the tasks to fast adapt the meta-information to a given task , by using nonparametric kernel regression . 
• Models the parameters of a task as belonging to a reproducing kernel Hilbert space (RKHS), obtained by viewing the popular MAML and Meta-SGD meta-learning approaches through the lens of linear/kernel regression. • Uses task-descriptors through a kernel function to quantify task-similarity/dissimilarity. • Offers a general framework for incorporating task-similarity in the meta-learning process. Though we pursue the algorithm with a specific choice of task-descriptors, the proposed RKHS task-similarity aware framework can be extended to other formulations. We wish to emphasize that our goal is not to propose another meta-learning algorithm that outperforms the state-of-the-art, but rather to investigate whether task-similarity can be explicitly incorporated and used to advantage in a meaningful manner. We show how this is achieved as a consequence of viewing the popular MAML and Meta-SGD formulations through the lens of nonparametric kernel regression. To keep the comparison apples-to-apples, we compare the performance of TANML with that of the MAML and Meta-SGD algorithms. 1.1 MATHEMATICAL OVERVIEW OF THE PROPOSED TASK-SIMILARITY AWARE FRAMEWORK. Given pairs of data $(x_k, y_k) \in \mathbb{R}^{n_x} \times \mathbb{R}^{n_y}$, where $k \in \{1, 2, \cdots, K\}$, generated by an unknown data source, we are interested in learning a predictor $\mathbb{R}^{n_x} \times \mathbb{R}^{D} \ni (x, \theta) \mapsto f(x, \theta) \in \mathbb{R}^{n_y}$ from the given data. For example, $f(x, \theta)$ could be a function defined by an artificial neural network (ANN). We collect the pairs of data in $X = (x_1, x_2, \cdots, x_K)$ and $Y = (y_1, y_2, \cdots, y_K)$ and define the loss function $\mathbb{R}^{K n_x} \times \mathbb{R}^{K n_y} \times \mathbb{R}^{D} \ni (X, Y, \theta) \mapsto L(X, Y, \theta) \in \mathbb{R}$, which we then minimize with respect to $\theta$. This constitutes the training of the predictor. In the case of an ANN, $L(X, Y, \theta)$ could be the mean-square loss or the cross-entropy function. The data $X, Y$ used for training is referred to as the training data.
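As a concrete, purely illustrative instance of this setup, take a linear predictor $f(x, \theta) = \theta^\top x$ with the mean-square loss; training the predictor then reduces to an ordinary least-squares fit (the data-generating parameters and dimensions below are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
K, nx = 50, 3
theta_true = np.array([1.0, -2.0, 0.5])       # assumed (noiseless) linear data source
X = rng.standard_normal((K, nx))
Y = X @ theta_true

# training: minimize L(X, Y, theta) = sum_k ||y_k - f(x_k, theta)||^2
theta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(theta_hat, theta_true))     # True
```

With only a handful of data pairs per task, such independent fits degrade quickly, which is the regime meta-learning targets.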
Let $\hat{\theta}$ denote the optimal value of $\theta$ obtained from training. Given a new $x \in \mathbb{R}^{n_x}$, we use $\hat{y} = f(x, \hat{\theta})$ to predict $y$. The goodness of $\hat{\theta}$ is evaluated using $y - \hat{y}$ on a sequence of pairs of new data called the test data $\bar{X}, \bar{Y}$, defined similarly to $X$ and $Y$, but with $\bar{K}$ data pairs. The training of the predictor for the given data source is defined as a task. Now, consider that we are interested in carrying out several such tasks for data coming from different but similar sources. Let $X_i, Y_i, \bar{X}_i, \bar{Y}_i$, $i = 1, \cdots, T_{tr}$ denote the data from $T_{tr}$ different data sources, defined similarly to $X, Y, \bar{X}, \bar{Y}$ above. We refer to the training of the predictor on data $X_i, Y_i, \bar{X}_i, \bar{Y}_i$ as the $i$th training task, and $\theta_i$ is referred to as the parameter for the task. Meta-learning captures similarities across the tasks by learning a common $\hat{\theta}$ (which we denote by $\theta_0$) from the data of these $T_{tr}$ tasks (called the meta-training data), such that $\theta_0$ can be quickly adapted to train a predictor for data from new and different but similar data sources. Depending on how $\theta$ is obtained from $\theta_0$, various meta-learning algorithms exist (Denevi et al., 2018; Finn & Levine, 2018; Allen et al., 2019). The performance of the meta-learning algorithm is evaluated on previously unseen data from several other similar sources $X^v_i, Y^v_i, \bar{X}^v_i, \bar{Y}^v_i$, $i = 1, \cdots, T_v$ (called the meta-test data), defined similarly to $X, Y, \bar{X}, \bar{Y}$; this constitutes the meta-test phase. The training of the predictor on test data $X^v_i, Y^v_i, \bar{X}^v_i, \bar{Y}^v_i$ is referred to as the $i$th test task, and $\theta^v_i$ denotes the parameter for the $i$th test task. In the existing meta-learning approaches, both $\theta_i$ and $\theta^v_i$ are obtained by adapting $\theta_0$ using the gradient of $L(X_i, Y_i, \theta)$ and $L(X^v_i, Y^v_i, \theta)$, respectively, evaluated at $\theta_0$.
In our work, we propose a meta-learning algorithm where $\theta_i$ explicitly uses a similarity between the $i$th training task and all the training tasks. Similarly, the parameters $\theta^v_i$ for the test tasks also explicitly use a similarity between the $i$th test task and all the training tasks. As specified later, we define this task-similarity between two tasks through kernel regression, and our algorithm learns the kernel regression coefficients $\Psi$ as meta-parameters in addition to $\theta_0$. A motivating example. Let us now consider the specific loss function $L(X, Y, \theta) = \sum_{k=1}^{K} \|y_k - f(x_k, \theta)\|_2^2$. Training the tasks independently with limited training data will typically result in a predictor that overfits to $X, Y$ and generalizes poorly to $\bar{X}, \bar{Y}$. MAML-type meta-learning approaches (Finn et al., 2017) solve this by inferring the information across tasks in the form of a good initialization $\theta_0$, specialized/adapted to a new task using the adaptation function $\mathbb{R}^{D} \times \mathbb{R}^{K n_x} \times \mathbb{R}^{K n_y} \ni (\theta_0, X, Y) \mapsto g_{\mathrm{MAML}}(\theta_0, X, Y) \in \mathbb{R}^{D}$ defined as $g_{\mathrm{MAML}}(\theta_0, X, Y) \triangleq \theta_0 - \alpha \nabla_{\theta_0} L(X, Y, \theta_0)$. The parameters for the training and test tasks are obtained through adaptation of $\theta_0$ as $\theta_i = g_{\mathrm{MAML}}(\theta_0, X_i, Y_i)$, $i = 1, \cdots, T_{tr}$, and $\theta^v_i = g_{\mathrm{MAML}}(\theta_0, X^v_i, Y^v_i)$, $i = 1, \cdots, T_v$. The meta-parameter $\theta_0$ is learnt by iteratively taking gradient-descent steps with respect to the test loss on the training tasks, given by $\sum_{i=1}^{T_{tr}} L(\bar{X}_i, \bar{Y}_i, g_{\mathrm{MAML}}(\theta_0, X_i, Y_i))$. The parameters for a task are obtained directly from $\theta_0$ and do not make use of any information from the other training tasks. As a result, the common $\theta_0$ learnt during meta-training treats all tasks equally; the algorithm implicitly assumes similarity of all tasks, but is not able to discern or quantify the degree of similarity or dissimilarity among the tasks.
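For the squared loss above with a linear predictor, the MAML adaptation step is a one-line gradient update. A minimal sketch follows; the kernel-weighted variant shown alongside is only an illustration of the idea of adapting via task-similarity (the paper's exact gTANML, the choice of task descriptor, and the role of the coefficients Ψ are defined later, so the descriptor and update form here are assumptions):

```python
import numpy as np

def sq_loss_grad(theta, X, Y):
    # gradient of L(X, Y, theta) = sum_k ||y_k - x_k . theta||^2
    # for a linear predictor f(x, theta) = x . theta
    return -2 * X.T @ (Y - X @ theta)

def g_maml(theta0, X, Y, alpha=0.1):
    # MAML adaptation: one gradient step from the shared initialization theta0
    return theta0 - alpha * sq_loss_grad(theta0, X, Y)

def g_kernel_adapt(theta0, X, Y, task_descriptors, Psi, gamma=1.0, alpha=0.1):
    # Hypothetical kernel-regression adaptation in the spirit of TANML:
    # describe the task by its loss gradient at theta0, then form the update as a
    # kernel-weighted combination of learned coefficients Psi over the stored
    # training-task descriptors.
    d = sq_loss_grad(theta0, X, Y)
    k = np.exp(-gamma * np.sum((task_descriptors - d) ** 2, axis=1))  # RBF kernel
    return theta0 + alpha * (Psi.T @ k)
```

Unlike `g_maml`, whose output depends only on θ0 and the task's own data, `g_kernel_adapt` lets every training task influence the adapted parameters in proportion to its kernel similarity to the current task.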
In contrast, our algorithm involves an adaptation function $g_{\mathrm{TANML}}$ (to be defined later) that explicitly uses a notion of similarity between the tasks to predict the parameters for a task. As a result, we expect our approach to help train predictors even when the data sources are not very similar to each other. In our numerical experiments in Section 4, we see that this is indeed the case with the sinusoidal function as the data source. 1.2 RELATED WORK. The structural characterization of tasks and the use of task-dependent knowledge have gained interest in meta-learning recently. Edwards & Storkey (2017) proposed a variational-autoencoder-based approach to generate task/dataset statistics used to measure similarity. Ruder & Plank (2017) considered domain similarity and diversity measures in the context of transfer learning. The study of how task properties affect catastrophic forgetting in continual learning was pursued by Nguyen et al. (2019). Lee et al. (2020) proposed a task-adaptive meta-learning approach for classification that adaptively balances meta-learning and task-specific learning differently for every task and class. Bayesian approaches have been proposed to capture the similarity across tasks in the form of task hyperpriors (Yoon et al., 2018; Finn et al., 2018; Grant et al., 2018; Rothfuss et al., 2020). Task-similarity defined through effective sample size has been used to develop a new off-policy algorithm for meta-reinforcement learning (Fakoor et al., 2020). It was shown by Oreshkin et al. (2018) that the performance of few-shot learning improves significantly with the use of task-dependent metrics. While the use of kernels or similarity metrics is not new in meta-learning, they are typically seen in the context of defining relations between the classes or samples within a given task (Qiao et al., 2018; Rusu et al., 2019; Vinyals et al., 2016; Snell et al.
, 2017; Oreshkin et al., 2018; Fortuin & Rätsch, 2019; Goo & Niekum, 2020). Qiao et al. (2018) use similarity metrics in the activation space to predict parameters for novel categories in few-shot learning with zero training. Information-theoretic ideas have also been used in the study of the topology and geometry of task spaces by Nguyen et al. (2019) and Achille et al. (2018). Achille et al. (2019) construct vector representations for tasks using partially trained probe networks, based on which task-similarity metrics are developed. Task descriptors have been of interest especially in vision-related tasks in the context of transfer learning (Zamir et al., 2018; Achille et al., 2019; Tran et al., 2019). Recently, neural tangent kernels have been proposed for the asymptotic analysis of meta-learning for infinitely wide neural networks, by considering gradient-based kernels across tasks (Wang et al., 2020). The work of Wang et al. (2020) is the closest in spirit to ours in that they consider kernels across meta-learning tasks. However, the premise of their work is very different from ours. Their aim is to show how the global convergence behaviour of popular MAML-type task-specific adaptation can be asymptotically described using specific kernels, when every task involves training deep neural networks of asymptotically infinite or very large widths. Our premise, on the other hand, is entirely different: we consider a task-specific adaptation that actively employs similarity of tasks in the form of valid kernel functions, in order to improve meta-learning performance. Our work does not make assumptions on the nature of the kernel, the structure of the learner, or its dimensions.
This paper proposes a theoretical formulation for meta-learning that uses task similarity based on task gradients, which helps learning in the presence of outlier tasks. The inner-loop parameter update is given by linear kernel regression, where the kernel function computes similarity between gradients of different tasks. While the paper includes experiments in which the method outperforms MAML and Meta-SGD on estimating randomized linear predictors, and randomized sinusoids with outlier data-points, these are not sufficient to establish the efficacy of the approach.
SP:df4c28b42c8505f42804ad298a1b51ebb060ea32
Learning Robust Models by Countering Spurious Correlations
1 INTRODUCTION. Machine learning, especially with deep neural networks, has demonstrated remarkable empirical successes over various benchmarks. One promising next step is to extend such empirical achievements beyond i.i.d. benchmarks: if we train a model with data from one distribution (i.e., the source distribution), how can we guarantee the error to be small over other unseen but related distributions (i.e., target distributions)? Quantifying the generalization error over two arbitrary distributions is not useful; thus, we require the distributions under study to be similar but different: similar in the sense that there exists a common function that can achieve zero error over both distributions, and different in the sense that there exists another function that can only achieve zero error over the training distribution, but not the test distribution. This problem is not trivial because the empirical risk minimizer (ERM) may lead the model to learn this second function, a topic studied under different terminologies such as spurious correlations (Vigen, 2015), confounding factors (McDonald, 2014), or dataset bias (Torralba & Efros, 2011). As a result, a small empirical error does not necessarily mean the model learns what we expect (Geirhos et al., 2019; Wang et al., 2020), and thus the model may not perform consistently over other related data. In particular, our view of the challenges in this topic is illustrated with a toy example in Figure 1, where the model is trained on the source-domain data to classify triangle vs. circle and tested on the target-domain data. However, color coincides with shape on the source domain, so the model may learn either the desired function (relying on shape) or the spurious function (relying on color). The spurious function will not classify the target-domain data correctly while the desired function can, but ERM cannot differentiate between them.
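The toy setup can be made concrete in a few lines of illustrative code (the encoding below is an assumption, not the paper's): represent each sample by two binary features, shape and color, let color coincide with the label on the source domain and flip on the target, and compare the desired rule (predict by shape) against the spurious rule (predict by color):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)        # labels: 0 = circle, 1 = triangle
shape = y.copy()                 # the shape feature always matches the label
color_src = y.copy()             # source domain: color coincides with the label
color_tgt = 1 - y                # target domain: the color correlation is reversed

f_desired = lambda shape, color: shape   # relies on shape
f_spurious = lambda shape, color: color  # relies on color
acc = lambda pred: float(np.mean(pred == y))

# Both functions achieve zero empirical risk on the source domain,
# so ERM has no way to prefer one over the other ...
print(acc(f_desired(shape, color_src)), acc(f_spurious(shape, color_src)))  # 1.0 1.0
# ... yet only the desired function survives the shift to the target domain.
print(acc(f_desired(shape, color_tgt)), acc(f_spurious(shape, color_tgt)))  # 1.0 0.0
```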
As one may expect, whether shape or color is considered desired or spurious is subjective, dependent on the task or the data, and in general irrelevant to the statistical nature of the problem. Therefore, our error bound will require knowledge of the spurious function. While this is a toy example, the scenario surely exists in real-world tasks (e.g., Jo & Bengio, 2017; Geirhos et al., 2019; Wang et al., 2020). The contributions of this paper are: • We analyze the cross-distribution generalization error bound of a model when the model is trained on a distribution with spuriously correlated features, which is formalized as the main theorem of this paper. • We compare our bound to the widely accepted domain-adaptation bound (Ben-David et al., 2010) and show that our bound can be tighter under assumptions that we consider realistic. • Our main theorem naturally offers principled solutions to this problem, and the solutions are linked to many previously established methods for robustness in a broader context. • As the principled solutions all require some knowledge of the task or the data, our main theorem also leads to a new heuristic absent of such knowledge. This new method may be on a par with the principled solutions, and can outperform vanilla training empirically. 2 RELATED WORK. There is a rich history of learning robust models. We first discuss works in three topics, all centering around the concept of invariance, where invariance intuitively means the model's prediction is preserved under certain shifts of the data. We then highlight works related to our theoretical discussion. Cross-domain Generalization. This line of work probably originates from domain adaptation (Ben-David et al., 2007), which studies the problem of training a model over one distribution and testing it over another. Since (Ganin et al.
, 2016 ) , recent advances along this topic mainly center around the concept of invariance : most techniques leverage different regularizers to learn representations that are invariant to the marginals of these two distributions ( e.g. , Ghifary et al. , 2016 ; Rozantsev et al. , 2018 ) . Further , the community aims beyond the situation that a trained model from domain adaptation may only be applicable to one distribution , and focuses on domain generalization ( Muandet et al. , 2013 ) , which studies the problem of training a model over a collection of distributions and test it with distributions unseen during training . Similarly , most recent methods aim to learn representations invariant to the marginals of the training distributions ( e.g. , Motiian et al. , 2017 ; Li et al. , 2018 ; Carlucci et al. , 2018 ) . Recently , the community extends the study to domain generalization without domain IDs to address the real-world situations that domain IDs are unavailable ( Wang et al. , 2019b ) , which again focuses on learning representations invariant to specifically designed functions . Adversarially Robust Models The study of robustness against adversarial examples was popularized by the empirical observations that small perturbations on image data can significantly alter the model ’ s prediction ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) . This observation initiated a line of works building models invariant to such small perturbations ( the rigorous definitions of “ small perturbations ” will not be discussed in details here ) ( e.g. , Lee et al. , 2017 ; Akhtar et al. , 2018 ) and adversarial training ( Madry et al. , 2018 ) is currently the most widely-accepted method in terms of empirical defense . On the other hand , the community also aims to develop methods that are provably robust to predefined perturbations ( e.g. , Wong & Kolter , 2018 ; Croce & Hein , 2020 ) , which links back to the works of distributional robust models ( e.g. 
, Abadeh et al., 2015; Sagawa* et al., 2020), whose central goal is to train models invariant to a predefined shift of distributions. Recent evidence shows that a key challenge in learning adversarially robust models is spuriously correlated features (Ilyas et al., 2019; Wang et al., 2020), connecting adversarial robustness to the next topic. Countering Spurious Correlation. Works along this line usually connect the robustness of a model to its ability to ignore the spurious correlations in the data, which have also been studied under the terminologies of confounding factors or dataset bias. With different concrete definitions of spurious correlation, methods have been developed for various applications, such as image/video classification (e.g., Goyal et al., 2017; Wang et al., 2019a;b; Bahng et al., 2019; Shi et al., 2020), text classification (e.g., He et al., 2019; Clark et al., 2019; Bras et al., 2020; Zhou & Bansal, 2020; Ko et al., 2020), and medical diagnosis (e.g., Zech et al., 2018; Chaibub Neto et al., 2019; Larrazabal et al., 2020). The key concept is, as expected, to be invariant to the spuriously correlated features. Related Theoretical Discussion. Out of a rich collection of theoretical discussions on learning robust models, we only focus on those for unsupervised domain adaptation, as they will be related to our discussions in the sequel. Popularized by (Ben-David et al., 2007; 2010), these analyses, although taking various forms (Mansour et al., 2009; Germain et al., 2016; Zhang et al., 2019; Dhouib et al., 2020), mostly involve two terms in addition to the standard machine-learning generalization bound: one term describes the “learnable” nature of the problem and one term quantifies the difference between the two distributions. This second term probably inspired most of the empirical methods forcing invariant representations across distributions.
However, the value of invariance has recently been challenged (Wu et al., 2019; Zhao et al., 2019). For example, Zhao et al. (2019) argued that “invariance is not sufficient” by showing counterexamples violating the “learnable” nature of the problem, and formalized the understanding as the two distributions having possibly different labeling functions. Key Difference: We find the argument of disparity in labeling functions less intuitive, because humans will nonetheless be able to agree on the label of an object whichever distribution the object lies in. In the context of this paper, we argue that a shared labeling function always exists (in any task reasonable to humans), but the ERM model may not have the incentive to learn this function and learns a spurious one instead. As in Figure 1, we formalize the problem as learning against spurious functions, and argue that the central problem is still invariance; but instead of invariance to marginals, we urge for invariance to the spurious function. Our discussion also applies beyond unsupervised domain adaptation and relates to most of the topics discussed in this section. 3 GENERALIZATION UNDERSTANDING WITH SPURIOUS CORRELATION. 3.1 NOTATIONS & BACKGROUND. We consider a binary classification problem from feature space $\mathcal{X} \subseteq \mathbb{R}^p$ to label space $\mathcal{Y} = \{0, 1\}$. The distribution over $\mathcal{X}$ is denoted as $\mathbb{P}$. A labeling function $f : \mathcal{X} \to \mathcal{Y}$ is a function that maps a feature $x$ to its label $y$. A hypothesis or model $\theta : \mathcal{X} \to \mathcal{Y}$ is also a function that maps features to labels. The difference in naming is only to differentiate whether the function is a natural property of the space or distribution (thus called a labeling function) or a function to be estimated (thus called a hypothesis or model). The hypothesis space is denoted as $\Theta$. This work concerns the generalization error across two distributions, namely the source and target distributions, denoted as $\mathbb{P}_s$ and $\mathbb{P}_t$ respectively.
As stated previously, we are only interested in the case when these two distributions are similar but different: similar means there exists a desired labeling function $f_d$ that maps any $x \in \mathcal{X}$ to its label (thus the label $y := f_d(x)$); different means there exists a spurious labeling function $f_p$ such that for any $x \sim \mathbb{P}_s$, $f_p(x) = f_d(x)$. This “similar but different” property will be reiterated as an assumption (A2) later. We use $(x, y)$ to denote a sample, and use $(X, Y)_{\mathbb{P}}$ to denote a finite dataset whose features are drawn from $\mathbb{P}$. We use $\mathbb{P}(\theta)$ to denote the expected risk of $\theta$ over distribution $\mathbb{P}$, and use $\hat{\cdot}$ to denote the estimate of a term (e.g., the empirical risk is $\widehat{\mathbb{P}}(\hat{\theta})$). We use $l(\cdot, \cdot)$ to denote a generic loss function. For a dataset $(X, Y)_{\mathbb{P}}$, if we train a model
$$\hat{\theta} = \arg\min_{\theta \in \Theta} \sum_{(x, y) \in (X, Y)_{\mathbb{P}}} l(\theta(x), y), \qquad (1)$$
previous generalization studies suggest that we can expect the error rate to be bounded as
$$\mathbb{P}(\hat{\theta}) \le \widehat{\mathbb{P}}(\hat{\theta}) + \phi(|\Theta|, n, \delta), \qquad (2)$$
where $\mathbb{P}(\hat{\theta})$ and $\widehat{\mathbb{P}}(\hat{\theta})$ are, respectively, $\mathbb{P}(\hat{\theta}) = \mathbb{E}_{x \sim \mathbb{P}} |\hat{\theta}(x) - y| = \mathbb{E}_{x \sim \mathbb{P}} |\hat{\theta}(x) - f_d(x)|$ and $\widehat{\mathbb{P}}(\hat{\theta}) = \frac{1}{n} \sum_{(x, y) \in (X, Y)_{\mathbb{P}}} |\hat{\theta}(x) - y|$, and $\phi(|\Theta|, n, \delta)$ is a function of the size of the hypothesis space $|\Theta|$, the number of samples $n$, and $\delta$, which accounts for the probability with which the bound holds. This paper only concerns itself with this generic form, which can subsume many discussions, each with its own assumptions. We refer to these assumptions as A1. A1: basic assumptions needed to derive (2), formalized with two examples in Appendix A.1.
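One standard (but here only assumed) instantiation of the generic term $\phi$ in (2) is the finite-hypothesis-class bound: for 0-1 loss, Hoeffding's inequality plus a union bound gives $\phi(|\Theta|, n, \delta) = \sqrt{(\ln|\Theta| + \ln(2/\delta)) / (2n)}$. A minimal numerical check on a toy class of threshold classifiers:

```python
import numpy as np

def phi(H_size, n, delta):
    # Hoeffding + union bound for a finite hypothesis class under 0-1 loss:
    # with probability >= 1 - delta, |P(theta) - P_hat(theta)| <= phi for all theta
    return np.sqrt((np.log(H_size) + np.log(2.0 / delta)) / (2.0 * n))

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0, 1, n)
y = (x > 0.5).astype(int)               # desired labeling function: threshold at 0.5
thresholds = np.linspace(0.1, 0.9, 9)   # finite hypothesis class, |Theta| = 9

emp_risk = np.array([np.mean((x > t).astype(int) != y) for t in thresholds])
true_risk = np.abs(thresholds - 0.5)    # expected 0-1 risk of (x > t) under U(0, 1)
gap = np.max(np.abs(true_risk - emp_risk))
print(gap, phi(len(thresholds), n, delta=0.05))
```

In this toy run the worst-case gap between empirical and expected risk stays well below the bound, illustrating what the $\phi$ term in (2) controls.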
This paper studies the problem that spurious features in the training set can cause an accuracy drop at test time. The authors formalize a generalization error bound for this setup and, based on their bound, provide two solutions: a principled solution requiring knowledge of the spuriously correlated features, and a minimal supervision (MS) method that does not require this information. They also provide experimental results demonstrating the effectiveness of the proposed MS method.
SP:2ec2433a907a60ebfbf9ffefc72b70eb76c1f591
Learning Robust Models by Countering Spurious Correlations
1 INTRODUCTION. Machine learning, especially with deep neural networks, has demonstrated remarkable empirical successes over various benchmarks. One promising next step is to extend such empirical achievements beyond i.i.d. benchmarks: if we train a model with data from one distribution (i.e., the source distribution), how can we guarantee the error to be small over other unseen but related distributions (i.e., target distributions)? Quantifying the generalization error over two arbitrary distributions is not useful; thus, we require the distributions under study to be similar but different: similar in the sense that there exists a common function that can achieve zero error over both distributions, and different in the sense that there exists another function that can only achieve zero error over the training distribution, but not the test distribution. This problem is not trivial because the empirical risk minimizer (ERM) may lead the model to learn this second function, a topic studied under different terminologies such as spurious correlations (Vigen, 2015), confounding factors (McDonald, 2014), or dataset bias (Torralba & Efros, 2011). As a result, a small empirical error does not necessarily mean the model learns what we expect (Geirhos et al., 2019; Wang et al., 2020), and thus the model may not perform consistently over other related data. In particular, our view of the challenges in this topic is illustrated with a toy example in Figure 1, where the model is trained on the source-domain data to classify triangle vs. circle and tested on the target-domain data. However, color coincides with shape on the source domain, so the model may learn either the desired function (relying on shape) or the spurious function (relying on color). The spurious function will not classify the target-domain data correctly while the desired function can, but ERM cannot differentiate between them.
As one may expect, whether shape or color is considered desired or spurious is subjective, dependent on the task or the data, and in general irrelevant to the statistical nature of the problem. Therefore, our error bound will require knowledge of the spurious function. While this is a toy example, the scenario surely exists in real-world tasks (e.g., Jo & Bengio, 2017; Geirhos et al., 2019; Wang et al., 2020). The contributions of this paper are: • We analyze the cross-distribution generalization error bound of a model when the model is trained on a distribution with spuriously correlated features, which is formalized as the main theorem of this paper. • We compare our bound to the widely accepted domain-adaptation bound (Ben-David et al., 2010) and show that our bound can be tighter under assumptions that we consider realistic. • Our main theorem naturally offers principled solutions to this problem, and the solutions are linked to many previously established methods for robustness in a broader context. • As the principled solutions all require some knowledge of the task or the data, our main theorem also leads to a new heuristic absent of such knowledge. This new method may be on a par with the principled solutions, and can outperform vanilla training empirically. 2 RELATED WORK. There is a rich history of learning robust models. We first discuss works in three topics, all centering around the concept of invariance, where invariance intuitively means the model's prediction is preserved under certain shifts of the data. We then highlight works related to our theoretical discussion. Cross-domain Generalization. This line of work probably originates from domain adaptation (Ben-David et al., 2007), which studies the problem of training a model over one distribution and testing it over another. Since (Ganin et al.
, 2016 ) , recent advances along this topic mainly center around the concept of invariance : most techniques leverage different regularizers to learn representations that are invariant to the marginals of these two distributions ( e.g. , Ghifary et al. , 2016 ; Rozantsev et al. , 2018 ) . Further , the community aims beyond the situation that a trained model from domain adaptation may only be applicable to one distribution , and focuses on domain generalization ( Muandet et al. , 2013 ) , which studies the problem of training a model over a collection of distributions and test it with distributions unseen during training . Similarly , most recent methods aim to learn representations invariant to the marginals of the training distributions ( e.g. , Motiian et al. , 2017 ; Li et al. , 2018 ; Carlucci et al. , 2018 ) . Recently , the community extends the study to domain generalization without domain IDs to address the real-world situations that domain IDs are unavailable ( Wang et al. , 2019b ) , which again focuses on learning representations invariant to specifically designed functions . Adversarially Robust Models The study of robustness against adversarial examples was popularized by the empirical observations that small perturbations on image data can significantly alter the model ’ s prediction ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) . This observation initiated a line of works building models invariant to such small perturbations ( the rigorous definitions of “ small perturbations ” will not be discussed in details here ) ( e.g. , Lee et al. , 2017 ; Akhtar et al. , 2018 ) and adversarial training ( Madry et al. , 2018 ) is currently the most widely-accepted method in terms of empirical defense . On the other hand , the community also aims to develop methods that are provably robust to predefined perturbations ( e.g. , Wong & Kolter , 2018 ; Croce & Hein , 2020 ) , which links back to the works of distributional robust models ( e.g. 
, Abadeh et al., 2015; Sagawa* et al., 2020), whose central goal is to train models invariant to a predefined shift of distributions. Recent evidence shows that a key challenge in learning adversarially robust models is spuriously correlated features (Ilyas et al., 2019; Wang et al., 2020), connecting adversarial robustness to the next topic. Countering Spurious Correlation. Works along this line usually connect the robustness of a model to its ability to ignore the spurious correlations in the data, which have also been studied under the terminologies of confounding factors or dataset bias. With different concrete definitions of spurious correlation, methods have been developed for various applications, such as image/video classification (e.g., Goyal et al., 2017; Wang et al., 2019a;b; Bahng et al., 2019; Shi et al., 2020), text classification (e.g., He et al., 2019; Clark et al., 2019; Bras et al., 2020; Zhou & Bansal, 2020; Ko et al., 2020), and medical diagnosis (e.g., Zech et al., 2018; Chaibub Neto et al., 2019; Larrazabal et al., 2020). The key concept is, as expected, to be invariant to the spuriously correlated features. Related Theoretical Discussion. Out of a rich collection of theoretical discussions on learning robust models, we only focus on those for unsupervised domain adaptation, as they will be related to our discussions in the sequel. Popularized by (Ben-David et al., 2007; 2010), these analyses, although taking various forms (Mansour et al., 2009; Germain et al., 2016; Zhang et al., 2019; Dhouib et al., 2020), mostly involve two terms in addition to the standard machine-learning generalization bound: one term describes the “learnable” nature of the problem and one term quantifies the difference between the two distributions. This second term probably inspired most of the empirical methods forcing invariant representations across distributions.
However, the value of invariance has recently been challenged (Wu et al., 2019; Zhao et al., 2019). For example, Zhao et al. (2019) argued that “invariance is not sufficient” by showing counterexamples violating the “learnable” nature of the problem, and formalized the understanding as the two distributions having possibly different labeling functions. Key Difference: We find the argument of disparity in labeling functions less intuitive, because humans will nonetheless be able to agree on the label of an object whichever distribution the object lies in. In the context of this paper, we argue that a shared labeling function always exists (in any task reasonable to humans), but the ERM model may not have the incentive to learn this function and learns a spurious one instead. As in Figure 1, we formalize the problem as learning against spurious functions, and argue that the central problem is still invariance; but instead of invariance to marginals, we urge for invariance to the spurious function. Our discussion also applies beyond unsupervised domain adaptation and relates to most of the topics discussed in this section. 3 GENERALIZATION UNDERSTANDING WITH SPURIOUS CORRELATION. 3.1 NOTATIONS & BACKGROUND. We consider a binary classification problem from feature space $\mathcal{X} \subseteq \mathbb{R}^p$ to label space $\mathcal{Y} = \{0, 1\}$. The distribution over $\mathcal{X}$ is denoted as $\mathbb{P}$. A labeling function $f : \mathcal{X} \to \mathcal{Y}$ is a function that maps a feature $x$ to its label $y$. A hypothesis or model $\theta : \mathcal{X} \to \mathcal{Y}$ is also a function that maps features to labels. The difference in naming is only to differentiate whether the function is a natural property of the space or distribution (thus called a labeling function) or a function to be estimated (thus called a hypothesis or model). The hypothesis space is denoted as $\Theta$. This work concerns the generalization error across two distributions, namely the source and target distributions, denoted as $\mathbb{P}_s$ and $\mathbb{P}_t$ respectively.
As stated previously, we are only interested in the case when these two distributions are similar but different: similar means there exists a desired labeling function $f_d$ that maps any $x \in \mathcal{X}$ to its label (thus the label $y := f_d(x)$); different means there exists a spurious labeling function $f_p$ such that for any $x \sim \mathbb{P}_s$, $f_p(x) = f_d(x)$. This “similar but different” property will be reiterated as an assumption (A2) later. We use $(x, y)$ to denote a sample, and use $(X, Y)_{\mathbb{P}}$ to denote a finite dataset whose features are drawn from $\mathbb{P}$. We use $\mathbb{P}(\theta)$ to denote the expected risk of $\theta$ over distribution $\mathbb{P}$, and use $\hat{\cdot}$ to denote the estimate of a term (e.g., the empirical risk is $\widehat{\mathbb{P}}(\hat{\theta})$). We use $l(\cdot, \cdot)$ to denote a generic loss function. For a dataset $(X, Y)_{\mathbb{P}}$, if we train a model
$$\hat{\theta} = \arg\min_{\theta \in \Theta} \sum_{(x, y) \in (X, Y)_{\mathbb{P}}} l(\theta(x), y), \qquad (1)$$
previous generalization studies suggest that we can expect the error rate to be bounded as
$$\mathbb{P}(\hat{\theta}) \le \widehat{\mathbb{P}}(\hat{\theta}) + \phi(|\Theta|, n, \delta), \qquad (2)$$
where $\mathbb{P}(\hat{\theta})$ and $\widehat{\mathbb{P}}(\hat{\theta})$ are, respectively, $\mathbb{P}(\hat{\theta}) = \mathbb{E}_{x \sim \mathbb{P}} |\hat{\theta}(x) - y| = \mathbb{E}_{x \sim \mathbb{P}} |\hat{\theta}(x) - f_d(x)|$ and $\widehat{\mathbb{P}}(\hat{\theta}) = \frac{1}{n} \sum_{(x, y) \in (X, Y)_{\mathbb{P}}} |\hat{\theta}(x) - y|$, and $\phi(|\Theta|, n, \delta)$ is a function of the size of the hypothesis space $|\Theta|$, the number of samples $n$, and $\delta$, which accounts for the probability with which the bound holds. This paper only concerns itself with this generic form, which can subsume many discussions, each with its own assumptions. We refer to these assumptions as A1. A1: basic assumptions needed to derive (2), formalized with two examples in Appendix A.1.
This paper formalizes a new generalization error bound for the problem of spurious correlations (a.k.a. confounding factors or dataset bias) and shows that it is tighter than the well-established domain-adaptation bound under realistic assumptions. The analysis leads to a set of solutions that link to established methods. It further proposes a practical solution that, unlike the established approaches, does not require explicit knowledge of the spuriously correlated features.
SP:2ec2433a907a60ebfbf9ffefc72b70eb76c1f591
Fast Partial Fourier Transform
In this paper, we propose a fast Partial Fourier Transform (PFT), an efficient algorithm for computing only a part of the Fourier coefficients. PFT approximates a part of the twiddle factors (trigonometric constants) using polynomials, thereby reducing the computational complexity caused by the mixture of many twiddle factors. We derive the asymptotic time complexity of PFT with respect to input and output sizes, as well as its numerical accuracy. Experimental results show that PFT outperforms the current state-of-the-art algorithms, with an order of magnitude of speedup for sufficiently small output sizes, without sacrificing accuracy.

1 INTRODUCTION

How can we efficiently compute a specified part of the Fourier coefficients of a given time-series vector? The discrete Fourier transform (DFT) is a crucial task in several application areas, including anomaly detection (Hou & Zhang, 2007; Rasheed et al., 2009; Ren et al., 2019), data center monitoring (Mueen et al., 2010), and image processing (Shi et al., 2017). Notably, in many such applications, it is well known that the DFT exhibits strong "energy compaction" or "sparsity" in the frequency domain. That is, the Fourier coefficients of the data are mostly small or equal to zero, having a much smaller support compared to the input size. Moreover, the support can often be specified in practice (e.g., a few low-frequency coefficients around the origin). These observations have aroused great interest in an efficient algorithm capable of computing only a specified part of the Fourier coefficients. Accordingly, various approaches have been proposed to address the problem, including the Goertzel algorithm (Burrus & Parks, 1985), Subband DFT (Hossen et al., 1995; Shentov et al., 1995), and Pruned FFT (Markel, 1971; Skinner, 1976; Nagai, 1986; Sorensen & Burrus, 1993; Ailon & Liberty, 2009).
In this paper, we propose a fast Partial Fourier Transform (PFT), an efficient algorithm for computing a part of the Fourier coefficients. Specifically, we consider the following problem: given a complex-valued vector $a$ of size $N$, a non-negative integer $M$, and an integer $\mu$, estimate the Fourier coefficients of $a$ for the interval $[\mu - M, \mu + M]$. The resulting algorithm has a remarkably simple structure, composed of several "smaller" FFTs combined with linear pre- and post-processing steps. Consequently, PFT reduces the number of operations to $O(N + M \log M)$, which is, to the best of our knowledge, the lowest arithmetic complexity achieved so far. Moreover, most subroutines of PFT are already highly optimized algorithms (e.g., matrix multiplication and FFT), so the arithmetic gains readily translate into actual run-time improvements. Furthermore, PFT does not require the input size to be a power of 2, unlike many of its competitors. This is because the idea of PFT derives from a modification of the Cooley-Tukey algorithm (Cooley & Tukey, 1965), which also makes it straightforward to extend the idea to higher dimensionality. Through experiments, we show that PFT outperforms the state-of-the-art FFT libraries, FFTW by Frigo & Johnson (2005) and the Intel Math Kernel Library (MKL), as well as Pruned FFTW, with an order of magnitude of speedup without sacrificing accuracy.

2 RELATED WORK

We describe various existing methods for computing partial Fourier coefficients.

Fast Fourier transform. One may consider simply using the Fast Fourier transform (FFT) and discarding the unnecessary coefficients, where the FFT efficiently computes the full DFT, reducing the arithmetic cost from the naive $O(N^2)$ to $O(N \log N)$.
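To make the problem statement concrete, here is a minimal baseline sketch (ours, not the PFT algorithm itself) that computes the coefficients on $[\mu - M, \mu + M]$ by running a full $O(N \log N)$ FFT and discarding the rest, checked against the $O(MN)$ definition of the DFT:

```python
import numpy as np

def partial_fourier_naive(a, mu, M):
    """Baseline for the problem PFT targets: compute the Fourier
    coefficients of `a` on the index interval [mu - M, mu + M] by
    running a full FFT and discarding the rest."""
    N = len(a)
    A = np.fft.fft(a)                        # all N coefficients
    idx = np.arange(mu - M, mu + M + 1) % N  # wrap negative frequencies
    return A[idx]

rng = np.random.default_rng(0)
a = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# Sanity check against the direct DFT sum for a small interval.
N, mu, M = len(a), 5, 3
direct = np.array([sum(a[n] * np.exp(-2j * np.pi * k * n / N) for n in range(N))
                   for k in range(mu - M, mu + M + 1)])
assert np.allclose(partial_fourier_naive(a, mu, M), direct)
```

PFT targets exactly this output, but with $O(N + M \log M)$ arithmetic instead of a full transform.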
Such an approach has two major advantages: (1) it is straightforward to implement, and (2) it often outperforms competitors because it directly employs the FFT, which has been highly optimized over decades. Therefore, we provide extensive comparisons of PFT and FFT, both theoretically and through run-time evaluations. Experimental results in Section 4.2 show that PFT outperforms the FFT when the output size is small enough (< 10%) compared to the input size.

Goertzel algorithm. The Goertzel algorithm (Burrus & Parks, 1985) is one of the first methods devised for computing only a part of the Fourier coefficients. The technique is essentially the same as computing the individual coefficients of the DFT, thus requiring $O(MN)$ operations for $M$ coefficients of an input of size $N$. Specifically, theoretical analysis characterizes "the $M$ at which the Goertzel algorithm is advantageous over the FFT" as $M < 2 \log N$ (Sysel & Rajmic, 2012). For example, with $N = 2^{22}$, the Goertzel algorithm becomes faster than the FFT only when $M < 44$, while PFT outperforms the FFT for $M < 2^{19} = 524288$ (Figure 1b). A few variants that improve the Goertzel algorithm have been proposed (e.g., Boncelet, 1986). Nevertheless, the performance gain is only a small constant factor, so they are still limited to rare scenarios where only a very small number of coefficients are required.

Subband DFT. The Subband DFT (Hossen et al., 1995; Shentov et al., 1995) consists of two stages: a Hadamard transform that decomposes the input sequence into a set of smaller subsequences, and a correction stage for recombination. The algorithm approximates a part of the coefficients by eliminating subsequences with small energy contributions, and manages to reduce the number of operations to $O(N + M \log N)$. Apart from the arithmetic gain, however, there is a substantial issue of low accuracy with the Subband DFT. Indeed, experimental results in Hossen et al.
(1995) show that the relative approximation error of the method is around $10^{-1}$ (only one significant figure) regardless of the output size. Moreover, the Fourier coefficients can be evaluated to arbitrary numerical precision with PFT, which is not the case for the Subband DFT. Such limitations often preclude one from considering the Subband DFT in applications that require a certain degree of accuracy.

Pruned FFT. FFT pruning (Markel, 1971; Skinner, 1976; Nagai, 1986; Sorensen & Burrus, 1993; Ailon & Liberty, 2009) is another technique for the efficient computation of partial Fourier coefficients. The method is a modification of the standard split-radix FFT, where the edges (operations) in a flow graph are pruned away if they do not affect the specified range of the frequency domain. Besides being almost fully optimized (it uses the FFT as a subroutine), the FFT pruning algorithm is exact and reduces the arithmetic cost to $O(N \log M)$. Thus, along with the full FFT, the pruned FFT is arguably the most appropriate competitor of PFT. Through experiments (Section 4.2), we show that PFT consistently outperforms the pruned FFT, significantly extending the range of output sizes for which the partial Fourier transform is practical.

Finally, we mention that there have been other approaches, but with different settings. For example, Hassanieh et al. (2012a;b) and Indyk et al. (2014) propose the Sparse Fourier transform, which estimates the top-$k$ (the $k$ largest in magnitude) Fourier coefficients of a given vector. The algorithm is useful especially when there is prior knowledge of the number of non-zero coefficients in the frequency domain. Note that our setting does not require any prior knowledge of the given data.

Applications of FFT. We outline various applications of the Fast Fourier transform, to which the partial Fourier transform can potentially be applied.
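Returning to the Goertzel algorithm discussed above, a compact sketch of its standard second-order recurrence (our illustrative code) shows why it costs $O(N)$ per coefficient, hence $O(MN)$ for $M$ coefficients:

```python
import cmath
import math

def goertzel(x, k):
    """Goertzel algorithm: one DFT coefficient
    X[k] = sum_n x[n] * e^{-2*pi*i*k*n/N}
    via a second-order recurrence with a single real coefficient."""
    N = len(x)
    w = 2.0 * math.pi * k / N
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for sample in x:
        s1, s2 = sample + coeff * s1 - s2, s1
    # Final correction step recovers the complex coefficient.
    return cmath.exp(1j * w) * s1 - s2

x = [0.5, 1.0, -0.25, 2.0, 0.75]
k = 2
direct = sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
             for n in range(len(x)))
assert abs(goertzel(x, k) - direct) < 1e-9
```

Because the loop body uses one real multiply-add per sample, the method is attractive only for the very small $M$ regime quoted above.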
FFT has been widely used for anomaly detection (Hou & Zhang, 2007; Rasheed et al., 2009; Ren et al., 2019). Hou & Zhang (2007) and Ren et al. (2019) detect anomalous points of given data by extracting a compact representation with the FFT. Rasheed et al. (2009) use the FFT to detect local spatial outliers that have similar patterns within a region but different patterns from the outside. Several works (Pagh, 2013; Pham & Pagh, 2013; Malik & Becker, 2018) exploit the FFT for efficient operations. Pagh (2013) leverages the FFT to efficiently compute a polynomial kernel used with support vector machines (SVMs). Malik & Becker (2018) propose an efficient Tucker decomposition method using the FFT. In addition, the FFT has been used for fast training of convolutional neural networks (Mathieu et al., 2014; Rippel et al., 2015) and for an efficient recommendation model on a heterogeneous graph (Jin et al., 2020).

3 PROPOSED METHOD

3.1 OVERVIEW

We propose PFT, an efficient algorithm for computing a specified part of the Fourier coefficients. The main challenges and our approaches are as follows:

1. How can we extract essential information for a specified output? Considering that only a specified part of the Fourier coefficients should be computed, we need an algorithm requiring fewer operations than the direct use of a conventional FFT. This is achievable by carefully modifying the Cooley-Tukey algorithm, finding twiddle factors (trigonometric factors) with small oscillations, and approximating them via polynomials (Section 3.2.1).

2. How can we reduce the approximation cost? The approach given above involves an approximation process, which would be computationally demanding. We propose using a base exponential function, by which all data-independent factors can be precomputed, enabling one to bypass the approximation problem at run time (Sections 3.2.2 and 3.3).

3.
How can we further reduce numerical computation? We carefully reorder operations and factorize terms to alleviate the complexity of PFT. These techniques separate all data-independent factors from data-dependent factors, allowing further precomputation. The arithmetic cost of the resulting algorithm is $O(N + M \log M)$, where $N$ and $M$ are the input and output size descriptors, respectively (Sections 3.4 and 3.5.1).

3.2 APPROXIMATION OF TWIDDLE FACTORS

The key idea of our algorithm is to approximate a part of the twiddle factors with small oscillations using polynomial functions, reducing the computational complexity of the DFT caused by the mixture of many twiddle factors. Using polynomial approximation also allows one to carefully control the degree of the polynomial (or the number of approximating terms), enabling fine-tuning of the output range and the approximation bound of the estimate. Our first goal is to find a collection of twiddle factors with small oscillations. This can be achieved by slightly adjusting the summand of the DFT and splitting the summation as in the Cooley-Tukey algorithm (Section 3.2.1). Next, using a proper base exponential function, we give an explicit form of the approximation to the twiddle factors (Section 3.2.2).
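The polynomial-approximation idea can be illustrated with a simple Taylor sketch (ours; PFT's actual construction uses a carefully chosen base exponential function, Section 3.2.2). A slowly oscillating twiddle factor $e^{-2\pi i t}$ with $|t|$ small is captured accurately by a low-degree polynomial:

```python
import cmath
import math

def twiddle_poly(t, r):
    """Degree-r Taylor polynomial of the twiddle factor e^{-2*pi*i*t}
    around 0; accurate when the factor oscillates slowly (|t| small),
    the regime a Cooley-Tukey-style split can arrange."""
    z = -2j * math.pi * t
    return sum(z ** j / math.factorial(j) for j in range(r + 1))

t = 0.02  # slow oscillation
exact = cmath.exp(-2j * math.pi * t)
for r in (1, 3, 5):
    print(r, abs(twiddle_poly(t, r) - exact))  # error shrinks with the degree
```

The rapid error decay with degree is what lets a small, fixed number of polynomial terms replace many distinct twiddle factors.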
The paper presents a fast approximate algorithm for the partial discrete Fourier transform. Given the input signal $a_0, \ldots, a_N$ in the time domain, the algorithm approximately computes the first $O(M)$ frequencies. The running time is $O(r(N + M \log M))$, where $r$ is a parameter that controls the accuracy of the output. The algorithm proceeds by applying polynomial approximation to the twiddle coefficients in the FFT, which makes it possible to reduce the running time.
SP:fc497267ee411f936495a4b404ed87752f12687f
The paper suggests a method for quickly computing a "partial Fourier transform", which basically means that we want only a small range of output frequencies. The main technique is an approximation of so-called "twiddle factors" (which are basically trigonometric functions, or complex exponentials if viewed in the complex plane) using polynomials. The resulting algorithm runs in time $O(N + M \log M)$, where $M$ is the size of the required frequency range and $N$ is the input size. This should be compared with Cooley and Tukey's FFT, which is $O(N \log N)$. In fact, the main idea in the paper uses Cooley and Tukey's decomposition of the expression for the Fourier transform.
SVMax: A Feature Embedding Regularizer
1 INTRODUCTION

A neural network's knowledge is embodied in both its weights and its activations. This difference manifests in how network pruning and knowledge distillation tackle the model compression problem. While the pruning literature (Li et al., 2016; Luo et al., 2017; Yu et al., 2018) compresses models by removing less significant weights, knowledge distillation (Hinton et al., 2015) reduces computational complexity by matching a cumbersome network's last-layer activations (logits). This perspective, of weight-knowledge versus activation-knowledge, emphasizes how the neural network literature is dominated by explicit weight regularizers. In contrast, this paper leverages singular value decomposition (SVD) to regularize a network through its last-layer activations, i.e., its feature embedding. Our formulation is inspired by principal component analysis (PCA). Given a set of points and their covariance, PCA yields the set of orthogonal eigenvectors sorted by their eigenvalues. The principal component (first eigenvector) is the axis with the highest variation (largest eigenvalue), as shown in Figure 1c. The eigenvalues from PCA, and similarly the singular values from SVD, provide insights into the structure of the embedding space. As such, by regularizing the singular values, we reshape the feature embedding.

The main contribution of this paper is to leverage the singular value decomposition of a network's activations to regularize the embedding space. We achieve this objective through singular value maximization (SVMax). The SVMax regularizer is oblivious to both the input classes (labels) and the sampling strategy. Thus it promotes a uniform embedding space in both supervised and unsupervised learning. Furthermore, we present a mathematical analysis of the lower and upper bounds of the mean singular value.
This analysis makes tuning SVMax's balancing hyperparameter easier when the feature embedding is normalized to the unit circle. The SVMax regularizer promotes a uniform embedding space. During training, SVMax speeds up convergence by enabling large learning rates. The SVMax regularizer integrates seamlessly with various ranking losses. We apply the SVMax regularizer to the last feature embedding layer, but the same formulation can be applied to intermediate layers. The SVMax regularizer mitigates model collapse in both retrieval networks and generative adversarial networks (GANs) (Goodfellow et al., 2014; Srivastava et al., 2017; Metz et al., 2017). Furthermore, the SVMax regularizer is useful when training unsupervised feature embedding networks with a contrastive loss (e.g., CPC) (Noroozi et al., 2017; Oord et al., 2018; He et al., 2019; Tian et al., 2019).

In summary, we propose singular value maximization to regularize the feature embedding. In addition, we present a mathematical analysis of the lower and upper bounds of the mean singular value to reduce hyperparameter tuning (Sec. 3). We quantitatively evaluate how the SVMax regularizer significantly boosts the performance of ranking losses (Sec. 4.1), and we provide a qualitative evaluation of SVMax in the unsupervised learning setting via GAN training (Sec. 4.2).

2 RELATED WORK

Network weight regularizers dominate the deep learning regularizer literature because they support a large spectrum of tasks and architectures. Singular value decomposition (SVD) has been applied as a weight regularizer in several recent works (Zhang et al., 2018; Sedghi et al., 2018; Guo & Ye, 2019). Zhang et al. (2018) employ SVD to avoid vanishing and exploding gradients in recurrent neural networks. Similarly, Guo & Ye (2019) bound the singular values of the convolutional layer around 1 to preserve the layer's input and output norms.
A bounded output norm mitigates the exploding/vanishing gradient problem. Weight regularizers share a common limitation: they do not enforce an explicit feature embedding objective and are thus ineffective against model collapse. Feature embedding regularizers have also been extensively studied, especially for classification networks (Rippel et al., 2015; Wen et al., 2016; He et al., 2018; Hoffman et al., 2019; Taha et al., 2020). These regularizers aim to maximize class margins, class compactness, or both simultaneously. For instance, Wen et al. (2016) propose center loss to explicitly learn class representatives and thus promote class compactness. In classification tasks, test samples are assumed to lie within the same classes as the training set, i.e., closed-set identification. However, retrieval tasks, such as product re-identification, assume an open-set setting. Because of this, a retrieval network regularizer should aim to spread features across many dimensions to fully utilize the expressive power of the embedding space. Recent literature (Sablayrolles et al., 2018; Zhang et al., 2017) has recognized the importance of a spread-out feature embedding. However, this literature is tailored to the triplet loss and therefore assumes a particular sampling procedure. In this paper, we leverage SVD as a regularizer because it is simple, differentiable (Ionescu et al., 2015), and class oblivious. SVD has been used to promote low-rank models that learn compact intermediate-layer representations (Kliegl et al., 2017; Sanyal et al., 2019). This helps compress the network and speed up matrix multiplications on embedded devices (iPhone and Raspberry Pi). In contrast, we regularize the embedding space through a high-rank objective. By maximizing the mean singular value, we promote a higher-rank representation, i.e., a spread-out embedding.

3 SINGULAR VALUE MAXIMIZATION (SVMAX)
We first introduce our mathematical notation. Let $I$ denote the image space and $E_I \in \mathbb{R}^d$ denote the feature embedding space, where $d$ is the dimension of the features. A feature embedding network is a function $F_\theta : I \to E_I$, parameterized by the network's weights $\theta$. We quantify the similarity between an image pair $(I_1, I_2)$ via the Euclidean distance in feature space, i.e., $\|E_{I_1} - E_{I_2}\|_2$. During training, a 2D matrix $E \in \mathbb{R}^{b \times d}$ stores the embeddings of $b$ samples, where $b$ is the mini-batch size. Assuming $b \geq d$, the singular value decomposition (SVD) of $E$ provides the singular values $S = [s_1, \ldots, s_i, \ldots, s_d]$, where $s_1$ and $s_d$ are the largest and smallest singular values, respectively. We maximize the mean singular value, $s_\mu = \frac{1}{d}\sum_{i=1}^{d} s_i$, to regularize the network's last-layer activations, i.e., the feature embedding. By maximizing the mean singular value, the deep network spreads out its embeddings. This has the added benefit of implicitly regularizing the network's weights $\theta$. The proposed SVMax regularizer integrates with both supervised and unsupervised feature embedding networks as follows:

$$L_{NN} = L_r - \lambda \frac{1}{d}\sum_{i=1}^{d} s_i = L_r - \lambda s_\mu, \qquad (1)$$

where $L_r$ is the original network loss and $\lambda$ is a balancing hyperparameter.

Lower and Upper Bounds of the Mean Singular Value: One caveat to equation 1 is the hyperparameter $\lambda$. It is difficult to tune because the mean singular value $s_\mu$ depends on the range of values inside $E$ and on its dimensions $(b, d)$. Thus, changing the batch size or the embedding dimension requires a different $\lambda$. To address this, we utilize a common assumption in metric learning: the unit-circle (L2-normalized) embedding assumption. This assumption provides both lower and upper bounds on ranking losses, and it will allow us to impose lower and upper bounds on $s_\mu$. For an L2-normalized embedding $E$, the largest singular value $s_1$ is maximal when the matrix rank of $E$ equals one, i.e., $\mathrm{rank}(E) = 1$ and $s_i = 0$ for $i \in [2, d]$.
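Equation 1 can be sketched in a few lines of NumPy (our illustrative snippet; $b$, $d$, and $\lambda$ are example values, and $L_r$ stands in for whatever ranking loss is used):

```python
import numpy as np

rng = np.random.default_rng(0)
b, d, lam = 32, 8, 0.1  # batch size, embedding dim, balancing weight (illustrative)

# Mini-batch of embeddings, L2-normalized to the unit sphere as in the paper.
E = rng.standard_normal((b, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)

s = np.linalg.svd(E, compute_uv=False)  # singular values s_1 >= ... >= s_d (b >= d)
s_mu = s.mean()                         # mean singular value

L_r = 0.0                               # placeholder for the ranking loss term
loss = L_r - lam * s_mu                 # Eq. (1): maximizing s_mu spreads the embedding
```

In a real training loop the SVD would be taken inside the computation graph (SVD is differentiable, per the paper's citation of Ionescu et al., 2015) so that gradients flow back into the embeddings.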
Horn & Johnson (1991) provide an upper bound on this largest singular value $s_1$ as $s^*(E) \leq \sqrt{\|E\|_1 \|E\|_\infty}$, which holds with equality for all L2-normalized $E \in \mathbb{R}^{b \times d}$ with $\mathrm{rank}(E) = 1$. For an L2-normalized matrix $E$ with $\|E\|_1 = b$ and $\|E\|_\infty = 1$, this gives:

$$s^*(E) = \sqrt{\|E\|_1 \|E\|_\infty} = \sqrt{b}. \qquad (2)$$

Thus, the lower bound $L$ on $s_\mu$ is $L = \frac{s^*(E)}{d} = \frac{\sqrt{b}}{d}$. Similarly, an upper bound is defined on the sum of the singular values (Turkmen & Civciv, 2007; Kong et al., 2018; Friedland & Lim, 2016). This summation is formally known as the nuclear norm $\|E\|_*$ of a matrix. Hu (2015) established an upper bound on this summation using the Frobenius norm $\|E\|_F$ as follows:

$$\|E\|_* \leq \sqrt{\frac{b \times d}{\max(b, d)}}\, \|E\|_F, \qquad (3)$$

where $\|E\|_F = \left(\sum_{i=1}^{b}\sum_{j=1}^{d} |E_{ij}|^2\right)^{\frac{1}{2}} = \sqrt{b}$ because of the L2-normalization assumption. Accordingly, the lower and upper bounds of $s_\mu$ are $[L, U] = \left[\frac{\sqrt{b}}{d}, \frac{1}{d}\sqrt{\frac{b \times d}{\max(b, d)}}\sqrt{b}\right]$. With these bounds, we rewrite our final loss function as follows:

$$L_{NN} = L_r + \lambda \exp\left(\frac{U - s_\mu}{U - L}\right). \qquad (4)$$

The SVMax regularizer grows exponentially within $[1, e]$. We employ this loss function in all our retrieval experiments. It is important to note that the L2-normalization assumption makes tuning $\lambda$ easier, but it is not required. Equation 4 makes the hyperparameter $\lambda$ depend only on the range of $L_r$, which is also bounded for ranking losses.

Lower and Upper Bounds of Ranking Losses: We briefly show that ranking losses are bounded when assuming an L2-normalized embedding. Equations 5 and 6 show the triplet and contrastive losses, respectively, and their corresponding bounds $[L, U]$:

$$TL_{(a, p, n) \in T} = \left[D(\lfloor a \rfloor, \lfloor p \rfloor) - D(\lfloor a \rfloor, \lfloor n \rfloor) + m\right]_+ \xrightarrow{[L, U]} [0, 2 + m], \qquad (5)$$

$$CL_{(x, y) \in P} = (1 - \delta_{x, y})\, D(\lfloor x \rfloor, \lfloor y \rfloor) + \delta_{x, y}\left[m - D(\lfloor x \rfloor, \lfloor y \rfloor)\right]_+ \xrightarrow{[L, U]} [0, 2], \qquad (6)$$

where $[\bullet]_+ = \max(0, \bullet)$ and $m < 2$ is the margin between classes, since 2 is the maximum distance on the unit circle.
b•c and D ( , ) are the embedding and Euclidean distance functions , respectively . In equation 5 , a , p , and n are the anchor , positive , and negative images in a single triplet ( a , p , n ) from the triplets set T . In equation 6 , x and y form a single pair of images from the pairs set P . δx , y = 1 when x and y belong to different classes ; zero otherwise . In the supplementary material , we ( 1 ) show similar analysis for N-pair and angular losses , ( 2 ) provide an SVMax evaluation on small training batches , i.e. , b < d , and ( 3 ) evaluate the computational complexity of SVMax .
This paper proposes a new approach to regularizing the feature embedding of neural networks. The proposed regularizer maximizes the mean singular value of the feature matrix per batch, leading to a uniform spread of features. This enables learning with larger learning rates without the risk of model collapse. The authors derive lower and upper bounds for the proposed singular-value loss, as well as for popular ranking losses used in recent studies, e.g., the triplet and pairwise losses. These bounds help to tune the mixing parameter between the network's loss and the singular-value loss.
SVMax: A Feature Embedding Regularizer
1 INTRODUCTION. A neural network's knowledge is embodied in both its weights and its activations. This difference manifests in how network pruning and knowledge distillation tackle the model compression problem. While the pruning literature (Li et al., 2016; Luo et al., 2017; Yu et al., 2018) compresses models by removing less significant weights, knowledge distillation (Hinton et al., 2015) reduces computational complexity by matching a cumbersome network's last-layer activations (logits). This perspective, of weight-knowledge versus activation-knowledge, emphasizes how the neural network literature is dominated by explicit weight regularizers. In contrast, this paper leverages singular value decomposition (SVD) to regularize a network through its last-layer activations, i.e., its feature embedding. Our formulation is inspired by principal component analysis (PCA). Given a set of points and their covariance, PCA yields the set of orthogonal eigenvectors sorted by their eigenvalues. The principal component (first eigenvector) is the axis with the highest variation (largest eigenvalue), as shown in Figure 1c. The eigenvalues from PCA, and similarly the singular values from SVD, provide insights about the structure of the embedding space. As such, by regularizing the singular values, we reshape the feature embedding. The main contribution of this paper is to leverage the singular value decomposition of a network's activations to regularize the embedding space. We achieve this objective through singular value maximization (SVMax). The SVMax regularizer is oblivious to both the input class (labels) and the sampling strategy. Thus it promotes a uniform embedding space in both supervised and unsupervised learning. Furthermore, we present a mathematical analysis of the mean singular value's lower and upper bounds.
This analysis makes tuning SVMax's balancing hyperparameter easier when the feature embedding is normalized to the unit circle. The SVMax regularizer promotes a uniform embedding space. During training, SVMax speeds up convergence by enabling large learning rates. The SVMax regularizer integrates seamlessly with various ranking losses. We apply the SVMax regularizer to the last feature embedding layer, but the same formulation can be applied to intermediate layers. The SVMax regularizer mitigates model collapse in both retrieval networks and generative adversarial networks (GANs) (Goodfellow et al., 2014; Srivastava et al., 2017; Metz et al., 2017). Furthermore, the SVMax regularizer is useful when training unsupervised feature embedding networks with a contrastive loss (e.g., CPC) (Noroozi et al., 2017; Oord et al., 2018; He et al., 2019; Tian et al., 2019). In summary, we propose singular value maximization to regularize the feature embedding. In addition, we present a mathematical analysis of the mean singular value's lower and upper bounds to reduce hyperparameter tuning (Sec. 3). We quantitatively evaluate how the SVMax regularizer significantly boosts the performance of ranking losses (Sec. 4.1), and we provide a qualitative evaluation of SVMax in the unsupervised learning setting via GAN training (Sec. 4.2). 2 RELATED WORK. Network weight regularizers dominate the deep learning regularizer literature because they support a large spectrum of tasks and architectures. Singular value decomposition (SVD) has been applied as a weight regularizer in several recent works (Zhang et al., 2018; Sedghi et al., 2018; Guo & Ye, 2019). Zhang et al. (2018) employ SVD to avoid vanishing and exploding gradients in recurrent neural networks. Similarly, Guo & Ye (2019) bound the singular values of the convolutional layer around 1 to preserve the layer's input and output norms.
A bounded output norm mitigates the exploding/vanishing gradient problem. Weight regularizers share the common limitation that they do not enforce an explicit feature embedding objective and are thus ineffective against model collapse. Feature embedding regularizers have also been extensively studied, especially for classification networks (Rippel et al., 2015; Wen et al., 2016; He et al., 2018; Hoffman et al., 2019; Taha et al., 2020). These regularizers aim to maximize class margins, class compactness, or both simultaneously. For instance, Wen et al. (2016) propose center loss to explicitly learn class representatives and thus promote class compactness. In classification tasks, test samples are assumed to lie within the same classes as the training set, i.e., closed-set identification. However, retrieval tasks, such as product re-identification, assume an open-set setting. Because of this, a retrieval network regularizer should aim to spread features across many dimensions to fully utilize the expressive power of the embedding space. Recent literature (Sablayrolles et al., 2018; Zhang et al., 2017) has recognized the importance of a spread-out feature embedding. However, this literature is tailored to triplet loss and therefore assumes a particular sampling procedure. In this paper, we leverage SVD as a regularizer because it is simple, differentiable (Ionescu et al., 2015), and class oblivious. SVD has been used to promote low-rank models to learn compact intermediate-layer representations (Kliegl et al., 2017; Sanyal et al., 2019). This helps compress the network and speed up matrix multiplications on embedded devices (iPhone and Raspberry Pi). In contrast, we regularize the embedding space through a high-rank objective. By maximizing the mean singular value, we promote a higher-rank representation, i.e., a spread-out embedding. 3 SINGULAR VALUE MAXIMIZATION (SVMAX).
We first introduce our mathematical notation. Let $\mathcal{I}$ denote the image space and $E_I \in \mathbb{R}^d$ denote the feature embedding space, where $d$ is the dimension of the features. A feature embedding network is a function $F_\theta : \mathcal{I} \rightarrow E_I$, parameterized by the network's weights $\theta$. We quantify similarity between an image pair $(I_1, I_2)$ via the Euclidean distance in feature space, i.e., $\|E_{I_1} - E_{I_2}\|_2$. During training, a 2D matrix $E \in \mathbb{R}^{b \times d}$ stores $b$ samples' embeddings, where $b$ is the mini-batch size. Assuming $b \geq d$, the singular value decomposition (SVD) of $E$ provides the singular values $S = [s_1, \ldots, s_i, \ldots, s_d]$, where $s_1$ and $s_d$ are the largest and smallest singular values, respectively. We maximize the mean singular value, $s_\mu = \frac{1}{d}\sum_{i=1}^{d} s_i$, to regularize the network's last-layer activations, the feature embedding. By maximizing the mean singular value, the deep network spreads out its embeddings. This has the added benefit of implicitly regularizing the network's weights $\theta$. The proposed SVMax regularizer integrates with both supervised and unsupervised feature embedding networks as follows

$$L_{NN} = L_r - \lambda \frac{1}{d}\sum_{i=1}^{d} s_i = L_r - \lambda s_\mu, \qquad (1)$$

where $L_r$ is the original network loss and $\lambda$ is a balancing hyperparameter.

Lower and Upper Bounds of the Mean Singular Value: One caveat to equation 1 is the hyperparameter $\lambda$. It is difficult to tune because the mean singular value $s_\mu$ depends on the range of values inside $E$ and its dimensions $(b, d)$. Thus, changing the batch size or embedding dimension requires a different $\lambda$. To address this, we utilize a common assumption in metric learning: the unit-circle (L2-normalized) embedding assumption. This assumption provides both lower and upper bounds on ranking losses, and it will allow us to impose lower and upper bounds on $s_\mu$. For an L2-normalized embedding $E$, the largest singular value $s_1$ is maximal when the matrix rank of $E$ equals one, i.e., $\mathrm{rank}(E) = 1$ and $s_i = 0$ for $i \in [2, d]$.
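The SVMax term of equation 1 can be sketched in a few lines of NumPy. The batch size `b`, embedding dimension `d`, and weight `lam` below are illustrative placeholders, and the ranking loss $L_r$ is omitted; this is a sketch of the regularizer only, not the authors' full training code:

```python
import numpy as np

def mean_singular_value(E):
    """s_mu = (1/d) * sum_i s_i for a batch embedding matrix E of shape (b, d), b >= d."""
    s = np.linalg.svd(E, compute_uv=False)  # d singular values, sorted descending
    return s.mean()

# Toy batch: b unit-norm (L2-normalized) d-dimensional embeddings.
rng = np.random.default_rng(0)
b, d, lam = 32, 8, 1.0
E = rng.normal(size=(b, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)

s_mu = mean_singular_value(E)
# Equation 1: L_NN = L_r - lam * s_mu (maximizing s_mu spreads out the embeddings).
svmax_term = -lam * s_mu
```

In a real training loop the SVD would run on the framework's differentiable tensors (e.g., `torch.linalg.svd`) so that gradients flow back into the embedding network.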
Horn & Johnson (1991) provide an upper bound on this largest singular value $s_1$ as $s^*(E) \leq \sqrt{\|E\|_1 \|E\|_\infty}$, which holds with equality for all L2-normalized $E \in \mathbb{R}^{b \times d}$ with $\mathrm{rank}(E) = 1$. For an L2-normalized matrix $E$ with $\|E\|_1 = b$ and $\|E\|_\infty = 1$, this gives

$$s^*(E) = \sqrt{\|E\|_1 \|E\|_\infty} = \sqrt{b}. \qquad (2)$$

Thus, the lower bound $L$ on $s_\mu$ is $L = \frac{s^*(E)}{d} = \frac{\sqrt{b}}{d}$. Similarly, an upper bound is defined on the sum of the singular values (Turkmen & Civciv, 2007; Kong et al., 2018; Friedland & Lim, 2016). This summation is formally known as the nuclear norm of a matrix, $\|E\|_*$. Hu (2015) established an upper bound on this summation using the Frobenius norm $\|E\|_F$ as follows

$$\|E\|_* \leq \sqrt{\frac{b \times d}{\max(b, d)}} \, \|E\|_F, \qquad (3)$$

where $\|E\|_F = \big(\sum_{i}\sum_{j} |E_{ij}|^2\big)^{1/2} = \sqrt{b}$ because of the L2-normalization assumption. Accordingly, the lower and upper bounds of $s_\mu$ are $[L, U] = \big[\frac{s^*(E)}{d}, \frac{\|E\|_*}{d}\big]$. With these bounds, we rewrite our final loss function as follows

$$L_{NN} = L_r + \lambda \exp\!\left(\frac{U - s_\mu}{U - L}\right). \qquad (4)$$

The SVMax regularizer grows exponentially within $[1, e]$. We employ this loss function in all our retrieval experiments. It is important to note that the L2-normalization assumption makes $\lambda$ tuning easier, but it is not required. Equation 4 makes the hyperparameter $\lambda$ dependent only on the range of $L_r$, which is also bounded for ranking losses.

Lower and Upper Bounds of Ranking Losses: We briefly show that ranking losses are bounded when assuming an L2-normalized embedding. Equations 5 and 6 show the triplet and contrastive losses, respectively, and their corresponding bounds $[L, U]$:

$$TL_{(a,p,n) \in T} = \big[ D(\lfloor a \rfloor, \lfloor p \rfloor) - D(\lfloor a \rfloor, \lfloor n \rfloor) + m \big]_+ \;\xrightarrow{[L,U]}\; [0, 2+m], \qquad (5)$$

$$CL_{(x,y) \in P} = (1-\delta_{x,y})\, D(\lfloor x \rfloor, \lfloor y \rfloor) + \delta_{x,y}\big[ m - D(\lfloor x \rfloor, \lfloor y \rfloor) \big]_+ \;\xrightarrow{[L,U]}\; [0, 2], \qquad (6)$$

where $[\bullet]_+ = \max(0, \bullet)$ and $m < 2$ is the margin between classes, since 2 is the maximum distance on the unit circle.
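As a quick numeric check of the bounds above (a sketch with arbitrary `b` and `d`, assuming only unit-norm rows), the mean singular value of an L2-normalized batch always lands in $[L, U] = [\sqrt{b}/d,\, \|E\|_*/d]$, and the penalty of equation 4 stays in $[1, e]$:

```python
import numpy as np

rng = np.random.default_rng(1)
b, d = 64, 16
E = rng.normal(size=(b, d))
E /= np.linalg.norm(E, axis=1, keepdims=True)    # unit circle: ||E||_F = sqrt(b)

s = np.linalg.svd(E, compute_uv=False)
s_mu = s.mean()

L = np.sqrt(b) / d                               # lower bound, via equation 2
U = np.sqrt(b * d / max(b, d)) * np.sqrt(b) / d  # upper bound, via equation 3
assert L - 1e-9 <= s_mu <= U + 1e-9

penalty = np.exp((U - s_mu) / (U - L))           # equation 4 regularizer, in [1, e]
assert 1.0 - 1e-9 <= penalty <= np.e + 1e-9
```

Note that for b >= d the upper bound simplifies to U = sqrt(b/d), attained exactly when all d singular values are equal, i.e., when the embedding is perfectly spread out.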
$\lfloor \bullet \rfloor$ and $D(\cdot, \cdot)$ are the embedding and Euclidean distance functions, respectively. In equation 5, $a$, $p$, and $n$ are the anchor, positive, and negative images in a single triplet $(a, p, n)$ from the triplet set $T$. In equation 6, $x$ and $y$ form a single pair of images from the pair set $P$; $\delta_{x,y} = 1$ when $x$ and $y$ belong to different classes, and zero otherwise. In the supplementary material, we (1) show a similar analysis for the N-pair and angular losses, (2) provide an SVMax evaluation on small training batches, i.e., $b < d$, and (3) evaluate the computational complexity of SVMax.
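A minimal sanity check of the triplet-loss bound in equation 5 (the margin `m` and embedding size are illustrative choices; embeddings are drawn at random on the unit sphere):

```python
import numpy as np

def triplet_loss(a, p, n, m):
    """Equation 5 with Euclidean distance D; [x]_+ = max(0, x)."""
    return max(0.0, np.linalg.norm(a - p) - np.linalg.norm(a - n) + m)

rng = np.random.default_rng(2)
m = 0.5                                    # margin, m < 2 on the unit circle
for _ in range(1000):
    # Three random unit-norm embeddings: anchor, positive, negative.
    a, p, n = (v / np.linalg.norm(v) for v in rng.normal(size=(3, 8)))
    loss = triplet_loss(a, p, n, m)
    # Distances on the unit sphere lie in [0, 2], so the loss lies in [0, 2 + m].
    assert 0.0 <= loss <= 2.0 + m
```

The upper bound follows because D(a, p) <= 2 and D(a, n) >= 0; the contrastive bound in equation 6 can be checked the same way.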
This paper proposes a regularization technique called SVMax (singular value maximization) that can mitigate model collapse and enable large learning rates to reduce training computation costs. The singular value decomposition of the network's activations is used to regularize the embedding space under a unit-circle embedding assumption. In addition, a mathematical analysis of the mean singular value's bounds is provided to reduce hyperparameter tuning. The authors evaluate the proposed method on retrieval tasks and generative adversarial networks.
MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training
1 INTRODUCTION. Locality Sensitive Hashing (LSH) has been adapted to address the computational and memory bottlenecks of large-scale neural network (NN) training in natural language processing (Chandar et al., 2016; Rae et al., 2016; Kitaev et al., 2020), computer vision (Chen et al., 2015), and recommendation systems (Spring & Shrivastava, 2017; Chen et al., 2020). Specifically, giant matrix multiplications in linear layers preceding a softmax can be approximated using nearest neighbor search (NNS) techniques, which often rely on LSH. However, LSH methods used in NNs are inefficient. Although LSH achieves sub-linear query time in theory, it is known to suffer from high query and pre-processing (update) overhead in practice (Erik et al., 2018). In the setting of NN training, where the data points for LSH are model parameters, this overhead is exacerbated by a high number of updates due to constantly evolving model parameters. The most established solution for reducing LSH query overhead is data-dependent or learning-based hashing, which uses adaptive hash functions to optimize the LSH bucket distribution for each dataset (Andoni & Razenshteyn, 2015; Dong et al., 2019). These methods reduce query time by incurring a one-time offline cost to learn useful data input patterns in a preprocessing step. The learning techniques are often computationally complex, but can lead to a net reduction in overall query time. However, in NN training, the expensive preprocessing procedure has to be repeated each time the parameters are updated. Naïvely applying these techniques would increase the LSH update overhead rather than reduce it.
A more appropriate data-dependent LSH framework would ideally (i) have a deeper understanding of the training dynamics of model parameters, a setting in which LSH has not been well studied, (ii) be able to perform low-cost updates to account for evolving parameters, and (iii) have better query time while accurately approximating matrix multiplication. We argue that it is unnecessary to view evolving model parameters as streaming data, and not every parameter update requires an LSH update. In fact, LSH updates are necessary only when the NN gradient steps are large enough to cause the model parameters' LSH hash codes to change. Therefore, we count the number of hash code changes that occur for important models such as Transformers and fully-connected NNs from previous work (Chen et al., 2020; Kitaev et al., 2020). We find that only 1% of the hash codes change after each epoch on average for Transformers, and 5% for fully-connected NNs, so most weights do not change on a scale sufficient to trigger an LSH update. Furthermore, the rate of hash code change initially decays exponentially and eventually plateaus, as shown later in Figure 3. We calculate that a 100× speedup is possible with an update oracle. In contrast, current approaches with LSH have not fully exploited this observation. We show in Section 4.2 that they either blindly skip updates for speed, resulting in a 10-point accuracy drop, or suffer from a 20× slowdown. We demonstrate that these slowly changing hash codes provide two opportunities to realize the ideal data-dependent LSH framework described above (shown as the intuition for the scheduler in Figure 1). First, given weight changes, an algorithm could adaptively schedule LSH updates to realize low update overhead. Second, given current weights, we could learn data-dependent hash functions for a better LSH bucket distribution to achieve fast query time.
However, this comes with three challenges: (i) how to characterize the slowly-changing phenomenon without computing hash codes, (ii) how to schedule the updates without the oracle, and (iii) how to learn data-dependent hash functions for shorter query time without compromising on the update time. In this paper, we first provide a general formulation of the problem as dynamic NNS in Section 2. In Section 3, we propose MONGOOSE, a framework for fast and memory-efficient NN training that addresses the above challenges. We show one major observation, slow change, and two key components based on it in MONGOOSE. Specifically:

• In Section 3.1, we measure the ℓ2 norm of the weight changes and their hash code changes during training on a fully-connected NN (more models are studied in Appendix A). We find that (i) the hash codes change slowly, and (ii) both quantities share similar trajectories, where the former's slow change is a necessary condition for the latter's. Therefore, we formally define the slow change of the ℓ2 norm of weight changes in Assumption 3.1, which is the key to building MONGOOSE.

• In Section 3.2, we present an algorithm for scheduling efficient LSH updates. We closely analyze our scheduler's theoretical guarantees and provide an upper bound on its running time, which is proportional to the ℓ2 norm of the weight changes. Therefore, under the slow change assumption, we show that our scheduler is provably faster than previous approaches.

• In Section 3.3, we propose a method of learning parameterized LSH hash functions (e.g., SimHash (Charikar, 2002)) during NN training. Our method utilizes intermediate activations during the forward pass to generate training inputs for tuning LSH hash functions with little overhead. Combining this with our scheduler further decreases the overhead, while providing hash functions that better separate data.
Finally, in Section 4, we demonstrate the efficacy of MONGOOSE on two LSH-NN systems, SLIDE and Reformer. For SLIDE applications, we show up to 6.5× speedup in time and 8% higher accuracy on three datasets. For Reformer applications, we show improvement in perplexity when training language modeling benchmarks from scratch. Moreover, we provide two additional sets of experiments to show how MONGOOSE addresses the aforementioned three challenges separately.

2 RELATED WORK AND PROBLEM SETTING. In this section, we first discuss applications of LSH in the NN training setting and introduce data-dependent LSH techniques. Then we formally define the problem we solve as dynamic NNS.

2.1 LSH FOR EFFICIENT NN TRAINING. Recent works take advantage of LSH as an efficient NNS algorithm to speed up matrix multiplication in neural networks. SLIDE (Chen et al., 2020) is an algorithm that retrieves the neurons with maximum inner product during the forward pass via an LSH-based data structure. In this way, gradients in the backward pass are only computed for neurons with estimated large gradients. Reformer (Kitaev et al., 2020), a variant of the Transformer, similarly uses LSH to reduce the memory bottleneck of self-attention layers over long sequences. One pain point of LSH in this setting is that model weights change throughout training. This necessitates constantly updating the LSH data structure. Figure 2 illustrates what happens when we do not perform such updates. In short, failing to update the LSH data structure as the search data changes degrades its NNS performance. This in turn worsens the quality of the matrix product approximation. In our experiments, we found that failing to update the LSH data structure in SLIDE causes a 28% decrease in top-1 accuracy for a fully-connected NN. (More related work is presented in Appendix E.1.)

2.2 PROBLEM FORMULATION. In this section, we formulate LSH for efficient training as a dynamic NNS problem.
This formulation builds closely on the static NNS problem and the well-known lower bound on LSH complexity. In the static NNS problem, we are given a set of weights $w_1, \cdots, w_n \in \mathbb{R}^d$ and want to construct a data structure (NNS-ds) that supports the QUERY operation, defined as follows: given $x \in \mathbb{R}^d$, QUERY(x) returns a set $S$ of weights $w_i$ that are all "close" to $x$ in a distance measure. More precisely, we require the QUERY(x) operation to be $(c_1, c_2)$-accurate:

Definition 2.1 ($(c_1, c_2)$-accurate). Denote $S$ to be the (random) set returned by QUERY(x). We say QUERY(x) is $(c_1, c_2)$-accurate if (i) for any $i \in [n]$ such that $\langle w_i, x \rangle \geq c_1 \|x\|_2 \|w_i\|_2$, $\Pr[w_i \in S] \geq 1 - 1/\mathrm{poly}(n)$, and (ii) for any $i \in [n]$ such that $\langle w_i, x \rangle < c_2 \|x\|_2 \|w_i\|_2$, $\Pr[w_i \in S] < 1/\mathrm{poly}(n)$.

In our application, we assume $1 > c_1 > c_2 > 0$ to be constants and denote $\rho = (c_2/c_1)^2$. In the dynamic NNS problem, the weights $w_1, \cdots, w_n \in \mathbb{R}^d$ can evolve over time, so we need to update the data structure.

Lemma 2.2 (Andoni & Indyk, 2006). Using LSH, one can achieve $(c_1, c_2)$ accuracy with query time $O(dn^\rho)$, preprocessing time $O(dn^{1+\rho})$, and update time $O(dn^\rho)$.

3 MONGOOSE: A FRAMEWORK FOR LEARNABLE LSH. We present the workflow of our main framework in Figure 1. In Section 3.1, we first show the key observation that motivates the design of MONGOOSE and formally define the slow change assumption. In Section 3.2, we introduce our first component, an algorithm for scheduling LSH updates with provable guarantees based on the assumption. Finally, we present the second component, low-cost learnable LSH hash functions, in Section 3.3. More details about the efficient implementation of MONGOOSE are in Appendix F.

3.1 CHARACTERIZATION OF THE "SLOW CHANGE" PHENOMENON. We first present an observation on model weights and how their LSH hash codes change during training.
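To make the QUERY operation concrete, here is a bare-bones single-table SimHash sketch. The sizes `n`, `d`, and code length `k` are hypothetical, and a real LSH scheme concatenates and repeats hashes across multiple tables to reach the (c1, c2)-accuracy of Lemma 2.2; this shows only the bucketing idea:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 1000, 32, 12
W = rng.normal(size=(n, d))               # weight vectors w_1 ... w_n
planes = rng.normal(size=(k, d))          # SimHash: k random hyperplanes

def hash_code(v):
    """k-bit sign pattern of v against the random hyperplanes."""
    return tuple((planes @ v > 0).astype(int))

# Preprocessing: bucket every weight vector by its hash code.
buckets = {}
for i in range(n):
    buckets.setdefault(hash_code(W[i]), []).append(i)

def query(x):
    """Candidate set S: indices of weights colliding with x (likely high cosine similarity)."""
    return buckets.get(hash_code(x), [])

x = rng.normal(size=d)
S = query(x)
```

Preprocessing here is a single pass over the n weights; a dynamic update for neuron i only moves index i between two buckets, which is what makes skipping unnecessary updates so valuable.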
Based on this observation, we define the slow change assumption. We denote $\Delta W$ as the difference between weights across gradient steps, and $\Delta H$ as the Hamming distance between hash codes during training.

Observation. We plot the ℓ2 norm of $\Delta W$ and $\Delta H$ of the weights during training of a one-layer fully-connected NN in Figure 3. Our most surprising finding is that only 5% of hash codes change after each epoch on average, which implies that for most neurons, $\Delta W$ is not large enough to trigger hash code changes. For both $\Delta W$ and $\Delta H$, there is a sharp drop at the early stages of training before they plateau. (Similar observations on Transformers and further discussion are presented in Appendix A.)

Insights. In the dynamic NNS problem, the input data (model weights) change over time. Without any assumptions about weight updates, naïvely applying LSH will require updating the LSH hash functions at every time step. However, the above observation suggests that we can reasonably assume $\Delta W$ is (roughly) upper-bounded by $\Delta H$. Denote $w$ as the weight matrix, $n$ as the number of neurons, and $w_i$ as the weight vector for neuron $i$. Formally:

Assumption 3.1 (Slow change). Assume NN weights change slowly over time. In particular, we assume there is an upper bound on the expected movement of the weights of the neural network ($C_1$) and an upper bound on the variance ($C_2$). Specifically, we denote the initial weight matrix as $w^{(0)} \in \mathbb{R}^{n \times d}$. We assume the (random) update sequence $w^{(1)}, \cdots, w^{(T)} \in \mathbb{R}^{n \times d}$ satisfies

$$\sum_{i=1}^{n} \left\| \mathbb{E}\big[w_i^{(k+1)}\big] - w_i^{(k)} \right\|_2^2 \leq C_1^2 \quad \text{and} \quad \sum_{i=1}^{n} \left\| \mathrm{Var}\big[w_i^{(k+1)}\big] \right\|_2 \leq C_2^2, \qquad (1)$$

where the expectation and variance are conditioned on $w_i^{(k)}$, for all $k = 0, 1, \cdots, T-1$.
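The quantities ΔW and ΔH can be measured directly. The sketch below simulates one small "gradient step" (the step scale, matrix sizes, and SimHash code length are arbitrary choices for illustration, not the paper's experimental setup) and counts how many sign-hash bits flip:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 5000, 64, 16
planes = rng.normal(size=(k, d))          # fixed SimHash hyperplanes

def codes(M):
    """(n, k) boolean sign codes for the rows of M."""
    return M @ planes.T > 0

W = rng.normal(size=(n, d))
step = 0.01 * rng.normal(size=(n, d))     # small update: ||step_i||_2 << ||w_i||_2

delta_W = float(np.linalg.norm(step))                 # l2 norm of the weight change
delta_H = int((codes(W) != codes(W + step)).sum())    # Hamming distance of hash codes
flip_frac = delta_H / (n * k)
# With small steps only a tiny fraction of hash bits flips, so most neurons
# need no LSH bucket move: the "slow change" phenomenon the scheduler exploits.
```

A scheduler can track a cheap proxy like `delta_W` between steps and trigger a rebucketing pass only when the accumulated movement could plausibly have flipped many codes.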
The authors offer a good insight into the slow change of Locality-Sensitive Hashing (LSH) hash codes for the weights (model parameters) during neural network (NN) training. With this new insight, they introduce a framework, MONGOOSE, with a newly designed scheduling mechanism to reduce the LSH update overhead. The authors also analyze their method and show bounds on the achievable speedup and LSH maintenance cost. Experimental results validate the efficiency and effectiveness of MONGOOSE over standard LSH methods in NN training.
MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training
1 INTRODUCTION . Locality Sensitive Hashing ( LSH ) has been adapted to address the computational and memory bottlenecks of large-scale neural network ( NN ) training in natural language processing ( Chandar et al. , 2016 ; Rae et al. , 2016 ; Kitaev et al. , 2020 ) , computer vision ( Chen et al. , 2015 ) and recommendation systems ( Spring & Shrivastava , 2017 ; Chen et al. , 2020 ) . Specifically , giant matrix multiplications in linear layers preceding a softmax can be approximated using nearest neighbor search ( NNS ) techniques , which often rely on LSH . However , LSH methods used in NNs are inefficient . Although LSH achieves sub-linear query time in theory , it is known to suffer from high query and pre-processing ( update ) overhead in practice ( Erik et al. , 2018 ) . In the setting of NN training , where data points for LSH are model parameters , this overhead is exacerbated by a high number of updates due to constantly evolving model parameters . The most established solution for reducing LSH query overhead is data-dependent or learningbased hashing , which uses adaptive hash functions to optimize the LSH bucket distribution for each dataset ( Andoni & Razenshteyn , 2015 ; Dong et al. , 2019 ) . These methods reduce query time by incurring a one-time offline cost to learn useful data input patterns in a preprocessing step . The learning techniques are often computationally complex , but can lead to a net reduction in overall query time . However , in NN training , the expensive preprocessing procedure has to be repeated each time the parameters are updated . Naïvely applying these techniques would increase the LSH update overhead rather than reduce it . 
A more appropriate data-dependent LSH framework would ideally ( i ) have a deeper understanding of the training dynamics of model parameters , a setting in which LSH has not been well-studied , ( ii ) be able to perform low-cost updates to account for evolving parameters , and ( iii ) have better query time while accurately approximating matrix multiplication . We argue that it is unnecessary to view evolving model parameters as streaming data and not every parameter update requires an LSH update . In fact , LSH updates are necessary only when the NN gra- dient steps are large enough to cause the model parameters ’ LSH hash codes to change . Therefore , we count the number of hash code changes that occur for important models such as Transformers and fully-connected NNs from previous work ( Chen et al. , 2020 ; Kitaev et al. , 2020 ) . We find that only 1 % of the hash codes change after each epoch on average for Transformers , and 5 % for a fullyconnected NNs , so most weights do not change on a scale that is enough to trigger an LSH update . Furthermore , the rate of hash code change initially decays exponentially and eventually plateaus , as shown later in Figure 3 . We calculate that a 100× speed up is possible with an update oracle . In contrast , current approaches with LSH have not fully exploited this observation . We show in Section 4.2 that they either blindly skip updates for speed , resulting in a 10-point accuracy drop , or suffer from a 20× slow-down . We demonstrate that these slowly changing hash codes provide two opportunities to realize the ideal data-dependent LSH framework described above ( shown as the intuition for the scheduler in Figure 1 ) . First , given weight changes , an algorithm could adaptively schedule LSH updates to realize low update overhead . Second , given current weights , we could learn data-dependent hash functions for better LSH bucket distribution to achieve fast query time . 
However , this comes with three challenges : ( i ) how to characterize the slowly-changing phenomenon without computing hash codes , ( ii ) how to schedule the updates without the oracle , and ( iii ) how to learn data-dependent hash functions for shorter query time without compromising on the update time . In this paper , we first provide a general formulation of the problem as dynamic NNS in Section 2 . In Section 3 , we propose MONGOOSE , a framework for fast and memory-efficient NN training to address the above challenges . We show one major observation , slow change , and two key components based on it in MONGOOSE . Specifically , • In Section 3.1 , we measure the ` 2 norm of the weight changes and their hash code changes during training on a fully-connected NN . ( More models are studied in Appendix A ) . We find that ( i ) the hash codes are slowly changing ( ii ) both quantities share similar trajectories , but the former ’ s slow change is the necessary condition of the latter ’ s . Therefore , we formally define the slow change of ` 2 norm of weight changes in Assumption 3.1 , which is the key to build MONGOOSE . • In Section 3.2 , we present an algorithm for scheduling efficient LSH updates . We closely analyze our scheduler ’ s theoretical guarantees and provide an upper bound of its running time , which is proportional to the ` 2 norm of the weight changes . Therefore , under the slow change assumption , we show that our scheduler is provably faster than previous approaches . • In Section 3.3 , we propose a method of learning parameterized LSH hash functions ( e.g. , SimHash ( Charikar , 2002 ) ) during NN training . Our method utilizes intermediate activations during the forward pass to generate training inputs for tuning LSH hash functions with little overhead . Combining this with our scheduler further decreases the overhead , while providing hash functions that better separate data . 
Finally , in Section 4 , we demonstrate the efficacy of MONGOOSE on two LSH-NN systems , SLIDE and Reformer . For SLIDE applications , we show up to 6.5× speedup in time and 8 % higher accuracy on three datasets . For Reformer applications , we show improvement in perplexity when training lan- guage modeling benchmarks from scratch . Moreover , We provide two additional sets of experiments to show how MONGOOSE addresses the aforementioned three challenges separately . 2 RELATED WORK AND PROBLEM SETTING . In this section , we first discuss applications of LSH in the NN training setting and introduce datadependent LSH techniques . Then we formally define the problem we solve as dynamic NNS . 2.1 LSH FOR EFFICIENT NN TRAINING Recent works take advantage of LSH as an efficient NNS algorithm to speed up matrix multiplication in neural networks . SLIDE ( Chen et al. , 2020 ) is an algorithm that retrieves neurons with maximum inner product during the forward pass via an LSH-based data structure . In this way , in the backward pass gradients are only computed for neurons with estimated large gradients . Reformer ( Kitaev et al. , 2020 ) , a variant of Transformer , similarly uses LSH to reduce the memory bottleneck of self-attention layers over long sequences . One pain point of LSH in this setting is that model weights change throughout training . This necessitates constantly updating the LSH data structure . Figure 2 illustrates what happens when we don ’ t perform such updates . In short , failing to update the LSH data structure as the search data changes degrades its NNS performance . This in turn worsens the quality of the matrix product approximation . In our experiments , we found that failing to update the LSH data structure in SLIDE causes a 28 % decrease in top-1 accuracy for a fully-connected NN . ( More related work is presented in Appendix E.1 ) 2.2 PROBLEM FORMULATION . In this section , we formulate LSH for effcient training as a dynamic NNS problem . 
This formulation is closely built on the static NNS problem and the well-known lower bound on LSH complexity . In the static NNS problem , we are given a set of weights w1 , · · · , wn ∈ Rd and want to construct a data structure ( NNS-ds ) that supports the QUERY operation , defined as follows : given x ∈ Rd , QUERY ( x ) returns a set S of weights wi that are all “ close ” to x in a distance measure . More precisely , we require the QUERY ( x ) operation to be ( c1 , c2 ) -accurate : Definition 2.1 ( ( c1 , c2 ) -accurate ) . Denote S to be the ( random ) set returned by QUERY ( x ) . We say QUERY ( x ) is ( c1 , c2 ) accurate if ( i ) for any i ∈ [ n ] such that 〈wi , x〉 ≥ c1‖x‖2‖wi‖2 , Pr [ wi ∈ S ] ≥ 1− 1/ poly ( n ) , ( ii ) for any i ∈ [ n ] such that 〈wi , x〉 < c2‖x‖2‖wi‖2 , Pr [ wi ∈ S ] < 1/poly ( n ) . In our application , we assume 1 > c1 > c2 > 0 to be some constants and denote ρ = ( c2/c1 ) 2 . In the dynamic NNS problem , the weights w1 , · · · , wn ∈ Rd can evolve over time , so we need to update the data structure . Lemma 2.2 ( ( Andoni & Indyk , 2006 ) ) . Using LSH , one can achieve ( c1 , c2 ) accuracy with query time O ( dnρ ) , preprocessing time O ( dn1+ρ ) , updating time O ( dnρ ) . 3 MONGOOSE : A FRAMEWORK FOR LEARNABLE LSH . We present the workflow of our main framework in Figure 1 . In Section 3.1 , we first show the key observation that encourages the design of MONGOOSE and formally define the slow change assumption . In Section 3.2 , we introduce our first component , an algorithm for scheduling LSH updates with provable guarantees based on the assumption . Finally , we present the second component , lowcost learnable LSH hash functions , in Section 3.3 . More details about the efficient implementation of MONGOOSE are in Appendix F . 3.1 CHARACTERIZATION OF “ SLOW CHANGE ” PHENOMENON . We first present an observation on model weights and how their LSH hash codes change during training . 
Based on this observation, we define the slow-change assumption. We denote by $\Delta W$ the difference between weights across gradient steps, and by $\Delta H$ the Hamming distance between hash codes during training. Observation. We plot the $\ell_2$ norm of $\Delta W$ and the $\Delta H$ of the weights while training a one-layer fully-connected NN in Figure 3. Our most surprising finding is that only 5% of hash codes change after each epoch on average, which implies that for most neurons, $\Delta W$ is not large enough to trigger hash-code changes. For both $\Delta W$ and $\Delta H$, there is a sharp drop at the early stages of training before they plateau. (Similar observations on Transformers and further discussion are presented in Appendix A.) Insights. In the dynamic NNS problem, the input data (model weights) change over time. Without any assumptions about weight updates, naïvely applying LSH requires updating the LSH hash functions at every time step. However, the above observation suggests that we can reasonably place an upper bound on $\Delta W$, and hence on $\Delta H$. Denote by $w$ the weight matrix, $n$ the number of neurons, and $w_i$ the weight vector for neuron $i$. Formally: Assumption 3.1 (Slow change). Assume NN weights change slowly over time. In particular, we assume there is an upper bound on the expected movement of the weights of the neural network ($C_1$) and an upper bound on the variance ($C_2$). Specifically, we denote the initial weight matrix as $w^{(0)} \in \mathbb{R}^{n \times d}$. We assume the (random) update sequence $w^{(1)}, \cdots, w^{(T)} \in \mathbb{R}^{n \times d}$ satisfies
$$\sum_{i=1}^{n} \left\| \mathbb{E}\left[ w_i^{(k+1)} \right] - w_i^{(k)} \right\|_2^2 \leq C_1^2 \quad \text{and} \quad \sum_{i=1}^{n} \left\| \mathrm{Var}\left[ w_i^{(k+1)} \right] \right\|_2 \leq C_2^2, \qquad (1)$$
where the expectation and variance are conditioned on $w_i^{(k)}$, for all $k = 0, 1, \cdots, T-1$.
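The slow-change observation can be reproduced in miniature. The sketch below uses a hypothetical random-hyperplane (SimHash-style) hash in place of the paper's hash functions, and measures the fraction of neurons whose codes change after one small simulated weight update:

```python
import random

def hash_code(w, planes):
    # Sign bits of projections onto fixed random hyperplanes -- a simple
    # SimHash-style stand-in for the LSH hash functions in the paper.
    return tuple(sum(pj * wj for pj, wj in zip(p, w)) >= 0 for p in planes)

random.seed(2)
d, n, bits = 16, 200, 8
planes = [[random.gauss(0, 1) for _ in range(d)] for _ in range(bits)]
weights = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

old_codes = [hash_code(w, planes) for w in weights]
# One small simulated gradient step: per-coordinate movement is tiny relative
# to the weight scale, mimicking the bounded drift of Assumption 3.1.
weights = [[wj + random.gauss(0, 0.01) for wj in w] for w in weights]
new_codes = [hash_code(w, planes) for w in weights]

frac_changed = sum(a != b for a, b in zip(old_codes, new_codes)) / n
```

With small per-step movement, `frac_changed` stays low, which is exactly why most LSH updates can be skipped or scheduled sparsely.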
Some neural network runs involve layers with a large number of neurons. These require large matrix-vector products or matrix multiplications, which can slow training/inference. However, if the output of the mat-vec/mul is dominated by a few neurons with which the activation has a large inner product (a matmul can be thought of as a weighted sum of inner products), then the computation can be sped up by approximating the mat-vec/mul with a limited weighted sum over the dominant terms. This requires maintaining an ANNS data structure that is kept up to date with backpropagation. These updates to the ANNS structure have to be done carefully: too frequent and training slows down; too infrequent and the results of the matmul are far off. This paper studies how to do this in a principled way using data-dependent LSH updates and backs it up with experimental data.
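The dominant-terms intuition in this summary can be checked with a toy computation. The sizes, the planted alignment, and the exact top-k selection are all assumptions for illustration (the paper finds the dominant neurons via LSH rather than exhaustively):

```python
import random

random.seed(4)
n, d, k = 200, 16, 20

# Random layer, with a handful of neurons planted to align with the input
# so that a few inner products dominate the output (hypothetical setup).
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
x = [random.gauss(0, 1) for _ in range(d)]
for i in range(k // 2):
    W[i] = [5.0 * xi + random.gauss(0, 0.1) for xi in x]

exact = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]

# Keep only the k entries with the largest magnitude; zero out the rest.
top = set(sorted(range(n), key=lambda i: -abs(exact[i]))[:k])
approx = [exact[i] if i in top else 0.0 for i in range(n)]

# Fraction of the output's energy captured by the k dominant terms.
captured = sum(v * v for v in approx) / sum(v * v for v in exact)
```

When a few neurons dominate, `captured` is close to 1, so the truncated weighted sum is a good approximation of the full mat-vec.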
Similarity Search for Efficient Active Learning and Search of Rare Concepts
1 INTRODUCTION. Large-scale unlabeled datasets contain millions or billions of examples spread over a wide variety of underlying concepts (Chelba et al., 2013; Zhu et al., 2015; Zhang et al., 2015; Wan et al., 2019; Russakovsky et al., 2015; Kuznetsova et al., 2020; Thomee et al., 2016; Abu-El-Haija et al., 2016; Caesar et al., 2019; Lee et al., 2019). Often, these massive datasets skew towards a relatively small number of common concepts, such as cats, dogs, and people (Liu et al., 2019; Zhang et al., 2017; Wang et al., 2017; Van Horn & Perona, 2017). Rare concepts, such as harbor seals, may only appear in a small fraction of the data (less than 1%). However, in many settings, performance on these rare concepts is critical. For example, harmful or malicious content may comprise a small percentage of user-generated content, but it can have an outsize impact on the overall user experience (Wan et al., 2019). Similarly, when debugging model behavior for safety-critical applications like autonomous vehicles, or when dealing with representational biases in models, obtaining data that captures rare concepts allows machine learning practitioners to combat blind spots in model performance (Karpathy, 2018; Holstein et al., 2019; Ashmawy et al., 2019; Karpathy, 2020). Even a simple prediction task like stop sign detection can be challenging given the diversity of real-world data. Stop signs may appear in a variety of conditions (e.g., on a wall or held by a person), be heavily occluded, or have modifiers (e.g., "Except Right Turns") (Karpathy, 2020). While large-scale datasets are core to addressing these issues, finding the relevant examples for these long-tail tasks is challenging. Active learning and search have the potential to significantly automate the process of identifying these rare, high-value data points, but existing methods become intractable at this scale.
Specifically , the goal of active learning is to reduce the cost of labeling ( Settles , 2012 ) . To this end , the learning algorithm is allowed to choose which data to label based on uncertainty ( e.g. , the entropy of predicted class probabilities ) or other heuristics ( Settles , 2011 ; 2012 ; Lewis & Gale , 1994 ) . Active search is a sub-area focused on finding positive examples in skewed distributions ( Garnett et al. , 2012 ) . Because of a concentrated focus on labeling costs , existing techniques , such as uncertainty sampling ( Lewis & Gale , 1994 ) or information density ( Settles & Craven , 2008 ) , perform multiple selection rounds and iterate over the entire unlabeled data to identify the optimal example or batch of examples to label and scale linearly or even quadratically with the size of the unlabeled data . Computational efficiency is becoming an impediment as the size of datasets and model complexities have increased ( Amodei & Hernandez , 2018 ) . Recent work has tried to address this problem with sophisticated methods to select larger and more diverse batches of examples in each selection round and reduce the total number of rounds needed to reach the target labeling budget ( Sener & Savarese , 2018 ; Kirsch et al. , 2019 ; Coleman et al. , 2020 ; Pinsler et al. , 2019 ; Jiang et al. , 2018 ) . Nevertheless , these approaches still scan over all of the examples to find the optimal examples to label in each round and can be intractable for large-scale unlabeled datasets . For example , running a single inference pass over 10 billion images with ResNet-50 ( He et al. , 2016 ) would take 38 exaFLOPs . In this work , we propose Similarity search for Efficient Active Learning and Search ( SEALS ) to restrict the candidates considered in each selection round and vastly reduce the computational complexity of active learning and search methods . 
Empirically , we find that learned representations from pre-trained models can effectively cluster many unseen and rare concepts . We exploit this latent structure to improve the computational efficiency of active learning and search methods by only considering the nearest neighbors of the currently labeled examples in each selection round . This can be done transparently for many selection strategies making SEALS widely applicable . Finding the nearest neighbors for each labeled example in unlabeled data can be performed efficiently with sublinear retrieval times ( Charikar , 2002 ) and sub-second latency on billion-scale datasets ( Johnson et al. , 2017 ) for approximate approaches . While constructing the index for similarity search requires at least a linear pass over the unlabeled data , this computational cost is effectively amortized over many selection rounds or other applications . As a result , our SEALS approach enables selection to scale with the size of the labeled data rather than the size of the unlabeled data , making active learning and search tractable on datasets with billions of unlabeled examples . We empirically evaluated SEALS for both active learning and search on three large scale computer vision datasets : ImageNet ( Russakovsky et al. , 2015 ) , OpenImages ( Kuznetsova et al. , 2020 ) , and a proprietary dataset of 10 billion images from a large internet company . We selected 611 concepts spread across these datasets that range in prevalence from 0.203 % to 0.002 % ( 1 in 50,000 ) of the training examples . We evaluated three selection strategies for each concept : max entropy uncertainty sampling ( Lewis & Gale , 1994 ) , information density ( Settles & Craven , 2008 ) , and most-likely positive ( Warmuth et al. , 2002 ; 2003 ; Jiang et al. , 2018 ) . 
Across datasets , selection strategies , and concepts , SEALS achieved similar model quality and nearly the same recall of the positive examples as the baseline approaches , while improving the computational complexity by orders of magnitude . On ImageNet with a budget of 2,000 binary labels per concept ( ˜0.31 % of the unlabeled data ) , all baseline and SEALS approaches were within 0.011 mAP of full supervision and recalled over 50 % of the positive examples . On OpenImages , SEALS reduced the candidate pool to 1 % of the unlabeled data on average while remaining within 0.013 mAP and 0.1 % recall of the baseline approaches . On the proprietary dataset with 10 billion images , SEALS needed an even smaller fraction of the data , about 0.1 % , to match the baseline , which allowed SEALS to run on a single machine rather than a cluster . To the best of our knowledge , no other works have performed active learning at this scale . We also applied SEALS to the NLP spoiler detection dataset Goodreads ( Wan et al. , 2019 ) , where it achieved the same recall as the baseline approaches while only considering less than 1 % of the unlabeled data . Together , these results demonstrate that SEALS ’ improvements to computational efficiency make active learning and search tractable for even billion-scale datasets . 2 RELATED WORK . Active learning ’ s iterative retraining combined with the high computational complexity of deep learning models has led to significant work on computational efficiency ( Sener & Savarese , 2018 ; Kirsch et al. , 2019 ; Pinsler et al. , 2019 ; Coleman et al. , 2020 ; Yoo & Kweon , 2019 ; Mayer & Timofte , 2020 ; Zhu & Bento , 2017 ) . One branch of recent work has focused on selecting large batches of data to minimize the amount of retraining and reduce the number of selection rounds necessary to reach a target budget ( Sener & Savarese , 2018 ; Kirsch et al. , 2019 ; Pinsler et al. , 2019 ) . 
These approaches introduce novel techniques to avoid selecting highly similar or redundant examples and ensure the batches are both informative and diverse . In comparison , our work aims to reduce the number of examples considered in each selection round and complements existing work on batch active learning . Many of these approaches sacrifice computational complexity to ensure diversity , and their selection methods can scale quadratically with the size of the unlabeled data . Combined with our method , these selection methods scale with the size of the labeled data rather than the unlabeled data . Outside of batch active learning , other work has tried to improve computational efficiency by either using much smaller models as cheap proxies during selection ( Yoo & Kweon , 2019 ; Coleman et al. , 2020 ) or by generating examples ( Mayer & Timofte , 2020 ; Zhu & Bento , 2017 ) . Using a smaller model reduces the amount of computation per example , but unlike our approach , it still requires making multiple passes over the entire unlabeled pool of examples . The generative approaches ( Mayer & Timofte , 2020 ; Zhu & Bento , 2017 ) , however , enable sub-linear runtime complexity like our approach . Unfortunately , they struggle to match the label-efficiency of traditional approaches because the quality of the generated examples is highly variable . Active search is a sub-area of active learning that focuses on highly skewed class distributions ( Garnett et al. , 2012 ; Jiang et al. , 2017 ; 2018 ; 2019 ) . Rather than optimizing for model quality , active search aims to find as many examples from the minority class as possible . Prior work has focused on applications such as drug discovery , where the dataset sizes are limited , and labeling costs are exceptionally high . Our work similarly focuses on skewed distributions . 
However , we consider novel active search settings in image and text where the available unlabeled datasets are much larger , and computational efficiency is a significant bottleneck . k nearest neighbor ( k-NN ) classifiers are popular models in active learning and search because they do not require an explicit training phase ( Joshi et al. , 2012 ; Wei et al. , 2015 ; Garnett et al. , 2012 ; Jiang et al. , 2017 ; 2018 ) . The prediction and score for each unlabeled example can be updated immediately after each new batch of labels . In comparison , our SEALS approach uses k-NN algorithms for similarity search to create and expand the candidate pool and not as a classifier . This is an important but subtle difference . While prior work avoids expensive training by using k-NN classifiers , these approaches still require evaluating all of the unlabeled examples , which can still be prohibitively expensive on large-scale datasets like the ones we consider here . SEALS targets the selection phase rather than training , presenting a novel and complementary approach . 3 METHODS . In this section , we outline the problems of active learning ( Section 3.1 ) and search ( Section 3.2 ) formally as well as the selection methods we accelerate using SEALS . For both , we examine the pool-based setting , where all of the unlabeled data is available at once , and examples are selected in batches to improve computational efficiency , as mentioned above . Then in Section 3.3 , we describe our SEALS approach and how it further improves computational efficiency in both settings . 3.1 ACTIVE LEARNING . Pool-based active learning is an iterative process that begins with a large pool of unlabeled data U = { x1 , . . . , xn } . Each example is sampled from the space X with an unknown label from the label space Y = { 1 , . . . , C } as ( xi , yi ) . 
We additionally assume a feature-extraction function $G_z$ that embeds each $x_i$ as a latent variable $G_z(x_i) = z_i$, and that the $C$ concepts are unequally distributed. Specifically, there are one or more valuable rare concepts $R \subset C$ that appear in less than 1% of the unlabeled data. For simplicity, we frame this as $|R|$ binary classification problems solved independently rather than one multi-class classification problem with $|R|$ concepts. Initially, each rare concept has a small number of positive examples and several negative examples that serve as a labeled seed set $L_r^0$. The goal of active learning is to take this seed set and select up to a budget of $T$ examples to label that produce a model $A_r^T$ that achieves low error. For each round $t$ in pool-based active learning, the most informative examples are selected according to the selection strategy $\phi$ from a pool of candidate examples $P_r$ in batches of size $b$ and labeled, as shown in Algorithm 1. For the baseline approach, $P_r = \{G_z(x) \mid x \in U\}$, meaning that all the unlabeled examples are considered to find the global optimum according to $\phi$. Between rounds, the model $A_r^t$ is trained on all of the labeled data $L_r^t$, allowing the selection process to adapt. In this paper, we considered max-entropy (MaxEnt) uncertainty sampling (Lewis & Gale, 1994):
$$\phi_{\text{MaxEnt}}(z) = -\sum_{\hat{y}} P(\hat{y} \mid z; A_r) \log P(\hat{y} \mid z; A_r)$$
and information density (ID) (Settles & Craven, 2008):
$$\phi_{\text{ID}}(z) = \phi_{\text{MaxEnt}}(z) \times \left( \frac{1}{|P_r|} \sum_{z_p \in P_r} \mathrm{sim}(z, z_p) \right)^{\beta}$$
where $\mathrm{sim}(z, z_p)$ is the cosine similarity of the embedded examples and $\beta = 1$. Note that for binary classification, max entropy is equivalent to least confidence and margin sampling, which are also popular criteria for uncertainty sampling (Settles, 2009). While max-entropy uncertainty sampling only requires a linear pass over the unlabeled data, ID scales quadratically with $|U|$ because it weights each example's informativeness by its similarity to all other examples.
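As a minimal sketch of the two selection criteria above, using pure-Python entropy and cosine similarity (the model's predicted probabilities are assumed to be given):

```python
import math

def max_ent(probs):
    # phi_MaxEnt: entropy of the model's predicted class distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def info_density(z, probs, pool, beta=1.0):
    # phi_ID: entropy weighted by the mean cosine similarity of the
    # embedding z to the candidate pool, raised to the power beta.
    def cos(a, b):
        dot = sum(ai * bi for ai, bi in zip(a, b))
        na = math.sqrt(sum(ai * ai for ai in a))
        nb = math.sqrt(sum(bi * bi for bi in b))
        return dot / (na * nb)
    density = sum(cos(z, zp) for zp in pool) / len(pool)
    return max_ent(probs) * density ** beta

uncertain = max_ent([0.5, 0.5])    # most uncertain binary prediction
confident = max_ent([0.99, 0.01])  # nearly certain prediction
```

The quadratic cost of ID is visible here: scoring one example requires a similarity computation against every element of `pool`.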
To improve computational performance, the average similarity score for each example can be cached after the first selection round, so subsequent rounds scale linearly. This optimization only works when $G_z$ is fixed and would not apply to dynamic similarity calculations like those in Sener & Savarese (2018). We explored the greedy k-centers approach from Sener & Savarese (2018) but found that it never outperformed random sampling for our experimental setup. Unlike MaxEnt and ID, k-centers does not consider the predicted labels. It tries to achieve high coverage over the entire candidate pool, of which rare concepts make up a small fraction by definition, making it ineffective for our setting.

Algorithm 1 BASELINE APPROACH
Input: unlabeled data $U$, labeled seed set $L_r^0$, feature extractor $G_z$, selection strategy $\phi(\cdot)$, batch size $b$, labeling budget $T$
1: $L_r = \{(G_z(x), y) \mid (x, y) \in L_r^0\}$
2: $P_r = \{G_z(x) \mid x \in U \text{ and } (x, \cdot) \notin L_r^0\}$
3: repeat
4:   $A_r = \mathrm{train}(L_r)$
5:   for 1 to $b$ do
6:     $z^* = \arg\max_{z \in P_r} \phi(z)$
7:     $L_r = L_r \cup \{(z^*, \mathrm{label}(x^*))\}$
8:     $P_r = P_r \setminus \{z^*\}$
9:   end for
10: until $|L_r| = T$

Algorithm 2 SEALS APPROACH
Input: unlabeled data $U$, labeled seed set $L_r^0$, feature extractor $G_z$, selection strategy $\phi(\cdot)$, batch size $b$, labeling budget $T$, k-nearest-neighbors implementation $N(\cdot, \cdot)$
1: $L_r = \{(G_z(x), y) \mid (x, y) \in L_r^0\}$
2: $P_r = \bigcup_{(z, y) \in L_r} N(z, k)$
3: repeat
4:   $A_r = \mathrm{train}(L_r)$
5:   for 1 to $b$ do
6:     $z^* = \arg\max_{z \in P_r} \phi(z)$
7:     $L_r = L_r \cup \{(z^*, \mathrm{label}(x^*))\}$
8:     $P_r = (P_r \setminus \{z^*\}) \cup N(z^*, k)$
9:   end for
10: until $|L_r| = T$
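A compact sketch of one SEALS selection round (Algorithm 2). Brute-force Euclidean k-NN and a random stand-in for the scoring function φ are assumptions for illustration; a real deployment would use an approximate similarity index and a trained model:

```python
import heapq
import random

def knn(z, pool, k):
    # Brute-force Euclidean k-NN; a real SEALS deployment would use an
    # approximate similarity index built once over the unlabeled data.
    return heapq.nsmallest(
        k, pool, key=lambda p: sum((a - b) ** 2 for a, b in zip(z, p)))

def seals_round(candidates, labeled, pool, score, b, k):
    # One selection round of Algorithm 2: label the b highest-scoring
    # candidates, expanding the pool with each new point's k neighbors.
    for _ in range(b):
        z_star = max(candidates, key=score)
        candidates.discard(z_star)
        labeled.add(z_star)
        candidates |= {p for p in knn(z_star, pool, k) if p not in labeled}
    return candidates, labeled

random.seed(3)
pool = {(random.random(), random.random()) for _ in range(500)}
seed = set(random.sample(sorted(pool), 5))       # labeled seed set L0
candidates = set()
for z in seed:                                   # P = union of N(z, k)
    candidates |= {p for p in knn(z, pool, 10) if p not in seed}

# A random score stands in for the model-based strategy phi (an assumption).
candidates, labeled = seals_round(
    candidates, set(seed), pool, score=lambda z: random.random(), b=5, k=10)
```

Note that the candidate pool grows with the labeled set (at most `k` new points per label), not with the unlabeled pool, which is the source of SEALS' efficiency.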
This paper proposes a new method (SEALS) to accelerate active learning and active search by exploiting the small cardinality of rare classes relative to large-scale datasets. To leverage this skewness, the authors restrict the candidate pool for labelling mainly to the nearest neighbours of the currently labelled set (except for the initial seed set). The authors conduct very detailed experiments on active learning and active search tasks over three large-scale datasets to validate the efficiency and effectiveness of SEALS.
This paper proposes an active learning and active search approach that targets samples from rare classes in very large unlabeled datasets with highly imbalanced class distributions. This is a common scenario in real-world applications, where rare cases can be critical to categorize accurately (e.g., endangered species). The authors propose an approach that targets these rare cases while reducing the number of examples considered overall, and that scales with the amount of labeled data rather than the amount of unlabeled data, which allows them to consider datasets of up to billions of examples.
SP:0a155411707f21e84ec5d4ec1d53e29d5a622074
Robust Loss Functions for Complementary Labels Learning
In ordinary-label learning, the correct label is given for each training sample. Similarly, a complementary label is provided for each training sample in complementary-label learning. A complementary label indicates a class that the example does not belong to. Robust learning of classifiers has been investigated from many viewpoints under label noise, but little attention has been paid to complementary-label learning. In this paper, we present a new algorithm for complementary-label learning based on the robustness of the loss function. We also provide two sufficient conditions on a loss function so that the minimizer of the risk for complementary labels is theoretically guaranteed to be consistent with the minimizer of the risk for ordinary labels. Finally, empirical results validate our method's superiority over current state-of-the-art techniques. In particular, on CIFAR-10, our algorithm achieves much higher test accuracy than the gradient ascent algorithm, with fewer than half the parameters of the ResNet-34 they used.

1 INTRODUCTION

Deep neural networks have exhibited excellent performance in many real-world applications. Yet, their superior performance relies on correctly labeled large-scale training sets, and labeling such datasets is time-consuming and expensive. For example, crowd-workers need to select the correct label for a sample from 100 labels for CIFAR-100. To mitigate this problem, researchers have proposed many solutions for learning from weak supervision: noisy-label learning Li et al. (2017); Hu et al. (2019); Lee et al. (2018); Xia et al. (2019), semi-supervised learning Zhai et al. (2019); Berthelot et al. (2019); Rasmus et al. (2015); Miyato et al. (2019); Sakai et al. (2017), similar-unlabeled learning Tanha (2019); Bao et al.
(2018); Zelikovitz & Hirsh (2000), unlabeled-unlabeled learning Lu et al. (2018); Chen et al. (2020a;b), positive-unlabeled learning Elkan & Noto (2008); du Plessis et al. (2014); Kiryo et al. (2017), contrastive learning Chen et al. (2020a;b), partial-label learning Cour et al. (2011); Feng & An (2018); Wu & Zhang (2018), and others. We investigate complementary-label learning Ishida et al. (2017) in this paper. A complementary label only indicates that a given class label of a sample is incorrect. From the viewpoint of label noise, complementary labels can also be viewed as noisy labels, but without any true labels in the training set. Our task is to learn a classifier from the given complementary labels that predicts a correct label for a given sample. Collecting complementary labels is much easier and more efficient than precisely choosing the true class from many candidate classes. For example, suppose the labeling system uniformly chooses a label for a sample: it has probability 1/k of being an ordinary label but probability (k−1)/k of being a complementary label. Moreover, another potential application of complementary labels is data privacy: for some privacy-sensitive issues, it is much easier to collect complementary labels than ordinary labels. Robust learning of classifiers has been investigated from many viewpoints in the presence of label noise Ghosh et al. (2017), but little attention has been paid to complementary-label learning. We call a loss function robust if the minimizer of the risk under that loss function with complementary labels is the same as that with ordinary labels. This robustness of risk minimization relies on the loss function used during training. This paper presents a general risk formulation under which categorical cross-entropy loss (CCE) can be used to learn with complementary labels and achieve robustness.
We then offer some new analytical results on robust loss functions under complementary labels. Robustness of risk minimization helps select the best hyper-parameters by empirical risk, since there are no ordinary labels in the validation set. We derive two sufficient conditions on a loss function to be robust for learning with complementary labels. We then examine some popular loss functions used for ordinary-label learning, such as CCE, mean squared error (MSE), and mean absolute error (MAE), and show that CCE and MAE satisfy our sufficient conditions. Finally, we present a learning algorithm for learning with complementary labels, named the exclusion algorithm. The empirical results demonstrate the value of our theoretical results and verify our algorithm's superiority over current state-of-the-art methods. The contributions of this paper can be summarized as:

• We present a general risk formulation that can be viewed as a framework for employing any loss function satisfying our sufficient conditions to learn from complementary labels.
• We derive two sufficient conditions on a loss function to be robust for learning with complementary labels.
• We prove that the minimizer of the risk for complementary labels is theoretically guaranteed to be consistent with the minimizer of the risk for ordinary labels.
• The empirical results validate the superiority of our method over current state-of-the-art methods.

2 RELATED WORKS

A complementary label indicates that the pattern does not belong to the given label. Learning from complementary labels is a relatively new topic in supervised learning. It was first proposed by Ishida et al. (2017) to address the time and expense of tagging large-scale datasets. In their early work, Ishida et al.
(2017) assume each complementary label has the same probability of being selected for a sample. Then, based on the ordinary one-versus-all (OVA) and pairwise-comparison (PC) multi-class loss functions of Zhang (2004), they proposed a modified loss for learning with complementary labels. Even though they provided a theoretical analysis with a statistical consistency guarantee, the loss function faces a strict restriction: it needs to be symmetric (ℓ(z) + ℓ(−z) = 1). Such a severe limitation allows only the OVA and PC loss functions with symmetric non-convex binary losses; the categorical cross-entropy loss widely used in deep learning cannot be employed in the two losses they defined. Later, Yu et al. (2018a) assume there is some bias among the complementary labels and present a different formulation for biased complementary labels, using the forward loss-correction technique of Patrini et al. (2017) to modify traditional loss functions. Their suggested risk estimator is not necessarily unbiased, and they proved that learning with complementary labels can theoretically converge to the optimal classifier learned from ordinary labels, based on the estimated transition matrix. However, the key to the forward loss-correction technique is estimating the transition matrix correctly. Hence, one needs to assess the transition matrix beforehand, which is difficult without strong assumptions. Moreover, such a setup restricts the complementary-label space in order to provide more information. Thus, it becomes necessary to encourage workers to provide more challenging complementary labels, for example by giving higher rewards for specific classes; otherwise, the complementary labels given by workers may be too obvious and uninformative. For example, stating that class three and class five are not class one is evident but uninformative.
This paper focuses on the uniform (symmetric) assumption and studies a random distribution as a biased assumption (asymmetric or non-uniform). Based on the uniform assumption, Ishida et al. (2019) proposed an unbiased estimator with a general loss function for complementary labels. It makes any loss function available for use: not only the softmax cross-entropy loss but other loss functions can also be utilized. Their new framework is a generalization of previous complementary-label learning Ishida et al. (2017). However, their proposed unbiased risk estimator has the issue that the classification risk can attain negative values during learning, leading to overfitting Ishida et al. (2019). They therefore offered a non-negative correction to the original unbiased risk estimator, which is no longer guaranteed to be unbiased. In this paper, our proposed risk estimator is also not unbiased, but the minimizer of the risk for complementary labels is theoretically guaranteed to be consistent with the minimizer of the risk for ordinary labels, under both uniform and non-uniform distributions.

3 PRELIMINARIES

3.1 LEARNING WITH ORDINARY LABELS

In the context of learning with ordinary labels, let X ⊂ R^d be the feature space and Y = {1, ..., k} be the class labels. A multi-class loss function is a map L(f_θ(x), y): X × Y → R⁺. A classifier can be presented as:

h(x) = argmax_{i ∈ [k]} f^{(i)}_θ(x),    (1)

where f_θ(x) = (f^{(1)}_θ(x), ..., f^{(k)}_θ(x)), θ is the set of parameters of the CNN, and f^{(i)}_θ(x) is the probability prediction for the corresponding class i. Even though h(x) is the final classifier, we abuse notation and call f_θ(x) itself the classifier.
Given a dataset S = {(x_i, y_i)}_{i=1}^N and a loss function L, for all f_θ ∈ F (F is the function space searched over), the L-risk is defined as:

R^S_L(f_θ) = E_D[L(f_θ(x), y)] = E_S[L(f_θ(x), y)].    (2)

Some popular multi-class loss functions are CCE, MAE, and MSE. Specifically,

ℓ(f_θ(x), y) = ℓ(u, y) =
  ∑_{i=1}^k e^{(i)}_y log(1/μ_i) = log(1/μ_y)    (CCE),
  ‖e_y − u‖_1 = 2 − 2μ_y    (MAE),
  ‖e_y − u‖_2^2 = ‖u‖_2^2 + 1 − 2μ_y    (MSE),    (3)

where u = f_θ(x) = (μ_1, ..., μ_k) and e_y is the one-hot vector whose y-th component equals 1 and whose other components are 0. The goal of multi-class classification is to learn a classifier f_θ(x) that minimizes the classification risk R^S_L under the multi-class loss L.

3.2 LEARNING WITH COMPLEMENTARY LABELS

In contrast to ordinary-label learning, a complementary-label (CL) dataset contains only labels indicating a class that each sample does not belong to. Corresponding to the ordinary-label dataset S, the independent and identically distributed (i.i.d.) complementary-label dataset is denoted:

S̄ = {(x, ȳ)}_{i=1}^N,    (4)

where N is the size of the dataset S̄ and ȳ indicates that pattern x does not belong to class ȳ. The general label distribution of dataset S̄ is the k×k matrix with zeros on the diagonal:

P(ȳ|y) = [ 0, p_12, ..., p_1k ; p_21, 0, ..., p_2k ; ... ; p_k1, ..., p_{k(k−1)}, 0 ]_{k×k},    (5)

where p_ij denotes the probability that a pattern x of the i-th class is labeled as j, with ∑_{j=1}^k p_ij = 1 and p_ij ≠ 0 for j ≠ i. Supposing that the label system uniformly selects a label from {1, ..., k} \ {y} for each sample x, Eq. (5) becomes the matrix with every off-diagonal entry equal to 1/(k−1):

P(ȳ|y) = [ 0, 1/(k−1), ..., 1/(k−1) ; 1/(k−1), 0, ..., 1/(k−1) ; ... ; 1/(k−1), ..., 1/(k−1), 0 ]_{k×k}.    (6)

Yu et al. (2018b) make the strong assumption that there is some bias in Eq. (5), while Ishida et al. (2017; 2019) focus on the assumption of Eq. (6). In this paper, we study both kinds of distribution.
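The closed forms in Eq. (3) are easy to verify numerically. The snippet below is an illustrative check (not code from the paper) that, for a probability vector u and one-hot e_y, MAE reduces to 2 − 2μ_y and MSE to ‖u‖²₂ + 1 − 2μ_y:

```python
import numpy as np

u = np.array([0.6, 0.3, 0.1])   # u = f_theta(x), a probability vector (sums to 1)
y = 0
e_y = np.eye(3)[y]              # one-hot vector for the true class

cce = np.sum(e_y * np.log(1.0 / u))   # = log(1 / mu_y)
mae = np.abs(e_y - u).sum()           # closed form: 2 - 2*mu_y
mse = ((e_y - u) ** 2).sum()          # closed form: ||u||_2^2 + 1 - 2*mu_y
```

The identities hold for any u on the probability simplex, which is what makes the simplifications in Eq. (3) valid.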
4 METHODOLOGY

In this section, we first propose a general risk formulation for learning with complementary labels, and then prove that some loss functions designed for ordinary-label learning, such as categorical cross-entropy and mean absolute error, are robust to complementary labels under our risk formulation.

4.1 GENERAL RISK FORMULATION

The goal of learning with complementary labels is to learn a classifier that predicts a correct label for any sample drawn from the same distribution. Because the model is given no ordinary labels, we need to design a loss function or model for learning with complementary labels. The key to learning a classifier with ordinary labels is to maximize the predicted probability of the true label; one intuitive way to do so is to minimize the predicted probability of the complementary labels. In this paper, with slight abuse of notation, we let

u = f_θ(x) = (μ_1, ..., μ_k),  v = 1 − f_θ(x) = (1 − μ_1, ..., 1 − μ_k).    (7)

Definition 1. (CL-loss) Given a loss function ℓ designed for ordinary-label learning, the loss for learning with complementary labels is:

ℓ̄(f_θ(x), ȳ) = ℓ̄(u, ȳ) = ℓ(v, ȳ).    (8)

4.2 THEORETICAL RESULTS

Definition 2. (Robust loss) In the framework of risk minimization, a loss function is called robust if the minimizer of the risk with complementary labels is the same as with ordinary labels, i.e.,

R^S̄_ℓ̄(f_θ*) − R^S̄_ℓ̄(f_θ) ≤ 0 ⇒ R^S_ℓ(f_θ*) − R^S_ℓ(f_θ) ≤ 0.    (9)

Theorem 1. Together with ℓ, ℓ̄ is a robust loss function for learning with complementary labels if ℓ̄ satisfies:

∂ℓ̄(u, ȳ)/∂μ_ȳ > 0  and  ∂ℓ̄(u, ȳ)/∂μ_i = 0 for all i ∈ {1, ..., k} \ {ȳ}.    (10)

Note that Eq. (10) means that ℓ̄ is a monotonically increasing function of μ_ȳ only.

Proof.
Recall that for any f_θ and any ℓ,

R^S_ℓ(f_θ) = E_{(x,y)}[ℓ(f_θ(x), y)] = (1/|S|) ∑_{(x,y)∈S} ℓ(f_θ(x), y).    (11)

For any complementary-label distribution as in Eq. (5) and any loss function ℓ, we have

R^S̄_ℓ̄(f_θ) = E_{(x,ȳ)}[ℓ̄(f_θ(x), ȳ)] = (1/|S̄|) ∑_{i=1}^k ∑_{x∈S_i} ∑_{j≠i} p_ij ℓ̄(f_θ(x), j),    (12)

where p_ij is a component of the complementary-label distribution matrix P and S_1 ∪ ... ∪ S_k = S. Supposing that f_θ* is the optimal classifier learned from the complementary labels, for all f ∈ F, where F is the function space searched over, we have

R^S̄_ℓ̄(f_θ*) − R^S̄_ℓ̄(f_θ) = (1/|S̄|) ∑_{i=1}^k ∑_{x∈S_i} ∑_{j≠i} p_ij (ℓ̄(f_θ*(x), j) − ℓ̄(f_θ(x), j)) ≤ 0,    (13)

where p_ij ≠ 0. If there exists x′ ∈ S̄ such that ℓ̄(f_θ*(x′), ȳ) > ℓ̄(f_θ(x′), ȳ), let f_θ′ satisfy

f_θ′(x) = f_θ*(x) for x ∈ S̄ \ {x′},  f_θ′(x′) = f_θ(x′).    (14)

Then, by Eqs. (12) and (13), R^S̄_ℓ̄(f_θ′) < R^S̄_ℓ̄(f_θ*), so f_θ* is not the optimal classifier, contradicting the hypothesis that f_θ* is optimal. Thus, for all ȳ ∈ {1, ..., k} \ {y}, we have

ℓ̄(f_θ*(x), ȳ) ≤ ℓ̄(f_θ(x), ȳ).    (15)

By Eq. (10), ℓ̄ is monotonically increasing in μ_ȳ only, so for all ȳ ∈ {1, ..., k} \ {y},

f^{(ȳ)}_{θ*}(x) ≤ f^{(ȳ)}_θ(x).    (16)

Thus,

f^{(y)}_{θ*}(x) ≥ f^{(y)}_θ(x), since f^{(y)}_θ(x) = 1 − ∑_{ȳ≠y} f^{(ȳ)}_θ(x),    (17)

and then

ℓ(f_θ*(x), y) ≤ ℓ(f_θ(x), y),    (18)

thus

R^S_ℓ(f_θ*) − R^S_ℓ(f_θ) ≤ 0.    (19)

Theorem 2. Together with ℓ, ℓ̄ is a robust loss function for learning with complementary labels under a symmetric or uniform distribution if ℓ̄ satisfies:

∂ℓ̄(u, ȳ)/∂μ_ȳ > 0  and  ∑_{i=1}^k ℓ̄(u, i) = C (C a constant).    (20)

It should be noted that Eq.
(20) means that ℓ̄ is a symmetric loss (∑_{i=1}^k ℓ̄(u, i) = C) and that ℓ̄ is monotonically increasing in μ_ȳ for every ȳ.

Proof. For the complementary-label distribution in Eq. (6) and any loss function ℓ, we have

R^S̄_ℓ̄(f_θ) = E_{(x,ȳ)}[ℓ̄(f_θ(x), ȳ)]
           = (1/|S̄|) ∑_{i=1}^k ∑_{x∈S_i} ∑_{j≠i} (1/(k−1)) ℓ̄(f_θ(x), j)
           = (1/|S̄|) ∑_{i=1}^k ∑_{x∈S_i} (1/(k−1)) (C − ℓ̄(f_θ(x), i))
           = C/(k−1) − (1/(k−1)) R^S_ℓ̄(f_θ),    (21)

where S_1 ∪ ... ∪ S_k = S. Supposing that f_θ* is the optimal classifier learned from the complementary labels, for all f ∈ F, where F is the function space searched over, we have

R^S̄_ℓ̄(f_θ*) − R^S̄_ℓ̄(f_θ) = (1/(k−1)) (R^S_ℓ̄(f_θ) − R^S_ℓ̄(f_θ*)) ≤ 0.    (22)

According to the first constraint in Eq. (20), we then have

ℓ̄(f_θ(x), y) ≤ ℓ̄(f_θ*(x), y), i.e., f^{(y)}_θ(x) ≤ f^{(y)}_{θ*}(x),    (23)

and then

ℓ(f_θ*(x), y) ≤ ℓ(f_θ(x), y),    (24)

thus

R^S_ℓ(f_θ*) − R^S_ℓ(f_θ) ≤ 0.    (25)

Algorithm 1 Learning from complementary labels by exclusion
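As a concrete check of the two sufficient conditions, the sketch below (illustrative numpy code, not the authors' exclusion implementation) instantiates Definition 1 with CCE and MAE. Complementary CCE, ℓ̄(u, ȳ) = −log(1 − μ_ȳ), depends only on μ_ȳ and is increasing in it (the Theorem 1 conditions), while complementary MAE sums to a constant over all ȳ (the Theorem 2 symmetry condition).

```python
import numpy as np

def cl_cce(u, y_bar):
    # Definition 1 with CCE: l_bar(u, y_bar) = l(1 - u, y_bar) = -log(1 - mu_{y_bar})
    return -np.log(1.0 - u[y_bar])

def cl_mae(u, y_bar):
    # Definition 1 with MAE: l_bar(u, y_bar) = ||e_{y_bar} - (1 - u)||_1
    return np.abs(np.eye(len(u))[y_bar] - (1.0 - u)).sum()

# Theorem 1 conditions for complementary CCE (y_bar = 1):
u1 = np.array([0.2, 0.5, 0.3])
u2 = np.array([0.2, 0.7, 0.1])   # larger mu_1: loss should increase
u3 = np.array([0.4, 0.5, 0.1])   # same mu_1, other coordinates changed: loss unchanged

# Theorem 2 symmetry for complementary MAE: sum_i l_bar(u, i) is constant in u.
k = 4
rng = np.random.default_rng(1)
u_a = rng.random(k); u_a /= u_a.sum()   # two random probability vectors
u_b = rng.random(k); u_b /= u_b.sum()
total_a = sum(cl_mae(u_a, i) for i in range(k))
total_b = sum(cl_mae(u_b, i) for i in range(k))
# For probability vectors this constant works out to C = k^2 - 2k + 2.
```

The MAE constant follows because ℓ̄_MAE(u, ȳ) = 2μ_ȳ + k − 2, so summing over all ȳ gives 2 + k(k − 2).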
This paper deals with the problem of complementary-label learning, that is, when we know a set of labels to which a given observation does not belong. In particular, the paper proposes a robust loss function and an algorithm for learning from complementary labels. Results on the MNIST and CIFAR datasets indicate superior accuracy using the proposed loss function.
SP:b3803f35c83786a139be4422007de99c6e786cf3
Robust Loss Functions for Complementary Labels Learning
In ordinary-label learning , the correct label is given to each training sample . 1 Similarly , a complementary label is also provided for each training sample in 2 complementary-label learning . A complementary label indicates a class that the 3 example does not belong to . Robust learning of classifiers has been investi- 4 gated from many viewpoints under label noise , but little attention has been paid 5 to complementary-label learning . In this paper , we present a new algorithm of 6 complementary-label learning with the robustness of loss function . We also pro- 7 vide two sufficient conditions on a loss function so that the minimizer of the risk 8 for complementary labels is theoretically guaranteed to be consistent with the min- 9 imizer of the risk for ordinary labels . Finally , the empirical results validate our 10 method ’ s superiority to current state-of-the-art techniques . Especially in cifar10 , 11 our algorithm achieves a much higher test accuracy than the gradient ascent algo- 12 rithm , and the parameters of our model are less than half of the ResNet-34 they 13 used . 14 1 INSTRUCTION 15 . Deep neural networks have exhibited excellent performance in many real-applications . Yet , their 16 supper performance is based on the correctly labeled large-scale training set . However , labeling 17 such a large-scale dataset is time-consuming and expensive . For example , the crowd-workers need 18 to select the correct label for a sample from 100 labels for CIFAR100 . To migrate this problem , 19 reachers have proposed many solutions to learn from weak-supervision : Noise-label learning Li 20 et al . ( 2017 ) ; Hu et al . ( 2019 ) ; Lee et al . ( 2018 ) ; Xia et al . ( 2019 ) , semi-supervised learning Zhai 21 et al . ( 2019 ) ; Berthelot et al . ( 2019 ) ; Rasmus et al . ( 2015 ) ; Miyato et al . ( 2019 ) ; Sakai et al . ( 2017 ) , 22 similar-unlabeled learning Tanha ( 2019 ) ; Bao et al . 
( 2018 ) ; Zelikovitz & Hirsh ( 2000 ) , unlabeled- 23 unlabeled learning Lu et al . ( 2018 ) ; Chen et al . ( 2020a ; b ) , positive-unlabeled learning Elkan & Noto 24 ( 2008 ) ; du Plessis et al . ( 2014 ) ; Kiryo et al . ( 2017 ) , contrast learning Chen et al . ( 2020a ; b ) , partial 25 label learning Cour et al . ( 2011 ) ; Feng & An ( 2018 ) ; Wu & Zhang ( 2018 ) and others . 26 We investigate complementary-label learning Ishida et al . ( 2017 ) in this paper . A complementary 27 Label is only indicating that the class label of a sample is incorrect . In the view of label noise , 28 complementary labels can also be viewed as noise labels but without any true labels in the training 29 set . Our task is to learn a classifier from the given complementary labels , predicting a correct label 30 for a given sample . Collecting complementary labels is much easier and efficient than choosing a 31 true class from many candidate classes precisely . For example , the label-system uniformly chooses a 32 label for a sample . It has a probability of 1k to be ordinary-label but k−1 k to be complementary-label . 33 Moreover , another potential application of complementary-label is data privacy . For example , on 34 some privacy issues , it is much easier to collect complementary-label than ordinary-label . 35 Robust learning of classifiers has been investigated from many viewpoints in the presence of label 36 noise Ghosh et al . ( 2017 ) , but little attention paid to complementary-label learning . We call a loss 37 function robust if the minimizer of risk under that loss function with complementary labels would be 38 the same as that with ordinary labels . The robustness of risk minimization relies on the loss function 39 used in the training set . 40 This paper presents a general risk formulation that category cross-entropy loss ( CCE ) can be used to 41 learn with complementary labels and achieve robustness . 
We then offer some innovative analytical 42 results on robust loss functions under complementary labels . Having robustness of risk minimization 43 helps select the best hyper-parameter by empirical risk since there are no ordinary labels in the44 validation set . We conclude two sufficient conditions on a loss function to be robust for learning45 with complementary labels . We then explore some popular loss functions used for ordinary-label46 learning , such as CCE , Mean square error ( MSE ) and Mean absolute error ( MAE ) , and show that47 CCE and MAE satisfy our sufficient conditions . Finally , we present a learning algorithm for learning48 with complementary labels , named exclusion algorithm . The empirical results well demonstrate the49 advantage of the theoretical results we addressed and verify our algorithm ’ s superiority to the current50 state-of-the-art methods . The contribution of this paper can be summarized as:51 • We present a general risk formulation that can be view as a framework to employing a52 loss function that satisfies our robustness sufficient condition to learn from complementary53 labels.54 • We conclude two sufficient conditions on a loss function to be robust for learning with55 complementary labels.56 • We prove that the minimizer of the risk for complementary labels is theoretically guaran-57 teed to be consistent with the minimizer of the risk for ordinary labels.58 • The empirical results validate the superiority of our method to current state-of-the-art meth-59 ods.60 2 RELATED WORKS61 Complementary-label refers to that the pattern does not belong to the given label . Learning from62 complementary labels is a new topic in supervised-learning . It was first proposed by Ishida et al.63 ( 2017 ) . They conduct such an idea to try to deal with time-consuming and expensive to tag a large-64 scale dataset.65 In their early work Ishida et al . 
( 2017 ) , they assume the complementary labels are the same prob-66 ability to be selected for a sample . And then , based on the ordinary one-versus-all ( OVA ) and67 pairwise-comparison ( PC ) multi-class loss functions Zhang ( 2004 ) proposed a modifying loss for68 learning with complementary labels.69 Even though they provided theoretical analysis with a statistical consistency guarantee , the loss70 function met a sturdy restriction that needs to be symmetric ( ` ( z ) + ` ( −z ) = 1 ) . Such a severe71 limitation allows only the OVA and PC loss functions with symmetric non-convex binary losses.72 However , the categorical cross-entropy loss widely used in the deep learning domain , can not be73 employed by the two losses they defined.74 Later , Yu et al . ( 2018a ) assume there are some biased amongst the complementary labels and75 presents a different formulation for biased complementary labels by using the forward loss cor-76 rection technique Patrini et al . ( 2017 ) to modify traditional loss functions . Their suggested risk77 estimator is not necessarily unbiased and proved that learning with complementary labels can the-78 oretically converge to the optimal classifier learned from ordinary labels based on the estimated79 transition matrix . However , the key to the forward loss correction technique is to evaluate the tran-80 sition matrix correctly . Hence , one will need to assess the transition matrix beforehand , which is81 relatively tricky without strong assumptions . Moreover , in such a setup , it restricts a small com-82 plementary label space to provide more information . Thus , it is necessary to encourage the worker83 to provide more challenging complementary labels , for example , by giving higher rewards to the84 specific classes . Otherwise , the complementary label given by the worker may be too evident and85 uninformative . For example , class three and class five are not class one evidently but is uninforma-86 tive . 
This paper focuses on the uniform ( symmetric ) assumption and study random distribution as a87 biased assumption ( asymmetric or non-uniform ) .88 Based on the uniform assumption , Ishida et al . ( 2019 ) proposed an unbiased estimator with a general89 loss function for complementary labels . It can make any loss functions available for use , not only90 soft-max cross-entropy loss function , but other loss functions can also be utilized . Their new frame-91 work is a generalization of previous complementary-label learning Ishida et al . ( 2017 ) . However,92 their proposed unbiased risk estimator has an issue that the classification risk can attain negative93 values after learning , leading to overfitting Ishida et al . ( 2019 ) . They then offered a non-negative94 correction to the original unbiased risk estimator to improve their estimator , which is no longer95 guaranteed to be an unbiased risk estimator . In this paper , our proposed risk estimator is also not 96 unbiased , but the minimizer of the risk for complementary labels is theoretically guaranteed to be 97 consistent with the minimizer of the risk for ordinary labels , both uniform and non-uniform . 98 3 PRELIMINARIES 99 . 3.1 LEARNING WITH ORDINARY LABELS 100 In the context of learning with ordinary labels , let X ⊂ Rd be the feature space and Y = { 1 , · · · , k } 101 be the class labels . A multi-class loss function is a map : L ( fθ ( x ) , y ) : X × Y → R+ . A classifier 102 can be presented as : 103 h ( x ) = arg max i∈ [ k ] f ( i ) θ ( x ) , ( 1 ) where fθ ( x ) = ( f ( 1 ) θ ( x ) , · · · , f ( k ) θ ( x ) ) , θ is the set of parameters in the CNN network , f ( i ) θ ( x ) is 104 the probability prediction for the corresponding class i . Even though h ( x ) is the final classifier , we 105 use notation of calling fθ ( x ) itself as the classifier . 
Given dataset S = { ( xi , yi ) } Ni , together with a 106 loss function L , ∀fθ ∈ F ( F is the function space for searching ) , L-risk is defined as : 107 RSL ( fθ ) = ED [ L ( fθ ( x ) , y ) ] = ES [ L ( fθ ( x ) , y ) ] , ( 2 ) Some popular multi-class loss functions are CCE , MAE , MSE . Specifically , 108 ` ( fθ ( x ) , y ) = ` ( u , y ) = ∑k i=1 e ( i ) y log 1 µy = log 1µy CCE , ‖ey − u‖1 = 2− 2µy MAE , ‖ey − u‖22 = ‖u‖ 2 2 + 1− 2µy MSE , ( 3 ) where u = fθ ( x ) = ( µ1 , · · · , µk ) , and ey is a one-hot vector that the y-th component equals to 1 , 109 others are 0 . The goal of multi-class classification is to learn a classifier fθ ( x ) that minimize the 110 classification riskRSL with multi-class loss L . 111 3.2 LEARNING WITH COMPLEMENTARY LABELS 112 In contrast to the ordinary-label learning , the complementary-label ( CL ) dataset contains only labels 113 indicating that the class label of a sample is incorrect . Corresponding to the ordinary labels dataset 114 S , the independent and identically distributed ( i.i.d . ) complementary labels dataset denoted as : 115 S̄ = { ( x , ȳ ) } Ni , ( 4 ) where N is the size of the dataset S̄ , and ȳ represents that pattern x does not belong to class-ȳ . 116 The general labels ’ distribution of dataset S̄ is as : 117 P ( ȳ|y ) = 0 p12 . . . p1k p21 0 . . . p2k ... ... . . . ... pk1 . . . pk ( k−1 ) 0 k×k , ( 5 ) where pij denotes that the probability of the i-th class ’ s pattern x labeled as j , ∑k j=1 pij = 118 1 , pij 6=0 , j 6= i. Supposing that the label system uniformly select a label from { 1 , · · · , k } \ { y } 119 for each sample x , then the Eq . ( 5 ) becomes 120 P ( ȳ|y ) = 0 1k−1 . . . 1 k−1 1 k−1 0 . . . 1 k−1 ... ... . . . ... 1 k−1 . . . 1 k−1 0 k×k . ( 6 ) Yu et al . ( 2018b ) make a strong assumption that there are some bias in Eq . ( 5 ) , while Ishida 121 et al . ( 2017 ; 2019 ) focus on the assumption of Eq . ( 6 ) . In this paper , we study both kinds of distri- 122 bution . 
4 METHODOLOGY

In this section, we first propose a general risk formulation for learning with complementary labels. We then prove that, under this formulation, some loss functions designed for ordinary-label learning are robust to complementary labels, such as the categorical cross-entropy loss and the mean absolute error.

4.1 GENERAL RISK FORMULATION

The goal of learning with complementary labels is to learn a classifier that predicts the correct label for any sample drawn from the same distribution. Because no ordinary labels are available to the model, we need to design a loss function or model for learning with complementary labels. The key to learning a classifier in ordinary-label learning is to maximize the predicted probability of the true label. One intuitive way to do so is to minimize the predicted probability of the complementary labels. In this paper, with a slight abuse of notation, we let

u = f_\theta(x) = (\mu_1, \cdots, \mu_k), \qquad v = 1 - f_\theta(x) = (1 - \mu_1, \cdots, 1 - \mu_k). \quad (7)

Definition 1 (CL-loss). Together with a loss function \ell designed for ordinary-label learning, the loss for learning with complementary labels is:

\bar{\ell}(f_\theta(x), \bar{y}) = \bar{\ell}(u, \bar{y}) = \ell(v, \bar{y}). \quad (8)

4.2 THEORETICAL RESULTS

Definition 2 (Robust loss). In the framework of risk minimization, a loss function is called a robust loss function if the minimizer of the risk with complementary labels is also the minimizer of the risk with ordinary labels, i.e.,

R^{\bar{\ell}}_{\bar{S}}(f_{\theta^*}) - R^{\bar{\ell}}_{\bar{S}}(f_\theta) \le 0 \;\Rightarrow\; R^{\ell}_{S}(f_{\theta^*}) - R^{\ell}_{S}(f_\theta) \le 0. \quad (9)

Theorem 1. Together with \ell, \bar{\ell} is a robust loss function for learning with complementary labels if \bar{\ell} satisfies:

\frac{\partial \bar{\ell}(u, \bar{y})}{\partial \mu_{\bar{y}}} > 0, \qquad \frac{\partial \bar{\ell}(u, \bar{y})}{\partial \mu_i} = 0, \quad \forall i \in \{1, \cdots, k\} \setminus \{\bar{y}\}. \quad (10)

Note that Eq. (10) means that \bar{\ell} is a monotonically increasing loss function in u^{(\bar{y})} only.

Proof. Recall that for any f_\theta and any \ell,

R^{\ell}_{S}(f_\theta) = \mathbb{E}_{(x,y)}[\ell(f_\theta(x), y)] = \frac{1}{|S|} \sum_{(x,y) \in S} \ell(f_\theta(x), y). \quad (11)

For any complementary-label distribution in Eq. (5) and any loss function \bar{\ell}, we have

R^{\bar{\ell}}_{\bar{S}}(f_\theta) = \mathbb{E}_{(x,\bar{y})}[\bar{\ell}(f_\theta(x), \bar{y})] = \frac{1}{|\bar{S}|} \sum_{i=1}^{k} \sum_{x \in S_i} \sum_{j \neq i} p_{ij} \, \bar{\ell}(f_\theta(x), j), \quad (12)

where p_{ij} is an entry of the complementary-label distribution matrix P and S_1 \cup \cdots \cup S_k = S.

Suppose that f_{\theta^*} is the optimal classifier learned from the complementary labels. Then, for all f \in F, where F is the function space being searched, we have

R^{\bar{\ell}}_{\bar{S}}(f_{\theta^*}) - R^{\bar{\ell}}_{\bar{S}}(f_\theta) = \frac{1}{|\bar{S}|} \sum_{i=1}^{k} \sum_{x \in S_i} \sum_{j \neq i} p_{ij} \left( \bar{\ell}(f_{\theta^*}(x), j) - \bar{\ell}(f_\theta(x), j) \right) \le 0, \quad (13)

where p_{ij} \neq 0. If there exists x' \in \bar{S} such that \bar{\ell}(f_{\theta^*}(x'), \bar{y}) > \bar{\ell}(f_\theta(x'), \bar{y}), let f_{\theta'} satisfy

f_{\theta'}(x) = \begin{cases} f_{\theta^*}(x) & x \in \bar{S} \setminus \{x'\}, \\ f_\theta(x) & x = x'. \end{cases} \quad (14)

Then, by Eqs. (12) and (13), R^{\bar{\ell}}_{\bar{S}}(f_{\theta'}) < R^{\bar{\ell}}_{\bar{S}}(f_{\theta^*}), so f_{\theta^*} is not the optimal classifier. This contradicts the hypothesis that f_{\theta^*} is the optimal classifier.

Thus, for all \bar{y} \in \{1, \cdots, k\} \setminus \{y\}, we have

\bar{\ell}(f_{\theta^*}(x), \bar{y}) \le \bar{\ell}(f_\theta(x), \bar{y}). \quad (15)

According to Eq. (10), \bar{\ell} is monotonically increasing in u^{(\bar{y})} only, so for all \bar{y} \in \{1, \cdots, k\} \setminus \{y\},

f^{(\bar{y})}_{\theta^*}(x) \le f^{(\bar{y})}_{\theta}(x). \quad (16)

Thus, since f^{(y)}_{\theta}(x) = 1 - \sum_{\bar{y} \neq y} f^{(\bar{y})}_{\theta}(x),

f^{(y)}_{\theta^*}(x) \ge f^{(y)}_{\theta}(x), \quad (17)

and then

\ell(f_{\theta^*}(x), y) \le \ell(f_\theta(x), y), \quad (18)

thus

R^{\ell}_{S}(f_{\theta^*}) - R^{\ell}_{S}(f_\theta) \le 0. \quad (19)

Theorem 2. Together with \ell, \bar{\ell} is a robust loss function for learning with complementary labels under a symmetric or uniform distribution if \bar{\ell} satisfies:

\frac{\partial \bar{\ell}(u, \bar{y})}{\partial \mu_{\bar{y}}} > 0, \qquad \sum_{i=1}^{k} \bar{\ell}(u, i) = C \quad (C \text{ is a constant}). \quad (20)

Note that Eq. (20) means that \bar{\ell} is a symmetric loss (\sum_i \bar{\ell}(u, i) = C) and that \bar{\ell} is monotonically increasing in any \bar{y}.

Proof. For any complementary-label distribution in Eq. (6) and any loss function \bar{\ell}, we have

R^{\bar{\ell}}_{\bar{S}}(f_\theta) = \mathbb{E}_{(x,\bar{y})}[\bar{\ell}(f_\theta(x), \bar{y})] = \frac{1}{|\bar{S}|} \sum_{i=1}^{k} \sum_{x \in S_i} \sum_{j \neq i} \frac{1}{k-1} \bar{\ell}(f_\theta(x), j) = \frac{1}{|\bar{S}|} \sum_{i=1}^{k} \sum_{x \in S_i} \frac{1}{k-1} \left( C - \bar{\ell}(f_\theta(x), i) \right) = \frac{C}{k-1} - R^{\bar{\ell}}_{S}(f_\theta), \quad (21)

where S_1 \cup \cdots \cup S_k = S.

Suppose that f_{\theta^*} is the optimal classifier learned from the complementary labels. Then, for all f \in F, where F is the function space being searched, we have

R^{\bar{\ell}}_{\bar{S}}(f_{\theta^*}) - R^{\bar{\ell}}_{\bar{S}}(f_\theta) = R^{\bar{\ell}}_{S}(f_\theta) - R^{\bar{\ell}}_{S}(f_{\theta^*}) \le 0. \quad (22)

According to the first constraint in Eq. (20), we then have

\bar{\ell}(f_\theta(x), y) \le \bar{\ell}(f_{\theta^*}(x), y), \quad \left( f^{(y)}_{\theta}(x) \le f^{(y)}_{\theta^*}(x) \right) \quad (23)

and then

\ell(f_{\theta^*}(x), y) \le \ell(f_\theta(x), y), \quad (24)

thus

R^{\ell}_{S}(f_{\theta^*}) - R^{\ell}_{S}(f_\theta) \le 0. \quad (25)

Algorithm 1 Learning from complementary labels by exclusion
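As a concrete illustration of Definition 1 and Theorem 2, the following minimal NumPy sketch (function names are ours, not the paper's implementation) builds the CL-loss from the mean absolute error; on the probability simplex, the induced \bar{\ell} reduces to 2\mu_{\bar{y}} + k - 2, so its sum over all labels is constant, matching the symmetry condition of Theorem 2:

```python
import numpy as np

def cl_loss(probs, comp_label, base_loss):
    """Definition 1 (CL-loss): apply the ordinary loss to v = 1 - f_theta(x)."""
    v = 1.0 - probs
    return base_loss(v, comp_label)

def mae(p, y):
    """Mean absolute error against the one-hot target e_y."""
    target = np.zeros_like(p)
    target[y] = 1.0
    return float(np.abs(p - target).sum())

# On the probability simplex, the MAE-based CL-loss equals 2*mu_{ybar} + k - 2,
# so summing it over all k labels gives the constant 2 + k*(k - 2) (Theorem 2).
u = np.array([0.7, 0.2, 0.1])  # f_theta(x); illustrative values (ours)
k = len(u)
losses = [cl_loss(u, i, mae) for i in range(k)]
```

For the categorical cross-entropy, by contrast, the symmetry condition fails, which is why the two base losses behave differently in the theorems above.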
This paper studied a new problem: learning from complementary labels. The goal is to predict the correct label for a given sample when only complementary labels are given. Building on ordinary-label learning, the authors defined "robust loss functions" for complementary-label learning: a loss function is called robust if the minimizer of the risk with complementary labels is the same as with ordinary labels. They then provided two sufficient conditions for a robust loss function, and an exclusion algorithm is provided for prediction. Experimental results show that the proposed method outperforms other methods on several datasets.
LEAF: A Learnable Frontend for Audio Classification
1 INTRODUCTION

Learning representations by backpropagation in deep neural networks has become the standard in audio understanding, ranging from automatic speech recognition (ASR) (Hinton et al., 2012; Senior et al., 2015) to music information retrieval (Arcas et al., 2017), as well as animal vocalizations (Lostanlen et al., 2018) and audio events (Hershey et al., 2017; Kong et al., 2019). Still, a striking constant throughout the history of audio classification is the mel-filterbank, a fixed, hand-engineered representation of sound. Mel-filterbanks first compute a spectrogram, using the squared modulus of the short-term Fourier transform (STFT). Then, the spectrogram is passed through a bank of triangular bandpass filters, spaced on a logarithmic scale (the mel-scale) to replicate the non-linear human perception of pitch (Stevens & Volkmann, 1940). Eventually, the resulting coefficients are passed through a logarithmic compression, to replicate our non-linear sensitivity to loudness (Fechner et al., 1966). This approach of drawing inspiration from the human auditory system to design features for machine learning has been historically successful (Davis & Mermelstein, 1980; Mogran et al., 2004). Moreover, decades after the design of mel-filterbanks, Andén & Mallat (2014) showed that they coincidentally exhibit desirable mathematical properties for representation learning, in particular shift-invariance and stability to small deformations. Hence, both from an auditory and a machine learning perspective, mel-filterbanks represent strong audio features. However, the design of mel-filterbanks is also affected by biases. First, not only has the mel-scale been revised multiple times (O'Shaughnessy, 1987; Umesh et al., 1999), but the auditory experiments that led to its original design could not be replicated afterwards (Greenwood, 1997).
Similarly, better alternatives to log-compression have been proposed, like the cubic root for speech enhancement (Lyons & Paliwal, 2008) or the 10th root for ASR (Schluter et al., 2007). Moreover, even though matching human perception provides good inductive biases for some application domains, e.g., ASR or music understanding, these biases may also be detrimental, e.g., for tasks that require fine-grained resolution at high frequencies. Finally, the recent history of other fields like computer vision, in which the rise of deep learning methods has allowed learning representations from raw pixels rather than from engineered features (Krizhevsky et al., 2012), inspired us to take the same path. These observations motivated replacing mel-filterbanks with learnable neural layers, ranging from standard convolutional layers (Palaz et al., 2015) to dilated convolutions (Schneider et al., 2019), as well as structured filters exploiting the characteristics of known filter families, such as Gammatone (Sainath et al., 2015), Gabor (Zeghidour et al., 2018a; Noé et al., 2020), Sinc (Ravanelli & Bengio, 2018; Pariente et al., 2020) or Spline (Balestriero et al., 2018) filters. While tasks such as speech separation have already successfully adopted learnable frontends (Luo & Mesgarani, 2019; Luo et al., 2019), we observe that most state-of-the-art approaches for audio classification (Kong et al., 2019), ASR (Synnaeve et al., 2019) and speaker recognition (Villalba et al., 2020) still employ mel-filterbanks as input features, regardless of the backbone architecture. In this work, we argue that a credible alternative to mel-filterbanks for classification should be evaluated across many tasks, and propose the first extensive study of learnable frontends for audio over a wide and diverse range of audio signals, including speech, music, audio events, and animal sounds.
By breaking down mel-filterbanks into three components (filtering, pooling, compression/normalization), we propose LEAF, a novel frontend that is fully learnable in all its operations, while being controlled by just a few hundred parameters. In a multi-task setting over 8 datasets, we show that we can learn a single set of parameters that outperforms mel-filterbanks, as well as previously proposed learnable alternatives. Moreover, these findings are replicated when training a different model for each individual task. We also confirm these results on a challenging, large-scale benchmark: classification on Audioset (Gemmeke et al., 2017). In addition, we show that the general inductive bias of our frontend (i.e., learning bandpass filters, lowpass filtering before downsampling, learning a per-channel compression) is general enough to benefit other systems, and propose a new, improved version of SincNet (Ravanelli & Bengio, 2018). Our code is publicly available1.

2 RELATED WORK

In the last decade, several works have addressed the problem of learning the audio frontend as an alternative to mel-filterbanks. The first notable contributions in this field emerged for ASR, with Jaitly & Hinton (2011) pretraining Restricted Boltzmann Machines from the waveform, and Palaz et al. (2013) training a hybrid DNN-HMM model, replacing mel-filterbanks with several layers of convolution. However, these alternatives, as well as others proposed more recently (Tjandra et al., 2017; Schneider et al., 2019), are composed of many layers, which makes a fair comparison with mel-filterbanks difficult. In the following section, we focus on frontends that provide a lightweight, drop-in replacement for mel-filterbanks, with comparable capacity.

2.1 LEARNING FILTERS FROM WAVEFORMS

A first attempt at learning the filters of mel-filterbanks was proposed by Sainath et al.
(2013), where a filterbank is initialized using the mel-scale and then learned together with the rest of the network, taking a spectrogram as input. Instead, Sainath et al. (2015) and Hoshen et al. (2015) later proposed to learn convolutional filters directly from raw waveforms, initialized with Gammatone filters (Schluter et al., 2007). In the same spirit, Zeghidour et al. (2018a) used the scattering transform approximation of mel-filterbanks (Andén & Mallat, 2014) to propose the time-domain filterbanks, a learnable frontend that approximates mel-filterbanks at initialization and can then be learned without constraints (see Figure 1). More recently, the SincNet (Ravanelli & Bengio, 2018) model was proposed, which computes a convolution with sine cardinal filters, a non-linearity and a max-pooling operator (see Figure 1), as well as a variant using Gabor filters (Noé et al., 2020). We take inspiration from these works to design a new learnable filtering layer. As detailed in Section 3.1.2, we parametrize a complex-valued filtering layer with Gabor filters. Gabor filters are optimally localized in time and frequency (Gabor, 1946), unlike Sinc filters, which require using a window function (Ravanelli & Bengio, 2018). Moreover, unlike Noé et al. (2020), who use complex-valued layers in the rest of the network, we describe in Section 3.1.2 how using a squared modulus not only brings the signal back to the real-valued domain (leading to compatibility with standard architectures), but also performs shift-invariant Hilbert envelope extraction. Zeghidour et al. (2018a) also apply a squared-modulus non-linearity; however, as described in Section 3.1.1, training unconstrained filters can lead to overfitting and stability issues, which we solve with our approach.

1 https://github.com/google-research/leaf-audio

2.2 LEARNING THE COMPRESSION AND THE NORMALIZATION
The problem of learning a compression and/or normalization function has received less attention in the literature. A notable contribution is Per-Channel Energy Normalization (PCEN) (Wang et al., 2017; Lostanlen et al., 2019), which was originally proposed for keyword spotting, outperforming log-compression. Later, Battenberg et al. (2017) and Lostanlen et al. (2018) confirmed the advantages of PCEN for large-scale ASR and animal bioacoustics, respectively. However, these previous works learn a compression on top of fixed mel-filterbanks. Instead, in this work we propose a new version of PCEN and show for the first time that combining learnable filters, learnable pooling, and learnable compression and normalization outperforms all other approaches.

3 MODEL

Let x ∈ R^T denote a one-dimensional waveform of T samples, available at the sampling frequency Fs [Hz]. We decompose the frontend into a sequence of three components: i) filtering, which passes x through a bank of bandpass filters followed by a non-linearity, operating at the original sampling rate Fs; ii) pooling, which decimates the signal to reduce its temporal resolution; iii) compression/normalization, which applies a non-linearity to reduce the dynamic range. Overall, the frontend can be represented as a function F_ψ : R^T → R^{M×N}, which maps the input waveform to a 2-dimensional feature space, where M denotes the number of temporal frames (typically M < T), N the number of feature channels (which might correspond to frequency bins), and ψ the frontend parameters. The features computed by the frontend are then fed to a model g_θ(·) parametrized by θ. The frontend and model parameters are estimated by solving a supervised classification problem:

\theta^*, \psi^* = \underset{\theta, \psi}{\operatorname{argmin}} \; \mathbb{E}_{(x, y) \in D} \, L(g_\theta(F_\psi(x)), y), \quad (1)

where (x, y) are samples in a labelled dataset D and L is a loss function.
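At the shape level, the factorization g_θ(F_ψ(x)) in Eq. (1) can be sketched as follows; note that both functions below are our own toy stand-ins (simple framing, a linear filterbank, log compression, and a linear classifier), not LEAF's actual Gabor filtering:

```python
import numpy as np

def frontend(x, psi, frame=160):
    """Toy stand-in for F_psi: R^T -> R^{M x N}. Frames the waveform,
    applies a filterbank psi, and log-compresses the channel energies."""
    m = len(x) // frame
    frames = x[: m * frame].reshape(m, frame)  # (M, frame)
    return np.log1p((frames @ psi) ** 2)       # (M, N)

def backbone(features, theta):
    """Toy g_theta: average-pool over time, then a linear classifier."""
    return features.mean(axis=0) @ theta       # (n_classes,)

rng = np.random.default_rng(0)
x = rng.standard_normal(16000)                 # 1 s of audio at 16 kHz
psi = 0.1 * rng.standard_normal((160, 8))      # frontend parameters (psi)
theta = 0.1 * rng.standard_normal((8, 4))      # backbone parameters (theta)

features = frontend(x, psi)                    # M = 100 frames, N = 8 channels
logits = backbone(features, theta)
```

In the paper's setting, both psi and theta would be updated jointly by gradient descent on the loss L of Eq. (1).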
Our goal is to learn the frontend parameters ψ end-to-end with the model parameters θ. To achieve this, it is necessary to make all the frontend components learnable, so that we can solve the optimization problem in Equation (1) with gradient descent. In the following we detail the design choices of each component.

3.1 FILTERING

The first block of the learnable frontend takes x as input and computes a convolution with a bank of complex-valued filters (φ_n)_{n=1..N}, followed by a squared-modulus operator, which brings its output back to the real-valued domain. This convolution step has a stride of 1, therefore keeping the input temporal resolution, and outputs the following time-frequency representation:

f_n = |x * \varphi_n|^2 \in \mathbb{R}^T, \quad n = 1, \ldots, N, \quad (2)

where φ_n ∈ C^W is a complex-valued 1-D filter of length W. It is possible to compute Equation (2) without explicitly manipulating complex numbers. As proposed by Zeghidour et al. (2018a), to produce the squared modulus of a complex-valued convolution with N filters, we instead compute the convolution with 2N real-valued filters φ̃_n, n = 1, ..., 2N, and perform squared ℓ2-pooling with size 2 and stride 2 along the channel axis to obtain the squared modulus, using adjacent filters as the real and imaginary parts of φ_n. Formally:

f_n = |x * \tilde{\varphi}_{2n-1}|^2 + |x * \tilde{\varphi}_{2n}|^2 \in \mathbb{R}^T, \quad n = 1, \ldots, N. \quad (3)

We explore two different parametrizations for φ_n: one relies on standard fully parametrized convolutional filters, while the other makes use of learnable Gabor filters.
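The equivalence between Eq. (2) and Eq. (3) can be checked numerically; the following minimal NumPy sketch (our own helper, under the simplifying assumptions of a single random filter and "same" padding) pairs two real filters as the real and imaginary parts of one complex filter:

```python
import numpy as np

def squared_modulus_conv(x, real_filters):
    """Eq. (3): convolve with 2N real filters, then squared l2-pooling of
    size 2 and stride 2 along the channel axis, pairing adjacent filters
    as the real and imaginary parts of each complex filter."""
    outs = np.stack([np.convolve(x, f, mode="same") for f in real_filters])  # (2N, T)
    return outs[0::2] ** 2 + outs[1::2] ** 2                                 # (N, T)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                                   # real-valued waveform
phi = rng.standard_normal(31) + 1j * rng.standard_normal(31)   # one complex filter

f_paired = squared_modulus_conv(x, [phi.real, phi.imag])[0]    # Eq. (3)
f_complex = np.abs(np.convolve(x, phi, mode="same")) ** 2      # Eq. (2)
```

Since x is real, x * φ = x * Re(φ) + i (x * Im(φ)), so the two computations agree exactly; the real-filter form avoids complex arithmetic in the network.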
This paper presents a new learnable representation for audio signal classification and compares it to the classical mel-filterbank representation and two other learnable representations on a broad range of audio classification tasks, from birdsong to pitch, instrument, language, or emotion recognition. The proposed representation combines several parameterized representation techniques from the recent literature. It is reported to yield on-par or better classification results than the other methods on several of these tasks using single- or multi-task learning.
The paper gives a detailed interpretation of the relationship between each component of hand-crafted audio front-ends (such as mel-spectrograms) and learnable counterparts. To do so, the authors followed the narrative of previous works such as SincNet and improved the model by changing several of its components. The authors grouped audio front-ends into three main parts, namely filtering, pooling, and compression, and contributions were made at each stage. For the filtering stage, instead of learning all the parameters of the convolution layer, they let the model learn only the center frequency and bandwidth of the filterbanks, which are initialized with Gabor filters. For the pooling stage, instead of using simple average or max pooling, they let the model learn a lowpass filter with few parameters. For compression, instead of using log-based dynamic compression, they extended Per-Channel Energy Normalization by replacing a fixed smoothing factor with learnable parameters and named the result sPCEN.
Adaptive N-step Bootstrapping with Off-policy Data
1 INTRODUCTION

The goal of reinforcement learning (RL) is to find an optimal policy by interacting with the environment. To do so, an RL algorithm needs to define a target, e.g., a Q function or value function, and update it iteratively to bootstrap from scratch. The challenge of designing an efficient update target manifests in both sample complexity and computational complexity. Ideally, the target should only be updated with data generated by the corresponding policy to obtain an unbiased estimate (Sutton & Barto, 2018), while the amount of data needs to reach a certain scale to control the variance (Schulman et al., 2015). These two requirements limit the update frequency and ultimately lead to high sample complexity. On the computational side, the consideration is the trade-off between the computation cost of each step and the number of total steps. Monte-Carlo returns have advantages in generalization (behaving well with function approximation) and exploration (quick propagation of new findings), at the cost of computing the whole trajectory at each step (Sutton & Barto, 2018). Bootstrapping methods apply readily to off-policy data and control the trace length (Sutton & Barto, 2018); however, they require more update steps to converge compared with Monte-Carlo returns. These design concerns are nested together, which makes it hard to achieve a good balance. N-step returns (Sutton & Barto, 2018) serve as the basis of various update targets, due to their flexibility and simple implementation. Together with off-policy learning (Sutton & Barto, 2018) and the replay buffer (Mnih et al., 2015), n-step returns can update the target frequently while keeping the variance in a controllable range. However, a systematic study (Fedus et al., 2020) reveals that the performance of n-step returns relies heavily on the exact value of n.
Since the underlying working mechanism is unclear, previous research could only give vague suggestions based on empirical results: simply increase the value from one to a larger number, e.g., 3 or 4. In this paper, we illustrate that the estimation error of n-step returns can be decomposed into an off-policy bias (the under-estimation part) and an approximation error (the over-estimation part), and that the selection of n controls the balance between them. Data stored in the replay buffer are generated by previous policies; thus, adopting them for updates introduces the off-policy bias. Since the current policy is better than previous policies, the off-policy bias is an underestimation. The replay buffer is not the only source of off-policy bias: epsilon-greedy exploration also introduces the off-policy issue. On the other hand, n-step returns adopt a max operator, explicitly (in Q-learning-based algorithms) or implicitly (in actor-critic algorithms), on an existing function to approximate the real target. The max operator brings the approximation error, which is an overestimation. Section 4 gives the formal definition of the decomposition and verifies the conclusion with experiments. According to our analysis, the quantities of the off-policy bias and the approximation error vary considerably across data points. Thus, a fixed value of n is just a rough average, and there is plenty of room for improvement. We introduce a new metric, the policy age, to quantify the off-policyness of each data point. As the policy age grows, the off-policy bias increases linearly, while the approximation error decreases exponentially. Based on this observation, we propose a novel algorithm, named adaptive n-step bootstrapping. Given the policy age of each data point, adaptive n-step calculates the optimal n with an exponential function. The hyperparameter of the function is determined by the tree-structured Parzen estimator (Bergstra et al., 2011).
We conduct extensive experiments on both MuJoCo and Atari games. Adaptive n-step bootstrapping outperforms all fixed-value n settings by a large margin, in terms of both data efficiency and final reward. Among other update-target definitions, we select Retrace (Munos et al., 2016) as a representative. Compared with Retrace, our method maintains its performance advantage while keeping computational complexity low and the implementation simple.

2 RELATED WORK

2.1 RESEARCH ON N-STEP RETURNS

Recent works on n-step returns focus on finding the optimal value of n. In Ape-X (Horgan et al., 2018) and R2D2 (Kapturowski et al., 2019), the value of n is fixed, set by manual tuning or hyper-parameter search. Rainbow (Hessel et al., 2018) finds that the final performance is sensitive to the value of n, and that n = 3 achieves the best score in most cases on Atari games. Fedus et al. (2020) verifies that setting n to 3 is a good choice, and further reveals that the replay buffer must also be large enough to gain performance benefits. These studies give some heuristic rules for setting the value of n, but the underlying mechanism of why n = 3 performs better than one-step temporal difference is still unclear.

2.2 OTHER UPDATE TARGETS

To improve on the performance of vanilla n-step returns, there are many other update-target definitions in the literature (Hernandez-Garcia & Sutton, 2019). Importance sampling (IS) (Precup et al., 2000) provides a simple way to correct the off-policy bias. It can be seen as a weighted average of multiple one-step TD(0) targets. However, IS brings large (and possibly infinite) variance, which makes it impractical for large-scale problems. Retrace (Munos et al., 2016) clips the IS ratio to a maximum value of 1 to reduce the large variance of IS targets.
It has many applications in recent reinforcement learning agents , like distributed offpolicy learning agent Reactor ( Gruslys et al. , 2017 ) . The most serious disadvantage of Retrace is its high computation cost . Retrace needs to calculate O ( n ) times of Q and O ( n ) times of π in each time , compared with only O ( 1 ) from n-step returns ( n is trace length ) . In large-scale problems , evaluating Q and π requires a forward pass in the neural network , which is slow and expensive . Reactor ( Gruslys et al. , 2017 ) calculates the Retrace target as a linear combination of many n-step targets and dispatches those calculation workloads into different nodes for acceleration . Since the computation complexity is still high , reactor can not work under limited resources . Furthermore , even without considering the calculation cost , the application scope of Retrace is not as good as n-step returns , as reported in Hernandez-Garcia & Sutton ( 2019 ) . 3 PRELIMINARIES . Reinforcement learning ’ s goal is to find an optimal policy π∗ with maximal discounted returns Rπ = Eπ [ ∑ t γ t−1rt ] given the Markov Decision Process ( MDP ) . To achieve this , agents often estimate the state-action value function qπ ( s , a ) = Eπ [ ∑ t γ t−1rt|s0 = s , a0 = a ] . Let Qπ ( s , a ) denote the estimation of qπ ( s , a ) . In tabular settings , Qπ can be represented by a table of all stateaction pairs 〈s , a〉 , while in large-scale settings , Qπ is often approximated by a deep neural network ( DNN ) with parameter θ , written as Qπ ; θ . During the training process , Qπ is continuously updated by the update target Ĝπ , which is calculated from data points ( s , a , r , s′ ) . In tabular settings ( Watkins & Dayan , 1992 ) , the update equation can be written as : Qπ ( s , a ) ← Qπ ( s , a ) + α [ Ĝπ ( s , a ) − Qπ ( s , a ) ] , where α is the learning rate . In large-scale settings ( Mnih et al. , 2015 ; Lillicrap et al. 
, 2016), Qπ;θ is updated by mini-batch gradient descent on the network parameters θ: θ ← θ − α (1/N) ∑_{i=1}^{N} ∇θ L(Qπ;θ(si, ai), Ĝπ(si, ai)), where L is the loss function. Off-policy learning adopts two policies: the behavior policy µ for generating data points and the target policy π for learning from them. A replay buffer (Fedus et al., 2020) is often used together with off-policy learning to handle the sample complexity of large-scale problems. Agents draw data points from the replay buffer to update the estimator Qπ. This lets agents learn from past experience and thus yields higher sample efficiency. In the sequel, the term off-policy learning refers to off-policy learning combined with a replay buffer, as used by most recent off-policy algorithms. N-step returns (Sutton & Barto, 2018) combined with off-policy learning show strong empirical performance (Hessel et al., 2018) and are easy to calculate. The target Ĝnπ is Ĝnπ(s, a) = r0 + γr1 + · · · + γ^{n−1} r_{n−1} + γ^n E_{an∼π(sn)}[Qπ(sn, an)]. (1) It can be seen as a mix of Monte-Carlo (MC) estimation qπ ≈ ∑t γ^{t−1} rt (the n → ∞ case) and one-step TD(0) estimation r0 + γ E_{a∼π(s1)}[Qπ(s1, a)] (the n = 1 case). In off-policy learning, to calculate the n-step return Ĝnπ(s0, a0) for a data point (s0, a0, r0, s′0), we draw from the replay buffer the consecutive transitions (st, at, rt, s′t), t = 0, 1, 2, ..., of the same trajectory τ as the current data point. Since n-step returns estimate qπ using a trajectory τ generated by the behavior policy µ, the discrepancy between π and µ makes the estimate biased. We call this off-policy-induced bias the off-policy bias, and the difference between π and µ the off-policyness. 4 UNDERLYING WORKING MECHANISM OF N-STEP BOOTSTRAPPING. N-step returns lie at the center of update target design.
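The target of Eq. 1 can be sketched in a few tabular lines; the dict-based `q` and `pi` containers are illustrative assumptions, not the paper's implementation:

```python
def n_step_target(transitions, q, pi, gamma=0.99):
    """n-step return target of Eq. 1 for the first transition.

    transitions: n consecutive (s, a, r, s_next) tuples of one trajectory
    q:  dict mapping (state, action) -> Q estimate
    pi: dict mapping state -> action-probability list (target policy)
    """
    g = sum(gamma ** t * r for t, (_, _, r, _) in enumerate(transitions))
    n = len(transitions)
    s_n = transitions[-1][3]          # bootstrap state reached after n steps
    g += gamma ** n * sum(p * q[(s_n, a)] for a, p in enumerate(pi[s_n]))
    return g
```

With n = 1 this is the TD(0) target; as n grows it approaches the Monte-Carlo return. Only the final bootstrap term evaluates Q, which is the O(1) cost contrasted with Retrace in Sec. 2.2.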
They not only unify Monte-Carlo returns and one-step temporal difference but also lay the foundation of eligibility traces (Singh & Sutton, 1996). Together with off-policy learning, n-step bootstrapping works well because it achieves a good balance between the bias (TD) and the variance (MC). Despite its importance and wide application, the underlying mechanism of n-step returns has not been studied in detail. In this section, we give a careful analysis of why n-step bootstrapping works and what property the optimal selection of n should satisfy. 4.1 DECOMPOSITION OF THE ESTIMATION ERROR. To understand how n-step bootstrapping works, we conduct a systematic analysis of the estimation error. We formalize the estimation error of an update target Ĝπ as its difference from the ground truth qπ over all (s, a) pairs the agent experienced: E(Ĝπ) = Eµ[Ĝπ(s, a) − qπ(s, a)]. (2) As shown in Eq. 1, the estimation error has two sources: the off-policy bias and the approximation error. The off-policy bias is introduced by the difference between π and µ, as mentioned before, while the approximation error arises because the agent's estimate of Qπ is not perfect, i.e., Qπ ≠ qπ. Note that the behavior policy µ is an older version of the target policy π. We separate these two types of error by defining two intermediate targets, Ĝnπ,qπ;τ and Ĝnπ,Qπ;τ̃, each of which eliminates one type of error. Ĝnπ,qπ;τ adopts the ground truth qπ instead of the estimate Qπ to eliminate the approximation error: Ĝnπ,qπ;τ(s, a) = r0 + γr1 + · · · + γ^{n−1} r_{n−1} + γ^n E_{an∼π(sn)}[qπ(sn, an)]. Ĝnπ,Qπ;τ̃ uses the trajectory τ̃ = (s̃t, ãt, r̃t, s̃′t), t = 0, 1, 2, ..., generated by the current policy π instead of the old trajectory τ, to remove the off-policy bias: Ĝnπ,Qπ;τ̃(s, a) = r̃0 + γr̃1 + · · · + γ^{n−1} r̃_{n−1} + γ^n E_{ãn∼π(s̃n)}[Qπ(s̃n, ãn)].
Then we can quantify the off-policy bias Eoffpolicy and the approximation error Eapprox independently as Eoffpolicy(Ĝnπ) = E(Ĝnπ,qπ;τ) and Eapprox(Ĝnπ) = E(Ĝnπ,Qπ;τ̃). Now we can decompose the total error E(Ĝnπ) into the sum of the off-policy bias and the approximation error, plus a negligibly small residual term Eresidual(Ĝnπ): Eresidual(Ĝnπ) = E(Ĝnπ) − Eoffpolicy(Ĝnπ) − Eapprox(Ĝnπ) = γ^n (E_{a∼π(sn)}[Qπ(sn, a) − qπ(sn, a)] − E_{ã∼π(s̃n)}[Qπ(s̃n, ã) − qπ(s̃n, ã)]). The residual term Eresidual(Ĝnπ) is a difference of the discounted error γ^n E_{a∼π(s)}[Qπ(s, a) − qπ(s, a)] between the bootstrap states sn and s̃n. If n is small, the difference between Qπ and qπ at sn will be close to that at s̃n; otherwise, the discount factor γ^n shrinks exponentially, making the residual term very small. The experimental results in Sec. 4.2 show that in practice the residual term is an order of magnitude smaller than the two main sources. Thus it can be ignored, and we obtain the approximate decomposition E(Ĝnπ) ≈ Eoffpolicy(Ĝnπ) + Eapprox(Ĝnπ). (3)
The paper deals with an intriguing point in RL -- how to correctly choose an adaptive $n$ for $n$-step bootstrapping. From such a paper one might expect theoretical results on the tradeoff between bias and variance in the presence of off-policyness. However, as opposed to the picture portrayed in the first section, no such analysis follows.
Adaptive N-step Bootstrapping with Off-policy Data
1 INTRODUCTION. The goal of reinforcement learning (RL) is to find an optimal policy by interacting with the environment. To do so, an RL algorithm needs to define a target, e.g., a Q-function or value function, and update it iteratively to bootstrap from scratch. The challenge of designing an efficient update target manifests in both sample complexity and computational complexity. Ideally, the target should be updated only with data generated by the corresponding policy, to obtain an unbiased estimate (Sutton & Barto, 2018), while the amount of data needs to reach a certain scale to control the variance (Schulman et al., 2015). These two requirements limit the update frequency and ultimately lead to high sample complexity. On the computational side, the concern is the trade-off between the cost of each step and the total number of steps. Monte-Carlo returns have advantages in generalization (behaving well with function approximation) and exploration (quick propagation of new findings), at the cost of computing over the whole trajectory at each step (Sutton & Barto, 2018). Bootstrapping methods apply readily to off-policy data and control the trace length (Sutton & Barto, 2018); however, they require more update steps to converge than Monte-Carlo returns. These design concerns are nested together, which makes a good balance hard to achieve. N-step returns (Sutton & Barto, 2018) serve as the basis of various update targets, owing to their flexibility and simple implementation. Together with off-policy learning (Sutton & Barto, 2018) and a replay buffer (Mnih et al., 2015), n-step returns can update the target frequently while keeping the variance in a controllable range. However, a systematic study (Fedus et al., 2020) reveals that the performance of n-step returns depends heavily on the exact value of n.
Since the underlying working mechanism is unclear, previous research can only offer vague, empirically motivated suggestions, e.g., simply increasing the value from one to a larger number such as 3 or 4. In this paper, we show that the estimation error of n-step returns can be decomposed into an off-policy bias (an under-estimation) and an approximation error (an over-estimation), and that the selection of n controls the balance between them. Data stored in the replay buffer are generated by previous policies; adopting them for updates therefore introduces the off-policy bias. Since the current policy is better than previous policies, the off-policy bias is an underestimation. The replay buffer is not the only source of off-policy bias: epsilon-greedy exploration also introduces off-policyness. On the other hand, n-step returns apply a max operator, explicitly (in Q-learning-based algorithms) or implicitly (in actor-critic algorithms), to an existing function to approximate the real target. The max operator brings the approximation error, which is an overestimation. Sec. 4 gives the formal definition of the decomposition and verifies these conclusions experimentally. According to our analysis, the magnitudes of the off-policy bias and the approximation error vary considerably across data points. Thus a fixed value of n is only a rough average, and there is plenty of room for improvement. We introduce a new metric, the policy age, to quantify the off-policyness of each data point. As the policy age grows, the off-policy bias increases linearly, while the approximation error decreases exponentially. Based on this observation, we propose a novel algorithm, named adaptive n-step bootstrapping. Given the policy age of each data point, adaptive n-step calculates the optimal n with an exponential function whose hyperparameters are determined by the tree-structured Parzen estimator (Bergstra et al., 2011).
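An exponential schedule over policy age can be sketched as follows; the decay direction (shorter traces for staler data) and all constants are assumptions for illustration, not the values found by the paper's TPE search:

```python
import math

def adaptive_n(policy_age, n_max=8, decay=0.1, n_min=1):
    """Hypothetical trace-length schedule: fresh data (small policy age,
    little off-policy bias) gets a long trace, while stale data falls
    back toward one-step TD. The constants here are placeholders."""
    n = round(n_max * math.exp(-decay * policy_age))
    return max(n_min, min(n_max, n))
```

In use, each sampled data point's policy age would be looked up and `adaptive_n` would pick the trace length for its n-step target.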
This paper provides a novel algorithm to estimate the optimal value of $n$ for $n$-step temporal difference methods. The paper derives an optimal value of $n$ based on minimizing a bias term, then utilizes intuition to derive an online approximation algorithm. The paper compares its adaptive $n$-step algorithm against fixed $n$-step values with both DQN and SAC empirically on several domains.
Coverage as a Principle for Discovering Transferable Behavior in Reinforcement Learning
1 INTRODUCTION. Unsupervised representation learning techniques have led to unprecedented results in domains like computer vision (Hénaff et al., 2019; He et al., 2019) and natural language processing (Devlin et al., 2019; Radford et al., 2019). These methods are commonly composed of two stages: an initial unsupervised phase, followed by supervised fine-tuning on downstream tasks. The self-supervised nature of the learning objective makes it possible to leverage large collections of unlabelled data in the first stage. This produces models that extract task-agnostic features well suited for transfer to downstream tasks. In reinforcement learning (RL), auxiliary representation learning objectives provide denser signals that result in data efficiency gains (Jaderberg et al., 2017) and even bridge the gap between learning from true state and from pixel observations (Laskin et al., 2020). However, RL applications have not yet seen the advent of the two-stage setting where task-agnostic pre-training is followed by efficient transfer to downstream tasks. We argue that there are two reasons for this lag with respect to the supervised counterparts. First, these methods traditionally focus on transferring representations (Lesort et al., 2018). While this is enough in supervised scenarios, we argue that leveraging pre-trained behavior is far more important in RL domains requiring structured exploration. Second, what types of self-supervised objectives enable the acquisition of transferable, task-agnostic knowledge is still an open question. Defining these objectives in the RL setting is complex, as they should account for the fact that the distribution of the input data is determined by the behavior of the agent. Transfer in deep learning is often performed through parameter initialization followed by fine-tuning.
The most widespread procedure consists of initializing all weights in the neural network with those from the pre-trained model, and then adding an output layer with random parameters (Girshick et al., 2014; Devlin et al., 2019). Depending on the amount of available data, the pre-trained parameters can either be fine-tuned or kept fixed. This builds on the intuition that the pre-trained model will map inputs to a feature space in which the downstream task is easy to perform. In the RL setting, this procedure completely dismisses the pre-trained policy and falls back to a random one when collecting experience. Given that complex RL problems require structured and temporally-extended behaviors, we argue that representation alone is not enough for efficient transfer in challenging domains. Pre-trained representations do provide data efficiency gains in domains with dense reward signals (Finn et al., 2017; Yarats et al., 2019; Stooke et al., 2020a), but our experiments show that the standard fine-tuning procedure falls short in hard exploration problems (c.f. Figure 1). We observe this limitation even when fine-tuning the pre-trained policy, which is aligned with findings from previous work (Finn et al., 2017). Learning on the downstream task can lead to catastrophically forgetting the pre-trained policy, something that depends on many difficult-to-measure factors such as the similarity between the tasks. We address the problem of leveraging arbitrary pre-trained policies when solving downstream tasks, a requirement for enabling efficient transfer in RL. Defining unsupervised RL objectives remains an open problem, and existing solutions are often influenced by how the acquired knowledge will be used for solving downstream tasks. Model-based approaches can learn world models from unsupervised interaction (Ha & Schmidhuber, 2018).
However, the diversity of the training data will impact the accuracy of the model (Sekar et al., 2020), and deploying this type of approach in visually complex domains like Atari remains an open problem (Hafner et al., 2019). Unsupervised RL has also been explored through the lens of empowerment (Salge et al., 2014; Mohamed & Rezende, 2015), which studies agents that aim to discover intrinsic options (Gregor et al., 2016; Eysenbach et al., 2019). While these options can be leveraged by hierarchical agents (Florensa et al., 2017) or integrated within the universal successor features framework (Barreto et al., 2017; 2018; Borsa et al., 2019; Hansen et al., 2020), their lack of coverage generally limits their applicability to complex downstream tasks (Campos et al., 2020). We argue that maximizing coverage is a good objective for task-agnostic RL, as agents that succeed at this task will need to develop complex behaviors in order to explore the environment efficiently (Kearns & Singh, 2002). This problem can be formulated as that of finding policies that induce maximally entropic state distributions, which can become extremely inefficient in high-dimensional state spaces without proper priors (Hazan et al., 2019; Lee et al., 2019). In practice, exploration is often encouraged through intrinsic curiosity signals that incorporate priors in order to quantify how different the current state is from those already visited (Bellemare et al., 2016; Houthooft et al., 2016; Ostrovski et al., 2017; Puigdomènech Badia et al., 2020b). Agents that maximize these novelty-seeking signals have been shown to discover useful behaviors in unsupervised settings (Pathak et al., 2017; Burda et al., 2018a), but little research has been conducted on leveraging the acquired knowledge once the agent is exposed to extrinsic reward.
We show that coverage-seeking objectives are a good proxy for acquiring knowledge in task-agnostic settings, as leveraging the behaviors discovered in an unsupervised pre-training stage provides important gains when solving downstream tasks. Our contributions can be summarized as follows. (1) We study how to transfer knowledge in RL through behavior by re-using pre-trained policies, an approach that is complementary to re-using representations. We argue that pre-trained behavior can be used for both exploitation and exploration, and present techniques to achieve both goals. (2) We propose coverage as a principle for discovering behavior that is suitable for both exploitation and exploration. While coverage is naturally aligned with exploration, we show that this objective also leads to the discovery of behavior that is useful for exploitation. (3) We propose Coverage Pre-training for Transfer (CPT), a method that implements the aforementioned hypotheses, and provide extensive experimental evaluation to support them. Our results show that leveraging the behavior of policies pre-trained to maximize coverage provides important benefits when solving downstream tasks. CPT obtains the largest gains in hard exploration games, where it almost doubles the median human normalized score achieved by our strongest baseline. Importantly, these benefits are observed even when the pre-trained policies are misaligned with the task being solved, confirming that the benefits do not come from a fortuitous alignment between our pre-training objective and the task reward. Furthermore, we show that CPT is able to leverage a single task-agnostic policy to solve multiple tasks in the same environment. 2 REINFORCEMENT LEARNING WITH UNSUPERVISED PRE-TRAINING. We follow a setup similar to that proposed by Hansen et al. (2020).
In an initial pre-training stage, agents are allowed as many interactions with the environment as needed, as long as they are not exposed to task-specific rewards. Rewards are reinstated in a second stage, where the knowledge acquired during unsupervised pre-training should be leveraged to enable efficient learning. This is analogous to the evaluation setting for unsupervised learning methods, where pre-training on classification benchmarks with labels removed is evaluated after fine-tuning on small sets of annotated examples. The two-stage setup introduces two main challenges: defining pretext tasks in the absence of reward, and efficiently leveraging knowledge once rewards are reinstated. Our proposed method, Coverage Pre-training for Transfer (CPT), relies on coverage maximization as a pretext task for task-agnostic pre-training in order to produce policies whose behavior can be leveraged for both exploitation and exploration when solving downstream tasks in the same environment. Figure 2 provides intuition about the potential benefits of CPT. 3 LEVERAGING PRE-TRAINED POLICIES. Transfer in supervised domains often exploits the fact that related tasks might be solved using similar representations. This practice deals with the data inefficiency of training large neural networks with stochastic gradient descent. However, there is an additional source of data inefficiency when training RL agents: unstructured exploration. If the agent fails to discover reward while exploring, it will struggle even when fitting simple function approximators on top of the true state of the MDP. These two strategies are complementary, as they address different sources of inefficiency, which motivates the study of techniques for leveraging pre-trained behavior (i.e., policies). Our approach relies on off-policy learning methods in order to leverage arbitrary pre-trained policies.
We make use of the mapping from observations to actions of such policies (i.e., their behavior), and do not transfer knowledge through pre-trained neural network weights. We consider value-based methods with experience replay that estimate action-value functions and derive greedy policies from them. The presented formulation considers a single pre-trained policy, πp, but it is straightforward to extend it to multiple such policies. No assumptions are made on how the pre-trained policy is obtained, and it is used only for acting. We propose using the behavior of the pre-trained policy for two complementary purposes: exploitation and exploration. Figure 2 provides intuition about the potential benefits of these two approaches on a simple environment, and pseudo-code for the proposed methods is included in Appendix A. Exploitation. When the behavior of πp is aligned with the downstream task, it can be used for zero-shot transfer. However, we are concerned with the more realistic scenario where only some of the behaviors of πp might be aligned with downstream tasks (c.f. Figure 2, right). We propose to leverage πp for exploitation by letting the agent combine primitive actions with the behavior of πp. This is achieved by considering an expanded action set A+ = A ∪ {πp(s)}, so that the agent can fall back to πp for one step when taking the additional action. Intuitively, this new state-dependent action should enable faster convergence when the pre-trained policy discovered behaviors that are useful for the task, while letting the agent ignore it otherwise. The return of taking action a′ ∼ πp(s) is used as a target to fit both Q(s, πp(s)) and Q(s, a′), which implements the observation that they are the same action and thus lead to the same outcomes. Exploration.
Following the pre-trained policy might bring the agent to states that are unlikely to be visited with unstructured exploration techniques such as ε-greedy. This property has the potential to accelerate learning even when the behavior of the pre-trained policy is not aligned with the downstream task, as it will effectively shorten the path between otherwise distant states (Liu & Brunskill, 2018). As we rely on off-policy methods that can learn from experience collected by arbitrary policies, we propose to perform temporally-extended exploration with πp, which we refer to as flights. Inspired by εz-greedy and its connection to Lévy flights (Viswanathan et al., 1996), a class of ecological models for animal foraging, these flights are started randomly and their duration is sampled from a heavy-tailed distribution. Our proposal can be understood as a variant of εz-greedy where pre-trained policies are used as exploration options. An exploratory flight might be started at any step with some probability. The duration of the flight is sampled from a heavy-tailed distribution, and control is handed over to πp for the complete flight. When not in a flight, the exploitative policy that maximizes the extrinsic reward is derived from the estimated Q-values using the ε-greedy operator. This ensures that all state-action pairs will be visited given enough time, as exploring only with πp does not guarantee this property. Note that this is not needed in εz-greedy, which reduces to standard ε-greedy exploration when sampling a flight duration of one step.
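The two mechanisms can be sketched together as follows; the dict-based Q table, the last-index convention for the "defer to πp" action, and the constants for the flight probability and power-law duration are illustrative assumptions, not the authors' settings:

```python
import random

def act(q, s, n_primitive, pi_p, epsilon=0.1):
    """Exploitation: epsilon-greedy over the expanded set A+ = A ∪ {pi_p(s)}.
    Index n_primitive stands for "defer to the pre-trained policy for one
    step". Returns the primitive action executed and the chosen index."""
    if random.random() < epsilon:
        idx = random.randrange(n_primitive + 1)
    else:
        idx = max(range(n_primitive + 1), key=lambda a: q.get((s, a), 0.0))
    return (pi_p(s) if idx == n_primitive else idx), idx

def sample_flight_duration(mu=2.0, max_len=100):
    """Heavy-tailed flight length, p(n) ∝ n^(-mu), truncated at max_len."""
    lengths = list(range(1, max_len + 1))
    weights = [n ** -mu for n in lengths]
    return random.choices(lengths, weights=weights)[0]

def choose_controller(steps_left, flight_prob=0.01):
    """Exploration: hand control to pi_p for the remainder of a flight,
    occasionally starting a new one; otherwise use the epsilon-greedy
    exploitative policy."""
    if steps_left > 0:
        return "pi_p", steps_left - 1
    if random.random() < flight_prob:
        return "pi_p", sample_flight_duration() - 1
    return "eps_greedy", 0
```

Per the text above, the return observed after `act` resolves the extra action would be used as the target for both Q(s, πp(s)) and Q(s, a′), since they are the same primitive action.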
This paper proposed a transfer approach for reinforcement learning. The proposed approach leverages a policy pre-trained via Never Give Up (NGU) approach, and can facilitate learning challenging RL tasks including the ones with sparse reward. This paper presents many strong pieces of evidence that this approach can be used to tackle challenging RL problems including hard exploration and multi-task learning.
Coverage as a Principle for Discovering Transferable Behavior in Reinforcement Learning
1 INTRODUCTION. Unsupervised representation learning techniques have led to unprecedented results in domains like computer vision (Hénaff et al., 2019; He et al., 2019) and natural language processing (Devlin et al., 2019; Radford et al., 2019). These methods are commonly composed of two stages: an initial unsupervised phase, followed by supervised fine-tuning on downstream tasks. The self-supervised nature of the learning objective makes it possible to leverage large collections of unlabelled data in the first stage. This produces models that extract task-agnostic features that are well suited for transfer to downstream tasks. In reinforcement learning (RL), auxiliary representation learning objectives provide denser signals that result in data efficiency gains (Jaderberg et al., 2017) and even bridge the gap between learning from true state and from pixel observations (Laskin et al., 2020). However, RL applications have not yet seen the advent of the two-stage setting where task-agnostic pre-training is followed by efficient transfer to downstream tasks. We argue that there are two reasons explaining this lag with respect to their supervised counterparts. First, these methods traditionally focus on transferring representations (Lesort et al., 2018). While this is enough in supervised scenarios, we argue that leveraging pre-trained behavior is far more important in RL domains requiring structured exploration. Second, which self-supervised objectives enable the acquisition of transferable, task-agnostic knowledge is still an open question. Defining these objectives in the RL setting is complex, as they should account for the fact that the distribution of the input data will be defined by the behavior of the agent. Transfer in deep learning is often performed through parameter initialization followed by fine-tuning.
The most widespread procedure consists in initializing all weights in the neural network with those from the pre-trained model, and then adding an output layer with random parameters (Girshick et al., 2014; Devlin et al., 2019). Depending on the amount of available data, the pre-trained parameters can either be fine-tuned or kept fixed. This builds on the intuition that the pre-trained model will map inputs to a feature space where the downstream task is easy to perform. In the RL setting, this procedure completely dismisses the pre-trained policy and falls back to a random one when collecting experience. Given that complex RL problems require structured and temporally-extended behaviors, we argue that representation alone is not enough for efficient transfer in challenging domains. Pre-trained representations do indeed provide data efficiency gains in domains with dense reward signals (Finn et al., 2017; Yarats et al., 2019; Stooke et al., 2020a), but our experiments show that the standard fine-tuning procedure falls short in hard exploration problems (cf. Figure 1). We observe this limitation even when fine-tuning the pre-trained policy, which is aligned with findings from previous works (Finn et al., 2017). Learning in the downstream task can lead to catastrophically forgetting the pre-trained policy, something that depends on many difficult-to-measure factors such as the similarity between the tasks. We address the problem of leveraging arbitrary pre-trained policies when solving downstream tasks, a requirement for enabling efficient transfer in RL. Defining unsupervised RL objectives remains an open problem, and existing solutions are often influenced by how the acquired knowledge will be used for solving downstream tasks. Model-based approaches can learn world models from unsupervised interaction (Ha & Schmidhuber, 2018).
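The standard recipe described above can be sketched in a few lines. This is a minimal dictionary-based illustration, not any paper's actual setup; the layer names, weight shapes, and initialisation scale are all hypothetical placeholders:

```python
import random

def init_for_downstream(pretrained, n_features, n_classes, freeze=False):
    # Copy every pre-trained weight, then attach a randomly initialised
    # output layer for the downstream task.
    model = {name: list(w) for name, w in pretrained.items()}
    model["head"] = [[random.gauss(0.0, 0.01) for _ in range(n_features)]
                     for _ in range(n_classes)]
    # Depending on data availability, either fine-tune everything or keep
    # the pre-trained parameters fixed and train only the new head.
    trainable = {"head"} if freeze else set(model)
    return model, trainable

backbone = {"conv1": [0.1, 0.2], "conv2": [0.3, 0.4]}  # illustrative weights
model, trainable = init_for_downstream(backbone, n_features=2, n_classes=3,
                                       freeze=True)
```

Note that in the RL setting this recipe transfers only the representation: the behaviour used to collect experience under the fresh random head is unrelated to the pre-trained policy, which is the limitation the text discusses.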
However , the diversity of the training data will impact the accuracy of the model ( Sekar et al. , 2020 ) and deploying this type of approach in visually complex domains like Atari remains an open problem ( Hafner et al. , 2019 ) . Unsupervised RL has also been explored through the lens of empowerment ( Salge et al. , 2014 ; Mohamed & Rezende , 2015 ) , which studies agents that aim to discover intrinsic options ( Gregor et al. , 2016 ; Eysenbach et al. , 2019 ) . While these options can be leveraged by hierarchical agents ( Florensa et al. , 2017 ) or integrated within the universal successor features framework ( Barreto et al. , 2017 ; 2018 ; Borsa et al. , 2019 ; Hansen et al. , 2020 ) , their lack of coverage generally limits their applicability to complex downstream tasks ( Campos et al. , 2020 ) . We argue that maximizing coverage is a good objective for task-agnostic RL , as agents that succeed at this task will need to develop complex behaviors in order to efficiently explore the environment ( Kearns & Singh , 2002 ) . This problem can be formulated as that of finding policies that induce maximally entropic state distributions , which might become extremely inefficient in high-dimensional state spaces without proper priors ( Hazan et al. , 2019 ; Lee et al. , 2019 ) . In practice , exploration is often encouraged through intrinsic curiosity signals that incorporate priors in order to quantify how different the current state is from those already visited ( Bellemare et al. , 2016 ; Houthooft et al. , 2016 ; Ostrovski et al. , 2017 ; Puigdomènech Badia et al. , 2020b ) . Agents that maximize these novelty-seeking signals have been shown to discover useful behaviors in unsupervised settings ( Pathak et al. , 2017 ; Burda et al. , 2018a ) , but little research has been conducted towards leveraging the acquired knowledge once the agent is exposed to extrinsic reward . 
We show that coverage-seeking objectives are a good proxy for acquiring knowledge in task-agnostic settings , as leveraging the behaviors discovered in an unsupervised pre-training stage provides important gains when solving downstream tasks . Our contributions can be summarized as follows . ( 1 ) We study how to transfer knowledge in RL through behavior by re-using pre-trained policies , an approach that is complementary to re-using representations . We argue that pre-trained behavior can be used for both exploitation and exploration , and present techniques to achieve both goals . ( 2 ) We propose coverage as a principle for discovering behavior that is suitable for both exploitation and exploration . While coverage is naturally aligned with exploration , we show that this objective will lead to the discovery of behavior that is useful for exploitation as well . ( 3 ) We propose Coverage Pre-training for Transfer ( CPT ) , a method that implements the aforementioned hypotheses , and provide extensive experimental evaluation to support them . Our results show that leveraging the behavior of policies pre-trained to maximize coverage provides important benefits when solving downstream tasks . CPT obtains the largest gains in hard exploration games , where it almost doubles the median human normalized score achieved by our strongest baseline . Importantly , these benefits are observed even when the pre-trained policies are misaligned with the task being solved , confirming that the benefits do not come from a fortuitous alignment between our pre-training objective and the task reward . Furthermore , we show that CPT is able to leverage a single task-agnostic policy to solve multiple tasks in the same environment . 2 REINFORCEMENT LEARNING WITH UNSUPERVISED PRE-TRAINING . We follow a similar setup to that proposed by Hansen et al . ( 2020 ) . 
In an initial pre-training stage , agents are allowed as many interactions with the environment as needed as long as they are not exposed to task-specific rewards . Rewards are reinstated in a second stage , where the knowledge acquired during unsupervised pre-training should be leveraged in order to enable efficient learning . This is analogous to the evaluation setting for unsupervised learning methods , where pre-training on classification benchmarks with labels removed is evaluated after fine-tuning on small sets of annotated examples . The two-stage setup introduces two main challenges : defining pretext tasks in the absence of reward , and efficiently leveraging knowledge once rewards are reinstated . Our proposed method , Coverage Pre-training for Transfer ( CPT ) , relies on coverage maximization as a pretext task for task-agnostic pre-training in order to produce policies whose behavior can be leveraged for both exploitation and exploration when solving downstream tasks in the same environment . Figure 2 provides intuition about the potential benefits of CPT . 3 LEVERAGING PRE-TRAINED POLICIES . Transfer in supervised domains often exploits the fact that related tasks might be solved using similar representations . This practice deals with the data inefficiency of training large neural networks with stochastic gradient descent . However , there is an additional source of data inefficiency when training RL agents : unstructured exploration . If the agent fails at discovering reward while exploring , it will struggle even when fitting simple function approximators on top of the true state of the MDP . These two strategies are complementary , as they address different sources of inefficiency , which motivates the study of techniques for leveraging pre-trained behavior ( i.e . policies ) . Our approach relies on off-policy learning methods in order to leverage arbitrary pre-trained policies . 
We make use of the mapping from observations to actions of such policies (i.e., their behavior), and do not transfer knowledge through pre-trained neural network weights. We consider value-based methods with experience replay that estimate action-value functions and derive greedy policies from them. The presented formulation considers a single pre-trained policy, πp, but it is straightforward to extend it to multiple such policies. No assumptions are made on how the pre-trained policy is obtained, and it is only used for acting. We propose using the behavior of the pre-trained policy for two complementary purposes: exploitation and exploration. Figure 2 provides intuition about the potential benefits of these two approaches in a simple environment, and pseudo-code for the proposed methods is included in Appendix A. Exploitation. When the behavior of πp is aligned with the downstream task, it can be used for zero-shot transfer. However, we are concerned with the more realistic scenario where only some of the behaviors of πp might be aligned with downstream tasks (cf. Figure 2, right). We propose to leverage πp for exploitation by letting the agent combine primitive actions with the behavior of πp. This is achieved by considering an expanded action set A⁺ = A ∪ {πp(s)}, so that the agent can fall back to πp for one step when taking the additional action. Intuitively, this new state-dependent action should enable faster convergence when the pre-trained policy discovered behaviors that are useful for the task, while letting the agent ignore it otherwise. The return of taking action a′ ∼ πp(s) is used as a target to fit both Q(s, πp(s)) and Q(s, a′), which implements the observation that they are the same action and thus will lead to the same outcomes. Exploration.
Following the pre-trained policy might bring the agent to states that are unlikely to be visited with unstructured exploration techniques such as ε-greedy. This property has the potential of accelerating learning even when the behavior of the pre-trained policy is not aligned with the downstream task, as it will effectively shorten the path between otherwise distant states (Liu & Brunskill, 2018). As we rely on off-policy methods that can learn from experience collected by arbitrary policies, we propose to perform temporally-extended exploration with πp, which we will refer to as flights. Inspired by εz-greedy and its connection to Lévy flights (Viswanathan et al., 1996), a class of ecological models for animal foraging, these flights are started randomly and their duration is sampled from a heavy-tailed distribution. Our proposal can be understood as a variant of εz-greedy where pre-trained policies are used as exploration options. An exploratory flight might be started at any step with some probability. The duration of the flight is sampled from a heavy-tailed distribution, and control is handed over to πp for the complete flight. When not in a flight, the exploitative policy that maximizes the extrinsic reward is derived from the estimated Q-values using the ε-greedy operator. This ensures that all state-action pairs will be visited given enough time, as exploring only with πp does not guarantee this property. Note that this safeguard is not needed in εz-greedy, which reduces to standard ε-greedy exploration when sampling a flight duration of one step.
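The exploration scheme above can be sketched as a single action-selection routine. This is a minimal tabular sketch, not the paper's implementation: the power-law exponent, flight probability, duration cap, and helper names are all illustrative assumptions.

```python
import random

def zeta_duration(mu=2.0, max_len=100):
    # Heavy-tailed flight length: P(n) proportional to n**(-mu), a discrete
    # truncated power law in the spirit of Levy-flight models.
    lengths = range(1, max_len + 1)
    weights = [n ** -mu for n in lengths]
    return random.choices(lengths, weights=weights)[0]

def act(state, Q, pi_p, actions, flight_left, p_flight=0.05, eps=0.1):
    """One step of the combined scheme: occasionally start a flight and hand
    control to the pre-trained policy pi_p for its whole duration; otherwise
    act eps-greedily w.r.t. the estimated Q-values."""
    if flight_left == 0 and random.random() < p_flight:
        flight_left = zeta_duration()           # start a new flight
    if flight_left > 0:
        return pi_p(state), flight_left - 1     # control handed to pi_p
    if random.random() < eps:
        return random.choice(actions), 0        # guarantees full coverage
    return max(actions, key=lambda a: Q[(state, a)]), 0
```

The caller threads `flight_left` through successive steps; because the learner is off-policy, transitions collected during flights can be replayed to fit the exploitative policy directly.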
In this work the authors focus on transfer in RL, proposing to transfer knowledge through behavior instead of representations. They propose using coverage as an objective for the pre-training procedure, employing NGU (Badia et al., 2020) to pre-train the policy. They also propose a method based on coverage pre-training for transfer, and provide empirical evidence in support of their method for transfer in RL on the Atari suite.
New Bounds For Distributed Mean Estimation and Variance Reduction
We consider the problem of distributed mean estimation (DME), in which n machines must cooperate to estimate the mean of their d-dimensional inputs, µ = (1/n) ∑_{v=1}^n x_v, while minimizing total communication cost. DME is a fundamental construct in distributed machine learning, and there has been considerable work on variants of this problem, especially in the context of distributed variance reduction for stochastic gradients in parallel SGD. Previous work typically assumes an upper bound on the norm of the input vectors, and achieves an error bound in terms of this norm. However, in many real applications, the input vectors are concentrated around the correct output µ, but µ itself has large norm. In such cases, previous output error bounds perform poorly. In this paper, we show that output error bounds need not depend on input norm. We provide a method of quantization which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs, not on input norm, and show an analogous result for distributed variance reduction. The technique is based on a new connection with lattice theory. We also provide lower bounds showing that the communication-to-error trade-off of our algorithms is asymptotically optimal. As the lattices achieving optimal bounds under the ℓ₂ norm can be computationally impractical, we also present an extension which leverages easy-to-use cubic lattices, and is loose only up to a logarithmic factor in d. We show experimentally that our method yields practical improvements for common applications, relative to prior approaches. 1 Introduction. Several problems in distributed machine learning and optimization can be reduced to variants of the distributed mean estimation problem, in which n machines must cooperate to jointly estimate the mean of their d-dimensional inputs µ = (1/n) ∑_{v=1}^n x_v as closely as possible, while minimizing communication.
In particular, this construct is often used for distributed variance reduction: here, each machine receives as input an independent probabilistic estimate of a d-dimensional vector ∇, and the aim is for all machines to output a common estimate of ∇ with lower variance than the individual inputs, while minimizing communication. Without any communication restrictions, the ideal output would be the mean of all machines' inputs. While variants of these fundamental problems have been considered since the seminal work of Tsitsiklis & Luo (1987), the task has seen renewed attention recently in the context of distributed machine learning. In particular, variance reduction is a key component in data-parallel distributed stochastic gradient descent (SGD), the standard way to parallelize the training of deep neural networks, e.g. Bottou (2010); Abadi et al. (2016), where it is used to estimate the average of gradient updates obtained in parallel at the nodes. Thus, several prior works proposed efficient compression schemes to solve variance reduction or mean estimation, see e.g. Suresh et al. (2017); Alistarh et al. (2017); Ramezani-Kebrya et al. (2019); Gandikota et al. (2019), and Ben-Nun & Hoefler (2019) for a general survey of practical distribution schemes. These schemes seek to quantize nodes' inputs coordinate-wise to one of a limited collection of values, in order to then efficiently encode and transmit these quantized values. A trade-off then arises between the number of bits sent and the added variance due to quantization. Since the measure of output quality is variance, it appears most natural to evaluate this with respect to input variance, in order to show that variance reduction is indeed achieved. Surprisingly, however, we are aware of no previous works which do so; all existing methods give bounds on output variance in terms of the squared input norm.
This is clearly suboptimal when the squared norm is higher than the variance, i.e., when inputs are not centered around the origin. In some practical scenarios this causes output variance to be higher than input variance, as we demonstrate in Section 4. Contributions. In this paper, we provide the first bounds for distributed mean estimation and variance reduction which remain tight when inputs are not centered around the origin. Our results are based on new lattice-based quantization techniques, which may be of independent interest, and come with matching lower bounds and practical extensions. More precisely, our contributions are as follows:
• For distributed mean estimation, we show that, to achieve a reduction by a factor q in the input 'variance' (which we define to be the maximum squared distance between inputs), it is necessary and sufficient for machines to communicate Θ(d log q) bits.
• For variance reduction, we show tight Θ(d log n) bounds on the worst-case communication bits required to achieve the optimal Θ(n)-factor variance reduction by n nodes over d-dimensional input, and indeed to achieve any variance reduction at all. We then show how, by incorporating error detection into our quantization scheme, we can also obtain tight bounds on the bits required in expectation.
• We show how to efficiently instantiate our lattice-based quantization framework in practice, with guarantees. In particular, we devise a variant of the scheme which ensures close-to-optimal communication-variance bounds even for the standard cubic lattice, and use it to obtain improvements relative to the best known previous methods for distributed mean estimation, both on synthetic and real-world tasks.
1.1 Problem Definitions and Discussion. MeanEstimation is defined as follows: we have n machines v, and each receives as input a vector x_v ∈ ℝᵈ.
We also assume that all machines receive a common value y, with the guarantee that for any machines u, v, ‖x_u − x_v‖ ≤ y. Our goal is for all machines to output the same value EST ∈ ℝᵈ, which is an unbiased estimator of the mean µ = (1/n) ∑_v x_v, i.e., E[EST] = µ, with variance as low as possible. Notice that the input specification is entirely deterministic; any randomness in the output arises only from the algorithm used. In the variant VarianceReduction, we again have a set of n machines, and now an unknown true vector ∇. Each machine v receives as input an independent unbiased estimator x_v of ∇ (i.e., E[x_v] = ∇) with variance E[‖x_v − ∇‖²] ≤ σ². Machines are assumed to have knowledge of σ. Our goal is for all machines to output the same value EST ∈ ℝᵈ, which is an unbiased estimator of ∇, i.e., E[EST] = ∇, with low variance. Since the input is random, output randomness now stems from this input randomness as well as any randomness in the algorithm. VarianceReduction is common, for instance, in the context of gradient-based optimization of machine learning models, where we assume that each machine v processes local samples in order to obtain a stochastic gradient g̃_v, which is an unbiased estimator of the true gradient ∇, with variance bound σ². If we directly averaged the local stochastic gradients g̃_v, we would obtain an unbiased estimator of the true gradient ∇ with variance bound σ²/n, which can lead to faster convergence. Input Variance Assumption. The parameter y replaces the usual MeanEstimation assumption of a known bound M on the norms of input vectors. Note that, in the worst case, we can always set y = 2M and obtain the same asymptotic upper bounds as in e.g. Suresh et al. (2017); our results are therefore at least as good as previous approaches in all cases, but provide significant improvement when inputs are not centered around the origin.
The reason for this change is to allow stronger bounds in scenarios where we expect inputs to be closer to each other than to the origin. In particular, it allows our MeanEstimation problem to more effectively generalize VarianceReduction. The parameter y is a deterministic analogue of the parameter σ for VarianceReduction; both y and σ provide a bound on the distance of inputs from their mean, rather than from the origin. Accordingly, input variance σ² for a VarianceReduction instance corresponds (up to constant factors) to y² for a MeanEstimation instance. For consistency of terminology, we therefore refer to y² as the input variance of the instance (despite such inputs being deterministic). It is common in machine learning applications of VarianceReduction to assume that an estimate of the variance σ² is known (Alistarh et al., 2017; Gandikota et al., 2019). To study both problems in a common framework, we make the analogous assumption about MeanEstimation, and assume knowledge of the input variance y². Even if the relevant bounds y or σ are not known a priori, they can usually be estimated in practice. Relationship Between Problems. If one allows unrestricted communication, the natural solution to both problems is to average the inputs. This is an exact solution to MeanEstimation with variance 0, and is also an asymptotically optimal solution to VarianceReduction, with variance at most σ²/n.¹ However, doing so would require the exchange of infinite-precision real numbers. So, it is common to instead communicate quantized values of bounded bit-length (Alistarh et al., 2017), which will engender additional variance caused by random choices within the quantization method. The resulting estimates will therefore have variance Var_quant for MeanEstimation, and σ²/n + Var_quant for VarianceReduction.
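The variance decomposition above can be checked empirically with a toy numpy experiment. Here simple per-coordinate randomised rounding stands in for the paper's lattice scheme, and the dimensions, noise level, and grid spacing are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, step):
    # Unbiased randomised rounding to a grid of spacing `step`:
    # E[round(x)] = x, per-coordinate variance at most step**2 / 4.
    low = np.floor(x / step) * step
    p_up = (x - low) / step
    return low + step * (rng.random(x.shape) < p_up)

d, n, sigma, step = 1000, 64, 1.0, 0.5
grad = rng.normal(size=d)                          # unknown true vector
inputs = grad + sigma * rng.normal(size=(n, d))    # n unbiased estimates
est = np.mean([stochastic_round(x, step) for x in inputs], axis=0)

input_var = np.mean((inputs[0] - grad) ** 2)       # roughly sigma**2
output_var = np.mean((est - grad) ** 2)            # roughly sigma**2/n + Var_quant
```

Averaging the n quantized estimates drives the per-coordinate error down to roughly σ²/n plus a quantization term controlled by the grid spacing, matching the Var_quant decomposition in the text.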
We will show a trade-off between bits of communication and output variance for both problems; in the case of VarianceReduction, though, there is an 'upper limit' to this trade-off, since we cannot go below Ω(σ²/n) total output variance. The other major difference between the two problems is that in MeanEstimation, distances between inputs are bounded by y with certainty, whereas in VarianceReduction they are instead bounded by O(σ) only in expectation. This causes extra complications for quantization, and introduces a gap between average and worst-case communication cost. Distributed Model. We aim to provide a widely applicable method for distributed mean estimation, and therefore we avoid relying on the specifics of particular distributed models. Instead, we assume that the basic communication structures we use (stars and binary trees) can be constructed without significant overhead. This setting is supported by machine learning applications, which have very high input dimension (i.e., d ≫ n), and so the costs of synchronization or construction of an overlay (which do not depend on d, and are generally poly-logarithmic in n) will be heavily dominated by the communication costs incurred subsequently during mean estimation. These setup costs also need only be incurred once, even if mean estimation or variance reduction is to be performed many times (e.g., during distributed SGD). For these reasons, we do not include these model-specific setup costs in our stated complexities; any implementation should take them into separate consideration. For simplicity, we will present our algorithms within a basic synchronous fault-free message-passing model, in which machines can send arbitrary messages to any other machine, but they could naturally be extended to asynchronous and shared-memory models of communication.
Our aim will be to minimize the number of bits sent and received by any machine during the algorithm; we do not consider other measures such as round complexity. Vector Norms. When dealing with vectors in ℝᵈ, we will use names in bold, e.g. x, y. We will state most of our results in such a way that they apply to any of the three most commonly-used norms on ℝᵈ in applications: the ℓ₁ norm ‖x‖₁ := ∑_{i=1}^d |x_i|, the ℓ₂ norm ‖x‖₂ := (∑_{i=1}^d x_i²)^{1/2}, and the ℓ∞ norm ‖x‖∞ := max_{i=1..d} |x_i|. Throughout the paper we will therefore use the general notation ‖·‖, which should be considered fixed as one of these norms, other than for statements specific to particular norms. Definitions which depend on norms, such as the variance Var[x] := E[‖x − E[x]‖²], are therefore assumed to also be under the appropriate norm.
¹ For specific classes of input distribution, and for non-asymptotic concentration results, however, better estimators of ∇ are known; see e.g. Joly et al. (2017).
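To make the norm-independence idea concrete, here is a toy cubic-lattice sketch in numpy: offsets from a shared reference point are rounded on a grid whose spacing scales with the input spread y rather than with the inputs' norm. The choice of reference, the y/q scaling, and all constants are illustrative assumptions and not the paper's actual lattice construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize_relative(x, ref, y, q):
    # Encode the offset x - ref on a cubic grid of spacing y/q, using
    # unbiased randomised rounding; only the grid coordinates (O(d log q)
    # bits) would need to be transmitted.
    step = y / q
    offset = (x - ref) / step
    low = np.floor(offset)
    up = rng.random(x.shape) < (offset - low)
    return ref + step * (low + up)

d, y, q = 500, 0.1, 16
mu = 1e6 * np.ones(d)                      # inputs concentrated around a huge-norm mean
xs = mu + (y / 4) * rng.normal(size=(8, d))
ref = xs[0].copy()                          # one machine's input serves as reference
est = np.mean([quantize_relative(x, ref, y, q) for x in xs], axis=0)
err = np.mean((est - xs.mean(axis=0)) ** 2)  # scales with (y/q)**2, not ||mu||**2
```

Even though every input has norm around 10⁶, the squared error of the quantized mean stays below (y/q)², because the grid is anchored near the inputs rather than at the origin.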
The paper considers a particular setting of the distributed mean estimation problem, where each party has a vector of potentially large $\ell_2$ norm, yet these vectors are fairly close to each other. The goal is to communicate as few bits as possible while estimating the mean of the vectors. Previous approaches had a dependence on the size of the ball containing all the vectors, which gives bad bounds if the vectors are long (but close to each other).
New Bounds For Distributed Mean Estimation and Variance Reduction
∑n v=1 xv , while min- imizing total communication cost . DME is a fundamental construct in distributed machine learning , and there has been considerable work on variants of this problem , especially in the context of distributed variance reduction for stochastic gradients in parallel SGD . Previous work typically assumes an upper bound on the norm of the input vectors , and achieves an error bound in terms of this norm . However , in many real applications , the input vectors are concentrated around the correct output µ , but µ itself has large norm . In such cases , previous output error bounds perform poorly . In this paper , we show that output error bounds need not depend on input norm . We provide a method of quantization which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs , not on input norm , and show an analogous result for distributed variance reduction . The technique is based on a new connection with lattice theory . We also provide lower bounds showing that the communication to error trade-off of our algorithms is asymptotically optimal . As the lattices achieving optimal bounds under ` 2-norm can be computationally impractical , we also present an extension which leverages easy-to-use cubic lattices , and is loose only up to a logarithmic factor in d. We show experimentally that our method yields practical improvements for common applications , relative to prior approaches . 1 Introduction . Several problems in distributed machine learning and optimization can be reduced to variants distributed mean estimation problem , in which n machines must cooperate to jointly estimate the mean of their d-dimensional inputs µ = 1n ∑n v=1 xv as closely as possible , while minimizing communication . 
In particular , this construct is often used for distributed variance reduction : here , each machine receives as input an independent probabilistic estimate of a d-dimensional vector ∇ , and the aim is for all machines to output a common estimate of ∇ with lower variance than the individual inputs , minimizing communication . Without any communication restrictions , the ideal output would be the mean of all machines ’ inputs . While variants of these fundamental problems have been considered since seminal work by Tsitsiklis & Luo ( 1987 ) , the task has seen renewed attention recently in the context of distributed machine learning . In particular , variance reduction is a key component in data-parallel distributed stochastic gradient descent ( SGD ) , the standard way to parallelize the training of deep neural networks , e.g . Bottou ( 2010 ) ; Abadi et al . ( 2016 ) , where it is used to estimate the average of gradient updates obtained in parallel at the nodes . Thus , several prior works proposed efficient compression schemes to solve variance reduction or mean estimation , see e.g . Suresh et al . ( 2017 ) ; Alistarh et al . ( 2017 ) ; Ramezani-Kebrya et al . ( 2019 ) ; Gandikota et al . ( 2019 ) , and Ben-Nun & Hoefler ( 2019 ) for a general survey of practical distribution schemes . These schemes seek to quantize nodes ’ inputs coordinatewise to one of a limited collection of values , in order to then efficiently encode and transmit these quantized values . A trade-off then arises between the number of bits sent , and the added variance due of quantization . Since the measure of output quality is variance , it appears most natural to evaluate this with respect to input variance , in order to show that variance reduction is indeed achieved . Surprisingly , however , we are aware of no previous works which do so ; all existing methods give bounds on output variance in terms of the squared input norm . 
This is clearly suboptimal when the squared norm is higher than the variance , i.e. , when inputs are not centered around the origin . In some practical scenarios this causes output variance to be higher than input variance , as we demonstrate in Section 4 . Contributions . In this paper , we provide the first bounds for distributed mean estimation and variance reduction which are still tight when inputs are not centered around the origin . Our results are based on new lattice-based quantization techniques , which may be of independent interest , and come with matching lower bounds , and practical extensions . More precisely , our contributions are as follows : • For distributed mean estimation , we show that , to achieve a reduction of a factor q in the input ‘ variance ’ ( which we define to be the maximum squared distance between inputs ) , it is necessary and sufficient for machines to communicate Θ ( d log q ) bits . • For variance reduction , we show tight Θ ( d logn ) bounds on the worst-case communication bits required to achieve optimal Θ ( n ) -factor variance reduction by n nodes over d-dimensional input , and indeed to achieve any variance reduction at all . We then show how incorporating error detection into our quantization scheme , we can also obtain tight bounds on the bits required in expectation . • We show how to efficiently instantiate our lattice-based quantization framework in practice , with guarantees . In particular , we devise a variant of the scheme which ensures close-to-optimal communication-variance bounds even for the standard cubic lattice , and use it to obtain improvements relative to the best known previous methods for distributed mean estimation , both on synthetic and real-world tasks . 1.1 Problem Definitions and Discussion . MeanEstimation is defined as follows : we have n machines v , and each receives as input a vector xv ∈ Rd . 
We also assume that all machines receive a common value y, with the guarantee that ‖x_u − x_v‖ ≤ y for any machines u, v. Our goal is for all machines to output the same value EST ∈ R^d which is an unbiased estimator of the mean µ = (1/n) ∑_v x_v, i.e., E[EST] = µ, with variance as low as possible. Notice that the input specification is entirely deterministic; any randomness in the output arises only from the algorithm used. In the VarianceReduction variant, we again have a set of n machines, and now an unknown true vector ∇. Each machine v receives as input an independent unbiased estimator x_v of ∇ (i.e., E[x_v] = ∇) with variance E[‖x_v − ∇‖²] ≤ σ². Machines are assumed to have knowledge of σ. Our goal is for all machines to output the same value EST ∈ R^d which is an unbiased estimator of ∇, i.e., E[EST] = ∇, with low variance. Since the input is random, output randomness now stems from this input randomness as well as from any randomness in the algorithm. VarianceReduction is common, for instance, in the context of gradient-based optimization of machine learning models, where we assume that each machine v processes local samples to obtain a stochastic gradient g̃_v, which is an unbiased estimator of the true gradient ∇ with variance bound σ². If we directly averaged the local stochastic gradients g̃_v, we would obtain an unbiased estimator of the true gradient ∇ with variance bound σ²/n, which can lead to faster convergence. Input Variance Assumption. The parameter y replaces the usual MeanEstimation assumption of a known bound M on the norms of the input vectors. Note that, in the worst case, we can always set y = 2M and obtain the same asymptotic upper bounds as in e.g. Suresh et al. (2017); our results are therefore at least as good as previous approaches in all cases, but provide significant improvement when inputs are not centered around the origin.
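The σ²/n claim above is easy to verify numerically. The following sketch (a standard illustration, not from the paper; all names such as `estimate` and the chosen d, σ, n are ours) compares the squared ℓ2 error of a single machine's input against the error of the exact average of n inputs:

```python
# Sketch: averaging n independent unbiased estimates of a vector
# reduces the expected squared error by roughly a factor of n.
import random

def estimate(true_vec, sigma):
    """One machine's unbiased input: the true vector plus independent noise."""
    return [g + random.gauss(0.0, sigma) for g in true_vec]

def mean_of(vectors):
    d, n = len(vectors[0]), len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(d)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

random.seed(0)
true_grad = [1.0, -2.0, 0.5]
sigma, n, trials = 1.0, 16, 2000

single_err = avg_err = 0.0
for _ in range(trials):
    inputs = [estimate(true_grad, sigma) for _ in range(n)]
    single_err += sq_dist(inputs[0], true_grad)
    avg_err += sq_dist(mean_of(inputs), true_grad)

single_err /= trials   # ≈ d * sigma^2
avg_err /= trials      # ≈ d * sigma^2 / n
print(single_err, avg_err)
```

With d = 3 and σ = 1 the single-input error concentrates near 3, while the averaged error concentrates near 3/16, matching the σ²/n bound.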
The reason for this change is to allow stronger bounds in scenarios where we expect inputs to be closer to each other than to the origin. In particular, it allows our MeanEstimation problem to more effectively generalize VarianceReduction. The parameter y is a deterministic analogue of the parameter σ for VarianceReduction; both y and σ bound the distance of inputs from their mean, rather than from the origin. Accordingly, input variance σ² for a VarianceReduction instance corresponds (up to constant factors) to y² for a MeanEstimation instance. For consistency of terminology, we therefore refer to y² as the input variance of the instance (despite such inputs being deterministic). It is common in machine learning applications of VarianceReduction to assume that an estimate of the variance σ² is known (Alistarh et al., 2017; Gandikota et al., 2019). To study both problems in a common framework, we make the analogous assumption about MeanEstimation, and assume knowledge of the input variance y². Even if the relevant bounds y or σ are not known a priori, they can usually be estimated in practice. Relationship Between Problems. If one allows unrestricted communication, the natural solution to both problems is to average the inputs. This is an exact solution to MeanEstimation with variance 0, and is also an asymptotically optimal solution to VarianceReduction, with variance at most σ²/n.¹ However, doing so would require the exchange of infinite-precision real numbers. It is therefore common to instead communicate quantized values of bounded bit-length (Alistarh et al., 2017), which introduces additional variance caused by random choices within the quantization method. The resulting estimates will therefore have variance Var_quant for MeanEstimation, and σ²/n + Var_quant for VarianceReduction.
We will show a trade-off between bits of communication and output variance for both problems; in the case of VarianceReduction, though, there is an 'upper limit' to this trade-off, since we cannot go below Ω(σ²/n) total output variance. The other major difference between the two problems is that in MeanEstimation, distances between inputs are bounded by y with certainty, whereas in VarianceReduction they are instead bounded by O(σ) only in expectation. This causes extra complications for quantization, and introduces a gap between average and worst-case communication cost. Distributed Model. We aim to provide a widely applicable method for distributed mean estimation, and therefore we avoid relying on the specifics of particular distributed models. Instead, we assume that the basic communication structures we use (stars and binary trees) can be constructed without significant overhead. This setting is supported by machine learning applications, which have very high input dimension (i.e., d ≫ n), so the costs of synchronization or of constructing an overlay (which do not depend on d, and are generally poly-logarithmic in n) are heavily dominated by the communication costs incurred subsequently during mean estimation. These setup costs also need only be incurred once, even if mean estimation or variance reduction is performed many times (e.g., during distributed SGD). For these reasons, we do not include these model-specific setup costs in our stated complexities; any implementation should take them into separate consideration. For simplicity, we present our algorithms in a basic synchronous, fault-free message-passing model, in which machines can send arbitrary messages to any other machine, but they could naturally be extended to asynchronous and shared-memory models of communication.
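To make the quantization variance Var_quant concrete, here is a hedged sketch of the standard coordinatewise stochastic-rounding quantizer (this is the generic baseline the prior norm-based bounds apply to, not the paper's lattice scheme; the grid spacing `eps`, the `reference` argument, and all function names are our own illustration). Rounding each coordinate randomly to a neighbouring grid point keeps the estimate unbiased, and quantizing relative to a shared reference point near the inputs, rather than relative to the origin, is loosely the intuition behind centering the analysis on input variance:

```python
# Coordinatewise stochastic rounding to a grid of spacing eps is unbiased:
# E[stochastic_round(x, eps)] = x for every x.
import math, random

def stochastic_round(x, eps):
    """Round x to a multiple of eps; rounds up with probability (x - lo)/eps."""
    lo = math.floor(x / eps) * eps
    p_up = (x - lo) / eps
    return lo + eps if random.random() < p_up else lo

def quantize(vec, eps, reference):
    """Quantize vec - reference, then shift back (shared reference point)."""
    return [r + stochastic_round(x - r, eps)
            for x, r in zip(vec, reference)]

random.seed(1)
x = [10.3, -4.7, 0.2]
ref = [10.0, -5.0, 0.0]      # a shared point close to the inputs
eps = 0.5

# Empirical check of unbiasedness per coordinate.
trials = 20000
avg = [0.0] * 3
for _ in range(trials):
    q = quantize(x, eps, ref)
    for i in range(3):
        avg[i] += q[i] / trials
print(avg)  # each coordinate close to x
```

The added variance per coordinate is at most (eps/2)² regardless of how far x lies from the origin, provided the reference point is shared by all machines.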
Our aim will be to minimize the number of bits sent and received by any machine during the algorithm; we do not consider other measures such as round complexity. Vector Norms. When dealing with vectors in R^d, we will use names in bold, e.g., x, y. We will state most of our results so that they apply to any of the three most commonly used norms on R^d in applications: the ℓ1 norm ‖x‖₁ := ∑_{i=1}^d |x_i|, the ℓ2 norm ‖x‖₂ := √(∑_{i=1}^d x_i²), and the ℓ∞ norm ‖x‖∞ := max_{i=1,...,d} |x_i|. Throughout the paper we therefore use the general notation ‖·‖, which should be considered fixed as one of these norms, except for statements specific to particular norms. Definitions which depend on norms, such as variance Var[x] := E[‖x − E[x]‖²], are therefore also assumed to be under the appropriate norm. ¹For specific classes of input distribution, and for non-asymptotic concentration results, better estimators of ∇ are known; see e.g. Joly et al. (2017).
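The three norms fixed above can be stated compactly in code (a trivial reference sketch; function names are ours):

```python
# The three norms used throughout; "variance" in the paper is always
# E[ ||x - E[x]||^2 ] under whichever of these norms is in force.
import math

def l1(x):   return sum(abs(xi) for xi in x)            # sum of absolute values
def l2(x):   return math.sqrt(sum(xi * xi for xi in x)) # Euclidean norm
def linf(x): return max(abs(xi) for xi in x)            # max absolute coordinate

v = [3.0, -4.0, 0.0]
print(l1(v), l2(v), linf(v))  # 7.0 5.0 4.0
```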
This paper studies the problem of mean estimation of n vectors in R^d in a distributed setting. There are n machines. Each has one data point, a vector x_v. The goal is to compute the mean of x_v’s for v = 1, …, n with as few bits of communication as possible.
FedMes: Speeding Up Federated Learning with Multiple Edge Servers
1 INTRODUCTION . With the explosive growth in the number of smartphones, wearable devices and Internet of Things (IoT) sensors, a large portion of the data generated nowadays is collected outside the cloud, especially at distributed end-devices at the edge. Federated learning (McMahan et al., 2017; Konecny et al., 2016a;b; Bonawitz et al., 2019; Li et al., 2019a) is a recent paradigm for this setup, which enables training of a machine learning model in a distributed network while significantly resolving the privacy concerns of individual devices. However, training requires repeated downloading and uploading of models between the parameter server (PS) and the devices, presenting significant challenges in terms of 1) the communication bottleneck at the PS and 2) the non-IID (not independent and identically distributed) data characteristics across devices (Zhao et al., 2018; Sattler et al., 2019; Li et al., 2019b; Reisizadeh et al., 2019; Jeong et al., 2018). In federated learning, the PS can be located at the cloud or at the edge (e.g., small base stations). Most current studies on federated learning consider the former, with the assumption that millions of devices are within the coverage of the PS at the cloud; at every global round, the devices in the system communicate with the PS (located at the cloud) to download and upload models. However, an inherent limitation of this cloud-based system is the long distance between the devices and the cloud server, which causes significant propagation delay during the model downloading/uploading stages of federated learning (Mao et al., 2017; Nguyen et al., 2019). Specifically, it is reported in (Mao et al., 2017) that the supportable latency (for inference) of cloud-based systems is larger than 100 milliseconds, while edge-based systems have a supportable latency of less than tens of milliseconds.
This large delay between the cloud and the devices directly affects the training time of cloud-based federated learning systems. To support latency-sensitive applications (e.g., smart cars) or emergency events (e.g., disaster response by drones) with federated learning, utilization of an edge-based system is essential. An issue, however, is that although an edge-based federated learning system can considerably reduce the latency between the PS and the devices, the coverage of an edge server is generally limited in practical systems (e.g., wireless cellular networks); there is an insufficient number of devices within the coverage of a single edge server to train a global model with sufficient accuracy. Accordingly, the limited coverage of a single edge server could contain biased datasets and thus lead to a biased model after training. Thus, in practice, performing federated learning with the devices of a single edge server would result in significant performance degradation. Main contributions. To overcome these practical challenges, we propose FedMes, a novel federated learning algorithm tailored to environments with multiple edge servers (ESs). Our idea is to utilize the devices located in the overlapping areas between the coverage regions of ESs, which are typical in 5G and beyond systems with dense deployment of ESs. In the model-downloading stage, each ES sends the current model to the devices in its coverage area; in this process, the devices in the overlapped regions receive multiple models from different ESs. These devices take the average of the received models, and then update the model based on their local data. Each device then sends its updated model to the corresponding ES or ESs, and each ES aggregates the models it receives. A high-level description of FedMes is given in Fig. 1.
For example, suppose that device k is located in the non-overlapped region of ES i, while device l is in the overlapped region between ES i and ES j. In conventional federated learning systems, device l participates in the training process of only one of ES i or ES j; in FedMes, by contrast, device l can act as a bridge for sharing the trained models between both ESs. To be specific, the updated model of device k is averaged only at its associated ES i. In the next step, this averaged model is sent to the devices in its coverage area, including device l. After the local model updates at the devices, device l sends its updated model to both ES i and ES j. From this point of view, even when some training samples are in the coverage of only a specific ES, these data can still assist the training process of the other servers. Hence, the proposed scheme does not require costly communication with the central cloud server (located at the tier above the ESs) for model synchronization, significantly reducing the overall training time compared to cloud-based federated learning systems. Compared with a scheme that does not exploit the overlapping areas, FedMes can provide a significant performance gain, especially when the data distributions across the coverage areas of different servers are non-IID, e.g., when a specific server has a biased dataset within its coverage area. Especially in this non-IID setup, giving more weight to the devices located in the overlapping areas (in each aggregation step at the ESs) can further speed up training. From the service provider's point of view, FedMes does not require any backhaul traffic between the ESs and the cloud server, significantly reducing the communication resources required for federated learning.
Moreover, since the devices in the overlapping areas send their results to multiple ESs, our scheme can reduce the number of devices participating in each global round while achieving the desired performance. Extensive experimental results on various datasets show that FedMes provides a remarkable performance gain compared to 1) a scheme that requires communication with the central cloud server for model synchronization (i.e., cloud-based federated learning) and 2) a scheme that does not take the overlapping areas between servers into account. Related works. Thanks to the recent advent of edge computing, there has been increased interest in edge-facilitated federated learning systems (Tran et al., 2019; Wang et al., 2019; Lim et al., 2020; Abad et al., 2020; Liu et al., 2019). The authors of (Wang et al., 2019) focused on optimizing the federated learning framework under a given resource budget in wireless edge networks. The authors of (Tran et al., 2019) considered resource allocation to minimize energy consumption at the devices in wireless networks. However, a single-server setup is considered in (Tran et al., 2019) and (Wang et al., 2019), which differs fundamentally from our work leveraging multiple edge servers. Only a few prior works on federated learning (Abad et al., 2020; Liu et al., 2019) considered a setup with multiple edge servers. However, the schemes of (Abad et al., 2020; Liu et al., 2019) still require costly communication with the central cloud server for model synchronization, which can significantly slow down the overall training process. If the communication period with the cloud is short, frequent model synchronization between edge servers is possible but incurs a large communication time delay; infrequent model synchronization can lead to poor performance, especially with non-IID data across the servers.
FedMes overcomes these challenges by enabling fast federated learning without any help from the cloud server, i.e., using only the edge servers. It is shown in Section 4 that our scheme can outperform the hierarchical scheme of (Liu et al., 2019), which requires costly communication with the central cloud server.
2 PROBLEM SETUP . Federated learning. Let K be the number of devices in the system. Let n_k be the number of data samples in device k, with n = ∑_{k=1}^K n_k the total number of training samples. We denote the i-th sample in device k as x_{k,i}, for i ∈ {1, 2, ..., n_k}. Our goal is to solve the following optimization problem:

min_w F(w) = min_w ∑_{k=1}^K (n_k / n) F_k(w),   (1)

where F_k(w) is the local loss function over the data samples in device k, written as F_k(w) = (1/n_k) ∑_{i=1}^{n_k} ℓ(x_{k,i}; w). We now briefly describe the conventional cloud-based federated averaging (FedAvg) algorithm of (McMahan et al., 2017), a typical way to solve this problem. At step t, each device downloads the current model w(t) from the PS, which is generally located at the cloud covering all devices in the system. Then each device (say device k) sets w_k(t) = w(t) and performs E local updates according to

w_k(t+i+1) = w_k(t+i) − η_{t+i} ∇F_k(w_k(t+i), ξ_k(t+i)),   i = 0, 1, ..., E−1,   (2)

where η_t is the learning rate at step t and ξ_k(t) is a set of randomly selected data samples from device k at step t. Each device then sends its updated model to the PS, and the PS aggregates the models as w(t+E) = ∑_{k=1}^K (n_k / n) w_k(t+E). However, full device participation at each aggregation step is impossible in practice, and the PS often selects a set S_{t+E} ⊂ {1, 2, ..., K} containing the devices that transmit their results to the PS. Then, we have

w(t+E) = ∑_{k ∈ S_{t+E}} (n_k / n) w_k(t+E).   (3)

This overall process is repeated until the model achieves the desired accuracy or some stopping condition is met. According to the algorithm, the model of the k-th device at step t+1 is written as

w_k(t+1) = w_k(t) − η_t ∇F_k(w_k(t), ξ_k(t)),   if E ∤ (t+1),
w_k(t+1) = ∑_{q ∈ S_{t+1}} (n_q / n) [ w_q(t) − η_t ∇F_q(w_q(t), ξ_q(t)) ],   otherwise.   (4)

Problem formulation. In contrast with conventional cloud-based federated learning systems, which have a central cloud server covering all devices, in this paper we consider L ESs, each covering its own local area. We call the local coverage of each edge server a cell. Especially with the dense deployment of ESs in 5G and beyond networks, there generally exists more than one ES within the range of a given user with which it can reliably communicate. We call a region in which a device can reliably communicate with multiple ESs an overlapping cell area. Let C_i be the set of indices of users located in cell i ∈ {1, 2, ..., L}. Define U_i as the set of user indices in the non-overlapped region of cell i, which is a subset of C_i. We also define V_{i,j} as the set of user indices in the overlapping area between cell i and cell j (i ≠ j, with V_{i,j} = V_{j,i}), which is also a subset of C_i. The devices in V_{i,j} can communicate with both ES i and ES j during model download and upload. While we could similarly define regions overlapped by more than two ESs, for clarity of presentation we consider the case in which the coverage of at most two ESs overlaps. The coverage of cell i can then be written as

C_i = U_i ∪ ( ∪_{j ∈ [L] \ {i}} V_{i,j} )   (5)

for all i ∈ {1, 2, ..., L}. The overall coverage of the system can be written as C = {1, 2, ..., K} = ( ∪_{i=1}^L U_i ) ∪ ( ∪_{i=1}^L ∪_{j=i+1}^L V_{i,j} ). Note that each ES can communicate only with the devices in its covered area.
Our goal is to solve the problem in (1) in this setup, without sharing the learning models with the tier above the ESs (i.e., the cloud server) during training.
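As a concrete illustration of one FedMes round (our own hedged sketch, not the authors' implementation: we use two ESs, three devices, one local SGD step on a scalar quadratic loss per device, and uniform aggregation weights instead of the paper's n_k/n weighting):

```python
# One FedMes round with ES i, ES j, device k in cell i only, device m in
# cell j only, and device l in the overlap. Local loss for device v is
# F_v(w) = 0.5 * (w - c_v)^2, so one SGD step is w <- w - eta * (w - c_v).

def local_step(w, c, eta=0.5, steps=1):
    for _ in range(steps):
        w = w - eta * (w - c)   # gradient of 0.5*(w - c)^2 is (w - c)
    return w

c = {"k": 0.0, "l": 2.0, "m": 4.0}   # per-device optima (local data centres)
w_es = {"i": 1.0, "j": 3.0}          # current models at the two ESs

# Download: the overlap device l averages the models it receives.
w_dev = {"k": w_es["i"],
         "l": 0.5 * (w_es["i"] + w_es["j"]),
         "m": w_es["j"]}

# Local updates at every device.
w_dev = {v: local_step(w, c[v]) for v, w in w_dev.items()}

# Upload: l sends its model to both ESs; each ES averages what it received.
w_es["i"] = 0.5 * (w_dev["k"] + w_dev["l"])
w_es["j"] = 0.5 * (w_dev["m"] + w_dev["l"])
print(w_es)  # {'i': 1.25, 'j': 2.75}
```

Both ES models move toward the global optimum w* = 2 (the mean of the c_v) without any cloud-level synchronization; the overlap device l is the only channel mixing information between the two cells.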
In the previous federated learning literature, it is usually assumed that a single cloud server communicates with all edge nodes/clients. However, since each server has limited coverage in practice, the latency between the server and clients outside its coverage can be quite long. This paper focuses on reducing the communication cost in this practical setting. The authors propose to use multiple edge servers with overlapping coverage areas. The clients in the overlapping areas receive model parameters from multiple servers and return the averaged, locally updated model, which helps mix information between the different edge servers. Experiments on the MNIST, EMNIST, and CIFAR10 datasets validate the effectiveness of the proposed algorithm, FedMes.
This paper considers federated learning for edge devices with multiple wireless edge servers. The paper proposes FedMes to leverage devices in overlapping areas covered by multiple edge servers. In particular, in FedMes, if a device is in the coverage area of multiple edge servers, the device receives current models from all the edge servers covering it. Each device uses a (weighted) average of the models it receives as a starting point, and performs local updates (using SGD). A device broadcasts the updated model to multiple edge servers that cover the device. The key idea is that these devices in the overlapping coverage area act as ‘bridges’, and communication between edge servers is not required (until the final averaging step). The authors carry out experiments to evaluate FedMes, and compare against hierarchical federated learning of (Liu et al., 2019).
Fuzzy c-Means Clustering for Persistence Diagrams
1 INTRODUCTION . Persistence diagrams , a concise representation of the topology of a point cloud with strong theoretical guarantees , have emerged as a new tool in the field of data analysis ( Edelsbrunner & Harer , 2010 ) . Persistence diagrams have been successfully used to analyse problems ranging from financial crashes ( Gidea & Katz , 2018 ) to protein binding ( Kovacev-Nikolic et al. , 2014 ) , but the non-Hilbertian nature of the space of persistence diagrams means it is difficult to directly use persistence diagrams for machine learning . In order to better integrate diagrams into machine learning workflows , efforts have been made to map them into a more manageable form ; primarily through embeddings into finite feature vectors , functional summaries , or by defining a positive-definite kernel on diagram space . In all cases , this explicitly or implicitly embeds diagrams into a Hilbert space which deforms the metric structure , potentially losing important information . With the exception of Topological Autoencoders , techniques to integrate these persistence-based summaries as topological regularisers and loss functions currently require prior knowledge about the correct topology of the dataset , which is clearly not feasible in most scenarios . Against this background , we give an algorithm to perform Fuzzy c-Means ( FCM ) clustering ( Bezdek , 1980 ) directly on collections of persistence diagrams , giving an important unsupervised learning algorithm and enabling learning from persistence diagrams without deforming the metric structure . We perform the convergence analysis for our algorithm , giving the same guarantees as traditional FCM clustering : that every convergent subsequence of iterates tends to a local minimum or saddle point . We demonstrate the value of our fuzzy clustering algorithm by using it to cluster datasets that benefit from both the topological and fuzzy nature of our algorithm . 
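For reference, the classical Euclidean FCM updates (Bezdek, 1980) that the paper lifts to diagram space can be sketched as follows. This is a hedged illustration on scalar data with our own function names and parameters; in the paper's setting the membership update uses diagram-space distances and the weighted-mean centre update is replaced by a weighted Fréchet mean of persistence diagrams:

```python
# Classical fuzzy c-means on 1-D points: alternate between a membership
# update (each point gets a soft assignment over clusters) and a centre
# update (membership-weighted mean), with fuzziness exponent m > 1.
def fcm(points, centres, m=2.0, iters=20):
    for _ in range(iters):
        # Membership update: u[j][i] in (0,1); each row sums to 1.
        u = []
        for x in points:
            d = [abs(x - c) + 1e-12 for c in centres]  # avoid zero division
            row = [1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                             for k in range(len(centres)))
                   for i in range(len(centres))]
            u.append(row)
        # Centre update: membership-weighted mean (a Frechet mean in R).
        centres = [sum(u[j][i] ** m * points[j] for j in range(len(points))) /
                   sum(u[j][i] ** m for j in range(len(points)))
                   for i in range(len(centres))]
    return centres, u

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centres, u = fcm(pts, centres=[1.0, 4.0])
print(centres)  # centres converge near the two cluster means 0.1 and 5.1
```

The soft memberships u are what enable the top-k rankings discussed below; a hard k-means assignment would discard this information.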
We apply our technique in two settings : lattice structures in materials science and the decision boundaries of CNNs . A key property for machine learning in materials science has been identified as “ invariance to the basis symmetries of physics [ ... ] rotation , reflection , translation ” ( Schmidt et al. , 2019 ) . Geometric clustering algorithms do not have this invariance , but persistence diagrams do , making them ideally suited for this application ; we can cluster transformed lattice structure datasets where geometric equivalents fail . In addition to this , our probabilistic membership values allow us to rank the top-k most likely lattices assigned to a cluster . This is particularly important in materials science , as further investigation requires expensive laboratory time and expertise . Our second application is inspired by Ramamurthy et al . ( 2019 ) , who show that models perform better on tasks if they have topologically similar decision boundaries . We use our algorithm to cluster models and tasks by the persistence diagrams of their decision boundaries . Not only is our algorithm able to successfully cluster models to the correct task , based just on the topology of their decision boundaries , but we show that higher membership values imply better performance on unseen tasks . 1.1 RELATED WORK . Means of persistence diagrams . Our work relies on the existence of statistics in the space of persistence diagrams . Mileyko et al . ( 2011 ) first showed that means and expectations are well-defined in the space of persistence diagrams . Specifically , they showed that the Fréchet mean , an extension of means onto metric spaces , is well-defined under weak assumptions on the space of persistence diagrams . Turner et al . ( 2012 ) then developed an algorithm to compute the Fréchet mean . We adapt the algorithm by Turner et al . to the weighted case , but the combinatorial nature of their algorithm makes it computationally intense . Lacombe et al . 
( 2018 ) framed the computation of means and barycentres in the space of persistence diagrams as an optimal transport problem , allowing them to use the Sinkhorn algorithm ( Cuturi & Doucet , 2014 ) for fast computation of approximate solutions . The vectorisation of the diagram required by the algorithm by Lacombe et al . makes it unsuitable for integration into our work , as we remain in the space of persistence diagrams . Techniques to speed up the matching problem fundamental to our computation have also been proposed by Vidal et al . ( 2020 ) and Kerber et al . ( 2017 ) . Learning with persistence-based summaries . Integrating diagrams into machine learning workflows remained challenging even with well-defined means , as the space is non-Hilbertian ( Turner & Spreemann , 2019 ) . As such , efforts have been made to map diagrams into a Hilbert space ; primarily either by embedding into finite feature vectors ( Kališnik , 2018 ; Fabio & Ferri , 2015 ; Chepushtanova et al. , 2015 ) or functional summaries ( Bubenik , 2015 ; Rieck et al. , 2019 ) , or by defining a positive-definite kernel on diagram space ( Reininghaus et al. , 2015 ; Carrière et al. , 2017 ; Le & Yamada , 2018 ) . These vectorisations have been integrated into deep learning either by learning parameters for the embedding ( Hofer et al. , 2017 ; Carrière et al. , 2020 ; Kim et al. , 2020 ; Zhao & Wang , 2019 ; Zieliński et al. , 2019 ) , or as part of a topological loss or regulariser ( Chen et al. , 2018 ; Gabrielsson et al. , 2020 ; Clough et al. , 2020 ; Moor et al. , 2019 ) . However , the embeddings used in these techniques deform the metric structure of persistence diagram space ( Bubenik & Wagner , 2019 ; Wagner , 2019 ; Carrière & Bauer , 2019 ) , potentially leading to the loss of important information . Furthermore , these techniques generally require prior knowledge of a ‘ correct ’ target topology which cannot plausibly be known in most scenarios . 
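The matching problem referenced here can be made concrete with a brute-force sketch of the 2-Wasserstein distance between two very small diagrams, using the standard augmentation in which each diagram is extended with the diagonal projections of the other's points. This is an illustration only (exponential in diagram size), not the authors' implementation and not the Sinkhorn approach of Lacombe et al.

```python
from itertools import permutations
from math import hypot, sqrt

def wasserstein2(dgm_a, dgm_b):
    """Exact 2-Wasserstein distance between two tiny persistence diagrams.

    Each diagram is a list of (birth, death) points. Off-diagonal points
    may be matched to their orthogonal projection onto the diagonal, so
    each diagram is augmented with the projections of the other's points.
    Brute force over permutations: only suitable for very small diagrams.
    """
    proj = lambda p: ((p[0] + p[1]) / 2, (p[0] + p[1]) / 2)
    a = [(p, False) for p in dgm_a] + [(proj(q), True) for q in dgm_b]
    b = [(q, False) for q in dgm_b] + [(proj(p), True) for p in dgm_a]

    def cost(u, v):
        (pu, diag_u), (pv, diag_v) = u, v
        if diag_u and diag_v:   # diagonal matched to diagonal is free
            return 0.0
        return hypot(pu[0] - pv[0], pu[1] - pv[1]) ** 2

    best = min(sum(cost(u, v) for u, v in zip(a, perm))
               for perm in permutations(b))
    return sqrt(best)
```

For example, the distance between a diagram containing only the point (0, 1) and the empty diagram is the distance from (0, 1) to the diagonal.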
In comparison , our algorithm acts in the space of persistence diagrams so it does not deform the structure of diagram space via embeddings , and is entirely unsupervised , requiring no prior knowledge about the topology . Hard clustering . Maroulas et al . ( 2017 ) gave an algorithm for hard clustering persistence diagrams based on the algorithm by Turner et al . Lacombe et al . ( 2018 ) gave an alternate implementation of hard clustering based on their algorithm for barycentre computation , providing a computational speed-up over the previous work by Maroulas et al . The primary advantages of our work over previous work on hard clustering are as follows . ( i ) The probabilistic membership values allow us to rank datasets in the cluster , enabling top-k candidate selection in settings where verifying correctness is expensive . The value provided by this fuzzy information is demonstrated in the experiments . ( ii ) The fuzzy membership values provide information about proximity to all clusters , whereas hard labelling loses most of that information . In our experiments we demonstrate that this additional information can be utilised in practice . ( iii ) The weighted cost function makes the convergence analysis ( which we provide ) entirely nontrivial in comparison to the non-fuzzy case . We consider this convergence analysis a primary contribution of our paper . ( iv ) Fuzzy membership values have been shown to be more robust to noise than discrete labels ( Klawonn , 2004 ) . ( v ) Unlike hard clustering , fuzzy clustering is analytically differentiable , allowing integration of the fuzzy clustering step into deep learning methods ( Wilder et al. , 2019 ) . Geometric equivalents . The most similar unsupervised learning technique to our algorithm is Wasserstein Barycentre Clustering ( WBC ) . It clusters datasets of point clouds by the Wasserstein distance between the point clouds , rather than the Wasserstein distance between their persistence diagrams . 
We compare our algorithm experimentally to WBC using ADMM ( Ye & Li , 2014 ) , Bregman ADMM ( Ye et al. , 2017 ) , Subgradient Descent ( Cuturi & Doucet , 2014 ) , Iterative Bregman Projection ( Benamou et al. , 2015 ) , and full linear programming ( Li & Wang , 2008 ) . Each of these algorithms computes or approximates the Wasserstein barycentre in different ways . Theoretically , fuzzy discrete distribution clustering ( d. A. T. de Carvalho et al. , 2015 ) is similar to our algorithm , but the addition of the diagonal in the persistence diagram makes our work distinct . 1.2 OUR CONTRIBUTIONS . 1 . Our main contribution is an algorithm for Fuzzy c-Means clustering of persistence diagrams , along with the convergence analysis . Given a collection of persistence diagrams D1 , . . . , Dn , we alternately calculate cluster centres M1 , . . . , Mc and membership values rjk ∈ [ 0 , 1 ] which denote the degree to which diagram Dj is associated with cluster Mk . We prove Theorem 1 , showing that every convergent subsequence of these alternating update steps tends to a local minimum or saddle point of the cost function . This is the same convergence guarantee provided by traditional FCM clustering ( Bezdek et al. , 1987 ) , but requires additional work as the space of persistence diagrams with the Wasserstein distance has far weaker theoretical properties than Euclidean space . 2 . Updating the cluster centres requires computing the weighted Fréchet mean . We extend the algorithm given by Turner et al . ( 2012 ) to the weighted case , justifying our addition of weights by extending their proof to show that the updated algorithm converges . 3 . We implement our algorithm in Python , available in the supplementary materials . 
It works with persistence diagrams from commonly used open-source libraries for Topological Data Analysis ( TDA ) ,1 so is available for easy integration into current workflows , offering a powerful unsupervised learning algorithm to data science practitioners using TDA . 4 . We demonstrate the application of our algorithm to settings where ( i ) the properties of persistence diagrams make clustering them the natural choice over geometric equivalents and ( ii ) the probabilistic membership values can be used to rank candidates for top-k selection . Our algorithm classifies transformed lattice structures from materials science where geometric equivalents fail , whilst giving probabilistic rankings to help prioritise expensive further investigation . We also cluster the persistence diagrams of decision boundaries and labelled datasets , showing that our fuzzy clustering captures information about model performance on unseen tasks . 2 TOPOLOGICAL PRELIMINARIES . Topological Data Analysis emerged from the study of algebraic topology , providing a toolkit to fully describe the topology of a dataset . We offer a quick summary below ; for more comprehensive details see Edelsbrunner & Harer ( 2010 ) . A set of points in R^d is indicative of the shape of the distribution they are sampled from . By connecting points that are pairwise within ε > 0 distance of each other , we can create an approximation of the distribution called the Vietoris-Rips complex ( Vietoris , 1927 ) . Specifically , we add the convex hull of any collection of points that are pairwise at most ε apart to the ε-Vietoris-Rips complex . However , choosing an ε remains problematic ; too low a value and key points can remain disconnected , too high a value and the points become fully connected . To overcome this we use persistence : we consider the approximation over all values of ε simultaneously , and study how the topology of that approximation evolves as ε grows large . 
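The construction of the ε-Vietoris-Rips complex just described can be sketched directly: enumerate every subset of points that is pairwise at most ε apart. A minimal, illustrative version (names are ours, not from the paper's implementation):

```python
from itertools import combinations
from math import dist

def rips_complex(points, eps, max_dim=2):
    """All simplices of the eps-Vietoris-Rips complex: every subset of
    (max_dim + 1 or fewer) points that is pairwise at most eps apart."""
    n = len(points)
    close = lambda i, j: dist(points[i], points[j]) <= eps
    simplices = []
    for k in range(1, max_dim + 2):  # k vertices span a (k-1)-simplex
        for idx in combinations(range(n), k):
            if all(close(i, j) for i, j in combinations(idx, 2)):
                simplices.append(idx)
    return simplices
```

On a right triangle with legs of length 1, the filled 2-simplex appears only once ε reaches the hypotenuse length of about 1.414, illustrating how the complex grows with ε.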
We call the collection of complexes for all ε a filtration . For each ε , we compute the p-homology group . This tells us the topology of the ε-Vietoris-Rips complex : the 0-homology counts the number of connected components , the 1-homology counts the number of holes , the 2-homology counts the number of voids , etc . ( Edelsbrunner et al. , 2000 ) . 1Dionysus and Ripser ( Bauer , 2019 ) . The p-persistent homology ( p-PH ) group is created by summing the p-homology groups over all ε . This results in a p-PH group that summarises information about the topology of the dataset at all granularities . If a topological feature , such as a connected component or hole , persists throughout a large range of granularities , then it is more likely to be a feature of the distribution . If it only persists for a short amount of time , then it is more likely to be noise ( Cohen-Steiner et al. , 2007 ) . We can stably map a p-PH group into a multiset in the extended plane called a persistence diagram ( Chazal et al. , 2012 ) . Each topological feature has a birth and death : a feature is born when it enters the complex , and dies when the complex grows enough to destroy it . For example , in Figure 1 ( a ) , a feature is born at ε = 1 when four lines form a hole . This feature dies at ε = 1.42 when the hole is filled in . This is shown in the persistence diagram in Figure 1 ( b ) as a point at ( 1 , 1.42 ) in 1-PH . By computing the birth/death points for each topological feature in the filtration , we get a complete picture of the topology of the point cloud at all granularities ( Zomorodian & Carlsson , 2005 ) . The persistence diagram is the collection of birth/death points , along with the diagonal ∆ = { ( a , a ) : a ∈ R } with infinite multiplicity , added in order to make the space of persistence diagrams complete ( Mileyko et al. , 2011 ) .
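For 0-dimensional persistence specifically, the births and deaths can be computed without the full persistent homology machinery: every connected component is born at ε = 0 and one dies each time a minimum-spanning-tree edge merges two components. A self-contained Kruskal-style sketch with union-find (an illustration, not the paper's pipeline):

```python
from itertools import combinations
from math import dist

def persistence_0d(points):
    """0-dimensional persistence of the Vietoris-Rips filtration.

    Components are all born at eps = 0; one dies each time an edge of the
    minimum spanning tree merges two components. Returns the finite
    (birth, death) pairs; one component persists forever and is omitted.
    """
    parent = list(range(len(points)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    dgm = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:        # this edge kills one component at scale w
            parent[ri] = rj
            dgm.append((0.0, w))
    return dgm
```

For three collinear points at 0, 1 and 10, the two finite 0-PH features die at scales 1 and 9, matching the single-linkage merge distances.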
The authors propose a novel algorithm for the fuzzy clustering of persistence diagrams. To determine cluster centroids, the weighted Fréchet mean under the Wasserstein-2 distance is computed, minimizing the weighted sum of squared distances between a potential cluster center and all PDs considered for clustering. The authors prove convergence of the clustering algorithm and conduct experiments on 1) synthetic data, 2) lattice structures, and 3) decision boundaries of neural networks. In the last experiment it was shown that models whose PDs cluster close to the PDs of a given task lead to higher classification performance than random classifiers, demonstrating the merit of PDs as a useful tool for model selection.
SP:eab7f68e9d6170869645eb7ca01ee340cf97d7a0
The paper proposes a new clustering algorithm for persistence diagrams, based on fuzzy c-means clustering. The motivation for fuzzy clustering is that it allows each datum (persistence diagram) to have weighted (soft) membership in different clusters. The partial membership value for a cluster is given by the inverse of the distance to that cluster center, normalised over the distances to all cluster centers. Empirical results on synthetic and real data show that clustering of persistence diagrams outperforms methods that depend on geometry. A convergence theorem is provided, based on the previous fuzzy c-means convergence proof.
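The membership rule paraphrased here can be written in the standard fuzzy c-means form, with fuzzifier m (commonly m = 2) and writing d_W for the 2-Wasserstein distance; this notation is ours and is not necessarily the paper's exact formulation:

```latex
r_{jk} \;=\; \left( \sum_{l=1}^{c}
    \left( \frac{d_{W}(D_j, M_k)}{d_{W}(D_j, M_l)} \right)^{\frac{2}{m-1}}
\right)^{-1},
\qquad \sum_{k=1}^{c} r_{jk} = 1 .
```

With m = 2 this reduces to the inverse squared distance to each centre, normalised over all centres, which is consistent with the ratio described in the review.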
Deep Coherent Exploration For Continuous Control
1 INTRODUCTION . The balance of exploration and exploitation ( Kearns & Singh , 2002 ; Jaksch et al. , 2010 ) is a longstanding challenge in reinforcement learning ( RL ) . With insufficient exploration , states and actions with high rewards can be missed , resulting in policies prematurely converging to bad local optima . In contrast , with too much exploration , agents could waste their resources trying suboptimal states and actions , without leveraging their experiences efficiently . To learn successful strategies , this trade-off between exploration and exploitation must be balanced well , and this is known as the exploration vs. exploitation dilemma . At a high level , exploration can be divided into directed strategies and undirected strategies ( Thrun , 1992 ; Plappert et al. , 2018 ) . While directed strategies aim to extract useful information from existing experiences for better exploration , undirected strategies rely on injecting randomness into the agent ’ s decision-making . Over the years , many sophisticated directed exploration strategies have been proposed ( Tang et al. , 2016 ; Ostrovski et al. , 2017 ; Houthooft et al. , 2016 ; Pathak et al. , 2017 ) . However , since these strategies still require lower-level exploration to collect the experiences , and are often complicated or computationally intensive , undirected exploration strategies are still commonly used in RL literature in practice , where some well-known examples are ε-greedy ( Sutton , 1995 ) for discrete action spaces and additive Gaussian noise for continuous action spaces ( Williams , 1992 ) . Such strategies explore by randomly perturbing agents ’ actions at different steps independently and hence are referred to as performing step-based exploration in action space ( Deisenroth et al. , 2013 ) . As an alternative to those exploration strategies in action space , exploration by perturbing the weights of linear policies has been proposed ( Rückstieß et al. , 2010 ; Sehnke et al 
, 2010 ; Kober & Peters , 2008 ) . Since these strategies in parameter space naturally explore conditioned on the states and are usually trajectory-based ( only perturb the weights at the beginning of each trajectory ) ( Deisenroth et al. , 2013 ) , they have the advantages of being more consistent , structured , and global ( Deisenroth et al. , 2013 ) . Later , van Hoof et al . ( 2017 ) proposed a generalized exploration ( GE ) scheme , bridging the gap between step-based and trajectory-based exploration in parameter space . With the advance of deep RL , NoisyNet ( Fortunato et al. , 2018 ) and Parameter Space Noise for Exploration ( PSNE ) ( Plappert et al. , 2018 ) were introduced , extending parameter-space exploration strategies for policies using deep neural networks . Although GE , NoisyNet , and PSNE improved over the vanilla exploration strategies in parameter space and were shown leading to more global and consistent exploration , they still suffer from several limitations . Given this , we propose a new exploration scheme with the following characteristics . 1 . Generalizing Step-based and Trajectory-based Exploration ( van Hoof et al. , 2017 ) Since both NoisyNet and PSNE are trajectory-based exploration strategies , they are considered relatively inefficient and bring insufficient stochasticity ( Deisenroth et al. , 2013 ) . Following van Hoof et al . ( 2017 ) , our method improves by interpolating between step-based and trajectory-based exploration in parameter space , where a more balanced trade-off between stability and stochasticity can be achieved . 2 . Recursive Analytical Integration of Latent Exploring Policies NoisyNet and PSNE address the uncertainty from sampling exploring policies using Monte Carlo integration , while GE uses analytical integration on full trajectories , which scales poorly in the number of time steps . 
In contrast , we apply analytical and recurrent integration after each step , which leads to low-variance and scalable updates . 3 . Perturbing Last Layers of Policy Networks Both NoisyNet and PSNE perturb all layers of the policy network . However , in general , only the uncertainty in parameters of the last ( linear ) layer can be integrated analytically . Furthermore , it is not clear that deep neural networks can be perturbed in meaningful ways for exploration ( Plappert et al. , 2018 ) . We thus propose and evaluate an architecture where perturbation is only applied on the parameters of the last layer . These characteristics define our contribution , which we will refer to as Deep Coherent Exploration . We evaluate the coherent versions of A2C ( Mnih et al. , 2016 ) , PPO ( Schulman et al. , 2017 ) , and SAC ( Haarnoja et al. , 2018 ) , where the experiments on OpenAI MuJoCo ( Todorov et al. , 2012 ; Brockman et al. , 2016 ) tasks show that Deep Coherent Exploration outperforms other exploration strategies in terms of both learning speed and stability . 2 RELATED WORK . As discussed , exploration can broadly be classified into directed and undirected strategies ( Thrun , 1992 ; Plappert et al. , 2018 ) , with undirected strategies being commonly used in practice because of their simplicity . Well-known methods such as ε-greedy ( Sutton , 1995 ) or additive Gaussian noise ( Williams , 1992 ) randomly perturb the action at each time step independently . These high-frequency perturbations , however , can result in poor coverage of the state-action space due to random-walk behavior ( Rückstieß et al. , 2010 ; Deisenroth et al. , 2013 ) , washing-out of exploration by the environment dynamics ( Kober & Peters , 2008 ; Rückstieß et al. , 2010 ; Deisenroth et al. , 2013 ) , and potential damage to mechanical systems ( Koryakovskiy et al. , 2017 ) . 
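The random-walk behaviour of independent per-step noise noted above is often mitigated with temporally correlated noise, such as a discretised Ornstein-Uhlenbeck process. A minimal sketch (parameter values are illustrative defaults, not taken from any of the cited papers):

```python
import random

def ou_noise(steps, theta=0.15, sigma=0.2, mu=0.0, dt=1.0, seed=0):
    """Discretised Ornstein-Uhlenbeck process: temporally correlated noise.

    x_{t+1} = x_t + theta * (mu - x_t) * dt + sigma * sqrt(dt) * N(0, 1).
    Successive samples are correlated, unlike i.i.d. Gaussian noise, so the
    resulting exploration perturbations are smoother and more coherent.
    """
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(steps):
        x = x + theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        out.append(x)
    return out
```

With sigma set to 0 the process deterministically decays toward mu, making the mean-reverting drift term easy to inspect in isolation.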
One alternative is to instead perturb the policy in parameter space , with the perturbation held constant for the duration of a trajectory . Rückstieß et al . ( 2010 ) and Sehnke et al . ( 2010 ) showed that such parameter-space methods can bring improved exploration behavior because of reduced variance and faster convergence , when combined with REINFORCE ( Williams , 1992 ) or Natural Actor-Critic ( Peters et al. , 2005 ) . Another alternative to independent action-space perturbation is to correlate the noise applied at subsequent actions ( Morimoto & Doya , 2000 ; Wawrzynski , 2015 ; Lillicrap et al. , 2016 ) , for example by generating perturbations from an Ornstein-Uhlenbeck ( OU ) process ( Uhlenbeck & Ornstein , 1930 ) . Later , van Hoof et al . ( 2017 ) used the same stochastic process but in the parameter space of the policy . This approach uses a temporally coherent exploring policy , which unifies step-based and trajectory-based exploration . Moreover , the authors showed that , with linear policies , a more delicate balance between these two extreme strategies could yield better performance . However , this approach was derived in a batch-mode setting and requires storing the full trajectory history and inverting a matrix that grows with the number of time steps . Thus , it does not scale well to long trajectories or complex models . Although these methods pioneered the research on exploration in parameter space , their applicability is limited . More precisely , these methods were only evaluated with extremely shallow ( often linear ) policies and relatively simple tasks with low-dimensional state and action spaces . Given this , NoisyNet ( Fortunato et al. , 2018 ) , PSNE ( Plappert et al. , 2018 ) , and Stochastic A3C ( SA3C ) ( Shang et al. , 2019 ) were proposed , introducing more general and scalable methods for deep RL algorithms . 
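The step-based/trajectory-based interpolation in parameter space can be sketched as a stationary AR(1) (discretized OU) process on a perturbation vector. This is an illustrative toy: the coefficient β and the framing of the perturbation as an additive last-layer noise are assumptions, not the authors' exact update.

```python
import numpy as np

def coherent_perturbation(T, dim, beta, seed=0):
    """AR(1) noise with stationary marginal N(0, I):
        z_t = beta * z_{t-1} + sqrt(1 - beta**2) * eps_t,  eps_t ~ N(0, I).
    beta = 0 gives i.i.d. (step-based) noise; beta = 1 keeps the same
    perturbation for the whole trajectory (trajectory-based); intermediate
    values give temporally coherent exploration."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(dim)
    traj = [z.copy()]
    for _ in range(T - 1):
        z = beta * z + np.sqrt(1.0 - beta ** 2) * rng.standard_normal(dim)
        traj.append(z.copy())
    return np.array(traj)
```

Such a z_t could, for example, be added to the last-layer weights of a policy network before computing each action.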
All three of these methods can be seen as learning a distribution over policies for trajectory-based exploration in parameter space . These exploring policies are sampled by perturbing the weights across all layers of a deep neural network , with the uncertainty from sampling being addressed by Monte Carlo integration . Whereas NoisyNet learns the magnitude of the noise for each parameter , PSNE heuristically adapts a single magnitude for all parameters . While showing good performance in practice ( Fortunato et al. , 2018 ; Plappert et al. , 2018 ) , these methods suffer from two potential limitations . Firstly , trajectory-based strategies can be inefficient , as only one strategy can be evaluated over a potentially long trajectory ( Deisenroth et al. , 2013 ) , which could result in a failure to escape local optima . Secondly , Monte Carlo integration results in high-variance gradient estimates , which could lead to oscillating updates . 3 BACKGROUND . This section provides background on reinforcement learning and related deep RL algorithms . 3.1 REINFORCEMENT LEARNING . Reinforcement learning is a sub-field of machine learning that studies how an agent learns strategies with high returns through trial and error by interacting with an environment . This interaction between an agent and an environment is described using Markov Decision Processes ( MDPs ) . An MDP is a tuple ( S , A , r , P , γ ) , where S is the state space , A is the action space , r : S × A × S → R is the reward function with r_t = r ( s_t , a_t , s_{t+1} ) , P : S × A × S → R+ is the transition probability function , and γ is a discount factor indicating the preference for short-term rewards . In RL with a continuous action space , an agent aims to learn a parametrized ( e.g . Gaussian ) policy π_θ ( a|s ) : S × A → R+ , with parameters θ , that maximizes the expected return over trajectories : J ( θ ) = E_{τ∼p ( τ|π_θ ) } [ R ( τ ) ] , ( 1 ) where τ = ( s_0 , a_0 , ... 
, a_{T−1} , s_T ) is a trajectory and R ( τ ) = ∑_{t=0}^{T} γ^t r_t is the discounted return . 3.2 DEEP REINFORCEMENT LEARNING ALGORITHMS . Deep reinforcement learning combines deep learning and reinforcement learning , where policies and value functions are represented by deep neural networks for more sophisticated and powerful function approximation . In our experiments , we consider the following three deep RL algorithms . Advantage Actor-Critic ( A2C ) Closely related to REINFORCE ( Williams , 1992 ) , A2C is an on-policy algorithm proposed as the synchronous version of the original Asynchronous Advantage Actor-Critic ( A3C ) ( Mnih et al. , 2016 ) . The gradient of A2C can be written as : ∇_θ J ( θ ) = E_{τ∼p ( τ|θ ) } [ ∑_{t=0}^{T−1} ∇_θ log π_θ ( a_t|s_t ) A^{π_θ} ( s_t , a_t ) ] , ( 2 ) where A^{π_θ} ( s_t , a_t ) is the estimated advantage following policy π_θ . Proximal Policy Optimization ( PPO ) PPO is an on-policy algorithm developed to determine the largest update step while still keeping the updated policy close to the old policy in terms of Kullback–Leibler ( KL ) divergence . Instead of using a second-order method as in Trust Region Policy Optimization ( TRPO ) ( Schulman et al. , 2015 ) , PPO applies a first-order method and combines several tricks to reduce the complexity . We consider the primary variant , PPO-Clip , with the following surrogate objective : L^{CLIP}_{θ_k} ( θ ) = E_{τ∼p ( τ|θ_k ) } [ ∑_{t=0}^{T−1} min ( r_t ( θ ) A^{π_{θ_k}}_t , clip ( r_t ( θ ) , 1−ε , 1+ε ) A^{π_{θ_k}}_t ) ] , ( 3 ) where r_t ( θ ) = π_θ ( a_t|s_t ) / π_{θ_k} ( a_t|s_t ) and ε is a small threshold that approximately restricts the distance between the new policy and the old policy . In practice , to prevent the new policy from changing too fast , the KL divergence from the new policy to the old policy , approximated on a sampled batch , is often used as a further constraint . Soft Actor-Critic ( SAC ) As an entropy-regularized ( Ziebart et al. , 2008 ) off-policy actor-critic method ( Lillicrap et al. , 2016 ; Fujimoto et al. 
, 2018 ) with a stochastic policy , SAC ( Haarnoja et al. , 2018 ) learns the optimal entropy-regularized Q-function through ‘ soft ’ Bellman back-ups with off-policy data : Q^π ( s , a ) = E_{s′∼p ( s′|s , a ) , ã′∼π ( ·|s′ ) } [ r + γ ( Q^π ( s′ , ã′ ) + α H ( π ( ·|s′ ) ) ) ] , ( 4 ) where H is the entropy and α is the temperature parameter . The policy is then learned by maximizing the expected maximum-entropy V-function via the reparameterization trick ( Kingma et al. , 2015 ) .
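The quantities in Eqs. (1)–(4) can be sketched numerically. This is a hedged, minimal illustration (scalar, single-sample versions; not the authors' implementation):

```python
import numpy as np

def discounted_return(rewards, gamma):
    """R(tau) = sum_t gamma^t * r_t (Eq. 1), computed backwards."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

def ppo_clip_objective(logp_new, logp_old, adv, eps=0.2):
    """Clipped surrogate of Eq. 3: mean over samples of
    min(r * A, clip(r, 1-eps, 1+eps) * A), r = exp(logp_new - logp_old)."""
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    adv = np.asarray(adv)
    return float(np.mean(np.minimum(ratio * adv,
                                    np.clip(ratio, 1 - eps, 1 + eps) * adv)))

def soft_q_target(r, gamma, q_next, logp_next, alpha):
    """Single-sample soft Bellman target of Eq. 4:
    r + gamma * (Q(s', a') - alpha * log pi(a'|s')), where
    -alpha * log pi(a'|s') is the sampled entropy bonus alpha * H."""
    return r + gamma * (q_next - alpha * logp_next)
```

For example, `discounted_return([1.0, 1.0, 1.0], 0.5)` gives 1 + 0.5 + 0.25 = 1.75.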
This paper proposes Deep Coherent Exploration, which unifies step-based and trajectory-based exploration for continuous control. A prior work bridges the gap between the two exploration methods for linear policies, and this paper generalizes it to various deep RL methods: on-policy (A2C, PPO) and off-policy (SAC). Finally, Deep Coherent Exploration enhances the performance of the baseline algorithms and outperforms prior works (NoisyNet, PSNE) on MuJoCo tasks.
SP:f410dbc73c4d044a0f3b113ca08f0a6e9e40b07a
Deep Coherent Exploration For Continuous Control
1 INTRODUCTION . The balance of exploration and exploitation ( Kearns & Singh , 2002 ; Jaksch et al. , 2010 ) is a long-standing challenge in reinforcement learning ( RL ) . With insufficient exploration , states and actions with high rewards can be missed , resulting in policies prematurely converging to bad local optima . In contrast , with too much exploration , agents could waste their resources trying suboptimal states and actions , without leveraging their experiences efficiently . To learn successful strategies , this trade-off between exploration and exploitation must be balanced well , and this is known as the exploration vs. exploitation dilemma . At a high level , exploration can be divided into directed strategies and undirected strategies ( Thrun , 1992 ; Plappert et al. , 2018 ) . While directed strategies aim to extract useful information from existing experiences for better exploration , undirected strategies rely on injecting randomness into the agent 's decision-making . Over the years , many sophisticated directed exploration strategies have been proposed ( Tang et al. , 2016 ; Ostrovski et al. , 2017 ; Houthooft et al. , 2016 ; Pathak et al. , 2017 ) . However , since these strategies still require lower-level exploration to collect the experiences , and are often complicated or computationally intensive , undirected exploration strategies are still commonly used in practice ; well-known examples are ε-greedy ( Sutton , 1995 ) for discrete action spaces and additive Gaussian noise ( Williams , 1992 ) for continuous action spaces . Such strategies explore by randomly perturbing agents ' actions at different steps independently and hence are referred to as performing step-based exploration in action space ( Deisenroth et al. , 2013 ) . As an alternative to those exploration strategies in action space , exploration by perturbing the weights of linear policies has been proposed ( Rückstieß et al. , 2010 ; Sehnke et al. 
, 2010 ; Kober & Peters , 2008 ) . 
This paper focuses on undirected exploration strategies in reinforcement learning. Following prior work, it proposes an exploration method unifying step-based and trajectory-based exploration. The authors propose to perturb only the last (linear) layer of the policy network for exploration, instead of perturbing all layers. Also, the authors use recursive analytical integration for policy updates. Experiments show that the proposed exploration strategy mostly helps A2C, PPO, and SAC in three MuJoCo environments.
On the Universal Approximability and Complexity Bounds of Deep Learning in Hybrid Quantum-Classical Computing
With the continuously increasing number of quantum bits in quantum computers , there is growing interest in exploring applications that can harness their power . Recently , several attempts were made to implement neural networks , known to be computationally intensive , in a hybrid quantum-classical computing scheme . While encouraging results have been shown , two fundamental questions need to be answered : ( 1 ) whether neural networks in hybrid quantum-classical computing can leverage quantum power and meanwhile approximate any function within a given error bound , i.e. , universal approximability ; ( 2 ) how these neural networks compare with ones on a classical computer in terms of representation power . This work sheds light on these two questions from a theoretical perspective . 1 INTRODUCTION . Quantum computing has been rapidly evolving ( e.g. , IBM ( 2020 ) recently announced plans to debut a quantum computer with 1,121 quantum bits ( qubits ) in 2023 ) , but the development of quantum applications lags far behind ; in particular , it is still unclear what applications can take quantum advantage , and how . Deep learning , one of the most prevalent applications , is well known to be computation-intensive , and therefore its backbone task , neural network computation , is regarded as an important candidate to potentially take quantum advantage . Recent works ( Francesco et al. , 2019 ; Tacchino et al. , 2020 ; Jiang et al. , 2020 ) have demonstrated that shallow neural networks with limited functions can be directly implemented on quantum computers without interfering with classical computers , but as pointed out by Broughton et al . ( 2020 ) , near-term Noisy Intermediate-Scale Quantum ( NISQ ) devices can hardly disentangle and generalize data in general applications using quantum computers alone . 
This year , Google ( 2020 ) put forward a library for hybrid quantum-classical neural networks , which attracts attention from both industry and academia to accelerate quantum deep learning . In a hybrid quantum-classical computing scheme , quantum computers act as hardware accelerators , working together with classical computers to speed up neural network computation . The incorporation of classical computers makes it possible to conduct operations that are hard or costly to implement on quantum computers ; however , it brings high data communication costs at the interface between quantum and classical computers . Therefore , instead of continuous communication during execution , a better practice is a “ prologue-acceleration-epilogue ” scheme : the classical computer prepares data in the prologue and post-processes data in the epilogue , while only the quantum computer is active during the acceleration phase for the main computations . Unless stated otherwise , “ hybrid model ” refers to the prologue-acceleration-epilogue scheme in the rest of the paper . In the classical computing setting , the universal approximability , i.e. , the ability to approximate a wide class of functions with arbitrarily small error , and the complexity bounds of different types of neural networks have been well studied ( Cybenko , 1989 ; Hornik et al. , 1989 ; Mhaskar & Micchelli , 1992 ; Sonoda & Murata , 2017 ; Yarotsky , 2017 ; Ding et al. , 2019 ; Wang et al. , 2019 ; Fan et al. , 2020 ) . However , due to the differences in computing paradigms , not all types of neural networks can be directly implemented on quantum computers . As such , it is still unclear whether those networks can work with hybrid quantum-classical computing and still attain universal approximability . 
In addition , as quantum computing limits the types of computations that can be handled , it is also unknown whether hybrid quantum-classical neural networks can take quantum advantage over classical networks at the same accuracy . This work explores these questions from a theoretical perspective . In this work , we first illustrate neural networks that are feasible in the hybrid quantum-classical computing scheme . Then we use the method of bound-by-construction to demonstrate their universal approximability for a wide class of functions and the computation bounds , including network depth , qubit cost , and gate cost , under a given error bound . In addition , compared with some of the lower complexity bounds for neural networks on classical computers , our established upper bounds are of lower asymptotic complexity , showing the potential of quantum advantage . 2 RELATED WORKS AND MOTIVATION . 2.1 NEURAL NETWORKS IN QUANTUM COMPUTING . Although research on neural networks in quantum computing traces back to the 1990s ( Kak , 1995 ; Purushothaman & Karayiannis , 1997 ; Ezhov & Ventura , 2000 ) , only recently , along with the revolution of quantum computers , have implementations of neural networks on actual quantum computers emerged ( Francesco et al. , 2019 ; Jiang et al. , 2020 ; Bisarya et al. , 2020 ) . There are mainly three different directions to exploit the power of quantum computers : ( 1 ) applying Quantum Random Access Memory ( QRAM ) ( Blencowe , 2010 ) ; ( 2 ) employing pure quantum computers ; ( 3 ) bridging different platforms for hybrid quantum-classical computing ( McClean et al. , 2016 ) . Kerenidis et al . ( 2019 ) is a typical work that implements neural networks with QRAM . Using QRAM provides the highest flexibility , such as implementing non-linear functions using lookup tables . But QRAM itself has limitations : instead of using the widely applied superconducting qubits ( Arute et al. 
, 2019 ; IBM , 2016 ) , QRAM needs the support of spin qubits ( Veldhorst et al. , 2015 ) to provide relatively long lifetimes . To make such a system practical , there is still a long way to go . Alternatively , there are works that encode data into either qubits ( Francesco et al. , 2019 ) or qubit states ( Jiang et al. , 2020 ) and use superconducting-based quantum computers to run neural networks . These methods also have limitations : due to the short decoherence times of current quantum computers , conditional statements are not supported , making it hard to implement some non-linear functions such as the most commonly used Rectified Linear Unit ( ReLU ) . But the advantages are also obvious : ( 1 ) the designs can be directly evaluated on actual quantum computers ; ( 2 ) little communication is needed between quantum and classical computers , which may otherwise be expensive . Hybrid quantum-classical computing tries to address the limitations of QRAM-based and pure quantum computer based approaches . Broughton et al . ( 2020 ) establishes a computing paradigm where different neurons can be implemented on either quantum or classical computers . This brings flexibility in implementing functions ( e.g. , ReLU ) , while at the same time it calls for fast interfaces for massive data transfer between quantum and classical computers . In this work , we focus on the hybrid quantum-classical computing scheme and follow the “ prologue-acceleration-epilogue ” computing scheme . It offers flexibility of implementation and at the same time requires minimal quantum-classical data transfer , as demonstrated in Figure 2 . 2.2 UNIVERSAL APPROXIMATION AND COMPLEXITY BOUND . Universal approximability of a neural network means that for any given continuous function , or a wide class of functions satisfying some constraints , and an arbitrarily small error bound ε > 0 , there exists a neural network model which can approximate the function with error no more than ε . 
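As a purely classical illustration of this definition (not the paper's quantum construction), a one-hidden-layer ReLU network can reproduce the piecewise-linear interpolant of any continuous f on [0, 1], and refining the grid drives the error below any ε. The helper below is a sketch of that standard construction; the names are hypothetical.

```python
def relu(x):
    return x if x > 0.0 else 0.0

def relu_interpolant(f, n_pieces):
    """Return g(x) = y_0 + sum_i (s_i - s_{i-1}) * relu(x - x_i):
    a sum of ReLU units equal to the piecewise-linear interpolant of f
    on a uniform grid over [0, 1]."""
    xs = [i / n_pieces for i in range(n_pieces + 1)]
    ys = [f(x) for x in xs]
    slopes = [(ys[i + 1] - ys[i]) * n_pieces for i in range(n_pieces)]
    def g(x):
        out, prev = ys[0], 0.0
        for i, s in enumerate(slopes):
            out += (s - prev) * relu(x - xs[i])
            prev = s
        return out
    return g
```

Doubling `n_pieces` roughly quarters the worst-case error for a twice-differentiable f, so any ε ∈ (0, 1) can be met with finitely many units.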
On classical computers , different types of neural networks have been proved to have universal approximability : multi-layer feedforward neural networks ( Cybenko , 1989 ; Hornik et al. , 1989 ) ; ReLU neural networks ( Mhaskar & Micchelli , 1992 ; Sonoda & Murata , 2017 ; Yarotsky , 2017 ) ; quantized neural networks ( Ding et al. , 2019 ; Wang et al. , 2019 ) ; and quadratic neural networks ( Fan et al. , 2020 ) . In addition , many of these works also establish complexity bounds in terms of the number of weights , number of layers , or number of neurons needed for approximation with error bound ε . When it comes to quantum computing , in recent years , Delgado ( 2018 ) demonstrated that a quantum circuit with an additional trainable diagonal matrix can approximate given functions , and Schuld et al . ( 2020 ) showed that Fourier-type sum-based quantum models can be universal function approximators if the quantum circuit is measured sufficiently many times . Most recently , we are witnessing exponentially increasing research efforts to exploit the high parallelism provided by quantum computers to accelerate neural networks ( Perdomo-Ortiz et al. , 2018 ; Huggins et al. , 2019 ; Cong et al. , 2019 ; Kerenidis et al. , 2019 ; Francesco et al. , 2019 ; Bisarya et al. , 2020 ; Broughton et al. , 2020 ; Jiang et al. , 2020 ; Tacchino et al. , 2020 ; Xia & Kais , 2020 ) . Existing works have demonstrated that quantum neural networks can achieve state-of-the-art accuracy for some tasks . For example , Figure 1 demonstrates the test accuracy comparison of different neural networks targeting pairwise classifiers on MNIST ( LeCun et al. , 1998 ) . The average accuracy gap between Figure 1 ( a ) and Figure 1 ( c ) is merely 0.5 % . However , the penalty for achieving high parallelism in quantum computing is the constraint on the types of computation that can be performed . 
As a result , neural networks designed for quantum computing have a limited set of operations , indicating that networks designed for classical computing may not be implementable on quantum computers . This raises a fundamental problem : can neural networks in quantum computing attain universal approximability ? A failure to attain universal approximability would fundamentally hinder neural networks in quantum computing from being used in practice , due to the significant accuracy loss for specific functions . Therefore , it is imperative to understand the expressivity of neural networks dedicated to quantum computers . Motivated by this , we prove that neural networks in hybrid quantum-classical computing can approximate a wide class of functions with arbitrarily small error . We also establish complexity bounds , which give practical insights in designing networks for quantum computing . 3 MAIN RESULTS . Figure 2 illustrates the adopted prologue-acceleration-epilogue computing scheme . It is a practical scheme with a small number of quantum-classical data transfers , and a neural network designed for this scheme can achieve competitive accuracy against classical computing , as demonstrated in Figure 1 . In addition , the target computing scheme is a special case of that used in TensorFlow Quantum ( Broughton et al. , 2020 ) ; therefore , if we can prove that the neural networks designed for it have universal approximability , then the conclusion carries over directly to TensorFlow Quantum . In this work , we follow the idea from ( Yarotsky , 2017 ; Ding et al. , 2019 ) by constructing a neural network ( namely BPNN , see Section 4.1 ) in the prologue-acceleration-epilogue computing scheme ( see Section 4.4 ) with bounded maximum error ( see Section 4.3 ) for a wide class of functions ( denoted as F_{d , n} , see Section 4.2 ) . In such a proof , the fundamental function to be implemented is the Taylor polynomial . 
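The construction idea — implement a Taylor polynomial exactly at each expansion point, then let a classical epilogue combine the expansion points — can be mimicked classically. A hedged sketch with exp as a stand-in target function (the function choice, grid, and helper names are illustrative assumptions, not the paper's BPNN):

```python
import math

def taylor_poly(x, k, n):
    """Degree-n Taylor polynomial of exp about expansion point k."""
    return sum(math.exp(k) * (x - k) ** j / math.factorial(j)
               for j in range(n + 1))

def piecewise_taylor_error(n_points, n, grid=400):
    """Approximate exp on [0, 1] by routing each x to its nearest of
    n_points uniformly spaced expansion points; return the max abs error."""
    err = 0.0
    for i in range(grid + 1):
        x = i / grid
        k = round(x * (n_points - 1)) / (n_points - 1)
        err = max(err, abs(math.exp(x) - taylor_poly(x, k, n)))
    return err
```

More expansion points or a higher degree n shrink the error, mirroring how the target error ε controls the number of expansion points in the paper's construction.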
Next , we first state the main result on the Taylor polynomial of a function f ∈ F_{d , n} in quantum computing ( see Appendix A.5 for the formal version ) . Results 3.1 . For any given function f ∈ F_{d , n} and an expansion point k , its Taylor polynomial at point k can be implemented on the quantum computer , such that ( i ) the network exactly implements the Taylor polynomial , ( ii ) the depth is O ( log n ) , ( iii ) the number of gates is O ( n^2 log n log d ) , ( iv ) the number of qubits is O ( n^2 log d ) . Here , we observe that since the Taylor polynomial can be exactly implemented by the quantum computer , the complexities in depth , gates , and qubits are not related to the error bound ε . This is the root cause of why the upper bound for neural networks in the hybrid quantum-classical computing scheme can approach the lower bound for ones on classical computing ( see the comparison at the end of this section ) . In addition , classical computing can not build such a system , because there are too many inputs , reaching up to d^{n+1} , which is infeasible for classical computing with exponentially increasing inputs . On the other hand , quantum computing can take advantage of encoding n inputs into log n qubits , and therefore it is feasible to implement such a network on a quantum computer . The above result shows the ability of BPNN to exactly implement the Taylor expansion at any point . Then , combined with the classical prologue for quantum-state preparation and the classical epilogue for accumulating results over all expansion points , we next state the main result on approximating a function f ∈ F_{d , n} in the prologue-acceleration-epilogue computing scheme as follows ; the formal version can be found in Appendix A.6 . Results 3.2 . 
For any given function f ∈ Fd , n , there is a binary polynomial neural network with a fixed structure that can be implemented in the hybrid quantum-classical computing scheme , such that ( i ) the network can approximate f with any error ∈ ( 0 , 1 ) , ( ii ) the overall depth is O ( 1 ) ; ( iii ) the number of quantum gates is O ( ( 1/ ) d n ) ; ( iv ) the number of qubits is O ( ( 1/ ) d n ) ; ( v ) the number of weights on classical computer is O ( ( 1/ ) d n ) . From the above result , we can see that the upper bounds on the depth and the number of weight/gates are of the same asymptotic complexity for both the quantum portion and classical portion in the hybrid computing system , which satisfies the constraint discussed in Section 4.1 to take full use of quantum computing . We further compare the complexity bounds between the BPNN on the hybrid quantum-classical computing scheme against the classical networks constructed for bound analysis . Comparison with the upper bounds for neural networks on classical computers : To attain an approximation error , Fan et al . ( 2020 ) demonstrates that the upper bound on the number of weights for unquantized quadratic network is O ( log ( log ( 1/ ) ) × ( 1/ ) d n ) ) , and Ding et al . ( 2019 ) demonstrates that the upper bound on the number of binary weights of the ReLU neural network is O ( log2 ( 1/ ) × ( 1/ ) dn ) . On the other hand , for the BPNN on hybrid quantum-classical computing , both the number of gates used in quantum acceleration and the weights used in classical prologue and epilogue are O ( ( 1/ ) d n ) . Although BPNN has similar expressive power compared with the binary ReLU network and reduced expressive power compared with the unquantized quadratic network ( due to the constraints on weight selection ) , the obtained upper bounds are of asymptotically lower complexity , which again shows the benefits of quantum computing for neural networks . 
Comparison with the lower bounds for neural networks on classical computers : We further compare the lower bound of the number of weights/gates needed to attain an error bound on a classical computer . The only established result in the literature is for unquantized ReLU network ( Yarotsky , 2017 ) , which suggests that to attain an approximation error bound of , the number of weights needed is at least Ω ( log−2p−1 ( 1/ ) × ( 1/ ) d/n ) with depth constraint of O ( logp ( 1/ ) ) where p is a constant to be chosen to determine the growth rate of depth . In this work , we demonstrate that the depth of BPNN in hybrid quantum-classical computing can be O ( 1 ) and the upper bounds on the number of weight/gates are O ( ( 1/ ) d/n ) ( both quantum and classical computers ) . Apparently , our upper bounds are even approching to the lower bound of the networks on classical computers , which are unquantized and should have stronger expressive power . This clearly demonstrates the potential quantum advantage that can be attained .
The problem studied in this work is of interest in the quantum machine learning community, as the power of small and noisy quantum computers for machine learning problems is far from being understood. Therefore, it is important to study the expressivity of quantum neural networks as function approximators. This work uses the model introduced by Tensorflow Quantum, where different neurons can be implemented on either quantum or classical computers.
SP:19ad4889e4e13927f7a6e5ab01d0b0e1ef925337
On the Universal Approximability and Complexity Bounds of Deep Learning in Hybrid Quantum-Classical Computing
With the continuously increasing number of quantum bits in quantum computers, there is growing interest in exploring applications that can harness their power. Recently, several attempts were made to implement neural networks, known to be computationally intensive, in hybrid quantum-classical computing schemes. While encouraging results have been shown, two fundamental questions need to be answered: (1) can neural networks in hybrid quantum-classical computing leverage quantum power and meanwhile approximate any function within a given error bound, i.e., attain universal approximability; (2) how do these neural networks compare with ones on a classical computer in terms of representational power? This work sheds light on these two questions from a theoretical perspective. 1 INTRODUCTION. Quantum computing has been rapidly evolving (e.g., IBM (2020) recently announced plans to debut a quantum computer with 1,121 quantum bits (qubits) in 2023), but the development of quantum applications is far behind; in particular, it is still unclear which applications can take quantum advantage, and how. Deep learning, one of the most prevalent applications, is well known to be computation-intensive, and therefore its backbone task, the neural network, is regarded as an important candidate for quantum acceleration. Recent works (Francesco et al., 2019; Tacchino et al., 2020; Jiang et al., 2020) have demonstrated that shallow neural networks with limited functionality can be directly implemented on quantum computers without involving classical computers, but as pointed out by Broughton et al. (2020), near-term Noisy Intermediate-Scale Quantum (NISQ) devices can hardly disentangle and generalize data in general applications using quantum computers alone.
Recently, Google (2020) put forward a library for hybrid quantum-classical neural networks, which attracts attention from both industry and academia to accelerate quantum deep learning. In a hybrid quantum-classical computing scheme, quantum computers act as hardware accelerators, working together with classical computers to speed up neural network computation. The incorporation of classical computers makes it possible to conduct operations that are hard or costly to implement on quantum computers; however, it brings high data-communication costs at the interface between quantum and classical computers. Therefore, instead of continuous communication during execution, a better practice is a "prologue-acceleration-epilogue" scheme: the classical computer prepares data at the prologue and post-processes data at the epilogue, while only the quantum computer is active during the acceleration phase for the main computations. Unless stated otherwise, "hybrid model" refers to the prologue-acceleration-epilogue scheme in the rest of the paper. In a classical computing scheme, the universal approximability, i.e., the ability to approximate a wide class of functions with arbitrarily small error, and the complexity bounds of different types of neural networks have been well studied (Cybenko, 1989; Hornik et al., 1989; Mhaskar & Micchelli, 1992; Sonoda & Murata, 2017; Yarotsky, 2017; Ding et al., 2019; Wang et al., 2019; Fan et al., 2020). However, due to differences in computing paradigms, not all types of neural networks can be directly implemented on quantum computers. As such, it is still unclear whether such networks can work with hybrid quantum-classical computing and still attain universal approximability.
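As an illustration only, the three stages of the prologue-acceleration-epilogue scheme can be sketched as a plain classical pipeline; the "acceleration" step here is a stand-in matrix-vector product, not a real quantum circuit, and all function names and numbers are hypothetical:

```python
# Illustrative sketch of the "prologue-acceleration-epilogue" scheme (NOT a real
# quantum program): data is transferred only at the two interfaces, and only the
# accelerator is active during the middle stage.

def prologue(raw):
    # classical computer: normalize raw data into a unit vector (state preparation)
    norm = sum(x * x for x in raw) ** 0.5
    return [x / norm for x in raw]

def acceleration(state, weights):
    # placeholder for the quantum stage: here, a plain classical linear map
    return [sum(w * s for w, s in zip(row, state)) for row in weights]

def epilogue(outputs):
    # classical computer: non-linear post-processing (e.g., pick the arg-max class)
    return max(range(len(outputs)), key=lambda i: outputs[i])

weights = [[0.2, 0.9], [0.7, 0.1]]           # made-up toy parameters
features = prologue([3.0, 4.0])               # -> unit vector [0.6, 0.8]
label = epilogue(acceleration(features, weights))
```

The design point is that `prologue` and `epilogue` absorb the operations that are awkward on quantum hardware (normalization, non-linearities), so only two data transfers are needed per inference.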
In addition, as quantum computing limits the types of computations that can be performed, it is also unknown whether hybrid quantum-classical neural networks can gain a quantum advantage over classical networks at the same accuracy. This work explores these questions from a theoretical perspective. We first illustrate neural networks that are feasible in the hybrid quantum-classical computing scheme. Then we use the method of bound-by-construction to demonstrate their universal approximability for a wide class of functions, and derive computation bounds, including network depth, qubit cost, and gate cost, under a given error bound. In addition, compared with some of the lower complexity bounds for neural networks on classical computers, our established upper bounds are of lower asymptotic complexity, showing the potential of quantum advantage. 2 RELATED WORKS AND MOTIVATION. 2.1 NEURAL NETWORKS IN QUANTUM COMPUTING. Although research on neural networks in quantum computing traces back to the 1990s (Kak, 1995; Purushothaman & Karayiannis, 1997; Ezhov & Ventura, 2000), only recently, along with the revolution of quantum computers, have implementations of neural networks on actual quantum computers emerged (Francesco et al., 2019; Jiang et al., 2020; Bisarya et al., 2020). There are mainly three directions to exploit the power of quantum computers: (1) applying Quantum Random Access Memory (QRAM) (Blencowe, 2010); (2) employing pure quantum computers; (3) bridging different platforms for hybrid quantum-classical computing (McClean et al., 2016). Kerenidis et al. (2019) is a typical work implementing neural networks with QRAM. Using QRAM provides the highest flexibility, such as implementing non-linear functions using lookup tables. But QRAM itself has limitations: instead of using the widely applied superconducting qubits (Arute et al.
, 2019; IBM, 2016), QRAM needs the support of spin qubits (Veldhorst et al., 2015) to provide a relatively long lifetime. To make the system practical, there is still a long way to go. Alternatively, there are works that encode data into either qubits (Francesco et al., 2019) or qubit states (Jiang et al., 2020) and use superconducting-based quantum computers to run neural networks. These methods also have limitations: due to the short decoherence times of current quantum computers, conditional statements are not supported, making it hard to implement some non-linear functions such as the most commonly used Rectified Linear Unit (ReLU). But the advantages are also obvious: (1) the designs can be directly evaluated on actual quantum computers; (2) little communication is needed between quantum and classical computers, which may otherwise be expensive. Hybrid quantum-classical computing tries to address the limitations of both the QRAM-based and the pure quantum approaches. Broughton et al. (2020) establishes a computing paradigm where different neurons can be implemented on either quantum or classical computers. This brings flexibility in implementing functions (e.g., ReLU), while at the same time it calls for fast interfaces for massive data transfer between quantum and classical computers. In this work, we focus on the hybrid quantum-classical computing scheme and follow the "prologue-acceleration-epilogue" computing scheme. It offers flexibility of implementation and at the same time requires minimal quantum-classical data transfer, as demonstrated in Figure 2. 2.2 UNIVERSAL APPROXIMATION AND COMPLEXITY BOUND. Universal approximability of a neural network means that for any given continuous function, or a wide class of functions satisfying some constraints, and any arbitrarily small error bound ε > 0, there exists a neural network model which can approximate the function with error no more than ε.
On classical computers, different types of neural networks have been proved to have universal approximability: multi-layer feedforward neural networks (Cybenko, 1989; Hornik et al., 1989); ReLU neural networks (Mhaskar & Micchelli, 1992; Sonoda & Murata, 2017; Yarotsky, 2017); quantized neural networks (Ding et al., 2019; Wang et al., 2019); and quadratic neural networks (Fan et al., 2020). In addition, many of these works also establish complexity bounds in terms of the number of weights, layers, or neurons needed for approximation within error bound ε. When it comes to quantum computing, Delgado (2018) recently demonstrated that a quantum circuit with an additional trainable diagonal matrix can approximate given functions, and Schuld et al. (2020) showed that Fourier-type sum-based quantum models can be universal function approximators if the quantum circuit is measured sufficiently many times. Most recently, we are witnessing rapidly growing research efforts to exploit the high parallelism provided by quantum computers to accelerate neural networks (Perdomo-Ortiz et al., 2018; Huggins et al., 2019; Cong et al., 2019; Kerenidis et al., 2019; Francesco et al., 2019; Bisarya et al., 2020; Broughton et al., 2020; Jiang et al., 2020; Tacchino et al., 2020; Xia & Kais, 2020). Existing works have demonstrated that quantum neural networks can achieve state-of-the-art accuracy for some tasks. For example, Figure 1 compares the test accuracy of different neural networks targeting pairwise classifiers on MNIST (LeCun et al., 1998). The average accuracy gap between Figure 1(a) and Figure 1(c) is merely 0.5%. However, the penalty for achieving high parallelism in quantum computing is the constrained set of computation types that can be performed.
As a result, neural networks designed for quantum computing have a limited set of operations, meaning that networks designed for classical computing may not be implementable on quantum computers. This raises a fundamental question: can neural networks in quantum computing attain universal approximability? Failure to attain universal approximability would fundamentally hinder neural networks in quantum computing from being used in practice, due to the significant accuracy loss on specific functions. Therefore, it is imperative to understand the expressivity of neural networks dedicated to quantum computers. Motivated by this, we prove that neural networks in hybrid quantum-classical computing can approximate a wide class of functions with arbitrarily small error. We also establish complexity bounds, which give practical insights for designing networks for quantum computing. 3 MAIN RESULTS. Figure 2 illustrates the adopted prologue-acceleration-epilogue computing scheme. It is a practical scheme with a small number of quantum-classical data transfers, and the neural network designed for this scheme can achieve accuracy competitive with classical computing, as demonstrated in Figure 1. In addition, the target computing scheme is a special case of that used in Tensorflow Quantum (Broughton et al., 2020); therefore, if we can prove that neural networks designed for it have universal approximability, the conclusion carries over directly to Tensorflow Quantum. In this work, we follow the idea from (Yarotsky, 2017; Ding et al., 2019) by constructing a neural network (namely BPNN, see Section 4.1) in the prologue-acceleration-epilogue computing scheme (see Section 4.4) with bounded maximum error (see Section 4.3) for a wide class of functions (denoted as F_{d,n}, see Section 4.2). In such a proof, the fundamental function to be implemented is the Taylor polynomial.
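To see why the Taylor polynomial is a useful building block, here is a minimal numeric sketch: the degree-n Taylor polynomial of a smooth function (exp here, with all values chosen for illustration) around an expansion point k, whose Lagrange remainder on [0, 1] is bounded by e·(1/2)^{n+1}/(n+1)!, so the error shrinks rapidly with the degree n:

```python
# Degree-n Taylor polynomial of exp around expansion point k, with its
# worst-case remainder bound checked numerically on [0, 1].
import math

def taylor_exp(x, k, n):
    # sum_{j=0..n} f^{(j)}(k)/j! * (x - k)^j with f = exp (every derivative is e^k)
    return sum(math.exp(k) * (x - k) ** j / math.factorial(j) for j in range(n + 1))

k, n = 0.5, 6
worst = max(abs(taylor_exp(x / 100.0, k, n) - math.exp(x / 100.0))
            for x in range(101))                        # sampled max error on [0, 1]
bound = math.e * 0.5 ** (n + 1) / math.factorial(n + 1)  # Lagrange remainder, |x - k| <= 0.5
```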
Next, we first state the main result on the Taylor polynomial of a function f ∈ F_{d,n} on the quantum computer (see Appendix A.5 for the formal version). Results 3.1. For any given function f ∈ F_{d,n} and an expansion point k, its Taylor polynomial at point k can be implemented on the quantum computer, such that (i) the network exactly implements the Taylor polynomial, (ii) the depth is O(log n), (iii) the number of gates is O(n² log n log d), (iv) the number of qubits is O(n² log d). Here, we observe that since the Taylor polynomial can be implemented exactly by the quantum computer, the complexities in depth, gates, and qubits do not depend on the error bound ε. This is the root cause of why the upper bounds for neural networks in the hybrid quantum-classical computing scheme can approach the lower bounds for networks on classical computers (see the comparison at the end of this section). In addition, classical computing cannot build such a system, because the number of inputs, reaching up to d^{n+1}, grows exponentially and is infeasible for classical computing. On the other hand, quantum computing can take advantage of encoding n inputs into log n qubits, and therefore it is feasible to implement such a network on a quantum computer. The above result shows the ability of BPNN to exactly implement the Taylor expansion at any point. Then, combined with the classical Prologue for quantum-state preparation and the classical Epilogue for accumulating results over all expansion points, we next state the main result on approximating a function f ∈ F_{d,n} in the prologue-acceleration-epilogue computing scheme as follows; the formal version can be found in Appendix A.6. Results 3.2.
For any given function f ∈ F_{d,n}, there is a binary polynomial neural network with a fixed structure that can be implemented in the hybrid quantum-classical computing scheme, such that (i) the network can approximate f with any error ε ∈ (0, 1); (ii) the overall depth is O(1); (iii) the number of quantum gates is O((1/ε)^{d/n}); (iv) the number of qubits is O((1/ε)^{d/n}); (v) the number of weights on the classical computer is O((1/ε)^{d/n}). From the above result, we can see that the upper bounds on depth and on the number of weights/gates are of the same asymptotic complexity for both the quantum portion and the classical portion of the hybrid computing system, which satisfies the constraint discussed in Section 4.1 for making full use of quantum computing. We further compare the complexity bounds of the BPNN in the hybrid quantum-classical computing scheme against the classical networks constructed for bound analysis. Comparison with the upper bounds for neural networks on classical computers: To attain an approximation error ε, Fan et al. (2020) demonstrate that the upper bound on the number of weights for an unquantized quadratic network is O(log(log(1/ε)) × (1/ε)^{d/n}), and Ding et al. (2019) demonstrate that the upper bound on the number of binary weights of a ReLU neural network is O(log²(1/ε) × (1/ε)^{d/n}). On the other hand, for the BPNN in hybrid quantum-classical computing, both the number of gates used in quantum acceleration and the number of weights used in the classical prologue and epilogue are O((1/ε)^{d/n}). Although BPNN has expressive power similar to the binary ReLU network and reduced expressive power compared with the unquantized quadratic network (due to the constraints on weight selection), the obtained upper bounds are of asymptotically lower complexity, which again shows the benefits of quantum computing for neural networks.
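A back-of-the-envelope comparison of the three upper bounds quoted above makes the asymptotic gap visible; constants and lower-order factors are ignored, and d, n, and the sampled ε values are arbitrary toy choices:

```python
# Illustrative-only evaluation of the quoted upper-bound expressions
# (not actual weight counts of any real network).
import math

def classical_quadratic(eps, d, n):      # O(log(log(1/eps)) * (1/eps)^(d/n))
    return math.log(math.log(1.0 / eps)) * (1.0 / eps) ** (d / n)

def classical_binary_relu(eps, d, n):    # O(log^2(1/eps) * (1/eps)^(d/n))
    return math.log(1.0 / eps) ** 2 * (1.0 / eps) ** (d / n)

def hybrid_bpnn(eps, d, n):              # O((1/eps)^(d/n))
    return (1.0 / eps) ** (d / n)

d, n = 4, 2
eps_grid = (1e-2, 1e-4, 1e-8)
relu_ratios = [classical_binary_relu(e, d, n) / hybrid_bpnn(e, d, n) for e in eps_grid]
quad_ratios = [classical_quadratic(e, d, n) / hybrid_bpnn(e, d, n) for e in eps_grid]
```

Both ratios grow without bound as ε → 0, reflecting that the hybrid bound drops the logarithmic factors entirely.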
Comparison with the lower bounds for neural networks on classical computers: We further compare against the lower bound on the number of weights/gates needed to attain an error bound ε on a classical computer. The only established result in the literature is for unquantized ReLU networks (Yarotsky, 2017), which suggests that to attain an approximation error bound of ε, the number of weights needed is at least Ω(log^{−2p−1}(1/ε) × (1/ε)^{d/n}) under a depth constraint of O(log^p(1/ε)), where p is a constant chosen to determine the growth rate of the depth. In this work, we demonstrate that the depth of BPNN in hybrid quantum-classical computing can be O(1) and the upper bounds on the number of weights/gates are O((1/ε)^{d/n}) (on both the quantum and classical computers). Remarkably, our upper bounds even approach the lower bound for networks on classical computers, which are unquantized and should have stronger expressive power. This clearly demonstrates the potential quantum advantage that can be attained.
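A back-of-the-envelope sketch of the exponential compression underlying this advantage: amplitude encoding stores N values in ⌈log₂ N⌉ qubits, so input counts that grow exponentially (the toy values of d and n below are arbitrary) still fit in a number of qubits that is only linear in the exponent:

```python
# Qubit count needed to amplitude-encode N values: ceil(log2(N)).
import math

def qubits_needed(num_inputs):
    return max(1, math.ceil(math.log2(num_inputs)))

d, n = 4, 10                  # hypothetical dimension and degree
num_terms = d ** (n + 1)      # exponentially many values to encode
q = qubits_needed(num_terms)  # only (n + 1) * log2(d) qubits
```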
The paper considers the expressivity and approximation properties of machine learning models where a parameterized quantum circuit is used to 'accelerate' a classical neural network. The results consider a model with a data encoder, a quantum circuit, and then a classical feedforward neural net for post-processing. To make the results non-trivial, only models with asymptotically similar classical and quantum complexity are considered.
What About Taking Policy as Input of Value Function: Policy-extended Value Function Approximator
1 INTRODUCTION. Reinforcement learning (RL) has been widely considered a promising way to learn optimal policies in many decision-making problems (Mnih et al., 2015; Lillicrap et al., 2015; Silver et al., 2016; You et al., 2018; Schreck et al., 2019; Vinyals et al., 2019; Hafner et al., 2020). Lying at the heart of RL is the value function, which defines the long-term evaluation of a policy. With function approximation (e.g., deep neural networks), a value function approximator (VFA) is able to approximate the values of a policy under large and continuous state spaces. As commonly recognized, most RL algorithms can be described as Generalized Policy Iteration (GPI) (Sutton & Barto, 1998). As illustrated in the left of Figure 1, at each iteration the VFA is trained to approximate the true values of the current policy, with respect to which the policy is then improved. However, value approximation can never be perfect, and its quality influences the effectiveness of policy improvement, raising a demand for better value approximation (v. Hasselt, 2010; Bellemare et al., 2017; Fujimoto et al., 2018). Since a conventional VFA only approximates the values (i.e., knowledge (Sutton et al., 2011)) of one policy, the knowledge learned from previously encountered policies is not preserved and utilized for future learning in an explicit way. For example, in GPI a conventional VFA cannot track the values of the changing policy by itself and has no idea of the direction of value generalization when approximating the values of a new policy. In this paper, we propose the Policy-extended Value Function Approximator (PeVFA), which additionally takes an explicit policy representation as input, in contrast to a conventional VFA. PeVFA is able to preserve values for multiple policies and induces an appealing characteristic, i.e., value generalization among policies.
We study formal generalization and contraction conditions on the value approximation error of PeVFA, focusing specifically on value generalization along the policy improvement path, which we call local generalization. Based on both theoretical and empirical evidence, we propose a new form of GPI with PeVFA (the right of Figure 1) which can benefit from the closer approximation distance induced by local value generalization under some conditions; thus, GPI with PeVFA is expected to be more efficient in consecutive value approximation along the policy improvement path. Moreover, we propose a framework to learn an effective policy representation for an RL policy from policy network parameters and state-action pairs alternately, through contrastive learning and an auxiliary loss of action prediction. Finally, based on Proximal Policy Optimization (PPO), we derive a practical RL algorithm, PPO-PeVFA, from the above methods. Our experimental results demonstrate the effectiveness of both the value generalization offered by PeVFA and policy representation learning. Our main contributions are summarized as follows: • We propose PeVFA, which improves generalization of values among policies, and provide a theoretical analysis of generalization, especially in the local generalization scenario. • We propose a new form of GPI with PeVFA, resulting in closer value approximation along the policy improvement path, demonstrated through experiments. • To our knowledge, we are the first to learn a representation (low-dimensional embedding) for an RL policy from its network parameters (i.e., weights and biases). 2 BACKGROUND. 2.1 REINFORCEMENT LEARNING. We consider a Markov Decision Process (MDP) defined as 〈S, A, r, P, γ〉, where S is the state space, A is the action space, r is the reward function, P is the transition function, and γ ∈ [0, 1) is the discount factor.
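The MDP components above determine, for any fixed policy π, a value for each state via the linear Bellman system v = r_π + γ P_π v. A minimal sketch (the transition probabilities and rewards are made-up toy numbers) solves the 2-state case exactly:

```python
# Exact policy evaluation on a toy 2-state MDP: solve (I - gamma * P_pi) v = r_pi.
gamma = 0.9
P_pi = [[0.8, 0.2],     # policy-induced state-transition matrix (hypothetical)
        [0.3, 0.7]]
r_pi = [1.0, 0.0]       # expected one-step reward in each state (hypothetical)

# 2x2 linear solve by Cramer's rule
a, b = 1 - gamma * P_pi[0][0], -gamma * P_pi[0][1]
c, d = -gamma * P_pi[1][0], 1 - gamma * P_pi[1][1]
det = a * d - b * c
v_pi = [(r_pi[0] * d - b * r_pi[1]) / det,
        (a * r_pi[1] - c * r_pi[0]) / det]
```

The result satisfies the Bellman backup in both states, and the rewarding state ends up with the higher value.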
The goal of an RL agent is to learn a policy π ∈ Π, where π(a|s) is a distribution over actions given the state, that maximizes the expected long-term discounted return. The state-value function v^π(s) is defined as the expected discounted return obtained by following the policy π from a state s: v^π(s) = E_π[ Σ_{t=0}^∞ γ^t r_{t+1} | s_0 = s ] for all s ∈ S, where r_{t+1} = r(s_t, a_t). We use V^π = v^π(·) to denote the vector of values over all possible states. The value function is determined by the policy π and the environment models (i.e., P and r). For a conventional value function, the policy is modeled implicitly within a table or a function approximator, i.e., a mapping from only state to value. One can refer to Appendix E.1 for a more detailed description. 2.2 EXTENSIONS OF CONVENTIONAL VALUE FUNCTION. Schaul et al. (2015) introduced Universal Value Function Approximators (UVFA), which generalize values over goals in goal-conditioned RL. Similar ideas are also adopted for low-level learning in Hierarchical RL (Nachum et al., 2018). Such extensions are also studied in more challenging RL problems, e.g., opponent modeling (He & Boyd-Graber, 2016; Grover et al., 2018; Tacchetti et al., 2019) and context-based Meta-RL (Rakelly et al., 2019; Lee et al., 2020). The General Value Function (GVF) of (Sutton et al., 2011) is proposed as a form of general knowledge representation through cumulants instead of rewards. In a unified view, each approach generalizes a different aspect of the conventional VFA, focusing on different components of the vector-form Bellman equation (Sutton & Barto, 1998), as discussed in Appendix E.2. Concurrent to our work, several works also study taking the policy as an input of value functions. Harb et al. (2020) propose Policy Evaluation Networks (PVN) to approximate the objective function J(π) = E_{s_0∼ρ_0}[v^π(s_0)] of different policies π, where ρ_0 is the initial state distribution.
Later, in (Raileanu et al., 2020), the Policy-Dynamics Value Function (PDVF) is proposed, which takes both the policy and a task context as additional inputs for the purpose of value generalization among policies and tasks, so as to adapt quickly to new tasks. PDVF can be viewed as an integration of PVN and task-specific context learning (Rakelly et al., 2019; Zhou et al., 2019a). Both PVN and PDVF conduct value approximation for a given collection of policies and then optimize the policy with gradients through the policy-specific inputs (shortened as GTPI below) of well-trained value function variants in a zero-shot manner. We view this as a typical case of global generalization, discussed further in Sec. 3. In contrast, we focus more on a local generalization scenario and utilize value generalization to improve learning during the standard GPI process with no prior policies given. Closely related to our work, Faccio & Schmidhuber (2020) propose a class of Parameter-based Value Functions (PVFs) which take policy parameters as inputs. Based on PVFs, new policy gradients are introduced in the form of conventional policy gradients plus GTPI (i.e., by backpropagating through the policy parameters in PVFs). Beyond zero-shot policy optimization, PVFs also utilize value generalization in the iterative learning process. Our work differs from PVFs in two aspects: first, we consider a general policy representation in our proposed PeVFA along with a framework for policy representation learning, in contrast to parsing the policy parameters directly; second, we do not resort to GTPI for the policy update in our algorithms and only utilize value generalization for more efficient value estimation in GPI. Moreover, in these previous works, it is not clear how effective value generalization among policies can be and whether it is beneficial for learning or not.
In this paper, we study the conditions for beneficial value generalization through both theoretical and empirical lenses. 3 POLICY-EXTENDED VALUE FUNCTION APPROXIMATOR. In this paper, we propose the Policy-extended Value Function Approximator (PeVFA), an extension of the conventional value function that explicitly takes the policy (representation) as input. Formally, consider a function g: Π → X = R^n in a general form that maps any policy π to an n-dimensional representation χ_π = g(π) ∈ X. The policy-extended value function V: S × X → R defines the values over the state and policy spaces: V(s, χ_π) = E_π[ Σ_{t=0}^∞ γ^t r_{t+1} | s_0 = s ], for all s ∈ S, π ∈ Π. (1) When only one policy π is considered, V(s, χ_π) is equivalent in definition to a conventional value function v^π(s). Note that if the policy π_ω is explicitly parameterized and its parameters are used as the representation, i.e., χ_{π_ω} = ω, PeVFA is equivalent to the Parameter-based State Value Function (PSVF) in the PVF family (Faccio & Schmidhuber, 2020). Similarly, we can define a policy-extended action-value function Q(s, a, χ_π). We use V(s, χ_π) for demonstration in this paper. The key point is that V(s, ·) is not defined for a specific policy and is able to preserve the values of multiple policies. With function approximation, a PeVFA is expected to approximate values over the policy space, i.e., {v^π(s)}_{π∈Π}. Formally, with a finite parameter space Θ, we consider the optimal parameter θ* with minimum approximation error over all possible states and policies: θ* = argmin_{θ∈Θ} F(θ, Π), where F(θ, Π) = max_{π∈Π} ‖V_θ(π) − V^π‖ = max_{π∈Π} ‖V_θ(·, π) − v^π(·)‖_∞. Here F(θ, Π) is the overall approximation error of V_θ, defined as the maximum L∞ norm of the value-vector distance over Π, a typical metric commonly adopted in studies of value approximation and policy iteration (Kakade & Langford, 2002; Munos, 2003; Lagoudakis & Parr, 2003).
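The defining property of a PeVFA, holding values for several policies in one object keyed by a policy representation χ_π, can be sketched in tabular form; the representations and values below are made-up toy numbers, not a learned model:

```python
# Minimal tabular sketch of a PeVFA V(s, chi_pi): one structure that preserves
# values for multiple policies at once, keyed by (state, policy representation).

class TabularPeVFA:
    def __init__(self):
        self.table = {}

    def update(self, state, chi_pi, value):
        self.table[(state, chi_pi)] = value

    def __call__(self, state, chi_pi):
        return self.table[(state, chi_pi)]

V = TabularPeVFA()
chi_old, chi_new = (0.1, 0.9), (0.2, 0.8)   # hypothetical representations of two policies
V.update("s0", chi_old, 4.0)                 # values of the old policy...
V.update("s0", chi_new, 5.5)                 # ...preserved alongside the new one
```

A conventional VFA would overwrite the old policy's value at "s0"; the PeVFA keeps both, which is what makes generalization from old to new policies possible in the first place.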
Therefore, the learning process of a PeVFA V_θ is θ → θ*. As commonly recognized, a conventional value function approximator (VFA) can generalize values to unseen states or state-action pairs after proper training on known states and state-action pairs under a specific policy. Beyond this, PeVFA provides an appealing characteristic that allows values to be generalized among policies, evaluating new policies with the knowledge of old ones. In the following, we study this value generalization characteristic theoretically (Sec. 3.1) and provide some empirical evidence (Sec. 3.2), from which we finally introduce a new form of GPI (Sec. 3.3). 3.1 VALUE GENERALIZATION AMONG POLICIES. Concretely, we introduce two scenarios of value generalization: global generalization and local generalization, which are closely related to many RL problems. An illustration is shown in Figure 2. In this section, we first focus on a general two-policy case and study whether the learned value approximation of one policy can generalize to that of the other. We propose a common form of generalization approximation error (Theorem 1) and analyze the condition for generalization contraction (Corollary 1). Further, we turn to the two generalization scenarios illustrated in Figure 2 and, in particular, introduce the closer approximation distance along the policy improvement path induced by PeVFA (Corollary 2), from which a more efficient form of GPI can be derived (Sec. 3.3). Formally, consider two policies π_1 and π_2 and a PeVFA V_θ to approximate their values. For convenience of demonstration, we consider an identical representation space (i.e., X = Π) and an infinite parameter space Θ where zero approximation error can be achieved, i.e., F(θ*, Π) = 0.
We then define the approximation loss f of V_θ for each policy π ∈ Π as f_θ(π) = ‖V_θ(π) − V_{θ*}(π)‖ = ‖V_θ(·, π) − V_{θ*}(·, π)‖_∞ ≥ 0, which measures the value distance of θ to the optimal parameter at some policy π. In the following, we analyze the generalization performance on the value estimation of the unlearned policy π_2 when the PeVFA V_θ is only trained to approximate the values of policy π_1. We first introduce the two assumptions below: Assumption 1 (Contraction). The learning process P of value approximation for π_1 is a γ-contraction, that is, f_{θ̂}(π_1) ≤ γ f_θ(π_1), where γ ∈ (0, 1] and θ̂ is obtained from θ by applying the learning process P(π_1); θ and θ̂ are the parameters of the PeVFA before and after some learning of π_1's values. We use f and f̂ as abbreviations for f_θ and f_{θ̂} below for ease of demonstration. Recent works (Keskar et al., 2017; Novak et al., 2018; Wang et al., 2018) suggest that generalization performance is related to local properties of the model. We thus assume the following smoothness property for the subsequent analysis: Assumption 2 (Smoothness). f̂ is (or has) Lipschitz continuous (gradient/Hessian) at π_1 with Lipschitz constant L̂_0, e.g., |f̂(π) − f̂(π_1)| ≤ L̂_0 · d(π, π_1) for π ∈ Π with some metric space (Π, d). With the above two assumptions, we next derive a general upper bound on the generalized value approximation error for the unlearned policy π_2 (i.e., f̂(π_2)) as follows: Theorem 1. Under Assumptions 1 and 2, when f(π_1) ≤ f(π_2), we have the following bound: f̂(π_2) ≤ γ f(π_2) + M(π_1, π_2, L̂), (2) where the first term is a generalized contraction and the second a locality margin. The form of M depends on the smoothness property, e.g., M(π_1, π_2, L̂) = L̂_0 · d(π_2, π_1) when f̂ is Lipschitz continuous. See Appendix A.1 for more instances of M when higher-order smoothness properties (Nesterov & Polyak, 2006) are considered. Proof. See Appendix A.1.
The main idea is to chain the Lipschitz continuity upper bound ( Assumption 2 ) , the contraction upper bound ( Assumption 1 ) , and the inequality of f . Remark 1 . The case f ( π1 ) ≤ f ( π2 ) considered in Theorem 1 can usually exist since only π1 ’ s values are learned . Under such circumstances , we suggest that the complementary case f ( π1 ) > f ( π2 ) is acceptable since the approximation error of π2 is already lower than the trained one . Theorem 1 provides a generalization upper bound for f̂ ( π2 ) , as a generalized contraction on f ( π2 ) plus a locality margin termM which is related to π1 , π2 and the smoothness property L̂ . Further , we analyze the condition when a PeVFA can also obtain a contraction approximation for unlearned policy π2 though only trained on π1 , as below : Corollary 1 . Followed by Theorem 1 , consider f̂ is Lipschitz continuous and f ( π2 ) 6= 0 , we have , f̂ ( π2 ) ≤ γgf ( π2 ) where γg = γ + L̂0·d ( π1 , π2 ) f ( π2 ) . When f ( π2 ) ≥ L̂0·d ( π1 , π2 ) 1−γ , θ P ( π1 ) −−−−→ θ̂ is also a γg-contraction for value approximation of π2 , with γg ∈ ( 0 , 1 ] . Proof . ReplaceM with Lipschitz continuous upper bound and transform the RHS in Equation 2 , then let RHS not greater than f ( π2 ) . See Appendix A.2 for complete derivation . Remark 2 . From the generalization contraction condition provided in Corollary 1 , we can find that : as i. γ → 0 , or ii . d ( π1 , π2 ) → 0 , or iii . L̂0 → 0 , the contraction condition is easier to achieve ( or the contraction gets tighter ) , i.e. , the generalization on unlearned policy π2 is better . In the above , we discuss value generalization in a two-policy case , i.e. , from one learned policy to another unlearned policy . Global generalization in Figure 2 ( a ) is an extension scenario of the two-policy case where values generalize from a known policy set ( π ∈ Π0 ) of which the values are learned to unseen policies ( π′ ∈ Π1 ) . 
In this paper , we focus more on local generalization of PeVFA as shown in Figure 2 ( b ) , Recall the GPI form shown in the left of Figure 1 , at each iteration t , value generalization from current policy πt to improved policy πt+1 is exactly the two-policy case we discussed above . However , it is unclear how local generalization can impact the value estimation in GPI ( i , e. , policy evaluation ) . To this end , we propose the following Corollary to see a connection between local generalization of PeVFA and a closer value approximation distance : Corollary 2 . At iteration t in local generalization scenario of PeVFA ( Figure 2 ( b ) ) , if fθt ( πt ) + fθt ( πt+1 ) ≤ ‖Vθ∗ ( πt ) − Vθ∗ ( πt+1 ) ‖ , then fθt ( πt+1 ) ≤ ‖Vθt ( πt ) − Vθ∗ ( πt+1 ) ‖ . Proof . Proof can be obtained by applying Triangle Inequality . See Appendix A.3 . Corollary 2 indicates that local generalization of PeVFA can induce a more preferable start point ( Vθt ( πt+1 ) ) which is closer to the optimal approximation target ( Vθ∗ ( πt+1 ) ) than the conventional one ( V πtθt , equivalent to Vθt ( πt ) in definition ) for policy evaluation process at iteration t+ 1 . Remark 3 . With closer start points Vθt ( πt+1 ) , policy evaluation ( i.e. , minimize fθt ( πt+1 ) ) can be more efficient with PeVFA . One can assume an ideal case with perfect local generalization , where policy evaluation is no longer necessary and policy improvement can be performed consecutively .
The authors propose PeVFA: a value function able to evaluate the expected return of multiple policies. They do so by extending the conventional value function, allowing it to receive as input the parameter (or a representation) of the policy. The authors study the local generalization property of PeVFA, propose possible ways of encoding the policy parameters and compare traditional PPO with an extended version of PPO using PeVFA. While the idea of generalization among many policies is an interesting topic in RL, there are many theoretical and experimental issues that prevent acceptance. Moreover, the authors do not at all compare their approach to recent work which also uses value functions with policy parameters as input.
SP:763cf5ceb0330e0c317f78438711e0fb6febe70d
What About Taking Policy as Input of Value Function: Policy-extended Value Function Approximator
1 INTRODUCTION. Reinforcement learning (RL) has been widely considered a promising way to learn optimal policies in many decision-making problems (Mnih et al., 2015; Lillicrap et al., 2015; Silver et al., 2016; You et al., 2018; Schreck et al., 2019; Vinyals et al., 2019; Hafner et al., 2020). At the heart of RL is the value function, which defines the long-term evaluation of a policy. With function approximation (e.g., deep neural networks), a value function approximator (VFA) is able to approximate the values of a policy under large and continuous state spaces. As commonly recognized, most RL algorithms can be described as Generalized Policy Iteration (GPI) (Sutton & Barto, 1998). As illustrated in the left of Figure 1, at each iteration the VFA is trained to approximate the true values of the current policy, with respect to which the policy is improved. However, value approximation can never be perfect, and its quality influences the effectiveness of policy improvement, raising a requirement for better value approximation (van Hasselt, 2010; Bellemare et al., 2017; Fujimoto et al., 2018). Since a conventional VFA only approximates the values (i.e., knowledge (Sutton et al., 2011)) of one policy, the knowledge learned from previously encountered policies is not preserved and utilized for future learning in an explicit way. For example, in GPI a conventional VFA cannot track the values of the changing policy by itself and has no notion of the direction of value generalization when approximating the values of a new policy. In this paper, we propose the Policy-extended Value Function Approximator (PeVFA), which additionally takes an explicit policy representation as input, in contrast to a conventional VFA. PeVFA is able to preserve values for multiple policies and induces an appealing characteristic, i.e., value generalization among policies.
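As a minimal illustration of the distinction (hypothetical shapes and a linear model, not the authors' architecture), a conventional VFA maps a state to a value, while a PeVFA maps a (state, policy-representation) pair to a value, so a single parameter vector can hold values for many policies:

```python
import numpy as np

def conventional_vfa(w, s):
    """Conventional VFA: the value depends on the state only."""
    return float(w @ s)

def pevfa(w, s, chi_pi):
    """PeVFA: the value depends on the state AND a policy representation
    chi_pi, so one parameter vector can store values for many policies."""
    x = np.concatenate([s, chi_pi])  # joint (state, policy) input
    return float(w @ x)

rng = np.random.default_rng(0)
s = rng.normal(size=4)      # a state feature vector
chi_a = rng.normal(size=3)  # representation of policy A
chi_b = rng.normal(size=3)  # representation of policy B
w = rng.normal(size=7)      # one shared PeVFA parameter vector

v_a = pevfa(w, s, chi_a)    # value of state s under policy A
v_b = pevfa(w, s, chi_b)    # value of state s under policy B
```

With the same weights w and the same state s, changing only the policy representation changes the value estimate, which is exactly what lets a PeVFA preserve values for multiple policies.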
We study formal generalization and contraction conditions on the value approximation error of PeVFA, focusing specifically on value generalization along the policy improvement path, which we call local generalization. Based on both theoretical and empirical evidence, we propose a new form of GPI with PeVFA (the right of Figure 1) which can benefit from the closer approximation distance induced by local value generalization under some conditions; thus, GPI with PeVFA is expected to be more efficient in consecutive value approximation along the policy improvement path. Moreover, we propose a framework to learn an effective policy representation for an RL policy from policy network parameters and state-action pairs alternately, through contrastive learning and an auxiliary loss of action prediction. Finally, based on Proximal Policy Optimization (PPO), we derive a practical RL algorithm, PPO-PeVFA, from the above methods. Our experimental results demonstrate the effectiveness of both the value generalization offered by PeVFA and policy representation learning. Our main contributions are summarized as follows:
• We propose PeVFA, which improves the generalization of values among policies, and provide a theoretical analysis of generalization, especially in the local generalization scenario.
• We propose a new form of GPI with PeVFA, resulting in closer value approximation along the policy improvement path, as demonstrated through experiments.
• To our knowledge, we are the first to learn a representation (low-dimensional embedding) for an RL policy from its network parameters (i.e., weights and biases).
2 BACKGROUND. 2.1 REINFORCEMENT LEARNING. We consider a Markov Decision Process (MDP) defined as 〈S, A, r, P, γ〉, where S is the state space, A is the action space, r is the reward function, P is the transition function, and γ ∈ [0, 1) is the discount factor.
The goal of an RL agent is to learn a policy π ∈ Π, where π(a|s) is a distribution over actions given the state, that maximizes the expected long-term discounted return. The state-value function vπ(s) is defined in terms of the expected discounted return obtained by following the policy π from a state s: vπ(s) = Eπ[∑_{t=0}^∞ γ^t r_{t+1} | s0 = s] for all s ∈ S, where r_{t+1} = r(st, at). We use V^π = vπ(·) to denote the vector of values for all possible states. The value function is determined by the policy π and the environment models (i.e., P and r). For a conventional value function, the policy is modeled implicitly within a table or a function approximator, i.e., a mapping from state alone to value. See Appendix E.1 for a more detailed description. 2.2 EXTENSIONS OF THE CONVENTIONAL VALUE FUNCTION. Schaul et al. (2015) introduced Universal Value Function Approximators (UVFA), which generalize values over goals in goal-conditioned RL. Similar ideas are also adopted for low-level learning in hierarchical RL (Nachum et al., 2018). Such extensions are also studied in more challenging RL problems, e.g., opponent modeling (He & Boyd-Graber, 2016; Grover et al., 2018; Tacchetti et al., 2019) and context-based meta-RL (Rakelly et al., 2019; Lee et al., 2020). The general value function (GVF) of Sutton et al. (2011) is proposed as a form of general knowledge representation through cumulants instead of rewards. In a unified view, each approach generalizes a different aspect of the conventional VFA, focusing on different components of the vector-form Bellman equation (Sutton & Barto, 1998), as discussed in Appendix E.2. Concurrently with our work, several works also study taking the policy as an input of value functions. Harb et al. (2020) propose Policy Evaluation Networks (PVN) to approximate the objective function J(π) = E_{s0∼ρ0}[vπ(s0)] of a policy π, where ρ0 is the initial state distribution.
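The definition of vπ can be made concrete on a toy MDP: for a fixed policy, the expected discounted return solves a linear Bellman equation. The numbers below are illustrative only:

```python
import numpy as np

# A tiny 2-state MDP with the policy already folded in (hypothetical numbers):
# P_pi[s, s'] is the state-transition probability under the policy,
# R_pi[s] is the expected one-step reward from state s.
gamma = 0.9
P_pi = np.array([[0.8, 0.2],
                 [0.3, 0.7]])
R_pi = np.array([1.0, 0.0])

# v_pi(s) = E[sum_t gamma^t r_{t+1} | s0 = s] solves the linear Bellman
# equation v = R_pi + gamma * P_pi @ v, i.e. (I - gamma * P_pi) v = R_pi.
v_pi = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
```

State 0, which earns the reward and tends to stay put, ends up with the larger value, as the Bellman equation predicts.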
Later, Raileanu et al. (2020) propose the Policy-Dynamics Value Function (PDVF), which takes both the policy and a task context as additional inputs, for the purpose of value generalization among policies and tasks so as to adapt quickly to new tasks. PDVF can be viewed as an integration of PVN and task-specific context learning (Rakelly et al., 2019; Zhou et al., 2019a). Both PVN and PDVF conduct value approximation for a given collection of policies and then optimize the policy in a zero-shot manner with gradients through the policy-specific inputs (abbreviated as GTPI below) of well-trained value function variants. We view this as a typical case of the global generalization discussed further in Sec. 3. In contrast, we focus more on the local generalization scenario and utilize value generalization to improve learning during the standard GPI process, with no prior policies given. Closely related to our work, Faccio & Schmidhuber (2020) propose a class of Parameter-based Value Functions (PVFs) which take policy parameters as inputs. Based on PVFs, new policy gradients are introduced in the form of a combination of conventional policy gradients plus GTPI (i.e., backpropagating through the policy parameters in PVFs). Beyond zero-shot policy optimization, PVFs also utilize value generalization in the iterative learning process. Our work differs from PVFs in two aspects: first, we consider a general policy representation in our proposed PeVFA, along with a framework for policy representation learning, in contrast to parsing the policy parameters directly; second, we do not resort to GTPI for the policy update in our algorithms and only utilize value generalization for more efficient value estimation in GPI. Moreover, in these previous works, it is not clear how effective the value generalization among policies can be, or whether it is beneficial for learning.
In this paper, we study the conditions for beneficial value generalization from both theoretical and empirical lenses. 3 POLICY-EXTENDED VALUE FUNCTION APPROXIMATOR. In this paper, we propose the Policy-extended Value Function Approximator (PeVFA), an extension of the conventional value function that explicitly takes a policy (representation) as input. Formally, consider a function g: Π → X = R^n in a general form that maps any policy π to an n-dimensional representation χπ = g(π) ∈ X. The policy-extended value function V: S × X → R defines the values over the state and policy spaces: V(s, χπ) = Eπ[∑_{t=0}^∞ γ^t r_{t+1} | s0 = s], for all s ∈ S, π ∈ Π. (1) When only one policy π is considered, V(s, χπ) is equivalent in definition to the conventional value function vπ(s). Note that if the policy πω is explicitly parameterized and its parameters are used as the representation, i.e., χπω = ω, PeVFA is equivalent to the Parameter-based State Value Function (PSVF) in the PVF family (Faccio & Schmidhuber, 2020). Similarly, we can define a policy-extended action-value function Q(s, a, χπ). We use V(s, χπ) for demonstration in this paper. The key point is that V(s, ·) is not defined for a specific policy and is able to preserve the values of multiple policies. With function approximation, a PeVFA is expected to approximate values across the policy space, i.e., {vπ(s)}_{π∈Π}. Formally, with a finite parameter space Θ, we consider the optimal parameter θ∗ with minimum approximation error over all possible states and policies: θ∗ = argmin_{θ∈Θ} F(θ, Π), where F(θ, Π) = max_{π∈Π} ‖Vθ(π) − V^π‖ = max_{π∈Π} ‖Vθ(·, π) − vπ(·)‖∞. Here F(θ, Π) is the overall approximation error of Vθ, defined as the maximum L∞ norm of the value-vector distance over Π, a metric commonly adopted in studies of value approximation and policy iteration (Kakade & Langford, 2002; Munos, 2003; Lagoudakis & Parr, 2003).
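The objective F(θ, Π) is simple to compute once value vectors are tabulated; a small sketch with made-up numbers:

```python
import numpy as np

def overall_error(V_theta, V_true):
    """F(theta, Pi): the maximum over policies of the L-infinity distance
    between the PeVFA's value vector and the true value vector.
    Both arguments have shape (num_policies, num_states)."""
    return float(np.abs(V_theta - V_true).max(axis=1).max())

# Hypothetical values for two policies over two states:
V_true = np.array([[1.0, 2.0],
                   [0.5, 1.5]])
V_theta = np.array([[1.1, 1.8],
                    [0.5, 1.4]])

err = overall_error(V_theta, V_true)  # worst per-policy sup-norm error: 0.2
```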
Therefore, the learning process of a PeVFA Vθ is θ → θ∗. As commonly recognized, a conventional value function approximator (VFA) can generalize values to unseen states or state-action pairs after proper training on known states and state-action pairs under a specific policy. Beyond this, PeVFA provides an appealing characteristic that allows values to be generalized among policies, evaluating new policies with the knowledge of old ones. In the following, we study this value generalization characteristic theoretically (Sec. 3.1) and provide some empirical evidence (Sec. 3.2), from which we finally introduce a new form of GPI (Sec. 3.3). 3.1 VALUE GENERALIZATION AMONG POLICIES. Concretely, we introduce two scenarios of value generalization: global generalization and local generalization, which are closely related to many RL problems. An illustration is shown in Figure 2. In this section, we first focus on a general two-policy case and study whether the learned value approximation of one policy can generalize to that of the other. We propose a common form of the generalization approximation error (Theorem 1) and analyze the condition for generalization contraction (Corollary 1). Further, we turn to the two generalization scenarios illustrated in Figure 2 and, in particular, introduce the closer approximation distance along the policy improvement path induced by PeVFA (Corollary 2), from which a more efficient form of GPI can be derived (Sec. 3.3). Formally, consider two policies π1 and π2 and a PeVFA Vθ to approximate their values. For convenience of demonstration, we consider an identical representation space (i.e., X = Π) and an infinite parameter space Θ where zero approximation error can be achieved, i.e., F(θ∗, Π) = 0.
We then define the approximation loss f of Vθ for each policy π ∈ Π as fθ(π) = ‖Vθ(π) − Vθ∗(π)‖ = ‖Vθ(·, π) − Vθ∗(·, π)‖∞ ≥ 0, which measures the value distance of θ to the optimal parameter at some policy π. In the following, we analyze the generalization performance on the value estimation of an unlearned policy π2 when the PeVFA Vθ is trained only to approximate the values of policy π1. We first introduce the two assumptions below. Assumption 1 (Contraction). The learning process P of value approximation for π1 is a γ-contraction, that is, fθ̂(π1) ≤ γ fθ(π1), where γ ∈ (0, 1] and θ̂ is obtained from θ through the learning process P(π1). Here θ and θ̂ are the parameters of the PeVFA before and after some learning of π1's values. We use f and f̂ as abbreviations for fθ and fθ̂ below for ease of demonstration. Recent works (Keskar et al., 2017; Novak et al., 2018; Wang et al., 2018) suggest that generalization performance is related to local properties of the model. We therefore make the following smoothness assumption for the analysis. Assumption 2 (Smoothness). f̂ is Lipschitz continuous (or has a Lipschitz continuous gradient/Hessian) at π1 with Lipschitz constant L̂0, e.g., |f̂(π) − f̂(π1)| ≤ L̂0 · d(π, π1) for π ∈ Π in some metric space (Π, d). With the above two assumptions, we next derive a general upper bound on the generalized value approximation error for the unlearned policy π2 (i.e., f̂(π2)) as follows. Theorem 1. Under Assumptions 1 and 2, when f(π1) ≤ f(π2), we have the following bound: f̂(π2) ≤ γ f(π2) [generalized contraction] + M(π1, π2, L̂) [locality margin]. (2) The form of M depends on the smoothness property, e.g., M(π1, π2, L̂) = L̂0 · d(π2, π1) when f̂ is Lipschitz continuous. See Appendix A.1 for more instances of M when higher-order smoothness properties (Nesterov & Polyak, 2006) are considered. Proof. See Appendix A.1.
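The chain of inequalities behind Theorem 1 can be traced numerically; the constants below are arbitrary toy values, not from the paper:

```python
# Toy numbers tracing the chain of inequalities behind Theorem 1:
#   f_hat(pi2) <= f_hat(pi1) + L0 * d(pi1, pi2)   (Lipschitz, Assumption 2)
#              <= gamma * f(pi1) + L0 * d          (contraction, Assumption 1)
#              <= gamma * f(pi2) + L0 * d          (since f(pi1) <= f(pi2)).
gamma = 0.5              # contraction rate of the policy-evaluation step
L0 = 0.2                 # Lipschitz constant of f_hat
d = 1.0                  # distance d(pi1, pi2) between the two policies
f_pi1, f_pi2 = 0.6, 0.8  # pre-update errors, with f(pi1) <= f(pi2)

f_hat_pi1_worst = gamma * f_pi1             # worst case allowed by Assumption 1
f_hat_pi2_worst = f_hat_pi1_worst + L0 * d  # worst case allowed by Assumption 2
theorem1_bound = gamma * f_pi2 + L0 * d     # RHS of Equation (2)
```

Every intermediate worst case sits below the final bound, which is how the proof chains the three inequalities.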
The main idea is to chain the Lipschitz continuity upper bound (Assumption 2), the contraction upper bound (Assumption 1), and the inequality on f. Remark 1. The case f(π1) ≤ f(π2) considered in Theorem 1 usually holds, since only π1's values are learned. Under such circumstances, we suggest that the complementary case f(π1) > f(π2) is acceptable, since the approximation error of π2 is then already lower than that of the trained policy. Theorem 1 provides a generalization upper bound on f̂(π2): a generalized contraction on f(π2) plus a locality margin term M that depends on π1, π2, and the smoothness property L̂. Further, we analyze the condition under which a PeVFA also obtains a contraction in the approximation error of the unlearned policy π2, even though it is trained only on π1, as below. Corollary 1. Following Theorem 1, suppose f̂ is Lipschitz continuous and f(π2) ≠ 0; then f̂(π2) ≤ γg f(π2), where γg = γ + L̂0 · d(π1, π2) / f(π2). When f(π2) ≥ L̂0 · d(π1, π2) / (1 − γ), the learning process θ → θ̂ under P(π1) is also a γg-contraction for the value approximation of π2, with γg ∈ (0, 1]. Proof. Replace M with the Lipschitz continuity upper bound, transform the RHS of Equation 2, and then require the RHS to be no greater than f(π2). See Appendix A.2 for the complete derivation. Remark 2. From the generalization contraction condition provided in Corollary 1, we find that as (i) γ → 0, (ii) d(π1, π2) → 0, or (iii) L̂0 → 0, the contraction condition becomes easier to achieve (and the contraction gets tighter), i.e., the generalization to the unlearned policy π2 improves. In the above, we discuss value generalization in a two-policy case, i.e., from one learned policy to another unlearned policy. Global generalization, in Figure 2(a), extends the two-policy case: values generalize from a known policy set (π ∈ Π0), whose values have been learned, to unseen policies (π′ ∈ Π1).
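Corollary 1's effective rate γg and its contraction threshold are elementary to evaluate; again with toy constants:

```python
def gamma_g(gamma, L0, d, f_pi2):
    """Effective rate from Corollary 1: gamma_g = gamma + L0*d(pi1,pi2)/f(pi2)."""
    return gamma + L0 * d / f_pi2

def contraction_threshold(gamma, L0, d):
    """Smallest f(pi2) for which gamma_g <= 1, i.e. f(pi2) >= L0*d / (1 - gamma)."""
    return L0 * d / (1.0 - gamma)

gamma, L0, d = 0.5, 0.2, 1.0
thr = contraction_threshold(gamma, L0, d)             # = 0.4
rate_large_error = gamma_g(gamma, L0, d, f_pi2=0.8)   # < 1: still contracts
rate_small_error = gamma_g(gamma, L0, d, f_pi2=0.25)  # > 1: contraction can fail
```

The numbers match Remark 2's intuition: training on π1 helps π2 most when π2's error is still large relative to the locality margin L̂0 · d(π1, π2).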
In this paper, we focus more on the local generalization of PeVFA, as shown in Figure 2(b). Recall the GPI form shown in the left of Figure 1: at each iteration t, value generalization from the current policy πt to the improved policy πt+1 is exactly the two-policy case discussed above. However, it is unclear how local generalization impacts value estimation in GPI (i.e., policy evaluation). To this end, we propose the following corollary, which establishes a connection between the local generalization of PeVFA and a closer value approximation distance. Corollary 2. At iteration t in the local generalization scenario of PeVFA (Figure 2(b)), if fθt(πt) + fθt(πt+1) ≤ ‖Vθ∗(πt) − Vθ∗(πt+1)‖, then fθt(πt+1) ≤ ‖Vθt(πt) − Vθ∗(πt+1)‖. Proof. The proof follows by applying the triangle inequality. See Appendix A.3. Corollary 2 indicates that the local generalization of PeVFA can induce a preferable starting point (Vθt(πt+1)) that is closer to the optimal approximation target (Vθ∗(πt+1)) than the conventional one (V^{πt}_{θt}, equivalent to Vθt(πt) by definition) for the policy evaluation process at iteration t+1. Remark 3. With the closer starting point Vθt(πt+1), policy evaluation (i.e., minimizing fθt(πt+1)) can be more efficient with PeVFA. One can imagine an ideal case with perfect local generalization, where policy evaluation is no longer necessary and policy improvement can be performed consecutively.
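The triangle-inequality argument behind Corollary 2 can be written out in one chain, using the reverse triangle inequality, the corollary's premise, and the definition f_{θt}(πt) = ‖V_{θt}(πt) − V_{θ∗}(πt)‖:

```latex
\|V_{\theta_t}(\pi_t) - V_{\theta^*}(\pi_{t+1})\|
  \;\ge\; \|V_{\theta^*}(\pi_t) - V_{\theta^*}(\pi_{t+1})\|
          - \|V_{\theta_t}(\pi_t) - V_{\theta^*}(\pi_t)\|
  \;\ge\; \bigl(f_{\theta_t}(\pi_t) + f_{\theta_t}(\pi_{t+1})\bigr)
          - f_{\theta_t}(\pi_t)
  \;=\; f_{\theta_t}(\pi_{t+1}).
```

The first step is the reverse triangle inequality, and the second substitutes the premise of Corollary 2, yielding exactly its conclusion.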
The paper conditions the value function on a representation of the policy. The representation can be based on a batch of state-action pairs or based on the policy filters. When conditioning on the representation of a new policy, the value function can better approximate the value of the new policy. Experiments show benefits on continuous control tasks.
SP:763cf5ceb0330e0c317f78438711e0fb6febe70d
Adversarial Data Generation of Multi-category Marked Temporal Point Processes with Sparse, Incomplete, and Small Training Samples
1 INTRODUCTION. Marked Temporal Point Processes (MTPPs) are widely used for the modeling and analysis of asynchronous stochastic discrete events in continuous time (Upadhyay et al., 2018; Türkmen et al., 2019; Yan, 2019), with applications in numerous domains such as homeland security, cybersecurity, consumer analytics, health care analytics, and social science. An MTPP models stochastic discrete events as marked points (ei), each defined by its time of occurrence ti and its category ci. Usually, point processes are characterized by the conditional intensity function λ∗(t) = λ(t|Ht) = P[event ∈ [t, t + dt) | Ht], which, given the past Ht = {ei = (ci, ti) | ti < t}, specifies the probability of an event occurring at future time points. There are many popular intensity functional forms. The Hawkes process (self-exciting process) (Hawkes, 1971) is a point process used in both statistical and machine learning contexts, where the intensity is a linear function of past events (Ht) (Türkmen et al., 2019). In traditional parametric models, the conditional intensity functions are manually pre-specified (Yan, 2019). Recently, various neural network models (generally called neural TPPs) have been used to learn arbitrary and unknown distributions while eliminating manual intensity function selection. Reinforcement learning (Zhu et al., 2019; Li et al., 2018), recurrent neural networks (RNNs) (Du et al., 2016), and generative neural networks (Xiao et al., 2018) are used to approximate the intensity functions and learn complex MTPP distributions using larger datasets. Recent advances in data collection techniques allow collecting complex event data, which form heterogeneous MTPPs where a marked point (eij) defines a time of occurrence (ti) and a category (cj) separately. Therefore, multi-category MTPPs are concerned not only with the time of occurrence but also with the category of the next marked point.
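For concreteness, the Hawkes conditional intensity with an exponential kernel can be sketched as follows (the parameter values are illustrative, not from the paper):

```python
import math

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with an exponential
    kernel: lambda*(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each past event temporarily raises the event rate (self-excitation)."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

history = [1.0, 2.5]                         # past event times in H_t
lam_base = hawkes_intensity(0.5, history)    # before any event: the baseline mu
lam_excited = hawkes_intensity(2.6, history) # just after two events: elevated
```

This makes the "linear function of past events" claim explicit: each past event adds one decaying term to the baseline rate.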
Multi-category MTPPs add an extra dimension to the distribution, which complicates learning with existing technologies. In fact, multi-category MTPPs are highly useful for modeling the behavioral patterns of suspicious or specific individuals and groups in homeland security (Campedelli et al., 2019b;a; Hung et al., 2018; 2019), potential malicious network activities in cybersecurity (Peng et al., 2017), recommendation systems in consumer analytics (Vassøy et al., 2019), and the behavioral patterns of patients to determine certain illnesses (Islam et al., 2017; Mancini & Paganoni, 2019). A number of challenges limit the collection of and access to data in many fields, often resulting in small and incomplete datasets. Data on social, political, and crime behaviors are often incomplete due to data collection challenges such as data quality maintenance and privacy and confidentiality issues (National Institutes of Health & Services, 2020), yet rigorous analysis with complete data is essential to produce accurate and reliable outcomes. So there is a critical need for a technique to capture and learn MTPP distributions, develop and apply machine learning algorithms, etc., for a small set of data, some of which may be incomplete. We present an adversarial multi-category MTPP generation technique which is capable of generating sparse, asynchronous, stochastic, multi-category, discrete events in continuous time based on a limited dataset. Adversarial training has recently evolved and is able to provide exceptional results in many data generation applications, mostly in image, audio, and video generation, while precisely mimicking the features of an actual dataset. The original GAN architecture (Goodfellow et al., 2014) works well only for continuous and complete data distributions, and GANs had not been used for learning the distributions of discrete variables (Choi et al., 2017).
Later, GAN architectures for discrete events were introduced (Makhzani et al., 2015; Yu et al., 2017) and also applied to MTPP generation using extensive training data (Xiao et al., 2018; 2017). Adversarial autoencoders (AAEs) are effective at capturing latent discrete or continuous distributions (Makhzani et al., 2015). In this work, we present feature mapping modules for accommodating incomplete data, making the AAE capable of capturing the MTPP distributions of incomplete and small datasets. Incompleteness of the data points can occur in the following ways: marked points were not collected, or actors did not originally expose some marked points due to the dynamic nature of these stochastic processes, which is especially the case in social and behavioral domains. The main contribution of this paper is a novel technique to synthetically generate high-fidelity multi-category MTPPs using adversarial autoencoders and feature mapping techniques, leveraging sparse, incomplete, and small datasets. To the best of our knowledge, no technique is available for multi-category MTPP generation using such a dataset, which is significantly more challenging than existing generation scenarios. Section 2 reviews related literature on MTPPs and AAEs. Section 3 presents the definition of multi-category MTPPs, and Section 4 discusses the use of AAEs for incomplete, multi-category MTPP generation. Section 5 then presents the unique preprocessing and postprocessing techniques included in the feature mapping encoder and decoder. Section 6 discusses the results of the experiments, and Section 7 summarizes the conclusion and future work. 2 RELATED WORK. MTPPs are widely used for modeling asynchronous stochastic discrete events in continuous time (Upadhyay et al., 2018; Du et al., 2016; Li et al., 2018; Türkmen et al., 2019). Usually, an MTPP is defined using a conditional intensity function (Türkmen et al.
, 2019), which provides the instantaneous rate of events given previous points. Intensity functions are often specified by various processes such as the Poisson process, the Hawkes process (self-exciting process) (Hawkes, 1971), and the self-correcting process (Isham & Westcott, 1979). In traditional MTPPs, the intensity function has to be explicitly defined; however, any mismatch between the manually defined intensity function and the underlying intensity function of a process can have a significant adverse impact on the accuracy of models and outcomes. Deep generative networks avoid the requirement of manually identifying the intensity and thus allow the use of arbitrary and complex distributions. Recurrent neural networks (RNNs) with reinforcement learning have been widely used in recent years (Du et al., 2016; Li et al., 2018), and several hybrid and extended models have also been presented. A stochastic sequential model is proposed in (Sharma et al., 2019) as a combination of a deep state space model and a deterministic RNN for modeling MTPPs. FastPoint (Türkmen et al., 2019) uses deep RNNs to capture complex temporal patterns, and self-excitation dynamics within each mark are modeled using Hawkes processes. A semi-parametric generative model is introduced in (Zhu et al., 2019) for spatio-temporal event data by combining spatial statistical models with reinforcement learning. Advanced data collection techniques and online social media platforms produce complex event data, and social network analysis can now be used to inform solutions to many societal issues (Bonchi et al., 2011). Many such processes of heterogeneous and complex events require a multi-category MTPP-based representation. Data integrity is also a major concern in social networks, as fake and misleading data are not uncommon (Muramudalige et al., 2019).
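Sampling from such an intensity is commonly done with Ogata's thinning algorithm; a minimal sketch for the exponential-kernel Hawkes case (illustrative parameters, not from any of the cited models):

```python
import math
import random

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.0, T=10.0, seed=0):
    """Simulate a Hawkes process on [0, T] with Ogata's thinning algorithm.
    With an exponential kernel the intensity only decays between events, so
    the intensity evaluated at the current time upper-bounds it until the
    next event, which makes it a valid thinning envelope."""
    rng = random.Random(seed)
    events = []

    def lam(t):
        return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)

    t = 0.0
    while True:
        lam_bar = lam(t)                      # upper bound on [t, next event)
        t += rng.expovariate(lam_bar)         # candidate event at rate lam_bar
        if t > T:
            break
        if rng.random() <= lam(t) / lam_bar:  # accept with prob lam(t)/lam_bar
            events.append(t)
    return events

events = simulate_hawkes()
```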
In many disciplines, such as economics and the biological and social sciences, removal of non-verifiable entries is crucial for maintaining the required data integrity, which in turn leads to incomplete and small datasets. Various techniques have been introduced to handle missing data in different contexts (Folch-Fortuny et al., 2015; MacNeil Vroomen et al., 2016). Generative adversarial networks (Goodfellow et al., 2014) have become an alternative for data generation without extensive problem-specific theoretical foundations or empirical verification (Yan, 2019). The initial GAN architecture (Goodfellow et al., 2014) is capable of capturing the exact distribution of continuous and complete data but cannot be used for learning the distribution of discrete variables (Choi et al., 2017). The recent improvement in the form of the Wasserstein GAN (Arjovsky et al., 2017) is used to implement generative TPP models (Xiao et al., 2018). medGAN is designed to learn the distribution of discrete features, such as diagnosis or medication codes, via a combination of an autoencoder and the adversarial framework (Choi et al., 2017). The Adversarial Autoencoder (AAE) (Makhzani et al., 2015) is a probabilistic autoencoder which uses the GAN framework as a variational inference algorithm for both discrete and continuous latent variables. The aggregated posterior distribution q(z) on the latent code is defined with the encoding function q(z|x) and the data distribution pd(x) as follows, where x denotes an input sample set: q(z) = ∫x q(z|x) pd(x) dx. (1) In general usage of an AAE, x represents consistent, discrete or continuous data samples where almost all data points are captured or complete in a given context. The challenge addressed in our paper is to apply an AAE to scattered and incomplete multi-category MTPP generation using our proposed feature mapping techniques with a data approximation method.
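Sampling from the aggregated posterior in Equation (1) amounts to ancestral sampling: draw x from the data distribution, then z from the encoder. A numpy sketch with a stand-in linear-Gaussian encoder (purely illustrative; in an AAE the encoder is a neural network and an adversarial loss pushes q(z) toward a chosen prior):

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    """Stand-in stochastic encoder q(z|x): a Gaussian whose mean depends on x.
    Hypothetical; in an AAE this would be a learned network."""
    return 0.5 * x + rng.normal(scale=0.1, size=x.shape)

# Sampling from q(z) = integral of q(z|x) * p_d(x) dx:
# draw x from the data distribution p_d, then z from the encoder q(z|x).
x_samples = rng.normal(loc=2.0, scale=1.0, size=10_000)  # toy data distribution
z_samples = encoder(x_samples)

# The AAE's discriminator would compare such q(z) samples against prior
# samples p(z); here we only inspect q(z)'s first moment.
z_mean = float(z_samples.mean())
```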
Details of such an AAE for sparse, incomplete, multi-category MTPP generation and of the feature mapping techniques are presented in Sections 4 and 5, respectively. 3 MULTI-CATEGORY MARKED TEMPORAL POINT PROCESSES. A marked temporal point process (MTPP) represents a set of asynchronous stochastic discrete actions/events in continuous time (Upadhyay et al., 2018; Li et al., 2018). Given the immense availability of heterogeneous and complex event data in recent years, it is important to model such complex events using multi-category MTPPs. A multi-category marked point is denoted as follows: a marked point eij is an event of category cj occurring at time ti, as shown in Figure 1. Tables in Appendix A illustrate the categories (i.e., cj) in the datasets, where these heterogeneous events exhibit complex dependencies and correlations. Usually, an MTPP analysis deals with an ensemble of marked temporal point processes (MTPPs). Multi-category MTPPs are described as follows. Let the kth MTPP be Hk, with marked points denoted ekij ∈ Hk. Considering n events (marked points) and m categories in the kth MTPP, the marked points are characterized as ekij = (ti, cj), where i ∈ [1, n] and j ∈ [1, m]. Then, H = {ekij = (ti, cj) ∈ Hk; k ∈ [1, N]}, (2) where H represents N multi-category MTPPs. However, without loss of generality, we denote ekij as eij in the following discussion. In some problem domains, the i (times of occurrence) or j (categories) values may change rapidly, and some categories may not be recorded frequently. More importantly, in many problems in domains such as the social and behavioral sciences, not all marked points (eij) of an MTPP are known or observable due to the limitations of the information gathering process, confidentiality constraints, unverifiability, deception, etc.
A notable aspect of our work , is the use of sparse , incomplete , and multi-category MTPPs where an actor ( an MTTP ) either did not carry out activities corresponding to a certain event categories and marked points or they carried them out , but such activities were not reported in the reliable or permissible sources . To address these challenges , we propose a feature mapping encoder and decoder which are capable of capturing the sparseness and incompleteness of the data . The proposed feature mapping techniques consist of multiple steps including the calculation of cumulative probabilities for each category and a data approximation technique for incomplete data ( briefly described in Algorithm 1 ) . More details of the feature mapping encoder and the decoder are discussed in Section 5 .
The authors propose a method for generating multi-category marked temporal point processes (MTPPs) from sparse, incomplete, and small training datasets. They apply an Adversarial Autoencoder (AAE) together with feature mapping techniques, which include a transformation between the categories and timestamps of marked points and the percentile distribution of each category. The paper demonstrates the effectiveness and robustness of the proposed method by comparing it with a Markov chain approach on three datasets: the Radicalization Dataset, the MIMIC III Dataset, and the Stack Overflow Dataset.
Adversarial Data Generation of Multi-category Marked Temporal Point Processes with Sparse, Incomplete, and Small Training Samples
1 INTRODUCTION . Marked Temporal Point Processes (MTPPs) are widely used for modeling and analyzing asynchronous stochastic discrete events in continuous time (Upadhyay et al., 2018; Türkmen et al., 2019; Yan, 2019), with applications in numerous domains such as homeland security, cybersecurity, consumer analytics, health care analytics, and social science. An MTPP models stochastic discrete events as marked points (ei), each defined by its time of occurrence ti and its category ci. Usually, point processes are characterized by the conditional intensity function λ∗(t) = λ(t|Ht), where λ∗(t) dt = P[event ∈ [t, t + dt) | Ht], which, given the past Ht = {ei = (ti, ci) | ti < t}, specifies the probability of an event occurring at future time points. There are many popular intensity functional forms. The Hawkes process (self-exciting process) (Hawkes, 1971) is a point process used in both statistical and machine learning contexts, where the intensity is a linear function of the past events Ht (Türkmen et al., 2019). In traditional parametric models, the conditional intensity functions are manually pre-specified (Yan, 2019). Recently, various neural network models (generally called neural TPPs) have been used to learn arbitrary and unknown distributions while eliminating manual intensity function selection. Reinforcement learning (Zhu et al., 2019; Li et al., 2018), recurrent neural networks (RNNs) (Du et al., 2016), and generative neural networks (Xiao et al., 2018) have been used to approximate the intensity functions and learn complex MTPP distributions from large datasets. Recent advances in data collection techniques allow collecting complex event data which form heterogeneous MTPPs, where a marked point (eij) specifies a time of occurrence (ti) and a category (cj) separately. Therefore, multi-category MTPPs concern not only the time of occurrence but also the category of the next marked point.
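The Hawkes intensity mentioned above can be illustrated with the common exponential kernel, λ∗(t) = μ + Σ_{ti < t} α exp(−β(t − ti)). The following is a minimal pure-Python sketch; the kernel choice and the parameter values μ, α, β are illustrative assumptions, not taken from the paper:

```python
import math

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process:
    lambda*(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

history = [0.2, 0.9, 1.5]            # past event times H_t
base = hawkes_intensity(2.0, [])     # with no history the rate is just the baseline mu
excited = hawkes_intensity(2.0, history)
```

With the self-exciting kernel, every past event raises the rate, and the excitation decays exponentially as t moves away from the last event.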
Multi-category MTPPs add an extra dimension to the distribution, which complicates learning with existing techniques. In fact, multi-category MTPPs are highly useful for modeling the behavioral patterns of suspicious or specific individuals and groups in homeland security (Campedelli et al., 2019b;a; Hung et al., 2018; 2019), potential malicious network activities in cybersecurity (Peng et al., 2017), recommendation systems in consumer analytics (Vassøy et al., 2019), and the behavioral patterns of patients to determine certain illnesses (Islam et al., 2017; Mancini & Paganoni, 2019). A number of challenges limit the collection of and access to data in many fields, often resulting in small and incomplete datasets. Records of social, political, and criminal behavior are often incomplete due to data collection challenges such as data quality maintenance and privacy and confidentiality issues (National Institutes of Health & Services, 2020), yet rigorous analysis with complete data remains essential for producing accurate and reliable outcomes. There is thus a critical need for techniques that can capture and learn an MTPP distribution, and support downstream machine learning algorithms, from a small set of data, some of which may be incomplete. We present an adversarial multi-category MTPP generation technique which is capable of generating sparse, asynchronous, stochastic, multi-category, discrete events in continuous time based on a limited dataset. Adversarial training has evolved rapidly and provides exceptional results in many data generation applications, mostly in image, audio, and video generation, while precisely mimicking the features of an actual dataset. The original GAN architecture (Goodfellow et al., 2014) works well only for continuous and complete data distributions, and GANs have not been used for learning the distribution of discrete variables (Choi et al., 2017).
Later, GAN architectures for discrete events were introduced (Makhzani et al., 2015; Yu et al., 2017) and applied to MTPP generation using extensive training data (Xiao et al., 2018; 2017). Adversarial autoencoders (AAEs) are effective at capturing latent discrete or continuous distributions (Makhzani et al., 2015). In this work, we present feature mapping modules that accommodate incomplete data and make an AAE capable of capturing the MTPP distributions of incomplete and small datasets. Incompleteness of the data points can arise in the following ways: marked points were not collected, or actors did not originally expose some marked points due to the dynamics of these stochastic processes, which is especially the case in social and behavioral domains. The main contribution of the paper is a novel technique for synthetically generating high-fidelity multi-category MTPPs using adversarial autoencoders and feature mapping techniques that leverage sparse, incomplete, and small datasets. To the best of our knowledge, no technique is available for multi-category MTPP generation from such a dataset, which is significantly more challenging than the existing generation scenarios. Section 2 reviews related literature on MTPPs and AAEs. Section 3 presents the definition of multi-category MTPPs, and Section 4 discusses the usage of AAEs for incomplete, multi-category MTPP generation. Section 5 then presents the unique preprocessing and postprocessing techniques included in the feature mapping encoder and decoder. Section 6 discusses the experimental results, and Section 7 summarises the conclusions and future work. 2 RELATED WORK . MTPPs are widely used for modeling asynchronous stochastic discrete events in continuous time (Upadhyay et al., 2018; Du et al., 2016; Li et al., 2018; Türkmen et al., 2019). Usually, an MTPP is defined using a conditional intensity function (Türkmen et al.
, 2019), which provides the instantaneous rate of events given previous points. Intensity functions are often specified via processes such as the Poisson process, the Hawkes (self-exciting) process (Hawkes, 1971), and the self-correcting process (Isham & Westcott, 1979). In traditional MTPPs, the intensity function has to be explicitly defined; however, any mismatch between the manually defined intensity function and the underlying one of a process can have a significant adverse impact on the accuracy of models and outcomes. Deep generative networks avoid the requirement of manually identifying the intensity and thus allow the use of arbitrary and complex distributions. Recurrent neural networks (RNNs) with reinforcement learning have been widely used in recent years (Du et al., 2016; Li et al., 2018), and several hybrid and extended models have also been presented. A stochastic sequential model is proposed in (Sharma et al., 2019) as a combination of a deep state space model and a deterministic RNN for modeling MTPPs. FastPoint (Türkmen et al., 2019) uses deep RNNs to capture complex temporal patterns, while self-excitation dynamics within each mark are modeled using Hawkes processes. A semi-parametric generative model is introduced in (Zhu et al., 2019) for spatio-temporal event data by combining spatial statistical models with reinforcement learning. Advanced data collection techniques and online social media platforms produce complex event data, and social network analysis can now inform solutions to many societal issues (Bonchi et al., 2011). Many such processes of heterogeneous and complex events require a multi-category MTPP-based representation. Data integrity is also a major concern in social networks, as fake and misleading data are not uncommon (Muramudalige et al., 2019).
In many disciplines, such as economics and the biological and social sciences, removal of non-verifiable entries is crucial for maintaining the required data integrity, which in turn leads to incomplete and small datasets. Various techniques have been introduced to handle missing data in different contexts (Folch-Fortuny et al., 2015; MacNeil Vroomen et al., 2016). Generative adversarial networks (Goodfellow et al., 2014) have become an alternative for data generation without extensive problem-specific theoretical foundation or empirical verification (Yan, 2019). The initial GAN architecture (Goodfellow et al., 2014) is capable of capturing the exact distribution of continuous and complete data but cannot be used for learning the distribution of discrete variables (Choi et al., 2017). The more recent Wasserstein GAN (Arjovsky et al., 2017) has been used to implement generative TPP models (Xiao et al., 2018). medGAN is designed to learn the distribution of discrete features, such as diagnosis or medication codes, via a combination of an autoencoder and the adversarial framework (Choi et al., 2017). The Adversarial Autoencoder (AAE) (Makhzani et al., 2015) is a probabilistic autoencoder which uses the GAN framework as a variational inference algorithm for both discrete and continuous latent variables. The aggregated posterior distribution q(z) on the latent code is defined from the encoding function q(z|x) and the data distribution pd(x) as follows, where x denotes an input sample: q(z) = ∫x q(z|x) pd(x) dx (1) In the typical usage of an AAE, x represents consistent, discrete or continuous data samples where almost all data points are captured or complete in a given context. The challenge addressed in our paper is to apply an AAE to scattered and incomplete multi-category MTPP generation using our proposed feature mapping techniques together with a data approximation method.
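The aggregated posterior of Eq. (1) is, in practice, a Monte Carlo average of the encoder's posterior over the observed samples. The sketch below illustrates this with a toy one-dimensional Gaussian encoder; the encoder form, the σ value, and the data are illustrative assumptions, not the paper's model:

```python
import math

def gauss_pdf(z, mean, sigma=0.5):
    """Density of N(z; mean, sigma^2), standing in for the encoder posterior q(z|x)."""
    return math.exp(-(z - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def aggregated_posterior(z, data, encode):
    """Monte Carlo estimate of q(z) = E_{x ~ p_d}[ q(z|x) ] from Eq. (1):
    average the encoder posterior density q(z|x) over the observed samples x."""
    return sum(gauss_pdf(z, encode(x)) for x in data) / len(data)

data = [-1.0, 0.0, 1.0]        # toy samples standing in for draws from p_d(x)
encode = lambda x: 0.5 * x     # toy deterministic encoder mean
q_z = aggregated_posterior(0.0, data, encode)
```

Because the data and the toy encoder are symmetric about zero, the estimated aggregated posterior is symmetric as well.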
Details of such an AAE for sparse, incomplete, multi-category MTPP generation and of the feature mapping techniques are presented in Sections 4 and 5, respectively. 3 MULTI-CATEGORY MARKED TEMPORAL POINT PROCESSES . A marked temporal point process (MTPP) represents a set of asynchronous stochastic discrete actions/events in continuous time (Upadhyay et al., 2018; Li et al., 2018). Given the immense availability of heterogeneous and complex event data in recent years, it is important to model such complex events using multi-category MTPPs. A multi-category marked point is denoted as follows. A marked point eij is an event of category cj occurring at time ti, as shown in Figure 1. Tables in Appendix A illustrate the categories (i.e., cj) in our datasets, where these heterogeneous events exhibit complex dependencies and correlations. Usually, an MTPP analysis deals with an ensemble of marked temporal point processes (MTPPs). Multi-category MTPPs are described as follows. Let the kth MTPP be Hk, with marked points denoted e^k_ij ∈ Hk. With n events (marked points) and m categories in the kth MTPP, marked points are characterized as e^k_ij = (ti, cj), where i ∈ [1, n] and j ∈ [1, m]. Then, H = { e^k_ij = (ti, cj) ∈ Hk ; k ∈ [1, N] }, (2) where H represents the set of N multi-category MTPPs. Without loss of generality, we denote e^k_ij as eij in the following discussion. In some problem domains, i (times of occurrence) or j (categories) values may change rapidly, and some categories may not be recorded frequently. More importantly, in many problems in domains such as the social and behavioral sciences, not all marked points (eij) of an MTPP are known or observable, due to limitations of the information gathering process, confidentiality constraints, unverifiability, deception, etc.
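The ensemble H of Eq. (2) is naturally represented as N time-ordered lists of (ti, cj) pairs. A minimal sketch of this data structure follows; the category names are illustrative assumptions, not from the paper's datasets:

```python
# Each MTPP H_k is a time-ordered list of marked points e_ij = (t_i, c_j);
# the ensemble H holds N such processes, as in Eq. (2).
H = [
    [(0.5, "post"), (1.2, "comment"), (3.1, "post")],   # H_1
    [(0.8, "comment"), (2.4, "share")],                  # H_2 (shorter / sparser)
]

N = len(H)                                               # number of processes
categories = sorted({c for Hk in H for (_, c) in Hk})    # the m observed categories

# Marked points within each process stay ordered by occurrence time t_i.
assert all(all(a[0] <= b[0] for a, b in zip(Hk, Hk[1:])) for Hk in H)
```

Keeping each process sorted by time makes downstream intensity computations and feature mapping straightforward.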
A notable aspect of our work is the use of sparse, incomplete, multi-category MTPPs, where an actor (an MTPP) either did not carry out activities corresponding to certain event categories and marked points, or carried them out but the activities were not reported in the reliable or permissible sources. To address these challenges, we propose a feature mapping encoder and decoder which are capable of capturing the sparseness and incompleteness of the data. The proposed feature mapping techniques consist of multiple steps, including the calculation of cumulative probabilities for each category and a data approximation technique for incomplete data (briefly described in Algorithm 1). More details of the feature mapping encoder and decoder are discussed in Section 5.
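Algorithm 1 is not reproduced here, so the following is only one hypothetical reading of its "cumulative probabilities for each category" step: compute each category's empirical frequency in a process and accumulate over categories in a fixed order. The function name and category labels are assumptions for illustration:

```python
from collections import Counter

def category_cumulative_probs(mtpp):
    """Hypothetical sketch of the per-category cumulative-probability step:
    P(c_j) = count(c_j) / n, accumulated over categories in sorted order."""
    counts = Counter(c for (_, c) in mtpp)
    n = sum(counts.values())
    cum, out = 0.0, {}
    for c in sorted(counts):
        cum += counts[c] / n
        out[c] = cum
    return out

mtpp = [(0.5, "a"), (1.2, "b"), (3.1, "a"), (4.0, "c")]
cum = category_cumulative_probs(mtpp)
```

Such a mapping turns a variable-length, sparsely observed category sequence into a fixed set of values in [0, 1], which is one plausible way to feed incomplete processes to an autoencoder.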
The paper concentrates on the missing-data problem of MTPPs and applies an AAE to “incomplete multi-categorical MTPPs”. First, the problem description appears questionable. Point processes are a class of stochastic processes for modelling discrete event sequences in a continuous time domain. They are statistical models and have a well-defined mathematical meaning. The expression “incomplete point process” is quite confusing. One possible reason is that the authors fail to distinguish between the model and the data. One can say “incomplete data” or “incomplete observations of a model”, but “incomplete model” is not acceptable unless properly defined. Therefore, the proposed method seems not to be specific to point processes but rather to sequential data in general.
Maximum Entropy competes with Maximum Likelihood
1 INTRODUCTION . The maximum entropy (MAXENT) method was proposed within statistical physics (Jaynes, 1957; Balian, 2007; Pressé et al., 2013), and later found a wide range of interdisciplinary applications in data science, probabilistic inference, biological data modeling, etc.; see e.g. (Erickson & Smith, 2013). MAXENT estimates the unknown probabilities (that generated the data) by maximizing the Boltzmann-Gibbs-Shannon entropy under certain constraints that can be derived from the observed data (Erickson & Smith, 2013). MAXENT leads to non-parametric estimators whose form does not depend on the underlying mechanism that generated the data (i.e., prior assumptions). Also, MAXENT avoids the zero-probability problem: when operating on sparse data, so that certain values of the involved random quantity may not appear due to a small but non-zero probability, MAXENT still provides a controllable non-zero estimate for this small probability. MAXENT has several formal justifications (Jaynes, 1957; Chakrabarti & Chakrabarty, 2005; Baez et al., 2011; Van Campenhout & Cover, 1981; Topsøe, 1979; Shore & Johnson, 1980; Paris & Vencovská, 1997). But the following open problems are basic for MAXENT, because their insufficient understanding prevents its valid application. (i) Which constraints for entropy maximization are to be extracted from data, which are necessarily finite and noisy? (ii) When and how can these constraints lead to overfitting, where, due to noisy data, involving more constraints leads to poorer results? (iii) How do the predictions of MAXENT compare with those of other estimators, e.g., the (regularized) maximum likelihood? Here we approach these open problems via the tools of Bayesian decision theory (Cox & Hinkley, 1979). We assume that the data is given as an i.i.d.
sample of finite length M from a random quantity with n outcomes and unknown probabilities that are drawn from a non-informative prior Dirichlet density, or a mixture of such densities. Focusing on the sparse data regime M < n, we calculate average KL-distances between the true probabilities and their estimates, judge the quality of MAXENT under various constraints, and compare it with the (regularized) maximum-likelihood (ML) estimator. Our main results are that MAXENT does apply to sparse data, but demands specific prior information. We explored two different scenarios of such information. First, the unknown probabilities are most probably deterministic. Second, there are prior rank correlations between the inferred random quantity and its probabilities. Moreover, in the latter case the nonparametric MAXENT estimator is better, in terms of the average KL-distance, than the optimally regularized (parametric) ML estimator. Some of the above questions have already been studied in the literature. (Good, 1970; Christensen, 1985; Zhu et al., 1997; Pandey & Dukkipati, 2013) applied formal principles of statistics (e.g., the Minimum Description Length) to the selection of constraints (question (i)). Our approach to studying this question is direct and unambiguous, since, as shown below, Bayesian decision theory leads to clear criteria for the validity of MAXENT estimators. We can also compare all predictions with the optimal Bayesian estimator. The latter is normally not available in practice due to insufficient knowledge of prior details, but it still provides an important theoretical benchmark. Note that (Thomas, 1979; Lebanon & Lafferty, 2002; Kazama & Tsujii, 2005; Altun & Smola, 2006; Dudik, 2007; Rau, 2011; Campbell, 1999; Friedlander & Gupta, 2005) studied soft constraints that allow incorporation of prior assumptions into the MAXENT estimator, making it effectively parametric.
Here MAXENT will be taken in its original meaning as providing non-parametric estimators. This paper is organized as follows. Section 2 recalls the tenets of Bayesian decision theory and describes the data-generation set-up. Section 3 introduces and motivates the Bayesian estimator and the regularized ML estimator. Section 4 recalls the basic formulas of MAXENT, applies them to the studied set-up, and discusses their symmetry features. Section 5 compares the predictions of MAXENT with the regularized ML. We close in the last section by discussing open problems. Appendix A shows how to apply MAXENT to categorical data. Appendix B presents our preliminary results on the affine symmetry of MAXENT estimators, and establishes relations with the minimum entropy principle proposed in (Good, 1970; Christensen, 1985; Zhu et al., 1997; Pandey & Dukkipati, 2013). 2 BAYESIAN DECISION THEORY . Consider a random quantity Z with values (z1, ..., zn) and respective probabilities q = (q1, ..., qn) = (q(z1), ..., q(zn)). We look at an i.i.d. sample of length M:

D = (Z1, ..., ZM),  m = {m_k}_{k=1}^{n},  M ≡ ∑_{k=1}^{n} m_k,  (1)

where Zu ∈ (z1, ..., zn) (u = 1, ..., M), and m_k is the number of appearances of z_k in (1). This sample will be an instance of our data; e.g., the constraints of MAXENT will be determined from it. The conditional probability of the data D reads

P(D | q1, ..., qn) = P(m1, ..., mn | q1, ..., qn) = M! ∏_{k=1}^{n} q_k^{m_k} / m_k!.  (2)

To check the performance of various inference methods, the probabilities q̂(D) = {q̂_k(D)}_{k=1}^{n} inferred from (1) are compared with the true probabilities q = {q(z_k)}_{k=1}^{n} via the KL-distance

K[q, q̂(D)] = ∑_{k=1}^{n} q_k ln( q_k / q̂_k(D) ),  (3)

where concrete forms of q̂(D) are given below. The choice of distance (3) is motivated below, where we recall that it implies the global optimality of the standard (posterior-mean) Bayesian estimator.
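The count vector of Eq. (1), the frequency (proper ML) estimate, and the KL-distance of Eq. (3) can be sketched directly; the specific sample and true distribution below are illustrative assumptions:

```python
import math
from collections import Counter

def kl_distance(q, q_hat):
    """K[q, q_hat] = sum_k q_k * ln(q_k / q_hat_k), as in Eq. (3)."""
    return sum(qk * math.log(qk / qhk) for qk, qhk in zip(q, q_hat) if qk > 0)

# Counts m_k from an i.i.d. sample D over outcomes z_1..z_n, as in Eq. (1).
sample = ["z1", "z1", "z2", "z1", "z3"]
m = Counter(sample)
M = sum(m.values())
freq = [m[z] / M for z in ("z1", "z2", "z3")]   # proper ML estimate m_k / M

q_true = [0.5, 0.25, 0.25]
k = kl_distance(q_true, freq)
```

Note that the q_k > 0 guard handles outcomes with zero true probability; a zero in q_hat with a nonzero q_k would make the distance infinite, which is exactly the zero-probability problem discussed above.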
Another possible choice of distance is the squared (symmetric) Hellinger distance: dist_H[q, q̂] ≡ 1 − ∑_{k=1}^{n} √(q_k q̂_k). In our situation, it frequently leads to the same qualitative results as (3). How do we compare various estimators with each other, and decide on the quality of a given estimator? Bayesian decision theory answers this question; see chapter 11 of (Cox & Hinkley, 1979). The theory assumes that the probabilities of (z1, ..., zn) are generated from a known probability density P(q1, ..., qn) that encapsulates the prior information about the situation. It then decides on the quality of an estimator q̂(D) via the average distance

⟨K⟩ = ∫ ∏_{k=1}^{n} dq_k P(q1, ..., qn) K̄,  K̄ = ∑_D P(D|q) K[q, q̂(D)],  (4)

where K̄ is the average of (3) over samples (1) of fixed length M. Sometimes Bayesian decision theory replaces the distance by a utility, loss, etc. (Cox & Hinkley, 1979). Note the difference between the proper Bayesian approach and Bayesian decision theory; cf. chapters 10 and 11 in (Cox & Hinkley, 1979). The former employs the data for moving from the prior (5) to the posterior (7). It averages over the prior, e.g., when calculating the posterior mean. The latter advises on choosing estimators, whose form may or may not depend on the prior; see below for examples. The decision theory averages both over the data and over the prior, as seen in (4). For the prior density of q = {q_k}_{k=1}^{n} we choose the Dirichlet density (or a mixture of such densities, as seen below) (Frigyik et al., 2010; Schafer, 1997):

P(q1, ..., qn; α1, ..., αn) = ( Γ[∑_{k=1}^{n} α_k] / ∏_{k=1}^{n} Γ[α_k] ) ∏_{k=1}^{n} q_k^{α_k − 1} δ( ∑_{k=1}^{n} q_k − 1 ),  (5)

where Γ[x] = ∫_0^∞ dy y^{x−1} e^{−y} is Euler's Γ-function and the delta-function δ(∑_{k=1}^{n} q_k − 1) ensures the normalization of probabilities. Parameters α_k > 0 determine the prior weight of q_k (Frigyik et al.
, 2010; Schafer, 1997):

⟨q_k⟩ ≡ ∫ ∏_{l=1}^{n} dq_l q_k P(q1, ..., qn; α1, ..., αn) = α_k / A,  A ≡ ∑_{k=1}^{n} α_k,  (6)

where the integration range goes over the simplex 0 ≤ q_k ≤ 1 for all k, with ∑_{k=1}^{n} q_k = 1. The Dirichlet density (5) is unique in holding several desired features of a non-informative prior density over unknown probabilities; see (Frigyik et al., 2010; Schafer, 1997) for reviews. An important feature of density (5) is that it is conjugate to the multinomial conditional probability (2):

P(q1, ..., qn | m1, ..., mn) = P(q1, ..., qn; α1 + m1, ..., αn + mn).  (7)

Eq. (7) is convenient when studying i.i.d. samples (1) of discrete random quantities. Here we assume that the prior density is known exactly [see however (32)]. In practice, such knowledge need not be available. For example, it may be known that the prior density belongs to the Dirichlet family, but its hyper-parameters {α_k}_{k=1}^{n} are unknown and should be determined from the data, e.g., via empirical Bayes procedures; see (Frigyik et al., 2010; Schafer, 1997; Claesen & De Moor, 2015; Ran & Hu, 2017; Bergstra & Bengio, 2012) for reviews on hyper-parameter estimation. 3 BAYESIAN AND REGULARIZED MAXIMUM LIKELIHOOD ( ML ) ESTIMATORS . Starting from (4), we find the best estimator in terms of the minimal average KL-distance:

min[⟨K⟩] = ∑_D P(D) min[ ∫ ∏_{k=1}^{n} dq_k P(q|D) K[q, q̂(D)] ],  (8)

where the minimization goes over the inferred probabilities {q̂(D)}, and where P(q|D) is recovered from P(D|q) via P(D) P(q|D) = P(D|q) P(q); cf. (1, 2). The equality in (8) follows from the fact that if q̂(D) minimizes ∫ ∏_{k=1}^{n} dq_k P(q|D) K[q, q̂(D)], then it minimizes each term of the sum for every D, and thus minimizes the whole sum.
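The Dirichlet prior (5) and its mean (6) can be checked numerically. A draw from Dirichlet(α1, ..., αn) is obtained by normalizing independent Gamma(α_k, 1) variates, a standard construction; the sketch below uses only the Python standard library, with illustrative α values:

```python
import random

def dirichlet_sample(alphas, rng):
    """Draw q ~ Dirichlet(alpha_1, ..., alpha_n) of Eq. (5) by normalizing
    independent Gamma(alpha_k, 1) variates (a standard construction)."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

rng = random.Random(0)
alphas = [1.0, 2.0, 3.0]
A = sum(alphas)
draws = [dirichlet_sample(alphas, rng) for _ in range(4000)]
# Eq. (6): the prior mean of q_k is alpha_k / A; check it by Monte Carlo.
mean_q = [sum(d[k] for d in draws) / len(draws) for k in range(3)]
```

Each draw lies on the probability simplex by construction, and the empirical means converge to α_k / A as the number of draws grows.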
Implementing the constraint ∑_{k=1}^{n} q̂_k(D) = 1 via a Lagrange multiplier, we get from (8):

argmin[ ∫ ∏_{k=1}^{n} dq_k P(q|D) K[q, q̂(D)] ] = { ∫ ∏_{k=1}^{n} dq_k q_l P(q|D) }_{l=1}^{n}.  (9)

We obtained the posterior average in (9) because we employed the KL distance K[q, q̂(D)]. The optimal estimator is different for another distance, e.g., the KL distance K[q̂(D), q] of q̂(D) from q, or the Hellinger distance. Note that in the proper Bayesian approach the posterior mean is simply postulated as an estimator, since it is just a characteristic of the posterior distribution. In the present Bayesian decision approach the posterior mean emerges from minimizing a specific (viz. KL) distance. If another distance is used, the posterior mean is no longer optimal. If the prior is a single Dirichlet density (5), we get from (7, 9) for the Bayesian estimator:

p(z_k) = (m_k + α_k) / (M + A).  (10)

The average KL-distance (4) for the estimator (10) reads from (7, 2) (denoting ψ[x] ≡ (d/dx) ln Γ[x]):

⟨K[q, p]⟩ = (1/A) ∑_{k=1}^{n} α_k ψ(1 + α_k) − ψ(1 + A) + ln(M + A) − ( Γ[M+1] Γ[A] / Γ[M+A+1] ) ∑_{k=1}^{n} ∑_{m=0}^{M} Γ[m+1+α_k] Γ[M−m+A−α_k] ln(m+α_k) / ( Γ[α_k] Γ[A−α_k] Γ[m+1] Γ[M−m+1] ).  (11)

If the prior density is a mixture of Dirichlet densities with weights {π_a}_{a=1}^{L}:

∑_{a=1}^{L} π_a P(q1, ..., qn; α_1^{[a]}, ..., α_n^{[a]}),  ∑_{a=1}^{L} π_a = 1,  (12)

then instead of (6) and (10) we have from (9):

⟨q_k⟩ = ∑_{a=1}^{L} π_a α_k^{[a]} / A^{[a]},  A^{[a]} ≡ ∑_{k=1}^{n} α_k^{[a]},  (13)

p(z_k) = ( ∑_{a=1}^{L} π_a Φ^{[a]} (m_k + α_k^{[a]}) / (M + A^{[a]}) ) / ( ∑_{a=1}^{L} π_a Φ^{[a]} ),  Φ^{[a]} ≡ ( Γ[A^{[a]}] / Γ[M + A^{[a]}] ) ∏_{k=1}^{n} Γ[m_k + α_k^{[a]}] / Γ[α_k^{[a]}].  (14)

For a mixture prior density, the Bayesian estimator (14) depends on all the numbers {m_k; α_k^{[1]}, ..., α_k^{[L]}}, not just on m_k.
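The single-Dirichlet Bayesian estimator of Eq. (10) is a one-liner; the counts and hyper-parameters below are illustrative:

```python
def bayes_estimate(m, alphas):
    """Posterior-mean (Bayesian) estimator of Eq. (10):
    p(z_k) = (m_k + alpha_k) / (M + A) under a Dirichlet(alpha) prior."""
    M, A = sum(m), sum(alphas)
    return [(mk + ak) / (M + A) for mk, ak in zip(m, alphas)]

# Sparse sample: z_3 never appeared (m_3 = 0) yet still gets non-zero probability.
p = bayes_estimate([3, 1, 0], [1.0, 1.0, 1.0])
```

With a uniform prior (all α_k = 1) this is the familiar Laplace add-one smoothing, and the estimator never assigns zero probability to an unseen outcome.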
Below we illustrate that not knowing the details of the prior mixture precisely can lead to serious losses when applying Bayesian estimators. It is interesting (both conceptually and practically) to have a simple estimator where the dependence on the prior is reduced to a single parameter. A good candidate is the regularized maximum likelihood (ML) estimator (see (Hausser & Strimmer, 2009) for a review):

p_ML(z_k) ≡ (m_k + b) / (M + nb) = λ m_k/M + (1 − λ)/n,  λ = M/(M + nb),  b ≥ 0,  0 < λ < 1,  (15)

where the regularizer b (or λ) takes care of the fact that for a finite sample (1), not all values z_k had a chance to appear (i.e., m_k = 0 for them). With b > 0, (15) thus avoids claiming a zero probability. Eq. (15) is a shrinkage estimator, where the proper ML estimator m_k/M is shrunk towards the uniform distribution 1/n by the shrinkage factor λ. The proper ML estimator p_ML(z_k)|_{b=0} will be shown to be a meaningless estimator for not very long samples (1), producing results that are worse than {q(z_k) = 1/n}_{k=1}^{n}. Moreover, for such samples the correct choice of b (based on prior information) is crucial, i.e., (15) is generally a parametric estimator. The estimator (15) recovers the true probabilities for M → ∞ (Cox & Hinkley, 1979), where n and b are fixed, hence λ → 1 in (15). For the optimal estimator (15), the value of b is found by minimizing the average KL-distance (4). When the prior is given by a Dirichlet density (5), the average KL-distance amounts to (11), where we need to replace ln(M + A) → ln(M + nb) and ln(m + α_k) → ln(m + b). Now (9, 10) imply that for a homogeneous Dirichlet prior, i.e., for (5) with α_k = α, we have b_opt = α for the optimal value of b, i.e., the regularized ML estimator coincides with the Bayesian estimator: p_ML(z_k) = p(z_k). This no longer holds for a mixture of Dirichlet prior densities.
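The two forms of Eq. (15), the additive form (m_k + b)/(M + nb) and the shrinkage form λ m_k/M + (1 − λ)/n, are algebraically identical; the sketch below verifies this with illustrative counts and regularizer b:

```python
def reg_ml(m, b):
    """Regularized ML estimator of Eq. (15): p_ML(z_k) = (m_k + b) / (M + n*b)."""
    M, n = sum(m), len(m)
    return [(mk + b) / (M + n * b) for mk in m]

m, b = [3, 1, 0], 0.5
M, n = sum(m), len(m)
lam = M / (M + n * b)                                # shrinkage factor lambda
p1 = reg_ml(m, b)                                    # additive form
p2 = [lam * mk / M + (1 - lam) / n for mk in m]      # shrinkage form, same values
```

As the text notes, b > 0 keeps the probability of the unseen outcome (m_3 = 0) strictly positive, while b = 0 recovers the proper ML frequencies.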
This paper investigates maximum entropy (MaxEnt) inference and compares it to a Bayesian estimator and regularized maximum likelihood for finite models. To assess the accuracy of the different estimators, the authors use the average KL-divergence between the ground truth and the estimator, where the average is computed over all datasets of a given size $M$ and all probabilistic models of size $n$ (generated by some prior, either a single Dirichlet or a mixture of Dirichlets). Using numerical experiments, the authors find that the performance of MaxEnt deteriorates for sparse data generated from uniform models. However, by exploiting knowledge about the order of probabilities, MaxEnt can outperform regularized maximum likelihood.
Maximum Entropy competes with Maximum Likelihood
1 INTRODUCTION . The maximum entropy ( MAXENT ) method was proposed within statistical physics ( Jaynes , 1957 ; Balian , 2007 ; Pressé et al. , 2013 ) , and later on got a wide range of inter-disciplinary applications in data science , probabilistic inference , biological data modeling etc ; see e.g . ( Erickson & Smith , 2013 ) . MAXENT estimates unknown probabilities ( that generated data ) via maximizing the Boltzmann-Gibbs-Shannon entropy under certain constraints which can be derived from the observed data ( Erickson & Smith , 2013 ) . MAXENT leads to non-parametric estimators whose form does not depend on the underlying mechanism that generated data ( i.e . prior assumptions ) . Also , MAXENT avoids the zero-probability problem , i.e . when operating on a sparse data—so that certain values of the involved random quantity may not appear due to a small , but non-zero probability—MAXENT still provides a controllable non-zero estimate for this small probability . MAXENT has has several formal justification ( Jaynes , 1957 ; Chakrabarti & Chakrabarty , 2005 ; Baez et al. , 2011 ; Van Campenhout & Cover , 1981 ; Topsøe , 1979 ; Shore & Johnson , 1980 ; Paris & Vencovská , 1997 ) . But the following open problems are basic for MAXENT , because their insufficient understanding prevents its valid applications . ( i ) Which constraints of entropy maximization are to be extracted from data , which is necessarily finite and noisy ? ( ii ) When and how these constraints can lead to overfitting , where —due to a noisy data—involving more constraints leads to poorer results ? ( iii ) How predictions of MAXENT compare with those of other estimators , e.g . the ( regularized ) maximum likelihood ? Here we approach these open problems via tools of Bayesian decision theory ( Cox & Hinkley , 1979 ) . We assume that the data is given as an i.i.d . 
sample of a finite length M from a random quantity with n outcomes and unknown probabilities that are instanced from a non-informative prior Dirichlet density , or a mixture of such densities . Focusing on the sparse data regime M < n we calculate average KL-distances between real probabilities and their estimates , decide on the quality of MAXENT under various constraints , and compare it with the ( regularized ) maximum-likelihood ( ML ) estimator . Our main results are that MAXENT does apply to sparse data , but does demand specific prior information . We explored two different scenarios of such information . First , the unknown probabilities are most probably deterministic . Second , there are prior rank correlations between the inferred random quantity and its probabilities . Moreover , in the latter case the nonparametric MAXENT estimator is better in terms of the average KL-distance than the optimally regularized ML ( parametric ) estimator . Some of above questions were already studied in literature . ( Good , 1970 ; Christensen , 1985 ; Zhu et al. , 1997 ; Pandey & Dukkipati , 2013 ) applied formal principles of statistics ( e.g . the Minimum Description Length ) to the selection of constraints ( question ( i ) ) . Our approach to studying this question will be direct and unambiguous , since , as shown below , the Bayesian decision theory leads to clear criteria for the validity of MAXENT estimators . We can also compare all predictions with the optimal Bayesian estimator . The latter is normally not available in practice due to insufficient knowledge of prior details , but it still does provide an important theoretical benchmark . Note that ( Thomas , 1979 ; Lebanon & Lafferty , 2002 ; Kazama & Tsujii , 2005 ; Altun & Smola , 2006 ; Dudik , 2007. ; Rau , 2011 ; Campbell , 1999 ; Friedlander & Gupta , 2005 ) studied soft constraints that allow incorporation of prior assumptions into the MAXENT estimator making it effectively parametric . 
Here MAXENT will be taken in its original meaning as providing non-parametric estimators. This paper is organized as follows. Section 2 recalls the tenets of Bayesian decision theory and describes the data-generation set-up. Section 3 introduces and motivates the Bayesian estimator and the regularized ML estimator. Section 4 recalls the basic formulas of MAXENT, applies them to the studied set-up, and discusses their symmetry features. Section 5 compares predictions of MAXENT with the regularized ML. We close in the last section by discussing open problems. Appendix A shows how to apply MAXENT to categorical data. Appendix B presents our preliminary results on the affine symmetry of MAXENT estimators, and establishes relations with the minimum entropy principle proposed in (Good, 1970; Christensen, 1985; Zhu et al., 1997; Pandey & Dukkipati, 2013).

2 BAYESIAN DECISION THEORY . Consider a random quantity Z with values (z_1, ..., z_n) and respective probabilities q = (q_1, ..., q_n) = (q(z_1), ..., q(z_n)). We look at an i.i.d. sample of length M:

D = (Z_1, ..., Z_M),  m = {m_k}_{k=1}^{n},  M ≡ ∑_{k=1}^{n} m_k,   (1)

where Z_u ∈ (z_1, ..., z_n) (u = 1, ..., M), and m_k is the number of appearances of z_k in (1). This sample will be an instance of our data; e.g. the constraints of MAXENT will be determined from it. The conditional probability of data D reads

P(D | q_1, ..., q_n) = P(m_1, ..., m_n | q_1, ..., q_n) = M! ∏_{k=1}^{n} q_k^{m_k} / m_k!.   (2)

To check the performance of various inference methods, the probabilities q̂(D) = {q̂_k(D)}_{k=1}^{n} inferred from (1) are compared with the true probabilities q = {q(z_k)}_{k=1}^{n} via the KL-distance

K[q, q̂(D)] = ∑_{k=1}^{n} q_k ln( q_k / q̂_k(D) ),   (3)

where concrete forms of q̂(D) are given below. The choice of distance (3) is motivated below, where we recall that it implies the global optimality of the standard (posterior-mean) Bayesian estimator.
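As a concrete illustration of the set-up in Eqs. (1)–(3), the following minimal Python sketch (function names are our own, not from the paper) computes the counts m_k from an i.i.d. sample and the KL-distance between the true probabilities and an estimate:

```python
import math
from collections import Counter

def counts(sample, n):
    """m_k: number of appearances of outcome k in the sample (Eq. 1)."""
    c = Counter(sample)
    return [c.get(k, 0) for k in range(n)]

def kl_distance(q, q_hat):
    """K[q, q_hat] = sum_k q_k ln(q_k / q_hat_k) (Eq. 3), with 0 ln 0 = 0."""
    return sum(qk * math.log(qk / pk) for qk, pk in zip(q, q_hat) if qk > 0)

sample = [0, 2, 2, 1, 0]            # M = 5 draws over n = 4 outcomes
m = counts(sample, 4)               # -> [2, 1, 2, 0]
q_true = [0.4, 0.2, 0.3, 0.1]
q_hat = [0.35, 0.25, 0.3, 0.1]      # some estimate inferred from the data
print(m, kl_distance(q_true, q_hat))
```

Note that K[q, q̂] ≥ 0 for any pair of normalized distributions, with equality only at q̂ = q.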
Another possible choice of distance is the squared (symmetric) Hellinger distance: d_H[q, q̂] ≡ 1 − ∑_{k=1}^{n} √(q_k q̂_k). In our situation, it frequently leads to the same qualitative results as (3). How can one compare various estimators with each other, and decide on the quality of a given estimator? Bayesian decision theory answers this question; see chapter 11 of (Cox & Hinkley, 1979). The theory assumes that the probabilities of (z_1, ..., z_n) are generated from a known probability density P(q_1, ..., q_n) that encapsulates the prior information about the situation. Next it decides on the quality of an estimator q̂(D) via the average distance

⟨K̄⟩ = ∫ ∏_{k=1}^{n} dq_k P(q_1, ..., q_n) K̄,  K̄ = ∑_D P(D|q) K[q, q̂(D)],   (4)

where K̄ is the average of (3) over samples (1) with fixed length M. Sometimes Bayesian decision theory replaces the distance by a utility, loss, etc. (Cox & Hinkley, 1979). Note the difference between the proper Bayesian approach and Bayesian decision theory; cf. chapters 10 and 11 in (Cox & Hinkley, 1979). The former employs the data for moving from the prior (5) to the posterior (7). It averages over the prior, e.g. when calculating the posterior mean. The latter advises on choosing estimators, whose form may or may not depend on the prior; see below for examples. The decision theory averages both over the data and over the prior, as seen in (4). For the prior density of q = {q_k}_{k=1}^{n} we choose the Dirichlet density (or a mixture of such densities, as seen below) (Frigyik et al., 2010; Schafer, 1997):

P(q_1, ..., q_n; α_1, ..., α_n) = ( Γ[∑_{k=1}^{n} α_k] / ∏_{k=1}^{n} Γ[α_k] ) ∏_{k=1}^{n} q_k^{α_k − 1} δ( ∑_{k=1}^{n} q_k − 1 ),   (5)

where Γ[x] = ∫_0^∞ dy y^{x−1} e^{−y} is Euler's Γ-function and the delta-function δ(∑_{k=1}^{n} q_k − 1) ensures the normalization of probabilities. Parameters α_k > 0 determine the prior weight of q_k (Frigyik et al.
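The average distance (4) can be approximated by Monte Carlo: draw q from the Dirichlet prior (5), draw a sample of length M, and average the KL-distance of any estimator. The sketch below is our own stdlib-only illustration (not code from the paper); `estimator` is any map from the counts m to a probability vector:

```python
import math
import random

def sample_dirichlet(alpha, rng):
    """Draw q = (q_1, ..., q_n) from the Dirichlet density (Eq. 5)."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def avg_kl(estimator, alpha, M, trials=2000, seed=0):
    """Monte Carlo estimate of <K-bar> (Eq. 4): average K[q, q_hat(D)]
    over the prior and over i.i.d. samples of fixed length M."""
    rng = random.Random(seed)
    n, total = len(alpha), 0.0
    for _ in range(trials):
        q = sample_dirichlet(alpha, rng)
        draws = rng.choices(range(n), weights=q, k=M)
        m = [draws.count(k) for k in range(n)]
        q_hat = estimator(m, alpha)
        total += sum(qk * math.log(qk / pk)
                     for qk, pk in zip(q, q_hat) if qk > 0)
    return total / trials

# Two estimators to compare: the posterior mean (Eq. 10 of Section 3)
# and a data-independent uniform guess.
bayes = lambda m, a: [(mk + ak) / (sum(m) + sum(a)) for mk, ak in zip(m, a)]
uniform = lambda m, a: [1.0 / len(m)] * len(m)
```

For a flat prior (α_k = 1, n = 4, M = 5), the posterior-mean estimator attains a visibly smaller average KL-distance than the uniform guess.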
, 2010; Schafer, 1997):

⟨q_k⟩ ≡ ∫ ∏_{l=1}^{n} dq_l q_k P(q_1, ..., q_n; α_1, ..., α_n) = α_k / A,  A ≡ ∑_{k=1}^{n} α_k,   (6)

where the integration range goes over the simplex 0 ≤ q_k ≤ 1 for all k, with ∑_{k=1}^{n} q_k = 1. The Dirichlet density (5) is unique in possessing several desired features of a non-informative prior density over unknown probabilities; see (Frigyik et al., 2010; Schafer, 1997) for reviews. An important feature of density (5) is that it is conjugate to the multinomial conditional probability (2):

P(q_1, ..., q_n | m_1, ..., m_n) = P(q_1, ..., q_n; α_1 + m_1, ..., α_n + m_n).   (7)

Eq. (7) is convenient when studying i.i.d. samples (1) of discrete random quantities. Here we assume that the prior density is known exactly [see however (32)]. In practice, such knowledge need not be available. For example, it may be known that the prior density belongs to the Dirichlet family, but its hyper-parameters {α_k}_{k=1}^{n} are unknown and should be determined from the data, e.g. via empirical Bayes procedures; see (Frigyik et al., 2010; Schafer, 1997; Claesen & De Moor, 2015; Ran & Hu, 2017; Bergstra & Bengio, 2012) for reviews on hyper-parameter estimation.

3 BAYESIAN AND REGULARIZED MAXIMUM LIKELIHOOD (ML) ESTIMATORS . Starting from (4), we find the best estimator in terms of the minimal average KL-distance:

min[⟨K̄⟩] = ∑_D P(D) min[ ∫ ∏_{k=1}^{n} dq_k P(q|D) K[q, q̂(D)] ],   (8)

where the minimization goes over the inferred probabilities {q̂(D)}, and where P(q|D) is recovered from P(D|q) via P(D) P(q|D) = P(D|q) P(q); cf. (1, 2). The equality in (8) follows from the fact that if q̂(D) minimizes ∫ ∏_{k=1}^{n} dq_k P(q|D) K[q, q̂(D)], then it minimizes each term of the sum for every D, and thus minimizes the whole sum.
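The conjugacy property (7) and the prior mean (6) admit a two-line sketch (our own illustration; names are hypothetical):

```python
def dirichlet_mean(alpha):
    """Eq. (6): <q_k> = alpha_k / A, with A = sum_k alpha_k."""
    A = sum(alpha)
    return [a / A for a in alpha]

def posterior_params(alpha, m):
    """Conjugacy (Eq. 7): a Dirichlet(alpha) prior plus counts m yields
    a Dirichlet(alpha_1 + m_1, ..., alpha_n + m_n) posterior."""
    return [a + mk for a, mk in zip(alpha, m)]

# prior Dirichlet(1, 1, 1) updated with counts m = (2, 0, 3)
post = posterior_params([1.0, 1.0, 1.0], [2, 0, 3])   # -> [3.0, 1.0, 4.0]
```

Here the posterior mean dirichlet_mean(post) equals [3/8, 1/8, 4/8], matching (m_k + α_k)/(M + A).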
Then, implementing the constraint ∑_{k=1}^{n} q̂_k(D) = 1 via a Lagrange multiplier, we get from (8):

argmin[ ∫ ∏_{k=1}^{n} dq_k P(q|D) K[q, q̂(D)] ] = { ∫ ∏_{k=1}^{n} dq_k q_l P(q|D) }_{l=1}^{n}.   (9)

We obtained in (9) the posterior average, because we employed the KL distance K[q, q̂(D)]. The optimal estimator will be different if another distance is used, e.g. the KL distance K[q̂(D), q] of q̂(D) from q, or the Hellinger distance. Note that in the proper Bayesian approach the posterior mean is simply postulated to be an estimator, since it is just a characteristic of the posterior distribution. In the present Bayesian decision approach the posterior mean emerges from minimizing a specific (viz. KL) distance. If another distance is used, the posterior mean is no longer optimal. If the prior is a single Dirichlet density (5), we get from (7, 9) for the Bayesian estimator:

p(z_k) = (m_k + α_k) / (M + A).   (10)

The average KL-distance (4) for the estimator (10) reads from (7, 2) (denoting ψ[x] ≡ (d/dx) ln Γ[x]):

⟨K̄[q, p]⟩ = (1/A) ∑_{k=1}^{n} α_k ψ(1 + α_k) − ψ(1 + A) + ln(M + A)
  − ( Γ[M+1] Γ[A] / Γ[M+A+1] ) ∑_{k=1}^{n} ∑_{m=0}^{M} Γ[m+1+α_k] Γ[M−m+A−α_k] ln(m + α_k) / ( Γ[α_k] Γ[A−α_k] Γ[m+1] Γ[M−m+1] ).   (11)

If the prior density is given by a mixture of Dirichlet densities with weights {π_a}_{a=1}^{L}:

∑_{a=1}^{L} π_a P(q_1, ..., q_n; α_1^[a], ..., α_n^[a]),  ∑_{a=1}^{L} π_a = 1,   (12)

then instead of (6) and (10) we have from (9):

⟨q_k⟩ = ∑_{a=1}^{L} π_a α_k^[a] / A^[a],  A^[a] ≡ ∑_{k=1}^{n} α_k^[a],   (13)

p(z_k) = ( ∑_{a=1}^{L} π_a Φ^[a] (m_k + α_k^[a]) / (M + A^[a]) ) / ( ∑_{a=1}^{L} π_a Φ^[a] ),  Φ^[a] ≡ ( Γ[A^[a]] / Γ[M + A^[a]] ) ∏_{k=1}^{n} Γ[m_k + α_k^[a]] / Γ[α_k^[a]].   (14)

For a mixture prior density, the Bayesian estimator (14) depends on all numbers {m_k; α_k^[1], ..., α_k^[L]}, not just on m_k.
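The mixture estimator (14) is easy to implement numerically; the component weights π_a Φ^[a] are best computed in log-space with `math.lgamma` to avoid overflow. A sketch (our own illustration, not the authors' code):

```python
import math

def mixture_estimator(m, alphas, pis):
    """Eq. (14): posterior mean under a mixture of Dirichlet priors (Eq. 12).
    Phi[a] is the marginal likelihood of the counts under component a."""
    M = sum(m)
    log_phi = []
    for alpha in alphas:
        A = sum(alpha)
        lp = math.lgamma(A) - math.lgamma(M + A)
        lp += sum(math.lgamma(mk + ak) - math.lgamma(ak)
                  for mk, ak in zip(m, alpha))
        log_phi.append(lp)
    mx = max(log_phi)                      # log-sum-exp style normalization
    w = [pi * math.exp(lp - mx) for pi, lp in zip(pis, log_phi)]
    Z = sum(w)
    return [sum(w[a] * (m[k] + alphas[a][k]) / (M + sum(alphas[a]))
                for a in range(len(alphas))) / Z
            for k in range(len(m))]
```

With a single component (L = 1), the mixture estimator reduces to the single-Dirichlet posterior mean (10).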
Below we illustrate that not knowing precisely the details of the prior mixture can lead to serious losses when applying Bayesian estimators. It is interesting (both conceptually and practically) to have a simple estimator, where the dependence on the prior is reduced to a single parameter. A good candidate is the regularized maximum likelihood (ML) estimator (see (Hausser & Strimmer, 2009) for a review):

p_ML(z_k) ≡ (m_k + b) / (M + nb) = λ (m_k / M) + (1 − λ) (1/n),  λ = M / (M + nb),  b ≥ 0,  0 < λ < 1,   (15)

where the regularizer b (or λ) takes care of the fact that for a finite sample (1) not all values z_k had a chance to appear (i.e. m_k = 0 for them). Then (15) avoids claiming a zero probability, due to b > 0. Eq. (15) is a shrinkage estimator, where the proper ML estimator m_k/M is shrunk towards the uniform distribution 1/n by the shrinkage factor λ. The proper ML estimator p_ML(z_k)|_{b=0} will be shown to be a meaningless estimator for not very long samples (1), producing results that are worse than {q(z_k) = 1/n}_{k=1}^{n}. Moreover, for such samples the correct choice of b (based on the prior information) is crucial, i.e. (15) is generally a parametric estimator. The estimator (15) recovers the true probabilities for M → ∞ (Cox & Hinkley, 1979), where n and b are fixed, hence λ → 1 in (15). For the optimal estimator (15), the value of b is found by minimizing the average KL-distance (4). When the prior is given by a Dirichlet density (5), the average KL-distance amounts to (11), where we need to replace ln(M + A) → ln(M + nb) and ln(m + α_k) → ln(m + b). Now (9, 10) imply that for a homogeneous Dirichlet prior, i.e. for (5) with α_k = α, we have b_opt = α for the optimal value of b, i.e. the regularized ML estimator coincides with the Bayesian estimator: p_ML(z_k) = p(z_k). This no longer holds for a mixture of Dirichlet prior densities.
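Both forms of the shrinkage estimator (15) can be checked against each other in a few lines (our own illustration):

```python
def regularized_ml(m, b):
    """Eq. (15), first form: p_ML(z_k) = (m_k + b) / (M + n*b)."""
    M, n = sum(m), len(m)
    return [(mk + b) / (M + n * b) for mk in m]

def shrinkage_form(m, b):
    """Eq. (15), second form: lam * m_k/M + (1 - lam)/n, lam = M/(M + n*b)."""
    M, n = sum(m), len(m)
    lam = M / (M + n * b)
    return [lam * mk / M + (1 - lam) / n for mk in m]
```

For a homogeneous Dirichlet prior with α_k = α, setting b = α makes (15) coincide with the Bayesian estimator (10): with counts m = (2, 0, 3) and b = 1 both give (3/8, 1/8, 4/8).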
This paper considers the maximum entropy (MAXENT) method for estimating underlying probabilities over a finite alphabet, i.e., the multinomial model. The authors compare MAXENT with the regularized maximum likelihood (that is, the Bayesian estimator under the Dirichlet prior with a common hyperparameter) and the Bayesian estimator with a general Dirichlet prior, in terms of the Bayes risk, i.e., the KL-divergence from the true distribution to the estimated distribution, averaged over the prior. The authors also consider the case where the prior is extended to a mixture of Dirichlet distributions. These comparisons are done numerically with synthetic data.
Learning Contextualized Knowledge Graph Structures for Commonsense Reasoning
1 INTRODUCTION . Commonsense knowledge is essential for developing human-level artificial intelligence systems that can understand and interact with the real world . However , commonsense knowledge is assumed by humans and thus rarely written down in text corpora for machines to learn from and make inferences with . Fig . 1 shows an example in a popular commonsense reasoning benchmark named CommonsenseQA ( Talmor et al. , 2019 ) . The knowledge about the relations between concepts , e.g. , the fact triple ( print , Requires , use paper ) , is not explicitly given in the question and answer . Without important background knowledge as clues , natural language understanding ( NLU ) models may fail to answer such simple commonsense questions that are trivial to humans . Current commonsense reasoning models can be classified into retrieval-augmented methods ( Banerjee et al. , 2019 ; Pan et al. , 2019 ) and KG-augmented methods ( Wang et al. , 2019b ; Kapanipathi et al. , 2020 ) . Retrieval-augmented methods retrieve relevant sentences from an external corpus such as Wikipedia . The retrieved sentences are usually not interconnected , and their unstructured nature makes it inherently difficult for models to do complex reasoning over them ( Zhang et al. , 2018 ) . On the other hand , symbolic commonsense KGs such as ConceptNet ( Speer et al. , 2017 ) provide structured representation of the relational knowledge between concepts , which is of critical importance for effective ( multi-hop ) reasoning and making interpretable predictions . Therefore , recent advances ( Lin et al. , 2019 ; Feng et al. , 2020 ; Malaviya et al. , 2020 ; Bosselut & Choi , 2019 ) have focused on KG-augmented neural-symbolic commonsense reasoning — integrating the symbolic commonsense knowledge with the pre-trained neural language models such as BERT ( Devlin et al. , 2019 ) . 
One of the key challenges for KG-augmented commonsense reasoning is how to obtain relevant and useful facts for the model to reason over. These supporting facts are usually not readily available to the model and require explicit annotation by humans. Most existing works (Lin et al., 2019; Wang et al., 2019b; Lv et al., 2020) follow heuristic procedures to extract supporting fact triples from KGs, e.g., by finding connections between concepts mentioned in the question and answer. This simplified extraction process may be sub-optimal because commonsense KGs are usually incomplete (Min et al., 2013) and supporting facts could be missing.¹ To mitigate this issue, Wang et al. (2020b) finetune a language model to generate pseudo-paths (i.e., sequences of triples) between question and answer concepts as evidence for the reasoning context (question and answer). However, when two input concepts are not closely related, the generated pseudo-paths are often unreliable, as it is hard to connect two entities using a small set of predefined KG relations (i.e., limited expressiveness). Besides, since KGs are context-agnostic, both extracted facts and generated facts do not necessarily relate to the central topic of the reasoning context, yielding misleading facts for reasoning. Additionally, KGs themselves store noisy facts. To summarize, low coverage of KG facts, limited expressiveness of KG relations, and wrong and uncontextualized facts make neural-symbolic integration of commonsense knowledge and pre-trained language models less reliable and less generalizable. In this paper, we propose a novel KG-augmented commonsense reasoning model, named Hybrid Graph Network (HGN), to address these issues.

¹Code has been uploaded and will be published.
It leverages both extracted facts (with high precision) and continuous feature representations for generated facts (with high recall) to build a contextualized graph with learnable edge features, which overcomes the low-coverage and limited-expressiveness issues of the KG. It then iteratively prunes unreliable and unrelated edges during model learning, leading to a superior graph structure for reasoning. Fig. 1 shows an illustrative example of the graph structure HGN has learned. Besides triples extracted from ConceptNet, e.g., (print, RelatedTo, use), HGN manages to (1) generate novel triples and (2) identify critical evidence triples, e.g., (print, Requires, use paper) and (paper, HasProperty, expensive), while pruning triples that are unhelpful for reasoning, e.g., (use, ·, expensive). The final contextualized graphs created by our HGN are shown to be more useful for models to reason over. We summarize our contributions as follows: (1) We propose HGN, a KG-augmented commonsense reasoning model that overcomes the low-coverage, limited-expressiveness, and wrong/uncontextualized-facts issues of KGs. It jointly generates features for novel facts to complement extracted facts and learns the structure of the contextualized knowledge graph while reasoning over it. (2) We conduct extensive experiments on three commonsense question answering benchmarks and show consistent improvement over previous approaches. (3) We show with user studies that our contextualized graph structures are more helpful for the question-answering process. 2 NEURAL-SYMBOLIC MODELS FOR COMMONSENSE REASONING . We focus on the task of commonsense question answering (QA), while the proposed model can be easily adapted to other tasks that require commonsense reasoning skills (e.g., natural language inference).
In the typical scenario of KG-augmented question answering , given a question q , the model is asked to select the correct answer from a set of candidate answers { ai } with the help of symbolic knowledge from an external knowledge graph G = { E , R , F } . Here , E , R , F denote the set of entities , relations , and facts , respectively . A fact takes the form of a triple ( h , r , t ) ∈ F , where h ∈ E is the head entity , t ∈ E is the tail entity , and r ∈ R is their relation . We approach the multi-choice QA problem by measuring the plausibility ρ ( q , a ) between the question q and each candidate answer a . The candidate answer with the highest plausibility score will be chosen as the model ’ s prediction . Fig . 2 illustrates the workflow of a typical neural-symbolic model architecture for question answering , which our proposed model fits into . The final score is predicted based on the neural encoding of unstructured reasoning context and symbolic graph knowledge . Neural Encoding of Reasoning Context . The text of the question and answer itself serves as strong unstructured evidence in evaluating their plausibility . Recent years have witnessed great success of pretrained language models ( PLMs ) ( Devlin et al. , 2019 ; Liu et al. , 2019 ) in a range of NLP tasks , including question answering ( Su et al. , 2019 ; Lukovnikov et al. , 2019 ) and natural language inference ( Zhang et al. , 2019 ; Wang et al. , 2020a ) . Similar to previous works , here we adopt a PLM parameterized by θtext to encode the question and answer pair into the statement vector : s = ftext ( [ q , a ] ; θtext ) . We use [ · , · ] to denote the concatenation of sentences or vectors . Modeling Symbolic Graph Knowledge . Commonsense knowledge graphs , as an external source of knowledge , can also contribute to context understanding and reasoning by providing relational knowledge between concepts related to the question and answer . 
For each question-candidate answer pair (q, a), we build a directed graph G = (V, E) with adjacency matrix A ∈ {0, 1}^{n×n}, which is termed the contextualized knowledge graph. It represents the relevant knowledge structures (concepts and their relations) from the external KG. G's node set V includes concepts mentioned in the question and candidate answer pair (q, a). Edges in G represent the relations between their connected nodes. G stores knowledge that is related to the context, and serves as the structured evidence for answering the question. Fig. 1 presents an example of the contextualized knowledge graph for the question and a candidate answer (“use paper”). For a contextualized KG G with n nodes and m edges, we let V = {v_1, ..., v_n}. We denote the node feature vectors as {x_i | v_i ∈ V} and the edge feature vectors as {x_{(i,j)} | (v_i, v_j) ∈ E}. We stack node feature vectors and edge feature vectors to get the node feature matrix X ∈ R^{n×d_v} and the edge feature matrix X^e ∈ R^{m×d_e}, respectively. We denote the learnable parameters involved in the function which maps (V, E) to feature embeddings (X, X^e) as θ_graph-emb. For encoding the contextualized knowledge graph G, we consider a general formulation of the graph encoder g = f_graph-enc(X, X^e, A, s; θ_graph-enc), parameterized by θ_graph-enc. For simplicity, we denote it as g = f_graph(q, a, s; θ_graph) where θ_graph = {θ_graph-emb, θ_graph-enc}. To predict the plausibility score ρ(q, a), we feed [s, g], the concatenation of s and g, to a multilayer perceptron (MLP) with parameters θ_MLP. The output of the MLP is then passed through a softmax layer to calculate the final probability ρ̂(q, a) for choosing a candidate answer, as follows:

ρ(q, a; θ) = f_MLP([s, g]; θ_MLP);  {ρ̂(q, a_i; θ)} = softmax{ρ(q, a_i; θ)}.   (1)

Here θ = {θ_text, θ_graph, θ_MLP} is the set of all learnable parameters.
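The scoring head of Eq. (1) can be sketched as follows. This is a purely illustrative stand-in (random weights, tiny dimensions); in the actual model the [s, g] vectors come from the pre-trained language model and the graph encoder:

```python
import math
import random

def mlp_score(sg, W1, b1, w2, b2):
    """A tiny two-layer MLP: rho(q, a) = w2 . relu(W1 [s, g] + b1) + b2."""
    h = [max(0.0, sum(wij * x for wij, x in zip(row, sg)) + bi)
         for row, bi in zip(W1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

def softmax(scores):
    """Normalize per-candidate plausibility scores into probabilities."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]

# Hypothetical concatenated [s, g] vectors for 3 candidate answers.
rng = random.Random(0)
dim, hidden = 6, 4
candidates = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(3)]
W1 = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(hidden)]
b1 = [0.0] * hidden
w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
probs = softmax([mlp_score(sg, W1, b1, w2, 0.0) for sg in candidates])
prediction = max(range(3), key=lambda i: probs[i])  # highest plausibility wins
```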
To derive the contextualized KG, most existing works perform heuristic graph extraction (Lin et al., 2019; Feng et al., 2020), while Wang et al. (2020b) generate relational paths to connect question and answer concepts. They all assume a perfect graph structure and fix the adjacency matrix during training. In contrast, our proposed HGN starts with an initial graph structure with both extracted and generated edges, and iteratively refines the adjacency matrix during graph encoding. 3 JOINTLY LEARNING GRAPH STRUCTURES AND PARAMETERS . As illustrated in Fig. 2, given a question and candidate answer (q, a), we encode an individual contextualized KG with a graph encoder f_graph, and then use the output graph vector g to estimate the plausibility of the question-answer pair. However, static graphs extracted from external KGs often suffer from limited coverage, making it hard for the model to collect and reason over adequate supporting facts. We solve this problem by considering both extracted facts and generated facts during graph initialization, resulting in a graph with “hybrid” edge features from complementary sources. For the generated facts, we directly use the continuous relational features output by the generator instead of decoding them into relations and re-encoding with a lookup table, for better expressivity. While incorporating generated facts improves the recall, evidence precision could be impacted. Besides, the processes of extracting and generating facts are context-agnostic, leading to irrelevant facts that could mislead reasoning. Therefore, we introduce learnable edge weights to control message passing during graph encoding. We further impose entropy regularization on the learned edge weights to encourage pruning noisy edges while performing reasoning. The overview of our graph module is shown in Fig. 3.
Starting from a heuristically extracted graph (with adjacency matrix A_extract), we first generate edge features to enrich A_extract into a graph with fully-connected edges between question and answer concepts, denoted by an adjacency matrix A^0. Then we iteratively update the edge embeddings, the adjacency matrix, and the node embeddings. The graph vector g is derived by pooling over node embeddings at the final layer. Formally, we denote the label for the question-answer pair (q, a) as y, where y = 1 means a is the correct answer to q and y = 0 means a is a wrong answer. The overall objective of jointly learning the graph structure and model parameters on a training set D_train is defined as follows:

L(θ) = ∑_{(q, a, y) ∼ D_train} [ L_task(ρ̂(q, a; θ), y) + β · L_prune(A^L(q, a; θ_text, θ_graph)) ],   (2)

where θ = {θ_text, θ_graph, θ_MLP} is the set of all learnable parameters and β is a hyperparameter. A^L represents the final graph structure after L layers' refinement (§3.2). L can be decomposed into L_task for the downstream classification task and L_prune for graph structure learning with regularization. In the following subsections, we first introduce how we initialize the contextualized graph G with the densified adjacency matrix A^0, node features X, and hybrid edge features X^e. Next we show how we encode the graph as g = f_graph-enc(X, X^e, A^0, s) and calculate L_task for the classification task. Finally we show how we calculate the regularization term L_prune based on the learned graph structure.
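The structure of the objective (2) can be sketched for a single (q, a, y) example. The exact form of L_prune is specified later in the paper; as an assumed illustrative stand-in, the sketch below uses a binary-entropy regularizer on edge weights in [0, 1], which is minimized when each weight saturates to 0 (pruned) or 1 (kept):

```python
import math

def task_loss(p_correct):
    """L_task: cross-entropy on the softmax probability of the gold answer."""
    return -math.log(p_correct)

def prune_loss(edge_weights):
    """Assumed L_prune: mean binary entropy of the learned edge weights,
    pushing each weight towards 0 (prune the edge) or 1 (keep the edge)."""
    eps = 1e-12
    return -sum(w * math.log(w + eps) + (1 - w) * math.log(1 - w + eps)
                for w in edge_weights) / len(edge_weights)

def total_loss(p_correct, edge_weights, beta=0.1):
    """Eq. (2) for one example: L = L_task + beta * L_prune."""
    return task_loss(p_correct) + beta * prune_loss(edge_weights)
```

A maximally uncertain edge weight of 0.5 incurs the largest penalty (ln 2 per edge), while a fully decided graph (all weights 0 or 1) incurs essentially none.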
In this paper, the authors propose a new approach to incorporating knowledge graphs (KGs) into commonsense QA frameworks. KGs are helpful for adding structured "world" information, which neural-symbolic architectures can leverage to do commonsense reasoning, e.g., "what is the expensive resource in printing on paper?" (paper). In such architectures, however, the authors argue that KG quality is a large impediment (e.g., missing or incorrect edges, distracting nodes, etc.). Therefore, they propose a "hybrid" KG-based model (accordingly named "Hybrid Graph Network") that jointly learns to refine/augment the graph structure while also optimizing it for inference performance.
Learning Contextualized Knowledge Graph Structures for Commonsense Reasoning
1 INTRODUCTION . Commonsense knowledge is essential for developing human-level artificial intelligence systems that can understand and interact with the real world . However , commonsense knowledge is assumed by humans and thus rarely written down in text corpora for machines to learn from and make inferences with . Fig . 1 shows an example in a popular commonsense reasoning benchmark named CommonsenseQA ( Talmor et al. , 2019 ) . The knowledge about the relations between concepts , e.g. , the fact triple ( print , Requires , use paper ) , is not explicitly given in the question and answer . Without important background knowledge as clues , natural language understanding ( NLU ) models may fail to answer such simple commonsense questions that are trivial to humans . Current commonsense reasoning models can be classified into retrieval-augmented methods ( Banerjee et al. , 2019 ; Pan et al. , 2019 ) and KG-augmented methods ( Wang et al. , 2019b ; Kapanipathi et al. , 2020 ) . Retrieval-augmented methods retrieve relevant sentences from an external corpus such as Wikipedia . The retrieved sentences are usually not interconnected , and their unstructured nature makes it inherently difficult for models to do complex reasoning over them ( Zhang et al. , 2018 ) . On the other hand , symbolic commonsense KGs such as ConceptNet ( Speer et al. , 2017 ) provide structured representation of the relational knowledge between concepts , which is of critical importance for effective ( multi-hop ) reasoning and making interpretable predictions . Therefore , recent advances ( Lin et al. , 2019 ; Feng et al. , 2020 ; Malaviya et al. , 2020 ; Bosselut & Choi , 2019 ) have focused on KG-augmented neural-symbolic commonsense reasoning — integrating the symbolic commonsense knowledge with the pre-trained neural language models such as BERT ( Devlin et al. , 2019 ) . 
One of the key challenges for KG-augmented commonsense reasoning is how to obtain relevant and useful facts for the model to reason over . These supporting facts are usually not readily available to the model and require explicit annotation by humans . Most existing works ( Lin et al. , 2019 ; Wang et al. , 2019b ; Lv et al. , 2020 ) follow heuristic procedures to extract supporting fact triples from KGs , e.g. , by finding connections between concepts mentioned in the question and answer . This simplified extraction process may be sub-optimal because the commonsense KGs are usually incomplete ( Min 1Code has been uploaded and will be published . et al. , 2013 ) and supporting facts could be missing . To mitigate this issue , Wang et al . ( 2020b ) finetune a language model to generate pseudo-paths ( i.e. , sequences of triples ) between question and answer concepts as evidence for the reasoning context ( question and answer ) . However , when two input concepts are not closely related , the generated pseudo-paths are often unreliable as it ’ s hard to connect two entities using a small set of predefined KG relations ( i.e. , limited expressiveness ) . Besides , since KGs are context-agnostic , both extracted facts and generated facts do not necessarily relate to the central topic of the reasoning context , yielding misleading facts for reasoning . Additionally , KGs themselves store noisy facts . To summarize , low coverage of KG facts , limited expressiveness of KG relations , wrong and uncontextualized facts make neural-symbolic integration of commonsense knowledge and pre-trained language models less reliable or generalizable . In this paper , we propose a novel KG-augmented commonsense reasoning model , named Hybrid Graph Network ( HGN ) , to address these issues . 
It leverages both extracted facts ( with high precision ) and continuous feature representations for generated facts ( with high recall ) to build a contextualized graph with learnable edge features , which overcome the low coverage and limited expressiveness issue of the KG . It then iteratively prunes unreliable and unrelated edges during model learning , leading to a superior graph structure for reasoning . Fig . 1 shows an illustrative example of the graph structure HGN has learned . Besides triples extracted from ConceptNet , e.g. , ( print , RelatedTo , use ) , HGN manages to ( 1 ) generate novel triples and ( 2 ) identify critical evidence triples , e.g. , ( print , Requires , use paper ) and ( paper , HasProperty , expensive ) , while pruning triples that are unhelpful for reasoning , e.g. , ( use , · , expensive ) . The final contextualized graphs created by our HGN are shown to be more useful for models to reason over . We summarize our contributions as follows : ( 1 ) We propose HGN , a KG-augmented commonsense reasoning model that overcomes the low coverage , limited expressiveness , wrong and uncontextualized facts issues of KGs . It jointly generates features for novel facts to complement extracted facts and learns the structure of the contextualized knowledge graph while reasoning over it . ( 2 ) We conduct extensive experiments on three commonsense question answering benchmarks and show consistent improvement over previous approaches . ( 3 ) We show our contextualized graph structures are more helpful for the question-answering process with user studies . 2 NEURAL-SYMBOLIC MODELS FOR COMMONSENSE REASONING . We focus on the task of commonsense question answering ( QA ) , while the proposed model can be easily adapted to other tasks that require commonsense reasoning skills ( e.g. , natural language inference ) . 
In the typical scenario of KG-augmented question answering , given a question q , the model is asked to select the correct answer from a set of candidate answers { ai } with the help of symbolic knowledge from an external knowledge graph G = { E , R , F } . Here , E , R , F denote the set of entities , relations , and facts , respectively . A fact takes the form of a triple ( h , r , t ) ∈ F , where h ∈ E is the head entity , t ∈ E is the tail entity , and r ∈ R is their relation . We approach the multi-choice QA problem by measuring the plausibility ρ ( q , a ) between the question q and each candidate answer a . The candidate answer with the highest plausibility score will be chosen as the model ’ s prediction . Fig . 2 illustrates the workflow of a typical neural-symbolic model architecture for question answering , which our proposed model fits into . The final score is predicted based on the neural encoding of unstructured reasoning context and symbolic graph knowledge . Neural Encoding of Reasoning Context . The text of the question and answer itself serves as strong unstructured evidence in evaluating their plausibility . Recent years have witnessed great success of pretrained language models ( PLMs ) ( Devlin et al. , 2019 ; Liu et al. , 2019 ) in a range of NLP tasks , including question answering ( Su et al. , 2019 ; Lukovnikov et al. , 2019 ) and natural language inference ( Zhang et al. , 2019 ; Wang et al. , 2020a ) . Similar to previous works , here we adopt a PLM parameterized by θtext to encode the question and answer pair into the statement vector : s = ftext ( [ q , a ] ; θtext ) . We use [ · , · ] to denote the concatenation of sentences or vectors . Modeling Symbolic Graph Knowledge . Commonsense knowledge graphs , as an external source of knowledge , can also contribute to context understanding and reasoning by providing relational knowledge between concepts related to the question and answer . 
For each question-candidate answer pair ( q , a ) , we build a directed graph G = ( V , E ) with adjacentcy matrix A ∈ { 0 , 1 } n×n , which is termed as the contextualized knowledge graph . It represents the relevant knowledge structures ( concepts and their relations ) from the external KG . G ’ s node set V includes concepts mentioned in the question and candidate answer pair ( q , a ) . Edges in G represent the relations between their connected nodes . G stores knowledge that is related to the context , and serves as the structured evidence for answering the question . Fig . 1 presents an example of the contextualized knowledge graph for the question and a candidate answer ( “ use paper ” ) . For a contextualized KG G with n nodes and m edges , we let V = { v1 , . . . , vn } . We denote the node feature vectors as { xi | vi ∈ V } and the edge feature vectors as { x ( i , j ) | ( vi , vj ) ∈ E } . We stack node feature vectors and edge feature vectors to get the node feature matrix X ∈ Rn×dv and the edge feature matrix Xe ∈ Rm×de , respectively . We denote the learnable parameters involved in the function which maps ( V , E ) to feature embeddings ( X , Xe ) as θgraph-emb . For encoding the contextualize knowledge graph G , we consider a general formulation of the graph encoder g = fgraph-enc ( X , Xe , A , s ; θgraph-enc ) , parameterized by θgraph-enc . For simplicity , we denote it as g = fgraph ( q , a , s ; θgraph ) where θgraph = { θgraph-emb , θgraph-enc } . To predict the plausibility score ρ ( q , a ) , we feed [ s , g ] , the concatenation of s and g , to a multilayer perceptron ( MLP ) with parameters θMLP . The output of the MLP is then passed through to a softmax layer to calculate the final probability ρ̂ ( q , a ) for choosing a candidate answer , shown as follows . ρ ( q , a ; θ ) = fMLP ( [ s , g ] ; θMLP ) ; { ρ̂ ( q , ai ; θ ) } = softmax { ρ ( q , ai ; θ ) } . ( 1 ) Here θ = { θtext , θgraph , θMLP } is the set of all learnable parameters . 
To derive the contextualized KG , most existing works perform heuristic graph extraction ( Lin et al. , 2019 ; Feng et al. , 2020 ) , while Wang et al . ( 2020b ) generate relational paths to connect question and answer concepts . They all assume a perfect graph structure and fix the adjacency matrix during training . In contrast , our proposed HGN starts with an initial graph structure with both extracted and generated edges , and iteratively refines the adjacency matrix during graph encoding . 3 JOINTLY LEARNING GRAPH STRUCTURES AND PARAMETERS . As illustrated in Fig . 2 , given a question and candidate answer ( q , a ) , we encode an individual contextualized KG with a graph encoder fgraph , and then use the output graph vector g to estimate the plausibility of the question-answer pair . However , static graphs extracted from external KGs often suffer from limited coverage , making it hard for the model to collect and reason over adequate supporting facts . We address the problem by considering both extracted facts and generated facts during graph initialization , resulting in a graph with “ hybrid ” edge features from complementary sources . For the generated facts , we directly use the continuous relational features output by the generator instead of decoding them into relations and re-encoding with a lookup table , for better expressivity . While incorporating generated facts improves recall , evidence precision could be impacted . Moreover , the processes of extracting and generating facts are context-agnostic , leading to irrelevant facts that could mislead reasoning . Therefore , we introduce learnable edge weights to control message passing during graph encoding . We further impose entropy regularization on the learned edge weights to encourage pruning noisy edges while performing reasoning . The overview of our graph module is shown in Fig . 3 . 
Starting from a heuristically extracted graph ( with adjacency matrix Aextract ) , we first generate edge features to enrich Aextract into a graph with fully-connected edges between question and answer concepts , denoted by an adjacency matrix A0 . Then we iteratively update the edge embeddings , the adjacency matrix and the node embeddings . The graph vector g is derived by pooling over node embeddings at the final layer . Formally , we denote the label for question-answer pair ( q , a ) as y , where y = 1 means a is the correct answer to q and y = 0 means a is a wrong answer . The overall objective of jointly learning graph structure and model parameters on a training set Dtrain is defined as follows : L ( θ ) = ∑ ( q , a , y ) ∼Dtrain [ Ltask ( ρ̂ ( q , a ; θ ) , y ) + β · Lprune ( AL ( q , a ; θtext , θgraph ) ) ] , ( 2 ) where θ = { θtext , θgraph , θMLP } is the set of all learnable parameters and β is a hyperparameter . AL represents the final graph structure after L layers ’ refinement ( §3.2 ) . L can be decomposed into Ltask for the downstream classification task and Lprune for graph structure learning with regularization . In the following subsections , we first introduce how we initialize the contextualized graph G with the densified adjacency matrix A0 , node features X and hybrid edge features Xe . Next we show how we encode the graph as g = fgraph-enc ( X , Xe , A 0 , s ) and calculate Ltask for the classification task . Finally we show how we calculate the regularization term Lprune based on the learned graph structure .
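The two-term objective in Eq. 2 can be sketched as follows. The cross-entropy form of Ltask and the binary-entropy form of Lprune are assumptions for illustration (the paper defines the exact Lprune in §3); the numbers are made up.

```python
import numpy as np

def task_loss(probs, y_idx):
    # Assumed cross-entropy over the candidate-answer distribution rho-hat.
    return float(-np.log(probs[y_idx] + 1e-12))

def prune_loss(edge_weights):
    # Assumed entropy-style regularizer on learned edge weights in [0, 1],
    # pushing them toward 0 or 1 so that noisy edges get pruned.
    w = np.clip(edge_weights, 1e-6, 1 - 1e-6)
    return float(np.mean(-w * np.log(w) - (1 - w) * np.log(1 - w)))

beta = 0.01                                # hyperparameter from Eq. 2
probs = np.array([0.1, 0.7, 0.15, 0.05])   # rho-hat over 4 candidates
A_L = np.array([0.9, 0.05, 0.5, 0.98])     # learned edge weights after L layers
total = task_loss(probs, 1) + beta * prune_loss(A_L)
print(round(total, 4))
```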
The paper proposes a question answering model that is augmented with a common-sense knowledge graph (KG). The paper builds on the following two observations — (a) KGs are incomplete, often lacking facts that would be needed to answer a question. (b) Current methods over-retrieve facts (edges) from the KG, leading to many unrelated facts that potentially make reasoning noisier and harder.
SP:c5bc1d50c01d86f3f5ccb52508cc4112a1937bd4
Model-based Navigation in Environments with Novel Layouts Using Abstract $2$-D Maps
Efficiently training agents with planning capabilities has long been one of the major challenges in decision-making . In this work , we focus on zero-shot navigation ability on a given abstract 2-D occupancy map , much like a human navigating by reading a paper map , by treating the map as an image . To learn this ability , we need to efficiently train an agent on environments with a small proportion of training maps and share knowledge effectively across the environments . We hypothesize that model-based navigation can better adapt the agent ’ s behaviors to a task , since it disentangles the variations in map layout and goal location and enables longer-term planning ability on novel locations compared to reactive policies . We propose to learn a hypermodel that can understand patterns from a limited number of abstract maps and goal locations , to maximize alignment between the hypermodel predictions and real trajectories to extract information from multi-task off-policy experiences , and to construct denser feedback for planners by n-step goal relabelling . We train our approach on DeepMind Lab environments with layouts from different maps , and demonstrate superior performance on zero-shot transfer to novel maps and goals . 1 INTRODUCTION . If we provide a rough solution of a problem to an agent , can the agent learn to follow the solution effectively ? In this paper , we study this question within the context of maze navigation , where an agent is situated within a maze whose layout has never been seen before , and the agent is expected to navigate to a goal without first training on or even exploring this novel maze . This task may appear impossible without further guidance , but we will provide the agent with additional information : an abstract 2-D occupancy map illustrating the rough layout of the environment , as well as indicators of its start and goal locations ( “ task context ” in Figure 1 ) . 
This is akin to a tourist attempting to find a landmark in a new city : without any further help , this would be very challenging ; but when equipped with a 2-D map with a “ you are here ” symbol and an indicator of the landmark , the tourist can easily plan a path to reach the landmark without needing to explore or train excessively . Navigation is a fundamental capability of all embodied agents , both artificial and natural , and therefore has been studied under many settings . In our case , we are most concerned with zero-shot navigation in novel environments , where the agent can not perform further training or even exploration of the new environment ; all that is needed to accomplish the task is technically provided by the abstract 2-D map . This differs from the vast set of approaches based on simultaneous localization and mapping ( SLAM ) typically used in robot navigation ( Thrun et al. , 2005 ) , where the agent can explore and build an accurate occupancy map of the environment prior to navigation . Recently , navigation approaches based on deep reinforcement learning ( RL ) approaches have also emerged , although they often require extensive training in the same environment ( Mirowski et al. , 2017 ; 2018 ) . Some deep RL approaches are even capable of navigating novel environments with new layouts without further training ; however , these approaches typically learn the strategy of efficiently exploring the new environment to understand the layout and find the goal , then exploiting that knowledge for the remainder of the episode to repeatedly reach that goal quickly ( Jaderberg et al. , 2017 ) . In contrast , since the solution is essentially provided to the agent via the abstract 2-D map , we require a more stringent version of zero-shot navigation , where it should not explore the new environment ; instead , we expect the agent to produce a near-optimal path in its first ( and only ) approach to the goal . 
Although the solution is technically accessible via the abstract 2-D map , some challenges remain to use it effectively . First , although we assume that the layout in the 2-D map is accurate , the map does not correspond to the state space of the agent in the environment , so the agent must learn the correspondence between its state and locations in the 2-D map . Second , actions in the 2-D map also can not be directly mapped into the agent ’ s action space ; moving between adjacent “ cells ” in the 2-D map requires a sequence of many actions , specified in terms of the agent ’ s translational and rotational velocities . Hence , one can not simply perform graph search on the 2-D map , then execute the abstract solution directly on the agent . Instead , we propose approaches that learn to use the provided abstract 2-D map via end-to-end learning . Concretely , we propose two approaches for navigation using abstract 2-D maps : • MMN ( Map-conditioned Multi-task Navigator ) : A model-based approach that learns a hypermodel ( Ha et al. , 2016 ) , which uses the provided 2-D map to produce a parameterized latent-space transition function fφ for that map . This transition function fφ is jointly trained with Monte-Carlo tree search ( MCTS ) to plan ( in latent space ) to reach the specified goal ( Schrittwieser et al. , 2019 ) . • MAH ( Map-conditioned Ape-X HER DQN ) : A model-free approach based on Ape-X Deep Q-Networks ( DQN ) ( Horgan et al. , 2018 ) , a high-performing distributed variant of DQN , that takes in the provided 2-D map as additional input . Furthermore , we supplement it with our proposed n-step modification of hindsight experience replay ( HER ) ( Andrychowicz et al. , 2017 ) . In experiments performed in DeepMind Lab ( Beattie et al. 
, 2016 ) , a 3-D maze simulation environment shown in Figure 1 , we show that both approaches achieve effective zero-shot navigation in novel environment layouts , and the model-based MMN is better at long-distance navigation . 2 BACKGROUND . We consider a distribution of navigation tasks ρ ( T ) . Each task is different in two aspects : map layout and goal location . ( 1 ) Abstract map . The layout of each navigation task is specified by an abstract map . Specifically , an abstract map m ∈ RN×N is a 2-D occupancy grid , where cells with 1s ( black ) indicate walls and cells with 0s ( white ) indicate navigable spaces . A cell does not directly correspond to a location in the agent ’ s world , so the agent needs to learn to localize itself given an abstract 2-D map . We generate a set of maps and guarantee that all valid positions are reachable , i.e. , there is only one connected component in a map . ( 2 ) Goal position . Given a map , we can then specify a pair of start and goal positions . Both start and goal are represented as a “ one-hot ” occupancy grid g ∈ R2×N×N provided to the agent . For simplicity , we use g to refer to both start and goal , and we denote the provided map and start-goal positions c = ( m , g ) as the task context . We formulate each navigation task as a goal-reaching Markov decision process ( MDP ) , consisting of a tuple 〈S , A , P , RG , ρ0 , γ〉 , where S is the state space , A is the action space , P is the transition function P : S ×A → ∆ ( S ) , ρ0 = ρ ( s0 ) is the initial state distribution , and γ ∈ ( 0 , 1 ] is the discount factor . We assume transitions are deterministic . For each task , the objective is to reach a subset of the state space SG ⊂ S indicated by a reward function RG : S × A → R. We denote a task as T = 〈P , RG , ρ0〉 , since a map and goal specify the dynamics and reward function of an MDP , respectively . In the episodic goal-reaching setting , the objective is typically not discounted ( γ = 1 ) and the reward is −1 for all non-goal states , i.e. 
, RG ( s , a ) = −I [ s ≠ g ] , g ∈ SG . 3 MAP-CONDITIONED PLANNING GIVEN ABSTRACT 2-D MAPS . To build a map-based navigation agent efficient in both training and transfer , there are several technical challenges . ( 1 ) A local change in the map may introduce an entirely different task structure , so we need the model and planner to adapt to the task context in a different way than conditioning on state , and not directly condition on the entire task context . ( 2 ) During training , we can only rely on a very small proportion of training tasks ( e.g. , 20 of 13×13 maps ) . This requires the agent to be efficient in terms of understanding each task ’ s structure , i.e. , the layout of training maps and goal locations . ( 3 ) Since reward is sparse and model learning and exploration are done simultaneously , we need to fully utilize the knowledge in the environment to train the model and planner , such as transition tuples and failure experiences . Corresponding to the challenges , we introduce the task-conditioned hypermodel first , followed by the forward pass of the planning computation by using the hypermodel in inference . We then detail the backward pass on the training target and optimization process . 3.1 TASK-CONDITIONED HYPERMODEL We aim to build a model adaptive to given abstract 2-D maps for the navigation planner . In a single-task training schema , a naive solution is to separately learn a parameterized transition function fi ( s , a ) for different maps . However , we want to share knowledge between tasks in the navigation domain , in which maps have some common computational patterns . For example , in Figure 2 , moving right at the center of the box on the left map shares some computation with the right one . This also applies to larger map areas and also to reward prediction . When the agent is able to capture this type of computational pattern , it can better predict what will happen when transferring to a new task . 
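The goal-reaching setup of Section 2 can be sketched concretely: an N×N occupancy grid and the indicator reward RG ( s , a ) = −I [ s ≠ g ]. The tiny bordered map below is a made-up example, not one of the paper's generated maps.

```python
import numpy as np

# Abstract map: N x N occupancy grid, 1 = wall (black), 0 = navigable (white).
N = 5
m = np.zeros((N, N), dtype=int)
m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = 1   # border walls

goal = (3, 3)
assert m[goal] == 0  # the goal must lie on a navigable cell

def reward(s, g):
    # R_G(s, a) = -1 for every non-goal state, 0 at the goal (undiscounted).
    return 0.0 if s == g else -1.0

print(reward((1, 1), goal), reward(goal, goal))  # -1.0 0.0
```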
We propose to build a meta network hψ , or hypermodel , to learn the “ computation ” of the transition model fφ simultaneously for all maps with abstract 2-D maps as input . The transition model for task T ( map-goal pair ) is a function fi that maps a current ( latent ) state and action to a next ( latent ) state . The set { fi } represents transition functions of all tasks belonging to a navigation schema ( e.g. , a certain size of map ) , and these tasks have similar structure . We parameterize a transition function fi as a neural network with its parameter vector φi . We assume the set of transition networks have similar structure that can be characterized by a set of context variables c = ( m , g ) , i.e. , the abstract 2-D map and goal.1 This implies that parameter vectors φi live in a low-dimensional manifold . Thus , we define a mapping h : C → Φ that maps the context of a task to the parameter vector φi of its transition function fi . We parameterize h also as a network with parameter ψ:2 hψ : c ↦ φ , fφ : s , a ↦ s′ . ( 1 ) This can be viewed as soft weight sharing between multiple tasks . It efficiently maps low-dimensional structure in the MDP , specified by the map , to computation of the transition model . It may also be viewed as a learned “ dot-product ” between task context cT and state and action st , at to predict the next state . The idea of predicting the weights of a main network using another meta-network is also known as HyperNetworks ( Ha et al. , 2016 ; von Oswald et al. , 2019 ) . 1Concretely , a task context c ∈ R4×N×N has four components : downsampled global occupancy map , cropped local occupancy map , and “ one-hot ” goal and start occupancy maps , where N is the downsampled size . 2We only predict weights of the transition model fφ : S × A → S which operates on a latent state space . The mapping from environment observations to latent states e : O → S is not predicted by a meta network . 
Since the latent space is low-dimensional , it is feasible to predict weight matrices of a transition network for it .
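The mapping hψ : c ↦ φ followed by fφ : s , a ↦ s′ can be sketched with a linear hypermodel that emits one transition matrix per action. The linear form, the latent dimension, and the random context are all illustrative assumptions; the paper's hypermodel and its MuZero-style planner are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d_s, n_act = 13, 16, 4      # assumed map size, latent dim, action count
d_c = 4 * N * N                # task context c in R^{4 x N x N}, flattened

# Hypermodel h_psi: maps the task context to parameters phi of a linear
# latent transition model f_phi(s, a) = W_a s (a minimal stand-in).
Psi = rng.normal(size=(d_c, n_act * d_s * d_s)) * 0.01

def h_psi(c):
    phi = c.flatten() @ Psi
    return phi.reshape(n_act, d_s, d_s)   # one d_s x d_s matrix per action

def f_phi(phi, s, a):
    return phi[a] @ s                     # predicted next latent state s'

c = rng.normal(size=(4, N, N))  # global map, local map, goal, start channels
phi = h_psi(c)
s = rng.normal(size=d_s)
s_next = f_phi(phi, s, a=0)
print(phi.shape, s_next.shape)  # (4, 16, 16) (16,)
```

Because the latent space is low-dimensional (16 here), the hypermodel's output is only n_act · d_s² values per task, which is what makes predicting full weight matrices feasible.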
This paper tackles the task of going to a point-goal using an abstract 2D map of a given environment. The central idea is to use the given abstract 2D map to predict parameters for the transition function in the environment depicted by the map. This predicted transition function is used to search for actions to execute via planning.
SP:4896bf9436d8c03f6751b45118bf29ff0d116c6e
This submission tackles the Point Goal navigation task given access to the agent’s starting location, current state (position and velocity), the goal location and a top-down map. The submission presents two approaches for tackling this task. The first is MMN (Map-conditioned Multi-task Navigator), a model-based approach which learns a hypernetwork to convert the map input into a transition function. The second is MAH (Map-conditioned Ape-X HER DQN), a model-free approach using Ape-X DQN with hindsight experience replay.
SP:4896bf9436d8c03f6751b45118bf29ff0d116c6e
Contemplating Real-World Object Classification
1 INTRODUCTION . Object recognition3 can be said to be the most basic problem in vision sciences . It is required in the early stages of visual processing before a system , be it a human or a machine , can accomplish other tasks such as searching , navigating , or grasping . Application of a convolutional neural network architecture ( CNN ) known as LeNet ( LeCun et al. , 1998 ) , albeit with new bells and whistles ( Krizhevsky et al. , 2012 ) , revolutionized not only computer vision but also several other areas . With the initial excitement gradually fading , researchers have started to study the shortcomings of deep models and question their generalization ability . From prior research , we already know that CNNs : a ) lack generalization to out-of-distribution samples ( e.g. , Recht et al . ( 2019 ) ; Barbu et al . ( 2019 ) ; Shankar et al . ( 2020 ) ; Taori et al . ( 2020 ) ; Koh et al . ( 2020 ) ) . Even after being exposed to many different instances of the same object category , they fail to fully capture the concept . In stark contrast , humans can generalize from only a few examples ( a.k.a . few-shot learning ) , b ) perform poorly when applied to transformed versions of the same object . In other words , they 1https : //objectnet.dev/ 2See https : //openreview.net/forum ? id=Q4EUywJIkqr for reviews and discussions . A preliminary version of this work has been published on arXiv ( Borji , 2020 ) . 3Classification of an object appearing alone in an image . For images containing multiple objects , object localization or detection is required first . are not invariant to spatial transformations ( e.g. , translation , in-plane and in-depth rotation , scale ) as shown in ( Azulay & Weiss , 2019 ; Engstrom et al. , 2019 ; Fawzi & Frossard , 2015 ) , as well as noise corruptions ( Hendrycks & Dietterich , 2019 ; Geirhos et al. , 2018b ) , and c ) are vulnerable to imperceptible adversarial image perturbations ( Szegedy et al. , 2013 ; Goodfellow et al. 
, 2014 ; Nguyen et al. , 2015 ) . The majority of these works , however , have used either the ImageNet dataset or its variations , and thus might be biased towards ImageNet characteristics . Utilizing a very challenging dataset that has been proposed recently , known as ObjectNet ( Barbu et al. , 2019 ) , here we seek to answer how well state-of-the-art CNNs generalize to real-world object recognition scenarios . We also explore the role of spatial context in object recognition and answer whether it is better to use cropped objects ( using bounding boxes ) or segmented objects to achieve higher accuracy and robustness . Furthermore , we study the relationship between object recognition , scene understanding , and object detection . These are important problems that have been less explored . Several datasets have been proposed for training and testing object recognition models , and for studying their generalization ability ( e.g. , ImageNet by Deng et al . ( 2009 ) , Places by Zhou et al . ( 2017 ) , CIFAR by Krizhevsky et al . ( 2009 ) , NORB by LeCun et al . ( 2004 ) , and iLab20M by Borji et al . ( 2016 ) ) . As the most notable one , the ImageNet dataset has been very instrumental for gauging the progress in object recognition over the past decade . A large number of studies have tested new ideas by training deep models on ImageNet ( from scratch ) , or by finetuning pre-trained ( on ImageNet ) classification models on other datasets . With ImageNet being retired , the state of the object recognition problem remains unclear . Several questions such as out-of-distribution generalization , “ superhuman performance ” ( He et al. , 2016 ) and invariance to transformations persist . To rekindle the discourse , Barbu et al . ( 2019 ) recently introduced the ObjectNet dataset , which according to their claim has less bias than other recognition datasets4 . 
This dataset is supposed to be used solely as a test set and comes with a license that disallows researchers from fine-tuning models on it. Images are pictured by Mechanical Turk workers using a mobile app in a variety of backgrounds, rotations, and imaging viewpoints. ObjectNet contains 50,000 images across 313 categories, of which 113 are in common with ImageNet categories. Astonishingly, Barbu et al. found that state-of-the-art object recognition models perform drastically worse on ObjectNet than on ImageNet (about a 40-45% drop). Our principal goal here is to revisit Barbu et al.'s analysis and measure the actual performance drop on ObjectNet compared to ImageNet. To this end, we limit our analysis to the 113 categories shared between the two datasets. We first annotate the objects in the ObjectNet scenes by drawing boxes around them. We then apply a number of deep models to these object boxes and find that the models now perform significantly better, compared to their performance on the entire scene (as is done in Barbu et al.). Interestingly, and perhaps against common belief, we also find that training and testing models on segmented objects, rather than on the object bounding box or the full image, leads to consistent improvements in accuracy and robustness over a range of classification tasks and image transformations (geometric, natural distortions, and adversarial attacks). Lastly, we provide a qualitative (and somewhat anecdotal) analysis of extreme cases in object recognition for humans and machines. 2 RELATED WORK . Robustness against synthetic distribution shifts. Most research on assessing model robustness has focused on synthetic image perturbations (e.g., spatial transformations, noise corruptions, simulated weather artifacts, temporal changes (Gu et al.
, 2019), and adversarial examples), perhaps because it is easy to precisely define, implement, and apply them to arbitrary images. While models have improved significantly in robustness to these distribution shifts (e.g., Zhang (2019); Zhang et al. (2019); Cohen & Welling (2016)), they are still not as robust as humans. Geirhos et al. (2018b) showed that humans are more tolerant than models of image manipulations like contrast reduction, additive noise, or novel eidolon-distortions. Further, humans and models behave differently (as witnessed by different error patterns) as the signal gets weaker. Zhu et al. (2016) contrast the influence of the foreground object and the image background on the performance of humans and models. Robustness against natural distribution shifts. Robustness on real data is a clear challenge for deep neural networks. Unlike synthetic distribution shifts, it is difficult to define distribution shifts that occur naturally in the real world (such as subtle changes in scene composition, object types, and lighting conditions). Recht et al. (2019) closely followed the original ImageNet creation process to build a new test set called ImageNetV2. (The ObjectNet dataset, however, has its own biases: it consists of indoor objects that are available to many people, are mobile, and are not too large, too small, fragile, or dangerous.) They reported a performance gap of about 11% (top-1 accuracy) between the performance of the best deep models on this dataset and on the original test set. Similar observations have been made by Shankar et al. (2020). By evaluating 204 ImageNet models in 213 different test conditions, Taori et al. (2020) found that a) current synthetic robustness does not imply natural robustness.
In other words, robustness measures for synthetic distribution shifts are weakly predictive of robustness on natural distribution shifts; b) robustness measurements should control for accuracy, since higher robustness can sometimes be explained by higher accuracy on a standard unperturbed test set; and c) training models on larger and more diverse data improves robustness but does not fully close the performance gap. A comprehensive benchmark of distribution shifts in the wild, known as WILDS, has recently been published by Koh et al. (2020), encompassing different data modalities including vision. In D'Amour et al. (2020), the authors regard "underspecification" as a major challenge to the credibility and generalization of modern machine learning pipelines. An ML pipeline is underspecified when it returns models that perform very well on held-out test sets during training but perform poorly at deployment time. Contextual interference. Context plays a significant role in pattern recognition and visual reasoning (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Heitz & Koller (2008); Galleguillos & Belongie (2010)). The extent to which visual context is used by deep models is still unclear. Unlike models, humans are very good at exploiting context when it is helpful and discarding it when it causes ambiguity. In other words, deep models do not understand what is the foreground object and what constitutes the background. Nagarajan et al. (2020) note that ML models utilize features (e.g., the image background) that are spuriously correlated with the label during training. This makes them fragile at test time when the statistics differ slightly. As we argue here, this is one of the main reasons why deep models are so vulnerable to geometric and adversarial perturbations. Geirhos et al.
(2020) have studied this phenomenon under the "shortcut learning" terminology from a broader perspective. As an example, consider a model trained to classify camels vs. cows, with camels always shown against sandy backgrounds and cows against grassy backgrounds. Although such a model does well during training, it gets confused when presented with cows in sandy backgrounds at test time (Beery et al., 2018); see also Rosenfeld et al. (2018) for another example in the context of object detection. Insights from human vision. CNNs turn out to be good models of human vision and can explain the first feed-forward sweep of information (see Kriegeskorte (2015) for a review). They, however, differ from human visual processing in several important ways. Current object recognition methods do not rely on segmentation, whereas figure-ground segmentation plays a significant role in human vision, in particular for the encoding of spatial relations between 3D object parts (Biederman, 1987; Serre, 2019). Some computer vision works predating deep learning have also shown that pre-segmenting the image before applying recognition algorithms improves accuracy (Malisiewicz & Efros, 2007; Rabinovich et al., 2007; Rosenfeld & Weinshall, 2011). Unlike the human visual system, CNNs are hindered drastically in crowded scenes (e.g., Volokitin et al. (2017)). CNNs rely more on texture, whereas humans pay more attention to shape (Geirhos et al., 2018a). Utilizing minimal recognizable images, Ullman et al. (2016) argued that the human visual system uses features and processes that are not used by current deep models. [Figure: accuracy of recognizers by year, contrasting our analysis (classifiers applied to object bounding boxes; ObjectNet Top-1 and Top-5 accuracy, a 25-35% performance drop) with the ObjectNet paper (classifiers applied to full images, a 40-45% performance drop).]
This paper closely revisits the ObjectNet dataset and finds that applying classifiers to object bounding boxes significantly reduces the gap between ImageNet and ObjectNet performance. The authors further investigate the robustness of CNNs against image perturbations and adversarial attacks, and find that limiting the object area to its segmentation mask significantly improves model accuracy (and robustness). A qualitative evaluation of confident and less-confident/incorrect model predictions is also performed, and the results correlate with human perception.
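The cropping and masking operations central to this analysis can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the toy image, box coordinates, and mask are invented, and a real pipeline would resize the crop and feed it to a pretrained classifier.

```python
import numpy as np

def crop_to_box(image, box):
    """Crop an H x W x 3 image to a (x0, y0, x1, y1) bounding box."""
    x0, y0, x1, y1 = box
    return image[y0:y1, x0:x1]

def mask_object(image, mask, fill=0):
    """Keep only the segmented object; fill background pixels with a constant."""
    out = np.full_like(image, fill)
    out[mask] = image[mask]
    return out

# Toy 8x8 "scene" whose object occupies rows/cols 2..5.
scene = np.zeros((8, 8, 3), dtype=np.uint8)
scene[2:6, 2:6] = 255

box = (2, 2, 6, 6)                    # (x0, y0, x1, y1), invented coordinates
crop = crop_to_box(scene, box)        # 4x4x3 object crop
mask = scene[:, :, 0] > 0             # boolean foreground mask
segmented = mask_object(scene, mask)  # full-size image, background removed

print(crop.shape)  # (4, 4, 3)
```

The paper's finding is that classifying `crop` (bounding box) already recovers much of the lost accuracy, and classifying `segmented` (mask) helps further.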
Contemplating Real-World Object Classification
The authors present a follow-up to the prior work of Barbu et al. on the task of object recognition. Barbu et al. demonstrated that on a more realistic dataset like ObjectNet, models trained on a clean dataset like ImageNet suffer significant degradation. This work reduces the performance gap by cropping out the object using bounding-box or mask information and running the recognition model on the crop. They do this for a variety of models (AlexNet, VGG-19, ResNet-152, Inception-v4, NASNet-A, PNASNet-5L) and transformations (image distortions, adversarial perturbations, context, geometric transformations).
Retrieval-Augmented Generation for Code Summarization via Hybrid GNN
1 INTRODUCTION . With software growing in size and complexity, developers tend to spend nearly 90% of their effort (Wan et al., 2018) on software maintenance (e.g., version iteration and bug fixing) over the complete life cycle of software development. Source code summaries, in the form of natural language, play a critical role in the comprehension and maintenance process and greatly reduce the effort of reading and comprehending programs. However, manually writing code summaries is tedious and time-consuming, and with the acceleration of software iteration, it has become a heavy burden for software developers. Hence, source code summarization, which automatically produces concise descriptions of programs, is valuable. Automatic source code summarization is a crucial yet far-from-settled problem. The key challenges include: 1) the source code and the natural language summary are heterogeneous, which means they may not share common lexical tokens, synonyms, or language structures; and 2) source code is complex, with complicated logic and variable grammatical structure, making it hard to learn its semantics. Conventionally, information retrieval (IR) techniques have been widely used in code summarization (Eddy et al., 2013; Haiduc et al., 2010; Wong et al., 2015; 2013). Since code duplication (Kamiya et al., 2002; Li et al., 2006) is common in "big code" (Allamanis et al., 2018), early works summarize a new program by retrieving a similar code snippet from an existing code database and using its summary directly. Essentially, retrieval-based approaches transform code summarization into a code similarity calculation task, which may achieve promising performance on similar programs, but they are limited in generalization, i.e., they perform poorly on programs that are very different from those in the code database.
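The retrieval-based baseline described above can be sketched as follows: reuse the summary of the most lexically similar program in the database. Jaccard similarity over token sets here is a stand-in for whatever code-similarity measure a real IR system would use, and the toy database entries are invented for illustration.

```python
def jaccard(a, b):
    """Jaccard similarity between two token collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_summary(query_code, database):
    """Reuse the summary of the most similar code snippet in the database."""
    tokens = query_code.split()
    best = max(database, key=lambda entry: jaccard(tokens, entry["code"].split()))
    return best["summary"]

database = [
    {"code": "int add ( int a , int b ) { return a + b ; }",
     "summary": "add two integers"},
    {"code": "void log_msg ( char * s ) { printf ( s ) ; }",
     "summary": "print a log message"},
]

query = "int sum ( int x , int y ) { return x + y ; }"
print(retrieve_summary(query, database))  # -> add two integers
```

As the text notes, this works well for near-duplicates but degrades on programs unlike anything in the database, which is the generalization gap the generation-based approaches target.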
∗Contact: shangqin001@e.ntu.edu.sg †Corresponding authors To improve generalization performance, recent works focus on generation-based approaches. Some works explore Seq2Seq architectures (Bahdanau et al., 2014; Luong et al., 2015) to generate summaries from the given source code. Seq2Seq-based approaches (Iyer et al., 2016; Hu et al., 2018a; Alon et al., 2018) usually treat the source code, or the abstract syntax tree parsed from it, as a sequence and follow an encoder-decoder paradigm with an attention mechanism for generating a summary. However, these works rely only on sequential models, which struggle to capture the rich semantics of source code, e.g., control dependencies and data dependencies. In addition, generation-based approaches typically cannot take advantage of similar examples from a retrieval database, as retrieval-based approaches do. To better learn the semantics of source code, Allamanis et al. (2017) opened up this field by representing programs as graphs. Some follow-up works (Fernandes et al., 2018) attempted to encode more code structures (e.g., control flow, program dependencies) into code graphs with graph neural networks (GNNs), and achieved more promising performance than sequence-based approaches. Existing works (Allamanis et al., 2017; Fernandes et al., 2018) usually convert code into graph-structured input during preprocessing and directly consume it via modern neural networks (e.g., GNNs) to compute node and graph embeddings. However, most GNN-based encoders only allow message passing among nodes within a k-hop neighborhood (where k is usually a small number such as 4) to avoid over-smoothing (Zhao & Akoglu, 2019; Chen et al., 2020a), and thus capture only local neighborhood information while ignoring global interactions among nodes. There are some works (Li et al.
, 2019) that try to address this challenge with deep GCNs (i.e., 56 layers) (Kipf & Welling, 2016) via residual connections (He et al., 2016); however, the computation cost is prohibitive, especially for large and complex programs. To address these challenges, we propose a framework for automatic code summarization, namely Hybrid GNN (HGNN). Specifically, from the source code we first construct a code property graph (CPG) based on the abstract syntax tree (AST) with different types of edges (i.e., Flow To, Reach). In order to combine the benefits of both retrieval-based and generation-based methods, we propose a retrieval-based augmentation mechanism that retrieves the source code most similar to the current program from the retrieval database (excluding the current program itself), and adds the retrieved code as well as its corresponding summary as auxiliary information for training the model. In order to go beyond local graph neighborhood information and capture global interactions in the program, we further propose an attention-based dynamic graph, learning global attention scores (i.e., edge weights) over the augmented static CPG. Then, hybrid message passing (HMP) is performed on both the static and dynamic graphs. We also release a new code summarization benchmark by crawling data from popular and diversified projects, containing 95k+ functions in the C programming language, and make it public. We highlight our main contributions as follows: • We propose a general-purpose framework for automatic code summarization, which combines the benefits of both retrieval-based and generation-based methods via a retrieval-based augmentation mechanism. • We innovate a Hybrid GNN by fusing the static graph (based on the code property graph) and the dynamic graph (via a structure-aware global attention mechanism) to mitigate the limitation of GNNs in capturing global graph information.
• We release a new challenging C benchmark for the task of source code summarization. • We conduct extensive experiments to evaluate our framework. The proposed approach achieves state-of-the-art performance, improving over existing approaches by 1.42, 2.44, and 1.29 in terms of the BLEU-4, ROUGE-L, and METEOR metrics. 2 HYBRID GNN FRAMEWORK . In this section, we introduce the proposed framework, Hybrid GNN (HGNN), as shown in Figure 1, which mainly includes four components: 1) Retrieval-augmented Static Graph Construction (c.f., Section 2.2), which incorporates retrieved code-summary pairs to augment the original code for learning; 2) Attention-based Dynamic Graph Construction (c.f., Section 2.3), which allows message passing among any pair of nodes via a structure-aware global attention mechanism; 3) HGNN (c.f., Section 2.4), which incorporates information from both static and dynamic graphs with Hybrid Message Passing; and 4) Decoder (c.f., Section 2.5), which utilizes an attention-based LSTM (Hochreiter & Schmidhuber, 1997) model to generate a summary. (The benchmark is available at https://github.com/shangqing-liu/CCSD-benchmark-for-code-summarization.) 2.1 PROBLEM FORMULATION . In this work, we focus on generating natural language summaries for given functions (Wan et al., 2018; Zhang et al., 2020). A simple example is illustrated in Listing 1, which is crawled from the Linux Kernel. Our goal is to generate the best summary, "set the time of day clock", from the given source code. Formally, we define a dataset as D = { ( c , s ) | c ∈ C , s ∈ S } , where c is the source code of a function in the function set C and s represents its target summary in the summary set S. The task of code summarization is, given source code c, to generate the best summary consisting of a sequence of tokens ŝ = ( t1 , t2 , ... , tT ) that maximizes the conditional likelihood ŝ = argmax_s P ( s | c ).
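The attention-based dynamic graph construction (component 2 above) can be sketched as follows: every node attends to every other node, yielding a dense, row-stochastic adjacency matrix that permits message passing between arbitrary node pairs in a single step. This is a sketch under stated assumptions: the random projections stand in for the learned, structure-aware attention parameters of the paper, and the node embeddings are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_adjacency(node_emb, d_k=8):
    """Dense attention-derived adjacency via scaled dot-product attention.

    Random query/key projections stand in for the learned weights of the
    structure-aware global attention mechanism described in the paper."""
    d = node_emb.shape[1]
    w_q = rng.normal(size=(d, d_k))
    w_k = rng.normal(size=(d, d_k))
    q, k = node_emb @ w_q, node_emb @ w_k
    scores = q @ k.T / np.sqrt(d_k)
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

nodes = rng.normal(size=(5, 16))  # 5 placeholder node embeddings
adj = dynamic_adjacency(nodes)
print(adj.shape)  # (5, 5)
```

Unlike the sparse static CPG, every entry of this matrix is strictly positive, so information can flow between any pair of nodes, which is precisely how the dynamic graph sidesteps the k-hop locality of standard GNN message passing.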
Source Code:
int pdc_tod_set ( unsigned long sec , unsigned long usec )
{
    int retval ;
    unsigned long flags ;
    spin_lock_irqsave ( & pdc_lock , flags ) ;
    retval = mem_pdc_call ( PDC_TOD , PDC_TOD_WRITE , sec , usec ) ;
    spin_unlock_irqrestore ( & pdc_lock , flags ) ;
    return retval ;
}
Ground-Truth: set the time of day clock
Listing 1: An example in our dataset crawled from the Linux Kernel.
2.2 RETRIEVAL-AUGMENTED STATIC GRAPH . 2.2.1 GRAPH INITIALIZATION . The source code of a function can be represented as a Code Property Graph (CPG) (Yamaguchi et al., 2014), which is built on the abstract syntax tree (AST) with different types of edges (i.e., Flow To, Control, Define/Use, Reach). Formally, one raw function c can be represented by a multi-edged graph g ( V , E ) , where V is the set of AST nodes and ( v , u ) ∈ E denotes an edge between node v and node u. A node v consists of two parts: the node sequence and the node type. An illustrative example is shown in Figure 2. For example, in the red node, a % 2 == 0 is the node sequence and Condition is the node type. An edge ( v , u ) has a type, named the edge type, e.g., AST type and Flow To type. For more details about the CPG, please refer to Appendix A. Initialization Representation. Given a CPG, we utilize a BiLSTM to encode its nodes. We represent each token of the node sequence and each edge type using the learned embedding matrices E^{seqtoken} and E^{edgetype}, respectively. Then the nodes and edges of the CPG can be encoded as:

h_1 , ... , h_l = BiLSTM ( E^{seqtoken}_{v,1} , ... , E^{seqtoken}_{v,l} )
encode_node ( v ) = [ h^{→}_l ; h^{←}_1 ]
encode_edge ( v , u ) = E^{edgetype}_{v,u} if ( v , u ) ∈ E else 0    (1)

[Figure 2: An example of a Code Property Graph (CPG): a simple code example parsed into an AST augmented with Flow To, Control, Define/Use, and Reach edges.]
where l is the number of tokens in the node sequence of v. For the sake of simplicity, in the following sections we use hv and ev,u to represent the embeddings of node v and edge ( v , u ) , respectively, i.e., encode_node ( v ) and encode_edge ( v , u ). Given the source code c of a function as well as its CPG g ( V , E ) , Hc ∈ R^{m×d} denotes the initial node matrix of the CPG, where m is the total number of nodes in the CPG and d is the dimension of the node embedding.
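The initialization in Eq. (1) can be sketched as follows. This is a simplified stand-in, not the paper's implementation: a plain tanh RNN run in both directions replaces the BiLSTM cell, the embedding tables are randomly initialized rather than learned, and the tiny vocabularies are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # embedding / hidden size (toy value)

# Randomly initialized stand-ins for the learned E^{seqtoken}, E^{edgetype}.
tok_emb = {t: rng.normal(size=d) for t in ["a", "%", "2", "==", "0"]}
edge_emb = {t: rng.normal(size=d) for t in ["AST", "FlowTo", "Reach"]}

w_x, w_h = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def rnn(xs, w_x, w_h):
    """Plain tanh RNN over a token-embedding sequence; returns final state."""
    h = np.zeros(d)
    for x in xs:
        h = np.tanh(w_x @ x + w_h @ h)
    return h

def encode_node(tokens):
    """[h_forward_last ; h_backward_first], mirroring Eq. (1)."""
    xs = [tok_emb[t] for t in tokens]
    return np.concatenate([rnn(xs, w_x, w_h), rnn(xs[::-1], w_x, w_h)])

def encode_edge(v, u, edges):
    """Edge-type embedding if the edge exists, else the zero vector."""
    return edge_emb[edges[(v, u)]] if (v, u) in edges else np.zeros(d)

edges = {("cond", "a"): "AST"}
hv = encode_node(["a", "%", "2", "==", "0"])  # node sequence "a % 2 == 0"
print(hv.shape)  # (8,): concatenated forward and backward states
```

The node vector has dimension 2d because the forward and backward final states are concatenated, matching the [h→_l ; h←_1] form of Eq. (1); a missing edge maps to the zero vector exactly as the `else 0` branch specifies.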
This paper proposes a retrieval-augmented method for code summarization. The model encodes the input code based on its graph structure (Code Property Graph) with a hybrid GNN architecture. The model augments the initial graph representation of the input code with the representation of the top-1 retrieval result. It also augments the final graph encoding with the BiLSTM encoding of the retrieved summary.
Retrieval-Augmented Generation for Code Summarization via Hybrid GNN
1 INTRODUCTION . With software growing in size and complexity , developers tend to spend nearly 90 % ( Wan et al. , 2018 ) effort on software maintenance ( e.g. , version iteration and bug fix ) in the completed life cycle of software development . Source code summary , in the form of natural language , plays a critical role in the comprehension and maintenance process and greatly reduces the effort of reading and comprehending programs . However , manually writing code summaries is tedious and timeconsuming , and with the acceleration of software iteration , it has become a heavy burden for software developers . Hence , source code summarization which automates concise descriptions of programs is meaningful . Automatic source code summarization is a crucial yet far from the settled problem . The key challenges include : 1 ) the source code and the natural language summary are heterogeneous , which means they may not share common lexical tokens , synonyms , or language structures and 2 ) the source code is complex with complicated logic and variable grammatical structure , making it hard to learn the semantics . Conventionally , information retrieval ( IR ) techniques have been widely used in code summarization ( Eddy et al. , 2013 ; Haiduc et al. , 2010 ; Wong et al. , 2015 ; 2013 ) . Since code duplication ( Kamiya et al. , 2002 ; Li et al. , 2006 ) is common in “ big code ” ( Allamanis et al. , 2018 ) , early works summarize the new programs by retrieving the similar code snippet in the existing code database and use its summary directly . Essentially , the retrieval-based approaches transform the code summarization to the code similarity calculation task , which may achieve promising performance on similar programs , but are limited in generalization , i.e . they have poorer performance on programs that are very different from the code database . 
∗Contact: shangqin001@e.ntu.edu.sg †Corresponding authors

To improve generalization, recent works focus on generation-based approaches. Some explore Seq2Seq architectures (Bahdanau et al., 2014; Luong et al., 2015) to generate summaries from the given source code. Seq2Seq-based approaches (Iyer et al., 2016; Hu et al., 2018a; Alon et al., 2018) usually treat the source code, or the abstract syntax tree parsed from it, as a sequence and follow an encoder-decoder paradigm with an attention mechanism to generate a summary. However, these works rely only on sequential models, which struggle to capture the rich semantics of source code, e.g., control dependencies and data dependencies. In addition, generation-based approaches typically cannot take advantage of similar examples from the retrieval database, as retrieval-based approaches do. To better learn the semantics of source code, Allamanis et al. (2017) opened up this field by representing programs as graphs. Follow-up works (Fernandes et al., 2018) attempted to encode more code structure (e.g., control flow, program dependencies) into code graphs with graph neural networks (GNNs) and achieved more promising performance than sequence-based approaches. Existing works (Allamanis et al., 2017; Fernandes et al., 2018) usually convert code into a graph-structured input during preprocessing and directly consume it via modern neural networks (e.g., GNNs) for computing node and graph embeddings. However, most GNN-based encoders only allow message passing among nodes within a k-hop neighborhood (where k is usually a small number such as 4) to avoid over-smoothing (Zhao & Akoglu, 2019; Chen et al., 2020a), and thus capture only local neighborhood information while ignoring global interactions among nodes. Although some works (Li et al., 2019) try to address this challenge with deep GCNs (i.e., 56 layers) (Kipf & Welling, 2016) via residual connections (He et al., 2016), the computational cost is prohibitive, especially for large and complex programs.

To address these challenges, we propose a framework for automatic code summarization, namely Hybrid GNN (HGNN). Specifically, from the source code we first construct a code property graph (CPG) based on the abstract syntax tree (AST) with different types of edges (i.e., Flow To, Reach). To combine the benefits of both retrieval-based and generation-based methods, we propose a retrieval-based augmentation mechanism that retrieves the source code most similar to the current program from the retrieval database (excluding the current program itself) and adds the retrieved code, as well as its corresponding summary, as auxiliary information for training the model. To go beyond local graph neighborhood information and capture global interactions in the program, we further propose an attention-based dynamic graph, learning global attention scores (i.e., edge weights) on the augmented static CPG. Hybrid message passing (HMP) is then performed on both the static and dynamic graphs. We also release a new code summarization benchmark, crawled from popular and diversified projects, containing 95k+ functions in the C programming language, and make it publicly available (footnote 1). We highlight our main contributions as follows: • We propose a general-purpose framework for automatic code summarization, which combines the benefits of both retrieval-based and generation-based methods via a retrieval-based augmentation mechanism. • We innovate a Hybrid GNN by fusing a static graph (based on the code property graph) and a dynamic graph (via a structure-aware global attention mechanism) to mitigate the limitation of GNNs in capturing global graph information.
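The attention-based dynamic graph idea above can be sketched numerically: instead of relying only on the sparse static CPG edges, compute a dense adjacency whose edge weights are attention scores between every pair of node embeddings, so any two nodes can exchange messages in one hop. The plain scaled dot-product scoring below is a simplified stand-in for the paper's structure-aware global attention.

```python
import numpy as np

def dynamic_graph(node_embeddings):
    """node_embeddings: (m, d) matrix; returns an (m, m) row-stochastic
    edge-weight matrix connecting every pair of nodes."""
    h = np.asarray(node_embeddings, dtype=float)
    scores = h @ h.T / np.sqrt(h.shape[1])        # pairwise attention logits
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

A = dynamic_graph(np.random.default_rng(0).normal(size=(5, 8)))
assert A.shape == (5, 5)
assert np.allclose(A.sum(axis=1), 1.0)  # every node attends over all nodes
```

Because every row is a full distribution over all nodes, message passing on this graph captures global interactions that a k-hop GNN on the static CPG alone would miss.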
• We release a new, challenging C benchmark for the task of source code summarization. • We conduct extensive experiments to evaluate our framework. The proposed approach achieves state-of-the-art performance, improving over existing approaches by 1.42, 2.44 and 1.29 points in terms of the BLEU-4, ROUGE-L and METEOR metrics.

2 HYBRID GNN FRAMEWORK . In this section, we introduce the proposed framework, Hybrid GNN (HGNN), as shown in Figure 1, which mainly includes four components: 1) Retrieval-augmented Static Graph Construction (c.f. Section 2.2), which incorporates retrieved code-summary pairs to augment the original code for learning; 2) Attention-based Dynamic Graph Construction (c.f. Section 2.3), which allows message passing among any pair of nodes via a structure-aware global attention mechanism; 3) HGNN (c.f. Section 2.4), which incorporates information from both static and dynamic graphs with Hybrid Message Passing; and 4) Decoder (c.f. Section 2.5), which utilizes an attention-based LSTM (Hochreiter & Schmidhuber, 1997) model to generate a summary.

2.1 PROBLEM FORMULATION . In this work, we focus on generating natural language summaries for given functions (Wan et al., 2018; Zhang et al., 2020). A simple example, crawled from the Linux kernel, is illustrated in Listing 1. Our goal is to generate the best summary, "set the time of day clock", for the given source code. Formally, we define a dataset as D = {(c, s) | c ∈ C, s ∈ S}, where c is the source code of a function in the function set C and s represents its target summary in the summary set S. The task of code summarization is, given source code c, to generate the best summary, a sequence of tokens ŝ = (t_1, t_2, ..., t_T), that maximizes the conditional likelihood: ŝ = argmax_s P(s|c).

1 https://github.com/shangqing-liu/CCSD-benchmark-for-code-summarization
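The argmax over all token sequences in the formulation above is intractable, so decoders approximate it greedily (or with beam search): pick the highest-probability next token until an end marker. The sketch below illustrates this with `next_token_probs`, a hypothetical stand-in for the model's learned conditional P(t | prefix, code); the probability table is invented for illustration.

```python
def greedy_decode(code, next_token_probs, max_len=8):
    """Greedy approximation of argmax_s P(s|c): extend the summary one
    most-likely token at a time until <eos> or max_len."""
    summary = []
    while len(summary) < max_len:
        probs = next_token_probs(code, tuple(summary))  # dict: token -> prob
        token = max(probs, key=probs.get)
        if token == "<eos>":
            break
        summary.append(token)
    return summary

# Toy conditional distribution, for illustration only.
table = {
    (): {"set": 0.9, "get": 0.1},
    ("set",): {"the": 0.8, "<eos>": 0.2},
    ("set", "the"): {"clock": 0.7, "<eos>": 0.3},
    ("set", "the", "clock"): {"<eos>": 1.0},
}
probs = lambda code, prefix: table[prefix]
print(greedy_decode("pdc_tod_set(...)", probs))  # ['set', 'the', 'clock']
```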
Source Code:

int pdc_tod_set(unsigned long sec, unsigned long usec)
{
    int retval;
    unsigned long flags;

    spin_lock_irqsave(&pdc_lock, flags);
    retval = mem_pdc_call(PDC_TOD, PDC_TOD_WRITE, sec, usec);
    spin_unlock_irqrestore(&pdc_lock, flags);
    return retval;
}

Ground-Truth: set the time of day clock

Listing 1: An example in our dataset crawled from the Linux Kernel.

2.2 RETRIEVAL-AUGMENTED STATIC GRAPH . 2.2.1 GRAPH INITIALIZATION . The source code of a function can be represented as a Code Property Graph (CPG) (Yamaguchi et al., 2014), which is built on the abstract syntax tree (AST) with different types of edges (i.e., Flow To, Control, Define/Use, Reach). Formally, a raw function c can be represented by a multi-edged graph g(V, E), where V is the set of AST nodes and (v, u) ∈ E denotes the edge between node v and node u. A node v consists of two parts: the node sequence and the node type. An illustrative example is shown in Figure 2. For example, in the red node, a % 2 == 0 is the node sequence and Condition is the node type. An edge (v, u) has a type, named the edge type, e.g., AST type or Flow To type. For more details about the CPG, please refer to Appendix A.

Initialization Representation. Given a CPG, we utilize a BiLSTM to encode its nodes. We represent each token of the node sequence and each edge type using the learned embedding matrices E^{seqtoken} and E^{edgetype}, respectively. The nodes and edges of the CPG are then encoded as:

h_1, ..., h_l = BiLSTM(E^{seqtoken}_{v,1}, ..., E^{seqtoken}_{v,l}),
encode_node(v) = [→h_l ; ←h_1],
encode_edge(v, u) = E^{edgetype}_{v,u} if (v, u) ∈ E else 0,   (1)

[Figure 2: An example of a Code Property Graph (CPG), parsed from the snippet void example() { int a = rand(); if (a % 2 == 0) { int b = a++; call(b); } }, with AST, Flow To, Control, Define/Use and Reach edges.]

where l is the number of tokens in the node sequence of v, and →h_l and ←h_1 are the final forward and backward BiLSTM states. For the sake of simplicity, in the following sections we use h_v and e_{v,u} to denote the embeddings of node v and edge (v, u), i.e., encode_node(v) and encode_edge(v, u), respectively. Given the source code c of a function and its CPG g(V, E), H_c ∈ R^{m×d} denotes the initial node matrix of the CPG, where m is the total number of nodes in the CPG and d is the dimension of the node embedding.
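The graph-initialization step of Eq. (1) can be sketched as follows: each CPG node embeds its token sequence, each typed edge gets a type embedding, and absent edges map to the zero vector. Mean-pooling over token embeddings is a deliberately simple stand-in for the BiLSTM encoder, and all vocabularies here are toy examples.

```python
import numpy as np

# Toy embedding tables standing in for the learned matrices E^{seqtoken}
# and E^{edgetype} of Eq. (1).
rng = np.random.default_rng(0)
d = 4
E_token = {t: rng.normal(size=d) for t in ["a", "%", "2", "==", "0", "int", "b"]}
E_edge_type = {"AST": rng.normal(size=d), "FlowTo": rng.normal(size=d)}

def encode_node(token_seq):
    # mean-pooling as a simplified stand-in for BiLSTM(E_{v,1}, ..., E_{v,l})
    return np.mean([E_token[t] for t in token_seq], axis=0)

def encode_edge(edges, v, u):
    # typed-edge embedding if the edge exists, else 0 (as in Eq. (1))
    return E_edge_type[edges[(v, u)]] if (v, u) in edges else np.zeros(d)

nodes = {"cond": ["a", "%", "2", "==", "0"], "decl": ["int", "b"]}
edges = {("cond", "decl"): "FlowTo"}
H = np.stack([encode_node(seq) for seq in nodes.values()])  # initial node matrix H_c
assert H.shape == (2, d)
assert np.allclose(encode_edge(edges, "decl", "cond"), 0)   # no such edge
```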
This paper leverages similar code-summary pairs from existing data to assist code summary generation. The model first retrieves a similar code snippet from the existing database. The authors then apply a GNN over the code property graphs (CPGs). A challenge is that CPGs are typically deep, making it difficult to capture long-range dependencies. The authors propose an attention mechanism to capture global information between nodes, and a hybrid GNN layer then encodes the retrieval-augmented graph. Finally, a generator takes both the GNN's output and the retrieved text summary and predicts the output. Experimental results on a new C code benchmark indicate that the proposed method outperforms both IR and neural generation methods.
SP:dc665b731d4611b6164b2f84a031fa8b8d3701da
Neural Thompson Sampling
1 INTRODUCTION . The stochastic multi-armed bandit (Bubeck & Cesa-Bianchi, 2012; Lattimore & Szepesvári, 2020) has been extensively studied as an important model for optimizing the trade-off between exploration and exploitation in sequential decision making. Among its many variants, the contextual bandit is widely used in real-world applications such as recommendation (Li et al., 2010), advertising (Graepel et al., 2010), robotic control (Mahler et al., 2016), and healthcare (Greenewald et al., 2017). In each round of a contextual bandit, the agent observes a feature vector (the "context") for each of the K arms, pulls one of them, and in return receives a scalar reward. The goal is to maximize the cumulative reward, or minimize the regret (to be defined later), over a total of T rounds. To do so, the agent must find a trade-off between exploration and exploitation. One of the most effective and widely used techniques is Thompson Sampling, or TS (Thompson, 1933). The basic idea is to compute the posterior distribution of each arm being optimal for the present context and sample an arm from this distribution. TS is often easy to implement and has found great success in practice (Chapelle & Li, 2011; Graepel et al., 2010; Kawale et al., 2015; Russo et al., 2017). Recently, a series of works has applied TS or its variants to exploration in contextual bandits with neural network models (Blundell et al., 2015; Kveton et al., 2020; Lu & Van Roy, 2017; Riquelme et al., 2018). Riquelme et al. (2018) proposed NeuralLinear, which maintains a neural network and chooses the best arm in each round according to a Bayesian linear regression on top of the last network layer. Kveton et al. (2020) proposed DeepFPL, which trains a neural network on perturbed training data and chooses the best arm in each round based on the network output.
Similar approaches have also been used in more general reinforcement learning problems (e.g., Azizzadenesheli et al., 2018; Fortunato et al., 2018; Lipton et al., 2018; Osband et al., 2016a). Despite the reported empirical success, strong regret guarantees for TS remain limited to relatively simple models, under fairly restrictive assumptions on the reward function. Examples are linear functions (Abeille & Lazaric, 2017; Agrawal & Goyal, 2013; Kocák et al., 2014; Russo & Van Roy, 2014), generalized linear functions (Kveton et al., 2020; Russo & Van Roy, 2014), or functions with small RKHS norm induced by a properly selected kernel (Chowdhury & Gopalan, 2017). In this paper, we provide, to the best of our knowledge, the first near-optimal regret bound for neural network-based Thompson Sampling. Our contributions are threefold. First, we propose a new algorithm, Neural Thompson Sampling (NeuralTS), to incorporate TS exploration with neural networks. It differs from NeuralLinear (Riquelme et al., 2018) by considering weight uncertainty in all layers, and from other neural network-based TS implementations (Blundell et al., 2015; Kveton et al., 2020) by sampling the estimated reward from the posterior (as opposed to sampling parameters). Second, we give a regret analysis for the algorithm and obtain an Õ(d̃√T) regret, where d̃ is the effective dimension and T is the number of rounds. This result is comparable to previous bounds when specialized to the simpler, linear setting, where the effective dimension coincides with the feature dimension (Agrawal & Goyal, 2013; Chowdhury & Gopalan, 2017). Finally, we corroborate the analysis with an empirical evaluation of the algorithm on several benchmarks. Experiments show that NeuralTS yields competitive performance in comparison with state-of-the-art baselines, suggesting practical value in addition to strong theoretical guarantees.
Notation: Scalars and constants are denoted by lower- and upper-case letters, respectively. Vectors are denoted by lower-case boldface letters x, and matrices by upper-case boldface letters A. We denote by [k] the set {1, 2, ..., k} for a positive integer k. For two non-negative sequences {a_n}, {b_n}, a_n = O(b_n) means that there exists a positive constant C such that a_n ≤ C b_n, and we use Õ(·) to hide logarithmic factors in O(·). We denote by ‖·‖_2 the Euclidean norm of vectors and the spectral norm of matrices, and by ‖·‖_F the Frobenius norm of a matrix.

2 PROBLEM SETTING AND PROPOSED ALGORITHM . In this work, we consider contextual K-armed bandits, where the total number of rounds T is known. At round t ∈ [T], the agent observes K contextual vectors {x_{t,k} ∈ R^d | k ∈ [K]}, selects an arm a_t, and receives a reward r_{t,a_t}. Our goal is to minimize the following pseudo-regret:

R_T = E[ Σ_{t=1}^T ( r_{t,a_t^*} − r_{t,a_t} ) ],   (2.1)

where a_t^* is the optimal arm at round t, i.e., the one with the maximum expected reward: a_t^* = argmax_{a ∈ [K]} E[r_{t,a}]. To estimate the unknown reward given a contextual vector x, we use a fully connected neural network f(x; θ) of depth L ≥ 2, defined recursively by

f_1 = W_1 x,   f_l = W_l ReLU(f_{l−1}) for 2 ≤ l ≤ L,   f(x; θ) = √m f_L,   (2.2)

where ReLU(x) := max{x, 0}, m is the width of the network, W_1 ∈ R^{m×d}, W_l ∈ R^{m×m} for 2 ≤ l < L, W_L ∈ R^{1×m}, θ = (vec(W_1); ...; vec(W_L)) ∈ R^p is the collection of network parameters with p = dm + m^2(L − 2) + m, and g(x; θ) = ∇_θ f(x; θ) is the gradient of f(x; θ) w.r.t. θ. Our Neural Thompson Sampling is given in Algorithm 1. It maintains a Gaussian distribution for each arm's reward. When selecting an arm, it samples the reward of each arm from the reward's posterior distribution and then pulls the greedy arm (lines 4–8).
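The reward network of Eq. (2.2) can be sketched as an L-layer ReLU MLP whose output is scaled by √m. Widths and depth below are toy values for illustration.

```python
import numpy as np

def forward(x, weights):
    """weights = [W1 (m,d), W2 (m,m), ..., WL (1,m)];
    returns scalar f(x; θ) = sqrt(m) * W_L ReLU(... ReLU(W_1 x))."""
    f = weights[0] @ x                    # f_1 = W_1 x
    for W in weights[1:]:
        f = W @ np.maximum(f, 0.0)        # f_l = W_l ReLU(f_{l-1})
    m = weights[0].shape[0]               # network width
    return float(np.sqrt(m) * f[0])       # f(x; θ) = sqrt(m) f_L

rng = np.random.default_rng(0)
d, m = 3, 8
weights = [rng.normal(size=(m, d)), rng.normal(size=(m, m)), rng.normal(size=(1, m))]
x = rng.normal(size=d)
print(forward(x, weights))
```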
Once the reward is observed, it updates the posterior (lines 9 and 10). The mean of the posterior distribution is set to the output of the neural network, whose parameters solve the following minimization problem:

min_θ L(θ) = Σ_{i=1}^t [ f(x_{i,a_i}; θ) − r_{i,a_i} ]^2 / 2 + mλ‖θ − θ_0‖_2^2 / 2.   (2.3)

Problem (2.3) is an ℓ_2-regularized squared-loss minimization, where the regularization term is centered at the randomly initialized network parameter θ_0. We use gradient descent to solve (2.3) with step size η and a total of J iterations.

Algorithm 1 Neural Thompson Sampling (NeuralTS)
Input: number of rounds T, exploration variance ν, network width m, regularization parameter λ.
1: Set U_0 = λI
2: Initialize θ_0 = (vec(W_1); ...; vec(W_L)) ∈ R^p, where for each 1 ≤ l ≤ L−1, W_l = (W, 0; 0, W) with each entry of W drawn independently from N(0, 4/m); W_L = (w^T, −w^T) with each entry of w drawn independently from N(0, 2/m).
3: for t = 1, ..., T do
4:   for k = 1, ..., K do
5:     σ^2_{t,k} = λ g^T(x_{t,k}; θ_{t−1}) U^{−1}_{t−1} g(x_{t,k}; θ_{t−1}) / m
6:     Sample estimated reward r̃_{t,k} ∼ N(f(x_{t,k}; θ_{t−1}), ν^2 σ^2_{t,k})
7:   end for
8:   Pull arm a_t = argmax_a r̃_{t,a} and receive reward r_{t,a_t}
9:   Set θ_t to the output of gradient descent for solving (2.3)
10:  U_t = U_{t−1} + g(x_{t,a_t}; θ_t) g(x_{t,a_t}; θ_t)^T / m
11: end for

A few observations about our algorithm are in order. First, compared to typical implementations of Thompson Sampling with neural networks, NeuralTS samples from the posterior distribution of the scalar reward instead of the network parameters. It is therefore simpler and more efficient, as the number of parameters in practice can be large. Second, the algorithm maintains posterior distributions related to the parameters of all layers of the network, as opposed to the last layer only (Riquelme et al., 2018).
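Algorithm 1's sampling-and-update loop can be sketched as follows. To keep the sketch self-contained, the neural network is replaced with a linear model f(x; θ) = θ·x, so the gradient g(x; θ) is simply x and the regularized fit (2.3) has a closed form (here centered at θ_0 = 0 rather than a random initialization). The posterior-variance formula and the U update mirror lines 5, 6 and 10; all dimensions and hyperparameters are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T, lam, nu, m = 4, 3, 200, 1.0, 0.5, 1   # m = 1: "width" of the linear stand-in
theta_true = rng.normal(size=d)                # hidden reward parameter

U = lam * np.eye(d)                            # line 1: U_0 = lambda I
theta = np.zeros(d)
X, y = [], []
total_reward = 0.0
for t in range(T):
    contexts = rng.normal(size=(K, d))
    sampled = []
    for x in contexts:
        g = x                                              # gradient of linear f
        sigma2 = lam * g @ np.linalg.solve(U, g) / m       # line 5
        sampled.append(rng.normal(theta @ x, nu * np.sqrt(sigma2)))  # line 6
    a = int(np.argmax(sampled))                            # line 8: greedy arm
    x = contexts[a]
    r = theta_true @ x + 0.1 * rng.normal()
    total_reward += r
    X.append(x); y.append(r)
    A = np.array(X)                                        # closed-form stand-in
    theta = np.linalg.solve(A.T @ A + m * lam * np.eye(d), A.T @ np.array(y))  # line 9
    U += np.outer(x, x) / m                                # line 10

print(total_reward)
```

Swapping the linear model for the network of Eq. (2.2), with g computed by backpropagation and line 9 solved by gradient descent, recovers the full algorithm.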
This difference is crucial in our regret analysis. It allows us to build a connection between Algorithm 1 and recent work on deep learning theory (Allen-Zhu et al., 2018; Cao & Gu, 2019) in order to obtain the theoretical guarantees shown in the next section. Third, unlike linear or kernelized TS (Agrawal & Goyal, 2013; Chowdhury & Gopalan, 2017), whose posteriors can be computed in closed form, NeuralTS solves a non-convex optimization problem (2.3) by gradient descent. This difference requires additional techniques in the regret analysis. Moreover, stochastic gradient descent can be used to solve the optimization problem with a similar theoretical guarantee (Allen-Zhu et al., 2018; Du et al., 2018; Zou et al., 2019). For simplicity of exposition, we focus on the exact gradient descent approach.

3 REGRET ANALYSIS . In this section, we provide a regret analysis of NeuralTS. We assume that there exists an unknown reward function h such that for any 1 ≤ t ≤ T and 1 ≤ k ≤ K, r_{t,k} = h(x_{t,k}) + ξ_{t,k} with |h(x_{t,k})| ≤ 1, where {ξ_{t,k}} forms an R-sub-Gaussian martingale difference sequence with constant R > 0, i.e., E[exp(λξ_{t,k}) | ξ_{1:t−1,k}, x_{1:t,k}] ≤ exp(λ^2 R^2) for all λ ∈ R. Such an assumption on the noise sequence is widely adopted in the contextual bandit literature (Agrawal & Goyal, 2013; Bubeck & Cesa-Bianchi, 2012; Chowdhury & Gopalan, 2017; Chu et al., 2011; Lattimore & Szepesvári, 2020; Valko et al., 2013). Next, we provide the necessary background on neural tangent kernel (NTK) theory (Jacot et al., 2018), which plays a crucial role in our analysis. We denote by {x^i}_{i=1}^{TK} the set of observed contexts over all arms and rounds, {x_{t,k}}_{1≤t≤T, 1≤k≤K}, where i = K(t−1) + k. Definition 3.1 (Jacot et al. (2018)).
Define

H̃^{(1)}_{i,j} = Σ^{(1)}_{i,j} = ⟨x^i, x^j⟩,
A^{(l)}_{i,j} = ( Σ^{(l)}_{i,i}, Σ^{(l)}_{i,j} ; Σ^{(l)}_{i,j}, Σ^{(l)}_{j,j} ),
Σ^{(l+1)}_{i,j} = 2 E_{(u,v) ∼ N(0, A^{(l)}_{i,j})} [ max{u, 0} max{v, 0} ],
H̃^{(l+1)}_{i,j} = 2 H̃^{(l)}_{i,j} E_{(u,v) ∼ N(0, A^{(l)}_{i,j})} [ 1(u ≥ 0) 1(v ≥ 0) ] + Σ^{(l+1)}_{i,j}.

Then H = (H̃^{(L)} + Σ^{(L)}) / 2 is called the neural tangent kernel matrix on the context set. The NTK technique builds a connection between deep neural networks and kernel methods. It enables us to adapt complexity measures for kernel methods to describe the complexity of the neural network, as given by the following definition.

Definition 3.2. The effective dimension d̃ of the matrix H with regularization parameter λ is defined as

d̃ = log det(I + H/λ) / log(1 + TK/λ).

Remark 3.3. The effective dimension is a metric describing the actual underlying dimension of the set of observed contexts, and has been used by Valko et al. (2013) for the analysis of kernel UCB. Our definition is adapted from Yang & Wang (2019), which also considers UCB-based exploration. Compared with the maximum information gain γ_t used in Chowdhury & Gopalan (2017), their Lemma 3 shows that γ_t ≥ log det(I + H/λ) / 2. Therefore, γ_t and d̃ are of the same order up to a factor of 1/(2 log(1 + TK/λ)). Furthermore, d̃ can be upper bounded if all contexts x^i lie nearly on some low-dimensional subspace of the RKHS spanned by the NTK (Appendix D). We make a regularity assumption on the contexts and the corresponding NTK matrix H.

Assumption 3.4. Let H be as in Definition 3.1. There exists λ_0 > 0 such that H ⪰ λ_0 I. In addition, for any t ∈ [T], k ∈ [K], ‖x_{t,k}‖_2 = 1 and [x_{t,k}]_j = [x_{t,k}]_{j+d/2}. The assumption that the NTK matrix is positive definite has been considered in prior work on NTK (Arora et al., 2019; Du et al., 2018).
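Definition 3.2 can be computed directly from the log-determinant. In the sketch below, the matrix H is an arbitrary positive-definite Gram matrix built from contexts near a low-dimensional subspace, not an actual NTK; it only illustrates the formula and the point that d̃ can be far below the ambient context count TK when the contexts are approximately low-rank.

```python
import numpy as np

def effective_dimension(H, lam, T, K):
    """Effective dimension: log det(I + H/lam) / log(1 + T*K/lam)."""
    n = H.shape[0]
    sign, logdet = np.linalg.slogdet(np.eye(n) + H / lam)
    return logdet / np.log(1 + T * K / lam)

rng = np.random.default_rng(0)
Z = rng.normal(size=(12, 3))        # 12 contexts near a 3-dim subspace
H = Z @ Z.T + 1e-6 * np.eye(12)     # low-rank (plus jitter) Gram matrix
d_eff = effective_dimension(H, lam=1.0, T=4, K=3)   # TK = 12 contexts
print(d_eff)
assert d_eff < 12   # far below the ambient count when contexts are low-rank
```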
The assumption on the contexts x_{t,k} ensures that the initial output of the neural network, f(x; θ_0), is 0 under the random initialization in Algorithm 1. The condition on x is easy to satisfy: for any context x, one can always construct a new context x̃ = [x/(√2‖x‖_2), x/(√2‖x‖_2)]^T. We are now ready to present the main result of the paper.

Theorem 3.5. Under Assumption 3.4, set the parameters in Algorithm 1 as λ = 1 + 1/T and

ν = B + R √( d̃ log(1 + TK/λ) + 2 + 2 log(1/δ) ),

where B = max{ 1/(22e√π), √(2 h^T H^{−1} h) } with h = (h(x^1), ..., h(x^{TK}))^T, and R is the sub-Gaussian parameter. In line 9 of Algorithm 1, set η = C_1 (mλ + mLT)^{−1} and J = (1 + LT/λ)(C_2 + log(T^3 L λ^{−1} log(1/δ)))/C_1 for some positive constants C_1, C_2. If the network width satisfies m ≥ poly(λ, T, K, L, log(1/δ), λ_0^{−1}), then, with probability at least 1 − δ, the regret of Algorithm 1 is bounded as

R_T ≤ C_3 (1 + c_T) ν √( 2λL (d̃ log(1 + TK) + 1) T ) + ( 4 + C_4 (1 + c_T) ν L ) √( 2 log(3/δ) T ) + 5,

where C_3, C_4 are positive absolute constants and c_T = √(4 log T + 2 log K).

Remark 3.6. The definition of B in Theorem 3.5 is inspired by the RKHS norm of the reward function defined in Chowdhury & Gopalan (2017). It can be verified that when the reward function h belongs to the function space induced by the NTK, i.e., ‖h‖_H < ∞, we have √(h^T H^{−1} h) ≤ ‖h‖_H according to Zhou et al. (2019), which implies B ≤ max{ 1/(22e√π), √2 ‖h‖_H }.

Remark 3.7. Theorem 3.5 implies that the regret of NeuralTS is of order Õ(d̃ T^{1/2}). This matches the state-of-the-art regret bounds in Chowdhury & Gopalan (2017); Agrawal & Goyal (2013); Zhou et al. (2019); Kveton et al. (2020).

Remark 3.8. The requirement on m in Theorem 3.5 is specified in Condition 4.1 and in the proof of Theorem 3.5; it is a high-degree polynomial in the time horizon T, the number of layers L and the number of actions K.
However, in our experiments we can choose a reasonably small m (e.g., m = 100) and still obtain good performance from NeuralTS; see Appendix A.1 for more details. This discrepancy between theory and practice is due to the limitations of current NTK theory (Du et al., 2018; Allen-Zhu et al., 2018; Zou et al., 2019). Closing the gap is an avenue for future work.

Remark 3.9. Theorem 3.5 suggests that we need to know T before running the algorithm in order to set m. When T is unknown, we can use the standard doubling trick (see, e.g., Cesa-Bianchi & Lugosi (2006)) to set m adaptively. In detail, we decompose the time interval (0, +∞) into a union of non-overlapping intervals [2^s, 2^{s+1}). When 2^s ≤ t < 2^{s+1}, we restart NeuralTS with the input T = 2^{s+1}. It can be verified that a similar Õ(d̃√T) regret still holds.
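The doubling-trick schedule from Remark 3.9 is mechanical enough to sketch: restart the algorithm at the start of each interval [2^s, 2^{s+1}) with the horizon guess T = 2^{s+1}.

```python
def doubling_schedule(t_max):
    """Return (restart_round, horizon_guess) pairs covering rounds 1..t_max:
    a restart at round 2^s runs the algorithm afresh with input T = 2^(s+1)."""
    schedule, s = [], 0
    while 2 ** s <= t_max:
        schedule.append((2 ** s, 2 ** (s + 1)))
        s += 1
    return schedule

print(doubling_schedule(20))
# -> [(1, 2), (2, 4), (4, 8), (8, 16), (16, 32)]
```

Since the per-interval regrets sum geometrically, the overall bound only loses a constant factor relative to knowing T in advance.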
The paper proposes Neural Thompson Sampling (TS), a method to run TS without assuming that the reward is a linear function of the context, as is generally assumed in the literature. This is not the first paper to use neural networks for TS; however, existing papers either a) applied TS only to the last layer, or b) maintained uncertainty over the weights and sampled an entire neural network. This paper instead maintains a single network that computes the mean of the reward distribution of an arm.
SP:e170d43e3733bc0cd7e38f380b63281056dce095
Neural Thompson Sampling
1 INTRODUCTION . The stochastic multi-armed bandit ( Bubeck & Cesa-Bianchi , 2012 ; Lattimore & Szepesvári , 2020 ) has been extensively studied , as an important model to optimize the trade-off between exploration and exploitation in sequential decision making . Among its many variants , the contextual bandit is widely used in real-world applications such as recommendation ( Li et al. , 2010 ) , advertising ( Graepel et al. , 2010 ) , robotic control ( Mahler et al. , 2016 ) , and healthcare ( Greenewald et al. , 2017 ) . In each round of a contextual bandit , the agent observes a feature vector ( the “ context ” ) for each of the K arms , pulls one of them , and in return receives a scalar reward . The goal is to maximize the cumulative reward , or minimize the regret ( to be defined later ) , in a total of T rounds . To do so , the agent must find a trade-off between exploration and exploitation . One of the most effective and widely used techniques is Thompson Sampling , or TS ( Thompson , 1933 ) . The basic idea is to compute the posterior distribution of each arm being optimal for the present context , and sample an arm from this distribution . TS is often easy to implement , and has found great success in practice ( Chapelle & Li , 2011 ; Graepel et al. , 2010 ; Kawale et al. , 2015 ; Russo et al. , 2017 ) . Recently , a series of work has applied TS or its variants to explore in contextual bandits with neural network models ( Blundell et al. , 2015 ; Kveton et al. , 2020 ; Lu & Van Roy , 2017 ; Riquelme et al. , 2018 ) . Riquelme et al . ( 2018 ) proposed NeuralLinear , which maintains a neural network and chooses the best arm in each round according to a Bayesian linear regression on top of the last network layer . Kveton et al . ( 2020 ) proposed DeepFPL , which trains a neural network based on perturbed training data and chooses the best arm in each round based on the neural network output . 
Similar approaches have also been used in more general reinforcement learning problem ( e.g. , Azizzadenesheli et al. , 2018 ; Fortunato et al. , 2018 ; Lipton et al. , 2018 ; Osband et al. , 2016a ) . Despite the reported empirical success , strong regret guarantees for TS remain limited to relatively simple models , under fairly restrictive assumptions on the reward function . Examples are linear functions ( Abeille & Lazaric , 2017 ; Agrawal & Goyal , 2013 ; Kocák et al. , 2014 ; Russo & Van Roy , 2014 ) , generalized linear functions ( Kveton et al. , 2020 ; Russo & Van Roy , 2014 ) , or functions with small RKHS norm induced by a properly selected kernel ( Chowdhury & Gopalan , 2017 ) . In this paper , we provide , to the best of our knowledge , the first near-optimal regret bound for neural network-based Thompson Sampling . Our contributions are threefold . First , we propose a new algorithm , Neural Thompson Sampling ( NeuralTS ) , to incorporate TS exploration with neural networks . It differs from NeuralLinear ( Riquelme et al. , 2018 ) by considering weight uncertainty in all layers , and from other neural network-based TS implementations ( Blundell et al. , 2015 ; Kveton et al. , 2020 ) by sampling the estimated reward from the posterior ( as opposed to sampling parameters ) . Second , we give a regret analysis for the algorithm , and obtain an Õ ( d̃ √ T ) regret , where d̃ is the effective dimension and T is the number of rounds . This result is comparable to previous bounds when specialized to the simpler , linear setting where the effective dimension coincides with the feature dimension ( Agrawal & Goyal , 2013 ; Chowdhury & Gopalan , 2017 ) . Finally , we corroborate the analysis with an empirical evaluation of the algorithm on several benchmarks . Experiments show that NeuralTS yields competitive performance , in comparison with stateof-the-art baselines , thus suggest its practical value in addition to strong theoretical guarantees . 
Notation : Scalars and constants are denoted by lower and upper case letters , respectively . Vectors are denoted by lower case bold face letters x , and matrices by upper case bold face letters A . We denote by [ k ] the set { 1 , 2 , · · · , k } for positive integers k. For two non-negative sequence { an } , { bn } , an = O ( bn ) means that there exists a positive constant C such that an ≤ Cbn , and we use Õ ( · ) to hide the log factor inO ( · ) . We denote by ‖ · ‖2 the Euclidean norm of vectors and the spectral norm of matrices , and by ‖ · ‖F the Frobenius norm of a matrix . 2 PROBLEM SETTING AND PROPOSED ALGORITHM . In this work , we consider contextualK-armed bandits , where the total number of rounds T is known . At round t ∈ [ T ] , the agent observes K contextual vectors { xt , k ∈ Rd | k ∈ [ K ] } . Then the agent selects an arm at and receives a reward rt , at . Our goal is to minimize the following pseudo regret : RT = E [ T∑ t=1 ( rt , a∗t − rt , at ) ] , ( 2.1 ) where a∗t is the optimal arm at round t that has the maximum expected reward : a ∗ t = argmaxa∈ [ K ] E [ rt , a ] . To estimate the unknown reward given a contextual vector x , we use a fully connected neural network f ( x ; θ ) of depth L ≥ 2 , defined recursively by f1 = W1 x , fl = Wl ReLU ( fl−1 ) , 2 ≤ l ≤ L , f ( x ; θ ) = √ mfL , ( 2.2 ) where ReLU ( x ) : = max { x , 0 } , m is the width of neural network , W1 ∈ Rm×d , Wl ∈ Rm×m , 2 ≤ l < L , WL ∈ R1×m , θ = ( vec ( W1 ) ; · · · ; vec ( WL ) ) ∈ Rp is the collection of parameters of the neural network , p = dm + m2 ( L − 2 ) + m , and g ( x ; θ ) = ∇θf ( x ; θ ) is the gradient of f ( x ; θ ) w.r.t . θ . Our Neural Thompson Sampling is given in Algorithm 1 . It maintains a Gaussian distribution for each arm ’ s reward . When selecting an arm , it samples the reward of each arm from the reward ’ s posterior distribution , and then pulls the greedy arm ( lines 4–8 ) . 
Once the reward is observed , it updates the posterior ( lines 9 & 10 ) . The mean of the posterior distribution is set to the output of the neural network , whose parameter is the solution to the following minimization problem : min θ L ( θ ) = t∑ i=1 [ f ( xi , ai ; θ ) − ri , ai ] 2/2 +mλ‖θ − θ0‖22/2 . ( 2.3 ) We can see that ( 2.3 ) is an ` 2-regularized square loss minimization problem , where the regularization term centers at the randomly initialized network parameter θ0 . We adapt gradient descent to solve ( 2.3 ) with step size η and total number of iterations J . Algorithm 1 Neural Thompson Sampling ( NeuralTS ) Input : Number of rounds T , exploration variance ν , network width m , regularization parameter λ . 1 : Set U0 = λI 2 : Initialize θ0 = ( vec ( W1 ) ; · · · ; vec ( WL ) ) ∈ Rp , where for each 1 ≤ l ≤ L − 1 , Wl = ( W,0 ; 0 , W ) , each entry of W is generated independently from N ( 0 , 4/m ) ; WL = ( w > , −w > ) , each entry of w is generated independently from N ( 0 , 2/m ) . 3 : for t = 1 , · · · , T do 4 : for k = 1 , · · · , K do 5 : σ2t , k = λg > ( xt , k ; θt−1 ) U −1 t−1 g ( xt , k ; θt−1 ) /m 6 : Sample estimated reward r̃t , k ∼ N ( f ( xt , k ; θt−1 ) , ν2σ2t , k ) 7 : end for 8 : Pull arm at and receive reward rt , at , where at = argmaxa r̃t , a 9 : Set θt to be the output of gradient descent for solving ( 2.3 ) 10 : Ut = Ut−1 + g ( xt , at ; θt ) g ( xt , at ; θt ) > /m 11 : end for A few observations about our algorithm are in place . First , compared to typical ways of implementing Thompson Sampling with neural networks , NeuralTS samples from the posterior distribution of the scalar reward , instead of the network parameters . It is therefore simpler and more efficient , as the number of parameters in practice can be large . Second , the algorithm maintains the posterior distributions related to parameters of all layers of the network , as opposed to the last layer only ( Riquelme et al. , 2018 ) . 
This difference is crucial in our regret analysis. It allows us to build a connection between Algorithm 1 and recent work on deep learning theory (Allen-Zhu et al., 2018; Cao & Gu, 2019), in order to obtain the theoretical guarantees shown in the next section. Third, different from linear or kernelized TS (Agrawal & Goyal, 2013; Chowdhury & Gopalan, 2017), whose posteriors can be computed in closed form, NeuralTS solves a non-convex optimization problem (2.3) by gradient descent. This difference requires additional techniques in the regret analysis. Moreover, stochastic gradient descent can be used to solve the optimization problem with a similar theoretical guarantee (Allen-Zhu et al., 2018; Du et al., 2018; Zou et al., 2019). For simplicity of exposition, we will focus on the exact gradient descent approach.

3 REGRET ANALYSIS

In this section, we provide a regret analysis of NeuralTS. We assume that there exists an unknown reward function h such that for any 1 ≤ t ≤ T and 1 ≤ k ≤ K, r_{t,k} = h(x_{t,k}) + ξ_{t,k} with |h(x_{t,k})| ≤ 1, where {ξ_{t,k}} forms an R-sub-Gaussian martingale difference sequence with constant R > 0, i.e., E[exp(λξ_{t,k}) | ξ_{1:t−1,k}, x_{1:t,k}] ≤ exp(λ²R²) for all λ ∈ R. Such an assumption on the noise sequence is widely adopted in the contextual bandit literature (Agrawal & Goyal, 2013; Bubeck & Cesa-Bianchi, 2012; Chowdhury & Gopalan, 2017; Chu et al., 2011; Lattimore & Szepesvári, 2020; Valko et al., 2013). Next, we provide the necessary background on neural tangent kernel (NTK) theory (Jacot et al., 2018), which plays a crucial role in our analysis. In the analysis, we denote by {x^i}_{i=1}^{TK} the set of observed contexts over all arms and all rounds, {x_{t,k}}_{1≤t≤T, 1≤k≤K}, where i = K(t − 1) + k. Definition 3.1 (Jacot et al. (2018)).
Define

H̃^{(1)}_{i,j} = Σ^{(1)}_{i,j} = ⟨x^i, x^j⟩,
A^{(l)}_{i,j} = ( Σ^{(l)}_{i,i}, Σ^{(l)}_{i,j} ; Σ^{(l)}_{i,j}, Σ^{(l)}_{j,j} ),
Σ^{(l+1)}_{i,j} = 2 E_{(u,v) ∼ N(0, A^{(l)}_{i,j})} [ max{u, 0} max{v, 0} ],
H̃^{(l+1)}_{i,j} = 2 H̃^{(l)}_{i,j} E_{(u,v) ∼ N(0, A^{(l)}_{i,j})} [ 1(u ≥ 0) 1(v ≥ 0) ] + Σ^{(l+1)}_{i,j}.

Then H = (H̃^{(L)} + Σ^{(L)})/2 is called the neural tangent kernel matrix on the context set. The NTK technique builds a connection between deep neural networks and kernel methods. It enables us to adapt complexity measures for kernel methods to describe the complexity of the neural network, as given by the following definition. Definition 3.2. The effective dimension d̃ of the matrix H with regularization parameter λ is defined as

d̃ = log det(I + H/λ) / log(1 + TK/λ).

Remark 3.3. The effective dimension is a metric describing the actual underlying dimension of the set of observed contexts, and has been used by Valko et al. (2013) for the analysis of kernel UCB. Our definition here is adapted from Yang & Wang (2019), which also considers UCB-based exploration. Compared with the maximum information gain γ_t used in Chowdhury & Gopalan (2017), one can verify from their Lemma 3 that γ_t ≥ log det(I + H/λ)/2. Therefore, γ_t and d̃ are of the same order up to a factor of 1/(2 log(1 + TK/λ)). Furthermore, d̃ can be upper bounded if all contexts x^i lie nearly on some low-dimensional subspace of the RKHS spanned by the NTK (Appendix D). We will make a regularity assumption on the contexts and the corresponding NTK matrix H. Assumption 3.4. Let H be defined as in Definition 3.1. There exists λ_0 > 0 such that H ⪰ λ_0 I. In addition, for any t ∈ [T], k ∈ [K], ‖x_{t,k}‖_2 = 1 and [x_{t,k}]_j = [x_{t,k}]_{j+d/2}. The assumption that the NTK matrix is positive definite has been considered in prior work on NTK (Arora et al., 2019; Du et al., 2018).
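Remark 3.3's point that d̃ stays small when the observed contexts lie near a low-dimensional subspace can be checked numerically from Definition 3.2. In this hedged illustration, `effective_dimension` is our name, and we use arbitrary toy Gram matrices (a rank-one matrix and the identity) rather than the actual NTK:

```python
import numpy as np

def effective_dimension(H, lam, T, K):
    """d_tilde = log det(I + H/lam) / log(1 + T*K/lam)  (Definition 3.2)."""
    n = H.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(n) + H / lam)  # stable log-determinant
    return logdet / np.log(1.0 + T * K / lam)

T, K = 3, 2                       # TK = 6 observed contexts
x = np.ones((T * K, 1))
H_rank1 = x @ x.T                 # all contexts on one direction: rank 1
H_full = np.eye(T * K)            # "spread out" contexts: full rank

d_low = effective_dimension(H_rank1, lam=1.0, T=T, K=K)
d_high = effective_dimension(H_full, lam=1.0, T=T, K=K)
```

For the rank-one case, log det(I + H) = log(1 + TK) exactly cancels the denominator, so d̃ = 1; the full-rank matrix gives a strictly larger value.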
The assumption on the context x_{t,k} ensures that the initial output of the neural network f(x; θ_0) is 0 under the random initialization suggested in Algorithm 1. The condition on x is easy to satisfy, since for any context x one can always construct a new context x̃ as [x^T/(√2 ‖x‖_2), x^T/(√2 ‖x‖_2)]^T. We are now ready to present the main result of the paper: Theorem 3.5. Under Assumption 3.4, set the parameters in Algorithm 1 as λ = 1 + 1/T and

ν = B + R √( d̃ log(1 + TK/λ) + 2 + 2 log(1/δ) ),

where B = max{ 1/(22e√π), √( 2 h^T H^{−1} h ) } with h = (h(x^1), ..., h(x^{TK}))^T, and R is the sub-Gaussian parameter. In line 9 of Algorithm 1, set η = C_1 (mλ + mLT)^{−1} and J = (1 + LT/λ)(C_2 + log(T³ L λ^{−1} log(1/δ)))/C_1 for some positive constants C_1, C_2. If the network width m satisfies m ≥ poly(λ, T, K, L, log(1/δ), λ_0^{−1}), then, with probability at least 1 − δ, the regret of Algorithm 1 is bounded as

R_T ≤ C_3 (1 + c_T) ν √( 2λL (d̃ log(1 + TK) + 1) T ) + ( 4 + C_4 (1 + c_T) ν L ) √( 2 log(3/δ) T ) + 5,

where C_3, C_4 are positive absolute constants and c_T = √(4 log T + 2 log K). Remark 3.6. The definition of B in Theorem 3.5 is inspired by the RKHS norm of the reward function defined in Chowdhury & Gopalan (2017). It can be verified that when the reward function h belongs to the function space induced by the NTK, i.e., ‖h‖_H < ∞, we have √(h^T H^{−1} h) ≤ ‖h‖_H according to Zhou et al. (2019), which implies B ≤ max{1/(22e√π), √2 ‖h‖_H}. Remark 3.7. Theorem 3.5 implies that the regret of NeuralTS is on the order of Õ(d̃ √T). This result matches the state-of-the-art regret bounds in Chowdhury & Gopalan (2017); Agrawal & Goyal (2013); Zhou et al. (2019); Kveton et al. (2020). Remark 3.8. In Theorem 3.5, the requirement on m is specified in Condition 4.1 and the proof of Theorem 3.5, and is a high-degree polynomial in the time horizon T, the number of layers L, and the number of actions K.
However, in our experiments we can choose a reasonably small m (e.g., m = 100) and still obtain good performance from NeuralTS. See Appendix A.1 for more details. This discrepancy between theory and practice is due to the limitations of current NTK theory (Du et al., 2018; Allen-Zhu et al., 2018; Zou et al., 2019). Closing the gap is an avenue for future work. Remark 3.9. Theorem 3.5 suggests that we need to know T before we run the algorithm in order to set m. When T is unknown, we can use the standard doubling trick (see, e.g., Cesa-Bianchi & Lugosi (2006)) to set m adaptively. In detail, we decompose the time interval (0, +∞) as a union of non-overlapping intervals [2^s, 2^{s+1}). When 2^s ≤ t < 2^{s+1}, we restart NeuralTS with the input T = 2^{s+1}. It can be verified that a similar Õ(d̃√T) regret still holds.
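Two practical details from this section, the context construction behind Assumption 3.4 and the doubling-trick schedule of Remark 3.9, can be sketched as follows (function names are ours; this is an illustrative sketch, not code from the paper):

```python
import numpy as np

def preprocess_context(x):
    """Build x_tilde = [x/(sqrt(2)||x||_2); x/(sqrt(2)||x||_2)] so that
    ||x_tilde||_2 = 1 and [x_tilde]_j = [x_tilde]_{j + d/2} (Assumption 3.4).
    With the paired initialization W_L = (w^T, -w^T) of Algorithm 1, the two
    identical halves cancel, giving f(x_tilde; theta_0) = 0."""
    u = x / (np.sqrt(2) * np.linalg.norm(x))
    return np.concatenate([u, u])

def doubling_epochs(t_max):
    """Remark 3.9: cover rounds 1..t_max by intervals [2^s, 2^(s+1));
    each restart of NeuralTS is fed the horizon input T = 2^(s+1).
    Returns (first_round, last_round, horizon_input) triples."""
    epochs, s = [], 0
    while 2 ** s <= t_max:
        epochs.append((2 ** s, min(2 ** (s + 1) - 1, t_max), 2 ** (s + 1)))
        s += 1
    return epochs

xt = preprocess_context(np.array([3.0, 4.0]))
schedule = doubling_epochs(10)
```

The preprocessing doubles the context dimension from d to 2d, which is why Assumption 3.4 refers to the index pair (j, j + d/2) in the transformed vector.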
The paper proposes a novel Thompson sampling algorithm for neural networks which can be applied to any arbitrary, bounded reward function. While existing works apply Thompson sampling (TS) to neural networks in a heuristic way (e.g., sampling parameters in the last layer only), this algorithm considers the posterior distribution of all the parameters and is the first to provide a theoretical, tight regret upper bound. The work builds on the neural tangent kernel theory, which enables the use of techniques developed for linear reward functions (Agrawal and Goyal, 2013). Actually, the paper is analogous to the work of Zhou et al. (2020) which proposed the NeuralUCB algorithm by combining the neural tangent kernel theory and the Linear UCB methodology (Abbasi-Yadkori et al., 2011).
WeMix: How to Better Utilize Data Augmentation
1 INTRODUCTION

Data augmentation (Baird, 1992; Schmidhuber, 2015) has been a key to the success of deep learning in image classification (He et al., 2019), and is becoming increasingly common in other tasks such as natural language processing (Zhang et al., 2015) and object detection (Zoph et al., 2019). Data augmentation expands the training set by generating virtual instances through random augmentations of the original ones. This alleviates the overfitting problem (Shorten & Khoshgoftaar, 2019) when training large deep neural networks. Despite many encouraging results, data augmentation does not always improve generalization error (Min et al., 2020; Raghunathan et al., 2020). In particular, Raghunathan et al. (2020) showed that training on augmented data leads to a smaller robust error but potentially a larger standard error. Therefore, it is critical to answer the following two questions before applying data augmentation in deep learning: • When will deep models benefit from data augmentation? • How can we better leverage augmented data during training? Several previous works (Raghunathan et al., 2020; Wu et al., 2020; Min et al., 2020) tried to address these questions. Their analysis is limited to specific problems such as linear ridge regression and therefore may not be applicable to deep learning. In this work, we aim to answer the two questions from a theoretical perspective under a more general non-convex setting. We address the first question in a more general form covering applications in deep learning. For the second question, we develop new approaches that are provably more effective than conventional data augmentation approaches. Most data augmentation operations alter the data distribution during training.
This imposes a data distribution bias (we simply say "data bias" in the rest of this paper) between the augmented data and the original data, which may make it difficult to fully leverage the augmented data. To be more concrete, consider label-mixing augmentation (e.g., mixup (Zhang et al., 2018; Tokozume et al., 2018)). Suppose we have n original data D = {(x_i, y_i), i = 1, ..., n}, where the input-label pair (x_i, y_i) follows a distribution P_xy = (P_x, P_y(·|x)), P_x is the marginal distribution of the inputs and P_y(·|x) is the conditional distribution of the labels given inputs; we generate m augmented data D̃ = {(x̃_i, ỹ_i), i = 1, ..., m}, where (x̃_i, ỹ_i) ∼ P_x̃ỹ = (P_x̃, P_ỹ(·|x̃)), with P_x = P_x̃ but P_y(·|x) ≠ P_ỹ(·|x̃). Given x ∼ P_x, the data bias is defined as δ_y = max_{y,ỹ} ‖y − ỹ‖. We will show that when the bias between D and D̃ is large, directly training on the augmented data will not be as effective as training on the original data. Given that augmented data may hurt performance, the next question is how to design better learning algorithms that unleash the power of augmented data. To this end, we develop two novel algorithms to alleviate the data bias. The first algorithm, termed AugDrop, corrects the data bias by introducing a constrained optimization problem. The second algorithm, termed MixLoss, corrects the data bias by introducing a modified loss function. We show, both theoretically and empirically, that even with a large data bias the proposed algorithms can still improve generalization performance by effectively leveraging the combination of augmented data and original data. We summarize the main contributions of this work as follows: • We prove that in a conventional training scheme, a deep model can benefit from augmented data when the data bias is small.
• We design two algorithms, termed AugDrop and MixLoss, that can better leverage augmented data, with theoretical guarantees, even when the data bias is large. • Based on our theoretical findings, we propose a new efficient algorithm, WeMix, combining AugDrop and MixLoss, which achieves better performance without extra training cost.

2 RELATED WORK

A series of empirical works (Cubuk et al., 2019; Ho et al., 2019; Lim et al., 2019; Lin et al., 2019a; Cubuk et al., 2020; Hataya et al., 2019) have proposed ways to learn good policies for applying different data augmentations, but without theoretical guarantees. In this section, we mainly focus on reviewing theoretical studies of data augmentation. For a survey of data augmentation, we refer readers to (Shorten & Khoshgoftaar, 2019) and references therein for a comprehensive overview. Several works have attempted to establish theoretical understandings of data augmentation from different perspectives (Dao et al., 2019; Chen et al., 2019; Rajput et al., 2019). Min et al. (2020) showed that, with more training data, weak augmentation can improve performance while strong augmentation always hurts performance. Later on, Chen et al. (2020) studied the gap between the generalization error (see the formal definition in (Chen et al., 2020)) of adversarially-trained models and standard models. Both of their theoretical analyses were built on special linear binary classification or linear regression models for label-preserving augmentation. Recently, Raghunathan et al. (2020) studied label-preserving transformations in data augmentation, which is identical to the first case in this paper. Their analysis is restricted to linear least squares regression under a noiseless setting, which is not applicable to training deep neural networks. Besides, their analysis requires infinite unlabeled data.
By contrast, we do not require the original data to be unlimited. Wu et al. (2020) considered linear data augmentations. There are several major differences between their work and ours. First, they focus on the ridge linear regression problem, which is strongly convex, while we consider non-convex optimization problems, which are more applicable to deep learning. Second, we study more general data augmentations beyond linear transformations.

3 PRELIMINARIES AND NOTATIONS

We study a learning problem of finding a classifier that maps an input x ∈ X onto a label y ∈ Y ⊂ R^K, where K is the number of classes. We assume the input-label pair (x, y) is drawn from a distribution P_xy = (P_x, P_y(·|x)). Since every augmented example (x̃, ỹ) is generated by applying a certain transformation to either one or multiple examples, we assume that (x̃, ỹ) is drawn from a slightly different distribution P_x̃ỹ = (P_x̃, P_ỹ(·|x̃)), where P_x̃ is the marginal distribution of the inputs x̃ and P_ỹ(·|x̃) (written as P_ỹ for simplicity) is the conditional distribution of the labels ỹ given inputs x̃. We sample n training examples (x_i, y_i), i = 1, ..., n from the distribution P_xy and m training examples (x̃_i, ỹ_i), i = 1, ..., m from P_x̃ỹ. We assume that m ≫ n due to the data augmentation. We denote by D = {(x_i, y_i), i = 1, ..., n} and D̃ = {(x̃_i, ỹ_i), i = 1, ..., m} the datasets sampled from P_xy and P_x̃ỹ, respectively. We denote by T(x) the set of augmented data transformed from x. We use the notation E_{(x,y)∼P_xy}[·] for the expectation taken over a random variable (x, y) following the distribution P_xy. We denote by ∇_w h(w) the gradient of a function h(w) with respect to the variable w; when the variable is obvious, we write ∇h(w) for simplicity. We use ‖·‖ for the Euclidean norm of a vector or the spectral norm of a matrix.
The augmented data D̃ can differ from the original data D in two cases, according to (Raghunathan et al., 2020). In the first case, often referred to as label-preserving, we consider

P_y(·|x) = P_ỹ(·|x̃), ∀x̃ ∈ T(x), but P_x ≠ P_x̃.   (1)

In the second case, often referred to as label-mixing, we consider

P_x = P_x̃ but P_y(·|x) ≠ P_ỹ(·|x̃), ∃x̃ ∈ T(x).   (2)

Examples of label-preserving augmentation include translation, adding noise, small rotations, and brightness or contrast changes (Krizhevsky et al., 2012; Raghunathan et al., 2020). One important example of label-mixing augmentation is mixup (Zhang et al., 2018; Tokozume et al., 2018). Due to space limitations, we focus on the label-mixing case; the related studies and analysis for the label-preserving case can be found in Appendix A. To further quantify the difference between original data and augmented data when P_x = P_x̃ and P_y ≠ P_ỹ, we introduce the data bias δ_y given x ∼ P_x as follows:

δ_y := max_{y,ỹ} ‖y − ỹ‖.   (3)

The quantity in (3) measures the difference between a label from the original data and a label from the augmented data given input x. We aim to learn a prediction function f(x; w): R^D × X → R^K that is as close as possible to y, where w ∈ R^D is the parameter and R^D is a closed convex set. We define two objective functions for the optimization problems over the original data and the augmented data, respectively, as

L(w) = E_{(x,y)}[ℓ(y, f(x; w))],   L̃(w) = E_{(x̃,ỹ)}[ℓ(ỹ, f(x̃; w))],   (4)

where ℓ is the cross-entropy loss function given by

ℓ(y, f(x; w)) = Σ_{i=1}^K y_i p_i(x; w), where p_i(x; w) = −log( exp(f_i(x; w)) / Σ_{j=1}^K exp(f_j(x; w)) ).   (5)

We denote by w* and w̃* the optimal solutions to min_w L(w) and min_w L̃(w), respectively:

w* ∈ argmin_{w ∈ R^D} L(w),   w̃* ∈ argmin_{w ∈ R^D} L̃(w).   (6)

Taking L(w) as an example, we introduce some function properties used in our analysis. Definition 1. The stochastic gradients of the objective function L(w) are unbiased and bounded if E_{(x,y)}[∇_w ℓ(y, f(x; w))] = ∇L(w) and there exists a constant G > 0 such that ‖∇_w p(x; w)‖ ≤ G, ∀x ∈ X, ∀w ∈ R^D, where p(x; w) = (p_1(x; w), ..., p_K(x; w)) is a vector. Definition 2. L(w) is smooth with an L-Lipschitz continuous gradient if there exists a constant L > 0 such that ‖∇L(w) − ∇L(u)‖ ≤ L‖w − u‖, ∀w, u ∈ R^D, or equivalently, L(w) − L(u) ≤ ⟨∇L(u), w − u⟩ + (L/2)‖w − u‖², ∀w, u ∈ R^D. The above properties are standard and widely used in the literature on non-convex optimization (Ghadimi & Lan, 2013; Yan et al., 2018; Yuan et al., 2019; Wang et al., 2019; Li et al., 2020). We also introduce an important property, termed the Polyak-Łojasiewicz (PL) condition (Polyak, 1963), on the objective function L(w). Definition 3 (PL condition). L(w) satisfies the PL condition if there exists a constant µ > 0 such that 2µ(L(w) − L(w*)) ≤ ‖∇L(w)‖², ∀w ∈ R^D, where w* is defined in (6). The PL condition has been observed in training deep and shallow neural networks (Allen-Zhu et al., 2019; Xie et al., 2017), and is widely used in many non-convex optimization studies (Karimi et al., 2016; Li & Li, 2018; Charles & Papailiopoulos, 2018; Yuan et al., 2019; Li et al., 2020). It has also been theoretically verified in (Allen-Zhu et al., 2019) and empirically estimated in (Yuan et al., 2019) for deep neural networks. It is worth noting that the PL condition is weaker than many conditions such as strong convexity, restricted strong convexity, and weak strong convexity (Karimi et al., 2016). Finally, we refer to κ = L/µ as the condition number throughout this study.
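Two of the quantities above are easy to check numerically: the mixup data bias of (3), and the PL condition of Definition 3 on a toy quadratic objective, for which µ is the smallest eigenvalue of the Hessian. The names and the toy objective here are ours, chosen purely for illustration:

```python
import numpy as np

# --- data bias (3) under mixup: y_tilde = lam*y1 + (1-lam)*y2 ---
y1 = np.array([1.0, 0.0, 0.0])            # one-hot labels, K = 3
y2 = np.array([0.0, 1.0, 0.0])
lam_mix = 0.7
y_tilde = lam_mix * y1 + (1 - lam_mix) * y2
delta_y = np.linalg.norm(y1 - y_tilde)    # grows as lam_mix moves away from 1

# --- PL condition (Definition 3) for L(w) = 0.5 w^T A w with w* = 0 ---
A = np.diag([0.5, 2.0])                   # mu = 0.5, L = 2.0, kappa = L/mu = 4
L_obj = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
mu = 0.5
rng = np.random.default_rng(1)
pl_holds = all(2 * mu * L_obj(w) <= grad(w) @ grad(w) + 1e-12
               for w in rng.normal(size=(100, 2)))
```

The quadratic is strongly convex, so it satisfies PL a fortiori; the point of Definition 3 is that PL can also hold for non-convex objectives where strong convexity fails.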
This paper proposes a method to improve the generalization of deep networks when strong augmentations (i.e., mixup) are applied to the input, resulting in a large data bias. The authors propose the theoretical foundation behind their two methods, AugDrop and MixLoss, and show their effectiveness on the CIFAR10/CIFAR100 datasets. Although the paper has technical novelty, it only provides incremental gains on relatively small-size datasets. Below, you can find some of my comments/questions about the paper.
In this work, the authors first prove that a deep model can benefit from augmented data when the data bias is small. They then propose two methods, "AugDrop" and "MixLoss", which correct data bias via constrained optimisation and a modified loss function, respectively. Finally, they show that these two methods can be combined to further improve performance.
Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
1 INTRODUCTION

Deep neural networks (DNNs) have shown promising performance on various tasks including computer vision, natural language processing, speech recognition, etc. However, a DNN usually comes with a large number of learnable parameters, ranging from millions to even billions (e.g., GPT-3 (Brown et al., 2020)), making the model burdensome and difficult to apply in real-world deployments. Therefore, researchers have investigated how to speed up and compress DNNs via various methods such as knowledge distillation (Hinton et al., 2015), quantization (Jacob et al., 2018; Zhou et al., 2017), designing efficient model architectures (Howard et al., 2017), and structured sparsity (Wen et al., 2016; Li et al., 2016). In this paper, we focus on the problem of sparsifying DNNs. Sparsity in DNNs can be categorized into unstructured sparsity and structured sparsity. Unstructured sparsity prunes individual weights at any location, which is fine-grained and can achieve extremely high compression ratios (Han et al., 2015; Guo et al., 2016). However, unstructured sparsity struggles to take advantage of vector-processing architectures, which increases latency due to dependent sequences of reads (Nvidia, 2020). Compared with unstructured sparsity, structured sparsity is more friendly to hardware, especially for block pruning (Wang et al., 2019), kernel shape sparsity (Tan et al., 2020), or channel and filter pruning (Li et al., 2016; Wen et al., 2016). Although structured sparsity can speed up DNNs on commodity hardware, it hurts model performance more significantly than unstructured fine-grained sparsity. For example, a ResNet-50 network generated by unstructured pruning can achieve a 5.96× compression ratio with the same accuracy as the original network, but only a 1× compression ratio in the case of structured sparsity (Renda et al., 2020). Therefore, how to combine unstructured and structured sparsity to accelerate DNNs on modern hardware (e.g., GPUs) becomes a challenging yet valuable problem.

∗The first two authors contributed equally to this paper.

Recently, the Nvidia Ampere A100 was equipped with Sparse Tensor Cores to accelerate 2:4 structured fine-grained sparsity. Here, N:M sparsity indicates sparsity in which only N weights are non-zero in every group of M continuous weights. To the best of our knowledge, the A100 is the first commodity sparse hardware; its sparse tensor cores support several common operations including linear, convolutional and recurrent cells, transformer blocks, etc. Specifically, consider a typical matrix multiplication X × W in a DNN, where X and W denote the input tensor and parameter tensor, respectively. The Dense Tensor Cores implement an X_{16×32} × W_{32×8} matrix multiplication in 2 cycles, while the Sparse Tensor Cores need only 1 cycle if the parameter tensor W satisfies the 2:4 structured sparse pattern. Nvidia has proposed the ASP (APEX's Automatic Sparsity) solution (Nvidia, 2020) to sparsify a dense neural network to satisfy the 2:4 fine-grained structured sparsity requirement. The recipe contains three steps: (1) training a dense network until convergence; (2) pruning for 2:4 sparsity with magnitude-based single-shot pruning; (3) repeating the original training procedure. However, ASP is computationally expensive since it requires training the full dense model from scratch and fine-tuning it again. Therefore, we still lack a simple recipe to obtain a structured sparse DNN model consistent with the dense network without extra fine-tuning. This paper addresses the question: can we design a simple yet universal recipe to learn N:M sparse neural networks from scratch in an efficient way? It is difficult to find the optimal sparse architecture (connections) and optimal parameters (Evci et al.
, 2019b ) simultaneously during training sparse CNNs and Transformers although SET-MLP could easily outperform dense MLP ( Bourgin et al. , 2019 ) . There are two schemes to obtain such sparse models . One is a two-stage scheme , which discovers a sparse neural architecture by pruning a well-trained dense network and then uses the same or even greater computational resources to retrain the sparse models ( Nvidia , 2020 ; Evci et al. , 2019b ; Han et al. , 2015 ; Frankle & Carbin , 2018 ) . The other is a one-stage scheme , which adopts the dynamic method to alternatively optimize parameters and prunes network architectures based on different criteria ( Bellec et al. , 2017 ; Mocanu et al. , 2018 ; Mostafa & Wang , 2019 ; Evci et al. , 2019b ; Kusupati et al. , 2020 ; Dettmers & Zettlemoyer , 2019 ) . Compared with the two-stage scheme , the one-stage scheme can save training time and cost however usually obtains lower performance . To overcome the aforementioned trade-off between training cost and performance , we present a simple yet effective framework to train sparse neural networks from scratch . Specifically , we employ the magnitude-based pruning method ( Renda et al. , 2020 ; Gale et al. , 2019 ) during the forward process . Considering that the pruning operation is a non-differentiable operator ( a similar dilemma in model quantization ( Courbariaux et al. , 2016 ) ) , we extend the widely used Straight-through Estimator ( STE ) ( Bengio et al. , 2013 ) in model quantization to aid sparse neural network ’ s backpropagation . However , perturbations are introduced during the back-propagation ( Yin et al. , 2019 ; Bengio et al. , 2013 ) . Hence we define Sparse Architecture Divergence ( SAD ) to further analyze N : M sparse networks trained by STE methods so that we can identify the impact of perturbations on sparse neural networks training . 
Based on SAD analysis , to alleviate the negative impact , we propose a sparse-refined term mitigating the approximated gradients ’ influence . We also compare the performance of neural networks with different granularities of fine-grained structured sparsity ( i.e. , 1:4 , 2:4 , 2:8 , 4:8 ) and conduct thorough experiments on several typical deep neural networks with different N : M sparsity levels , covering image classification , detection , segmentation , optical flow estimation , and machine translation . Experimental results have shown that the models with our proposed structured sparsity can achieve neglectful performance drop and can even sometimes outperform the dense model . The main contributions of this paper are summarized as three-fold . ( 1 ) To the best of our knowledge , this is the first systematic study into training N : M structured sparse neural networks from scratch without performance drop . The N : M structured sparsity is a missing yet promising ingredient in model acceleration , which can be a valuable supplement with various compression methods . ( 2 ) We extend STE to tackle the problem of training N : M sparse neural networks . To alleviate the limitations of STE on sparsifying networks , we propose a sparse refined term to enhance the effectiveness 1https : //github.com/NVIDIA/apex/tree/master/apex/contrib/sparsity on training the sparse neural networks from scratch . ( 3 ) We conduct extensive experiments on various tasks with N : M fine-grained sparse nets , and provide benchmarks for N : M sparse net training to facilitate co-development of related software and hardware design . 2 RELATED WORK . Unstructured and Structured Sparsity . Sparsity of DNNs is a promising direction to compress and accelerate a deep learning model . Among all sparsity types , unstructured sparsity can achieve a significantly high compression ratios ( e.g . 13× ( Han et al. , 2015 ) and 108× ( Guo et al. , 2016 ) ) while ensuring decent accuracy by pruning . 
Many different pruning criterions and pruning methods are proposed for unstructured sparsity , e.g. , magnitude-based pruning ( Han et al. , 2015 ; Frankle & Carbin , 2018 ) , Hessian based heuristics ( LeCun et al. , 1990 ) , and pruning with connection sensitivity ( Lee et al. , 2018 ) . However , unstructured sparsity ’ s ability to accelerate is highly limited since it takes a lot of overhead to store the irregular non-zero index matrix . On the other hand , Wen et al . ( 2016 ) introduces the structural sparsity to speed up deep models on GPUs . Existing structural sparsity contains filter-wise sparsity ( Li et al. , 2016 ) , channel-wise sparsity ( Li et al. , 2016 ) , filtershape-wise sparsity . Different from existing sparsity patterns ( fine-grained unstructured sparsity and coarse-grained structured sparsity ) , this paper presents an N : M fine-grained structured sparsity , a sparsity type that has both high efficiency and lossless performance . One-stage and two-stage methods . There are mainly two types of techniques to obtain a sparse neural network , one-stage methods and two-stage ones . The two-stage method first prunes a trained dense neural network and then retrains fixed sparse network to recover its performance . Typical two-stage methods include single-shot pruning ( Lee et al. , 2018 ) and iterative pruning ( Han et al. , 2015 ; Guo et al. , 2016 ) . Later , the lottery ticket hypothesis ( Frankle & Carbin , 2018 ) shows that the sparse sub-network ( winning tickets ) can be trained from scratch with the same initialization while the winning tickets are discovered by dense training . Deep-Rewiring ( Bellec et al. , 2017 ) , on the other hand , is a typical one-stage method , which takes a Bayesian perspective and samples sparse network connections from a posterior , however is computationally expensive and challenging to be applied to large-scale tasks . Sparse Evolutionary Training ( Mocanu et al. 
, 2018 ) ( SET ) is proposed as a simpler scheme where weights are pruned according to the standard magnitude criterion used in pruning and growing connections in random locations . Dettmers & Zettlemoyer ( 2019 ) uses the momentum of each parameter as the criterion for growing weights and receives an improvement in test accuracy . GMP ( Gale et al. , 2019 ) trains the unstructured sparse net using variational dropout and l0 regularization from scratch , and shows that unstructured sparse architectures learned through pruning can not be trained from scratch to have the same testing performance as dense models do . Recently proposed state-of-the-art method STR ( Kusupati et al. , 2020 ) introduces pruning learnable thresholds to obtain a non-uniform sparse network . RigL ( Evci et al. , 2019a ) uses the magnitudebased method to prune and the periodic dense gradients to regrow connection . However , compared with training dense neural networks from scratch , to achieve the same performance , RigL needs 5× more training time.The most closely related work to ours may be DNW ( Wortsman et al. , 2019 ) which uses a fully dense gradient in the backward run to discover optimal wiring on the fly . 3 METHOD . 3.1 N : M FINE-GRAINED STRUCTURED SPARSITY Here we define the problem of training a neural network with N : M fine-grained structured sparsity . A neural network with N : M sparsity satisfies that , in each group of M consecutive weights of the network , there are at most N weights have non-zero values . Fig . 1 illustrates a 2:4 sparse network . Generally , our objective is to train an N : M sparse neural network as min S ( W , N , M ) L ( W ; D ) , ( 1 ) where D denotes the observed data , L represents the loss function , W = { W l : 0 < l 5 L } indicates the parameters of an L-layer neural network , and S ( W , N , M ) is the N : M sparse neural network parameters . 
3.2 STRAIGHT-THROUGH ESTIMATOR ( STE ) ON TRAINING N : M SPARSE NETWORKS A straightforward solution for training an N : M sparsity network is to simply extend Straightthrough Estimator ( STE ) ( Bengio et al. , 2013 ) to perform online magnitude-based pruning and sparse parameter updating , which is depicted in Fig . 2 ( a ) . STE is widely used in model quantization ( Rastegari et al. , 2016 ) , since the quantized function is non-differentiable without STE and the networks optimized with STE has decent performance under careful settings ( Yin et al. , 2019 ) . In STE , a dense network is maintained during the training process . During the forward pass , we project the dense weightsW into sparse weights W̃ = S ( W , N , M ) satisfying N : M sparsity . Let w ⊂ W be a group of consecutive M parameters inW and w̃ ⊂ W̃ be the corresponding group in W̃ . The projection of w can be formulated as : w̃i = { wi if |wi| > ξ 0 if |wi| < ξ , for i = 1 , 2 , . . . , M ( 2 ) where ξ is the N -th largest value in w = { |w1| , |w2| , . . . , |wM | } . Intuitively , this projection function S ( · ) produces sparse parameters W̃ by setting N parameters that have the least significant absolute values to zero in each consecutive M -parameter group , while keeping the other parameters the same as before . The computation of an N : M sparse sub-network on-the-fly in the forward pass is illustrated in Fig . 1 . The projection function S ( · ) , which is non-differentiable during back-propagation , generates the N : M sparse sub-network on the fly . To get gradients during back-propagation , STE computes the gradients of the sub-network g ( W̃ ) = 5W̃L ( W̃ ; D ) based on the sparse sub-network W̃ , which can be directly back-projected to the dense network as the approximated gradients of the dense parameters . The approximated parameter update rule for the dense network ( see Fig . 
2 ( a ) in Appendix ) can be formulated as Wt+1 ←Wt − γtg ( W̃t ) , ( 3 ) whereWt represents dense parameters at iteration t and γt indicates the learning rate .
The authors proposed a new method for training N:M fine-grained structured sparse networks from scratch. The authors found that the SAD metric, which measures the number of weights whose pruning state is changed, became higher if the existing STE is used to train sparse networks and this metric had the positive relationship with accuracy drop. To reduce the SAD, SR-STE which gives higher weight decay coefficient to the pruned weights is applied to train sparse networks, improving the accuracy of sparse networks trained from scratch.
SP:38a415fd3aa50464470b6deeab96c007364afd17
Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
1 INTRODUCTION . Deep neural networks ( DNNs ) have shown promising performances on various tasks including computer vision , natural language processing , speech recognition , etc . However , a DNN usually comes with a large number of learnable parameters , ranging from millions of to even billions of ( e.g. , GPT-3 ( Brown et al. , 2020 ) ) , making the DNN model burdensome and difficult to be applied to real-world deployments . Therefore , researchers began to investigate how to speed up and compress DNNs via various methods such as knowledge distillation ( Hinton et al. , 2015 ) , quantization ( Jacob et al. , 2018 ; Zhou et al. , 2017 ) , designing efficient model architectures ( Howard et al. , 2017 ) , and structured sparsity ( Wen et al. , 2016 ; Li et al. , 2016 ) . In this paper , we focus on the problem of sparsifying DNNs . Sparsity in DNNs can be categorized into unstructured sparsity and structured sparsity . Unstructured sparsity prunes individual weights at any location , which is fine-grained and can achieve extremely high compression ratio ( Han et al. , 2015 ; Guo et al. , 2016 ) . However , unstructured sparsity struggles to take advantage of vectorprocessing architectures , which increases latency due to dependent sequences of reads ( Nvidia , 2020 ) . Compared with unstructured sparsity , structured sparsity is more friendly to hardware , especially for block pruning ( Wang et al. , 2019 ) , kernel shape sparsity ( Tan et al. , 2020 ) or channel and filter pruning ( Li et al. , 2016 ; Wen et al. , 2016 ) . Although structured sparsity can speed up DNNs on commodity hardware , it hurts model performance more significantly than unstructured fine-grained sparsity . For example , ResNet-50 network generated by unstructured pruning can achieve a 5.96× compression ratio , with the same accuracy as the original network , but it can only achieve 1× com- ∗The first two authors equally contribute to this paper . 
pression in the case of structured sparsity ( Renda et al. , 2020 ) . Therefore , how to combine the unstructured sparsity and structured sparsity to accelerate DNNs on modern hardware ( e.g. , GPU ) becomes a challenging yet valuable problem . Recently , Nvidia Ampere A100 is equipped with the Sparse Tensor Cores to accelerate 2:4 structured fine-grained sparsity . Here , N : M sparsity indicates the sparsity of DNNs in which onlyN weights are non-zero for every continuousM weights . To the best of our knowledge , A100 is the first commodity sparse hardware , where the sparse tensor core can support several common operations including linear , convolutional , recurrent cells , transformer blocks , etc . Specifically , suppose a typical matrix multiplication X ×W in DNNs , X andW denote input tensor and parameter tensor respectively . The Dense Tensor Cores implementX16×32×W32×8 matrix multiplication by 2 cycles while the Sparse Tensor Cores only need 1 cycle if the parameter tensorW satisfies the 2:4 structured sparse pattern . Nvidia has proposed an ASP1 ( APEX ’ s Automatic Sparsity ) solution ( Nvidia , 2020 ) to sparsify a dense neural network to satisfy the 2:4 fine-grained structured sparsity requirement . The recipe contains three steps : ( 1 ) training a dense network until converge ; ( 2 ) pruning for 2:4 sparsity with magnitude-based single-shot pruning ; ( 3 ) repeating the original training procedure . However , ASP is computationally expensive since it requires training the full dense models from scratch and finetuning again . Therefore , we still lack a simple recipe to obtain a structured sparse DNN model consistent with the dense network without extra fine-tuning . This paper addresses this question : Can we design a simple yet universal recipe to learn N : M sparse neural networks from scratch in an efficient way ? It is difficult to find the optimal sparse architecture ( connections ) and optimal parameters ( Evci et al. 
, 2019b ) simultaneously during training sparse CNNs and Transformers although SET-MLP could easily outperform dense MLP ( Bourgin et al. , 2019 ) . There are two schemes to obtain such sparse models . One is a two-stage scheme , which discovers a sparse neural architecture by pruning a well-trained dense network and then uses the same or even greater computational resources to retrain the sparse models ( Nvidia , 2020 ; Evci et al. , 2019b ; Han et al. , 2015 ; Frankle & Carbin , 2018 ) . The other is a one-stage scheme , which adopts the dynamic method to alternatively optimize parameters and prunes network architectures based on different criteria ( Bellec et al. , 2017 ; Mocanu et al. , 2018 ; Mostafa & Wang , 2019 ; Evci et al. , 2019b ; Kusupati et al. , 2020 ; Dettmers & Zettlemoyer , 2019 ) . Compared with the two-stage scheme , the one-stage scheme can save training time and cost however usually obtains lower performance . To overcome the aforementioned trade-off between training cost and performance , we present a simple yet effective framework to train sparse neural networks from scratch . Specifically , we employ the magnitude-based pruning method ( Renda et al. , 2020 ; Gale et al. , 2019 ) during the forward process . Considering that the pruning operation is a non-differentiable operator ( a similar dilemma in model quantization ( Courbariaux et al. , 2016 ) ) , we extend the widely used Straight-through Estimator ( STE ) ( Bengio et al. , 2013 ) in model quantization to aid sparse neural network ’ s backpropagation . However , perturbations are introduced during the back-propagation ( Yin et al. , 2019 ; Bengio et al. , 2013 ) . Hence we define Sparse Architecture Divergence ( SAD ) to further analyze N : M sparse networks trained by STE methods so that we can identify the impact of perturbations on sparse neural networks training . 
Based on SAD analysis , to alleviate the negative impact , we propose a sparse-refined term mitigating the approximated gradients ’ influence . We also compare the performance of neural networks with different granularities of fine-grained structured sparsity ( i.e. , 1:4 , 2:4 , 2:8 , 4:8 ) and conduct thorough experiments on several typical deep neural networks with different N : M sparsity levels , covering image classification , detection , segmentation , optical flow estimation , and machine translation . Experimental results have shown that the models with our proposed structured sparsity can achieve neglectful performance drop and can even sometimes outperform the dense model . The main contributions of this paper are summarized as three-fold . ( 1 ) To the best of our knowledge , this is the first systematic study into training N : M structured sparse neural networks from scratch without performance drop . The N : M structured sparsity is a missing yet promising ingredient in model acceleration , which can be a valuable supplement with various compression methods . ( 2 ) We extend STE to tackle the problem of training N : M sparse neural networks . To alleviate the limitations of STE on sparsifying networks , we propose a sparse refined term to enhance the effectiveness 1https : //github.com/NVIDIA/apex/tree/master/apex/contrib/sparsity on training the sparse neural networks from scratch . ( 3 ) We conduct extensive experiments on various tasks with N : M fine-grained sparse nets , and provide benchmarks for N : M sparse net training to facilitate co-development of related software and hardware design . 2 RELATED WORK . Unstructured and Structured Sparsity . Sparsity of DNNs is a promising direction to compress and accelerate a deep learning model . Among all sparsity types , unstructured sparsity can achieve a significantly high compression ratios ( e.g . 13× ( Han et al. , 2015 ) and 108× ( Guo et al. , 2016 ) ) while ensuring decent accuracy by pruning . 
Many different pruning criterions and pruning methods are proposed for unstructured sparsity , e.g. , magnitude-based pruning ( Han et al. , 2015 ; Frankle & Carbin , 2018 ) , Hessian based heuristics ( LeCun et al. , 1990 ) , and pruning with connection sensitivity ( Lee et al. , 2018 ) . However , unstructured sparsity ’ s ability to accelerate is highly limited since it takes a lot of overhead to store the irregular non-zero index matrix . On the other hand , Wen et al . ( 2016 ) introduces the structural sparsity to speed up deep models on GPUs . Existing structural sparsity contains filter-wise sparsity ( Li et al. , 2016 ) , channel-wise sparsity ( Li et al. , 2016 ) , filtershape-wise sparsity . Different from existing sparsity patterns ( fine-grained unstructured sparsity and coarse-grained structured sparsity ) , this paper presents an N : M fine-grained structured sparsity , a sparsity type that has both high efficiency and lossless performance . One-stage and two-stage methods . There are mainly two types of techniques to obtain a sparse neural network , one-stage methods and two-stage ones . The two-stage method first prunes a trained dense neural network and then retrains fixed sparse network to recover its performance . Typical two-stage methods include single-shot pruning ( Lee et al. , 2018 ) and iterative pruning ( Han et al. , 2015 ; Guo et al. , 2016 ) . Later , the lottery ticket hypothesis ( Frankle & Carbin , 2018 ) shows that the sparse sub-network ( winning tickets ) can be trained from scratch with the same initialization while the winning tickets are discovered by dense training . Deep-Rewiring ( Bellec et al. , 2017 ) , on the other hand , is a typical one-stage method , which takes a Bayesian perspective and samples sparse network connections from a posterior , however is computationally expensive and challenging to be applied to large-scale tasks . Sparse Evolutionary Training ( Mocanu et al. 
, 2018 ) ( SET ) is proposed as a simpler scheme where weights are pruned according to the standard magnitude criterion used in pruning and growing connections in random locations . Dettmers & Zettlemoyer ( 2019 ) uses the momentum of each parameter as the criterion for growing weights and receives an improvement in test accuracy . GMP ( Gale et al. , 2019 ) trains the unstructured sparse net using variational dropout and l0 regularization from scratch , and shows that unstructured sparse architectures learned through pruning can not be trained from scratch to have the same testing performance as dense models do . Recently proposed state-of-the-art method STR ( Kusupati et al. , 2020 ) introduces pruning learnable thresholds to obtain a non-uniform sparse network . RigL ( Evci et al. , 2019a ) uses the magnitudebased method to prune and the periodic dense gradients to regrow connection . However , compared with training dense neural networks from scratch , to achieve the same performance , RigL needs 5× more training time.The most closely related work to ours may be DNW ( Wortsman et al. , 2019 ) which uses a fully dense gradient in the backward run to discover optimal wiring on the fly . 3 METHOD . 3.1 N : M FINE-GRAINED STRUCTURED SPARSITY Here we define the problem of training a neural network with N : M fine-grained structured sparsity . A neural network with N : M sparsity satisfies that , in each group of M consecutive weights of the network , there are at most N weights have non-zero values . Fig . 1 illustrates a 2:4 sparse network . Generally , our objective is to train an N : M sparse neural network as min S ( W , N , M ) L ( W ; D ) , ( 1 ) where D denotes the observed data , L represents the loss function , W = { W l : 0 < l 5 L } indicates the parameters of an L-layer neural network , and S ( W , N , M ) is the N : M sparse neural network parameters . 
3.2 STRAIGHT-THROUGH ESTIMATOR ( STE ) ON TRAINING N : M SPARSE NETWORKS A straightforward solution for training an N : M sparsity network is to simply extend Straightthrough Estimator ( STE ) ( Bengio et al. , 2013 ) to perform online magnitude-based pruning and sparse parameter updating , which is depicted in Fig . 2 ( a ) . STE is widely used in model quantization ( Rastegari et al. , 2016 ) , since the quantized function is non-differentiable without STE and the networks optimized with STE has decent performance under careful settings ( Yin et al. , 2019 ) . In STE , a dense network is maintained during the training process . During the forward pass , we project the dense weightsW into sparse weights W̃ = S ( W , N , M ) satisfying N : M sparsity . Let w ⊂ W be a group of consecutive M parameters inW and w̃ ⊂ W̃ be the corresponding group in W̃ . The projection of w can be formulated as : w̃i = { wi if |wi| > ξ 0 if |wi| < ξ , for i = 1 , 2 , . . . , M ( 2 ) where ξ is the N -th largest value in w = { |w1| , |w2| , . . . , |wM | } . Intuitively , this projection function S ( · ) produces sparse parameters W̃ by setting N parameters that have the least significant absolute values to zero in each consecutive M -parameter group , while keeping the other parameters the same as before . The computation of an N : M sparse sub-network on-the-fly in the forward pass is illustrated in Fig . 1 . The projection function S ( · ) , which is non-differentiable during back-propagation , generates the N : M sparse sub-network on the fly . To get gradients during back-propagation , STE computes the gradients of the sub-network g ( W̃ ) = 5W̃L ( W̃ ; D ) based on the sparse sub-network W̃ , which can be directly back-projected to the dense network as the approximated gradients of the dense parameters . The approximated parameter update rule for the dense network ( see Fig . 
2 ( a ) in Appendix ) can be formulated as Wt+1 ←Wt − γtg ( W̃t ) , ( 3 ) whereWt represents dense parameters at iteration t and γt indicates the learning rate .
The paper introduces a new sparse training algorithm (SR-STE) based on the straight through estimator which is specially designed for the hardware constraints of Nvidia A100 GPU. Auxiliary, in order to study better this algorithm, the paper introduces also a metric (SAD) to measure the changes in the sparse network topology during training. The contributions have real added value as they show that sparse neural networks can actually benefit of hardware designed to consider sparsity. The experiments on CNNs and Transformers support the claims.
SP:38a415fd3aa50464470b6deeab96c007364afd17
Spectral Synthesis for Satellite-to-Satellite Translation
1 INTRODUCTION . Climate change and related environmental issues - including the loss of biodiversity and extreme weather - are listed by the World Economic Forum as the most important risks to our planet ( 7 ) . Monitoring the Earth is critical to mitigating these risks , understanding the effects , and making future predictions ( 38 ) . Multi- and hyper-spectral satellite-based remote sensing enables global observation of the Earth , allowing scientists to study large-scale system dynamics and inform general circulation models ( 26 ) . In weather forecasts satellite data initializes the atmospheric state for future predictions . On longer time scales , these data are used to measure the effects of climate change such as land-cover variations , temperature trends , solar radiation levels , and the rate of snow/ice melt . In the coming decades , increased investments from the public and private sectors in satellite-based observations will continue to improve global monitoring , as highlighted in NASA ’ s decadal survey ( 25 ) . Satellites are designed based on specifications for a given set of applications with fiscal , technological , and physical constraints which limit their temporal , spatial , and spectral resolutions . Geostationary ( GEO ) satellites rotate with the Earth to stay over a constant position above the equator at a high elevation of 35,786km . This position enables GEO satellites with on-board multi-spectral imagers to take continuous and high-temporal snapshots over large spatial regions and are ideal for monitoring diurnal and fast moving events . Spectral bands measure brightness and radiance intensities of the electromagnetic spectrum at a specified center wavelength and bandwidth . Bands are selected to satisfy defined variables of interest constrained by technological cost and accuracy . 
Applications of GEO sensors include atmospheric winds measurement ( 35 ) , tropical cyclone tracking ( 36 ) , wildfire monitoring ( 41 ) , and short-term forecasting ( 24 ) . Multiple GEO satellites are needed to generate global high-temporal resolution datasets to better monitor these events around the world . However , variations in resolutions , sensor uncertainties , and temporal life spans leads to a set of separate datasets which are not consistent , making this process very challenging ( 26 ) . Developing consistent and homogeneous global datasets would relieve many of these challenges . Supplementary Material : https : //github.com/anonymous-ai-for-earth/ satellite-to-satellite-translation The current generation of GEO satellites are no exception . The GOES-16/17 satellites operated by NASA/NOAA ( cost : $ 11 billion ) have a set of 16 imaging bands covering the visible , near- , and thermal-infrared spectral range ( 29 ) . The Himawari-8 satellite operated by the Japanese Space Agency ( cost : $ 800 million ) similarly has 16 bands but swaps a NIR ( 1.38µm ) band for a green channel ( 0.51µm ) , enabling the construction of true color images ( 3 ) . The 1.38µm band is ideal for measuring Cirrus clouds , composed of ice particles in the upper troposphere , a major contributor to regulating the Earth ’ s climate that is not yet well understood ( 21 ; 10 ) . Without capturing this band , directly observing Cirrus clouds over Japan , East Asia , and Western Pacific region from Himawari-8 is not possible . Synthetic observations via virtual spectral sensors could be a low-cost solution to improving coverage availability and consistency with current satellites . We present an approach to generate synthetic spectral channels from a multi-domain unpaired satellite dataset . We treat satellites with either dissimilar spectral coverage or varying vantage points as separate spectral sets . 
In this way , the problem closely resembles that of colorization ( 40 ) and image-to-image translation tasks ( 22 ; 42 ; 9 ) in the case where paired images are not available but with the added complexity of a large number of spectral bands . We use a combination of variational autoencoder ( VAE ) and generative adverserial network ( GAN ) ( 8 ) architectures adapted to our problem to model a shared latent space , as in unsupervised image-to-image translation ( 22 ) . Generating synthetic bands is an under-constrained problem that paired with an adverserial loss in high dimensions promotes overfitting . Our approach mitigates these challenges by leveraging a weak supervision signal based on partial overlap in spectral bands between domains . By including a reconstruction loss on overlapping spectral bands between domain pairs we can substantially improve spectral band synthesis . To summarize our contributions , we 1 ) introduce a shared spectral reconstruction loss to a VAE-GAN architecture for synthetic band generation ; 2 ) test our methodology on real-world scenarios ; 3 ) present and release a test dataset of four hemispheric snapshots from three publicly available geostationary satellites for future research . In the following sections , we will introduce related work in remote sensing and image-to-image translation , describe the architecture , and review experiments . Lastly , we will discuss the implications on this work and conclude with future directions . 2 BACKGROUND . Remote Sensing . Current generation GEO satellites observe 16 spectral bands over large regions every 10-15 minutes at a 0.5-2km resolution . At a sub-optimal 2km , this produces full-disk images of size 5,424×5,424×16 which causes storage constraints while being computationally expensive to process . Physical and statistical models are used to convert these images into more easily interpreted variables such as precipitation , cloud cover , and surface temperature ( 30 ) . 
Multiple GEO satellites , currently in orbit , extend the spatial range , enabling active monitoring of larger regions . However , differences in spectral bands and sensor uncertainties/biases present challenges to commonly used sensor-specific models ; in particular , existing downstream models do not generalize well to missing spectral information . Neural models have long been applied to process remote sensing data and generate downstream products . Hsu et al . ( 12 ) presented some of the first work showing that neural networks ( NNs ) could generate accurate and high-resolution precipitation products from satellite observations . In recent years , convolutional neural networks ( CNNs ) have been found to further improve this task ( 27 ) . Similarly , CNNs have successfully been applied to poverty mapping ( 15 ) , super-resolution ( 18 ) , subpixel classification ( 20 ) , model emulation ( 6 ) , and land-cover classification ( 2 ) , all from low-level satellite products . In terms of spectral synthesis , a few studies have explored reconstruction of hyperspectral bands from RGB bands with supervised approaches ( 32 ; 1 ) . While many of these problems fall within the class of image-to-image translation , they generally assume labels are widely available and focus on individual sensors . To the best of our knowledge , no studies have developed approaches to synthesize spectral information by learning across satellites in the unsupervised setting . Image-to-Image Translation . Many problems can be defined as an image-to-image translation task , including super-resolution , style transfer , and colorization . Approaches to image-to-image translation have been developed for both supervised and unsupervised settings to map images from one domain to another . In the supervised setting , image pairs are available to learn a direct mapping from one to the other . Generative adversarial networks have been shown to be highly successful at this task ( 14 ; 33 ) .
Numerous unsupervised learning methods have been developed for the common case of large unpaired datasets ( 22 ; 42 ; 39 ; 23 ) . CycleGAN , for instance , proposed an approach to directly map from one domain to another and back by incorporating a cycle consistency loss with a GAN ( 42 ) . UNIT ( 22 ) proposed a probabilistic approach that uses an intermediate latent space between domains with a Variational Autoencoder ( VAE ) ( 16 ) and GANs ( 8 ) . In contrast to prior work on image-to-image translation , our scenario specifically requires spectral translation across multiple domains . Rather than translating between relatively low-dimensional RGB images and segmentation maps , as is found in traditional multimodal image-to-image translation ( 43 ; 4 ; 13 ) , satellite imagery contains tens to hundreds of spectral bands . Domain adaptation is another area of active research that also considers effectiveness in unseen environments using cycle consistency and domain-invariant representations ( 11 ; 5 ) . ( 17 ) used a shared content loss to translate between RGB image styles . ( 28 ) presented an application of image-to-image translation for 4-band Sentinel-4 images between different times of day . Our approach is based on the proven fundamental techniques of learning a shared latent space using cycle consistency and adversarial losses , extended in the spectral dimension . We also use the prior understanding of spatial consistency between domains to implement a partial skip connection . 3 APPROACH . VAEs and GANs are effective for image-to-image translation where pairs of images are not available ( 22 ) . This is the case for satellites with no space-time overlap . However , as in ( 22 ) , a shared latent variable z can be used to approximate the joint distribution from marginals . An adversarial loss applied to cross reconstructions satisfies the shared latent space assumption but is under-constrained for high-dimensional , multi-spectral images .
We will show that this leads to large errors in our task . To address this , we introduce a shared spectral reconstruction loss and a skip connection to effectively generate synthetic spectral bands ( see Fig . 1 ) ; the result is a 50-80 % reduction in mean absolute error . In the spectral domain , we consider the case of $K$ satellites , $S = \{S_1, S_2, \ldots, S_K\}$ , such that $S_k \in \mathbb{R}^{H \times W \times B_k}$ is a set of $B_k$ spectral bands with height $H$ and width $W$ , illustrated as a Venn diagram in Fig . 1 . The union of all sets , $\cup_{k=1}^{K} S_k$ , represents the complete set of spectral channels in the data . We denote the intersection of two spectral sets as overlapping bands . Our goal is to generate synthetic bands where $S_i \cap S_j^c \neq \emptyset$ for all $(i, j)$ , where $c$ denotes the complement . A shared latent variable $z$ is modeled with a Gaussian prior to learn a general representation for mapping between sets such that the assumptions of shared spectral reconstruction , weight sharing , cycle consistency , and cross-domain adversarial losses are satisfied .

VAE-GAN . For a given spectral set $k$ , we define encoder-generator pairs $\{E_k, G_k\}$ such that $q(z_k|s_k) = \mathcal{N}(E_k(s_k), I)$ and $\hat{s}_k^{k \to k} = G_k(z_k \sim q_k(z_k|s_k))$ for $s_k \in S_k$ . For any set $j$ , $\hat{s}_k^{k \to j}$ corresponds to the reconstruction from set $k$ to $j$ . The set of encoders $\{E_1, E_2, \ldots, E_K\}$ share their last layer of weights to constrain the latent space to high-level representations . Using the prior $p_\eta(z) = \mathcal{N}(0, I)$ , the VAE loss is defined as :

$$\mathcal{L}_{VAE_k}(E, G) = \lambda_1 \mathrm{KL}(q_k(z_k|s_k) \,\|\, p_\eta(z)) - \lambda_2 \mathbb{E}_{z_k \sim q_k(z_k|s_k)}[\log p_{G_k}(s_k|z_k)] . \quad (1)$$

The distributions $p_{G_k}$ are modeled as Laplacian , with a Gaussian latent space and prior $z \sim \mathcal{N}(0, I)$ . GANs are used to enforce realistic spatial/spectral distributions of images reconstructed from the latent space . Discriminator networks $D_1$ to $D_K$ compare observations with cross reconstructions from the latent space .
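A minimal sketch of the per-domain VAE term in Eq. (1): with a unit-covariance Gaussian posterior the KL against N(0, I) reduces to half the mean squared posterior mean, and the Laplacian decoder makes the negative log-likelihood an L1 term up to additive constants. The function and variable names here are ours, not from the paper's code.

```python
import numpy as np

def vae_loss(mu_z, s, s_recon, lam1=1.0, lam2=0.01):
    """Sketch of Eq. (1) for one domain k.

    mu_z    : posterior mean E_k(s_k); with q = N(mu_z, I),
              KL(q || N(0, I)) = 0.5 * mean(mu_z^2) per element.
    s       : observed spectral stack s_k.
    s_recon : reconstruction G_k(z_k); Laplacian p_Gk makes
              -log p_Gk(s|z) an L1 term (constants dropped).
    """
    kl = 0.5 * float(np.mean(mu_z ** 2))
    nll = float(np.mean(np.abs(s - s_recon)))
    return lam1 * kl + lam2 * nll
```

A perfect reconstruction with a zero-mean posterior gives a loss of exactly zero, which matches the intuition that both terms are penalties.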
$$\mathcal{L}_{GAN_k} = \lambda_3 \mathbb{E}_{s_k \sim P_{S_k}}[\log D_k(s_k)] + \lambda_3 \sum_{j \neq k} \mathbb{E}_{z_j \sim P_{S_j}}[\log(1 - D_k(G_k(z_j)))] . \quad (2)$$

Cycle Consistency . The VAE and GAN losses are under-constrained and do not satisfy the shared latent space constraint alone . As in ( 22 ) , a cycle consistency loss is used such that $s_k = F_{j \to k}(F_{k \to j}(s_k))$ for all satellite pairs $(j, k)$ , where $F_{k \to j}(s_k) = G_j(E_k(s_k))$ . The loss between $s_k$ and the cycled reconstruction $\hat{s}_k^{k \to j \to k}$ is written as :

$$\mathcal{L}_{CC_{k \to j}}(E_k, E_j, G_k, G_j) = \lambda_4 \mathrm{KL}(q_k(z_k|s_k) \,\|\, p_\eta(z)) + \lambda_4 \mathrm{KL}(q_j(z_j|\hat{s}_k^{k \to j}) \,\|\, p_\eta(z)) - \lambda_5 \mathbb{E}_{z_j \sim q_j(z_j|\hat{s}_k^{k \to j})}[\log p_{G_k}(s_k|z_j)] \quad (3)$$

With multiple domains , each domain should cycle through every other domain . The cycle-consistency loss for each permutation results in a complete cyclical graph . This loss is written as :

$$\mathcal{L}_{CC_k} = \sum_{j \neq k} \mathcal{L}_{CC_{k \to j}}(E_k, E_j, G_k, G_j) \quad (4)$$

Shared Spectral Reconstruction Loss . Adversarial losses can be easily fooled as dimensionality increases . To help avoid this , we introduce an additional loss , $\mathcal{L}_{SSR_k}$ . In this problem , if the intersection of spectral channels $S_{k,j} = S_k \cap S_j$ between domains is not empty , then the difference between $p(\tilde{s}_k^{k \to k}|z_k)$ and $p(\tilde{s}_k^{k \to j}|z_k)$ can be minimized with the KL divergence :

$$\mathcal{L}_{SSR_k} = \lambda_6 \sum_{j \neq k} \mathrm{KL}(p(\tilde{s}_k^{k \to k}|z_k) \,\|\, p(\tilde{s}_k^{k \to j}|z_k)) \quad (5)$$

where $\tilde{s}_k \in S_{k,j}$ . The SSR loss encourages decoders to reconstruct identical spectral wavelengths with similar distributions while still synthesizing dissimilar bands . In this scenario , partial constraints are placed between domains , allowing sampling of unobserved spectra from the shared latent space . Decreasing $\lambda_6$ relaxes the bias between bands , which may reduce the effect of more uncertain domains . Total Loss . The likelihood is maximized by optimizing the GAN mini-max problem such that the generator aims to fool the discriminator , alternating updates between $(E, G)$ and $(G, D)$ .
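The shared spectral reconstruction term in Eq. (5) admits a closed form if we assume both decoders output Laplace distributions with a common scale b, for which KL(Lap(m1, b) || Lap(m2, b)) = |m1 - m2|/b + exp(-|m1 - m2|/b) - 1. The sketch below is an illustrative simplification under that assumption, not the authors' implementation.

```python
import numpy as np

def ssr_loss(recon_self, recon_cross, scale=1.0, lam6=0.1):
    """Sketch of Eq. (5) on the overlapping bands S_{k,j} of one domain pair.

    recon_self  : decoder means for shared bands via the k -> k path.
    recon_cross : decoder means for the same bands via the k -> j path.
    Assuming Laplace outputs with shared scale b, the KL between the two
    reconstruction distributions vanishes exactly when the means agree.
    """
    d = np.abs(recon_self - recon_cross) / scale
    kl = d + np.exp(-d) - 1.0
    return lam6 * float(np.mean(kl))
```

In practice this pushes the two decoders toward identical predictions on overlapping wavelengths while leaving non-overlapping bands unconstrained.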
$$\mathcal{L} = \min_{E, G} \max_{D} \sum_{k=1}^{K} \left[ \mathcal{L}_{VAE_k} + \mathcal{L}_{CC_k} + \mathcal{L}_{GAN_k} + \mathcal{L}_{SSR_k} \right] \quad (6)$$

The hyper-parameters correspond to those in ( 22 ) and are set as $\lambda_1 = 1$ , $\lambda_2 = 0.01$ , $\lambda_3 = 1$ , $\lambda_4 = 1$ , $\lambda_5 = 0.01$ , and $\lambda_6 = 0.1$ . Adam optimization is used to train the networks for 200,000 steps with a batch size of 8 , parameters $\beta_1 = 0.5$ , $\beta_2 = 0.999$ , and learning rate $10^{-5}$ . The reader can find detailed information in the supplementary material . Below we show the steps for generating a new band .

Algorithm 1 : Generate a synthetic band by translating from one satellite to another
Result : Synthetic spectral band
Take image $s_k$ from satellite $k$ ;
Encode to the latent space : $z = E_k(s_k)$ ;
Decode to the other satellite : $\tilde{s}_j = G_j(z)$ ;
Select the synthetic band from $\tilde{s}_j$ .

Data . Three geostationary satellite imagery datasets , GOES-16 ( G16 ) , GOES-17 ( G17 ) , and Himawari-8 ( H8 ) , are used in our experiments . Each satellite captures hemispheric ( full-disk ) snapshots from a constant vantage point over time , but of different regions ( examples are shown in the supplement ) . Images contain 16 bands ( channels ) in the visible , near-infrared , and thermal spectrum at 0.5-2km spatial resolution . G16 and G17 have identical specifications , viewing the east and west regions of North America , and include two visible ( blue , red ) , four near-infrared ( including cirrus ) , and ten thermal-infrared bands . H8 has 15 bands overlapping with G16/G17 while viewing the Pacific Ocean and East Asia , which ensures similar information content . H8 captures three visible ( blue , green , red ) , three near-infrared ( missing cirrus ) , and the same ten thermal-infrared bands as G16/G17 . Thus , the G16 and G17 bands all overlap ; cirrus ( 1.37µm ) exists in G16 and G17 but not in H8 , and green ( 0.51µm ) exists in H8 but not in G16 or G17 . These differences cause difficulties when applying models relying on green or cirrus bands across satellite sets .
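Algorithm 1 amounts to a single encode-decode pass through the shared latent space. The sketch below uses toy stand-in networks: the real E_k and G_j are trained networks, and the band index is hypothetical.

```python
import numpy as np

def translate_band(s_k, E_k, G_j, band_index):
    """Algorithm 1: encode an image from satellite k, decode it as
    satellite j, and select the requested synthetic band."""
    z = E_k(s_k)            # encode to the shared latent space
    s_j_tilde = G_j(z)      # decode the full synthetic stack for satellite j
    return s_j_tilde[..., band_index]

# Toy stand-ins over a 4x4x16 patch (identity-like, for illustration only):
E_k = lambda s: s.mean(axis=-1, keepdims=True)  # collapse bands to a "latent"
G_j = lambda z: np.repeat(z, 16, axis=-1)       # expand back to 16 bands
patch = np.ones((4, 4, 16))
green = translate_band(patch, E_k, G_j, band_index=1)
```

With trained networks, the same three lines would produce, e.g., a synthetic green band for G17 from an H8-style decoder.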
G16 observes North , Central , and South America , capturing a good distribution of land and ocean . G17 observes the Pacific Ocean as well as most of North and Central America . However , G17 has known problems with its thermal cooling system , causing the near-infrared and thermal-infrared channels to be unusable during periods of high heat and biased throughout ( 31 ) . This further highlights the gain from replacing low-quality bands of G17 with a virtual sensor . Periods of high heat are filtered out of our training and test sets with quality control checks to eliminate temporal periods of known uncertainty . After quality control , considerable space-time overlap between G16 and G17 can be used for testing . H8 observes East Asia , Australia , and the Western Pacific , partially overlapping with G17 . Discrepancies between sensors are expected due to different solar and sensor viewing angles , but we are not aware of a more appropriate dataset for evaluation . We use the data generated by ( 37 ) , which normalized G16 , G17 , and H8 to a common geo-referenced gridding system to facilitate intercomparisons and processed them with the Bidirectional Reflectance Distribution Function ( BRDF ) . Bands have resolutions varying from 500m to 2km , which we interpolate to a common sub-optimal resolution of 2km . Full-disk images are on a common grid with tiles of size 300×300×16 . Training data is generated from the multi-petabyte datasets . We randomly sample images to build a large , well-distributed training dataset from years 2018 ( G16 , H8 ) and 2019 ( G17 ) , totaling 359GB of data . Each tile is split into 64×64×16 non-overlapping patches for training , generating millions of samples . A test set includes 500 random tiles from 25 days in February 2019 drawn from overlapping G16 and G17 observations . The random set of tiles assures a range of solar angles , system patterns , and land cover types .
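The tiling step described above (300×300×16 tiles split into non-overlapping 64×64×16 patches) can be sketched with a reshape. How the 44-pixel remainder per axis is handled is not stated in the text, so cropping it is an assumption of this sketch.

```python
import numpy as np

def tile_to_patches(tile, patch=64):
    """Split an H x W x B tile into non-overlapping patch x patch x B patches.

    A 300x300x16 tile yields 4 x 4 = 16 patches; the 44-pixel remainder
    on each axis is cropped here (an assumption, not stated in the text).
    """
    H, W, B = tile.shape
    nh, nw = H // patch, W // patch
    return (tile[:nh * patch, :nw * patch]
            .reshape(nh, patch, nw, patch, B)
            .swapaxes(1, 2)                     # group patch rows/cols together
            .reshape(nh * nw, patch, patch, B))

patches = tile_to_patches(np.zeros((300, 300, 16)))
```

Applied across randomly sampled tiles, this is how a 359GB archive turns into millions of training samples.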
Similarly , four tiles of data from G17 and H8 on January 2 , 2019 at 04:00 UTC are selected to evaluate synthetically generated green and cirrus bands ( the spatial overlap of G17/H8 is mostly ocean ) . This dataset , consisting of tiles from each satellite , will be made publicly available .
This paper aims to generate synthetic unobserved spectral imagery from a set of existing spectral channels. This is an image-to-image translation task, which VAEs and GANs are effective at addressing. For this specific satellite band-to-band translation task, the authors adopt the VAE-GAN framework (adding a skip connection between the input and generator) with a newly added spectral reconstruction loss. Experiments show the effectiveness of the proposed method.
SP:7d69ad75322ea8d15440567c810394321a4f1f34
Spectral Synthesis for Satellite-to-Satellite Translation
1 INTRODUCTION . Climate change and related environmental issues , including the loss of biodiversity and extreme weather , are listed by the World Economic Forum as the most important risks to our planet ( 7 ) . Monitoring the Earth is critical to mitigating these risks , understanding their effects , and making future predictions ( 38 ) . Multi- and hyper-spectral satellite-based remote sensing enables global observation of the Earth , allowing scientists to study large-scale system dynamics and inform general circulation models ( 26 ) . In weather forecasting , satellite data initializes the atmospheric state for future predictions . On longer time scales , these data are used to measure the effects of climate change such as land-cover variations , temperature trends , solar radiation levels , and the rate of snow/ice melt . In the coming decades , increased investments from the public and private sectors in satellite-based observations will continue to improve global monitoring , as highlighted in NASA's decadal survey ( 25 ) . Satellites are designed to specifications for a given set of applications , with fiscal , technological , and physical constraints that limit their temporal , spatial , and spectral resolutions . Geostationary ( GEO ) satellites rotate with the Earth to stay over a constant position above the equator at a high elevation of 35,786km . This position enables GEO satellites with on-board multi-spectral imagers to take continuous , high-temporal snapshots over large spatial regions , making them ideal for monitoring diurnal and fast-moving events . Spectral bands measure brightness and radiance intensities of the electromagnetic spectrum at a specified center wavelength and bandwidth . Bands are selected to satisfy defined variables of interest constrained by technological cost and accuracy .
This paper proposes a new method for image-to-image translation on multi-spectral imagery. The proposed method uses variational autoencoders and generative adversarial networks to generate synthetic bands in satellite imagery. The novelty of the proposed method is that the authors introduce a shared spectral reconstruction loss and skip connection to generate synthetic spectral bands. This allows generating synthetic bands with higher accuracy than the original image-to-image translation method.
SP:7d69ad75322ea8d15440567c810394321a4f1f34
Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data
1 INTRODUCTION . The introduction of deep , contextualized Masked Language Models ( MLM ) 1 trained on massive amounts of unlabeled data has led to significant advances across many different Natural Language Processing ( NLP ) tasks ( Peters et al. , 2018 ; Liu et al. , 2019a ) . Many of these recent advances can be attributed to the now well-known BERT approach ( Devlin et al. , 2018 ) . Substantial improvements over previous state-of-the-art results on the GLUE benchmark ( Wang et al. , 2018 ) have been obtained by multiple groups using BERT models with task-specific fine-tuning . The " BERT-variant + fine-tuning " formula has continued to improve over time , with newer work constantly pushing the state of the art forward on the GLUE benchmark . The use of a single neural architecture for multiple NLP tasks showed promise long before the current wave of BERT-inspired methods ( Collobert & Weston , 2008 ) , and recent work has argued that autoregressive language models ( ARLMs ) trained on large-scale datasets , such as the GPT family of models ( Radford et al. , 2018 ) , are in practice multi-task learners ( Brown et al. , 2020 ) . However , even with MLMs and ARLMs trained for multi-tasking , single-task fine-tuning is usually also employed to achieve state-of-the-art performance on specific tasks of interest . Typically this fine-tuning process may entail : creating a task-specific fine-tuned model ( Devlin et al. , 2018 ) , training specialized model components for task-specific predictions ( Houlsby et al. , 2019 ) , or fine-tuning a single multi-task architecture ( Liu et al. , 2019b ) . ∗Joint first-authors 1For reader convenience , all acronyms in this paper are summarized in section A.1 of the Appendix . Single-task fine-tuning over all pretrained model parameters may have other issues . Recent analyses of such MLMs have shed light on the linguistic knowledge that is captured in the hidden states and attention maps ( Clark et al. , 2019b ; Tenney et al.
, 2019a; Merchant et al., 2020). In particular, BERT's middle Transformer (Vaswani et al., 2017) layers are typically the most transferable to a downstream task (Liu et al., 2019a). The model proxies the steps of the traditional NLP pipeline in a localizable way (Tenney et al., 2019a), with basic syntactic information appearing earlier in the network and high-level semantic information appearing in higher layers. Since pretraining is usually done on large-scale datasets, it may be useful, for a variety of downstream tasks, to conserve that knowledge. However, single-task fine-tuning causes catastrophic forgetting of the knowledge learned during MLM training (Howard & Ruder, 2018). To preserve this knowledge, freezing part of a pretrained network and using Adapters for new tasks has shown promising results (Houlsby et al., 2019). Inspired by the human ability to transfer learned knowledge from one task to another, Multi-Task Learning (MTL) in a general sense (Caruana, 1997; Rajpurkar et al., 2016b; Ruder, 2017) has been applied in many fields beyond NLP. Caruana (1993) showed that a model trained in a multi-task manner can take advantage of the inductive transfer between tasks, achieving better generalization performance. MTL has the advantage of computational/storage efficiency (Zhang & Yang, 2017), but training models in a multi-task setting is a balancing act, particularly with datasets that have different (a) sizes, (b) task difficulty levels, and (c) types of loss functions. In practice, learning multiple tasks at once is challenging, since negative transfer (Wang et al., 2019a), task interference (Wu et al., 2020; Yu et al., 2020) and catastrophic forgetting (Serrà et al., 2018) can lead to worse data efficiency, training stability and generalization compared to single-task fine-tuning.
Using Conditionally Adaptive Learning, we seek to improve pretraining knowledge retention and multi-task inductive knowledge transfer. Our contributions are the following: • A new task-conditioned Transformer that adapts and modulates pretrained weights (Section 2.1). • A novel way to prioritize tasks with an uncertainty-based multi-task data sampling method that helps balance the sampling of tasks to avoid catastrophic forgetting (Section 2.2). Our Conditionally Adaptive Multi-Task Learning (CA-MTL) approach is illustrated in Figure 1. To the best of our knowledge, our work is the first to explore the use of a latent representation of tasks to modularize and adapt pretrained architectures. Further, we believe our work is also the first to examine uncertainty sampling for large-scale multi-task learning in NLP. We show the efficacy of CA-MTL by (a) testing on 26 different tasks and (b) presenting state-of-the-art results on a number of test sets, as well as superior performance against both single-task and MTL baselines. Moreover, we further demonstrate that our method has advantages over (c) other adapter networks and (d) other MTL sampling methods. Finally, we provide ablations and a separate analysis of the MT-Uncertainty Sampling technique in Section 4.1 and of each component of the adapter in Section 4.2. 2 METHODOLOGY. This section is organized according to the two main MTL problems that we will tackle: (1) How to modularize a pretrained network with latent task representations? (2) How to balance different tasks in MTL? We define each task as T_i ≜ {p_i(y_i | x_i, z_i), L_i, p̃_i(x_i)}, where z_i is task i's learnable shallow embedding, L_i is the task loss, and p̃_i(x_i) is the empirical distribution of the training data pairs {x_i, y_i}, for i ∈ {1, ..., T} and T the number of supervised tasks. The MTL objective is:

min_{φ(z), θ_1, ..., θ_T} ∑_{i=1}^{T} L_i( f_{φ(z_i), θ_i}(x_i), y_i )   (1)

where f is the predictor function (comprising the encoder model and the decoder heads), φ(z) are learnable generated weights conditioned on z, and θ_i are task-specific parameters for the output decoder heads. z is constructed using an embedding lookup table. 2.1 TASK CONDITIONED TRANSFORMER. Our task-conditioned Transformer architecture is based on one simple concept: we either add conditional layers or modulate existing pretrained weights using a task representation, by extending Feature-Wise Linear Modulation (Perez et al., 2018) functions in several ways depending on the Transformer layer. We define our framework below. Definition 1 (Conditional Weight Transformations). Given a neural network weight matrix W, we compute transformations of the form φ(W | z_i) = γ_i(z_i) W + β_i(z_i), where γ_i and β_i are learned functions that transform the weights based on a learned vector embedding z_i for task i. Definition 2 (Conditionally Adaptive Learning). In our setting, Conditionally Adaptive Learning is the process of learning a set of φs for the conditionally adaptive modules presented below, along with a set of task embedding vectors z_i for T tasks, using a multi-task loss (see Equation 1). In the subsections that follow: We introduce a new Transformer attention module using block-diagonal Conditional Attention that allows the original query-key based attention to account for task-specific biases (Section 2.1.1). We propose a new Conditional Alignment method that aligns the data of diverse tasks and performs better than its unconditioned and higher-capacity predecessor (Section 2.1.2). We adapt layer normalization statistics to specific tasks using a new Conditional Layer Normalization module (Section 2.1.3). We add a Conditional Bottleneck that facilitates weight sharing and task-specific information flow from lower layers (Section 2.1.4).
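As a concrete illustration of Definition 1, the following NumPy sketch modulates one pretrained weight matrix per task. The realization of γ_i and β_i as linear maps of the task embedding, and all shapes and names, are assumptions of this sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # task-embedding dimension (illustrative)
T = 3   # number of tasks

Z = rng.normal(size=(T, d))   # learned task embeddings z_i
W = rng.normal(size=(d, d))   # a pretrained weight matrix

# gamma_i and beta_i realized as shared linear maps of z_i (an assumption;
# the paper only requires them to be learned functions of the embedding).
Wg = rng.normal(size=(d, d * d)) * 0.1
Wb = rng.normal(size=(d, d * d)) * 0.1

def phi(W, z):
    """phi(W | z_i) = gamma_i(z_i) * W + beta_i(z_i)  (Definition 1)."""
    gamma = (z @ Wg).reshape(W.shape)   # element-wise gain conditioned on the task
    beta = (z @ Wb).reshape(W.shape)    # additive offset conditioned on the task
    return gamma * W + beta

W_task0, W_task1 = phi(W, Z[0]), phi(W, Z[1])
assert W_task0.shape == W.shape
assert not np.allclose(W_task0, W_task1)   # the same pretrained W, modulated per task
```

The point of the transformation is that a single frozen W can serve every task, with only the small γ/β maps and the embeddings z_i being task-dependent.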
In our experiments we provide an ablation study of these components (Table 1) examining performance in terms of GLUE scores. 2.1.1 CONDITIONAL ATTENTION. Given the input dimension d and the query Q, key K, and value V as defined in Vaswani et al. (2017), we redefine the attention operation:

Attention(Q, K, V, z_i) = softmax[ M(z_i) + QKᵀ/√d ] V
M(z_i) = ⊕_{n=1}^{N} A′_n(z_i),  A′_n(z_i) = A_n γ_i(z_i) + β_i(z_i)

where ⊕ is the direct-sum operator (see Section A.6), N is the number of block matrices A_n ∈ R^{(L/N)×(L/N)} along the diagonal of the attention matrix, L is the input sequence length, and M(z_i) = diag(A′_1, ..., A′_N) is a block-diagonal conditional matrix. Note that A_n is constructed from L/N trainable and randomly initialized L/N-dimensional vectors. While the original attention matrix depends on the hidden states h, M(z_i) is a learnable weight matrix that depends only on the task embedding z_i ∈ R^d. γ_i, β_i : R^d → R^{L²/N²} are Feature-Wise Linear Modulation (Perez et al., 2018) functions. We also experimented with a full-block Conditional Attention matrix ∈ R^{L×L}. Not only did it have N² more parameters than the block-diagonal variant, but it also performed significantly worse on the GLUE development set (see the FBA variant in Table 10). It is possible that GLUE tasks derive a certain benefit from the localized attention that is a consequence of M(z_i). With M(z_i), each element in a sequence can only attend to other elements in its subsequence of length L/N. In our experiments we used N = d/L. The full Conditional Attention mechanism used in our experiments is illustrated in Figure 2. 2.1.2 CONDITIONAL ALIGNMENT. Wu et al. (2020) showed that in MTL, having T separate alignment modules R_1, ..., R_T increases BERTLARGE average scores on five GLUE tasks (CoLA, MRPC, QNLI, RTE, SST-2) by 2.35%.
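The block-diagonal conditional attention can be sketched directly from these formulas. All dimensions, the base matrices A_n, and the FiLM maps below are illustrative assumptions; the γ/β outputs are shared across blocks, matching their stated codomain R^{L²/N²}:

```python
import numpy as np

rng = np.random.default_rng(1)
L, d, N = 8, 16, 4   # sequence length, model dim, number of blocks (illustrative)
b = L // N           # block size L/N

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

A = rng.normal(size=(N, b, b))           # trainable base matrices A_n
Wg = rng.normal(size=(d, b * b)) * 0.1   # FiLM maps gamma, beta : R^d -> R^{L^2/N^2}
Wb = rng.normal(size=(d, b * b)) * 0.1

def conditional_bias(z):
    """M(z) = diag(A'_1, ..., A'_N) with A'_n = A_n * gamma(z) + beta(z)."""
    gamma = (z @ Wg).reshape(b, b)
    beta = (z @ Wb).reshape(b, b)
    M = np.zeros((L, L))
    for n in range(N):
        lo = n * b
        M[lo:lo + b, lo:lo + b] = A[n] * gamma + beta   # direct sum along the diagonal
    return M

def conditional_attention(Q, K, V, z):
    # Task-conditioned bias added to the usual scaled dot-product scores.
    return softmax(conditional_bias(z) + Q @ K.T / np.sqrt(d)) @ V

z = rng.normal(size=d)
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
out = conditional_attention(Q, K, V, z)
assert out.shape == (L, d)
```

Note that M(z) only biases the within-block scores; the query-key term still attends globally, so the "localization" comes from the learned bias rather than a hard mask.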
Inspired by this work, we found that adding a task-conditioned alignment layer between the input embedding layer and the first BERT Transformer layer improved multi-task model performance. However, instead of having T separate alignment matrices R_i, one for each of the T tasks, a single alignment matrix R̂ is generated as a function of the task embedding z_i. As in Wu et al. (2020), we tested this module on the same five GLUE tasks and with BERTLARGE. Enabling task-conditioned weight sharing across covariance alignment modules allows us to outperform BERTLARGE by 3.61%. This is 1.26% higher than having T separate alignment matrices. Inserting R̂ into BERT yields the following encoder function f̂:

f̂ = ∑_{i=1}^{T} g_{θ_i}( E(x_i) R̂(z_i) B ),  R̂(z_i) = R γ_i(z_i) + β_i(z_i)   (2)

where x_i ∈ R^d is the layer input, g_{θ_i} is the decoder-head function for task i with weights θ_i, E is the frozen BERT embedding layer, B the BERT Transformer layers, and R the linear weight matrix of a single task-conditioned alignment module. γ_i, β_i : R^d → R^d are Feature-Wise Linear Modulation functions.
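The conditional alignment of Equation 2 can be sketched similarly. The excerpt does not pin down how the d-dimensional outputs of γ_i and β_i combine with the d×d matrix R, so the broadcasting below (column-wise gain, row-broadcast offset) is an assumption of this sketch, as are all names and sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
d, T = 16, 4                          # hidden size and number of tasks (illustrative)

R = rng.normal(size=(d, d)) * 0.1     # single shared alignment weight matrix
Z = rng.normal(size=(T, d))           # task embeddings z_i
Wg = rng.normal(size=(d, d)) * 0.1    # FiLM maps gamma, beta : R^d -> R^d
Wb = rng.normal(size=(d, d)) * 0.1

def R_hat(z):
    """R_hat(z_i) = R gamma_i(z_i) + beta_i(z_i); the broadcasting scheme here
    (column-wise gain, row-broadcast offset) is an assumption of this sketch."""
    gamma, beta = z @ Wg, z @ Wb      # both in R^d
    return R * gamma + beta

E_x = rng.normal(size=(2, d))         # embedded inputs E(x_i), a batch of 2 tokens
aligned = [E_x @ R_hat(Z[i]) for i in range(T)]   # then fed into the layers B
assert aligned[0].shape == (2, d)
assert not np.allclose(aligned[0], aligned[1])    # one matrix, task-specific behavior
```

The design choice mirrors Definition 1: one shared R replaces the T separate matrices of Wu et al. (2020), with per-task behavior recovered through the cheap γ/β modulation.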
This paper proposes a new Transformer framework for multi-task learning on NLP tasks. To deal with challenges in multi-task learning/co-training, the authors propose five improvements, including modifications to the Transformer layers with task conditioning and uncertainty sampling. In the experiments, the authors show that the proposed model can outperform a fully fine-tuned BERT-large model with fewer parameters (adding only a small number of parameters to a single co-trained model) and with less training data.
The paper explores a collection of strategies to improve multi-task learning and bring performance on par with single-task training. The strategies build on and show good awareness of existing work, and achieve positive overall results on GLUE, SuperGLUE, and MRQA tasks. Ablation experiments demonstrate the value of the individual components being proposed. The results are particularly impressive given the small number of additional parameters introduced into the model, and in the context of prior work that has shown mixed results from multi-task training.
A straightforward line search approach on the expected empirical loss for stochastic deep learning problems
1 INTRODUCTION AND BACKGROUND. The automatic determination of an optimal learning-rate schedule for training models with stochastic gradient descent or similar optimizers is still not solved satisfactorily for standard, and especially for new, deep learning tasks. Frequently, optimization approaches utilize the information of the loss and gradient of a single batch to perform an update step. However, those approaches focus on the batch loss, whereas the optimal step size should actually be determined for the empirical loss, which is the expected loss over all batches. In classical optimization, line searches are commonly used to determine good step sizes. In deep learning, however, the noisy loss function makes it impractically costly to search for step sizes on the empirical loss. This work empirically revisits the observation that the empirical loss has a simple shape in the direction of noisy gradients. Based on this information, it is shown that the empirical loss can be easily fitted with lower-order polynomials in these directions. This is done by performing a straightforward, one-dimensional regression on batch losses sampled in such a direction. It then becomes simple to determine a suitable minimum, and thus a good step size, from the approximated function. This results in a line search on the empirical loss. Compared to the direct measurement of the empirical loss at several locations, our approach is cost-efficient, since it solely requires a sample size of about 500 losses to approximate a cross section of the loss. From a practical point of view this is still too expensive for determining the step size at every step. Fortunately, it turns out to be sufficient to estimate a new step size only a few times during a training process, which does not require any additional time, owing to the more beneficial update steps.
We show that this straightforward optimization approach, called ELF (Empirical Loss Fitting optimizer), performs robustly across datasets and models without the need for hyperparameter tuning. This makes ELF a choice to be considered in order to achieve good results for new deep learning tasks out of the box. In the following we revisit the fundamentals of optimization in deep learning to make our approach easily understandable. Following Goodfellow et al. (2016), the aim of optimization in deep learning is generally to find a global minimum of the true loss (risk) function L_true, which is the expected loss over all elements of the data-generating distribution p_data:

L_true(θ) = E_{(x,y)∼p_data} L(f(x; θ), y)   (1)

where L is the loss function for each sample (x, y), θ are the parameters to optimize, and f is the model function. However, p_data is usually unknown and we need to use an empirical approximation p̂_data, which is usually given indirectly by a dataset T. Due to the central limit theorem we can assume p̂_data to be Gaussian. In practice, optimization is performed on the empirical loss L_emp:

L_emp(θ) = E_{(x,y)∼p̂_data} L(f(x; θ), y) = (1/|T|) ∑_{(x,y)∈T} L(f(x; θ), y)   (2)

An unsolved task is to find a global minimum of L_true by optimizing on L_emp if |T| is finite. Thus, we have to assume that a small value of L_emp will also be small for L_true. Evaluating L_emp is impractical and expensive, therefore we approximate it with mini-batches:

L_batch(θ, B) = (1/|B|) ∑_{(x,y)∈B⊂T} L(f(x; θ), y)   (3)

where B denotes a batch. We call the dataset split into batches T_batch.
We can now reinterpret L_emp as the empirical mean over a list of losses 𝕃 that contains the output of L_batch(θ, B) for each batch B:

L_emp(θ) = (1/|𝕃|) ∑_{L_batch(θ,B) ∈ 𝕃} L_batch(θ, B)   (4)

A vertical cross section l_emp(s) of L_emp(θ) in the direction d through the parameter vector θ_0 is given by

l_emp(s; θ_0, d) = L_emp(θ_0 + s · d)   (5)

For simplification, we refer to l as a line function or cross section. The step size to the minimum of l_emp(s) is called s_min. Many direct and indirect line-search approaches for deep learning are often applied on L_batch(θ, B) (Mutschler & Zell (2020), Berrada et al. (2019), Rolinek & Martius (2018), Baydin et al. (2017), Vaswani et al. (2019)). Mutschler & Zell (2020) approximate an exact line search, which implies estimating the global minimum of a line function, by using one-dimensional parabolic approximations. The other approaches, directly or indirectly, perform inexact line searches by estimating positions on the line function that fulfill specific conditions, such as the Goldstein, Armijo and Wolfe conditions (Jorge Nocedal (2006)). However, Mutschler & Zell (2020) empirically suggest that line searches on L_batch are not optimal, since minima of line functions of L_batch are not always good estimators of the minima of line functions of L_emp. Thus, it seems more promising to perform a line search on L_emp. This is cost-intensive, since we need to determine L(f(x; θ_0 + s·d), y) for all (x, y) ∈ T at multiple s of a line function. Probabilistic Line Search (PLS) (Mahsereci & Hennig (2017)) addresses this problem by performing Gaussian process regressions, which result in multiple one-dimensional cubic splines. In addition, a probabilistic belief over the first (= Armijo condition) and second Wolfe condition is introduced to find good update positions.
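The relation between the batch loss (Eq. 3), the empirical loss (Eq. 4) and a line cross section (Eq. 5) can be made concrete with a toy one-parameter model; everything here (the quadratic per-sample loss, the sizes) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-parameter model with per-sample loss (theta - x)^2 (invented for illustration).
data = rng.normal(loc=1.0, size=20)   # dataset T of 20 samples
batches = data.reshape(4, 5)          # T split into 4 equal-size batches

def L_batch(theta, batch):            # Eq. (3): mean loss over one batch
    return np.mean((theta - batch) ** 2)

def L_emp(theta):                     # Eq. (4): mean over the list of batch losses
    return np.mean([L_batch(theta, B) for B in batches])

def l_emp(s, theta0, d):              # Eq. (5): cross section through theta0 along d
    return L_emp(theta0 + s * d)

# With equal-size batches, the mean of batch losses equals the full-data loss of Eq. (2).
assert np.isclose(L_emp(0.5), np.mean((0.5 - data) ** 2))
```

The last assertion is the whole point of Eq. (4): sampling batch losses along a line is an unbiased way to probe l_emp without ever touching the full dataset at once.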
The major drawback of this conceptually appealing but complex method is that, for each batch, the squared gradients of each input sample have to be computed. This is not supported by default in common deep learning libraries and therefore has to be implemented manually for every layer of the model, which makes its application impractical. Gradient-only line search (GOLS1) (Kafka & Wilke (2019)) pointed out empirically that the noise of directional derivatives in the negative gradient direction is considerably smaller than the noise of the losses. They argue that they can approximate a line search on L_emp by considering consecutive noisy directional derivatives. Adaptive methods, such as Kingma & Ba (2014); Luo et al. (2019); Reddi et al. (2018); Liu et al. (2019); Tieleman & Hinton (2012); Zeiler (2012); Robbins & Monro (1951), concentrate more on finding good directions than on optimal step sizes. Thus, they could benefit from line-search approaches applied to their estimated directions. Second-order methods, such as Berahas et al. (2019); Schraudolph et al. (2007); Martens & Grosse (2015); Ramamurthy & Duffy (2017); Botev et al. (2017), tend to find better directions but are generally too expensive for deep learning scenarios. Our approach follows PLS and GOLS1 by performing a line search directly on L_emp. We use a regression on multiple L_batch(θ_0 + s·d, B) values, sampled with different step sizes s and different batches B, to estimate a minimum of a line function of L_emp in direction d. Consequently, this work is a further step towards efficient steepest-descent line searches on L_emp, which show linear convergence on any deterministic function that is twice continuously differentiable, has a relative minimum, and has only positive eigenvalues of the Hessian at the minimum (see Luenberger et al. (1984)). The details as well as the empirical foundation of our approach are explained in the following. 2 OUR APPROACH.
2.1 EMPIRICAL FOUNDATIONS. Xing et al. (2018); Mutschler & Zell (2020); Mahsereci & Hennig (2017); Chae & Wilke (2019) showed empirically that line functions of L_batch in negative gradient directions tend to exhibit a simple shape for all analyzed deep learning problems. To get an intuition of how lines of the empirical loss in the direction of the negative gradient tend to behave, we tediously sampled L_batch(θ_t + s · (−∇_{θ_t} L_batch(θ_t, B_t)), B) for 50 equally distributed s between −0.3 and 0.7 and every B ∈ T during a training process of a ResNet32 trained on CIFAR-10 with a batch size of 100. The results are given in Figure 1.¹ They lead to the following characteristics: 1. l_emp has a simple shape and can be approximated well by lower-order polynomials, splines or Fourier series. 2. l_emp does not change much over consecutive lines. 3. Minima of lines of L_batch can be shifted away from the minima of l_emp lines and can even lead to update steps that increase L_emp. Characteristic 3 consolidates why line searches on L_emp are to be favored over line searches on L_batch. Although we derived these findings from only one training process, we can assure, by analyzing the measured point clouds of our approach, that they seem to be valid for all datasets, tasks, and models considered (see Appendix D). 2.2 OUR LINE SEARCH ON THE EXPECTED EMPIRICAL LOSS. There are two major challenges to be solved in order to perform line searches on L_emp: 1. To measure l_emp(s; θ, d) it is required to determine every L(f(x; θ_0 + s·d), y) for all (x, y) ∈ T for all step sizes s on a line. 2. For a good direction of the line function one has to know ∇_θ L_emp(θ) = (1/|T|) ∑_{B∈T} ∇L_batch(θ, B). We solve the first challenge by fitting l_emp with lower-order polynomials, which can be achieved accurately by sampling a considerably low number of batch-loss values.
We do not have an efficient solution for the second challenge; thus, we simplify the problem by taking the unit gradient of the current batch B_t, which is ∇̂_θ L_batch(θ, B_t), as the direction of the line search. The line function we search on is thus given by:

l_ELF(s; θ_t, B_t) = L_emp(θ_t + s · (−∇̂_{θ_t} L_batch(θ_t, B_t))) ≈ lower-order polynomial   (6)

Note that θ_t and B_t are fixed during the line search. Our straightforward concept is to sample n losses L_batch(θ_0 + s_i · (−∇̂_θ L_batch(θ, B_0)), B_i), with i ranging from 1 to n, B_i uniformly chosen from T, and s_i uniformly chosen from a reasonable interval, on which we will focus later. Now, we follow a classical function-fitting or machine learning routine. An ordinary least squares regression (OLSR) for polynomials is performed. Note that our data is not homoscedastic, as required for OLSR.² This implies that our resulting estimator is still unbiased, but we cannot perform an analysis of variances (see Goldberger et al. (1964)). However, the latter is not needed in our case. These regressions are performed with increasing degree until the test error of the fitted polynomial increases. The test error is determined by a 5-fold cross-validation. The second-to-last polynomial degree is chosen and the polynomial is fitted again on all loss values to obtain a more accurate fit. Consequently, the minimum closest to the initial location is determined and additional losses are measured in a reasonable interval around it. This process is repeated four times. Finally, the step to the closest minimum of the fitted polynomial is chosen as the update step if it exists and its value is positive. Otherwise, a new line search is started. This routine is described in more detail in Algorithm 1. An empirical example of the search for the best-fitting polynomial is given in Figure 2.2. ¹These results have already been published by the authors of this paper in another context in [ref].
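The fitting routine just described — OLSR with increasing polynomial degree, 5-fold cross-validation to pick the degree, then the minimum nearest the origin — can be sketched as follows. The synthetic line data and the maximum degree of 4 are assumptions of this sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic line data: 500 noisy batch losses along a line whose true minimum is at
# s = 0.4 (the sample size follows the paper's setting; the curve itself is invented).
s = rng.uniform(-0.3, 0.7, size=500)
losses = (s - 0.4) ** 2 + 0.05 * rng.normal(size=s.size)

def cv_error(degree, x, y, folds=5):
    """Mean 5-fold cross-validation test error of a degree-`degree` polynomial fit."""
    idx = rng.permutation(x.size)
    errs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        coefs = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coefs, x[test]) - y[test]) ** 2))
    return np.mean(errs)

# Increase the degree until the test error rises, then keep the previous degree.
max_degree, best_degree, last_err = 4, 4, np.inf
for degree in range(max_degree + 1):
    err = cv_error(degree, s, losses)
    if err > last_err:
        best_degree = degree - 1
        break
    last_err = err

# Refit on all losses and take the minimum of the polynomial nearest to 0:
# a real root of the derivative, inside the sampled interval, with positive curvature.
coefs = np.polyfit(s, losses, best_degree)
roots = np.roots(np.polyder(coefs))
minima = [r.real for r in roots
          if abs(r.imag) < 1e-8 and s.min() <= r.real <= s.max()
          and np.polyval(np.polyder(coefs, 2), r.real) > 0]
s_min = min(minima, key=abs)
assert abs(s_min - 0.4) < 0.1   # recovers the minimum of the underlying line function
```

Note how little machinery is involved: everything is `np.polyfit`, `np.polyval` and `np.roots`, which matches the paper's remark that the subroutines are easy to implement with NumPy.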
The empirically found heuristic to determine reasonable measure intervals is given in Algorithm 2 . This routine empirically ensures , that the point cloud of losses is wider than high , so that a correctly oriented polynomial is fitted . To determine when to measure a new step size with a line search , we utilize that one can estimate the expected improvement by lELF ( 0 ) − lELF ( smin ) . If the real improvement of the training loss times a factor is smaller than the expected improvement , both determined over a step window , a new step size is determined . The full ELF algorithm is given in Algorithm 3 in Appendix A . We note that all of the mentioned subroutines are easy to implement with the Numpy Python library , which reduces the implementation effort significantly . The presented pseudo codes include the most important aspects for understanding our approach . For a more detailed description we refer to our implementation found in the supplementary material . Based on our empirical experience with this approach we introduce the following additions : 1 . We measure 3 consecutive lines and take the average resulting step size to continue training with SGD . 2 . We have experienced that ELF generalizes better if not performing a step to the minimum , but to perform a step that decreased the loss by a decrease factor δ of the overall improvement . In detail , we estimate xtarget > xmin , which satisfies f ( xtarget ) = δ ( f ( x0 ) − xmin ) − f ( xmin ) with δ ∈ [ 0 , 1 ) 3 . We use a momentum term β on the gradient , which can lead to an improvement in generalization . 4 . To prevent over-fitting , the batch losses required for fitting polynomials are sampled from the validation set . 5 . At the beginning of the training a short grid search is done to find the maximal step size that still supports training . This reduces the chances of getting stuck in a local minima at the beginning of optimization . 
2We indirectly use weighted OLSR by sampling more points in relevant intervals around the minimum , which softens the effect of heteroscedasticity . Algorithm 1 Pseudo code of ELF ’ s line search routine ( see Algorithm 3 ) Input : d : direction ( default : current unit gradient ) Input : θ0 : initial parameter space position Input : Lbatch ( θt ) : batch loss function which randomly chooses a batch Input : k : sample interval adaptations ( default : 5 ) Input : n : losses to sample per adaptation ( default : 100 ) 1 : interval width← 1.0 2 : sample positions← [ ] 3 : lineLosses← [ ] 4 : for r from 0 to k do 5 : if r ! = 0 then 6 : interval width← chose sample interval ( minimum location , sample positions , line losses , coefficents ) 7 : end if 8 : new sample positions← get uniformly distributed values ( n , interval width ) 9 : for m in new sample positions do 10 : line losses.append ( Lbatch ( θ0 +md ) ) 11 : end for 12 : sample positions.extend ( new sample positions ) 13 : last test error←∞ 14 : for degree from 0 to max polynomial degree do 15 : test error← 5-fold cross validation ( degree , sample positions , line losses ) 16 : if last test error < test error then 17 : best degree← degree−1 18 : last test error← test error 19 : break 20 : end if 21 : if degree == max polynomial degree then 22 : best degree← max polynomial degree 23 : break 24 : end if 25 : end for 26 : coefficients← fit polynomial ( best degree , sample positions , line losses ) 27 : minimum position , improvement← get minimum position nearest to 0 ( coefficients ) 28 : end for 29 : return minimum position , improvement , k · n Algorithm 2 Pseudo code of the chose sample interval routine of Algorithm 1 Input : minimum position Input : sample positions Input : line losses : list of losses corresponding to the sample positions Input : coefficents : coefficients of the polynomial of the current fit Input : min window size ( default : 50 ) 1 : window← { m : m ∈ sample positions and 0 ≤ m ≤ 2 
·minimum location } 2 : if |window| < min window size then 3 : window← get 50 nearest sample pos to minimum pos ( sample positions , minimum position ) 4 : end if 5 : target loss← get third quartile ( window , line losses [ window ] ) 6 : interval width← get nearest position where the absolut of polynomialtakes value ( coefficents , target loss ) 7 : return interval width
The paper tackles an important problem: tuning the step sizes of SGD during training, aiming to approximate the step sizes that would be used in full-batch gradient descent (even though the search direction remains noisy). Relevant prior work is cited. The method is simple and easy to understand. Step sizes are kept piecewise constant across updates. New batches are sampled, and the loss values along the search line are fitted with a low-order polynomial; a heuristic chooses the sampling interval around 0 for the step. Importantly, these batches are sampled from the validation set. While this sounds very elaborate for finding the minimum of the approximation, the authors later state that they only aim for sufficient descent along the line. Compared to previous work, the method is simple and seems quite robust. As drawbacks, the method seems rather expensive, and it has a large number of free parameters that need to be chosen.
A straightforward line search approach on the expected empirical loss for stochastic deep learning problems
1 INTRODUCTION AND BACKGROUND. The automatic determination of an optimal learning-rate schedule for training models with stochastic gradient descent or similar optimizers is still not solved satisfactorily for standard, and especially for new, deep learning tasks. Frequently, optimization approaches use the loss and gradient of a single batch to perform an update step. However, these approaches focus on the batch loss, whereas the optimal step size should actually be determined for the empirical loss, i.e., the expected loss over all batches. In classical optimization, line searches are commonly used to determine good step sizes. In deep learning, however, the noisy loss function makes it impractically costly to search for step sizes on the empirical loss. This work empirically revisits the observation that the empirical loss has a simple shape in the direction of noisy gradients. Based on this observation, it is shown that the empirical loss can easily be fitted with lower-order polynomials in these directions. This is done by performing a straightforward one-dimensional regression on batch losses sampled along such a direction. It then becomes simple to determine a suitable minimum, and thus a good step size, from the approximated function. The result is a line search on the empirical loss. Compared to measuring the empirical loss directly at several locations, our approach is cost-efficient, since it only requires about 500 batch losses to approximate a cross section of the loss. From a practical point of view, this is still too expensive for determining the step size at every step. Fortunately, it turns out to be sufficient to estimate a new step size only a few times during training, which does not add any overall training time thanks to the more beneficial update steps.
We show that this straightforward optimization approach, called ELF (Empirical Loss Fitting optimizer), performs robustly across datasets and models without the need for hyperparameter tuning. This makes ELF a candidate for achieving good results on new deep learning tasks out of the box. In the following, we revisit the fundamentals of optimization in deep learning to make our approach easy to follow. Following Goodfellow et al. (2016), optimization in deep learning generally aims to find a global minimum of the true loss (risk) function Ltrue, the expected loss over all elements of the data-generating distribution pdata:
Ltrue(θ) = E_{(x,y)∼pdata} L(f(x; θ), y) (1)
where L is the loss function for each sample (x, y), θ are the parameters to optimize, and f is the model function. However, pdata is usually unknown, and we need to use an empirical approximation p̂data, which is usually given indirectly by a dataset T. Due to the central limit theorem we can assume p̂data to be Gaussian. In practice, optimization is performed on the empirical loss Lemp:
Lemp(θ) = E_{(x,y)∼p̂data} L(f(x; θ), y) = (1/|T|) ∑_{(x,y)∈T} L(f(x; θ), y) (2)
An unsolved problem is finding a global minimum of Ltrue by optimizing Lemp when |T| is finite. We therefore have to assume that a small value of Lemp implies a small value of Ltrue. Evaluating Lemp is impractical and expensive, so we approximate it with mini-batches:
Lbatch(θ, B) = (1/|B|) ∑_{(x,y)∈B⊂T} L(f(x; θ), y) (3)
where B denotes a batch. We call the dataset split into batches Tbatch.
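As a toy illustration of Eqs. (2) and (3): with equally sized batches that partition T, the mean of the batch losses recovers the empirical loss exactly. The linear model and squared-error loss below are illustrative choices, not from the paper.

```python
import numpy as np

def sample_loss(theta, x, y):
    # illustrative per-sample loss L(f(x; theta), y) with a linear model f(x) = theta * x
    return (theta * x - y) ** 2

def L_emp(theta, X, Y):
    # empirical loss: mean over the full dataset T, as in Eq. (2)
    return np.mean(sample_loss(theta, X, Y))

def L_batch(theta, Xb, Yb):
    # batch loss: mean over one mini-batch B ⊂ T, as in Eq. (3)
    return np.mean(sample_loss(theta, Xb, Yb))

rng = np.random.default_rng(0)
X = rng.normal(size=1000)
Y = 2.0 * X + 0.1 * rng.normal(size=1000)

# with a partition of T into equally sized batches, averaging the batch
# losses gives back the empirical loss
batches = [(X[i:i + 100], Y[i:i + 100]) for i in range(0, 1000, 100)]
mean_of_batch_losses = np.mean([L_batch(1.5, xb, yb) for xb, yb in batches])
```

This identity is what allows the paper to reinterpret Lemp as a mean over batch losses.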
We can now reinterpret Lemp as the empirical mean over a list of losses L that contains the output of Lbatch(θ, B) for each batch B:
Lemp(θ) = (1/|L|) ∑_{Lbatch(θ,B)∈L} Lbatch(θ, B) (4)
A vertical cross section lemp(s) of Lemp(θ) in direction d through the parameter vector θ0 is given by
lemp(s; θ0, d) = Lemp(θ0 + s · d) (5)
For simplicity, we refer to l as the line function or cross section. The step size to the minimum of lemp(s) is called smin. Many direct and indirect line search approaches for deep learning are applied to Lbatch(θ, B) (Mutschler & Zell, 2020; Berrada et al., 2019; Rolinek & Martius, 2018; Baydin et al., 2017; Vaswani et al., 2019). Mutschler & Zell (2020) approximate an exact line search, i.e., they estimate the global minimum of a line function, using one-dimensional parabolic approximations. The other approaches perform, directly or indirectly, inexact line searches by estimating positions on the line function that fulfill specific conditions, such as the Goldstein, Armijo, and Wolfe conditions (Nocedal & Wright, 2006). However, Mutschler & Zell (2020) suggest empirically that line searches on Lbatch are not optimal, since minima of line functions of Lbatch are not always good estimators of the minima of line functions of Lemp. It therefore seems more promising to perform a line search on Lemp. This is cost-intensive, since we would need to determine L(f(x; θ0 + s·d), y) for all (x, y) ∈ T at multiple s along a line. Probabilistic Line Search (PLS) (Mahsereci & Hennig, 2017) addresses this problem by performing Gaussian process regressions, which result in multiple one-dimensional cubic splines. In addition, a probabilistic belief over the first (= Armijo) and second Wolfe condition is introduced to find good update positions.
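The cross section of Eq. (5) reduces the high-dimensional loss to a one-dimensional function of the step size. A minimal sketch on a toy quadratic loss (all names illustrative):

```python
import numpy as np

def make_line_function(L, theta0, d):
    # l(s; theta0, d) = L(theta0 + s * d): a 1-D cross section of L, as in Eq. (5)
    return lambda s: L(theta0 + s * d)

# toy quadratic loss with its minimum at theta = [1, 2]
L = lambda th: np.sum((th - np.array([1.0, 2.0])) ** 2)
theta0 = np.zeros(2)

g = 2.0 * (theta0 - np.array([1.0, 2.0]))   # gradient of L at theta0
u = -g / np.linalg.norm(g)                  # unit negative-gradient direction

line = make_line_function(L, theta0, u)
s_grid = np.linspace(0.0, 4.0, 401)
s_min = s_grid[np.argmin([line(s) for s in s_grid])]
```

For this convex quadratic, the negative gradient points straight at the minimum, so the line minimum (at s = √5 here) coincides with the global minimizer along that ray.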
The major drawback of this conceptually appealing but complex method is that, for each batch, the squared gradient of every input sample has to be computed. This is not supported by default in common deep learning libraries and therefore has to be implemented manually for every layer of the model, which makes its application impractical. Gradient-only line search (GOLS1) (Kafka & Wilke, 2019) pointed out empirically that the noise of directional derivatives in the negative gradient direction is considerably smaller than the noise of the losses. They argue that a line search on Lemp can be approximated by considering consecutive noisy directional derivatives. Adaptive methods (Kingma & Ba, 2014; Luo et al., 2019; Reddi et al., 2018; Liu et al., 2019; Tieleman & Hinton, 2012; Zeiler, 2012; Robbins & Monro, 1951) concentrate more on finding good directions than on optimal step sizes. Thus, they could benefit from line search approaches applied along their estimated directions. Second-order methods (Berahas et al., 2019; Schraudolph et al., 2007; Martens & Grosse, 2015; Ramamurthy & Duffy, 2017; Botev et al., 2017) tend to find better directions but are generally too expensive for deep learning scenarios. Our approach follows PLS and GOLS1 in performing a line search directly on Lemp. We use a regression on multiple values Lbatch(θ0 + s·d, B), sampled with different step sizes s and different batches B, to estimate a minimum of a line function of Lemp in direction d. Consequently, this work is a further step towards efficient steepest-descent line searches on Lemp, which show linear convergence on any deterministic function that is twice continuously differentiable, has a relative minimum, and has only positive eigenvalues of the Hessian at the minimum (see Luenberger et al., 1984). The details as well as the empirical foundation of our approach are explained in the following. 2 OUR APPROACH.
2.1 EMPIRICAL FOUNDATIONS. Xing et al. (2018), Mutschler & Zell (2020), Mahsereci & Hennig (2017), and Chae & Wilke (2019) showed empirically that line functions of Lbatch in negative gradient directions tend to exhibit a simple shape for all analyzed deep learning problems. To get an intuition of how lines of the empirical loss in the direction of the negative gradient tend to behave, we exhaustively sampled Lbatch(θt + s · (−∇θt Lbatch(θt, Bt)), B) for 50 equally spaced s between −0.3 and 0.7 and every B ∈ T, over a training run of a ResNet32 trained on CIFAR-10 with a batch size of 100. The results are given in Figure 1.¹ They lead to the following characteristics:
1. lemp has a simple shape and can be approximated well by lower-order polynomials, splines, or Fourier series.
2. lemp does not change much over consecutive lines.
3. Minima of lines of Lbatch can be shifted away from the minima of lemp lines and can even lead to update steps that increase Lemp.
Characteristic 3 explains why line searches on Lemp are to be favored over line searches on Lbatch. Although we derived these findings from only one training process, analyzing the point clouds measured by our approach indicates that they hold for all datasets, tasks, and models considered (see Appendix D). 2.2 OUR LINE SEARCH ON THE EXPECTED EMPIRICAL LOSS. Two major challenges must be solved in order to perform line searches on Lemp: 1. To measure lemp(s; θ0, d), one must determine L(f(x; θ0 + s·d), y) for all (x, y) ∈ T and for every step size s on the line. 2. For a good direction of the line function, one has to know ∇θ Lemp(θ) = (1/|T|) ∑_{B∈T} ∇Lbatch(θ, B). We solve the first challenge by fitting lemp with lower-order polynomials, which can be done accurately by sampling a comparatively small number of batch loss values.
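Characteristic 3 can be reproduced with a purely synthetic stand-in (this is not the ResNet32 experiment): each batch line is modeled as the empirical line plus a random tilt, so single-batch minima scatter around the minimum of lemp while the averaged line keeps its minimum in place.

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(-0.3, 0.7, 50)        # 50 equally spaced step sizes, as in the paper

l_emp = (s - 0.2) ** 2                # assumed simple shape of the empirical line, minimum at s = 0.2
# synthetic batch lines: the empirical line plus a per-batch random tilt and offset
batch_lines = [l_emp + rng.normal(0, 0.1) * s + rng.normal(0, 0.01)
               for _ in range(100)]

# minimum of the averaged (≈ empirical) line stays near 0.2 ...
emp_argmin = s[np.argmin(np.mean(batch_lines, axis=0))]
# ... while single-batch minima are shifted around it
batch_argmins = np.array([s[np.argmin(l)] for l in batch_lines])
```

The spread of `batch_argmins` is the synthetic analogue of the shifted batch-line minima visible in the paper's point clouds.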
We do not have an efficient solution for the second challenge; we therefore simplify the problem by taking the unit gradient of the current batch Bt, denoted ∇̂θ Lbatch(θ, Bt), as the direction of the line search. The line function we search on is thus given by:
lELF(s; θt, Bt) = Lemp(θt + s · (−∇̂θt Lbatch(θt, Bt))) ≈ lower-order polynomial (6)
Note that θt and Bt are fixed during the line search. ¹These results have already been published by the authors of this paper in another context in [ref]. Our straightforward concept is to sample n losses Lbatch(θ0 + si · (−∇̂θ Lbatch(θ, B0)), Bi), with i ranging from 1 to n, Bi chosen uniformly from T, and si chosen uniformly from a reasonable interval, on which we will focus later. We then follow a classical function-fitting or machine learning routine. An ordinary least squares regression (OLSR) for polynomials is performed. Note that our data is not homoscedastic, as required for OLSR.² This implies that our resulting estimator is still unbiased, but we cannot perform an analysis of variances (see Goldberger et al., 1964); the latter, however, is not needed in our case. These regressions are performed with increasing degree until the test error of the fitted polynomial increases. The test error is determined by 5-fold cross-validation. The second-to-last polynomial degree is chosen, and the polynomial is fitted again on all loss values to obtain a more accurate fit. The minimum closest to the initial location is then determined, and additional losses are measured in a reasonable interval around it. This process is repeated four times. Finally, the step to the closest minimum of the fitted polynomial is chosen as the update step if such a minimum exists and the estimated improvement is positive. Otherwise, a new line search is started. This routine is described in more detail in Algorithm 1. An empirical example of the search for the best-fitting polynomial is given in Figure 2.2.
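A minimal NumPy sketch of the fitting routine just described: raise the polynomial degree until the 5-fold cross-validation error increases, refit the previous degree on all samples, and take the real root of the derivative (with positive curvature) closest to 0. Function names and the synthetic data are ours, not from the reference implementation.

```python
import numpy as np

def cv_error(degree, s, losses, folds=5):
    # mean squared test error of a degree-`degree` polynomial under k-fold CV
    idx = np.arange(len(s)) % folds
    errs = []
    for f in range(folds):
        tr, te = idx != f, idx == f
        coeffs = np.polyfit(s[tr], losses[tr], degree)
        errs.append(np.mean((np.polyval(coeffs, s[te]) - losses[te]) ** 2))
    return np.mean(errs)

def fit_line(s, losses, max_degree=6):
    # increase the degree until the CV error rises, then keep the previous degree
    best_degree, last_err = max_degree, np.inf
    for degree in range(1, max_degree + 1):
        err = cv_error(degree, s, losses)
        if err > last_err:
            best_degree = degree - 1
            break
        last_err = err
    return np.polyfit(s, losses, best_degree)   # refit on all loss values

def nearest_minimum(coeffs):
    # real critical points with positive second derivative; return the one closest to 0
    d1, d2 = np.polyder(coeffs), np.polyder(coeffs, 2)
    minima = [r.real for r in np.atleast_1d(np.roots(d1))
              if abs(r.imag) < 1e-9 and np.polyval(d2, r.real) > 0]
    return min(minima, key=abs) if minima else None

rng = np.random.default_rng(0)
s = rng.uniform(-0.5, 1.0, 200)
losses = (s - 0.3) ** 2 + rng.normal(0, 0.02, 200)   # noisy batch losses around a parabola
s_min = nearest_minimum(fit_line(s, losses))
```

On this synthetic line the recovered minimum lands near the true value 0.3 despite the per-sample noise.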
The empirically found heuristic for determining reasonable measurement intervals is given in Algorithm 2. This routine empirically ensures that the point cloud of losses is wider than it is high, so that a correctly oriented polynomial is fitted. To decide when to measure a new step size with a line search, we use the fact that the expected improvement can be estimated by lELF(0) − lELF(smin). If the real improvement of the training loss, times a factor, is smaller than the expected improvement (both determined over a window of steps), a new step size is determined. The full ELF algorithm is given in Algorithm 3 in Appendix A. We note that all of the mentioned subroutines are easy to implement with the NumPy Python library, which reduces the implementation effort significantly. The presented pseudocode includes the most important aspects for understanding our approach; for a more detailed description we refer to our implementation in the supplementary material. Based on our empirical experience with this approach, we introduce the following additions:
1. We measure 3 consecutive lines and take the average of the resulting step sizes to continue training with SGD.
2. We found that ELF generalizes better if it does not step all the way to the minimum, but instead takes a step that decreases the loss by a decrease factor δ of the overall improvement. In detail, we estimate xtarget > xmin satisfying f(xtarget) = f(x0) − δ · (f(x0) − f(xmin)), with δ ∈ [0, 1).
3. We use a momentum term β on the gradient, which can improve generalization.
4. To prevent over-fitting, the batch losses required for fitting polynomials are sampled from the validation set.
5. At the beginning of training, a short grid search is performed to find the maximal step size that still supports training. This reduces the chance of getting stuck in a local minimum at the beginning of optimization.
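Addition 2 and the re-search trigger can be sketched as follows. The target-loss equation f(x_target) = f(0) − δ·(f(0) − f(x_min)) is our reading of "decrease the loss by a factor δ of the overall improvement", and the grid scan stands in for a proper root finder; all names are illustrative.

```python
import numpy as np

def delta_target(coeffs, x_min, delta):
    # position x_target > x_min whose fitted loss equals f(0) - delta * (f(0) - f(x_min))
    f0, fmin = np.polyval(coeffs, 0.0), np.polyval(coeffs, x_min)
    target_loss = f0 - delta * (f0 - fmin)
    xs = np.linspace(x_min, 4 * x_min, 1000)        # search beyond the minimum
    return xs[np.argmin(np.abs(np.polyval(coeffs, xs) - target_loss))]

def needs_new_line_search(expected_improvement, real_improvement, factor=0.5):
    # trigger a new line search when the realized training-loss improvement,
    # scaled by `factor`, falls below the improvement predicted by the fit
    return factor * real_improvement < expected_improvement
```

For the parabola f(x) = (x − 1)² with δ = 0.8, the target sits past the minimum at x = 1 + √0.2, where the residual loss equals 20% of the initial loss.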
²We indirectly use weighted OLSR by sampling more points in relevant intervals around the minimum, which softens the effect of heteroscedasticity.

Algorithm 1 Pseudocode of ELF's line search routine (see Algorithm 3)
Input: d: direction (default: current unit gradient)
Input: θ0: initial position in parameter space
Input: Lbatch(θ): batch loss function, which randomly chooses a batch
Input: k: number of sample interval adaptations (default: 5)
Input: n: losses to sample per adaptation (default: 100)
1: interval width ← 1.0
2: sample positions ← [ ]
3: line losses ← [ ]
4: for r from 0 to k do
5:   if r != 0 then
6:     interval width ← choose sample interval(minimum position, sample positions, line losses, coefficients)
7:   end if
8:   new sample positions ← get uniformly distributed values(n, interval width)
9:   for m in new sample positions do
10:    line losses.append(Lbatch(θ0 + m·d))
11:  end for
12:  sample positions.extend(new sample positions)
13:  last test error ← ∞
14:  for degree from 0 to max polynomial degree do
15:    test error ← 5-fold cross validation(degree, sample positions, line losses)
16:    if test error > last test error then
17:      best degree ← degree − 1
18:      break
19:    end if
20:    last test error ← test error
21:    if degree == max polynomial degree then
22:      best degree ← max polynomial degree
23:    end if
24:  end for
25:  coefficients ← fit polynomial(best degree, sample positions, line losses)
26:  minimum position, improvement ← get minimum position nearest to 0(coefficients)
27: end for
28: return minimum position, improvement, k·n

Algorithm 2 Pseudocode of the choose sample interval routine of Algorithm 1
Input: minimum position
Input: sample positions
Input: line losses: list of losses corresponding to the sample positions
Input: coefficients: coefficients of the polynomial of the current fit
Input: min window size (default: 50)
1: window ← {m : m ∈ sample positions and 0 ≤ m ≤ 2 · minimum position}
2: if |window| < min window size then
3:   window ← get 50 nearest sample positions to minimum position(sample positions, minimum position)
4: end if
5: target loss ← get third quartile(window, line losses[window])
6: interval width ← get nearest position where the absolute of the polynomial takes value(coefficients, target loss)
7: return interval width
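Algorithm 2 can be sketched in NumPy as follows. This is a loose sketch: the window and third-quartile logic follow the pseudocode, while the "nearest position where the polynomial takes the target value" is approximated by a grid scan, and all names are ours.

```python
import numpy as np

def choose_sample_interval(minimum_position, sample_positions, line_losses,
                           coeffs, min_window_size=50):
    # window of positions between 0 and twice the minimum position
    mask = (sample_positions >= 0) & (sample_positions <= 2 * minimum_position)
    if mask.sum() < min_window_size:
        # fall back to the positions nearest to the minimum
        order = np.argsort(np.abs(sample_positions - minimum_position))
        mask = np.zeros_like(mask)
        mask[order[:min_window_size]] = True
    # target height: third quartile of the losses in the window
    target_loss = np.percentile(line_losses[mask], 75)
    # grid approximation of the position where the polynomial reaches that height
    xs = np.linspace(0.0, 4 * max(minimum_position, 1e-3), 2000)
    return xs[np.argmin(np.abs(np.polyval(coeffs, xs) - target_loss))]
```

The returned width keeps the sampled point cloud wider than it is high, as the text requires.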
This work proposes ELF, a new method for line search. The key idea is to fit a low-order polynomial to the empirical loss (by sampling multiple batches) along the direction of a mini-batch gradient. The method stays computationally efficient by only computing the step size every so often. Experiments on a variety of image classification tasks show that ELF is competitive with GOLS1, PLS, PAL, SGD, and Adam, while taking less time to train.
Extracting Strong Policies for Robotics Tasks from Zero-Order Trajectory Optimizers
1 INTRODUCTION. The general purpose of model-based and model-free reinforcement learning (RL) is to optimize a trajectory or find a policy that is fast and accurate enough to be deployed on real robotic systems. Policies optimized by model-free RL algorithms achieve outstanding results in many challenging domains (Heess et al., 2017; Andrychowicz et al., 2020); however, in order to reach their final performance, they require a large number of interactions with the environment and can hardly be used on real robots, which have a limited lifespan. Moreover, real robotic systems are high-dimensional and have a highly non-convex optimization landscape, which makes policy gradient methods prone to converging to locally optimal solutions. In addition, model-free RL methods only gather task-specific information, which inherently limits their generalization to new situations. On the other hand, recent advances in model-based RL show that it is possible to match model-free performance by learning uncertainty-aware system dynamics (Chua et al., 2018; Deisenroth & Rasmussen, 2011; Du et al., 2019). The learned model can then be used within a model-predictive control framework for trajectory optimization. ∗Equal contribution. We acknowledge the support from the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039B) and from the Max Planck ETH Center for Learning Systems. Zero-order optimizers are gaining a lot of traction in the model-based RL community (Chua et al., 2018; Wang & Ba, 2020; Williams et al., 2015), since they can be used with any choice of model and cost function, and can be surprisingly effective at finding high-performance solutions (Pinneri et al., 2020) (close to a global optimum), in contrast to their gradient-based counterparts, which are often highly dependent on hyperparameter tuning (Henderson et al., 2017).
One of the most popular optimizers is the Cross-Entropy Method (CEM), originally introduced in the 1990s by Rubinstein & Davidson (1999). Despite their achievements, using zero-order methods to generate action sequences is time-consuming in complex high-dimensional environments, due to the extensive sampling, making them hard to deploy in real-time applications. Extracting a policy from powerful zero-order optimizers like CEM would bridge the gap between model-based RL in simulation and real-time robotics. As of today, this is still an open challenge (Wang & Ba, 2020). We analyze this issue and showcase several approaches for policy extraction from CEM. In particular, we use the sample-efficient modification of CEM (iCEM) presented in Pinneri et al. (2020). Throughout the paper, we call these optimizers "experts", as they provide demonstration trajectories. To isolate the problem of bringing policy performance close to the expert's, we consider the true simulation dynamics as our forward model. Our contributions can be summarized as follows:
• pinpointing the issues that arise when trying to distill a policy from a multimodal, stochastic teacher;
• introducing APEX, an Adaptive Policy EXtraction procedure that integrates iCEM with DAgger and a novel adaptive variant of Guided Policy Search;
• our specific integration of methods produces an improving adaptive teacher with higher performance than the original iCEM optimizer;
• obtaining strong policies for hard robotic tasks in simulation (HUMANOID STANDUP, FETCH PICK & PLACE, DOOR), where model-free policies would usually just converge to local optima.
Videos showing the performance of the extracted policies and other information can be found at https://martius-lab.github.io/APEX. 2 RELATED WORK. Our objective is to extract high-performing policies from CEM experts that can operate with few planning samples, to make iterative learning fast.
Other kinds of zero-order optimizers have been used to generate control sequences (Williams et al., 2015; Lowrey et al., 2019), but they still have to evaluate thousands of trajectories at each time step. Even simple random shooting has been used as a trajectory optimizer to bootstrap a model-free policy (Nagabandi et al., 2018). To train policies from optimal-control solutions, it was shown that the expert optimizers need to be guided towards the learning policy, known as guided policy search (GPS) (Levine & Koltun, 2013; Levine & Abbeel, 2014). In our work, the expert does not come from optimal control but is the stochastic iCEM optimizer, which we also refer to as the teacher. We apply GPS in a model-predictive control setting, as done in Levine & Koltun (2013); Mordatch & Todorov (2014); Mordatch et al. (2015); Zhang et al. (2016); Kahn et al. (2017); Sun et al. (2018), using local trajectory optimization to generate a dataset for training a global policy through imitation learning. These approaches alternate between training a policy and creating new data with a model-based supervisor guided towards the learner, which was formalized in Sun et al. (2018). Stochastic experts require particular guidance strategies, such as the adaptive cost formulation that we propose here, together with expert warm-starting via distribution initialization and additional samples from the policy. A simple form of warm-starting was already done in Wang & Ba (2020). Recently, approaches such as simple point-to-point supervised training, i.e., Behavioral Cloning (BC), or Generative Adversarial Network (GAN) training have been explored (Wang & Ba, 2020) for policy distillation from CEM, but only largely sub-optimal policies could be extracted.
When the policy is used alone at test time, and not in combination with the MPC-CEM optimizer, its performance drops significantly, becoming almost random in some environments. We argue that the reason behind the difficulty of distilling a policy from CEM expert data is the multimodality of the CEM solution space and its inherent stochasticity due to the sampling, which we address below. Another possibility for training higher-performance policies from experts is DAgger (Ross & Bagnell, 2010), an on-policy method that asks the expert to relabel/correct policy actions. Nevertheless, as we show in the following sections, DAgger alone is not sufficient to extract high-performing policies in our setting. To solve this problem, we use a guiding cost in combination with DAgger. A combination of GPS and DAgger-style relabeling was proposed in PLATO (Kahn et al., 2017), however with the aim of creating unbiased training data from iLQG experts. Since unguided DAgger is not appropriate with CEM, PLATO is not successful in our setting either. The components of our algorithm are explained in the following sections. 3 METHODS. Trajectory optimization aims to find a suitable action sequence ~at = (at, at+1, ..., at+h) of horizon h that minimizes a cost function f(~at, st), where st is the current state of the system, i.e.,
~a*t ← argmin_{~at} f(~at, st). (1)
The cost function f encodes the task. Optimal control is obtained if f is the trajectory cost up to step h plus the cost-to-go under the optimal policy for an infinite-horizon problem, evaluated at the last state of the finite-horizon trajectory. Typically, the cost-to-go is replaced by a proxy cost such as the distance to a target. In our case, the cost f is given by the sum of negative environment rewards up to the planning horizon. 3.1 IMPROVED CROSS-ENTROPY METHOD FOR MPC. To optimize Eq. 1, we use a variant of the Cross-Entropy Method.
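Eq. 1 on a toy one-dimensional point mass, solved with the simplest zero-order method, random shooting, which the related-work section mentions as a baseline. The dynamics, cost, and all names are illustrative, not from the paper.

```python
import numpy as np

def rollout_cost(actions, s0):
    # toy 1-D point mass: state = position, action = velocity command;
    # cost = summed squared distance to the goal position 1.0
    s, cost = s0, 0.0
    for a in actions:
        s = s + 0.1 * a
        cost += (s - 1.0) ** 2
    return cost

def random_shooting(s0, horizon=10, n_samples=1000, seed=0):
    # approximate argmin over action sequences (Eq. 1) by sampling candidates
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    costs = np.array([rollout_cost(a, s0) for a in candidates])
    return candidates[np.argmin(costs)], costs.min()

best_actions, best_cost = random_shooting(0.0)
```

In an MPC loop, only `best_actions[0]` would be executed before re-planning from the new state.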
Although originally introduced as an adaptive importance-sampling procedure for estimating rare-event probabilities, CEM was recently employed in model-based RL (Chua et al., 2018; Wang & Ba, 2020) as a trajectory optimizer. A practical implementation of CEM uses a Gaussian proposal distribution N(µ, σ) over the optimization variables ~a, evaluates the cost function f(~a), and iteratively refits this distribution using the top-k samples. It finds low-cost regions of f and high-performing samples of the variable ~a. The cost function can be expressed either through a learned value function or through the dynamics model of the system, which in turn can be learned from data or given in analytical form. In the model-predictive control (MPC) framework, CEM can be used at every time step to generate the optimal action plan, of which only the first action is executed; the whole procedure is repeated at the next step, until task completion. In this paper, we use iCEM, a sample-efficient improvement of CEM by Pinneri et al. (2020) that makes use of colored noise and memory. In particular, it generates correlated action sequences with a non-flat frequency spectrum, unlike the flat Gaussian noise of CEM. This, together with the memory addition, results in an order-of-magnitude reduction in samples. 3.1.1 USING A POLICY TO INFORM THE OPTIMIZATION: iCEMπ. Unguided search for action sequences can be hard, in particular if the cost function has large flat regions (sparse rewards). Since we aim at extracting a policy π(s), we can expect that, after some examples, the policy can be used to guide the search into the right region. Thus, we warm-start the mean µ of the iCEM Gaussian distribution with the policy actions. In particular, at time t = 0, where no prior information exists, the mean is initialized by rolling out the policy until the planning horizon: µ ← (π(s0), π(s1), ..., π(sh)). Whenever the action for a new step is computed, iCEM uses shift initialization of the mean, and only the mean action at the end of the horizon is initialized from the policy. Since iCEM is sample-based, we also provide it with samples from the policy directly. More concretely, the actions performed by the policy are added in the last iteration of iCEM. The policy-informed iCEM algorithm is called iCEMπ and is shown in Alg. 1.

Algorithm 1: Improved Cross-Entropy Method with warm-starting and samples from policy π, denoted iCEMπ. Blue marks policy warm-starting and red marks adding policy samples.
Input: f(~a, s): cost function; π: policy; N: # of samples; h: planning horizon; k: elite-set size; β: colored-noise exponent; σinit: noise strength; iter: # of iterations; γ: reduction factor.
1: τ ← ∅
2: for t = 0 to T − 1 do
3:   s ← get the current state from the environment
4:   if t == 0 then
5:     µ0 ← h-step rollout of the model with π starting from s
6:   else
7:     µt ← shifted µt−1 with the last time-step action given by π
8:   σt ← constant vector in R^{d×h} with values σinit
9:   for i = 0 to iter − 1 do
10:    Ni ← max(N · γ^−i, 2·k)
11:    samples ← Ni samples from clip(µt + C^β(d, h) σt²)   // with C^β(d, h) the colored-noise normal distribution with noise exponent β and dimension (d, h)
12:    add a fraction of the shifted/reused elite set to samples
13:    if i == last iteration then
14:      add µt to samples
15:      add policy actions to samples   // h-step rollout with the policy from current state s
16:    costs ← cost function f(~a, s) for each ~a in samples
17:    elite-set_t ← best k samples according to costs
18:    µt, σt ← fit Gaussian distribution to elite-set_t with momentum
19:  a ← first action of the best elite sequence
20:  add (s, a) to τ
21:  execute a
22: return τ   // return the trajectory
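The two policy hooks of iCEMπ (warm-started mean, policy samples added in the last iteration) can be sketched in a plain-CEM planning step. The quadratic cost and the zero "policy rollout" are stand-ins, and iCEM's colored noise, memory, and clipping are omitted.

```python
import numpy as np

def cem_plan(cost_fn, policy_actions, horizon, n_samples=200, elite_k=20,
             iters=4, sigma_init=0.5, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.array(policy_actions, dtype=float)   # warm-start the mean with the policy rollout
    sigma = np.full(horizon, sigma_init)
    for i in range(iters):
        samples = rng.normal(mu, sigma, size=(n_samples, horizon))
        if i == iters - 1:
            # last iteration: add the policy's own action sequence as a candidate
            samples = np.vstack([samples, policy_actions])
        costs = np.array([cost_fn(a) for a in samples])
        elites = samples[np.argsort(costs)[:elite_k]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    best = samples[np.argmin(costs)]
    return best[0], best        # in MPC, only the first action is executed

# stand-in cost: the action sequence should track the constant target 0.3
cost = lambda a: np.sum((a - 0.3) ** 2)
first_action, plan = cem_plan(cost, policy_actions=np.zeros(5), horizon=5)
```

Even with a poor warm-start (all zeros), a few elite refits pull the proposal distribution toward the low-cost region.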
This paper presents an approach for distilling a model-based planning expert into a policy that enables real-time execution on robotic systems. iCEM, an improved version of the Cross-Entropy Method, is used to generate trajectories via model-based optimization, where the forward model is the ground-truth simulator dynamics. The expert is further improved by warm-starting it with the learned policy. The policy itself is learned by imitating trajectories from the expert. Several approaches for this step are presented, including vanilla behaviour cloning (BC), dataset aggregation (DAgger) to reduce covariate shift, and Guided Policy Search (GPS), where the planner is encouraged to stay close to the policy distribution via a trust-region KL loss. The paper discusses the merits of each approach and proposes Adaptive Policy EXtraction (APEX), which learns the policy from the expert through a combination of DAgger and GPS, with the trade-off between planner exploration and the policy trust-region KL set adaptively. The approach is tested on four continuous control tasks, where the learned policy achieves significant improvements over the baselines. An ablation study of the different parts of the APEX algorithm is also presented.
SP:168f23644e133a08f2e16ee31b0a8f0620aea96d
Extracting Strong Policies for Robotics Tasks from Zero-Order Trajectory Optimizers
1 INTRODUCTION

The general purpose of model-based and model-free reinforcement learning (RL) is to optimize a trajectory or find a policy that is fast and accurate enough to be deployed on real robotic systems. Policies optimized by model-free RL algorithms achieve outstanding results in many challenging domains (Heess et al., 2017; Andrychowicz et al., 2020); however, in order to converge to their final performance, they require a large number of interactions with the environment and can hardly be used on real robots, which have a limited lifespan. Moreover, real robotic systems are high-dimensional and have a highly non-convex optimization landscape, which makes policy gradient methods prone to converge to locally optimal solutions. In addition, model-free RL methods only gather task-specific information, which inherently limits their generalization to new situations. On the other hand, recent advances in model-based RL show that it is possible to match model-free performance by learning uncertainty-aware system dynamics (Chua et al., 2018; Deisenroth & Rasmussen, 2011; Du et al., 2019). The learned model can then be used within a model-predictive control framework for trajectory optimization. Zero-order optimizers are gaining a lot of traction in the model-based RL community (Chua et al., 2018; Wang & Ba, 2020; Williams et al., 2015) since they can be used with any choice of model and cost function, and can be surprisingly effective in finding high-performance solutions (Pinneri et al., 2020) (close to a global optimum), in contrast to their gradient-based counterparts, which are often highly dependent on hyperparameter tuning (Henderson et al., 2017).
∗Equal contribution. We acknowledge the support from the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039B) and from the Max Planck ETH Center for Learning Systems.
One of the most popular optimizers is the Cross-Entropy Method (CEM), originally introduced in the 90s by Rubinstein & Davidson (1999). Despite their achievements, using zero-order methods for generating action sequences is time-consuming in complex high-dimensional environments, due to the extensive sampling, making it hard to deploy them for real-time applications. Extracting a policy from powerful zero-order optimizers like CEM would bridge the gap between model-based RL in simulation and real-time robotics. As of today, this is still an open challenge (Wang & Ba, 2020). We analyze this issue and showcase several approaches for policy extraction from CEM. In particular, we will use the sample-efficient modification of CEM (iCEM) presented in Pinneri et al. (2020). Throughout the paper, we will call these optimizers "experts" as they provide demonstration trajectories. To isolate the problem of bringing policy performance close to the expert's, we consider the true simulation dynamics as our forward model. Our contributions can be summarized as follows:
• pinpointing the issues that arise when trying to distill a policy from a multimodal, stochastic teacher;
• introducing APEX, an Adaptive Policy EXtraction procedure that integrates iCEM with DAgger and a novel adaptive variant of Guided Policy Search;
• a specific integration of methods that produces an improving adaptive teacher, with higher performance than the original iCEM optimizer;
• obtaining strong policies for hard robotic tasks in simulation (HUMANOID STANDUP, FETCH PICK & PLACE, DOOR), where model-free policies would usually just converge to local optima.
Videos showing the performance of the extracted policies and other information can be found at https://martius-lab.github.io/APEX.

2 RELATED WORK

Our objective is to extract high-performing policies from CEM experts that can operate with a few planning samples to make iterative learning fast.
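In its simplest form, extracting a policy from such demonstration trajectories is supervised regression on expert state-action pairs. A minimal behaviour-cloning sketch (the linear policy and the noise-free synthetic expert below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def behaviour_clone(states, actions):
    """Least-squares fit of a linear policy a = s @ W to expert (s, a) pairs."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 2))   # hypothetical expert: a = s @ W_true
S = rng.normal(size=(100, 3))      # states visited by the expert
A = S @ W_true                     # noise-free expert actions
W_hat = behaviour_clone(S, A)
print(np.allclose(W_hat, W_true, atol=1e-6))
```

A real policy would be a neural network trained by gradient descent, but the dataset construction (states and actions logged from expert rollouts) is the same.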
Other kinds of zero-order optimizers have been used to generate control sequences (Williams et al., 2015; Lowrey et al., 2019), but they still have to evaluate thousands of trajectories for each time step. Even simple random shooting has been used as a trajectory optimizer to bootstrap a model-free policy (Nagabandi et al., 2018). To train policies from optimal control solutions, it was shown that the expert optimizers need to be guided towards the learning policy, an approach known as guided policy search (GPS) (Levine & Koltun, 2013; Levine & Abbeel, 2014). In our work, the expert does not come from optimal control but is the stochastic iCEM optimizer, which we will also refer to as the teacher. We apply GPS in a model-predictive control setting, as done in Levine & Koltun (2013); Mordatch & Todorov (2014); Mordatch et al. (2015); Zhang et al. (2016); Kahn et al. (2017); Sun et al. (2018), using local trajectory optimization to generate a dataset for training a global policy through imitation learning. These approaches alternate between training a policy and creating new data with a model-based supervisor guided towards the learner, which was formalized in Sun et al. (2018). Stochastic experts require particular guidance strategies, such as the adaptive cost formulation that we propose here, together with expert warm-starting via distribution initialization and additional samples from the policy. A simple form of warm-starting was already done in Wang & Ba (2020). Recently, approaches such as simple point-to-point supervised training, e.g. Behavioral Cloning (BC), or Generative Adversarial Network (GAN) training have been explored (Wang & Ba, 2020) for policy distillation from CEM, but only largely sub-optimal policies could be extracted.
When the policy is used alone at test time and not in combination with the MPC-CEM optimizer, its performance drops significantly, becoming almost random for some environments. We argue that the reason behind the difficulty in distilling a policy from CEM expert data is the multimodality of the CEM solution space and its inherent stochasticity due to the sampling, which we address below. Another possibility to train higher-performance policies from experts is DAgger (Ross & Bagnell, 2010), an on-policy method that asks the expert to relabel/correct policy actions. Nevertheless, as we will show in the following sections, DAgger alone is not sufficient to extract high-performing policies in our setting. To solve this problem, we use a guiding cost in combination with DAgger. A combination of GPS and DAgger-style relabeling was proposed in PLATO (Kahn et al., 2017), however, to create unbiased training data from iLQG experts. Since unguided DAgger is not appropriate with CEM, PLATO is not successful in our setting either. The components of our algorithm will be explained in the following sections.

3 METHODS

Trajectory optimization aims to find a suitable action sequence ~a_t = (a_t, a_{t+1}, ..., a_{t+h}) of horizon h that minimizes a cost function f(~a_t, s_t), where s_t is the current state of the system, i.e.,

~a*_t ← argmin_{~a_t} f(~a_t, s_t).   (1)

The cost function f encodes the task. Optimal control is obtained if f is the trajectory cost until step h plus the cost-to-go under the optimal policy for an infinite-horizon problem, evaluated at the last state of the finite-horizon trajectory. Typically, the cost-to-go is replaced by a proxy cost such as the distance to a target. In our case, the cost f is given by the sum of negative environment rewards up to the planning horizon.

3.1 IMPROVED CROSS-ENTROPY METHOD FOR MPC

To optimize Eq. (1), we use a variant of the Cross-Entropy Method.
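The cost just described, the sum of negative environment rewards accumulated by rolling out the dynamics over the horizon, can be sketched on a toy point-mass system (the dynamics and reward below are illustrative, not any benchmark's):

```python
import numpy as np

def step(s, a):
    """Toy point-mass dynamics; reward is negative distance to the origin."""
    s_next = s + 0.1 * a
    reward = -np.linalg.norm(s_next)
    return s_next, reward

def trajectory_cost(actions, s0):
    """f(~a_t, s_t): negative sum of rewards along a rollout of the dynamics."""
    s, total_reward = s0, 0.0
    for a in actions:
        s, r = step(s, a)
        total_reward += r
    return -total_reward

s0 = np.array([1.0, 0.0])
towards = np.tile([-1.0, 0.0], (5, 1))   # action sequence driving to the origin
away    = np.tile([ 1.0, 0.0], (5, 1))   # action sequence driving away from it
print(trajectory_cost(towards, s0) < trajectory_cost(away, s0))  # True
```

In the paper `step` is the ground-truth simulator; any optimizer that minimizes `trajectory_cost` over `actions` implements Eq. (1).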
Although originally introduced as an adaptive importance sampling procedure to estimate rare-event probabilities, it was recently employed in model-based RL (Chua et al., 2018; Wang & Ba, 2020) as a trajectory optimizer. A practical implementation of CEM uses a Gaussian proposal distribution N(µ, σ) over the optimization variables ~a, evaluates the cost function f(~a), and refits this distribution iteratively using the top-k samples. It finds low-cost regions of f and high-performing samples of the variable ~a. The cost function can be expressed either through a learned value function or through the dynamics model of the system, which in turn can be learned from data or given in analytical form. In the model-predictive control (MPC) framework, CEM can be used at every time-step to generate the optimal action plan, of which only the first action is executed; the whole procedure is repeated for the next steps until task completion. In this paper, we will make use of iCEM, a sample-efficient improvement of CEM by Pinneri et al. (2020) that makes use of colored noise and memory. In particular, it generates correlated action sequences with a non-flat frequency spectrum, unlike the flat Gaussian noise of CEM. This, together with the memory addition, results in an order-of-magnitude sample reduction.

3.1.1 USING A POLICY TO INFORM THE OPTIMIZATION: iCEMπ

Unguided search for action sequences can be hard, in particular if the cost function has large flat regions (sparse rewards). Since we aim at extracting a policy π(s), we can expect that after some examples the policy can be used to guide the search into the right region. Thus, we warm-start the mean µ of the iCEM Gaussian distribution with the policy actions. In particular, at time t = 0, where no prior information exists, the mean is initialized from rolling out the policy until the planning horizon: µ ← (π(s_0), π(s_1), ..., π(s_h)). Whenever the action for a new step is computed, iCEM uses shift initialization of the mean, and only the mean action at the end of the horizon is initialized from the policy. Since iCEM is sample-based, we also provide it with samples from the policy directly. More concretely, the actions performed by the policy are added in the last iteration of iCEM. The policy-informed iCEM algorithm is called iCEMπ and is shown in Alg. 1.

Algorithm 1: Improved Cross-Entropy Method with warm-starting and samples from policy π, denoted as iCEMπ. Blue marks policy warm-starting and red is adding policy samples.
Input: f(~a, s): cost function; π: policy; N: # of samples; h: planning horizon; k: elite-set size; β: colored-noise exponent; σ_init: noise strength; iter: # of iterations; γ: reduction factor.
1   τ ← ∅
2   for t = 0 to T−1 do
3       s ← get the current state from the environment
4       if t == 0 then
5           µ_0 ← h-step rollout of the model with π starting from s
6       else
7           µ_t ← shifted µ_{t−1} with the last time-step action given by π
8       σ_t ← constant vector in R^{d×h} with values σ_init
9       for i = 0 to iter−1 do
10          N_i ← max(N · γ^{−i}, 2·k)
11          samples ← N_i samples from clip(µ_t + C^β(d, h) σ²_t)   // C^β(d, h): colored-noise normal distribution with noise exponent β and dimension (d, h)
12          add a fraction of the shifted/reused elite-set to samples
13          if i == last-iter then
14              add µ_i to samples
15              add policy actions to samples   // h-step rollout with π from current state s
16          costs ← cost function f(~a, s) for ~a in samples
17          elite-set_t ← best k samples according to costs
18          µ_t, σ_t ← fit Gaussian distribution to elite-set_t with momentum
19      a ← first action of best elite sequence
20      add (s, a) to τ
21      execute a
22  return τ   // return the trajectory
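The core of lines 9-18 of Alg. 1 is the classic CEM refitting loop. A minimal sketch with plain Gaussian noise (the colored-noise sampling, sample-size decay, elite-set reuse, momentum, and policy samples of iCEMπ are omitted here):

```python
import numpy as np

def cem_plan(cost_fn, mu, sigma, n_samples=128, n_elites=16, iters=15, seed=0):
    """Plain CEM: sample around (mu, sigma), keep the top-k, refit, repeat."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        samples = mu + sigma * rng.standard_normal((n_samples,) + mu.shape)
        costs = np.array([cost_fn(a) for a in samples])
        elites = samples[np.argsort(costs)[:n_elites]]       # top-k samples
        mu, sigma = elites.mean(axis=0), elites.std(axis=0)  # refit Gaussian
    return mu

# Toy check: recover a 5-step 1-D action sequence minimising a quadratic cost.
target = np.linspace(-1.0, 1.0, 5)
plan = cem_plan(lambda a: np.sum((a - target) ** 2),
                mu=np.zeros(5), sigma=np.ones(5))
print(np.allclose(plan, target, atol=0.15))
```

In the MPC setting of Alg. 1, only the first entry of the returned mean (or of the best elite sequence) would be executed before replanning.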
In summary, this paper combines GPS and DAgger to learn a policy network by imitating a model-based controller that uses iCEM as the optimizer. The approach uses both the iCEM controller and the learned policy, with DAgger-like relabeling, to collect data, and then trains the network with behavior cloning. An auxiliary loss inspired by GPS, which encourages consistency between the expert controller and the learned policy, together with an adaptive weighting method, is added to mitigate the multi-modality issue. The experiments show promising results.
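One plausible reading of such a combined objective, behaviour cloning plus a GPS-style consistency term whose weight is adapted from the measured divergence, is sketched below; the diagonal-Gaussian KL, the loss shapes, and the λ-update rule are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gaussian_kl(mu_p, std_p, mu_q, std_q):
    """KL( N(mu_p, std_p) || N(mu_q, std_q) ) for diagonal Gaussians."""
    return np.sum(np.log(std_q / std_p)
                  + (std_p**2 + (mu_p - mu_q)**2) / (2 * std_q**2) - 0.5)

def apex_like_loss(policy_mu, expert_a, plan_mu, plan_std, lam):
    """Behaviour cloning plus a weighted planner/policy trust-region term.
    Assumes (for simplicity) the policy shares the planner's std."""
    bc = np.mean((policy_mu - expert_a) ** 2)
    kl = gaussian_kl(plan_mu, plan_std, policy_mu, plan_std)
    return bc + lam * kl

def adapt_lambda(lam, kl, kl_target=0.05):
    """Dual-style update: tighten the weight when KL exceeds the target."""
    return lam * 1.5 if kl > kl_target else lam / 1.5

# Sanity checks: identical Gaussians have zero KL, so the loss reduces to BC.
kl0 = gaussian_kl(np.zeros(2), np.ones(2), np.zeros(2), np.ones(2))
print(abs(kl0) < 1e-12)
```

The adaptive weight replaces hand-tuning the trade-off between planner exploration and staying close to the current policy.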
SP:168f23644e133a08f2e16ee31b0a8f0620aea96d
Learning Safe Multi-agent Control with Decentralized Neural Barrier Certificates
1 INTRODUCTION

Machine learning (ML) has created unprecedented opportunities for achieving full autonomy. However, learning-based methods in autonomous systems (AS) can and do fail due to the lack of formal guarantees and limited generalization capability, which poses significant challenges for developing safety-critical AS, especially large-scale multi-agent AS, that are provably dependable. On the other hand, safety certificates (Chang et al., 2019; Jin et al., 2020; Choi et al., 2020), which widely exist in control theory and formal methods, serve as proofs for the satisfaction of the desired properties of a system under certain control policies. For example, once found, a Control Barrier Function (CBF) ensures that the closed-loop system always stays inside some safe set (Wieland & Allgöwer, 2007; Ames et al., 2014) with a CBF Quadratic Programming (QP) supervisory controller. However, it is extremely difficult to synthesize a CBF by hand for complex dynamic systems, which has spurred a growing interest in learning-based CBFs (Saveriano & Lee, 2020; Srinivasan et al., 2020; Jin et al., 2020; Boffi et al., 2020; Taylor et al., 2020; Robey et al., 2020). However, all of these studies concern only single-agent systems. How to develop learning-based approaches for safe multi-agent control that are both provably dependable and scalable remains open. In multi-agent control, there is a constant dilemma: centralized control strategies can hardly scale to a large number of agents, while decentralized control without coordination often misses safety and performance guarantees. In this work, we propose a novel learning framework that jointly designs multi-agent control policies and a safety certificate from data, which can be implemented in a decentralized fashion and scales to an arbitrary number of agents.
Specifically, we first introduce the notion of decentralized CBF as a safety certificate, then propose a framework for learning decentralized CBFs with generalization error guarantees. The decentralized CBF can be seen as a contract among agents, which allows agents to learn a mutual agreement with each other on how to avoid collisions.¹ Once such a controller is achieved through the joint-learning framework, it can be applied to an arbitrary number of agents and in scenarios that differ from the training scenarios, which resolves the fundamental scalability issue in multi-agent control. We also propose several effective techniques in Section 4 to make the learning process even more scalable and practical, which are then validated extensively in Section 5. Experimental results are indeed promising. We study both 2D and 3D safe multi-agent control problems, each with several distinct environments and complex nonholonomic dynamics. Our joint-learning framework performs exceptionally well: our control policies trained on scenarios with 8 agents can be used on up to 1024 agents while maintaining low collision rates, which has notably pushed the boundary of learning-based safe multi-agent control. Speaking of which, 1024 is not the limit of our approach but rather of the limited computational capability of the laptop used for the experiments. We also compare our approach with both leading learning-based methods (Lowe et al., 2017; Zhang & Bastani, 2019; Liu et al., 2020) and traditional planning methods (Ma et al., 2019; Fan et al., 2020). Our approach outperforms all the other approaches in terms of both completing the tasks and maintaining safety. Contributions.
¹ https://realm.mit.edu/blog/learning-safe-multi-agent-control-decentralized-neural-barrier-certificates
Our main contributions are three-fold: 1) We propose the first framework for jointly learning safe multi-agent control policies and CBF certificates in a decentralized fashion. 2) We present several techniques that make the learning framework more effective and scalable for practical multi-agent systems, including the use of quantity-permutation invariant neural network architectures to handle the permutation of neighbouring agents. 3) We demonstrate via extensive experiments that our method significantly outperforms other leading methods, and has exceptional generalization capability to unseen scenarios and an arbitrary number of agents, even in quite complex multi-agent environments such as ground robots and drones. The video that demonstrates the outstanding performance of our method can be found in the supplementary material.

Related Work. Learning-Based Safe Control via CBF. Barrier certificates (Prajna et al., 2007) and CBFs (Wieland & Allgöwer, 2007) are well-known effective tools for guaranteeing the safety of nonlinear dynamic systems. However, the existing methods for constructing CBFs either rely on specific problem structures (Chen et al., 2017b) or do not scale well (Mitchell et al., 2005). Recently, there has been an increasing interest in learning-based and data-driven safe control via CBFs, which primarily falls into two categories: learning CBFs from data (Saveriano & Lee, 2020; Srinivasan et al., 2020; Jin et al., 2020; Boffi et al., 2020), and CBF-based approaches for controlling unknown systems (Wang et al., 2017; 2018; Cheng et al., 2019; Taylor et al., 2020). Our work is more pertinent to the former and is complementary to the latter, which usually assumes that the CBF is provided. None of these learning-enabled approaches, however, has addressed the multi-agent setting.

Multi-Agent Safety Certificates and Collision Avoidance.
Restricted to holonomic systems, guaranteeing safety in multi-agent systems has been approached by limiting the velocities of the agents (Van den Berg et al., 2008; Alonso-Mora et al., 2013). Later, Borrmann et al. (2015) and Wang et al. (2017) proposed the framework of multi-agent CBFs to generate collision-free controllers, with either perfectly known system dynamics (Borrmann et al., 2015) or worst-case uncertainty bounds (Wang et al., 2017). Recently, Chen et al. (2020) proposed a decentralized controller synthesis approach under this CBF framework, which is scalable to an arbitrary number of agents. However, in Chen et al. (2020) the CBF controller relies on online integration of the dynamics under a backup strategy, which can be computationally challenging for complex systems. Due to space limits, we omit other non-learning multi-agent control methods but acknowledge their importance.

Safe Multi-Agent (Reinforcement) Learning (MARL). Safety concerns have drawn increasing attention in MARL, especially with applications to safety-critical multi-agent systems (Zhang & Bastani, 2019; Qie et al., 2019; Shalev-Shwartz et al., 2016). Under the CBF framework, Cheng et al. (2020) considered the setting with unknown system dynamics and proposed to design robust multi-agent CBFs based on the learned dynamics. This mirrors the second category mentioned above in single-agent learning-based safe control, which is orthogonal to our focus. RL approaches have also been applied to multi-agent collision avoidance (Chen et al., 2017a; Lowe et al., 2017; Everett et al., 2018; Zhang et al., 2018). Nonetheless, no formal guarantees of safety were established in these works.
One exception is Zhang & Bastani (2019), which proposed a multi-agent model predictive shielding algorithm that provably guarantees safety for any policy learned from MARL, and which differs from our multi-agent CBF-based approach. More importantly, none of these MARL-based approaches scales to a massive number of agents, e.g., thousands, as our approach does. The most scalable MARL platform, to the best of our knowledge, is Zheng et al. (2017), which may handle a comparable scale of agents as ours, but with discrete state-action spaces. This is in contrast to our continuous-space models, which can model practical control systems such as robots and drones.

2 PRELIMINARIES

2.1 CONTROL BARRIER FUNCTIONS AS SAFETY CERTIFICATES

One common approach to a (single-agent) safety certificate is via control barrier functions (Ames et al., 2014), which can enforce the states of a dynamic system to stay in the safe set. Specifically, let S ⊂ R^n be the state space, S_d ⊂ S the dangerous set, and S_s = S \ S_d the safe set, which contains the set of initial conditions S_0 ⊂ S_s. Also define the space of control actions as U ⊂ R^m. For a dynamic system ṡ(t) = f(s(t), u(t)), a control barrier function h : R^n → R satisfies

(∀s ∈ S_0, h(s) ≥ 0) ∧ (∀s ∈ S_d, h(s) < 0) ∧ (∀s ∈ {s | h(s) ≥ 0}, ∇_s h · f(s, u) + α(h) ≥ 0),   (1)

where α(·) is a class-K function, i.e., α(·) is strictly increasing and satisfies α(0) = 0. For a control policy π : S → U and a CBF h, it is proved in Ames et al. (2014) that if s(0) ∈ {s | h(s) ≥ 0} and the three conditions in (1) are satisfied with u = π(s), then s(t) ∈ {s | h(s) ≥ 0} for all t ∈ [0, ∞), which means the state never enters the dangerous set S_d under π.

2.2 SAFETY OF MULTI-AGENT DYNAMIC SYSTEMS
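The three conditions in Eq. (1) lend themselves to hinge-style penalties when h (and later a policy) is learned from sampled states. A toy sketch of such penalties (the margin, the identity class-K function, and the 1-D example h are assumptions for illustration):

```python
import numpy as np

def cbf_losses(h, h_grad, f, pi, s_init, s_dang, s_safe, margin=0.01):
    """Hinge penalties for the three CBF conditions of Eq. (1)."""
    l_init = np.mean(np.maximum(0, margin - h(s_init)))   # h >= 0 on S_0
    l_dang = np.mean(np.maximum(0, margin + h(s_dang)))   # h < 0 on S_d
    # Derivative condition on {h >= 0}: grad_s h . f(s, pi(s)) + alpha(h) >= 0
    alpha = lambda x: x                                   # class-K: identity
    hdot = np.array([h_grad(s) @ f(s, pi(s)) for s in s_safe])
    l_deriv = np.mean(np.maximum(0, margin - (hdot + alpha(h(s_safe)))))
    return l_init, l_dang, l_deriv

# Toy 1-D check with h(s) = 1 - s^2 (safe set [-1, 1]) and dynamics s' = u.
h = lambda s: 1 - s[..., 0] ** 2
h_grad = lambda s: np.array([-2 * s[0]])
f = lambda s, u: u
pi = lambda s: -s                        # drive back towards the origin
s_init = np.array([[0.0], [0.5]])
s_dang = np.array([[2.0], [-1.5]])
s_safe = np.array([[0.3], [-0.8]])
print(all(v == 0 for v in cbf_losses(h, h_grad, f, pi, s_init, s_dang, s_safe)))
```

With h and π parameterized by neural networks, minimizing the sum of these penalties over sampled state batches is one natural route to jointly learning certificate and controller.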
Consider a multi-agent system with N agents, whose joint state at time t is denoted by s(t) = {s_1(t), s_2(t), ..., s_N(t)}, where s_i(t) ∈ S_i ⊂ R^n denotes the state of agent i at time t. The dynamics of agent i are ṡ_i(t) = f_i(s_i(t), u_i(t)), where u_i(t) ∈ U_i ⊂ R^m is the control action of agent i. The overall state space and input space are denoted as S := ⊗_{i=1}^N S_i and U := ⊗_{i=1}^N U_i. For each agent i, we define N_i(t) as the set of its neighborhood agents at time t. Let o_i(t) ∈ R^{n×|N_i(t)|} be the local observation of agent i, i.e., the states of its |N_i(t)| neighborhood agents. Notice that the dimension of o_i(t) is not fixed and depends on the number of neighboring agents. We assume that the safety of agent i is jointly determined by s_i and o_i. Let O_i be the set of all possible observations and X_i := S_i × O_i the state-observation space, which contains the safe set X_{i,s}, the dangerous set X_{i,d}, and the initial conditions X_{i,0} ⊂ X_{i,s}. Let d : X_i → R describe the minimum distance from agent i to the other agents that it observes; d(s_i, o_i) < κ_s implies a collision. Then X_{i,s} = {(s_i, o_i) | d(s_i, o_i) ≥ κ_s} and X_{i,d} = {(s_i, o_i) | d(s_i, o_i) < κ_s}. Let d̄_i : S → R be the lifting of d from X_i to S, which is well-defined since there is a surjection from S to X_i. Then define S_s := {s ∈ S | ∀i = 1, ..., N, d̄_i(s) ≥ κ_s}. The safety of a multi-agent system can be formally defined as follows:

Definition 1 (Safety of Multi-Agent Systems). If the state-observation pair satisfies d(s_i, o_i) ≥ κ_s for agent i at time t, then agent i is safe at time t. If agent i is safe at time t for all i, then the multi-agent system is safe at time t, and s ∈ S_s.

A main objective of this paper is to learn control policies π_i(s_i(t), o_i(t)) for all i such that the multi-agent system is safe. The control policy is decentralized (i.e., each agent has its own control policy and there does not exist a central controller to coordinate all the agents). In this way, our decentralized approach has the hope to scale to a very large number of agents.
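The quantity-permutation invariant architecture mentioned in the contributions can be sketched, in the spirit of DeepSets, as a shared per-neighbour feature map followed by mean pooling over o_i (the linear feature map below is an illustrative stand-in for the paper's actual network):

```python
import numpy as np

def encode_neighbours(o_i, W_phi):
    """Shared feature map applied to each neighbour state, then mean pooling.
    Pooling makes the code invariant to neighbour order and yields a
    fixed-size output for any number of neighbours |N_i(t)|."""
    feats = o_i @ W_phi          # (num_neighbours, feat_dim)
    return feats.mean(axis=0)    # fixed-size code

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))               # hypothetical feature-map weights
obs = rng.normal(size=(5, 4))             # 5 neighbours with 4-dim states
perm = obs[rng.permutation(5)]            # same neighbours, reordered
print(np.allclose(encode_neighbours(obs, W), encode_neighbours(perm, W)))
```

The pooled code can then be fed, together with the agent's own state s_i, into the networks for π_i and h_i, which is what lets a model trained with 8 agents run with 1024.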
This paper extends neural barrier certificate methods from the single-agent setting to decentralized multi-agent reinforcement learning. In particular, it borrows the idea of the control barrier function, which enforces the states of the dynamic system to stay in the safe set. It is known that, in the single-agent setting, if the control barrier function h satisfies equation (1), the agent never enters the dangerous set under policy π. A straightforward way to extend the control barrier function to the MARL setting is to replace the state-action pair by the joint state-action pair in equation (1); however, this may suffer from the exponentially large state and action spaces. Therefore, the authors propose a decentralized setting, where each agent maintains its own control barrier function. To learn the policy and the function h simultaneously, the authors penalize violations of equation (2). Finally, they test the proposed method on Navigation, Predator-Prey, ground robots, and drones, and compare it with several baselines.
SP:e01cbc55e8f7bc90ed66b50387234c154f547e5e
Learning Safe Multi-agent Control with Decentralized Neural Barrier Certificates
1 INTRODUCTION . Machine learning ( ML ) has created unprecedented opportunities for achieving full autonomy . However , learning-based methods in autonomous systems ( AS ) can and do fail due to the lack of formal guarantees and limited generalization capability , which poses significant challenges for developing safety-critical AS , especially large-scale multi-agent AS , that are provably dependable . On the other side , safety certificates ( Chang et al . ( 2019 ) ; Jin et al . ( 2020 ) ; Choi et al . ( 2020 ) ) , which widely exist in control theory and formal methods , serve as proofs for the satisfaction of the desired properties of a system , under certain control policies . For example , once found , a Control Barrier Function ( CBF ) ensures that the closed-loop system always stays inside some safe set ( Wieland & Allgöwer , 2007 ; Ames et al. , 2014 ) with a CBF Quadratic Programming ( QP ) supervisory controller . However , it is extremely difficult to synthesize CBF by hand for complex dynamic systems , which stems a growing interest in learning-based CBF ( Saveriano & Lee , 2020 ; Srinivasan et al. , 2020 ; Jin et al. , 2020 ; Boffi et al. , 2020 ; Taylor et al. , 2020 ; Robey et al. , 2020 ) . However , all of these studies only concern single-agent systems . How to develop learning-based approaches for safe multi-agent control that are both provably dependable and scalable remains open . In multi-agent control , there is a constant dilemma : centralized control strategies can hardly scale to a large number of agents , while decentralized control without coordination often misses safety and performance guarantees . In this work , we propose a novel learning framework that jointly designs multi-agent control policies and safety certificate from data , which can be implemented in a decentralized fashion and scalable to an arbitrary number of agents . 
Specifically , we first introduce the notion of decentralized CBF as safety certificates , then propose the framework of learning decentralized CBF , with generalization error guarantees . The decentralized CBF can be seen as a 1https : //realm.mit.edu/blog/learning-safe-multi-agent-control-decentralized-neural-barrier-certificates contract among agents , which allows agents to learn a mutual agreement with each other on how to avoid collisions . Once such a controller is achieved through the joint-learning framework , it can be applied on an arbitrarily number of agents and in scenarios that are different from the training scenarios , which resolves the fundamental scalability issue in multi-agent control . We also propose several effective techniques in Section 4 to make such a learning process even more scalable and practical , which are then validated extensively in Section 5 . Experimental results are indeed promising . We study both 2D and 3D safe multi-agent control problems , each with several distinct environments and complex nonholonomic dynamics . Our jointlearning framework performs exceptionally well : our control policies trained on scenarios with 8 agents can be used on up to 1024 agents while maintaining low collision rates , which has notably pushed the boundary of learning-based safe multi-agent control . Speaking of which , 1024 is not the limit of our approach but rather due to the limited computational capability of our laptop used for the experiments . We also compare our approach with both leading learning-based methods ( Lowe et al. , 2017 ; Zhang & Bastani , 2019 ; Liu et al. , 2020 ) and traditional planning methods ( Ma et al. , 2019 ; Fan et al. , 2020 ) . Our approach outperforms all the other approaches in terms of both completing the tasks and maintaining safety . Contributions . 
Our main contributions are three-fold : 1 ) We propose the first framework to jointly learning safe multi-agent control policies and CBF certificates , in a decentralized fashion . 2 ) We present several techniques that make the learning framework more effective and scalable for practical multi-agent systems , including the use of quantity-permutation invariant neural network architectures in learning to handle the permutation of neighbouring agents . 3 ) We demonstrate via extensive experiments that our method significantly outperforms other leading methods , and has exceptional generalization capability to unseen scenarios and an arbitrary number of agents , even in quite complex multi-agent environments such as ground robots and drones . The video that demonstrates the outstanding performance of our method can be found in the supplementary material . Related Work . Learning-Based Safe Control via CBF . Barrier certificates ( Prajna et al. , 2007 ) and CBF ( Wieland & Allgöwer , 2007 ) is a well-known effective tool for guaranteeing the safety of nonlinear dynamic systems . However , the existing methods for constructing CBFs either rely on specific problem structures ( Chen et al. , 2017b ) or do not scale well ( Mitchell et al. , 2005 ) . Recently , there has been an increasing interest in learning-based and data-driven safe control via CBFs , which primarily consist of two categories : learning CBFs from data ( Saveriano & Lee , 2020 ; Srinivasan et al. , 2020 ; Jin et al. , 2020 ; Boffi et al. , 2020 ) , and CBF-based approach for controlling unknown systems ( Wang et al. , 2017 ; 2018 ; Cheng et al. , 2019 ; Taylor et al. , 2020 ) . Our work is more pertinent to the former and is complementary to the latter , which usually assumes that the CBF is provided . None of these learning-enabled approaches , however , has addressed the multi-agent setting . Multi-Agent Safety Certificates and Collision Avoidance . 
For holonomic systems, safety in multi-agent settings has been guaranteed by limiting the velocities of the agents (Van den Berg et al., 2008; Alonso-Mora et al., 2013). Later, Borrmann et al. (2015) and Wang et al. (2017) proposed the framework of multi-agent CBF to generate collision-free controllers, with either perfectly known system dynamics (Borrmann et al., 2015) or worst-case uncertainty bounds (Wang et al., 2017). Recently, Chen et al. (2020) proposed a decentralized controller synthesis approach under this CBF framework, which is scalable to an arbitrary number of agents. However, in Chen et al. (2020) the CBF controller relies on online integration of the dynamics under the backup strategy, which can be computationally challenging for complex systems. Due to space limits, we omit other non-learning multi-agent control methods but acknowledge their importance. Safe Multi-Agent (Reinforcement) Learning (MARL). Safety concerns have drawn increasing attention in MARL, especially with applications to safety-critical multi-agent systems (Zhang & Bastani, 2019; Qie et al., 2019; Shalev-Shwartz et al., 2016). Under the CBF framework, Cheng et al. (2020) considered the setting with unknown system dynamics and proposed to design robust multi-agent CBFs based on the learned dynamics. This mirrors the second category mentioned above in single-agent learning-based safe control, which is orthogonal to our focus. RL approaches have also been applied to multi-agent collision avoidance (Chen et al., 2017a; Lowe et al., 2017; Everett et al., 2018; Zhang et al., 2018). Nonetheless, no formal guarantees of safety were established in these works.
One exception is Zhang & Bastani (2019), which proposed a multi-agent model predictive shielding algorithm that provably guarantees safety for any policy learned from MARL; this differs from our multi-agent CBF-based approach. More importantly, none of these MARL-based approaches scales to a massive number of agents, e.g., thousands, as our approach does. The most scalable MARL platform, to the best of our knowledge, is Zheng et al. (2017), which may handle a comparable number of agents to ours, but with discrete state-action spaces. This is in contrast to our continuous-space models, which can represent practical control systems such as robots and drones. 2 PRELIMINARIES. 2.1 CONTROL BARRIER FUNCTIONS AS SAFETY CERTIFICATES. One common approach to (single-agent) safety certificates is via control barrier functions (Ames et al., 2014), which can enforce the states of a dynamic system to stay in the safe set. Specifically, let S ⊂ Rn be the state space, Sd ⊂ S the dangerous set, and Ss = S\Sd the safe set, which contains the set of initial conditions S0 ⊂ Ss. Also define the space of control actions as U ⊂ Rm. For a dynamic system ṡ(t) = f(s(t), u(t)), a control barrier function h : Rn → R satisfies: (∀s ∈ S0, h(s) ≥ 0) ∧ (∀s ∈ Sd, h(s) < 0) ∧ (∀s ∈ {s | h(s) ≥ 0}, ∇sh · f(s, u) + α(h) ≥ 0), (1) where α(·) is a class-K function, i.e., α(·) is strictly increasing and satisfies α(0) = 0. For a control policy π : S → U and CBF h, it is proved in Ames et al. (2014) that if s(0) ∈ {s | h(s) ≥ 0} and the three conditions in (1) are satisfied with u = π(s), then s(t) ∈ {s | h(s) ≥ 0} for all t ∈ [0, ∞), which means the state never enters the dangerous set Sd under π. 2.2 SAFETY OF MULTI-AGENT DYNAMIC SYSTEMS.
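The three conditions in (1) can be checked numerically on a toy system. Below is a minimal sketch for an illustrative 1D integrator ṡ = u with h(s) = 1 − s², policy π(s) = −s, and class-K function α(h) = h; all of these choices are assumptions for the example, not from the paper:

```python
import numpy as np

# Toy 1D system: s_dot = u, safe region {|s| <= 1}.
# Hypothetical choices (not from the paper): h(s) = 1 - s^2,
# controller pi(s) = -s, class-K function alpha(h) = h.
h = lambda s: 1.0 - s**2
grad_h = lambda s: -2.0 * s
f = lambda s, u: u            # dynamics s_dot = f(s, u) = u
pi = lambda s: -s             # candidate control policy
alpha = lambda v: v           # strictly increasing, alpha(0) = 0

s_init = np.linspace(-0.5, 0.5, 101)       # initial set S0
s_danger = np.hstack([np.linspace(-2, -1.01, 50),
                      np.linspace(1.01, 2, 50)])  # dangerous set Sd
s_safe = np.linspace(-1.0, 1.0, 201)       # {s | h(s) >= 0}

cond1 = np.all(h(s_init) >= 0)             # h >= 0 on S0
cond2 = np.all(h(s_danger) < 0)            # h < 0 on Sd
lie = grad_h(s_safe) * f(s_safe, pi(s_safe))   # grad_h . f(s, pi(s))
cond3 = np.all(lie + alpha(h(s_safe)) >= 0)    # invariance condition

print(cond1, cond2, cond3)  # True True True
```

Here condition 3 holds because ∇h·f + α(h) = 2s² + (1 − s²) = 1 + s² ≥ 0 on the whole safe set, so the CBF certifies forward invariance of {s | h(s) ≥ 0} under π.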
Consider a multi-agent system with N agents, whose joint state at time t is denoted by s(t) = {s1(t), s2(t), ..., sN(t)}, where si(t) ∈ Si ⊂ Rn denotes the state of agent i at time t. The dynamics of agent i are ṡi(t) = fi(si(t), ui(t)), where ui(t) ∈ Ui ⊂ Rm is the control action of agent i. The overall state space and input space are denoted by S := ⊗_{i=1}^{N} Si and U := ⊗_{i=1}^{N} Ui. For each agent i, we define Ni(t) as the set of its neighbourhood agents at time t. Let oi(t) ∈ Rn×|Ni(t)| be the local observation of agent i, namely the states of its |Ni(t)| neighbourhood agents. Notice that the dimension of oi(t) is not fixed and depends on the number of neighbouring agents. We assume that the safety of agent i is jointly determined by si and oi. Let Oi be the set of all possible observations and Xi := Si × Oi be the state-observation space, which contains the safe set Xi,s, the dangerous set Xi,d, and the initial conditions Xi,0 ⊂ Xi,s. Let d : Xi → R denote the minimum distance from agent i to the other agents that it observes; d(si, oi) < κs implies a collision. Then Xi,s = {(si, oi) | d(si, oi) ≥ κs} and Xi,d = {(si, oi) | d(si, oi) < κs}. Let d̄i : S → R be the lifting of d from Xi to S, which is well-defined since there is a surjection from S to Xi. Then define Ss := {s ∈ S | ∀i = 1, ..., N, d̄i(s) ≥ κs}. The safety of a multi-agent system can be formally defined as follows: Definition 1 (Safety of Multi-Agent Systems). If the state-observation pair satisfies d(si, oi) ≥ κs for agent i at time t, then agent i is safe at time t. If agent i is safe at time t for all i, then the multi-agent system is safe at time t, and s ∈ Ss. A main objective of this paper is to learn the control policy πi(si(t), oi(t)) for all i such that the multi-agent system is safe. The control policy is decentralized (i.e.
, each agent has its own control policy and no central controller coordinates all the agents). In this way, our decentralized approach can hope to scale to a very large number of agents.
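Definition 1 can be made concrete with a short sketch; the sensing radius, safety margin κs, and agent positions below are illustrative assumptions:

```python
import numpy as np

kappa_s = 0.5   # minimum allowed inter-agent distance (illustrative)
sense_r = 2.0   # agents observe neighbours within this radius (illustrative)

def observation(i, states):
    """o_i: states of the neighbours of agent i (variable size |N_i|)."""
    dists = np.linalg.norm(states - states[i], axis=1)
    mask = (dists > 0) & (dists <= sense_r)   # exclude agent i itself
    return states[mask]

def d(s_i, o_i):
    """Minimum distance from agent i to the agents it observes."""
    if len(o_i) == 0:
        return np.inf
    return np.linalg.norm(o_i - s_i, axis=1).min()

def system_is_safe(states):
    # s in S_s  iff  d(s_i, o_i) >= kappa_s for every agent i
    return all(d(states[i], observation(i, states)) >= kappa_s
               for i in range(len(states)))

states = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(system_is_safe(states))   # True: all pairwise gaps >= kappa_s
states[1] = [0.3, 0.0]          # move agent 1 too close to agent 0
print(system_is_safe(states))   # False: d < kappa_s, a collision
```

Note that, matching the text, each safety check uses only the local pair (si, oi), so it can be evaluated by each agent without a central coordinator.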
This paper brings the recently introduced idea of control barrier functions (CBF) as safety certificates to the multi-agent setting. To this end, it introduces the idea of decentralized CBFs. This is followed by a framework for jointly learning the CBFs and policies with a PointNet-inspired network architecture. The paper also provides generalization guarantees for the approach. Finally, the approach is compared against various learning- and planning-based baselines, which it significantly outperforms.
Two steps at a time --- taking GAN training in stride with Tseng's method
1 INTRODUCTION. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have proven to be a powerful class of generative models, producing, for example, novel realistic images. Two neural networks, called generator and discriminator, compete against each other in a game. In the special case of a zero-sum game, this task can be formulated as a minimax (aka saddle point) problem. Conventionally, GANs are trained using variants of (stochastic) Gradient Descent Ascent (GDA), which are known to exhibit oscillatory behavior and thus fail to converge even for simple bilinear saddle point problems; see Goodfellow (2016). We therefore propose the use of methods with provable convergence guarantees for (stochastic) convex-concave minimax problems, even though GANs are well known not to satisfy these properties. Along similar lines, an adaptation of the Extragradient method (EG) (Korpelevich, 1976) for the training of GANs was suggested in Gidel et al. (2019), whereas Daskalakis et al. (2018); Daskalakis & Panageas (2018); Liang & Stokes (2019) studied Optimistic Gradient Descent Ascent (OGDA) based on optimistic mirror descent (Rakhlin & Sridharan, 2013a;b). We, however, investigate the Forward-Backward-Forward (FBF) method (Tseng, 1991) from monotone operator theory, which uses two gradient evaluations per update, similar to EG, in order to circumvent the aforementioned issues. Instead of trying to improve GAN performance via new architectures, loss functions, etc., we contribute to the theoretical foundation of their training from the point of view of optimization. Contribution. Establishing the connection between GAN training and monotone inclusions motivates the use of the FBF method, originally designed to solve this type of problem. This approach allows us to naturally extend the constrained setting to a regularized one by making use of the proximal operator.
We also propose a variant of FBF that reuses previous gradients to reduce the computational cost per iteration, which turns out to be a known method related to OGDA. By developing a unifying scheme that captures FBF and a generalization of OGDA, we reveal a hitherto unknown connection. Using this approach, we prove novel non-asymptotic convergence statements in terms of the minimax gap for both methods in the context of saddle point problems. In the deterministic and stochastic settings we obtain rates of O(1/k) and O(1/√k), respectively. Finally, we highlight the relevance of our proposed method as well as the role of regularizers by showing empirical improvements in the training of Wasserstein GANs on the CIFAR10 dataset. Organization. This paper is structured as follows. In Section 2 we highlight the connection between GAN training and monotone inclusions and give an extensive review of methods with convergence guarantees for the latter. The main results, as well as a precise definition of the measure of optimality, are discussed in Section 3. Finally, Section 4 illustrates the empirical performance on the training of GANs as well as on solving bilinear problems. 2 GAN TRAINING AS MONOTONE INCLUSION. The GAN objective was originally cast as a two-player zero-sum game between the discriminator Dy and the generator Gx (Goodfellow et al., 2014), given by min_x max_y Eρ∼q[log(Dy(ρ))] + Eζ∼p[log(1 − Dy(Gx(ζ)))], exhibiting the aforementioned minimax structure. Due to problems with vanishing gradients in the training of such models, a successful alternative formulation called Wasserstein GAN (WGAN) (Arjovsky et al., 2017) has been proposed. In this case the minimization tries to reduce the Wasserstein distance between the true distribution q and the one learned by the generator.
Reformulating this distance via the Kantorovich Rubinstein duality leads to an inner maximization over 1-Lipschitz functions which are approximated via neural networks , yielding the saddle point problem min x max y : ‖Dy‖Lip≤1 Eρ∼q [ Dy ( ρ ) ] − Eζ∼p [ Dy ( Gx ( ζ ) ) ] . 2.1 CONVEX-CONCAVE MINIMAX PROBLEMS . Due to the observations made in the previous paragraph we study the following abstract minimax problem min x∈Rd max y∈Rn Ψ ( x , y ) : = f ( x ) + Eξ∼Q [ Φ ( x , y ; ξ ) ] − h ( y ) , ( 1 ) where the convex-concave coupling function Φ ( x , y ) : = Eξ∼Q [ Φ ( x , y ; ξ ) ] , which hides the stochasticity for ease of notation , is differentiable with L-Lipschitz continuous gradient . The proper , convex and lower semicontinuous functions f : Rd → R ∪ { +∞ } and h : Rn → R ∪ { +∞ } act as regularizers . A solution of ( 1 ) is given by a so-called saddle point ( x∗ , y∗ ) fulfilling for all x and y Ψ ( x∗ , y ) ≤ Ψ ( x∗ , y∗ ) ≤ Ψ ( x , y∗ ) . In the context of two-player games this corresponds to a pair of strategies , where no player can be better off by changing just their own strategy . For the purpose of this motivating section , we will restrict ourselves for now to the special case of the deterministic constrained version of ( 1 ) , given by min x∈X max y∈Y Φ ( x , y ) , where f and h are given by indicator functions of closed convex sets X and Y , respectively . The indicator function δC of a set C is defined as δC ( z ) = 0 for z ∈ C and δC ( z ) = +∞ otherwise . 2.2 MINIMAX PROBLEMS AS MONOTONE INCLUSIONS . If the coupling function Φ is convex-concave and differentiable then solving ( 1 ) is equivalent to solving the first order optimality conditions which can be written as a so-called monotone inclusion with w = ( x , y ) ∈ Rm and m = d+ n , given by 0 ∈ F ( w ) +NΩ ( w ) . ( 2 ) The entities involved are F ( x , y ) : = ( ∇xΦ ( x , y ) , −∇yΦ ( x , y ) ) , ( 3 ) and the normal cone NΩ of the convex set Ω : = X × Y . 
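For intuition about the operator F in (3), consider the illustrative bilinear coupling Φ(x, y) = x·y, for which F(x, y) = (y, −x). The sketch below checks monotonicity, ⟨F(w) − F(w′), w − w′⟩ ≥ 0, on random pairs; for bilinear couplings the inner product is exactly zero, which is precisely why plain gradient methods cycle:

```python
import numpy as np

# Saddle operator from (3) for the illustrative coupling Phi(x, y) = x * y:
# F(x, y) = (grad_x Phi, -grad_y Phi) = (y, -x).
def F(w):
    x, y = w
    return np.array([y, -x])

rng = np.random.default_rng(0)
for _ in range(1000):
    w, wp = rng.normal(size=2), rng.normal(size=2)
    gap = np.dot(F(w) - F(wp), w - wp)
    # monotonicity holds, here with equality for the bilinear case
    assert abs(gap) < 1e-12
print("monotone (with equality) on all sampled pairs")
```

With zero "rotational damping" in F, GDA trajectories orbit the solution instead of approaching it, motivating the two-evaluation schemes discussed next.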
The normal cone mapping is given by NΩ ( w ) = { v ∈ Rm : 〈v , w′ − w〉 ≤ 0 ∀w′ ∈ Ω } , for w ∈ Ω and NΩ ( w ) = ∅ for w /∈ Ω . Here , the operators F and NΩ satisfy well known properties from convex analysis ( Bauschke & Combettes , 2011 ) , in particular the first one is monotone ( and Lipschitz if ∇Φ is so ) whereas the latter one is maximal monotone . We call a , possibly set-valued , operator A from Rm to itself monotone if 〈u− u′ , z − z′〉 ≥ 0 ∀u ∈ A ( z ) , u′ ∈ A ( z′ ) . We say A is maximal monotone , if there exists no monotone operator A′ such that the graph of A is properly contained in the graph of A′ . Problems of type ( 2 ) have been studied thoroughly in convex optimization , with the most established solution methods being Extragradient ( Korpelevich , 1976 ) and Forward-Backward-Forward ( Tseng , 1991 ) . Both methods are known to generate sequences of iterates converging to a solution of ( 2 ) . Note that in the unconstrained setting ( i.e . if Ω is the entire space ) both of these algorithms even produce the same iterates . 2.3 SOLVING MONOTONE INCLUSIONS . The connection between monotone inclusions and saddle point problems is of course not new . The application of Extragradient ( EG ) to minimax problems has been studied in the seminal paper Nemirovski ( 2004 ) under the name of Mirror Prox and a convergence rate of O ( 1/k ) in terms of the function values has been proven . Even a stochastic version of the Mirror Prox algorithm has been studied in Juditsky et al . ( 2011 ) with a convergence rate of O ( 1/√k ) . Applied to problem ( 2 ) , with PΩ being the projection onto Ω , it iterates EG : ⌊ wk = PΩ [ zk − αkF ( zk ) ] zk+1 = PΩ [ zk − αkF ( wk ) ] . The Forward-Backward-Forward ( FBF ) method , introduced in Tseng ( 1991 ) , has not been studied rigorously for minimax problems in terms of function values yet , despite promising applications in Boţ et al . 
(2020) and its advantage of requiring only one projection, whereas EG needs two. It is given by FBF: ⌊ wk = PΩ[zk − αkF(zk)] zk+1 = wk + αk(F(zk) − F(wk)). (4) Both EG and FBF have the “disadvantage” of needing two gradient evaluations per iteration. A possible remedy, suggested in Gidel et al. (2019) for EG under the name of extrapolation from the past, is to recycle previous gradients. In a similar fashion we consider FBFp: ⌊ wk = PΩ[zk − αkF(wk−1)] zk+1 = wk + αk(F(wk−1) − F(wk)), (5) where we replaced F(zk) by F(wk−1) twice in (4). As a matter of fact, the above method can be written exclusively in terms of the first variable wk by incrementing the index k in the first update and then substituting into the second line. This results in wk+1 = PΩ[wk − αk+1F(wk) + αk(F(wk−1) − F(wk))]. (6) In this way we rediscover a known method which was studied in Malitsky & Tam (2020) for general monotone inclusions under the name of forward-reflected-backward. It reduces to optimistic mirror descent (Rakhlin & Sridharan, 2013a;b) in the unconstrained case with constant step size αk = α, giving wk+1 = wk − α(2F(wk) − F(wk−1)), (7) which has been proposed for the training of GANs under the name of Optimistic Gradient Descent Ascent (OGDA); see Daskalakis et al. (2018); Daskalakis & Panageas (2018); Liang & Stokes (2019). All of the above methods and extensions rely solely on the monotone operator formulation of the saddle point problem, where the two components x and y play a symmetric role. Taking the special minimax structure into consideration, Hamedani & Aybat (2018) showed convergence of a method that uses an optimistic step (7) in one component and a regular gradient step in the other, thus requiring the storage of fewer past gradients in comparison to (6).
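The updates above can be compared directly on the classic bilinear toy problem min_x max_y x·y, where F(x, y) = (y, −x) and plain GDA is known to spiral away from the saddle point (0, 0). A minimal sketch (the step size and iteration count are illustrative choices):

```python
import numpy as np

def F(w):
    # saddle operator of Phi(x, y) = x * y: F(x, y) = (y, -x)
    return np.array([w[1], -w[0]])

alpha, K = 0.1, 1000          # illustrative step size / iteration count
w0 = np.array([1.0, 1.0])

# plain GDA: w_{k+1} = w_k - alpha F(w_k)  -> spirals outwards
w = w0.copy()
for _ in range(K):
    w = w - alpha * F(w)
gda_norm = np.linalg.norm(w)

# EG (identical to FBF (4) here, since the problem is unconstrained)
z = w0.copy()
for _ in range(K):
    w_half = z - alpha * F(z)
    z = z - alpha * F(w_half)
eg_norm = np.linalg.norm(z)

# OGDA (7): w_{k+1} = w_k - alpha * (2 F(w_k) - F(w_{k-1}))
w_prev, w = w0.copy(), w0.copy()
for _ in range(K):
    w, w_prev = w - alpha * (2 * F(w) - F(w_prev)), w
og_norm = np.linalg.norm(w)

print(gda_norm > 1.0, eg_norm < 0.1, og_norm < 0.1)  # True True True
```

With α = 0.1, GDA's iterates grow by a factor √(1 + α²) per step, while EG/FBF and OGDA contract towards the saddle point, consistent with the bilinear experiments mentioned in Section 4.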
On the downside, however, reducing the number of required gradient evaluations per iteration also reduces the largest admissible step size from 1/L (see Korpelevich (1976) or Section 3) to 1/(2L) (see Gidel et al. (2019); Malitsky & Tam (2020); Malitsky (2015) or Section 3). To summarize, the number of required gradient evaluations is halved, but so is the step size, resulting in no clear net gain.
The authors of this paper, inspired by the applications of min-max optimization in GANs, study the problem of min-max optimization for convex-concave functions. The main contribution of the paper is proving novel convergence results for the Forward-Backward-Forward (FBF) algorithm as well as Optimistic Gradient Descent Ascent (OGDA), based on tools from monotone inclusion problems. The convergence results cover both deterministic and stochastic settings, and the rates of convergence for a suitably chosen gap function are non-asymptotic. Finally, they apply the algorithms both to toy problems and to training GANs on CIFAR-10.
This work studies minimax optimization (a.k.a. saddle-point problems) with nonsmooth regularizers. By leveraging monotone operator theory, the authors propose to use the forward-backward-forward method so as to avoid the notorious limit-cycling problem. Since the classical FBF method requires two gradient evaluations per step, the authors introduce a new algorithm that reuses the past gradient in the same way as OGDA. In the setting of convex-concave minimax optimization, the authors claim to prove novel convergence rates for both methods.
Convolutional Neural Networks are not invariant to translation, but they can learn to be
1 INTRODUCTION. The equivalence of an object across different viewpoints is considered a fundamental capacity of human visual recognition (Hummel, 2002). This is mediated by the inferior temporal cortex, which appears to provide the bases for scale, translation, and rotation invariance (Tanaka, 1996; O'Reilly & Munakata, 2019). Taking inspiration from biological models (LeCun et al., 1998), Artificial Neural Networks have been endowed with convolution and pooling operations (LeCun et al., 1998; 1990). It is often claimed that Convolutional Neural Networks (CNNs) are less susceptible to irrelevant sources of variation such as image translation, scaling, and other small deformations (Gens & Domingos, 2014; Xu et al., 2014; LeCun & Bengio, 1995; Fukushima, 1980). While it is difficult to overstate the importance of convolution and pooling operations in deep learning, their ability to make a network invariant to image transformations has been overestimated: for example, Gong et al. (2014) showed that CNNs achieve neither rotation nor scale invariance. Similarly, multiple studies have reported highly limited translation invariance (Kauderer-Abrams, 2017; Gong et al., 2014; Azulay & Weiss, 2019; Chen et al., 2017; Blything et al., 2020). It is important to understand the reasons for the misconception regarding the ability of CNNs to be invariant to translation. We believe it is due to two misunderstandings. Firstly, it is commonly assumed that CNNs are 'architecturally' invariant to translation (that is, that the invariance is built into the architecture through pooling and/or convolution). For example: “[CNNs] have an architecture hard-wired for some translation-invariance while they rely heavily on learning through extensive data or data augmentation for invariance to other transformations” (Han et al.
, 2020), and “Most deep learning networks make heavy use of a technique called convolution (LeCun, 1989), which constrains the neural connections in the network such that they innately capture a property known as translational invariance. This is essentially the idea that an object can slide around an image while maintaining its identity; a circle in the top left can be presumed (even absent direct experience) to be the same as a circle in the bottom right.” (Marcus, 2018); see also LeCun & Bengio (1995); Gens & Domingos (2014); Xu et al. (2014); Marcos et al. (2016). In fact, the convolution operation is translationally equivariant, not invariant, meaning that a transformation applied to the input is transferred to the output (Lenc & Vedaldi, 2019). Even when this point is made, such as in LeCun & Bengio (1995), it is still assumed that the equivariance is enough to support an important degree of translation invariance. For example, LeCun & Bengio (1995) write: “Once a feature has been detected its exact location becomes less important as long as its approximate position relative to other features is preserved”. As a matter of fact, equivariance and invariance are mutually exclusive functions (a representation cannot support both), and accordingly, any invariance supported by a network must be coded into the fully connected part rather than in the equivariant convolutional layers. Moreover, perfect equivariance can be lost in the convolutional layers (Azulay & Weiss, 2019; Zhang, 2019) through subsequent sub-sampling (implemented with pooling and striding operations, commonly used in almost any CNN). Therefore, overall, most modern CNNs are neither architecturally invariant nor perfectly equivariant to translation.
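The equivariance-versus-invariance distinction can be verified numerically in a few lines. The sketch below (illustrative, not from the paper) checks that a 1D convolution commutes with circular shifts, while stride-2 subsampling after the convolution does not:

```python
import numpy as np

def conv1d(x, k):
    # 'valid' cross-correlation, the convolution used in CNNs
    return np.array([np.dot(x[i:i + len(k)], k)
                     for i in range(len(x) - len(k) + 1)])

def shift(x, t):
    return np.roll(x, t)   # circular shift by t positions

rng = np.random.default_rng(0)
x = rng.normal(size=32)    # toy 1D "image"
k = rng.normal(size=3)     # toy filter

# equivariance, checked on the interior to ignore boundary wrap:
a = conv1d(shift(x, 1), k)[1:-1]        # conv of the shifted input
b = shift(conv1d(x, k), 1)[1:-1]        # shifted conv of the input
print(np.allclose(a, b))                # True: conv commutes with shifts

# stride-2 subsampling after the conv destroys this property:
sa = conv1d(shift(x, 1), k)[::2]        # shift, conv, then subsample
sb = shift(conv1d(x, k)[::2], 1)        # conv, subsample, then shift
print(np.allclose(sa[1:-1], sb[1:-1]))  # False for an odd shift
```

The second check illustrates the sub-sampling point above: an odd shift of the input lands the feature map on a different stride phase, so no shift of the pooled output reproduces it.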
The other reason for overestimating the extent to which CNNs are invariant to translation resides in the failure to distinguish between trained and online translation invariance (see Bowers et al. 2016). Trained invariance refers to the ability to correctly classify unseen instances of a class at trained locations. For instance, a network trained across the whole visual field to identify instances of dogs will be able to identify a new image of a dog at multiple locations. This feature is obtained by data augmentation: jittering the training samples so that the network is trained on items across different locations (Kauderer-Abrams, 2017; Furukawa, 2017). However, this should not be considered a form of translation invariance, as it is simply a case of identifying a test image (a novel image of a dog) at a trained location. More interesting is the concept of 'online' translation invariance: learning to identify an object at one location immediately affords the capacity to identify that object at multiple other locations1. Online translation invariance is generally measured by training a network on images placed at a certain location (generally the center of a canvas), and then testing with the same images placed at untrained locations. In many reports CNNs performed at chance level on untrained locations (Kauderer-Abrams, 2017; Gong et al., 2014; Azulay & Weiss, 2019; Chen et al., 2017; Blything et al., 2020). This problem has been tackled with several architectural changes: Sundaramoorthi & Wang (2019) suggested a solution based on a Gaussian-Hermite basis; Bruna & Mallat (2012) used a wavelet scattering network model; Jaderberg et al. (2015) added a new module that can account for any affine transformation; Blything et al. (2020) used Global Average Pooling.
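The train/test protocol for measuring online translation invariance described above can be sketched as follows; the canvas and item sizes, and the particular untrained locations, are illustrative assumptions:

```python
import numpy as np

CANVAS, ITEM = 64, 28   # illustrative canvas and item sizes

def place(item, top, left):
    """Embed a small item image on a blank canvas at (top, left)."""
    canvas = np.zeros((CANVAS, CANVAS), dtype=item.dtype)
    canvas[top:top + ITEM, left:left + ITEM] = item
    return canvas

rng = np.random.default_rng(0)
digit = rng.random((ITEM, ITEM))   # stand-in for one training item

c = (CANVAS - ITEM) // 2
train_img = place(digit, c, c)            # trained location: the centre
test_imgs = [place(digit, t, l)           # untrained locations: corners
             for (t, l) in [(0, 0), (0, 36), (36, 0), (36, 36)]]

# the item is pixel-identical everywhere; only its position varies
assert all(np.isclose(img.sum(), train_img.sum()) for img in test_imgs)
print(len(test_imgs), "translated copies of one training item")
```

A network is then trained only on the centred placements and evaluated on the corner placements: chance-level test accuracy indicates no online translation invariance, while high accuracy indicates it.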
Without having to resort to new architectures , two works have recently contrasted the previous findings , obtaining a high degree of online translation invariance : Han et al . ( 2020 ) found that a CNN exhibited an almost perfect online translation invariance on a Korean Characters recognition task , but it did not compare these results with the previous literature . Blything et al . ( 2020 ) also found perfect online translation invariance when a VGG16 network was pretrained on ImageNet but found almost no invariance when the same ( vanilla ) network was untrained . This latter finding explains Han et al . results , as they also used a pretrained network when assessing online translation invariance . Together these results hint to the fact that translationally invariant representations do not need to be built inside the network architecture , but can be learned . In the current work , we further explore this idea . 2 CURRENT WORK . In this work we focus on ‘ online ’ translation invariance on a classic CNN , using VGG16 ( Simonyan & Zisserman , 2014 ) as a typical convolutional network . We show how , even though classic CNNs are not ‘ architectural ’ invariance , they can ‘ learn ’ to be invariant to translation by extracting latent features of their visual environmenta ( the dataset ) . Learning is used in the sense that the invariance is coded within the network weights , optimized through backpropagation , and not hard-wired in the network architecture , such as Sundaramoorthi & Wang ( 2019 ) or Bruna & Mallat ( 2012 ) . 
1 ‘ Trained translation invariance ’ in which images can be identified across the canvas after training exemplar images across many locations is not to be confused with our approach of ‘ training ’ translation invariance , in which a network is trained to exhibit ‘ online ’ translation invariance We trained on environments in which the key characteristic was that items ’ categories were independent on their position ( for CNNs learning the categories based on their position , see Semih Kayhan & van Gemert 2020 ) . Our main contribution is finding that by pretraining on such environments , CNNs would indeed learn to be invariant to translation . Why is this important ? First , it is important to know how CNNs work , and there is currently confusion about how and when CNNs support translation invariance . Second , a network that learns deep characteristic of its visual environment such as being invariant to translation , rotation , etc. , is able to accelerate subsequent training , and accordingly , it is important to understand the conditions that foster invariance . In addition , because CNNs have been recently suggested as a model for the human brain ( Richards et al. , 2019 ; Ma & Peters , 2020 ; Kriegeskorte , 2015 ; Zhuang et al. , 2020 ) , it is important to understand if and how they can learn fundamental perceptual properties of human vision , of which invariance to translation is one ( Blything et al. , 2020 ; Bowers et al. , 2016 ; Koffka , 2013 ) . 3 OVERVIEW OF THE EXPERIMENTS . Blything et al . ( 2020 ) found that using a VGG16 network pretrained on ImageNet would result in an almost perfect translation invariance on a different dataset in which items were trained only on one location . They compared these results with a vanilla network , that is a non-pretrained network , which showed a lack of translation invariance . Here we replicate these findings with a wider variety of datasets ( Section 3.2 ) . 
We then show that it is possible to obtain similar results using much simpler artificial datasets in which objects were fully-translated across the canvas , but with some limitations due to the difference between the pretraining and the fine-tuning datasets ( Section 3.3 ) . As a sanity check , we showed that pretraining on the whole canvas is not enough to acquire translation invariance , but the network must be pretrained on fully-translated objects ( Section 3.4 ) . Next , we report studies that assess translation invariance learned from partial information in the environment , that is , whether invariance can be generalized to the whole visual field when the network was only trained on one area of the visual field , and whether invariance extends to all classes when only a subset of classes was trained at all locations ( Section 3.5 ) . We found that generalization failed in these cases . Finally , we report a cosine similarity analysis that provides some insight into the learned internal network representations ( Section 3.6 ) , and shows that the poor performance on some conditions in Section 3.3 was likely due to catastrophic forgetting/interference . 3.1 DATASETS . We used six datasets spanning a high range of complexity . From the more complex to the less complex2 , we used : EMNIST ( Cohen et al. , 2017 ) ; FashionMNIST ( FMNIST ) , from Xiao et al . ( 2017 ) , Kuzushiji MNIST ( KMNIST ) , from Clanuwat et al . ( 2018 ) ; MNIST , from ( LeCun et al. , 1998 ) ; and two versions of the Leek dataset used in Blything et al . : one contained only 10 images instead of 24 of the original dataset ( Leek10 ) . The other contained only two images from the original dataset ( Leek2 ) , disjointed from the images used for Leek10 . Representative examples ( and , for Leek10 and Leek2 , the entire datasets ) are shown in Figure 1B . We did not apply any data-augmentation to the datasets ( apart translating the items on the canvas , as explained below ) .
This paper analyzes translation invariance in convolutional neural networks. It argues that CNNs are typically claimed to be translation invariant because of the convolution operation, whereas convolutions are actually equivariant; pooling is the operation that provides local invariance (or global invariance when pooling spans all locations), yet it is usually left out of such descriptions. One neural network, VGG-16, is used for the analysis in different scenarios: 1) pre-trained on ImageNet and fine-tuned on the new dataset at one location; 2) trained from scratch on the new dataset at one location; 3) trained from scratch on the new datasets at all locations of the canvas and tested on the other datasets. The main conclusion of the paper is that CNNs are not invariant to translation by design of the architecture, but that when pre-trained on naturalistic images, they can be.
SP:d038f28f90bdf4625c61eefe75dd062ebf583fc8
Convolutional Neural Networks are not invariant to translation, but they can learn to be
1 INTRODUCTION . The equivalence of an object across different viewpoints is considered a fundamental capacity of human visual recognition ( Hummel , 2002 ) . This is mediated by the inferior temporal cortex , which appears to provide the basis for scale , translation , and rotation invariance ( Tanaka , 1996 ; O ’ Reilly & Munakata , 2019 ) . Taking inspiration from biological models ( LeCun et al. , 1998 ) , artificial neural networks have been endowed with convolution and pooling operations ( LeCun et al. , 1998 ; 1990 ) . It is often claimed that Convolutional Neural Networks ( CNNs ) are less susceptible to irrelevant sources of variation such as image translation , scaling , and other small deformations ( Gens & Domingos , 2014 ; Xu et al. , 2014 ; LeCun & Bengio , 1995 ; Fukushima , 1980 ) . While it is difficult to overstate the importance of convolution and pooling operations in deep learning , their ability to make a network invariant to image transformations has been overestimated : for example , Gong et al . ( 2014 ) showed that CNNs achieve neither rotation nor scale invariance . Similarly , multiple studies have reported highly limited translation invariance ( Kauderer-Abrams , 2017 ; Gong et al. , 2014 ; Azulay & Weiss , 2019 ; Chen et al. , 2017 ; Blything et al. , 2020 ) . It is important to understand the reasons for the misconception regarding the ability of CNNs to be invariant to translation . We believe this is due to two misunderstandings . Firstly , it is commonly assumed that CNNs are ‘ architecturally ’ invariant to translation ( that is , the invariance is built into the architecture through pooling and/or convolution ) . For example : “ [ CNNs ] have an architecture hard-wired for some translation-invariance while they rely heavily on learning through extensive data or data augmentation for invariance to other transformations ” ( Han et al.
, 2020 ) , and “ Most deep learning networks make heavy use of a technique called convolution ( LeCun , 1989 ) , which constrains the neural connections in the network such that they innately capture a property known as translational invariance . This is essentially the idea that an object can slide around an image while maintaining its identity ; a circle in the top left can be presumed ( even absent direct experience ) to be the same as a circle in the bottom right. ” ( Marcus , 2018 ) , see also LeCun & Bengio ( 1995 ) ; Gens & Domingos ( 2014 ) ; Xu et al . ( 2014 ) ; Marcos et al . ( 2016 ) . In fact , the convolution operation is translationally equivariant , not invariant , meaning that a transformation applied to the input is transferred to the output ( Lenc & Vedaldi , 2019 ) . Even when this point is made , such as in LeCun & Bengio ( 1995 ) , it is still assumed that the equivariance is enough to support an important degree of translation invariance . For example , LeCun & Bengio ( 1995 ) write : “ Once a feature has been detected its exact location becomes less important as long as its approximate position relative to other features is preserved ” . As a matter of fact , equivariance and invariance are mutually exclusive functions ( a representation cannot support both ) , and accordingly , any invariance supported by a network must be coded into the fully connected part rather than in the equivariant convolutional layers . Moreover , perfect equivariance can be lost in the convolutional layers ( Azulay & Weiss , 2019 ; Zhang , 2019 ) through subsequent sub-sampling ( implemented with pooling and striding operations , commonly used in almost any CNN ) . Therefore , overall , most modern CNNs are neither architecturally invariant nor perfectly equivariant to translation .
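The equivariance-versus-invariance distinction above can be checked numerically. The sketch below uses a 1-D circular correlation as a minimal stand-in for a convolutional layer (periodic padding, so shift-equivariance holds exactly; real CNN layers with zero padding and striding only approximate this):

```python
import numpy as np

def circ_corr(x, k):
    """Circular cross-correlation: a 1-D stand-in for a conv layer
    with periodic padding, so shift-equivariance holds exactly."""
    n = len(x)
    return np.array([np.dot(np.roll(x, -i)[:len(k)], k) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # toy input signal
k = rng.normal(size=3)    # toy filter
shift = 5

# Equivariance: shifting the input shifts the feature map by the same amount.
assert np.allclose(circ_corr(np.roll(x, shift), k), np.roll(circ_corr(x, k), shift))

# Not invariance: the feature map itself changes under translation...
assert not np.allclose(circ_corr(np.roll(x, shift), k), circ_corr(x, k))

# ...but a global pooling over the equivariant map IS translation invariant.
assert np.isclose(circ_corr(np.roll(x, shift), k).max(), circ_corr(x, k).max())
```

A global pooling over the equivariant feature map yields invariance, which is exactly the kind of invariance that, per the argument above, must be added on top of convolution rather than being provided by convolution itself.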
The other reason for overestimating the extent to which CNNs are invariant to translation resides in the failure to distinguish between trained and online translation invariance ( see Bowers et al . 2016 ) . Trained invariance refers to the ability to correctly classify unseen instances of a class at a trained location . For instance , a network trained across the whole visual field to identify instances of dogs will be able to identify a new image of a dog across multiple locations . This capability is obtained by data augmentation : jittering the training samples so that the network is trained on items across different locations ( Kauderer-Abrams , 2017 ; Furukawa , 2017 ) . However , this should not be considered a form of translation invariance , as it is simply a case of identifying a test image ( a novel image of a dog ) at a trained location . More interesting is the concept of ‘ online ’ translation invariance : learning to identify an object at one location immediately affords the capacity to identify that object at multiple other locations1 . Online translation invariance is generally measured by training a network on images placed at a certain location ( generally the center of a canvas ) , and then testing with the same images placed at untrained locations . In many reports CNNs performed at chance level on untrained locations ( Kauderer-Abrams , 2017 ; Gong et al. , 2014 ; Azulay & Weiss , 2019 ; Chen et al. , 2017 ; Blything et al. , 2020 ) . This problem has been tackled with several architectural changes : Sundaramoorthi & Wang ( 2019 ) suggested a solution based on a Gaussian-Hermite basis ; Bruna & Mallat ( 2012 ) used a wavelet scattering network model ; Jaderberg et al . ( 2015 ) added a new module that can account for any affine transformation ; Blything et al . ( 2020 ) used Global Average Pooling .
Without having to resort to new architectures , two works have recently challenged the previous findings , obtaining a high degree of online translation invariance : Han et al . ( 2020 ) found that a CNN exhibited an almost perfect online translation invariance on a Korean character recognition task , but did not compare these results with the previous literature . Blything et al . ( 2020 ) also found perfect online translation invariance when a VGG16 network was pretrained on ImageNet , but found almost no invariance when the same ( vanilla ) network was untrained . This latter finding explains Han et al . ’ s results , as they also used a pretrained network when assessing online translation invariance . Together these results hint at the fact that translationally invariant representations do not need to be built into the network architecture , but can be learned . In the current work , we further explore this idea . 2 CURRENT WORK . In this work we focus on ‘ online ’ translation invariance in a classic CNN , using VGG16 ( Simonyan & Zisserman , 2014 ) as a typical convolutional network . We show how , even though classic CNNs are not ‘ architecturally ’ invariant , they can ‘ learn ’ to be invariant to translation by extracting latent features of their visual environment ( the dataset ) . Learning is used in the sense that the invariance is coded within the network weights , optimized through backpropagation , and not hard-wired in the network architecture , as in Sundaramoorthi & Wang ( 2019 ) or Bruna & Mallat ( 2012 ) .
1 ‘ Trained translation invariance ’ , in which images can be identified across the canvas after training exemplar images at many locations , is not to be confused with our approach of ‘ training ’ translation invariance , in which a network is trained to exhibit ‘ online ’ translation invariance . We trained on environments in which the key characteristic was that items ’ categories were independent of their position ( for CNNs learning categories based on their position , see Semih Kayhan & van Gemert 2020 ) . Our main contribution is the finding that , by pretraining on such environments , CNNs do indeed learn to be invariant to translation . Why is this important ? First , it is important to know how CNNs work , and there is currently confusion about how and when CNNs support translation invariance . Second , a network that learns deep characteristics of its visual environment , such as being invariant to translation , rotation , etc. , is able to accelerate subsequent training , and accordingly , it is important to understand the conditions that foster invariance . In addition , because CNNs have recently been suggested as a model for the human brain ( Richards et al. , 2019 ; Ma & Peters , 2020 ; Kriegeskorte , 2015 ; Zhuang et al. , 2020 ) , it is important to understand if and how they can learn fundamental perceptual properties of human vision , of which invariance to translation is one ( Blything et al. , 2020 ; Bowers et al. , 2016 ; Koffka , 2013 ) . 3 OVERVIEW OF THE EXPERIMENTS . Blything et al . ( 2020 ) found that using a VGG16 network pretrained on ImageNet resulted in an almost perfect translation invariance on a different dataset in which items were trained at only one location . They compared these results with a vanilla network , that is , a non-pretrained network , which showed a lack of translation invariance . Here we replicate these findings with a wider variety of datasets ( Section 3.2 ) .
We then show that it is possible to obtain similar results using much simpler artificial datasets in which objects were fully translated across the canvas , but with some limitations due to the difference between the pretraining and the fine-tuning datasets ( Section 3.3 ) . As a sanity check , we showed that pretraining on the whole canvas is not enough to acquire translation invariance : the network must be pretrained on fully-translated objects ( Section 3.4 ) . Next , we report studies that assess translation invariance learned from partial information in the environment , that is , whether invariance can be generalized to the whole visual field when the network was only trained on one area of the visual field , and whether invariance extends to all classes when only a subset of classes was trained at all locations ( Section 3.5 ) . We found that generalization failed in these cases . Finally , we report a cosine similarity analysis that provides some insight into the learned internal network representations ( Section 3.6 ) and shows that the poor performance in some conditions in Section 3.3 was likely due to catastrophic forgetting/interference . 3.1 DATASETS . We used six datasets spanning a wide range of complexity . From the more complex to the less complex2 , we used : EMNIST ( Cohen et al. , 2017 ) ; FashionMNIST ( FMNIST ) , from Xiao et al . ( 2017 ) ; Kuzushiji MNIST ( KMNIST ) , from Clanuwat et al . ( 2018 ) ; MNIST , from LeCun et al . ( 1998 ) ; and two versions of the Leek dataset used in Blything et al . : one contained only 10 of the 24 images of the original dataset ( Leek10 ) ; the other contained only two images from the original dataset ( Leek2 ) , disjoint from the images used for Leek10 . Representative examples ( and , for Leek10 and Leek2 , the entire datasets ) are shown in Figure 1B . We did not apply any data augmentation to the datasets ( apart from translating the items on the canvas , as explained below ) .
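The translation manipulation just described (one-location training versus fully-translated training) can be sketched in a few lines. The canvas and item sizes below are illustrative choices, not necessarily the sizes used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
CANVAS = 64   # canvas side, a hypothetical size (the paper's exact canvas may differ)
ITEM = 28     # MNIST-style item side

def place(item, top, left, canvas=CANVAS):
    """Paste a small item onto a larger blank canvas at (top, left)."""
    out = np.zeros((canvas, canvas), dtype=item.dtype)
    out[top:top + ITEM, left:left + ITEM] = item
    return out

item = rng.random((ITEM, ITEM))

# One-location training (the 'vanilla' condition): always the center.
center = (CANVAS - ITEM) // 2
x_center = place(item, center, center)

# Fully-translated pretraining: a fresh random location per sample.
top, left = rng.integers(0, CANVAS - ITEM + 1, size=2)
x_moved = place(item, top, left)

assert x_center.shape == x_moved.shape == (CANVAS, CANVAS)
assert np.isclose(x_center.sum(), x_moved.sum())  # same item, different location
```

Testing 'online' invariance then amounts to training a classifier only on samples like `x_center` and evaluating it on samples like `x_moved`.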
This paper addresses the problem of how convolutional neural networks (CNNs) achieve translation invariance, and the authors argue that this invariance is mostly learned from suitable datasets, rather than being a result of the architecture. In particular, ImageNet-pretrained networks have already learned to be invariant to translation, and retain this invariance when fine-tuned. The experiments are performed on MNIST-like datasets, evaluating classification performance at different locations. The authors conclude that invariance is achieved when the CNN is trained with the different objects being presented at different locations across the canvas, and that the invariance can be forgotten after subsequent training.
Is Attention Better Than Matrix Decomposition?
1 INTRODUCTION . Since self-attention and the transformer ( Vaswani et al. , 2017 ) showed significant advantages over recurrent neural networks and convolutional neural networks in capturing long-distance dependencies , attention has been widely adopted by computer vision ( Wang et al. , 2018 ; Zhang et al. , 2019a ) and natural language processing ( Devlin et al. , 2019 ) for global information mining . However , is hand-crafted attention irreplaceable when modeling the global context ? This paper focuses on a new approach to designing global context modules . The key idea is that , if we formulate an inductive bias like the global context into an objective function , the optimization algorithm that minimizes the objective function can construct a computational graph , i.e. , the architecture we need in the networks . We particularize this idea by developing a counterpart for the most representative global context module , self-attention . Considering extracting global information in the networks as finding a dictionary and the corresponding codes to capture the inherent correlation , we model the context discovery as low-rank completion of the input tensor and solve it via matrix decomposition . This paper then proposes a global correlation block , Hamburger , which employs matrix decomposition to factorize the learned representation into sub-matrices so as to recover the clean low-rank signal subspace . The iterative optimization algorithm that solves the matrix decomposition defines the central computational graph , i.e. , Hamburger ’ s architecture . Our work takes advantage of matrix decomposition models as the foundation of Hamburger , including Vector Quantization ( VQ ) ( Gray & Neuhoff , 1998 ) , Concept Decomposition ( CD ) ( Dhillon & Modha , 2001 ) , and Non-negative Matrix Factorization ( NMF ) ( Lee & Seung , 1999 ) . Additionally , instead of directly applying the Back-Propagation Through Time ( BPTT ) algorithm ( Werbos et al.
, 1990 ) to differentiate the iterative optimization , we adopt a truncated BPTT algorithm , i.e. , a one-step gradient , to back-propagate the gradient effectively . ∗Equal first authorship . †Corresponding author . We illustrate the advantages of Hamburger in the fundamental vision tasks where global information has been proven crucial , including semantic segmentation and image generation . The experiments prove that the optimization-designed Hamburger can perform competitively with state-of-the-art attention models while avoiding the unstable gradient back-propagated through the iterative computational graph of MD . Hamburger sets new state-of-the-art records on the PASCAL VOC dataset ( Everingham et al. , 2010 ) and PASCAL Context dataset ( Mottaghi et al. , 2014 ) for semantic segmentation and surpasses existing attention modules for GANs in large-scale image generation on ImageNet ( Deng et al. , 2009 ) . The contributions of this paper are listed as follows : • We show a white-box approach to designing global information blocks , i.e. , by turning the optimization algorithm that minimizes an objective function , in which modeling the global correlation is formulated as a low-rank completion problem , into the architecture . • We propose Hamburger , a light yet powerful global context module with O ( n ) complexity , surpassing various attention modules on semantic segmentation and image generation . • We find that the main obstacle to applying MD in the networks is the unstable backward gradient through its iterative optimization algorithm . As a pragmatic solution , the proposed one-step gradient facilitates the training of Hamburger with MDs . 2 METHODOLOGY . 2.1 WARM UP . Since matrix decomposition is pivotal to the proposed Hamburger , we first review the idea of matrix decomposition . A common view is that matrix decomposition factorizes the observed matrix into a product of several sub-matrices , e.g. , Singular Value Decomposition .
However , a more illuminating perspective is that , by assuming the generation process , matrix decomposition acts as the inverse of the generation , disassembling the atoms that make up the complex data . From the reconstruction of the original matrices , matrix decomposition recovers the latent structure of observed data . Suppose that the given data are arranged as the columns of a large matrix X = [ x1 , · · · , xn ] ∈ Rd×n . A general assumption is that there is a low-dimensional subspace , or a union of multiple subspaces , hidden in X . That is , there exists a dictionary matrix D = [ d1 , · · · , dr ] ∈ Rd×r and corresponding codes C = [ c1 , · · · , cn ] ∈ Rr×n such that X can be expressed as X = X̄ + E = DC + E , ( 1 ) where reading Eq . ( 1 ) from right to left describes the generation process and reading it from left to right describes the decomposition , X̄ ∈ Rd×n is the output low-rank reconstruction , and E ∈ Rd×n is the noise matrix to be discarded . Here we assume that the recovered matrix X̄ has the low-rank property , such that rank ( X̄ ) ≤ min ( rank ( D ) , rank ( C ) ) ≤ r ≪ min ( d , n ) . ( 2 ) Different MDs can be derived by assuming structures on the matrices D , C , and E ( Kolda & Bader , 2009 ; Udell et al. , 2016 ) . MD is usually formulated as an objective with various constraints and then solved by optimization algorithms , with classic applications to image denoising ( Wright et al. , 2009 ; Lu et al. , 2014 ) , inpainting ( Mairal et al. , 2010 ) , and feature extraction ( Zhang et al. , 2012 ) . 2.2 PROPOSED METHOD . We focus on building global context modules for the networks without painstaking hand-crafted design . Before starting our discussion , we review the representative hand-designed context block , self-attention , pithily . The attention mechanism aims at finding a group of concepts for further conscious reasoning from a massive unconscious context ( Xu et al. , 2015 ; Bengio , 2017 ; Goyal et al. , 2019 ) . As a representative , self-attention ( Vaswani et al.
, 2017 ) is proposed for learning long-range dependencies in machine translation , Attention ( Q , K , V ) = softmax ( QK⊤ / √d ) V , ( 3 ) where Q , K , V ∈ Rn×d are features projected by linear transformations from the input . Self-attention extracts global information by attending to all tokens at a time rather than the typical one-by-one processing of recurrent neural networks . Though self-attention and its variants have achieved great success , researchers are confronted with ( 1 ) developing new global context modules based on self-attention , typically via hand-crafted engineering , and ( 2 ) explaining why current attention models work . This paper bypasses both issues and finds a method to easily design global context modules via a well-defined white-box toolkit . We try to formulate the human inductive bias , like the global context , as an objective function and use the optimization algorithm that solves such a problem to design the module ’ s architecture . The optimization algorithm creates a computational graph , takes some input , and finally outputs the solution . We use the computational graph of the optimization algorithm as the central part of our context module . Based on this approach , we need to model the networks ’ global information issue as an optimization problem . Take convolutional neural networks ( CNNs ) as an example for further discussion . The networks output a tensor X ∈ RC×H×W after we feed in an image . Since the tensor can be seen as a set of HW C-dimensional hyper-pixels , we unfold the tensor into a matrix X ∈ RC×HW . When the module learns the long-range dependencies or the global context , the hidden assumption is that the hyper-pixels are inherently correlated . For the sake of simplicity , we assume that the hyper-pixels are linearly dependent , which means that each hyper-pixel in X can be expressed as a linear combination of bases whose number is typically much smaller than HW .
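For reference, Eq. (3) is a few lines of numpy; this is the O(n²) baseline that the decomposition-based modules are later compared against:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    """Eq. (3): Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # n x n attention map
    return A @ V, A

rng = np.random.default_rng(0)
n, d = 6, 8                      # 6 tokens, 8-dimensional features
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out, A = self_attention(Q, K, V)

assert out.shape == (n, d)
assert np.allclose(A.sum(axis=1), 1.0)   # each token's weights sum to 1
```

Note the n × n matrix A: it is precisely this intermediate that makes self-attention quadratic in the number of tokens (hyper-pixels).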
In the ideal situation , the global information hidden in X can be low-rank . However , due to vanilla CNNs ’ poor ability to model the global context ( Wang et al. , 2018 ; Zhang et al. , 2019a ) , the learned X is usually corrupted with redundant information or incompleteness . The above analysis suggests a potential method to model the global context , i.e. , by completing the low-rank part X̄ in the unfolded matrix X and discarding the noise part E , using the classic matrix decomposition models described in Eq . ( 1 ) , which filters out the redundancy and incompleteness at the same time . We thus model learning the global context as a low-rank completion problem with matrix decomposition as its solution . Using the notation of Sec . 2.1 , the general objective function of matrix decomposition is min_{D , C} L ( X , DC ) + R1 ( D ) + R2 ( C ) , ( 4 ) where L is the reconstruction loss and R1 and R2 are regularization terms for the dictionary D and the codes C. Denote the optimization algorithm that minimizes Eq . ( 4 ) as M. M is the core architecture we deploy in our global context module . To help readers further understand this modeling , we also provide a more intuitive illustration in Appendix G. In the later sections , we introduce our global context block , Hamburger , and then discuss detailed MD models and optimization algorithms for M . Finally , we handle the gradient issue for back-propagation through matrix decomposition . 2.2.1 HAMBURGER . Hamburger consists of one slice of “ ham ” ( matrix decomposition ) and two slices of “ bread ” ( linear transformation ) .
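The low-rank completion modeling above can be checked on synthetic data: generate X = DC + E as in Eq. (1) with a rank-r signal and mild noise, then recover a rank-r reconstruction X̄ and verify that it is closer to the clean signal than the corrupted observation. Truncated SVD is used here as one classic instance of M, not the paper's specific choice:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, r = 32, 100, 4

# Generate X = DC + E as in Eq. (1): a rank-r signal plus noise.
D = rng.normal(size=(d, r))          # dictionary
C = rng.normal(size=(r, n))          # codes
E = 0.05 * rng.normal(size=(d, n))   # noise to be discarded
X = D @ C + E

# Truncated SVD yields the best rank-r reconstruction X̄ in Frobenius norm.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_bar = U[:, :r] * s[:r] @ Vt[:r]

assert np.linalg.matrix_rank(X_bar) <= r          # rank(X̄) ≤ r ≪ min(d, n), Eq. (2)
# X̄ is closer to the clean signal DC than the observed X is.
assert np.linalg.norm(X_bar - D @ C) < np.linalg.norm(X - D @ C)
```

This is the sense in which completing the low-rank part filters out redundancy: the discarded residual X − X̄ plays the role of E.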
As the name implies , Hamburger first maps the input Z ∈ Rdz×n into feature space with a linear transformation Wl ∈ Rd×dz , namely the “ lower bread ” , then uses matrix decomposition M to solve for a low-rank signal subspace , corresponding to the “ ham ” , and finally transforms the extracted signals into the output with another linear transformation Wu ∈ Rdz×d , called the “ upper bread ” , H ( Z ) = Wu M ( Wl Z ) , ( 5 ) where M is the matrix decomposition that recovers the clean latent structure , functioning as a global non-linearity . Detailed architectures of M , i.e. , optimization algorithms to factorize X , are discussed in Sec . 2.2.2 . Fig . 1 describes the architecture of Hamburger , where it collaborates with the networks via Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) and a skip connection , and finally outputs Y , Y = Z + BN ( H ( Z ) ) . ( 6 ) 2.2.2 HAMS . This section describes the structure of the “ ham ” , i.e. , M in Eq . ( 5 ) . As discussed in the previous section , by formulating the global information discovery as an optimization problem of MD , the algorithms that solve the MD naturally compose M . M takes the output of the “ lower bread ” as its input and computes a low-rank reconstruction as its output , denoted as X and X̄ , respectively : M ( X ) = X̄ = DC . ( 7 ) We investigate two MD models for M , Vector Quantization ( VQ ) and Non-negative Matrix Factorization ( NMF ) , to solve D and C and reconstruct X̄ , while leaving Concept Decomposition ( CD ) to Appendix B . The selected MD models are introduced briefly because we endeavor to illustrate the importance of the low-rank inductive bias and the optimization-driven designing method for global context modules rather than any specific MD models . It is preferred to abstract the MD part as a whole , i.e. , M in the context of this paper , and focus on how Hamburger can show its superiority in its entirety .
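Eqs. (5)–(7) can be sketched end-to-end. In this sketch the "ham" M is a truncated-SVD stand-in (the paper's actual hams are VQ/CD/NMF), and the BN is a toy inference-time normalization rather than a trained Batch Normalization layer:

```python
import numpy as np

rng = np.random.default_rng(0)
dz, d, n, r = 24, 16, 64, 4

Wl = rng.normal(size=(d, dz)) / np.sqrt(dz)   # "lower bread"
Wu = rng.normal(size=(dz, d)) / np.sqrt(d)    # "upper bread"

def M(X, r=r):
    """Stand-in 'ham': truncated SVD as the low-rank reconstruction X̄ = DC."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

def batch_norm(Y, eps=1e-5):
    """Toy per-feature normalization standing in for BN at inference time."""
    return (Y - Y.mean(axis=1, keepdims=True)) / (Y.std(axis=1, keepdims=True) + eps)

def hamburger(Z):
    H = Wu @ M(Wl @ Z)          # Eq. (5): H(Z) = W_u M(W_l Z)
    return Z + batch_norm(H)    # Eq. (6): skip connection + BN

Z = rng.normal(size=(dz, n))    # n hyper-pixels with dz channels
Y = hamburger(Z)
assert Y.shape == Z.shape
assert np.linalg.matrix_rank(M(Wl @ Z)) <= r
```

The skip connection means the block adds a globally filtered, low-rank correction on top of the local features rather than replacing them.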
Vector Quantization . Vector Quantization ( VQ ) ( Gray & Neuhoff , 1998 ) , a classic data compression algorithm , can be formulated as an optimization problem in terms of matrix decomposition : min_{D , C} ‖X − DC‖F s.t . ci ∈ { e1 , e2 , · · · , er } , ( 8 ) where ei is the canonical basis vector , ei = [ 0 , · · · , 1 , · · · , 0 ]⊤ with the 1 in the i-th position . The solution that minimizes the objective in Eq . ( 8 ) is K-means ( Gray & Neuhoff , 1998 ) . However , to ensure that VQ is differentiable , we replace the hard arg min and Euclidean distance with softmax and cosine similarity , leading to Alg . 1 , where cosine ( D , X ) is a similarity matrix whose entries satisfy cosine ( D , X ) ij = di⊤ xj / ( ‖di‖ ‖xj‖ ) , softmax is applied column-wise , and T is the temperature . Further , we can obtain a hard assignment as a one-hot vector when T → 0 .

Algorithm 1 Ham : Soft VQ
Input X . Initialize D , C .
for k from 1 to K do
  C ← softmax ( ( 1/T ) cosine ( D , X ) )
  D ← X C⊤ diag ( C 1n )−1
end for
Output X̄ = DC .

Algorithm 2 Ham : NMF with MU
Input X . Initialize non-negative D , C .
for k from 1 to K do
  Cij ← Cij ( D⊤X ) ij / ( D⊤DC ) ij
  Dij ← Dij ( X C⊤ ) ij / ( DCC⊤ ) ij
end for
Output X̄ = DC .

Non-negative Matrix Factorization . If we impose non-negative constraints on the dictionary D and the codes C , this leads to Non-negative Matrix Factorization ( NMF ) ( Lee & Seung , 1999 ) : min_{D , C} ‖X − DC‖F s.t . Dij ≥ 0 , Cjk ≥ 0 . ( 9 ) To satisfy the non-negative constraints , we add a ReLU non-linearity before putting X into NMF . We apply the Multiplicative Update ( MU ) rules ( Lee & Seung , 2001 ) in Alg . 2 to solve NMF , which guarantees convergence . As white-box global context modules , VQ , CD , and NMF are straightforward and light , showing remarkable efficiency . They are formulated as optimization algorithms that mainly consist of matrix multiplications with complexity O ( ndr ) , much cheaper than the O ( n2d ) complexity of self-attention since r ≪ n .
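Both hams translate almost line-by-line into numpy. This sketch follows Algs. 1 and 2; the small eps added to the MU denominators is a common implementation detail for numerical safety, not stated in the listings:

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def soft_vq(X, r=4, K=10, T=0.1, seed=1):
    """Alg. 1: soft Vector Quantization with column-wise softmax over cosine(D, X)."""
    d, n = X.shape
    g = np.random.default_rng(seed)
    D = g.random((d, r)) + 0.1
    C = None
    for _ in range(K):
        sim = (D / np.linalg.norm(D, axis=0)).T @ (X / np.linalg.norm(X, axis=0))
        C = softmax(sim / T, axis=0)                  # soft assignments, columns sum to 1
        D = X @ C.T @ np.diag(1.0 / (C @ np.ones(n)))  # D <- X C^T diag(C 1_n)^{-1}
    return D @ C                                       # X̄ = DC

def nmf_mu(X, r=4, K=50, seed=1, eps=1e-9):
    """Alg. 2: NMF via Multiplicative Updates (loss is monotonically non-increasing)."""
    d, n = X.shape
    g = np.random.default_rng(seed)
    D = g.random((d, r)) + 0.1
    C = g.random((r, n)) + 0.1
    for _ in range(K):
        C *= (D.T @ X) / (D.T @ D @ C + eps)
        D *= (X @ C.T) / (D @ C @ C.T + eps)
    return D @ C

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(16, 40)))  # ReLU'd input: non-negative, as required for NMF

X_vq = soft_vq(X)
X_nmf = nmf_mu(X)
assert X_vq.shape == X_nmf.shape == X.shape
assert np.linalg.matrix_rank(X_nmf) <= 4           # rank-r reconstruction
# More MU iterations do not increase the reconstruction error (same init via seed).
assert np.linalg.norm(X - nmf_mu(X, K=50)) <= np.linalg.norm(X - nmf_mu(X, K=1)) + 1e-6
```

Every operation inside the loops is a matrix product involving D (d × r) or C (r × n), which is where the O(ndr) complexity comes from.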
All three MDs are memory-friendly since they avoid generating a large n × n matrix as an intermediate variable , like the product of Q and K in the self-attention of Eq . ( 3 ) . In the later sections , our experiments prove that MDs are at least on par with self-attention , though the architectures of M are created by optimization and look different from classic dot-product self-attention .
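The memory claim can be made concrete with a quick count of intermediate entries. The configuration below (n = HW for a 64×64 feature map; d and r are hypothetical choices, not the paper's exact settings) illustrates the gap:

```python
# Intermediate-memory arithmetic for an illustrative configuration.
n, d, r = 64 * 64, 512, 64       # n = HW hyper-pixels, d channels, r atoms

attn_intermediate = n * n        # self-attention materializes an n x n map (QK^T)
md_intermediate = d * r + r * n  # MD only stores D (d x r) and C (r x n)

assert attn_intermediate == 16_777_216
assert md_intermediate == 294_912
assert attn_intermediate / md_intermediate > 50   # roughly 57x fewer entries
```

The FLOP-count gap is analogous: O(ndr) versus O(n²d) works out to a factor of about n/r, here 64.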
The paper presents a method based on matrix decomposition (MD) for encoding global context in computer vision tasks. In particular, a "Hamburger" block is proposed, encompassing matrix decomposition as its central part between two linear projection layers. Direct comparisons and relations are drawn between the proposed method and the widely adopted self-attention paradigm. The proposed method leads to improved results when Hamburger blocks are used instead of self-attention blocks, while at the same time reducing the number of parameters, memory footprint and inference time.
Is Attention Better Than Matrix Decomposition?
1 INTRODUCTION . Since self-attention and the transformer ( Vaswani et al. , 2017 ) showed significant advantages over recurrent neural networks and convolutional neural networks in capturing long-distance dependencies , attention has been widely adopted by computer vision ( Wang et al. , 2018 ; Zhang et al. , 2019a ) and natural language processing ( Devlin et al. , 2019 ) for global information mining . However , is hand-crafted attention irreplaceable when modeling the global context ? This paper focuses on a new approach to designing global context modules . The key idea is that , if we formulate an inductive bias like the global context into an objective function , the optimization algorithm that minimizes the objective function can construct a computational graph , i.e. , the architecture we need in the networks . We particularize this idea by developing a counterpart for the most representative global context module , self-attention . Viewing the extraction of global information in the networks as finding a dictionary and the corresponding codes that capture the inherent correlation , we model context discovery as low-rank completion of the input tensor and solve it via matrix decomposition . This paper then proposes a global correlation block , Hamburger , which employs matrix decomposition to factorize the learned representation into sub-matrices so as to recover the clean low-rank signal subspace . The iterative optimization algorithm that solves the matrix decomposition defines the central computational graph , i.e. , Hamburger ' s architecture . Our work takes advantage of matrix decomposition models as the foundation of Hamburger , including Vector Quantization ( VQ ) ( Gray & Neuhoff , 1998 ) , Concept Decomposition ( CD ) ( Dhillon & Modha , 2001 ) , and Non-negative Matrix Factorization ( NMF ) ( Lee & Seung , 1999 ) . Additionally , instead of directly applying the Back-Propagation Through Time ( BPTT ) algorithm ( Werbos et al.
, 1990 ) to differentiate the iterative optimization , we adopt a truncated BPTT algorithm , i.e. , the one-step gradient , to back-propagate the gradient effectively . We illustrate the advantages of Hamburger in the fundamental vision tasks where global information has been proven crucial , including semantic segmentation and image generation . The experiments prove that the optimization-designed Hamburger can perform competitively with state-of-the-art attention models while avoiding the unstable gradient back-propagated through the iterative computational graph of MD . Hamburger sets new state-of-the-art records on the PASCAL VOC dataset ( Everingham et al. , 2010 ) and the PASCAL Context dataset ( Mottaghi et al. , 2014 ) for semantic segmentation and surpasses existing attention modules for GANs in large-scale image generation on ImageNet ( Deng et al. , 2009 ) . The contributions of this paper are as follows : • We show a white-box approach to designing global information blocks , i.e. , by turning the optimization algorithm that minimizes an objective function , in which modeling the global correlation is formulated as a low-rank completion problem , into the architecture . • We propose Hamburger , a light yet powerful global context module with O ( n ) complexity , surpassing various attention modules on semantic segmentation and image generation . • We identify that the main obstacle to applying MD in networks is the unstable backward gradient through its iterative optimization algorithm . As a pragmatic solution , the proposed one-step gradient facilitates the training of Hamburger with MDs . 2 METHODOLOGY . 2.1 WARM UP . Since matrix decomposition is pivotal to the proposed Hamburger , we first review the idea of matrix decomposition . A common view is that matrix decomposition factorizes the observed matrix into a product of several sub-matrices , e.g. , Singular Value Decomposition .
However , a more illuminating perspective is that , by assuming the generation process , matrix decomposition acts as the inverse of the generation , disassembling the atoms that make up the complex data . From the reconstruction of the original matrices , matrix decomposition recovers the latent structure of the observed data . Suppose that the given data are arranged as the columns of a large matrix X = [ x1 , · · · , xn ] ∈ Rd×n . A general assumption is that there is a low-dimensional subspace , or a union of multiple subspaces , hidden in X . That is , there exists a dictionary matrix D = [ d1 , · · · , dr ] ∈ Rd×r and corresponding codes C = [ c1 , · · · , cn ] ∈ Rr×n such that X can be expressed as X = X̄ + E = DC + E , ( 1 ) read right-to-left as generation and left-to-right as decomposition , where X̄ ∈ Rd×n is the output low-rank reconstruction , and E ∈ Rd×n is the noise matrix to be discarded . Here we assume that the recovered matrix X̄ has the low-rank property , such that rank ( X̄ ) ≤ min ( rank ( D ) , rank ( C ) ) ≤ r ≪ min ( d , n ) . ( 2 ) Different MDs can be derived by assuming structures for the matrices D , C , and E ( Kolda & Bader , 2009 ; Udell et al. , 2016 ) . MD is usually formulated as an objective with various constraints and then solved by optimization algorithms , with classic applications to image denoising ( Wright et al. , 2009 ; Lu et al. , 2014 ) , inpainting ( Mairal et al. , 2010 ) , and feature extraction ( Zhang et al. , 2012 ) . 2.2 PROPOSED METHOD . We focus on building global context modules for networks without painstaking hand-crafted design . Before starting our discussion , we briefly review the representative hand-designed context block , self-attention . The attention mechanism aims at finding a group of concepts for further conscious reasoning from massive unconscious context ( Xu et al. , 2015 ; Bengio , 2017 ; Goyal et al. , 2019 ) . As a representative , self-attention ( Vaswani et al.
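As a concrete illustration of Eq. (1)-(2) (a generic sketch, not from the paper), the snippet below uses truncated SVD as the decomposition to recover the low-rank part X̄ of a noisy matrix; all sizes and the noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, r = 32, 100, 4

# Synthetic data following Eq. (1): a rank-r signal DC plus noise E.
D_true = rng.normal(size=(d, r))
C_true = rng.normal(size=(r, n))
X = D_true @ C_true + 0.01 * rng.normal(size=(d, n))

# Truncated SVD gives the best rank-r reconstruction X_bar in Frobenius
# norm (Eckart-Young theorem), one classic instance of MD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_bar = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# The rank-r reconstruction leaves only a tiny residual: the noise term E.
rel_err = np.linalg.norm(X - X_bar) / np.linalg.norm(X)
print(rel_err < 0.05)
```

The residual X − X̄ is what Eq. (1) calls E, discarded as noise.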
, 2017 ) is proposed for learning long-range dependencies in machine translation , Attention ( Q , K , V ) = softmax ( QK⊤ / √d ) V , ( 3 ) where Q , K , V ∈ Rn×d are features projected by linear transformations from the input . Self-attention extracts global information by attending to all tokens at a time rather than the typical one-by-one processing of recurrent neural networks . Though self-attention and its variants have achieved great success , researchers are confronted with ( 1 ) developing new global context modules based on self-attention , typically via hand-crafted engineering , and ( 2 ) explaining why current attention models work . This paper bypasses both issues and finds a method to easily design global context modules via a well-defined white-box toolkit . We try to formulate a human inductive bias , like the global context , as an objective function and use the optimization algorithm that solves this problem to design the module ' s architecture . The optimization algorithm creates a computational graph , takes some input , and finally outputs the solution . We use the computational graph of the optimization algorithm as the central part of our context module . Based on this approach , we need to model the networks ' global information issue as an optimization problem . Take convolutional neural networks ( CNN ) as an example for further discussion . The networks output a tensor X ∈ RC×H×W after we feed in an image . Since the tensor can be seen as a set of HW C-dimensional hyper-pixels , we unfold the tensor into a matrix X ∈ RC×HW . When the module learns the long-range dependencies or the global context , the hidden assumption is that the hyper-pixels are inherently correlated . For the sake of simplicity , we assume that the hyper-pixels are linearly dependent , which means that each hyper-pixel in X can be expressed as a linear combination of bases whose number is typically much smaller than HW .
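For reference, Eq. (3) can be sketched in a few lines of numpy; the shapes and random inputs are illustrative, and note the explicit n × n attention map that the MD-based modules later avoid:

```python
import numpy as np

def self_attention(Q, K, V):
    """Scaled dot-product attention of Eq. (3): softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n, n) similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)             # row-wise softmax
    return A @ V                                   # each output: convex mix of V rows

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = self_attention(Q, K, V)
print(out.shape)  # (16, 8)
```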
In the ideal situation , the global information hidden in X can be low-rank . However , due to vanilla CNN ' s poor ability to model the global context ( Wang et al. , 2018 ; Zhang et al. , 2019a ) , the learned X is usually corrupted with redundant information or incompleteness . The above analysis suggests a potential method to model the global context , i.e. , by completing the low-rank part X̄ of the unfolded matrix X and discarding the noise part E , using the classic matrix decomposition models described in Eq . ( 1 ) , which filters out the redundancy and incompleteness at the same time . We thus model learning the global context as a low-rank completion problem with matrix decomposition as its solution . Using the notation of Sec . 2.1 , the general objective function of matrix decomposition is min_{D , C} L ( X , DC ) + R1 ( D ) + R2 ( C ) , ( 4 ) where L is the reconstruction loss , and R1 and R2 are regularization terms for the dictionary D and the codes C . Denote the optimization algorithm that minimizes Eq . ( 4 ) as M . M is the core architecture we deploy in our global context module . To help readers further understand this modeling , we also provide a more intuitive illustration in Appendix G. In the later sections , we introduce our global context block , Hamburger , and then discuss detailed MD models and optimization algorithms for M . Finally , we handle the gradient issue for back-propagation through matrix decomposition . 2.2.1 HAMBURGER . Hamburger consists of one slice of “ ham ” ( matrix decomposition ) and two slices of “ bread ” ( linear transformation ) .
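To make concrete the idea that the solver M of Eq. (4) is itself a computational graph, here is a minimal sketch (not the paper's actual solver): plain unrolled gradient descent on the reconstruction loss with no regularizers, all sizes, step counts, and learning rates illustrative:

```python
import numpy as np

def md_unrolled(X, r=4, K=100, lr=0.005, rng=None):
    """Unrolled gradient descent on ||X - DC||_F^2: each iteration is one
    'layer' of the computational graph that the solver M defines."""
    rng = rng or np.random.default_rng(0)
    d, n = X.shape
    D, C = rng.normal(size=(d, r)), rng.normal(size=(r, n))
    for _ in range(K):
        R = D @ C - X                                  # residual of the loss
        D, C = D - lr * (R @ C.T), C - lr * (D.T @ R)  # joint gradient step
    return D @ C

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 64))
err0 = np.linalg.norm(X - md_unrolled(X, K=0)) / np.linalg.norm(X)
err = np.linalg.norm(X - md_unrolled(X)) / np.linalg.norm(X)
print(err < err0)  # the unrolled graph improves the reconstruction
```

The point is architectural: the loop body, not a hand-designed layer, fixes the structure of the module.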
As the name implies , Hamburger first maps the input Z ∈ Rdz×n into feature space with a linear transformation Wl ∈ Rd×dz , namely the “ lower bread ” , then uses matrix decomposition M to solve for a low-rank signal subspace , corresponding to the “ ham ” , and finally transforms the extracted signals into the output with another linear transformation Wu ∈ Rdz×d , called the “ upper bread ” : H ( Z ) = Wu M ( Wl Z ) , ( 5 ) where M is matrix decomposition recovering the clear latent structure and functioning as a global non-linearity . Detailed architectures of M , i.e. , optimization algorithms to factorize X , are discussed in Sec . 2.2.2 . Fig . 1 describes the architecture of Hamburger , where it collaborates with the networks via Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) and a skip connection , and finally outputs Y , Y = Z + BN ( H ( Z ) ) . ( 6 ) 2.2.2 HAMS . This section describes the structure of the “ ham ” , i.e. , M in Eq . ( 5 ) . As discussed in the previous section , by formulating the global information discovery as an optimization problem of MD , the algorithms that solve MD naturally compose M . M takes the output of the “ lower bread ” as its input and computes a low-rank reconstruction as its output , denoted as X and X̄ , respectively : M ( X ) = X̄ = DC . ( 7 ) We investigate two MD models for M , Vector Quantization ( VQ ) and Non-negative Matrix Factorization ( NMF ) , to solve for D and C and reconstruct X̄ , while leaving Concept Decomposition ( CD ) to Appendix B . The selected MD models are introduced briefly because we endeavor to illustrate the importance of the low-rank inductive bias and the optimization-driven designing method for global context modules rather than any specific MD models . It is preferred to abstract the MD part as a whole , i.e. , M in the context of this paper , and focus on how Hamburger shows its superiority in its entirety .
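A minimal sketch of Eq. (5)-(6), with assumptions made explicit: the "ham" M is stood in for by a truncated-SVD low-rank reconstruction (any MD from Sec. 2.2.2 could replace it), Batch Normalization is approximated by a simple per-channel normalization, and the "bread" weights are random rather than learned:

```python
import numpy as np

def ham(X, r=8):
    """Placeholder 'ham' M: best rank-r reconstruction of X, as in Eq. (7)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

def hamburger(Z, Wl, Wu, r=8):
    """Eq. (5)-(6): lower bread -> ham -> upper bread, then norm + skip."""
    H = Wu @ ham(Wl @ Z, r)                                   # Eq. (5)
    # Per-channel normalization standing in for Batch Normalization.
    H = (H - H.mean(axis=1, keepdims=True)) / (H.std(axis=1, keepdims=True) + 1e-5)
    return Z + H                                              # Eq. (6)

rng = np.random.default_rng(0)
dz, d, n = 64, 32, 256                          # n = H*W hyper-pixels
Z = rng.normal(size=(dz, n))
Wl = rng.normal(size=(d, dz)) / np.sqrt(dz)     # "lower bread"
Wu = rng.normal(size=(dz, d)) / np.sqrt(d)      # "upper bread"
Y = hamburger(Z, Wl, Wu)
print(Y.shape)  # (64, 256)
```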
Vector Quantization . Vector Quantization ( VQ ) ( Gray & Neuhoff , 1998 ) , a classic data compression algorithm , can be formulated as an optimization problem in terms of matrix decomposition : min_{D , C} ‖X − DC‖F s.t . ci ∈ { e1 , e2 , · · · , er } , ( 8 ) where ei is the i-th canonical basis vector , ei = [ 0 , · · · , 1 , · · · , 0 ]⊤ with the 1 in the i-th position . The algorithm that minimizes the objective in Eq . ( 8 ) is K-means ( Gray & Neuhoff , 1998 ) . However , to ensure that VQ is differentiable , we replace the hard arg min and the Euclidean distance with softmax and cosine similarity , leading to Alg . 1 , where cosine ( D , X ) is a similarity matrix whose entries satisfy cosine ( D , X ) ij = di⊤ xj / ( ‖di‖ ‖xj‖ ) , softmax is applied column-wise , and T is the temperature . A hard assignment by a one-hot vector is recovered as T → 0 .

Algorithm 1 Ham : Soft VQ
Input X . Initialize D , C .
for k from 1 to K do
  C ← softmax ( (1/T) cosine ( D , X ) )
  D ← X C⊤ diag ( C 1n )−1
end for
Output X̄ = DC .

Algorithm 2 Ham : NMF with MU
Input X . Initialize non-negative D , C .
for k from 1 to K do
  Cij ← Cij ( D⊤X ) ij / ( D⊤DC ) ij
  Dij ← Dij ( XC⊤ ) ij / ( DCC⊤ ) ij
end for
Output X̄ = DC .

Non-negative Matrix Factorization . If we impose non-negativity constraints on the dictionary D and the codes C , we obtain Non-negative Matrix Factorization ( NMF ) ( Lee & Seung , 1999 ) : min_{D , C} ‖X − DC‖F s.t . Dij ≥ 0 , Cjk ≥ 0 . ( 9 ) To satisfy the non-negativity constraints , we add a ReLU non-linearity before feeding X into NMF . We apply the Multiplicative Update ( MU ) rules ( Lee & Seung , 2001 ) in Alg . 2 to solve NMF , which guarantees convergence . As white-box global context modules , VQ , CD , and NMF are straightforward and light , showing remarkable efficiency . They are formulated as optimization algorithms that mainly consist of matrix multiplications with complexity O ( ndr ) , much cheaper than the O ( n2d ) complexity of self-attention since r ≪ n .
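Algorithms 1 and 2 translate directly into numpy; the following sketch is illustrative (random initialization, hyperparameters r, K, T chosen arbitrarily) rather than the paper's exact implementation:

```python
import numpy as np

def soft_vq(X, r=8, K=6, T=0.1, rng=None):
    """Alg. 1: soft Vector Quantization with cosine similarity."""
    rng = rng or np.random.default_rng(0)
    d, n = X.shape
    D = rng.normal(size=(d, r))
    for _ in range(K):
        # cosine(D, X): (r, n) similarity between dictionary atoms and columns.
        sim = (D / np.linalg.norm(D, axis=0)).T @ (X / np.linalg.norm(X, axis=0))
        s = sim / T
        E = np.exp(s - s.max(axis=0, keepdims=True))
        C = E / E.sum(axis=0, keepdims=True)     # column-wise softmax, (r, n)
        D = (X @ C.T) / C.sum(axis=1)            # D <- X C^T diag(C 1_n)^(-1)
    return D @ C

def nmf_mu(X, r=8, K=30, rng=None):
    """Alg. 2: NMF with Multiplicative Update rules (Lee & Seung, 2001)."""
    rng = rng or np.random.default_rng(0)
    X = np.maximum(X, 0)                         # ReLU: non-negative input
    d, n = X.shape
    D, C = rng.random((d, r)) + 0.1, rng.random((r, n)) + 0.1
    for _ in range(K):
        C *= (D.T @ X) / (D.T @ D @ C + 1e-9)
        D *= (X @ C.T) / (D @ C @ C.T + 1e-9)
    return D @ C

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(16, 64)))
err = np.linalg.norm(X - nmf_mu(X)) / np.linalg.norm(X)
print(soft_vq(X).shape, err < 0.9)
```

Every operation in both loops is a matrix product with an r-sized inner or outer dimension, which is where the O(ndr) cost comes from.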
All three MDs are memory-friendly since they avoid generating a large n × n matrix as an intermediate variable , like the product of Q and K in the self-attention of Eq . ( 3 ) . In the later sections , our experiments show that MDs are at least on par with self-attention , though the architectures of M are created by optimization and look different from classic dot-product self-attention .
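The complexity gap can be checked with back-of-the-envelope arithmetic; the values of d and r below are illustrative:

```python
# Cost of Eq. (3) vs. an MD-based module for a 128x128 feature map
# (n = HW hyper-pixels); d and r are illustrative choices.
n, d, r = 128 * 128, 512, 64

attn_flops = n * n * d          # forming QK^T alone: O(n^2 d)
md_flops = n * d * r            # one MD iteration: O(ndr)
attn_mem = n * n                # the n x n attention map, never built by MDs
print(attn_flops // md_flops)   # n / r = 256x fewer multiply-adds
```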
This paper proposes to use matrix decomposition to construct low-rank representations that capture long-distance correlations in context, and demonstrates this to be more effective than the popular self-attention mechanism. Combining linear transformations with matrix decomposition (the core part), the authors design the Hamburger block to model global dependencies of the input as a residual output. The authors propose differentiable, modified Vector Quantization and Non-negative Matrix Factorization to perform the matrix decomposition. They propose the one-step gradient, an approximation of the Back-Propagation Through Time (BPTT) algorithm, to back-propagate the gradient of the matrix decomposition. They conduct experiments on semantic segmentation and image generation to demonstrate the superiority of their methods in terms of modeling global dependencies and computational cost.
ColdExpand: Semi-Supervised Graph Learning in Cold Start
1 INTRODUCTION . Graph-based semi-supervised learning has attracted much attention thanks to its applicability to real-world problems . For example , a social network is graph-structured data in which people in the network are considered to be nodes and relationships between people are considered to be edges : two people are friends , share posts , etc . With this structural information , we can infer some unknown attributes of a person ( node ) based on the information of the people they are connected to ( i.e. , semi-supervised node classification ) . In the case of retail applications , customers and products can be viewed as heterogeneous nodes , and edges between customers and products can represent relationships between the customers and the purchased products . Such a graph can represent the spending habits of each customer , and we can recommend a product to a user by inferring the likelihood of a connection between the user and the product ( i.e. , semi-supervised link prediction ) . Recent progress on Graph Neural Networks ( GNN ) ( Bruna et al. , 2013 ; Kipf & Welling , 2016a ; Gilmer et al. , 2017 ; Veličković et al. , 2018 ; Jia et al. , 2019 ) allows us to effectively utilize the expressive power of graph-structured data and to solve various graph-related tasks . Early GNN methods tackled the semi-supervised node classification task , in which all nodes within the graph must be labeled when only a small subset of nodes is labeled , achieving satisfactory performance ( Zhou et al. , 2004 ) . Link prediction is another graph-related task that has been covered comparatively less than other tasks in the field of GNNs . In the link prediction task , the goal is to estimate the likelihood of a connection between two nodes given node feature data and topological structure data . Link prediction can be used in recommendation tasks ( Chen et al. , 2005 ; Sarwar et al. , 2001 ; Lika et al. , 2014 ; Li & Chen , 2013 ; Berg et al.
, 2017 ) or graph completion tasks ( Kazemi & Poole , 2018 ; Zhang & Chen , 2018 ) . Most of the work on semi-supervised graph learning and link prediction assumes a static graph , that is , the structural information is at least “ partially ” observable in terms of nodes . In the real world , however , new users or items can be added ( as nodes ) without any topological information ( Gope & Jain , 2017 ) . This is also referred to as the cold start problem : a new node is presented to an existing graph without a single edge . In contrast to the warm start case , in which at least some topological information is provided , the cold start problem is an extreme case where there is no topological information to refer to . In this setting , previous semi-supervised learning algorithms can not propagate information to the cold nodes . Even though the cold start problem is an extreme setting , it is an inevitable problem that occurs often in real-world data . In this paper , we postulate the cold start problem as a fundamental issue in graph learning and propose a new learning setting , expanded semi-supervised learning . In expanded semi-supervised learning , we extend the original semi-supervised learning setting even to new cold nodes that are disconnected from the graph , as shown in Figure 1 . To this end , we propose the ColdExpand method , which uses a multi-task learning strategy to alleviate the cold start problem that typical graph learning methods face . We experimentally show that by adding an additional goal to the existing link prediction method , performance on the link prediction task is improved on every benchmark dataset , by up to 24 % . We also show that our method can expand semi-supervised node classification even to unseen , cold nodes . To the best of our knowledge , this is the first study to expand semi-supervised learning methods to unseen nodes . In the next section , we briefly introduce related works .
In Section 3 , we give our problem definition . Finally , in Section 4 we propose our ColdExpand model , followed by our experimental environments along with the corresponding results . 2 RELATED WORK . 2.1 SEMI-SUPERVISED NODE CLASSIFICATION . Semi-supervised learning is a combination of unsupervised learning and supervised learning where true labels are only partially given . Semi-supervised learning algorithms aim to improve the model ' s performance by using unlabeled data points on top of labeled ones to approximate the underlying marginal data distribution ( Van Engelen & Hoos , 2020 ) . One of the most popular fields of semi-supervised learning is graph-based semi-supervised node classification , where all nodes should be classified when only a few node labels are given . In order to smooth the given subset of labels throughout the graph , various methods have been studied to effectively represent nodes . DeepWalk ( Perozzi et al. , 2014 ) , LINE ( Tang et al. , 2015 ) , and node2vec ( Grover & Leskovec , 2016 ) were early deep-learning-based methods targeting the node classification task by learning latent representations from truncated random walks . However , these models fail to share parameters between nodes , causing learning to be inefficient . Kipf & Welling ( 2016a ) introduced Graph Convolutional Networks ( GCN ) , which use an efficient layer-wise propagation rule obtained by approximating the first order of the spectral graph convolution . By limiting the spectral convolution to the first order , GCNs not only lightened the computational cost of the operation but also alleviated the over-fitting problem that previous spectral methods had . Commonly , a GCN model is formed from stacked convolutional layers , the number of which decides how many hops of neighbors to consider for the convolution operation . Gilmer et al .
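The first-order propagation rule of GCN can be sketched as follows; this is a generic single-layer illustration with a toy graph and untrained weights, not the paper's model:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer (Kipf & Welling, 2016a): ReLU(D^-1/2 A_hat D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0)           # aggregate, project, ReLU

# Toy graph: 4 nodes in a path, 3-dim features, 2 hidden units.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.arange(12, dtype=float).reshape(4, 3)
W = np.ones((3, 2)) * 0.1
print(gcn_layer(A, H, W).shape)  # (4, 2)
```

Stacking k such layers mixes information over k-hop neighborhoods, which is exactly why a cold node with no edges receives nothing from the graph.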
( 2017 ) presented Message Passing Neural Networks ( MPNN ) as a general form of the spatial convolution operation and treated GCN as a specific kind of message passing process . In MPNNs , information between nodes is delivered directly by edges without visiting any spectral domains . 2.2 LINK PREDICTION . Links in graph-structured data reveal important interactions between nodes that may not be represented solely by node attributes . To this end , link prediction is an important task for effectively utilizing graph data . In the link prediction task , we aim to predict whether an edge exists between two nodes from the same graph . Predicting the existence of links between nodes is useful for many applications , including recommendation systems and graph completion tasks . On a high level , link prediction methods can be separated into similarity-based and learning-based methods . Similarity-based link prediction is the simplest way to predict a link , by assigning a similarity score Sxy to two nodes x , y . Common Neighbors ( Marchette & Priebe , 2008 ) , the Salton Index , the Jaccard Index ( Leydesdorff , 2008 ) , the Sorensen Index , the Hub Promoted Index , the Hub Depressed Index ( Zhou et al. , 2009 ) , and the Leicht-Holme-Newman Index ( LHN1 ) ( Leicht et al. , 2006 ) each define similarity scores using the overlapping set of neighboring nodes between two nodes . While the former neighborhood-based methods focus on somewhat local similarity , other similarity-based methods consider global similarity by taking into account higher-order links . The Katz Index ( Katz , 1953 ) considers the ensemble of all paths between two nodes , summed with weights that decay with the length of each path . SimRank ( Jeh & Widom , 2002 ) also uses a random walk process but measures how soon two random walkers starting from the two different nodes meet .
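Two of the neighborhood-based scores above are easy to sketch; the adjacency matrix below is a toy example, and the last line previews why such scores break down for cold nodes, whose neighborhoods are empty:

```python
import numpy as np

def common_neighbors(A, x, y):
    """|N(x) ∩ N(y)| computed from the adjacency matrix."""
    return int(np.sum(A[x] * A[y]))

def jaccard(A, x, y):
    """|N(x) ∩ N(y)| / |N(x) ∪ N(y)|, with 0 for empty unions."""
    inter = np.sum(A[x] * A[y])
    union = np.sum(np.maximum(A[x], A[y]))
    return inter / union if union else 0.0

# Toy graph: square 0-1-2-3-0 plus the diagonal edge 0-2.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(common_neighbors(A, 1, 3))  # nodes 1 and 3 share neighbors {0, 2}

A_cold = np.pad(A, ((0, 1), (0, 1)))  # node 4 is cold: no edges at all
print(jaccard(A_cold, 4, 0))          # 0.0 for every partner -- no signal
```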
However , when a cold node is introduced , the structural information required by the previous similarity-based approaches is missing , making them inapplicable in cold start situations ( Wang et al. , 2016 ) . On the other hand , learning-based methods formulate link prediction as a binary classification problem given the feature embedding vectors of two nodes ; if an edge exists between the two nodes , a positive label is given , and vice versa . Kipf & Welling ( 2016b ) introduced the graph auto-encoder ( GAE ) , an unsupervised learning technique , which aims to reconstruct the original adjacency matrix . As a variant of GAE , variational graph auto-encoders ( VGAE ) additionally train a separate GCN module producing the logarithm of the variance matrix , log ( σ2 ) . An additional Kullback-Leibler divergence loss between the distributions of the encoder and decoder is added to the original reconstruction loss . Pan et al . ( 2019 ) further improved upon GAEs by applying adversarial training to regularize the latent node embedding matrix Z to match a prior distribution of the original node feature matrix X . A simple MLP-based discriminator D was used for adversarial learning between the real distribution of G = { X , A } and the latent node matrix Z . Unlike similarity-based methods , learning-based methods can be applied to cold start problems since they make use of node feature information along with topological structure information . 2.3 COLD START PROBLEM . The cold start problem is a common problem in real-world graph-structured data , where new nodes may be added to an existing graph . Most work tackling the cold start problem focuses on solving the recommendation problem in a cold start setting . Recommendation systems are widely used in real-world applications such as friend recommendation , movie recommendation , and purchase recommendation .
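The GAE inner-product decoder is essentially a one-liner; the sketch below omits the GCN encoder and uses random embeddings purely for illustration:

```python
import numpy as np

def decode(Z):
    """GAE decoder (Kipf & Welling, 2016b): edge probabilities sigmoid(Z Z^T)."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 4))     # latent node embeddings, normally from a GCN encoder
A_hat = decode(Z)
print(A_hat.shape)              # (5, 5) reconstructed adjacency, entries in (0, 1)
```

Because the decoder scores any pair of embeddings, a learned embedding for a cold node (e.g. from its features alone) immediately yields link probabilities, which is the property cold-start methods exploit.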
In the recommendation problem , the task is to predict recommendation links from a heterogeneous graph containing two types of graphs : a user-user social graph and a user-item rating graph . Users are involved in both types of graphs , connecting the two into a single heterogeneous graph . The collaborative filtering ( CF ) method is one of the traditional methods , where a rating is predicted based on similarities between users or items . In short , the CF method makes recommendations based on ratings given by other users in the social graph . Sarwar et al . ( 2001 ) introduced an item-based CF method , which computes item-item similarities , and showed that it outperforms other user-based collaborative filtering recommendation methods . Efforts have been made to use CF for recommendation systems in the cold start scenario . Lika et al . ( 2014 ) approached the cold user problem by exploiting CF followed by hard clustering of users based on their attributes . Once the cluster of the new user has been decided , CF is applied only to members of the same cluster . Berg et al . ( 2017 ) formulated the matrix completion task of recommendation systems as a link prediction problem by viewing the user-item graph as a bipartite interaction graph . However , they instead used a warm start setting in which at least one rating per user is given , which is not precisely a cold start assumption . 3 PROBLEM DEFINITION . A graph G is denoted as a pair ( V , E ) with V = { v1 , · · · , vN } the set of nodes ( vertices ) and E ⊆ V × V the set of edges . Each node vi is associated with a feature vector xi ∈ RF . To make the notation more compact , the set of node feature vectors of graph G is denoted as a matrix X = [ x1 , x2 , · · · , xN ]⊤ ∈ RN×F . Additionally , a graph has an N -by-N adjacency matrix A , where Ai , j represents the existence of an edge between vi and vj , and a degree matrix D , a diagonal matrix which contains information about the degree of each node .
In the cold start setting , we call the nodes that are newly introduced without any link to the existing graph cold nodes , denoted by v^c_i . In the same manner , existing nodes in the graph are denoted as v^e_j . Then , Xe contains only the feature vectors of the existing nodes , x^e_1 , x^e_2 , . . . , x^e_{Ne} , and Xc contains the feature vectors of the cold nodes , x^c_1 , x^c_2 , . . . , x^c_{Nc} , where Ne and Nc are the numbers of existing nodes and cold nodes , respectively . In our problem setting , we assume that new nodes are added to the existing graph , and we need to find possible links between the new nodes and the existing nodes , followed by a node classification to figure out the hidden attributes of the new nodes . Our problem definition is as follows : given the existing graph G = ( Ve , A ) and cold nodes Xc , find all possible links between the cold nodes Xc and the existing nodes Xe , then predict the classes of the cold nodes Xc .
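The notation above can be made concrete with a toy setup; all sizes and the edge density are illustrative, and the key point is that the adjacency rows and columns of cold nodes are identically zero:

```python
import numpy as np

rng = np.random.default_rng(0)
F, Ne, Nc = 8, 10, 3               # feature dim, existing and cold node counts

Xe = rng.normal(size=(Ne, F))      # features of existing nodes v^e
Xc = rng.normal(size=(Nc, F))      # features of cold nodes v^c (no edges)

# Adjacency of the existing graph only: symmetric, no self-loops.
A = (rng.random((Ne, Ne)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T

X = np.vstack([Xe, Xc])            # full feature matrix, shape (Ne + Nc, F)
A_full = np.zeros((Ne + Nc, Ne + Nc))
A_full[:Ne, :Ne] = A               # cold rows/columns stay all-zero
print(X.shape, A_full[Ne:].sum())  # cold nodes are completely isolated
```

The task is then exactly the problem definition above: fill in the zero rows/columns of `A_full` (link prediction), then classify the last `Nc` rows of `X`.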
This paper is about the cold-start problem for representation learning on dynamic graphs. More specifically, the proposed method (ColdExpand) uses graph convolutional networks and multi-task learning (a node classification loss and a link prediction loss) to learn embeddings for new unseen nodes in the graph, i.e., nodes for which we only know the features and not the connectivity to the existing graph. Such a problem is useful for applications of link prediction, and more specifically recommendation tasks or graph completion tasks. The authors test their technique on three benchmark datasets (Cora, CiteSeer and PubMed) and, using building blocks of existing works, create several baselines to compare against. They claim to be the first study of semi-supervised learning on unseen nodes. However, there are already a couple of works in this field, for example the GraphSAGE model that they cite in this paper (see below for more).
ColdExpand: Semi-Supervised Graph Learning in Cold Start
1 INTRODUCTION . Graph-based semi-supervised learning has attracted much attention thanks to its applicability to real-world problems . For example , a social network is graph-structured data in which people in the network are considered to be nodes and relationships between people are considered to be edges : two people are friends or sharing posts , etc . With this structural information , we can infer some unknown attributes of a person ( node ) based on the information of people he is connected to ( i.e. , semi-supervised node classification ) . In the case of retail applications , customers and products can be viewed as heterogeneous nodes and edges between customers and products can represent relationships between the customers and the purchased products . Such a graph can be used to represent spending habits of each customer and we can recommend a product to a user by inferring the likelihood of connection between the user and the product ( i.e. , semi-supervised link prediction ) . Recent progress on Graph Neural Networks ( GNN ) ( Bruna et al. , 2013 ; Kipf & Welling , 2016a ; Gilmer et al. , 2017 ; Veličković et al. , 2018 ; Jia et al. , 2019 ) allows us to effectively utilize the expressive power of the graph-structured data and to solve various graph related tasks . Early GNN methods tackled semi-supervised node classification task , a task to label all nodes within the graph when only a small subset of nodes is labeled , achieving a satisfactory performance ( Zhou et al. , 2004 ) . Link prediction is another graph-related task that was covered comparatively less than other tasks in the field of GNNs . In the link prediction task , the goal is to estimate the likelihood of connection between two nodes given node feature data and topological structure data . Link prediction can be used in recommendation tasks ( Chen et al. , 2005 ; Sarwar et al. , 2001 ; Lika et al. , 2014 ; Li & Chen , 2013 ; Berg et al. 
, 2017 ) or graph completion tasks ( Kazemi & Poole , 2018 ; Zhang & Chen , 2018 ) . Most of the work on semi-supervised graph learning and link prediction assumes a static graph , that is , the structural information is at least “ partially ” observable in terms of nodes . In the real world , however , new users or items can be added ( as nodes ) without any topological information ( Gope & Jain , 2017 ) . This is also referred as the cold start problem when a new node is presented to an existing graph without a single edge . In contrast to the warm start case , in which at least some topological information is provided , the cold start problem is an extreme case where there isn ’ t any topological information to refer . In this setting , previous semi-supervised learning algorithms can not propagate information to the cold nodes . Even though the cold start problem is an extreme setting , it is an inevitable problem that occurs often in the real world data . In this paper , we postulates the cold start problem as a fundamental issue in graph learning and propose a new learning setting , expanded semi-supervised learning . In expanded semi-supervised learning we extend the original semi-supervised learning setting even to new cold nodes that are disconnected from the graph as shown in Figure 1 . To this end , we suggest the ColdExpand method , the method that uses multi-task learning strategy to alleviate the cold start problem that typical graph learning methods face . We experimentally prove that by adding an additional goal to the existing link prediction method , performance on the link prediction task is enhanced in every benchmark dataset and at most by 24 % . We also prove that our method can expand semi-supervised node classification even to unseen , cold nodes . To the best of our knowledge , this is the first study to expand semisupervised learning methods to unseen nodes . In the next section , we briefly introduce related works . 
In Section 3 , we formally define our problem . Finally , in Section 4 , we propose our ColdExpand model , followed by our experimental setup and the corresponding results . 2 RELATED WORK . 2.1 SEMI-SUPERVISED NODE CLASSIFICATION . Semi-supervised learning is a combination of unsupervised learning and supervised learning in which true labels are only partially given . Semi-supervised learning algorithms aim to improve the model ’ s performance by using unlabeled data points on top of labeled ones to approximate the underlying marginal data distribution ( Van Engelen & Hoos , 2020 ) . One of the most popular fields of semi-supervised learning is graph-based semi-supervised node classification , where all nodes must be classified when only a few node labels are given . In order to smooth the given subset of labels throughout the graph , various methods have been studied to effectively represent nodes . DeepWalk ( Perozzi et al. , 2014 ) , LINE ( Tang et al. , 2015 ) , and node2vec ( Grover & Leskovec , 2016 ) were early deep-learning-based methods targeting the node classification task by learning latent representations from truncated random walks . However , these models fail to share parameters between nodes , making learning inefficient . Kipf & Welling ( 2016a ) introduced Graph Convolutional Networks ( GCNs ) , which use an efficient layer-wise propagation rule obtained by approximating the first order of the spectral graph convolution . By limiting the spectral convolution to the first order , GCNs not only lightened the computational cost of the operation but also alleviated the over-fitting problem that previous spectral methods had . Commonly , a GCN model is formed from stacked convolutional layers ; their number decides how many hops of neighbors are considered in the convolution operation . Gilmer et al .
( 2017 ) presented Message Passing Neural Networks ( MPNNs ) as a general form of the spatial convolution operation and treated the GCN as a specific kind of message passing process . In MPNNs , information between nodes is delivered directly by edges without visiting any spectral domain . 2.2 LINK PREDICTION . Links in graph-structured data reveal important interactions between nodes that may not be represented solely by node attributes . To this end , link prediction is an important task for effectively utilizing graph data . In the link prediction task , we aim to predict whether an edge exists between two nodes from the same graph . Predicting the existence of links between nodes is useful for many applications , including recommendation systems and graph completion tasks . At a high level , link prediction methods can be separated into similarity-based and learning-based methods . Similarity-based link prediction is the simplest way to predict a link , assigning a similarity score Sxy to two nodes x , y . Common Neighbors ( Marchette & Priebe , 2008 ) , the Salton Index , the Jaccard Index ( Leydesdorff , 2008 ) , the Sorensen Index , the Hub Promoted Index , the Hub Depressed Index ( Zhou et al. , 2009 ) , and the Leicht-Holme-Newman Index ( LHN1 ) ( Leicht et al. , 2006 ) each define similarity scores using the overlapping set of neighboring nodes between two nodes . While these neighborhood-based methods focus on local similarity , other similarity-based methods consider global similarity by taking into account higher-order links . The Katz Index ( Katz , 1953 ) considers the ensemble of all paths between two nodes , summed with weights that decay with the length of each path . SimRank ( Jeh & Widom , 2002 ) also uses a random walk process but measures how soon two random walkers starting from the two different nodes meet .
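As an illustration , the neighborhood-overlap scores above can be computed directly from an adjacency matrix . A minimal Python sketch ( the toy graph and node indices are illustrative assumptions , not from the paper ) :

```python
def neighbors(adj, x):
    # Γ(x): index set of nodes adjacent to x
    return {j for j, e in enumerate(adj[x]) if e}

def common_neighbors(adj, x, y):
    # Common Neighbors score: |Γ(x) ∩ Γ(y)|
    return len(neighbors(adj, x) & neighbors(adj, y))

def jaccard(adj, x, y):
    # Jaccard Index: |Γ(x) ∩ Γ(y)| / |Γ(x) ∪ Γ(y)|
    union = neighbors(adj, x) | neighbors(adj, y)
    return len(neighbors(adj, x) & neighbors(adj, y)) / len(union) if union else 0.0

# Triangle 0-1-2, an extra edge 2-3, and a cold node 4 with no edges
adj = [[0, 1, 1, 0, 0],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 1, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0]]
print(common_neighbors(adj, 0, 1))  # 1 (the shared neighbor is node 2)
print(jaccard(adj, 0, 1))           # 1/3
print(jaccard(adj, 0, 4))           # 0.0 — a cold node has an empty neighborhood
```

Note that a node without any edge yields an empty neighborhood set , so every neighborhood-based score degenerates to zero for it .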
However , when a cold node is introduced , the structural information required by the previous similarity-based approaches is missing , making them impossible to apply in cold start situations ( Wang et al. , 2016 ) . On the other hand , learning-based methods formulate link prediction as a binary classification problem given the feature embedding vectors of two nodes ; if an edge exists between the two nodes , a positive label is given , and vice versa . Kipf & Welling ( 2016b ) introduced the graph auto-encoder ( GAE ) , an unsupervised learning technique that aims to reconstruct the original adjacency matrix . As a variant of the GAE , the variational graph auto-encoder ( VGAE ) additionally trains a separate GCN module that outputs the logarithm of the variance , log ( σ2 ) . An additional Kullback-Leibler divergence loss between the encoder ’ s latent distribution and the prior is added to the original reconstruction loss . Pan et al . ( 2019 ) further improved upon GAEs by applying adversarial training to regularize the latent node embedding matrix Z to match a prior distribution of the original node feature matrix X . A simple MLP-based discriminator D was used for adversarial learning between the real distribution of G = { X , A } and the latent node matrix Z . Unlike similarity-based methods , learning-based methods can be applied to cold start problems since they make use of node feature information along with topological structure information . 2.3 COLD START PROBLEM . The cold start problem is a common problem in real-world graph-structured data , where new nodes may be added to an existing graph . Most work tackling the cold start problem focuses on solving the recommendation problem in a cold start setting . Recommendation systems are widely used in real-world applications such as friend recommendation , movie recommendation and purchase recommendation .
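Returning to the learning-based predictors above , the GAE decoder reduces to an inner product between latent node embeddings followed by a sigmoid . A minimal numpy sketch ( the toy embeddings and graph are assumptions for illustration ; in the actual GAE , Z is produced by a GCN encoder ) :

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(Z):
    # GAE inner-product decoder: A_hat = sigmoid(Z @ Z.T)
    return sigmoid(Z @ Z.T)

def reconstruction_loss(A, A_hat, eps=1e-9):
    # Binary cross-entropy between true and reconstructed adjacency entries
    return -np.mean(A * np.log(A_hat + eps) + (1 - A) * np.log(1 - A_hat + eps))

# Toy latent embeddings: nodes 0 and 1 are close, node 2 points away
Z = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [-1.0, 0.5]])
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
A_hat = decode(Z)
print(A_hat[0, 1] > A_hat[0, 2])  # True: similar embeddings score a higher link probability
```

Because the decoder only needs feature-derived embeddings , this kind of scoring remains applicable even when a node arrives without edges .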
In the recommendation problem , the task is to predict recommendation links from a heterogeneous graph containing two types of graphs : a user-user social graph and a user-item rating graph . Users are involved in both graphs , connecting the two into a single heterogeneous graph . The collaborative filtering ( CF ) method is one of the traditional methods , in which a rating is predicted based on similarities between users or items . In short , the CF method makes recommendations based on ratings given by other users in the social graph . Sarwar et al . ( 2001 ) introduced an item-based CF method , which computes item-item similarities , and showed that it outperforms user-based collaborative filtering recommendation methods . Efforts have been made to use CF for recommendation systems in the cold start scenario . Lika et al . ( 2014 ) approached the cold user problem by exploiting CF after hard-clustering users based on their attributes : once the cluster of the new user has been decided , CF is applied only to members of the same cluster . Berg et al . ( 2017 ) formulated the matrix completion task of recommendation systems as a link prediction problem by viewing the user-item graph as a bipartite interaction graph . However , they used a warm start setting in which at least one rating for a user is given , which is not precisely a cold start assumption . 3 PROBLEM DEFINITION . A graph G is denoted as a pair ( V , E ) , with V = { v1 , · · · , vN } the set of nodes ( vertices ) and E ⊆ V × V the set of edges . Each node vi is associated with a feature vector xi ∈ R^F . To make notation more compact , the set of node feature vectors of graph G is denoted as a matrix X = [ x1 , x2 , · · · , xN ]^⊤ ∈ R^{N×F} . Additionally , a graph has an N -by-N adjacency matrix A , where Ai , j represents the existence of an edge between vi and vj , and a degree matrix D , a diagonal matrix which contains information about the degree of each node .
In the cold start setting , we will call those nodes that are newly introduced without any link to the existing graph cold nodes and denote them by v^c_i . In the same manner , existing nodes in the graph are denoted as v^e_j . Then , Xe contains only the feature vectors of existing nodes x^e_1 , x^e_2 , . . . , x^e_{Ne} , and Xc contains the feature vectors of cold nodes x^c_1 , x^c_2 , . . . , x^c_{Nc} , where Ne and Nc are the number of existing nodes and cold nodes , respectively . In our problem setting , we assume that new nodes are added to the existing graph , and we need to find possible links between the new nodes and the existing nodes , followed by a node classification to figure out the hidden attributes of the new nodes . Our problem definition is as follows : given an existing graph G = ( Ve , A ) and cold nodes Xc , find all possible links between the cold nodes Xc and the existing nodes Xe , then predict the class of the cold nodes Xc .
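The notation above can be made concrete with a small numpy sketch of the expanded-graph setup ( the sizes , edges and feature values are arbitrary illustrations , not from the paper ) :

```python
import numpy as np

Ne, Nc, F = 3, 2, 4                       # existing nodes, cold nodes, feature dim
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])              # adjacency of the existing graph
D = np.diag(A.sum(axis=1))                # degree matrix (diagonal)
rng = np.random.default_rng(0)
Xe = rng.normal(size=(Ne, F))             # features of existing nodes
Xc = rng.normal(size=(Nc, F))             # features of cold nodes (no edges yet)

# Expanded graph: cold nodes enter with all-zero rows/columns in A, so
# message passing alone cannot reach them until their links are predicted.
A_exp = np.zeros((Ne + Nc, Ne + Nc))
A_exp[:Ne, :Ne] = A
print(A_exp.sum(axis=1)[Ne:])             # [0. 0.] — cold nodes have degree zero
```

The all-zero rows make the cold start failure explicit : any propagation rule that multiplies by A_exp leaves the cold nodes ’ representations untouched by the rest of the graph .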
The paper proposes a new task in graph learning. Basically, the idea is the following: suppose we have a node classification model trained on a graph G, and suppose we have a new node (not present in G) that we want to classify. Given that the new node has no connections to G’s other nodes, we cannot leverage any structural information to run the classifier. This is the issue the authors present as the cold start problem in semi-supervised graph learning. The solution, though simple, is very effective, which makes it all the more interesting. Basically, they start with a retrieval step (they call it link prediction) that is trained using a link reconstruction loss and is based on a dot product, which keeps the link prediction phase scalable. After those links are reconstructed, they run a node classification step (based on GCN, GraphSAGE, or GAT) on G plus the new node and the predicted links.

The results obtained in the experiments are really encouraging, with improvements ranging from 15% to 25% over a baseline that does not consider the link prediction phase.
Multi-Agent Imitation Learning with Copulas
1 INTRODUCTION . Recent years have witnessed great success of reinforcement learning ( RL ) for single-agent sequential decision making tasks . As many real-world applications ( e.g. , multi-player games ( Silver et al. , 2017 ; Brown & Sandholm , 2019 ) and traffic light control ( Chu et al. , 2019 ) ) involve the participation of multiple agents , multi-agent reinforcement learning ( MARL ) has gained more and more attention . However , a key limitation of RL and MARL is the difficulty of designing suitable reward functions for complex tasks with implicit goals ( e.g. , dialogue systems ) ( Russell , 1998 ; Ng et al. , 2000 ; Fu et al. , 2017 ; Song et al. , 2018 ) . Indeed , hand-tuning reward functions to induce desired behaviors becomes especially challenging in multi-agent systems , since different agents may have completely different goals and state-action representations ( Yu et al. , 2019 ) . Imitation learning provides an alternative approach to directly programming agents by taking advantage of expert demonstrations on how a task should be solved . Although appealing , most prior works on multi-agent imitation learning typically assume agents make independent decisions after observing a state ( i.e. , mean-field factorization of the joint policy ) ( Zhan et al. , 2018 ; Le et al. , 2017 ; Song et al. , 2018 ; Yu et al. , 2019 ) , ignoring the potentially complex dependencies that exist among agents . Recently , Tian et al . ( 2019 ) and Liu et al . ( 2020 ) proposed to implement correlated policies with opponent modeling , which incurs unnecessary modeling cost and redundancy , while still lacking coordination during execution . Compared to the single-agent setting , one major and fundamental challenge in multi-agent learning is how to model the dependence among multiple agents in an effective and scalable way . 
Inspired by probability theory and statistical dependence modeling , in this work we propose to use copulas ( Sklar , 1959b ; Nelsen , 2007 ; Joe , 2014 ) to model multi-agent behavioral patterns . Copulas are powerful statistical tools for describing the dependence among random variables , and they have been widely used in quantitative finance for risk measurement and portfolio optimization ( Bouyé et al. , 2000 ) . Using a copula-based multi-agent policy enables us to separately learn marginals that capture the local behavioral patterns of each individual agent and a copula function that only and fully captures the dependence structure among the agents . Such a factorization is capable of modeling an arbitrarily complex joint policy and leads to interpretable , efficient and scalable multi-agent imitation learning . As a motivating example ( see Figure 1 ) , suppose there are two agents , each with a one-dimensional action space . In Figure 1a , although the two joint policies are quite different , they actually share the same copula ( dependence structure ) and one marginal . ( Figure 1 shows each joint policy π ( a1 , a2|s ) together with its marginals , e.g . π1 ( a1|s ) = ∫ π ( a1 , a2|s ) da2 , as well as the copula c ( F1 ( a1|s ) , F2 ( a2|s ) ) on the unit cube . Here Fi is the cumulative distribution function of the marginal πi ( ai|s ) and ui : = Fi ( ai|s ) is the uniformly distributed random variable obtained by probability integral transform with Fi . More details and definitions can be found in Section 3.2 . ) Our proposed copula-based policy is capable of capturing such information , and more importantly , we may leverage such information to develop efficient algorithms for transfer learning scenarios . For example , when we want to model team play in a soccer game and one player is replaced by his/her substitute while the dependence among different roles remains basically the same regardless of players , we can immediately obtain a new joint policy by switching in the new player ’ s marginal while keeping the copula and other marginals unchanged .
On the other hand , as shown in Figure 1b , two different joint policies may share the same marginals while having different copulas , which implies that the mean-field policies in previous works ( which only model marginal policies and make independent decisions ) cannot differentiate these two scenarios to achieve coordination correctly . To this end , in this paper we propose a copula-based multi-agent imitation learning algorithm , which is interpretable , efficient and scalable for modeling complex multi-agent interactions . Extensive experimental results on synthetic and real-world datasets show that our proposed method outperforms state-of-the-art multi-agent imitation learning methods in various scenarios and generates multi-agent trajectories close to expert demonstrations . 2 PRELIMINARIES . In this work , we consider the problem of multi-agent imitation learning under the framework of Markov games ( Littman , 1994 ) , which generalize Markov Decision Processes to multi-agent settings where N agents interact with each other . Specifically , in a Markov game , S is the common state space , Ai is the action space for agent i ∈ { 1 , . . . , N } , η ∈ P ( S ) is the initial state distribution , and P : S × A1 × . . . × AN → P ( S ) is the state transition distribution of the environment that the agents are interacting with . Here P ( S ) denotes the set of probability distributions over the state space S . Suppose that at time t the agents observe s [ t ] ∈ S and take actions a [ t ] : = ( a1 [ t ] , . . . , aN [ t ] ) ∈ A1 × . . . × AN ; the agents will then observe state s [ t+1 ] ∈ S at time t+1 with probability P ( s [ t+1 ] |s [ t ] , a1 [ t ] , . . . , aN [ t ] ) . In this process , the agents select the joint action a [ t ] by sampling from a stochastic joint policy π : S → P ( A1 × . . . × AN ) . In the following , we will use the subscript −i to denote all agents except for agent i .
For example , ( ai , a−i ) represents the actions of all agents ; πi ( ai|s ) and πi ( ai|s , a−i ) represent the marginal and conditional policy of agent i induced by the joint policy π ( a|s ) ( through marginalization and Bayes ’ rule , respectively ) . We consider the following off-line imitation learning problem . Suppose we have access to a set of demonstrations D = { τ^j }_{j=1}^{M} provided by some expert policy πE ( a|s ) , where each expert trajectory τ^j = { ( s^j [ t ] , a^j [ t ] ) }_{t=1}^{T} is collected by the following sampling process : s [ 1 ] ∼ η ( s ) , a [ t ] ∼ πE ( a|s [ t ] ) , s [ t+1 ] ∼ P ( s|s [ t ] , a [ t ] ) for t ∈ { 1 , . . . , T } . The goal is to learn a parametric joint policy πθ to approximate the expert policy πE such that we can perform downstream inferences ( e.g. , action prediction and trajectory generation ) . The learning problem is off-line , as we cannot ask for additional interactions with the expert policy or the environment during training , and the reward is also unknown . 3 MODELING MULTI-AGENT INTERACTIONS WITH COPULAS . 3.1 MOTIVATION . Many modeling methods for multi-agent learning tasks employ a simplifying mean-field assumption that the agents make independent action choices after observing a state ( Albrecht & Stone , 2018 ; Song et al. , 2018 ; Yu et al. , 2019 ) , which means the joint policy can be factorized as follows : π ( a1 , . . . , aN |s ) = ∏_{i=1}^{N} πi ( ai|s ) ( 1 ) Such a mean-field assumption essentially allows for independent construction of each agent ’ s policy . For example , multi-agent behavior cloning by maximum likelihood estimation is now equivalent to performing N single-agent behavior cloning tasks : max_π E_{ ( s , a ) ∼ ρ_{πE} } [ log π ( a|s ) ] = ∑_{i=1}^{N} max_{πi} E_{ ( s , ai ) ∼ ρ_{πE , i} } [ log πi ( ai|s ) ] ( 2 ) where the occupancy measure ρπ : S × A1 × . . . × AN → R denotes the state-action distribution encountered when navigating the environment using the joint policy π ( Syed et al.
, 2008 ; Puterman , 2014 ) and ρπ , i is the corresponding marginal occupancy measure . However , when the expert agents are making correlated action choices ( e.g. , due to a joint plan and communication in a soccer game ) , such a simplifying modeling choice is not able to capture the rich dependency structure and coordination among agent actions . To address this issue , recent works ( Tian et al. , 2019 ; Liu et al. , 2020 ) propose to use a different factorization of the joint policy such that the dependency among the N agents can be preserved : π ( ai , a−i|s ) = πi ( ai|s , a−i ) π−i ( a−i|s ) for i ∈ { 1 , . . . , N } . ( 3 ) Although such a factorization is general and captures the dependency among multi-agent interactions , several issues still remain . First , the modeling cost is increased significantly , because now we need to learn N different and complicated opponent policies π−i ( a−i|s ) as well as N different conditional policies πi ( ai|s , a−i ) , each with a deep neural network . It should be noted that there are many redundancies in such a modeling choice . Specifically , suppose there are N agents and N > 3 ; for agents 1 and N , we need to learn opponent policies π−1 ( a2 , . . . , aN |s ) and π−N ( a1 , . . . , aN−1|s ) , respectively . These are potentially high-dimensional and might require flexible function approximations . However , the dependency structure among agent 2 to agent N − 1 is modeled in both π−1 and π−N , which incurs unnecessary modeling cost . Second , when executing the policy , each agent i makes decisions through its marginal policy πi ( ai|s ) = E_{π−i ( a−i|s ) } [ πi ( ai|s , a−i ) ] by first sampling a−i from its opponent policy π−i and then sampling its action ai from πi ( ·|s , a−i ) . Since each agent performs this decision process independently , coordination among agents is still impossible due to sampling randomness .
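A tiny numerical example of the coordination failure described here : a joint policy in which two agents always agree has uniform marginals , so the mean-field reconstruction of Eq . ( 1 ) loses the coordination entirely ( the policy values are illustrative assumptions ) :

```python
import numpy as np

# A perfectly coordinated joint policy for one state s: two agents with
# binary actions that always pick the same action.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])            # π(a1, a2 | s)

pi1 = joint.sum(axis=1)                   # marginal π1(a1|s) = [0.5, 0.5]
pi2 = joint.sum(axis=0)                   # marginal π2(a2|s) = [0.5, 0.5]
mean_field = np.outer(pi1, pi2)           # Eq. (1): independent reconstruction

print(mean_field)                         # 0.25 everywhere
print(np.allclose(mean_field, joint))     # False — the agreement pattern is lost
```

Under the mean-field policy the agents disagree half of the time , even though the expert agents never do ; this is exactly the gap a copula is meant to fill .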
Moreover , a set of independently learned conditional distributions is not necessarily consistent ( i.e. , induced by the same joint policy ) ( Yu et al. , 2019 ) . In this work , to address the above challenges , we draw inspiration from probability theory and propose to use copulas , a statistical tool for describing the dependency structure between random variables , to model complicated multi-agent interactions in a scalable and efficient way . 3.2 COPULAS . When the components of a multivariate random variable x = ( x1 , . . . , xN ) are jointly independent , the density of x can be written as : p ( x ) = ∏_{i=1}^{N} p ( xi ) ( 4 ) When the components are not independent , this equality no longer holds , as the dependencies among x1 , . . . , xN cannot be captured by the marginals p ( xi ) . However , the difference can be corrected by multiplying the right-hand side of Equation ( 4 ) by a function that only and fully describes the dependency . Such a function is called a copula ( Nelsen , 2007 ) , a multivariate distribution function on the unit hyper-cube with uniform marginals . Intuitively , let us consider a random variable xi with continuous cumulative distribution function Fi . Applying the probability integral transform gives us a random variable ui = Fi ( xi ) , which has a standard uniform distribution . Thus one can use this property to separate the information in the marginals from the dependency structure among x1 , . . . , xN by first projecting each marginal onto one axis of the hyper-cube and then capturing the pure dependency with a distribution on the unit hyper-cube . Formally , a copula is the joint distribution of random variables u1 , . . . , uN , each of which is marginally uniformly distributed on the interval [ 0 , 1 ] . Furthermore , we introduce the following theorem , which provides the theoretical foundation of copulas : Theorem 1 ( Sklar , 1959a ) . Suppose the multivariate random variable ( x1 , . . .
, xN ) has marginal cumulative distribution functions F1 , . . . , FN and joint cumulative distribution function F . Then there exists a unique copula C : [ 0 , 1 ]^N → [ 0 , 1 ] such that : F ( x1 , . . . , xN ) = C ( F1 ( x1 ) , . . . , FN ( xN ) ) ( 5 ) When the multivariate distribution has a joint density f and marginal densities f1 , . . . , fN , we have : f ( x1 , . . . , xN ) = ∏_{i=1}^{N} fi ( xi ) · c ( F1 ( x1 ) , . . . , FN ( xN ) ) ( 6 ) where c is the probability density function of the copula . The converse is also true : given a copula C and marginals Fi ( xi ) , C ( F1 ( x1 ) , . . . , FN ( xN ) ) = F ( x1 , . . . , xN ) is an N -dimensional cumulative distribution function with marginal distributions Fi ( xi ) . Theorem 1 states that every multivariate cumulative distribution function F ( x1 , . . . , xN ) can be expressed in terms of its marginals Fi ( xi ) and a copula C ( F1 ( x1 ) , . . . , FN ( xN ) ) . Comparing Eq . ( 4 ) with Eq . ( 6 ) , we can see that a copula function encoding the correlations between random variables can be used to correct the mean-field approximation for arbitrarily complex distributions .
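Sklar ’ s factorization in Eq . ( 6 ) can be checked numerically for a bivariate Gaussian with its Gaussian copula . A self-contained sketch using only the standard library ( the bisection-based inverse cdf is a crude stand-in for a proper quantile function , and the chosen ρ and evaluation point are arbitrary ) :

```python
import math

def phi(x):                               # standard normal density f_i
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):                               # standard normal cdf F_i
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def Phi_inv(u):                           # inverse cdf by bisection
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if Phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def bvn_pdf(x1, x2, rho):                 # bivariate standard normal density f
    q = (x1 * x1 - 2 * rho * x1 * x2 + x2 * x2) / (1 - rho * rho)
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(1 - rho * rho))

def copula_density(u1, u2, rho):
    # c(u1, u2): joint density of the latent normals divided by the
    # product of their marginal densities (Eq. (6) rearranged)
    z1, z2 = Phi_inv(u1), Phi_inv(u2)
    return bvn_pdf(z1, z2, rho) / (phi(z1) * phi(z2))

# Check Eq. (6): f(x1, x2) = f1(x1) * f2(x2) * c(F1(x1), F2(x2))
rho, x1, x2 = 0.6, 0.5, -1.2
lhs = bvn_pdf(x1, x2, rho)
rhs = phi(x1) * phi(x2) * copula_density(Phi(x1), Phi(x2), rho)
print(abs(lhs - rhs) < 1e-6)  # True
```

With ρ = 0 the copula density is identically 1 and Eq . ( 6 ) collapses back to the independent product in Eq . ( 4 ) .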
The paper proposes the use of copulas to capture dependencies among agents in multi-player environments. The authors argue that prior work (e.g., GNN-, VAE-, or LSTM-based) does not leverage the common structure in agent behavior. They explain how a copula can encode the dependency among policies more efficiently than simply factorizing the joint policy. The authors provide empirical results on real-world and synthetic datasets, showcasing superior performance against baseline approaches.