Rethinking Convolution: Towards an Optimal Efficiency
In this paper, we present our recent research on computational efficiency in convolution. The convolution operation is the most critical component in the recent surge of deep learning research. A conventional 2D convolution takes O(C^2K^2HW) to calculate, where C is the channel size, K is the kernel size, and H and W are the output height and width. Such computation has become costly, considering that these parameters have grown over the past few years to meet the needs of demanding applications. Among the various implementations of convolution, separable convolution has proven more efficient in reducing the computational demand. For example, depth separable convolution reduces the complexity to O(CHW · (C + K^2)), while spatial separable convolution reduces the complexity to O(C^2KHW). However, these are ad hoc designs that cannot guarantee an optimal separation in general. In this research, we propose a novel operator called optimal separable convolution, which can be computed at O(C^{3/2}KHW) by optimally designing the internal number of groups and kernel sizes for general separable convolutions. When there is no restriction on the number of separated convolutions, an even lower complexity of O(CHW · log(CK^2)) can be achieved. Experimental results demonstrate that the proposed optimal separable convolution achieves improved accuracy-FLOPs and accuracy-#Params trade-offs over both conventional and depth/spatial separable convolutions.

1 INTRODUCTION

Tremendous progress has been made in recent years towards more accurate image analysis tasks, such as image classification, with deep convolutional neural networks (DCNNs) (Krizhevsky et al., 2012; Srivastava et al., 2015; He et al., 2016; Real et al., 2019; Tan & Le, 2019; Dai et al., 2020).
However, the computational complexity of state-of-the-art DCNN models has also become increasingly high. This can significantly defer their deployment to real-world applications, such as mobile platforms and robotics, where resources are highly constrained (Howard et al., 2017; Dai et al., 2020). It is highly desirable that a DCNN achieve better performance with less computation and fewer model parameters. The most time-consuming building block of a DCNN is the convolutional layer, and many previous works have aimed at reducing its computation. Historically, researchers applied the Fast Fourier Transform (FFT) (Nussbaumer, 1981; Quarteroni et al., 2010) to implement convolution and gained great speed-ups for large convolutional kernels; for small kernels, a direct implementation is often still cheaper (Podlozhnyuk, 2007). Researchers have also explored low-rank approximation (Jaderberg et al., 2014; Ioannou et al., 2015) to implement convolutions. However, most existing methods start from a pre-trained model and mainly focus on network pruning and compression. In this research, we study how to design a separable convolution that achieves an optimal implementation in terms of computational complexity. Making a convolution separable has proven to be an efficient way to reduce computational complexity (Sifre & Mallat, 2014; Howard et al., 2017; Szegedy et al., 2016). Compared to the FFT and low-rank approximation approaches, a well-designed separable convolution is efficient for both small and large kernel sizes and does not require a pre-trained model to operate on. In DCNN research, the two most well-known separable convolutions are depth separable (Sifre & Mallat, 2014) and spatial separable (Szegedy et al., 2016) convolutions; both reduce the computational complexity of a convolution.
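The complexity figures quoted above can be checked with simple counting functions. The following sketch (function names and the C, K, H, W setting are illustrative, not from the paper's experiments) counts multiply-adds for the three convolution types under discussion:

```python
def conv_flops(C, K, H, W):
    """Conventional 2D convolution: O(C^2 K^2 HW) multiply-adds."""
    return C * C * K * K * H * W

def depth_separable_flops(C, K, H, W):
    """Depth-wise pass (C K^2 HW) plus point-wise pass (C^2 HW): O(CHW (C + K^2))."""
    return C * K * K * H * W + C * C * H * W

def spatial_separable_flops(C, K, H, W):
    """K x 1 convolution followed by 1 x K convolution: O(C^2 K HW)."""
    return C * C * K * H * W + C * C * K * H * W

# Illustrative setting: C = 256 channels, K = 3 kernel, 32 x 32 output.
C, K, H, W = 256, 3, 32, 32
print(conv_flops(C, K, H, W))               # 603979776
print(depth_separable_flops(C, K, H, W))    # 69468160
print(spatial_separable_flops(C, K, H, W))  # 402653184
```

At this setting, depth separable convolution needs roughly 9x fewer multiply-adds than the conventional convolution, consistent with the O(C^2K^2HW) versus O(CHW(C + K^2)) orders above.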
The complexity of a conventional 2D convolution is quadratic in the number of channels (C) and the kernel size (K) and linear in the spatial dimensions (H and W): O(C^2K^2HW). Depth separable convolution is constructed as a depth-wise convolution followed by a point-wise convolution, where the depth-wise convolution is a group convolution with number of groups g = C and the point-wise convolution is a 1×1 convolution. Spatial separable convolution replaces a K×K kernel with K×1 and 1×K kernels. Different types of convolutions and their computational costs are summarized in Table 1, from which we can easily verify that depth separable convolution has a complexity of O(CHW · (C + K^2)) and spatial separable convolution has a complexity of O(C^2KHW). Both depth separable and spatial separable convolutions follow an ad hoc design: they reduce the computational cost to some degree but normally do not achieve an optimal separation. A separable convolution in general has three sets of parameters: the internal number of groups, the channel size, and the kernel size of each separated convolution. Instead of setting these parameters in an ad hoc fashion, we design a scheme to achieve an optimal separation; the resulting separable convolution is called optimal separable convolution in this research. To prevent the proposed optimal separable convolution from degenerating, we assume that the internal channel size is of order O(C) and propose the following volumetric receptive field condition. As illustrated in Fig. 1a, similar to the receptive field (RF) of a convolution, which is defined as the region in the input space that a particular CNN feature is looking at (or affected by) (Lindeberg, 2013), we define the volumetric RF of a convolution to be the volume in the input space that affects the CNN's output.
The volumetric RF condition requires that a properly decomposed separable convolution maintain the same volumetric RF as the original convolution before decomposition. Hence, the proposed optimal separable convolution is equivalent to optimizing the internal number of groups and kernel sizes to achieve the computational objective (measured in FLOPs^1) while satisfying the volumetric RF condition. Formally, the objective function is defined by Equation (2) under the constraints defined by Equations (3)-(6). The solution to this optimization problem is described in detail in Section 2.

1 In this research, similar to He et al. (2016), FLOPs are measured in the number of multiply-adds.

We shall show that the proposed optimal separable convolution can be computed in O(C^{3/2}KHW). This is at least a factor of √C more efficient than the depth separable and spatial separable convolutions. The proposed optimal separable convolution generalizes easily to an N-separable case, where the number of separated convolutions N can be optimized further; in such a generalized case, an even lower complexity of O(CHW · log(CK^2)) can be achieved. Extensive experiments demonstrate the effectiveness of the proposed optimal separable convolution. As illustrated in Fig. 3, on the CIFAR10 dataset (Krizhevsky et al., 2009), the proposed optimal separable convolution achieves a better Pareto-frontier^2 than both conventional and depth separable convolutions using the ResNet (He et al., 2016) architecture. To demonstrate that the proposed optimal separable convolution generalizes well to other DCNN architectures, we adopt the DARTS (Liu et al., 2018) architecture by replacing the depth separable convolution with the proposed optimal separable convolution; the accuracy is improved from 97.24% to 97.67% with fewer parameters. On the ImageNet dataset (Deng et al.
, 2009), the proposed optimal separable convolution also achieves improved performance. For the DARTS architecture, the proposed approach achieves 74.2% top-1 accuracy with only 4.5 million parameters.

2 THE PROPOSED APPROACH

2.1 CONVOLUTION AND ITS COMPUTATIONAL COMPLEXITY

A convolutional layer takes an input tensor B_{l-1} of shape (C_{l-1}, H_{l-1}, W_{l-1}) and produces an output tensor B_l of shape (C_l, H_l, W_l), where C_*, H_*, W_* are the input and output channels, feature heights, and feature widths. The convolutional layer is parameterized by a convolutional kernel of shape (C_l, C_{l-1}, K^H_l, K^W_l), where K^*_l are the kernel sizes and the superscript indicates whether the kernel dimension is aligned with the feature height or width. In this research, we take C_* = O(C), H_* = O(H), W_* = O(W), and K^{H|W}_* = O(K) for complexity analysis. Formally, we have

B_l(c_l, h_l, w_l) = Σ_{c_{l-1}} Σ_{k^H_l} Σ_{k^W_l} B_{l-1}(c_{l-1}, h_{l-1}, w_{l-1}) · F_l(c_l, c_{l-1}, k^H_l, k^W_l),   (1)

where h_l = h_{l-1} + k^H_l and w_l = w_{l-1} + k^W_l. Hence, the number of FLOPs (multiply-adds) for convolution is C_l H_l W_l · C_{l-1} K^H_l K^W_l = O(C^2K^2HW) and the number of parameters is C_l C_{l-1} K^H_l K^W_l = O(C^2K^2). For a group convolution, we have g convolutions with kernels of shape (C_l/g, C_{l-1}/g, K^H_l, K^W_l). Hence, it has O(C^2K^2HW/g) FLOPs and O(C^2K^2/g) parameters, where g is the number of groups. A depth-wise convolution is equivalent to a group convolution with g = C_* = C. A point-wise convolution is a 1×1 convolution. A depth separable convolution is composed of a depth-wise convolution and a point-wise convolution. A spatial separable convolution replaces a K×K kernel with K×1 and 1×K kernels. Different types of convolutions are summarized in Table 1, from which their FLOPs and numbers of parameters can be easily verified.

2.2 RETHINKING CONVOLUTION AND THE VOLUMETRIC RECEPTIVE FIELD CONDITION
Separable convolution has proven efficient in reducing the computational demand of convolution. However, existing approaches, including both depth separable and spatial separable convolutions, follow an ad hoc design; they reduce the computational cost to some extent but do not normally achieve an optimal separation. In this research, we design an efficient convolution operator that achieves the computational objective by optimally designing its internal hyper-parameters. The resulting operator is called optimal separable convolution. One difficulty is that, if we do not place any restriction on a separable convolution, optimizing the FLOPs target will result in a separable convolution that is equivalent to a degenerate channel scaling operator^3. Hence, we propose the following volumetric receptive field condition.

2 In multi-objective optimization, a Pareto-frontier is the set of parameterizations (allocations) that are all Pareto-optimal. An allocation is Pareto-optimal if there is no alternative allocation in which one participant's well-being can be improved without sacrificing any other's. Here, the Pareto-frontier represents the curve of accuracies achievable for different FLOPs/#Params.

3 From Table 1, with g = C and K = 1, a convolution has C parameters and CHW FLOPs; this is in fact a channel scaling operator. Composing such operators is not meaningful because the composition is itself equivalent to a single channel scaling operator.

As illustrated in Fig. 1a, the receptive field (RF) of a convolution is defined as the region in the input space that a particular CNN feature is affected by (Lindeberg, 2013). We define the channel RF to be the channels that affect the CNN's output, and the volumetric RF to be the Cartesian product of the RF and the channel RF of the convolution.
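The FLOP and parameter counts from Section 2.1, including the degenerate channel-scaling case of footnote 3, can be expressed as one helper covering ordinary, group, depth-wise, and point-wise convolutions. The sketch below follows the formulas above; the function name and interface are illustrative, not from the paper:

```python
def conv_cost(c_out, c_in, kh, kw, h, w, groups=1):
    """FLOPs (multiply-adds) and parameter count of a (group) convolution.

    With groups=1 this is the cost of Equation (1): C_l H_l W_l C_{l-1} K^H K^W
    FLOPs and C_l C_{l-1} K^H K^W parameters; g groups divide both by g.
    """
    assert c_out % groups == 0 and c_in % groups == 0
    params = c_out * c_in * kh * kw // groups
    flops = params * h * w
    return flops, params

C, K, H, W = 64, 3, 16, 16
full = conv_cost(C, C, K, K, H, W)                  # conventional: O(C^2 K^2 HW)
depthwise = conv_cost(C, C, K, K, H, W, groups=C)   # g = C: O(C K^2 HW)
pointwise = conv_cost(C, C, 1, 1, H, W)             # 1x1: O(C^2 HW)

# Footnote 3's degenerate case: g = C and K = 1 yields a channel scaling
# operator with only C parameters and CHW FLOPs.
scaling_flops, scaling_params = conv_cost(C, C, 1, 1, H, W, groups=C)
assert scaling_params == C and scaling_flops == C * H * W
```

Summing the depth-wise and point-wise costs reproduces the depth separable total, which is why an unconstrained FLOPs objective would collapse toward the degenerate scaling operator without the volumetric RF condition.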
The volumetric RF of a convolution is thus the volume in the input space that affects the CNN's output. The volumetric RF condition requires that a properly decomposed separable convolution maintain (at least) the same volumetric RF as the original convolution before decomposition. Hence, the proposed optimal separable convolution is equivalent to optimizing its internal parameters while satisfying the volumetric RF condition.
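For a sense of scale, the complexity classes discussed in Section 1 can be evaluated side by side at one illustrative setting. These are leading-order estimates of the stated orders of growth, not exact FLOP counts from the paper:

```python
import math

C, K, H, W = 256, 3, 32, 32  # illustrative hyper-parameters

orders = {
    "conventional":          C**2 * K**2 * H * W,              # O(C^2 K^2 HW)
    "spatial separable":     C**2 * K * H * W,                 # O(C^2 K HW)
    "depth separable":       C * H * W * (C + K**2),           # O(CHW (C + K^2))
    "optimal separable":     int(C**1.5) * K * H * W,          # O(C^{3/2} K HW)
    "N-separable (optimal)": C * H * W * math.log2(C * K**2),  # O(CHW log(CK^2))
}

for name, order in orders.items():
    print(f"{name:22s} {order:,.0f}")
```

At this setting, the spatial separable order exceeds the optimal separable one by exactly √C = 16, illustrating the factor-of-√C claim, and the unrestricted N-separable order is smaller still.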
This paper proposes a novel analysis for optimal separable convolution in terms of the number of parameters and FLOPs. The idea is to keep the input information consumed stationary throughout the optimization of the parameters of separable convolutions. More specifically, the paper proposes the notion of a volumetric receptive field; by holding it constant throughout optimization, one arrives at a constrained optimization problem for solving for the parameters of the optimal separable convolution. Empirical results from replacing the common convolution with the optimal separable convolution in various modern CNNs demonstrate the effectiveness of the proposed approach.
Meta Adversarial Training
1 INTRODUCTION

Deep learning is currently the most promising method for open-world perception tasks such as automated driving and robotics. However, its use in safety-critical domains is questionable, since a lack of robustness of deep learning-based perception has been demonstrated (Szegedy et al., 2014; Goodfellow et al., 2015; Metzen et al., 2017; Hendrycks & Dietterich, 2019). Physical-world adversarial attacks (Kurakin et al., 2017; Athalye et al., 2018; Braunegg et al., 2020) are among the most problematic robustness failures of deep learning. Examples of such attacks include fooling models for traffic sign recognition (Chen et al., 2018; Eykholt et al., 2018a;b; Huang et al., 2019), face recognition (Sharif et al., 2016; 2017), optical flow estimation (Ranjan et al., 2019), person detection (Thys et al., 2019; Wu et al., 2020b; Xu et al., 2020), and LiDAR perception (Cao et al., 2019). In this work, we focus on two subsets of these physical-world attacks: local ones, which place a printed pattern in a scene that does not overlap with the target object (Lee & Kolter, 2019; Huang et al., 2019), and global ones, which attach a mainly translucent sticker to the lens of a camera (Li et al., 2019). Note that these physical-world attacks have corresponding digital-domain attacks, in which the attacker directly modifies the signal after it is received by the sensor and before it is processed by the model. The corresponding digital-domain attack for the adversarial camera sticker is a type of universal adversarial perturbation (Moosavi-Dezfooli et al., 2017), while the digital adversarial patch attack (Brown et al., 2017; Anonymous, 2020) corresponds to physical patch attacks (Lee & Kolter, 2019; Huang et al., 2019). We focus on increasing robustness against digital-domain attacks.
Digital-domain attacks are strictly stronger than the corresponding physical-world attacks, since they give the attacker complete control over the change to the signal. In contrast, physical-world attacks need to be invariant under effects such as scale, rotation, object position, and lighting conditions, which cannot be controlled by the attacker. Therefore, a system robust against digital-domain attacks is also robust against the corresponding physical-world attacks. Currently, the most promising method for increasing robustness against adversarial attacks is adversarial training (Goodfellow et al., 2015; Madry et al., 2018), which simulates an adversarial attack for every mini-batch and trains the model to become robust against such attacks. Adversarial training against digital-domain universal perturbations or patches is complicated by the fact that these attacks are computationally much more expensive than image-dependent adversarial attacks, and existing approaches for speeding up adversarial training (Shafahi et al., 2019; Zhang et al., 2019; Zheng et al., 2020) are not directly applicable. Existing approaches for tailoring adversarial training to universal perturbations or patches either refrain from simulating attacks in every mini-batch (Moosavi-Dezfooli et al., 2017; Hayes & Danezis, 2018; Perolat et al., 2018), which bears the risk that the model easily overfits these fixed or rarely updated universal perturbations, or use computationally cheaper proxy attacks such as "universal adversarial training" (UAT) (Shafahi et al., 2018) and "shared adversarial training" (SAT) (Mummadi et al., 2019). The proxy approaches face the challenge of balancing the implicit trade-off between simulating universal perturbation attacks accurately and keeping the computational cost of the proxy attacks small.
We propose meta adversarial training (MAT)^1, which falls into the category of proxy attacks. MAT combines adversarial training with meta-learning. We summarize the key novel contributions of MAT and refer to Section 3 for details:

• MAT amortizes the cost of computing universal perturbations by sharing information about optimal perturbations over consecutive steps of model training, which considerably reduces the cost of generating strong approximations of universal perturbations. In contrast to UAT (Shafahi et al., 2018), MAT uses meta-learning rather than joint training to share this information, which empirically generates stronger perturbations and a more robust model.
• MAT meta-learns a large set of perturbations concurrently. While a model easily overfits a single perturbation, even one that changes as in UAT, overfitting is much less likely for a larger set of perturbations such as those generated by MAT.
• MAT encourages diversity of the generated perturbations by assigning random but fixed target classes and step sizes to each perturbation during meta-learning. This prevents many perturbations from exploiting the same vulnerability of the model.

We perform an extensive empirical evaluation and ablation study of MAT on image classification and traffic-light detection tasks against a variety of attacks to show the robustness of MAT against universal patches and perturbations (see Section 4). We refer to Figure 1 for an illustration of MAT for universal patch attacks against traffic light detection.

2 RELATED WORK

We review work on generating universal perturbations, defending against them, and meta-learning.

Generating Universal Perturbations. Adversarial perturbations are changes to the input that are crafted with the intention of fooling a model's prediction on that input. Universal perturbations are a special case in which one perturbation needs to be effective on the majority of samples from the input distribution.
Most work focuses on small additive perturbations bounded by some ℓp-norm constraint. For example, Moosavi-Dezfooli et al. (2017) proposed the first approach by extending the DeepFool algorithm (Moosavi-Dezfooli et al., 2016). Similarly, Metzen et al. (2017) extended the iterative fast gradient sign method (Kurakin et al., 2017) to generate universal perturbations for semantic image segmentation. Mopuri et al. (2017; 2018) presented data-independent attacks, and Hayes & Danezis (2018) proposed using a generative model to learn a diverse distribution of universal perturbations. Li et al. (2019) presented a physical-world attack in which a translucent sticker placed on the lens of a camera adds a universal perturbation to the image taken by the camera, and showed that this can fool an image classification system. Another type of universal perturbation is the so-called adversarial patch (Brown et al., 2017). In these universal patch attacks, the adversary can arbitrarily modify a small part of the image, typically a connected rectangular area, while leaving the remaining part of the image unchanged. Following Athalye et al. (2018), randomizing conditions such as location, rotation, scale, and lighting during the attack can make the universal patch sufficiently effective to fool the model when it is printed out and placed in the physical world. Later work has generalized these physical-world attacks to object detection (Lee & Kolter, 2019; Huang et al., 2019) and optical flow estimation (Ranjan et al., 2019).

1 Code will be publicly released upon acceptance and can be found in the supplementary material.

Defending Against Universal Perturbations. The first works on defending against universal perturbations are based on training a model against a fixed or slowly updated set/distribution of universal perturbations: Moosavi-Dezfooli et al.
(2017) precompute a set of universal perturbations that are used during training, Hayes & Danezis (2018) learn a generative model of universal perturbations, and Perolat et al. (2018) build a slowly growing set of universal perturbations concurrently with model training. A shortcoming of these approaches is that the model might overfit the fixed or slowly changing distribution of universal perturbations; however, re-computing universal perturbations in every mini-batch from scratch is prohibitively expensive. To address this issue, SAT (Mummadi et al., 2019) trains a model against so-called shared perturbations. These shared perturbations do not have to be universal but only need to fool the model on a fixed subset of the batch. However, since the shared perturbations are recomputed in every mini-batch, SAT assumes that a few gradient steps suffice to find strong perturbations from random initialization; in contrast, our method meta-learns strong initial perturbations. In UAT (Shafahi et al., 2018), training the neural network's weights and updating a single universal perturbation happen concurrently, which scales to large datasets. However, our experiments in Section 4 indicate that a single incrementally and slowly updated perturbation is not sufficiently strong and diverse to make a model robust against all possible universal perturbations; instead, our method meta-learns a large and diverse collection of perturbations during training. For defending against adversarial patches, Chiang et al. (2020) proposed extending interval-bound propagation (Gowal et al., 2019) to the patch threat model. While this allows certification of robustness, it only scales to tiny patches and reduces clean accuracy considerably. Wu et al.
(2020a) proposed the "defense against occlusion attack", which applies adversarial training to inputs perturbed with input-dependent adversarial patches placed at specific positions determined, for example, by the input gradient magnitude. Since they generate patches from scratch, they require an expensive optimization of the patch for every training batch; moreover, robustness against stronger attacks, such as those proposed in Section 4.1, remains unclear. Saha et al. (2019) hypothesize that the vulnerability of object detectors to adversarial patches stems from contextual reasoning. Accordingly, they propose Grad-defense, which penalizes strong dependence of object detections on their context in a data-driven manner, where dependence is determined by Grad-CAM (Selvaraju et al., 2019). Lastly, some non-adversarial data augmentation techniques resemble the universal adversarial patch scenario: they add a Gaussian noise patch (Lopes et al., 2019) or a patch from a different image (CutMix) (Yun et al., 2019) to each input. CutMix is conceptually very similar to the out-of-context defense (Saha et al., 2019). However, as demonstrated in our experiments in Section 4, even though these approaches increase robustness against occlusions, they are unlikely to increase robustness against universal patch attacks.

Meta-Learning. Gradient-based meta-learning methods such as MAML (Finn et al., 2017) or REPTILE (Nichol et al., 2018) learn initial parameters for a class of optimization tasks, so that close-to-optimal parameters for a novel task from the distribution can be found with a small number of gradient steps. Moreover, meta-learning can also be used to learn the task optimizer itself, as done by Xiong & Hsieh (2020) in the context of adversarial training.
While it is common to meta-learn initial weights for neural networks, we propose that these algorithms can also be used to meta-learn initial values for universal perturbations. In this work, we combine REPTILE with adversarial training because of REPTILE's low computational overhead; in principle, however, other gradient-based meta-learning methods could also be used as part of our method.
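The idea of applying a REPTILE-style update to a perturbation initialization can be sketched in a few lines. The sketch below is schematic pure Python, not the paper's released code; `attack_grad`, the step sizes, and the toy target-pulling objective are all illustrative stand-ins:

```python
def adapt_perturbation(delta, batches, attack_grad, step_size, n_steps, eps):
    """Inner loop: a few signed-gradient ascent steps of the attack,
    started from the meta-learned initialization `delta`, clipped to [-eps, eps]."""
    sign = lambda g: (g > 0) - (g < 0)
    adapted = list(delta)
    for t in range(n_steps):
        grad = attack_grad(adapted, batches[t % len(batches)])
        adapted = [max(-eps, min(eps, a + step_size * sign(g)))
                   for a, g in zip(adapted, grad)]
    return adapted

def reptile_meta_update(delta, adapted, meta_lr):
    """Outer loop: REPTILE moves the initialization toward the adapted
    perturbation instead of back-propagating through the inner loop."""
    return [d + meta_lr * (a - d) for d, a in zip(delta, adapted)]

# Toy illustration: the "attack gradient" simply pulls the perturbation
# toward a fixed target pattern.
target = [1.0, 1.0, 1.0, 1.0]
attack_grad = lambda delta, batch: [b - d for b, d in zip(batch, delta)]
delta0 = [0.0] * 4
adapted = adapt_perturbation(delta0, [target], attack_grad,
                             step_size=0.1, n_steps=3, eps=1.0)  # each entry ~0.3
delta1 = reptile_meta_update(delta0, adapted, meta_lr=0.5)       # each entry ~0.15
```

The point of the REPTILE update is that the next attack starts from `delta1` rather than from scratch, amortizing attack cost across training steps; in MAT this update would be applied to each member of a diverse perturbation collection.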
This paper studies the use of meta-learning and adversarial training to defend against universal perturbations. The approach learns a set of perturbations through meta-learning and trains the model to defend against such attacks. Experimental results on Tiny ImageNet and the Bosch Small Traffic Lights dataset show that the method is effective in defending against universal patch-based attacks.
Meta Adversarial Training
1 INTRODUCTION . Deep learning is currently the most promising method for open-world perception tasks such as in automated driving and robotics . However , the use in safety-critical domains is questionable , since a lack of robustness of deep learning-based perception has been demonstrated ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ; Metzen et al. , 2017 ; Hendrycks & Dietterich , 2019 ) . Physical-world adversarial attacks ( Kurakin et al. , 2017 ; Athalye et al. , 2018 ; Braunegg et al. , 2020 ) are one of most problematic failures in robustness of deep learning . Examples of such attacks are fooling models for traffic sign recognition ( Chen et al. , 2018 ; Eykholt et al. , 2018a ; b ; Huang et al. , 2019 ) , face recognition ( Sharif et al. , 2016 ; 2017 ) , optical flow estimation ( Ranjan et al. , 2019 ) , person detection ( Thys et al. , 2019 ; Wu et al. , 2020b ; Xu et al. , 2020 ) , and LiDAR perception ( Cao et al. , 2019 ) . In this work , we focus on two subsets of these physical-world attacks : local ones which place a printed pattern in a scene that does not overlap with the target object ( Lee & Kolter , 2019 ; Huang et al. , 2019 ) and global ones which attach a mainlytranslucent sticker on the lens of a camera ( Li et al. , 2019 ) . Note that these physical-world attacks have cor- responding digital-domain attacks , in which the attacker directly modifies the signal after it was received by the sensor and before it is processed by the model . The corresponding digital-domain attack for the adversarial camera sticker is a type of universal adversarial perturbation ( MoosaviDezfooli et al. , 2017 ) , while the digital adversarial patch attack ( Brown et al. , 2017 ; Anonymous , 2020 ) corresponds to physical patch attacks ( Lee & Kolter , 2019 ; Huang et al. , 2019 ) . We focus on increasing robustness against digital-domain attacks . 
Digital-domain attacks are strictly stronger than the corresponding physical-world attacks since they allow the attacker to have complete control over the change of the signal . In contrast , physical-world attacks need to be invariant under non-controllable effects such as scale , rotation , object position , and light conditions , which can not be controlled by the attacker . Therefore , a system robust against digital-domain attacks is also robust against the corresponding physical-world attacks . Currently , the most promising method for increasing robustness against adversarial attacks is adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) . Adversarial training simulates an adversarial attack for every mini-batch and trains the model to become robust against such an attack . Adversarial training against digital-domain universal perturbations or patches is complicated by the fact that these attacks are computationally much more expensive than image-dependent adversarial attacks and existing approaches for speeding up adversarial training ( Shafahi et al. , 2019 ; Zhang et al. , 2019 ; Zheng et al. , 2020 ) are not directly applicable . Existing approaches for tailoring adversarial training to universal perturbations or patches either refrain from simulating attacks in every minibatch ( Moosavi-Dezfooli et al. , 2017 ; Hayes & Danezis , 2018 ; Perolat et al. , 2018 ) , which bears the risk that the model easily overfits these fixed or rarely updated universal perturbations . Alternative approaches use proxy attacks that are computationally cheaper such as “ universal adversarial training ” ( UAT ) ( Shafahi et al. , 2018 ) and “ shared adversarial training ” ( SAT ) ( Mummadi et al. , 2019 ) . These approaches face the challenge of balancing the implicit trade-off between simulating universal perturbation attacks accurately and keeping computation cost of the proxy attacks small . 
We propose meta adversarial training ( MAT ) 1 , which falls into the category of proxy attacks . MAT combines adversarial training with meta-learning . We summarize the key novel contributions of MAT and refer to Section 3 for details : • MAT amortizes the cost of computing universal perturbations by sharing information about optimal perturbations over consecutive steps of model training , which reduces the cost of generating strong approximations of universal perturbations considerably . In contrast to UAT ( Shafahi et al. , 2018 ) , MAT uses meta-learning for sharing of information rather than joint training , which empirically generates stronger perturbations and a more robust model . • MAT meta-learns a large set of perturbations concurrently . While a model easily overfits a single perturbation , even if it changes as in UAT , overfitting is much less likely for a larger set of perturbations such as those generated with MAT . • MAT encourages diversity of the generated perturbations by assigning random but fixed target classes and step-sizes to each perturbation during meta-learning . This avoids that many perturbations focus on exploiting the same vulnerability of a model . We perform an extensive empirical evaluation and ablation study of MAT on image classification and traffic-light detection tasks against a variety of attacks to show the robustness of MAT against universal patches and perturbations ( see Section 4 ) . We refer to Figure 1 for an illustration of MAT for universal patch attacks against traffic light detection . 2 RELATED WORK . We review work on generating universal perturbations , defending against them , and meta-learning . Generating Universal Perturbations Adversarial perturbations are changes to the input that are crafted with the intention of fooling a model ’ s prediction on the input . Universal perturbations are a special case in which one perturbation needs to be effective on the majority of samples from the input distribution . 
Most work focuses on small additive perturbations that are bounded by some ` p-norm constraint . For example , Moosavi-Dezfooli et al . ( 2017 ) proposed the first approach by extending the DeepFool algorithm ( Moosavi-Dezfooli et al. , 2016 ) . Similarly , Metzen et al . ( 2017 ) extended the iterative fast gradient sign method ( Kurakin et al. , 2017 ) for generating universal perturbations on semantic image segmentation . Mopuri et al . ( 2017 ; 2018 ) presented data-independent attacks and 1Code will be publicly released upon acceptance and can be found in the supplementary material . Hayes & Danezis ( 2018 ) proposed using a generative model for learning a diverse distribution of universal perturbations . Li et al . ( 2019 ) presented a physical-world attack in which a translucent sticker is placed on the lens of a camera , which adds a universal perturbation to the image taken by the camera , and showed that this can fool an image classification system . Other types of universal perturbations are so-called adversarial patches ( Brown et al. , 2017 ) . In these universal patch attacks , the adversary can arbitrarily modify a small part of the image , typically a connected rectangular area , while leaving the remaining part of the image unchanged . Following Athalye et al . ( 2018 ) , randomizing conditions such as location , rotation , scale , and lighting during the attack can make the universal patch sufficiently effective to fool the model when it is printed out and placed in the physical world . Later work has generalized these physical-world attacks to object detection ( Lee & Kolter , 2019 ; Huang et al. , 2019 ) and optical flow estimation ( Ranjan et al. , 2019 ) . Defending Universal Perturbations First works for defending against universal perturbations are based on training a model against a fixed or slowly updated set/distribution of universal perturbations : Moosavi-Dezfooli et al . 
(2017) precompute a set of universal perturbations that are used during training, Hayes & Danezis (2018) learn a generative model of universal perturbations, and Perolat et al. (2018) build a slowly growing set of universal perturbations concurrently with model training. A shortcoming of these approaches is that the model might overfit the fixed or slowly changing distribution of universal perturbations. However, re-computing universal perturbations in every mini-batch from scratch is prohibitively expensive. To address this issue, SAT (Mummadi et al., 2019) trains a model against so-called shared perturbations. These shared perturbations do not have to be universal but only need to fool the model on a fixed subset of the batch. However, since the shared perturbations are recomputed in every mini-batch, SAT assumes that a few gradient steps suffice to find strong perturbations from a random initialization. In contrast, our method meta-learns strong initial perturbations. In UAT (Shafahi et al., 2018), training the neural network's weights and updating a single universal perturbation happen concurrently, which scales to large datasets. However, our experiments in Section 4 indicate that a single incrementally and slowly updated perturbation is not sufficiently strong and diverse to make a model robust against all possible universal perturbations. Instead, our method meta-learns a large and diverse collection of perturbations during training. For defending against adversarial patches, Chiang et al. (2020) proposed extending interval-bound propagation (Gowal et al., 2019) to the patch threat model. While this allows certification of robustness, it only scales to tiny patches and reduces clean accuracy considerably. Wu et al.
(2020a) proposed the "defense against occlusion attack", which applies adversarial training to inputs perturbed with input-dependent adversarial patches placed at specific positions determined, for example, by the input gradient magnitude. Since they generate patches from scratch, they require an expensive optimization of the patch for every training batch. Moreover, robustness against stronger attacks such as those proposed in Section 4.1 remains unclear. Saha et al. (2019) hypothesize that the vulnerability of object detectors to adversarial patches stems from contextual reasoning. Accordingly, they propose Grad-defense, which penalizes strong dependence of object detections on their context in a data-driven manner, where dependence is determined by Grad-CAM (Selvaraju et al., 2019). Lastly, some non-adversarial data augmentation techniques resemble the universal adversarial patch scenario: they add a Gaussian noise patch (Lopes et al., 2019) or a patch from a different image (CutMix) (Yun et al., 2019) to each input. CutMix is conceptually very similar to the out-of-context defense (Saha et al., 2019). However, as demonstrated in our experiments in Section 4, even though these approaches increase robustness against occlusions, they are unlikely to increase robustness against universal patch attacks.

Meta-Learning. Gradient-based meta-learning methods such as MAML (Finn et al., 2017) or REPTILE (Nichol et al., 2018) learn initial parameters for a class of optimization tasks, so that close-to-optimal parameters on a novel task from the distribution can be found with a small number of gradient steps. Moreover, meta-learning can also be used to learn the task optimizer itself, as done by Xiong & Hsieh (2020) in the context of adversarial training.
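The CutMix-style augmentation discussed above can be sketched for a single image pair. This is a minimal illustration under our own simplifications (single pair, grayscale array, labels as scalars); the original method operates on batches with one-hot labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def cutmix(x1, y1, x2, y2):
    """Minimal CutMix sketch (Yun et al., 2019) for one image pair.

    A rectangular region of x1 is replaced by the corresponding region of
    x2, and the labels are mixed proportionally to the pasted area.
    """
    H, W = x1.shape[:2]
    lam = rng.beta(1.0, 1.0)                      # target fraction kept from x1
    rh, rw = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))
    y = rng.integers(0, H - rh + 1)               # random box location
    x = rng.integers(0, W - rw + 1)
    out = x1.copy()
    out[y:y + rh, x:x + rw] = x2[y:y + rh, x:x + rw]
    lam_eff = 1 - (rh * rw) / (H * W)             # exact mixed-label weight
    return out, lam_eff * y1 + (1 - lam_eff) * y2
```

Note that the label weight is recomputed from the actual pasted area (`lam_eff`), so the mixed label stays consistent with the pixels after rounding the box size.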
While it is common to meta-learn initial weights for neural networks, we propose that these algorithms can also be used to meta-learn initial values for universal perturbations. In this work, we combine REPTILE with adversarial training because of REPTILE's low computational overhead; however, in principle, other gradient-based meta-learning methods could also be used as part of our method.
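As a rough illustration of how REPTILE-style meta-learning can maintain perturbations rather than weights, the following sketch keeps a set of perturbation initializations, each with a fixed random target class and step size. All names and the toy FGSM-style inner loop are our own simplifications, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def reptile_meta_perturbations(loss_grad, n_perturbs=8, dim=16,
                               inner_steps=3, meta_lr=0.5, clip=0.1):
    """Sketch of REPTILE applied to universal perturbations.

    loss_grad(delta, target) -> gradient of a targeted loss w.r.t. delta.
    Each perturbation slot keeps a fixed random target class and step size,
    which encourages diversity across the meta-learned set.
    """
    # Meta-learned initial perturbations, one per slot.
    Z = rng.uniform(-clip, clip, size=(n_perturbs, dim))
    targets = rng.integers(0, 10, size=n_perturbs)    # fixed random targets
    steps = rng.uniform(0.01, 0.05, size=n_perturbs)  # fixed random step sizes

    for _ in range(100):  # in the full method, interleaved with model training
        i = rng.integers(n_perturbs)                  # sample one "task"/slot
        z = Z[i].copy()
        for _ in range(inner_steps):                  # inner adaptation
            z = np.clip(z + steps[i] * np.sign(loss_grad(z, targets[i])),
                        -clip, clip)
        Z[i] += meta_lr * (z - Z[i])                  # REPTILE meta-update
    return Z
```

The REPTILE meta-update simply moves each initialization toward its adapted version, so the stored perturbations become points from which a few inner steps already produce a strong attack.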
The authors propose a novel meta adversarial training method. In particular, to improve robustness against digital-domain attacks, the proposed meta adversarial training (MAT) combines adversarial training and meta-learning, which reduces the computational cost compared with standard adversarial training while generating a set of stronger perturbations. The proposed approach is sound and is supported by extensive experimental results.
Ringing ReLUs: Harmonic Distortion Analysis of Nonlinear Feedforward Networks
1 INTRODUCTION

In the past decade, the emergence of practical deep neural networks has arguably had a disruptive impact on applications of machine learning. Depth as such appears to be key to expressive models (Raghu et al., 2017). However, depth also comes with challenges concerning training stability. Theoretical problems include vanishing and exploding gradients (Hochreiter, 1991), chaotic feedforward dynamics (Poole et al., 2016), and decorrelation of gradients (Balduzzi et al., 2017). In practice, a number of "recipes" are widely used, such as specific nonlinearities (Glorot et al., 2011; He et al., 2015), normalization methods such as batch normalization (Ioffe & Szegedy, 2015), shortcut architectures (Srivastava et al., 2015; He et al., 2016a;b), and multi-path architectures with (Huang et al., 2017) and without shortcuts (Szegedy et al., 2016). Broadly speaking, a key research question is to understand how the shape of the network function, i.e., the map from inputs and parameters to outputs, is affected by architectural choices. Our paper specifically considers the roughness of the weights-to-outputs function ("w-o function") of nonlinear feed-forward networks. Motivated by the recent visualizations of Li et al. (2018), which show how depth increases roughness and residual connections smoothen the output again, our goal is to provide an analytical explanation of this effect and to study its implications for network design and trainability. To this end, we first formalize "roughness" as the decay rate of the expected power spectrum of a function class. Our main contribution is then to apply harmonic distortion analysis to nonlinear feedforward networks, which predicts the creation of high-frequency "harmonics" (thereby "blueshifting" the power spectrum) by polynomial nonlinearities with large higher-order coefficients.
Based on this model, we discuss how network depth increases blueshift and thus roughness, while shortcut connections, low-degree nonlinearities, and parallel computation paths dampen it. In relation to trainability, we show an analytic link between blueshift and exploding gradients. Unlike the former model, the spectral view describes a more global behavior of the w-o function over regions of the parameter domain. Experiments confirm the theoretical predictions: we observe the predicted effects of depth, shortcuts, and parallel computation on blueshift, and we are able to differentiate types of nonlinearities by the decay rate of the coefficients of a polynomial approximation. The findings are in line with the known advantages in trainability of the different architectures. We further strengthen the evidence by training a large set of networks with different amounts of nonlinearity and depth, which shows a clear correlation between blueshift and training problems, as well as a trade-off with expressivity. In summary, our paper explains how network architecture affects roughness, shows a connection to trainability, and thereby provides a new tool for analyzing the design of deep networks.

2 RELATED WORK

Vanishing or exploding gradients are a central numerical problem (Hochreiter, 1991; Pascanu et al., 2013; Yang et al., 2019): if the magnitudes of the singular values of the layer Jacobians deviate from one, subspaces are attenuated (|σ| < 1) or amplified (|σ| > 1), potentially cascading exponentially over multiple layers (Pennington et al., 2017a). Formally, the behavior of stacks of matrices and nonlinear functions can be modeled by random matrix theory or Gaussian mean-field approximations (Poole et al., 2016; Pennington et al., 2017b; 2018). The gist is that orthogonal weight matrices are needed at initialization, which is challenging for convolutional architectures.
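The role of (near-)isometric layers at initialization can be illustrated with a toy numerical experiment (our own illustration, not from the paper): a unit vector pushed through a stack of random layers keeps its norm exactly under orthogonal layers, while i.i.d. Gaussian layers let the norm drift from run to run.

```python
import numpy as np

rng = np.random.default_rng(0)

def depth_gain(make_layer, width=64, depth=50):
    """Norm gain of a unit vector pushed through `depth` random linear layers."""
    v = rng.standard_normal(width)
    v /= np.linalg.norm(v)
    for _ in range(depth):
        v = make_layer(width) @ v
    return np.linalg.norm(v)

# i.i.d. Gaussian layers, scaled by 1/sqrt(width):
gaussian = lambda w: rng.standard_normal((w, w)) / np.sqrt(w)
# Orthogonal layers (Q factor of a QR decomposition) preserve norms exactly:
orthogonal = lambda w: np.linalg.qr(rng.standard_normal((w, w)))[0]

g_gauss = depth_gain(gaussian)     # drifts away from 1 with depth
g_orth = depth_gain(orthogonal)    # stays at 1 up to floating-point error
assert abs(g_orth - 1.0) < 1e-6
```

With all singular values equal to one, no subspace is attenuated or amplified, which is precisely the condition the mean-field analyses above call for.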
A solution for tanh networks is given by Xiao et al. (2018); for ReLU, there is a negative result (Pennington et al., 2017b). Using mean-field theory, it can be shown that batch normalization (Ioffe & Szegedy, 2015) leads to exploding gradients at initialization (Yang et al., 2019) (which equalize after a few steps, but that might be too late (Frankle et al., 2020)). A different route is taken by Balduzzi et al. (2017), who observe an increasing decorrelation of gradients in the input space. Similar to our paper, they show that deeper networks lead to spectral whitening (starting from brown noise); however, their analysis is performed with respect to the inputs x, not the weights W. The scale-space structure shown by our model might give further hints on the mechanisms behind training difficulties. By visualizing random slices of the loss surface, Li et al. (2018) observe that the loss surface of deep feedforward networks transitions from nearly convex to chaotic with increasing depth; our work explains these observations by spectral analysis. Duvenaud et al. (2014) visualize pathologies on the landscape of deep Gaussian processes that model deep wide-limit nonlinear networks. Fourier analysis of network functions (Candès, 1999) with respect to the input (Rahaman et al., 2019; Xu et al., 2019; Xu, 2018; Basri et al., 2019; Yang & Salman, 2019) has been used to show an inductive bias towards low-frequency functions (w.r.t. the input x), as well as a strong anisotropy of this spectrum. Wang et al. (2020) prove, under some assumptions, that all "bad" local minima of a deep residual network are very shallow.

3 HARMONIC DISTORTION

We now analyze the effect of a nonlinearity by relating the Fourier spectrum of a preactivation to that of its postactivation. Let f denote the preactivation of a single neuron of a neural network consisting of L layers.
We use x to denote the input to the whole network and thus to f, W to denote the weights, and φ to denote the employed nonlinearity. Li et al. (2018) visualize "roughness" using random 2D slices in weight space. We follow their basic idea and consider 1D slices

p(t) = f(x, W + α⁻¹ · t · D)    (1)

for random directions D and t ∈ [0, 1]. D is initialized with entries from N(0, 1) and normalized to ||D||_F = 1; α determines the path length. By varying D, this samples a ball of radius α around a point W in parameter space. For φ = id, the network f is multi-linear in W and thus p is polynomial in t; empirically, this yields rather smooth functions (Fig. 1). To understand general nonlinearities φ better, we represent p by a complex Fourier series (Appendix B.1 discusses convergence and approximation quality):

p(t) = Σ_{k=−∞}^{∞} z_k exp(2πikt), z_k ∈ ℂ.    (2)

Here, the sequence z : ℤ → ℂ contains the Fourier coefficients of p : [0, 1] → ℝ. As p is real, z is symmetric in the sense of z_k = z̄_{−k}.

3.1 FORMALIZING ROUGHNESS

The roughness of a class of random functions can be characterized by the statistics of their power spectrum (Musgrave, 1993): given a random process that generates functions p, we consider the mean μ_k and variance σ²_k of the Fourier coefficients z_k. For paths of short length as used in our experiments, the means μ_k are empirically very close to zero, except for the "DC" coefficient z_0, which is excluded in all experiments. Therefore, we can focus on the variance: in general, functions whose high-frequency variances drop off more quickly will appear smoother. A common model, which often fits natural data well, is fractal Brownian motion (FBM) noise, where the σ_k drop off according to a power law:

σ_k ∼ O(1/k^h) for some h > 0.    (3)

The so-called "fractal coefficient" h describes the roughness of the noise function. A similar approach has been taken by Hoffer et al.
(2017), who modeled the loss surface by analyzing the dynamics of a random walk on a random potential. In our experiments, we estimate the average power spectrum E_D(|z_k|²) (by sampling D uniformly on a unit sphere) and fit a power law to these spectra in order to quantify the roughness in a single number h. Experiments (Section 4) and analytical arguments (Appendix B.2) show that the FBM/power-law model is a realistic model of the functions computed by the lower layers of a neural network. For higher layers the fit becomes worse, a phenomenon we will explore in the next section using harmonic distortion analysis.

3.2 WHY IS THE OUTPUT FUNCTION GETTING ROUGHER?

Intuitively, applying ReLU to a function p is reminiscent of clipping an audio signal in amplitude, which is known to produce high-frequency ringing artifacts. We describe the effect of a single nonlinearity φ on the spectrum of a preactivation p; inductively, this describes the spectral shift of the whole network. For the analysis, we assume that φ is a K-th order polynomial:

φ(x) = Σ_{j=0}^{K} a_j x^j.    (4)

The effect of polynomial nonlinear maps on the spectrum of a function can be understood by harmonic distortion analysis (see, e.g., Feynman et al. (1965), Ch. 50.8). We can simply (see Appendices B.1–B.3 for details) plug the Fourier expansion of p into the polynomial representation of φ:

φ(p(t)) = Σ_{j=0}^{K} a_j [ Σ_{k=−∞}^{∞} z_k exp(2πikt) ]^j.    (5)

The convolution theorem tells us that the j-th power of a function corresponds to convolving the spectrum z of the function j times with itself. We denote by z the vector containing all Fourier coefficients z_k and by ⊗ the convolution operator. We can then write the spectrum z′ of the output as:

z′ := F(φ(p)) = Σ_{j=0}^{K} a_j ⊗_{l=1}^{j} z.    (6)

Discussion: We can make three important observations:

• First, each repeated auto-convolution in Eq. 6 broadens the spectrum by adding higher-frequency terms.
We call this broadening effect blueshift. The exact magnitude is hard to quantify and is thus left to the experiments (Appendix B.4 gives informal arguments for a growth of O(j^(1/2)) rather than the trivial upper bound of O(j)).

• Second, and correspondingly, larger coefficients a_j at larger orders j increase the blueshift.

• Third, j-fold convolutions correspond to j-th powers of the z_k. Hence, larger magnitudes |z_k| also increase the blueshift (the nonlinearity becomes more visible at larger magnitudes).

3.3 COMMON NONLINEARITIES

In practice, nonlinearities are usually not polynomial and might not even have a globally convergent Taylor series. It is possible to approximate any continuous function on a compact interval by a polynomial (Stone–Weierstrass theorem). We conjecture that a close polynomial approximation is sufficient for a qualitative prediction of the blueshift effect (our theoretical model does not guarantee convergence; we therefore validate this claim experimentally). We employ a Chebyshev approximation (which is reasonably stable and non-oscillatory) on the interval [−5, 5] and compare the speed at which higher-order coefficients drop off: Fig. 2 shows the coefficient magnitudes for ReLU, tanh, and softplus (which we focus on in our experiments). Softplus has the strongest drop-off, followed by tanh and ReLU. Appendix F.1 gives more details and covers several additional popular nonlinearities (Figures 18, 19). Figures 3 and 17 show that this correlates with blueshift, as expected.
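The convolution-theorem argument of Section 3.2 can be checked numerically on a pure cosine preactivation. This is an illustrative sketch (not from the paper), with the discrete FFT standing in for the Fourier series: a quadratic nonlinearity moves energy from frequency 1 to frequency 2, and ReLU, acting like an infinite-order polynomial, produces an entire tail of harmonics.

```python
import numpy as np

n = 256
t = np.arange(n) / n
p = np.cos(2 * np.pi * t)                      # preactivation: z_(+-1) = 1/2

def power(signal):
    """One-sided Fourier magnitudes |z_k| of a length-n signal."""
    return np.abs(np.fft.rfft(signal)) / len(signal)

# A quadratic nonlinearity convolves the spectrum once with itself:
# cos^2(2*pi*t) = 1/2 + 1/2 cos(4*pi*t), i.e. a new harmonic at k = 2.
z_sq = power(p ** 2)
assert z_sq[2] > 0.24 and z_sq[1] < 1e-9       # energy moved from k=1 to k=2

# ReLU produces a whole tail of even harmonics (blueshift),
# decaying only polynomially in k.
z_relu = power(np.maximum(p, 0.0))
assert z_relu[2] > 0.1 and z_relu[4] > 0.02    # higher harmonics appear
assert z_relu[2] > z_relu[4] > z_relu[6]       # with decaying magnitude
```

The contrast between the two cases mirrors the coefficient drop-off comparison above: the smoother the nonlinearity's polynomial approximation, the less energy ends up in high-frequency bins.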
The paper proposes an interesting analysis that links several aspects of architectural design in deep NNs to spectral analysis and observed roughness. Different activation functions are considered in the study, mainly centered on deep CNNs with or without skip connections (in the framework of ResNet v1 and v2). The starting point, which is not novel but relevant, is that specific types of non-linearities introduce harmonic distortions, and the effect is potentially amplified when multiple non-linearities are stacked. Theoretically, the paper shows that there is a concrete link between architectural choices in the network design and the blueshift in the frequency domain. Experimentally, the observations support the mathematical analysis. All in all, some of the conclusions regarding the trainability of CNN architectures with skip connections have already been noted and do not seem greatly new, but the paper introduces a nice perspective that sheds new light on this phenomenon.
This paper proposes a new approach for analyzing the ruggedness of the neural network loss surface. Specifically, the paper proposes to apply harmonic distortion analysis to the weights-to-outputs (w-o) maps. That is, the method casts the w-o functions in the Fourier domain and then aggregates the surface characteristics by averaging the Fourier coefficients of different orders. The paper shows that non-linearities are responsible for blueshifting in deeper layers, that is, for "transferring more energy" to the higher frequencies. The consequence is rougher surfaces as well as higher-frequency gradients, which can lead to exploding gradients in the deeper layers. The remedy lies in skip connections and feature averaging; although these methods are already known to improve trainability, the paper corroborates that they also make sense in terms of the proposed approach. The paper conducts various empirical and ablation studies, providing evidence for the claims.
Regularization Cocktails for Tabular Datasets
1 INTRODUCTION

In most supervised learning application domains, the available data for training predictive models is both limited and noisy with respect to the target variable. Therefore, it is paramount to regularize machine learning models so that their predictive performance generalizes to future unseen data. The concept of regularization is well studied and constitutes one of the pillar components of machine learning. Throughout this work we use the term "regularization" for all methods that explicitly or implicitly take measures to reduce overfitting; we categorize these, non-exhaustively, into weight decay, data augmentation, model averaging, structure and linearization, and implicit regularization families (detailed in Section 2). In this paper, we propose a new principled strategy that highlights the need for automatically learning the optimal combination of regularizers, denoted as regularization cocktails, via a hyperparameter optimization procedure. Truth be told, combining regularization methods is of course far from a novel practice per se. As a matter of fact, most modern deep learning models use combinations of a number of regularizers. For instance, EfficientNet (Tan & Le, 2019) mixes components of structural regularization and linearization via ResNet-style skip connections (He et al., 2016), learning-rate scheduling, Drop-Out ensembling (Srivastava et al., 2014), and AutoAugment data augmentation (Cubuk et al., 2019). However, even though each of these regularizers is motivated in isolation, the reasoning behind a specific combination of regularizers is largely based on accuracy-driven manual trial-and-error iterations, mostly on image classification benchmarks such as CIFAR (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009).
Unfortunately, the manual search for combinations of regularizers is sub-optimal and unsustainable, and in essence is an instance of manual hyperparameter tuning, which in turn is easily outperformed by automated algorithms (Snoek et al., 2012; Thornton et al., 2013; Feurer et al., 2015; Olson & Moore, 2016; Jin et al., 2019; Erickson et al., 2020; Zimmer et al., 2020). Following the spirit of AutoML (Hutter et al., 2018), we therefore propose a strategy for learning the optimal dataset-specific regularization cocktail by means of a modern hyperparameter optimization (HPO) method. To the best of our knowledge, there exists no study providing empirical evidence that a mixture of numerous regularizers outperforms individual regularizers; this paper fills that gap. More precisely, the research hypothesis of this paper is that a properly mixed regularization cocktail outperforms every individual regularizer in it, in terms of accuracy under the same run-time budget, and that the best cocktail to use depends on the dataset. To validate this hypothesis, we executed a large-scale experimental study employing 40 diverse tabular datasets and 13 prominent regularizers, with thorough hyperparameter tuning for all regularizers. We focus on tabular datasets because, in contrast to large image datasets, a thorough hyperparameter search is feasible. Moreover, neural networks are high-variance models on tabular datasets, so improved regularization schemes can provide a relatively larger generalization gain on tabular data than on other data types. Thereby, we make the following contributions: 1. We demonstrate the empirical accuracy gains of regularization cocktails in a systematic manner via a large-scale experimental study on tabular datasets; 2.
We challenge the status-quo practices of designing universal dataset-agnostic regularizers , by showing that an optimal regularization cocktail is highly dataset-dependent ; 3 . We demonstrate that regularization cocktails achieve state-of-the-art classification accuracy performance on tabular datasets and outperform Gradient-Boosted Decision Trees ( GBDT ) with a statistically-significant margin ; 4 . As an overarching contribution , this paper provides previously-lacking in-depth empirical evidence to better understand the importance of combining different mechanisms for regularization , one of the most fundamental concepts in machine learning . 2 RELATED WORK . Weight decay : The classical approaches of regularization focused on minimizing the norms of the parameter values , concretely either the L1 ( Tibshirani , 1996 ) , the L2 ( Tikhonov , 1943 ) , or a combination of L1 and L2 known as the Elastic Net ( Zou & Hastie , 2005 ) . A recent work fixes the malpractice of adding the decay penalty term before momentum-based adaptive learning rate steps ( e.g. , in common implementations of Adam ( Kingma & Ba , 2015 ) ) , by decoupling the regularization from the loss and applying it after the learning rate computation ( Loshchilov & Hutter , 2019 ) . Data Augmentation : A different treatment of the overfitting phenomenon relies on enriching the training dataset via instance augmentation . The literature on data augmentation is vast , especially for image data , ranging from basic image manipulations ( e.g. , geometric transformations , or mixing images ) up to parametric augmentation strategies such as adversarial and controller-based methods ( Shorten & Khoshgoftaar , 2019 ) . For example , Cut-Out ( Devries & Taylor , 2017 ) proposes to mask a subset of input features ( e.g. , pixel patches for images ) for ensuring that the predictions remain invariant to distortions in the input space . Along similar lines , Mix-Up ( Zhang et al. 
, 2018 ) generates new instances as a linear span of pairs of training examples , while Cut-Mix ( Yun et al. , 2019 ) suggests super-positions of instance pairs with mutually-exclusive pixel masks . A recent technique , called Aug-Mix ( Hendrycks et al. , 2020 ) , generates instances by sampling chains of augmentation operations . On the other hand , the direction of reinforcement learning ( RL ) for augmentation policies was elaborated by Auto-Augment ( Cubuk et al. , 2019 ) , followed by a technique that speeds up the training of the RL policy ( S.Lim et al. , 2019 ) . Last but not least , adversarial attack strategies ( e.g. , FGSM ( Goodfellow et al. , 2015 ) ) generate synthetic examples with minimal perturbations , which are employed in training robust models ( Madry et al. , 2018 ) . Model Averaging : Ensembled machine learning models have been shown to reduce variance and act as regularizers ( Polikar , 2012 ) . A popular ensemble neural network with shared weights among its base models is Drop-Out ( Srivastava et al. , 2014 ) , which was extended to a variational version with a Gaussian posterior of the model parameters ( Kingma et al. , 2015 ) . A follow-up work that is known as Mix-Out ( Lee et al. , 2020 ) extends Drop-Out by statistically fusing the parameters of two base models . Furthermore , ensembles can be created using models from the local optima discovered along a single convergence procedure ( Huang et al. , 2016 ) . Structural and Linearization : One strategy of regularizing deep learning models is to discover dedicated neural structures that generalize on particular tasks , such as image classification or Natural Language Processing ( NLP ) . In that context , ResNet adds skip connections across layers ( He et al. , 2016 ) , while the Inception model computes latent representations by aggregating diverse convolutional filter sizes ( Szegedy et al. , 2017 ) . 
The attention mechanism gave rise to the popular Transformer architecture in the realm of NLP ( Vaswani et al. , 2017 ) . Recently , EfficientNet is an architecture that easily scales deep convolutional neural networks by controlling only a few hyperparameters ( Tan & Q.Le , 2019 ) . Besides the aforementioned manually-designed architectures , the stream of Neural Architecture Search ( Elsken et al. , 2019 ) focuses on exploring neural connectivity graphs for finding the optimal architectures via reinforcement learning ( Zoph & Le , 2017 ) , black-box search ( Real et al. , 2019 ) or differentiable solvers ( Liu et al. , 2019 ) . A recent trend adds a dosage of linearization to deep models , where skip connections transfer embeddings from previous less non-linear layers ( He et al. , 2016 ; Huang et al. , 2017 ) . Along similar lines , the Shake-Shake regularization deploys skip connections in parallel convolutional blocks and aggregates the parallel representations through affine combinations ( Gastaldi , 2017 ) , while Shake-Drop extends this mechanism to a larger number of CNN architectures ( Yamada et al. , 2018 ) . Implicit : The last family of regularizers broadly encapsulates methods which do not directly propose novel regularization techniques but have an implicit regularization effect as a virtue of their ‘ modus operandi ’ ( Arora et al. , 2019 ) . For instance , Batch Normalization improves generalization by reducing the internal covariate shifts ( Ioffe & Szegedy , 2015 ) , while early stopping of the optimization procedure also yields a similar generalization effect ( Yao et al. , 2007 ) . On the other hand , stabilizing the convergence of the training routine is another implicit regularization , for instance by introducing learning rate scheduling schemes ( Loshchilov & Hutter , 2017 ) . 
The recent strategy of stochastic weight averaging relies on averaging parameter values from the local optima encountered along the sequence of optimization steps (Izmailov et al., 2018), while another approach conducts updates in the direction of a few 'lookahead' steps (Zhang et al., 2019). Positioning in the realm of AutoML: In contrast to the prior literature, we do not propose a new individual regularization method, but empirically identify the superiority of learning regularization cocktails among a set of existing regularizers from the aforementioned categories. We train dataset-specific cocktails as a hyperparameter optimization (HPO) task (Feurer & Hutter, 2019). In that regard, our work is positioned in the realm of AutoML and is a special case of combined algorithm selection and hyperparameter optimization (Thornton et al., 2013). We learn the regularization cocktails and optimize the joint hyperparameter configuration space by means of BOHB (Falkner et al., 2018b), a variation of Hyperband (Li et al., 2017) with model-based surrogates and one of the current state-of-the-art approaches for efficient HPO. 3 MIXING THE REGULARIZATION COCKTAIL. 3.1 THE RECIPE AS PROBLEM STATEMENT. A training set is composed of features $X^{(\mathrm{Train})}$ and targets $y^{(\mathrm{Train})}$, while the test dataset is denoted by $X^{(\mathrm{Test})}, y^{(\mathrm{Test})}$. A parameterized function approximates the targets as $\hat{y} = f(X; \theta)$, where the parameters $\theta$ are trained to minimize a differentiable loss function $\mathcal{L}$ as: $$\theta^* \in \arg\min_{\theta} \mathcal{L}\left(y^{(\mathrm{Train})}, f(X^{(\mathrm{Train})}; \theta)\right). \quad (1)$$ To generalize, i.e., to also minimize $\mathcal{L}(y^{(\mathrm{Test})}, f(X^{(\mathrm{Test})}; \theta))$, the parameters of $f$ are controlled with a regularization technique $\Omega$ that avoids overfitting to the peculiarities of the training data.
With a slight abuse of notation we denote by $f(X; \Omega(\theta; \lambda))$ the predictions of the model $f$ whose parameters $\theta$ are optimized under the regime of the regularization method $\Omega(\cdot; \lambda)$, where $\lambda \in \Lambda$ are the hyperparameters of $\Omega$. The training data is further divided into training and validation splits (Footnote 1: For simplicity, we only discuss the hold-out validation scheme here, but in principle any other validation scheme, such as cross-validation or bootstrap sampling, would be possible.), the latter denoted by $X^{(\mathrm{Val})}, y^{(\mathrm{Val})}$, such that $\lambda$ can be tuned on the validation loss via the following hyperparameter optimization objective: $$\lambda^* \in \arg\min_{\lambda \in \Lambda} \mathcal{L}\left(y^{(\mathrm{Val})}, f(X^{(\mathrm{Val})}; \Omega(\theta^*; \lambda))\right), \quad (2)$$ $$\text{s.t.}\quad \theta^* \in \arg\min_{\theta} \mathcal{L}\left(y^{(\mathrm{Train})}, f(X^{(\mathrm{Train})}; \Omega(\theta; \lambda))\right). \quad (3)$$ While the search for optimal hyperparameters $\lambda$ is an active field of research in the realm of AutoML (Hutter et al., 2018), the choice of the regularizer $\Omega$ mostly remains an ad-hoc practice, where practitioners select a few combinations among popular regularizers (Dropout, L2, Batch Normalization, etc.). In contrast to prior studies, we hypothesize that the optimal regularizer is a cocktail mixture of a large set of regularization methods, all applied simultaneously with different strengths (i.e., dataset-specific hyperparameters). Given a set of $K$ regularizers $\{\Omega^{(k)}(\cdot; \lambda^{(k)})\}_{k=1}^{K} := \{\Omega^{(1)}(\cdot; \lambda^{(1)}), \dots, \Omega^{(K)}(\cdot; \lambda^{(K)})\}$, each with its own hyperparameters $\lambda^{(k)} \in \Lambda^{(k)}, \forall k \in \{1, \dots, K\}$, the problem of finding the optimal cocktail of regularizers is: $$\lambda^* \in \arg\min_{\lambda^{(1)} \in \Lambda^{(1)}, \dots, \lambda^{(K)} \in \Lambda^{(K)}} \mathcal{L}\left(y^{(\mathrm{Val})}, f(X^{(\mathrm{Val})}; \{\Omega^{(k)}(\theta^*; \lambda^{(k)})\}_{k=1}^{K})\right) \quad (4)$$ $$\text{s.t.}\quad \theta^* \in \arg\min_{\theta} \mathcal{L}\left(y^{(\mathrm{Train})}, f(X^{(\mathrm{Train})}; \{\Omega^{(k)}(\theta; \lambda^{(k)})\}_{k=1}^{K})\right). \quad (5)$$ The intuitive interpretation of Equations 4-5 is searching for the optimal hyperparameters $\lambda$ (i.e., strengths) of the cocktail's regularizers using the validation set (Equation 4), given that the optimal prediction model parameters $\theta$ are trained under the regime of all the regularizers being applied jointly (Equation 5). We stress that the hyperparameters $\lambda^{(k)}$ include a conditional hyperparameter controlling whether the $k$-th regularizer is applied at all, or skipped. Therefore, the best cocktail might consist of only a subset of the regularizers.
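The bilevel search of Equations 4-5, including the conditional on/off hyperparameters, can be illustrated with a toy sketch. Everything below is illustrative only and not the paper's setup: plain random search stands in for BOHB, and two stand-in regularizers (an L2 penalty solved in closed form as ridge regression, plus Gaussian input-noise augmentation) stand in for the 13 regularizers of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with a train/validation split
# (stand-ins for X^(Train) and X^(Val) in Equations 4-5).
n, d = 60, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=2.0, size=n)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

def fit(Xt, yt, cfg, rng):
    """Inner problem (Eq. 5): train theta under the active regularizers."""
    Xt = Xt.copy()
    if cfg["noise_on"]:  # data-augmentation regularizer (Gaussian input noise)
        Xt = Xt + rng.normal(scale=cfg["noise_scale"], size=Xt.shape)
    lam = cfg["l2_strength"] if cfg["l2_on"] else 0.0
    # Closed-form ridge solution handles the L2 regularizer.
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ yt)

def val_loss(cfg, rng):
    """Outer objective (Eq. 4): validation loss of the trained model."""
    theta = fit(Xtr, ytr, cfg, rng)
    return float(np.mean((Xva @ theta - yva) ** 2))

def sample_cfg(rng):
    """One point of the cocktail space: conditional on/off flags + strengths."""
    return {
        "l2_on": bool(rng.integers(2)),
        "l2_strength": float(10 ** rng.uniform(-2, 2)),
        "noise_on": bool(rng.integers(2)),
        "noise_scale": float(10 ** rng.uniform(-2, 0)),
    }

baseline_cfg = {"l2_on": False, "l2_strength": 0.0,
                "noise_on": False, "noise_scale": 0.0}
best_cfg, best_loss = baseline_cfg, val_loss(baseline_cfg, rng)
baseline = best_loss
for _ in range(50):  # random search as a crude stand-in for BOHB
    cfg = sample_cfg(rng)
    loss = val_loss(cfg, rng)
    if loss < best_loss:
        best_cfg, best_loss = cfg, loss

print("unregularized val loss:", baseline, "best cocktail val loss:", best_loss)
```

Since the unregularized configuration is included as the starting point, the selected cocktail is never worse than no regularization on the validation split; with dataset-specific data, the selected subset of regularizers changes, which is exactly the dataset dependence the paper argues for.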
This work takes a step towards understanding the effect of automated selection of regularisation techniques and analyses the results across 42 structured datasets. It defines a search space over 13 regularisation techniques and employs a flavour of the Bayesian Optimisation + Hyperband approach to find an optimal combination of regularisers. It concludes by substantiating three claims with corresponding experiments.
Regularization Cocktails for Tabular Datasets
This paper provides an empirical study of combining different regularizers. Fourteen regularizers, including batch norm, weight decay, etc., are considered. The authors use BOHB (Falkner et al., 2018) to optimize both whether each regularizer is active and additional regularizer-specific hyperparameters. Using 40 tabular datasets, they show that mixtures nearly always outperform tuning a single regularizer, and that the benefits of regularization improve for smaller datasets.
Cross-Domain Few-Shot Learning by Representation Fusion
1 INTRODUCTION. Currently, deep learning is criticized because it is data hungry, has limited capacity for transfer, insufficiently integrates prior knowledge, and presumes a largely stable world (Marcus, 2018). In particular, these problems appear after a domain shift, that is, a change of the input-target distribution. A domain shift forces deep learning models to adapt. The goal is to exploit models that were trained on the typically rich original data for solving tasks from the new domain with much less data. Examples of domain shifts are new users or customers, new products and product lines, new diseases (e.g., adapting from SARS to COVID-19), new images from another field (e.g., from cats to dogs or from cats to bicycles), new social behaviors after societal change (e.g., the introduction of cell phones, a pandemic), self-driving cars in new cities or countries (e.g., from European countries to Arabic countries), and robot manipulation of new objects. Domain shifts are often tackled by meta-learning (Schmidhuber, 1987; Bengio et al., 1990; Hochreiter et al., 2001), since it exploits already acquired knowledge to adapt to new data. One prominent application of meta-learning dealing with domain shifts is few-shot learning, since, typically, much less data is available from the new domain than from the original domain. Meta-learning methods perform well on small domain shifts, such as new target classes with similar inputs. However, larger domain shifts are still challenging for current approaches. Large domain shifts lead to inputs which are considerably different from the original inputs and possess different high-level concepts. Nonetheless, low-level concepts are often still shared between the inputs of the original domain and the inputs of the new domain. For images, such shared low-level concepts can be edges, textures, small shapes, etc.
One way of obtaining low-level concepts is to train a new deep learning model from scratch, merging the new data with the original data. However, although models of the original domain are often available, the original data on which the models were trained often is not. This might have several reasons, e.g., the data owner no longer grants access to the data, the General Data Protection Regulation (GDPR) no longer allows access to the data, IP restrictions prevent access to the data, sensitive data items must not be touched anymore (e.g., phase III drug candidates), or data is difficult to extract again. We therefore suggest exploiting models of the original data directly, by accessing not only high-level but also low-level abstractions. In this context, we propose a cross-domain few-shot learning method that extracts information from different levels of abstraction in a deep neural network. Representation fusion. Deep learning constructs neural network models that represent the data at multiple levels of abstraction (LeCun et al., 2015). We introduce representation fusion, the concept of unifying and merging information from different levels of abstraction. Representation fusion uses a fast and adaptive system for detecting relevant information at different abstraction levels of a deep neural network, which, as we will show, allows solving versatile and complex cross-domain tasks. CHEF. We propose cross-domain ensemble few-shot learning (CHEF), which achieves representation fusion by an ensemble of Hebbian learners built upon a trained network. CHEF naturally addresses the problem of domain shifts, which occur in a wide range of real-world applications.
Furthermore, since CHEF only builds on representation fusion, it can adapt to new characteristics of tasks such as unbalanced datasets, classes with few examples, a change of the measurement method, new measurements in unseen ranges, new kinds of labeling errors, and more. The use of simple Hebbian learners allows applying CHEF without needing to backpropagate information through the backbone network. The main contributions of this paper are: • We introduce representation fusion as the concept of unifying and merging information from different layers of abstraction. • We introduce CHEF as our new cross-domain few-shot learning method that builds on representation fusion. We show that using different layers of abstraction allows one to successfully tackle various few-shot learning tasks across a wide range of different domains. CHEF does not need to backpropagate information through the backbone network. • We apply CHEF to various cross-domain few-shot tasks and obtain several state-of-the-art results. We further apply CHEF to cross-domain real-world applications from drug discovery, where we outperform all competitors. Related work. Representation fusion builds on learning a meaningful representation (Bengio et al., 2013; Girshick et al., 2014) at multiple levels of abstraction (LeCun et al., 2015; Schmidhuber, 2015). The concept of using representations from different layers of abstraction has been used in CNN architectures (LeCun et al., 1998) such as Huang et al. (2017); Rumetshofer et al. (2018); Hofmarcher et al. (2019), in CNNs for semantic segmentation in the form of multi-scale context pooling (Yu & Koltun, 2015; Chen et al., 2018), and in the form of context capturing and symmetric upsampling (Ronneberger et al., 2015). Learning representations from different domains has been explored by Federici et al. (2020); Tschannen et al.
(2020) under the viewpoint of mutual information optimization. Work on domain shifts discusses the problem that new inputs are considerably different from the original inputs (Kouw & Loog, 2019; Wouter, 2018; Webb et al., 2018; Gama et al., 2014; Widmer & Kubat, 1996). Domain adaptation (Pan & Yang, 2009; Ben-David et al., 2010) overcomes this problem by, e.g., reweighting the original samples (Jiayuan et al., 2007), learning features that are invariant to a domain shift (Ganin et al., 2016; Xu et al., 2019), or learning a classifier in the new domain. Domain adaptation where only little data is available in the new domain (Ben-David et al., 2010; Lu et al., 2020) is called cross-domain few-shot learning (Guo et al., 2019; Lu et al., 2020; Tseng et al., 2020), which is an instance of the general few-shot learning setting (Fei-Fei et al., 2006). Few-shot learning can be roughly divided into three approaches (Lu et al., 2020; Hospedales et al., 2020): (i) augmentation, (ii) metric learning, and (iii) meta-learning. For (i), where the idea is to learn an augmentation to produce more than the few samples available, supervised (Dixit et al., 2017; Kwitt et al., 2016) and unsupervised (Hariharan & Girshick, 2017; Pahde et al., 2019; Gao et al., 2018) methods are considered. For (ii), approaches aim to learn a pairwise similarity metric under which similar samples obtain high similarity scores (Koch et al., 2015; Ye & Guo, 2018; Hertz et al., 2006). For (iii), methods comprise embedding and nearest-neighbor approaches (Snell et al., 2017b; Sung et al., 2018; Vinyals et al., 2016) (Footnote 1: Our implementation is available at github.com/tomte812/chef.), finetuning approaches (Finn et al., 2017; Rajeswaran et al., 2019; Ravi & Larochelle, 2017; Andrychowicz et al., 2016), and parametrized approaches (Gidaris & Komodakis, 2018; Ye et al., 2020; Lee et al., 2019; Yoon et al.
, 2019 ; Mishra et al. , 2018 ; Hou et al. , 2019 ; Rusu et al. , 2018 ) . Few-shot classification under domain shifts for metric-based methods has been discussed in Tseng et al . ( 2020 ) . Ensemble methods for few-shot learning have been applied in Dvornik et al . ( 2019 ) , where an ensemble of distance-based classifiers is designed from different networks . In contrast , our method builds an ensemble of different layers from the same network . Hebbian learning as part of a few-shot learning method has been implemented in Munkhdalai & Trischler ( 2018 ) , where fast weights that are used for binding labels to representations are generated by a Hebbian learning rule . 2 CROSS-DOMAIN FEW-SHOT LEARNING . Domain shifts . We assume data ( x , y ) , where x ∈ X is the input data and y ∈ Y is the target data . A domain is a distribution p over X × Y assigning each pair ( x , y ) a probability p ( x , y ) . A domain shift is a change from p ( x , y ) to p̃ ( x , y ) . We measure the magnitude of the domain shift by a distance d ( p , p̃ ) between the distributions p and p̃ . We consider four types of domain shifts ( Kouw & Loog , 2019 ; Wouter , 2018 ; Webb et al. , 2018 ; Gama et al. , 2014 ; Widmer & Kubat , 1996 ) :
• Prior shift ( small domain shift ) : p ( y ) is changed to p̃ ( y ) , while p ( x | y ) stays the same . For example , when new classes are considered ( the typical case in few-shot learning ) : p ( x , y ) = p ( y ) p ( x | y ) and p̃ ( x , y ) = p̃ ( y ) p ( x | y ) .
• Covariate shift ( large domain shift ) : p ( x ) is changed to p̃ ( x ) , while p ( y | x ) stays the same . For example , when new inputs are considered , which occurs when going from color to grayscale images , using a new measurement device , or looking at traffic data from different continents : p ( x , y ) = p ( x ) p ( y | x ) and p̃ ( x , y ) = p̃ ( x ) p ( y | x ) .
• Concept shift : p ( y | x ) is changed to p̃ ( y | x ) , while p ( x ) stays the same . For example , when including new aspects changes the decision boundaries : p ( x , y ) = p ( x ) p ( y | x ) and p̃ ( x , y ) = p ( x ) p̃ ( y | x ) .
• General domain shift : a shift from p ( x , y ) to p̃ ( x , y ) . For example , going from Imagenet data to grayscale X-ray images ( the typical case in cross-domain datasets ) .
Domain shift for images . We consider the special case that the input x is an image . In general , domain shifts can be measured on the raw image distributions , e.g . by using the H-divergence ( Ben-David et al. , 2010 ) . However , distances between raw image distributions were shown to be less meaningful in computer vision tasks than abstract representations of deep neural networks ( Heusel et al. , 2017 ; Salimans et al. , 2016 ) . We approximate the distance between the joint distributions d ( p ( x , y ) , p̃ ( x , y ) ) by the distance between the marginals d ( p ( x ) , p̃ ( x ) ) , which is exact in the case of the covariate shift for certain choices of d ( · , · ) , e.g . the Jensen-Shannon divergence . To measure the distance between the marginals d ( p ( x ) , p̃ ( x ) ) we use the Fréchet Inception Distance ( FID ; Heusel et al. , 2017 ) , i.e . the Wasserstein-2 distance , under a Gaussian assumption , of the features that the respective images activate in an Inception v3 network ( Szegedy et al. , 2016 ) . The FID has proven reliable for measuring the performance of Generative Adversarial Networks ( Goodfellow et al. , 2014 ) . Cross-domain few-shot learning . Large domain shifts lead to inputs which are considerably different from the original inputs . As a result , the model trained on the original domain will not work anymore on the new domain . To overcome this problem , domain adaptation techniques are applied ( Pan & Yang , 2009 ; Ben-David et al. , 2010 ) . Domain adaptation can be achieved in several ways , e.g . by reweighting the original samples ( Jiayuan et al. , 2007 ) .
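Under the Gaussian assumption the FID described above reduces to the closed-form Fréchet distance between two Gaussians, ||μ1 − μ2||² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^{1/2}). A minimal sketch; the feature arrays here are random stand-ins for precomputed Inception features, and the function name is our own:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2).real  # drop tiny imaginary parts
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Fit Gaussians to (stand-in) Inception features of the two domains.
feats_p = np.random.RandomState(0).randn(500, 64)        # source-domain features
feats_q = np.random.RandomState(1).randn(500, 64) + 0.5  # shifted target features
mu_p, sig_p = feats_p.mean(0), np.cov(feats_p, rowvar=False)
mu_q, sig_q = feats_q.mean(0), np.cov(feats_q, rowvar=False)
fid = frechet_distance(mu_p, sig_p, mu_q, sig_q)  # larger shift -> larger FID
```

A distribution compared with itself gives a distance near zero, which is a quick sanity check for the square-root term.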
Another possibility is to learn a classifier in the new domain . Domain adaptation where only little data is available in the new domain ( Ben-David et al. , 2010 ) to learn from is called cross-domain few-shot learning ( Guo et al. , 2019 ; Lu et al. , 2020 ; Tseng et al. , 2020 ) . In an N -shot K-way few-shot learning setting , the training set ( in meta-learning also called one episode ) consists of N samples for each of the K classes .
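The N-shot K-way episode definition above can be made concrete; a small sketch with an illustrative toy pool (function and data names are our own, not from the paper):

```python
import random
from collections import defaultdict

def sample_episode(pool, n_shot, k_way, n_query=2, seed=0):
    """Sample one N-shot K-way episode: a support set with n_shot examples
    for each of k_way classes, plus a disjoint query set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in pool:
        by_class[y].append(x)
    classes = rng.sample(sorted(by_class), k_way)   # pick the K "ways"
    support, query = [], []
    for y in classes:
        xs = rng.sample(by_class[y], n_shot + n_query)
        support += [(x, y) for x in xs[:n_shot]]    # N shots per class
        query += [(x, y) for x in xs[n_shot:]]      # held-out queries
    return support, query

# Toy labeled pool: 10 classes, 8 examples each.
pool = [(f"img_{y}_{i}", y) for y in range(10) for i in range(8)]
support, query = sample_episode(pool, n_shot=5, k_way=3)
# len(support) == 5 shots x 3 ways == 15
```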
In this paper, the authors focus on cross-domain few-shot learning in the case of large source-target domain shifts. In particular, a new Cross-domain Hebbian Ensemble Few-shot (CHEF) learning method is proposed that performs representation fusion using an ensemble of Hebbian learners on different layers of a DNN trained on the source domain. The proposed CHEF method is validated on classification benchmark datasets with smaller domain shifts (miniImagenet and tieredImagenet) and larger domain shifts (drug discovery, ChEMBL20), and it can outperform related SOTA methods, especially with larger shifts.
SP:b336aead05ceba504836cebfbf4f36516c94ca09
Cross-Domain Few-Shot Learning by Representation Fusion
1 INTRODUCTION . Currently , deep learning is criticized because it is data hungry , has limited capacity for transfer , insufficiently integrates prior knowledge , and presumes a largely stable world ( Marcus , 2018 ) . In particular , these problems appear after a domain shift , that is , a change of the input-target distribution . A domain shift forces deep learning models to adapt . The goal is to exploit models that were trained on the typically rich original data for solving tasks from the new domain with much less data . Examples of domain shifts are new users or customers , new products and product lines , new diseases ( e.g . adapting from SARS to COVID-19 ) , new images from another field ( e.g . from cats to dogs or from cats to bicycles ) , new social behaviors after societal change ( e.g . introduction of cell phones , pandemic ) , self-driving cars in new cities or countries ( e.g . from European countries to Arabic countries ) , and robot manipulation of new objects . Domain shifts are often tackled by meta-learning ( Schmidhuber , 1987 ; Bengio et al. , 1990 ; Hochreiter et al. , 2001 ) , since it exploits already acquired knowledge to adapt to new data . One prominent application of meta-learning dealing with domain shifts is few-shot learning , since , typically , much less data is available from the new domain than from the original domain . Meta-learning methods perform well on small domain shifts like new target classes with similar inputs . However , larger domain shifts are still challenging for current approaches . Large domain shifts lead to inputs which are considerably different from the original inputs and possess different high-level concepts . Nonetheless , low-level concepts are often still shared between the inputs of the original domain and the inputs of the new domain . For images , such shared low-level concepts can be edges , textures , small shapes , etc .
One way of obtaining low-level concepts is to train a new deep learning model from scratch , where the new data is merged with the original data . However , although models of the original domain are often available , the original data , which the models were trained on , often are not . This might have several reasons , e.g . the data owner no longer grants access to the data , the General Data Protection Regulation ( GDPR ) no longer allows access to the data , IP restrictions prevent access to the data , sensitive data items must not be touched anymore ( e.g . phase III drug candidates ) , or the data is difficult to extract again . We therefore suggest exploiting models of the original data directly by accessing not only high-level but also low-level abstractions . In this context , we propose a cross-domain few-shot learning method extracting information from different levels of abstraction in a deep neural network . Representation fusion . Deep learning constructs neural network models that represent the data at multiple levels of abstraction ( LeCun et al. , 2015 ) . We introduce representation fusion , which is the concept of unifying and merging information from different levels of abstraction . Representation fusion uses a fast and adaptive system for detecting relevant information at different abstraction levels of a deep neural network , which , as we show , allows solving versatile and complex cross-domain tasks . CHEF . We propose cross-domain Hebbian ensemble few-shot learning ( CHEF ) , which achieves representation fusion by an ensemble of Hebbian learners built upon a trained network . CHEF naturally addresses the problem of domain shifts , which occur in a wide range of real-world applications .
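An ensemble of Hebbian learners over frozen representations can be illustrated with a textbook Hebb rule (ΔW = η · y_onehot · xᵀ); this is a generic sketch, not the paper's exact update, and all shapes, names, and the score-summing fusion are illustrative:

```python
import numpy as np

def hebbian_fit(feats, labels, n_classes, lr=0.1):
    """One Hebbian pass: dW = lr * y_onehot x^T. No gradients flow into
    the (frozen) backbone that produced `feats`."""
    onehot = np.eye(n_classes)[labels]
    return lr * onehot.T @ feats            # co-activity strengthens weights

rng = np.random.RandomState(0)
labels = np.repeat(np.arange(3), 10)        # 3 classes, 10 samples each
proto = np.eye(3)                           # class-coded directions
# Frozen "layer" activations at two abstraction levels (shapes illustrative).
feats_low = proto[labels] @ rng.randn(3, 16) + 0.3 * rng.randn(30, 16)
feats_high = proto[labels] @ rng.randn(3, 8) + 0.3 * rng.randn(30, 8)

# One Hebbian learner per layer; fuse representations by summing their scores.
scores = sum(hebbian_fit(f, labels, 3) @ f.T for f in (feats_low, feats_high))
pred = scores.argmax(axis=0)                # ensemble prediction per sample
```

The point of the sketch is that each readout is fit without backpropagating through the backbone, and fusion is just an aggregation of per-layer scores.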
Furthermore , since CHEF only builds on representation fusion , it can adapt to new characteristics of tasks like unbalanced data sets , classes with few examples , a change of the measurement method , new measurements in unseen ranges , new kinds of labeling errors , and more . Using simple Hebbian learners allows applying CHEF without needing to backpropagate information through the backbone network . The main contributions of this paper are :
• We introduce representation fusion as the concept of unifying and merging information from different layers of abstraction .
• We introduce CHEF as our new cross-domain few-shot learning method that builds on representation fusion . We show that using different layers of abstraction allows one to successfully tackle various few-shot learning tasks across a wide range of different domains . CHEF does not need to backpropagate information through the backbone network .
• We apply CHEF to various cross-domain few-shot tasks and obtain several state-of-the-art results . We further apply CHEF to cross-domain real-world applications from drug discovery , where we outperform all competitors .
Related work . Representation fusion builds on learning a meaningful representation ( Bengio et al. , 2013 ; Girshick et al. , 2014 ) at multiple levels of abstraction ( LeCun et al. , 2015 ; Schmidhuber , 2015 ) . The concept of using representations from different layers of abstraction has been used in CNN architectures ( LeCun et al. , 1998 ) such as Huang et al . ( 2017 ) ; Rumetshofer et al . ( 2018 ) ; Hofmarcher et al . ( 2019 ) , in CNNs for semantic segmentation in the form of multi-scale context pooling ( Yu & Koltun , 2015 ; Chen et al. , 2018 ) , and in the form of context capturing and symmetric upsampling ( Ronneberger et al. , 2015 ) . Learning representations from different domains has been explored by Federici et al . ( 2020 ) ; Tschannen et al .
( 2020 ) under the viewpoint of mutual information optimization .
This paper primarily deals with cross-domain few-shot learning. Under this setting, there is a large domain shift going from the meta-train dataset to the few-shot datasets. Inspired by previous work, the authors argue that high-level concepts might not be useful in this setting but low-level concepts like edges, textures and shapes can be utilized. They propose a Cross-domain Hebbian Ensemble Few-shot (CHEF) learner that learns an ensemble of classifiers at multiple levels of a deep neural network, thus making use of both low- and high-level concepts. Experimental results show that CHEF does better, in most cases, than learning a separate classifier at a given level. They show results under the cross-domain and the standard few-shot setting.
Discrete Graph Structure Learning for Forecasting Multiple Time Series
1 INTRODUCTION . Time series data are widely studied in science and engineering domains that involve temporal measurements . Time series forecasting is concerned with the prediction of future values based on observed ones in the past . It has played important roles in climate studies , market analysis , traffic control , and energy grid management ( Makridakis et al. , 1997 ) and has inspired the development of various predictive models that capture the temporal dynamics of the underlying system . These models range from early autoregressive approaches ( Hamilton , 1994 ; Asteriou & Hall , 2011 ) to the recent deep learning methods ( Seo et al. , 2016 ; Li et al. , 2018 ; Yu et al. , 2018 ; Zhao et al. , 2019 ) . Analysis of univariate time series ( a single longitudinal variable ) has been extended to multivariate time series and multiple ( univariate or multivariate ) time series . Multivariate forecasting models find strong predictive power in stressing the interdependency ( and even causal relationship ) among the variables . The vector autoregressive model ( Hamilton , 1994 ) is an example of multivariate analysis , wherein the coefficient magnitudes offer hints into the Granger causality ( Granger , 1969 ) of one variable to another . For multiple time series , pairwise similarities or connections among them have also been explored to improve the forecasting accuracy ( Yu et al. , 2018 ) . An example is the traffic network where each node denotes a time series recording captured by a particular sensor . The spatial connections of the roads offer insights into how traffic dynamics propagate along the network . ( ∗This work was done while C. Shang was an intern at MIT-IBM Watson AI Lab , IBM Research . †To whom correspondence should be addressed . ) Several graph neural network ( GNN ) approaches ( Seo et al. , 2016 ; Li et al. , 2018 ; Yu et al. , 2018 ; Zhao et al.
, 2019 ) have been proposed recently to leverage the graph structure for forecasting all time series simultaneously . The graph structure , however , is not always available , or it may be incomplete . There could be several reasons , including the difficulty in obtaining such information or a deliberate shielding for the protection of sensitive information . For example , a data set comprising sensory readings of the nation-wide energy grid may be granted to specific users without disclosure of the grid structure . Such practical situations incentivize the automatic learning of the hidden graph structure jointly with the forecasting model . Because GNN approaches show promise in forecasting multiple interrelated time series , in this paper we are concerned with structure learning methods applicable to the downstream use of GNNs . A prominent example is the recent work of Franceschi et al . ( 2019 ) ( named LDS ) , which is a meta-learning approach that treats the graph as a hyperparameter in a bilevel optimization framework ( Franceschi et al. , 2017 ) . Specifically , let X_train and X_val denote the training and the validation sets of time series respectively , A ∈ { 0 , 1 }^{n×n} denote the graph adjacency matrix of the n time series , w denote the parameters used in the GNN , and L and F denote the loss functions used during training and validation respectively ( which may not be identical ) . LDS formulates the problem as learning the probability matrix θ ∈ [ 0 , 1 ]^{n×n} , which parameterizes the element-wise Bernoulli distribution from which the adjacency matrix A is sampled :

min_θ E_{A ∼ Ber(θ)} [ F ( A , w ( θ ) , X_val ) ] , s.t . w ( θ ) = argmin_w E_{A ∼ Ber(θ)} [ L ( A , w , X_train ) ] . ( 1 )

Formulation ( 1 ) gives a bilevel optimization problem . The constraint ( which by itself is an optimization problem ) defines the GNN weights as a function of the given graph , so that the objective is to optimize over such a graph only .
Note that , for differentiability , one does not directly operate on the discrete graph adjacency matrix A , but on the continuous probabilities θ instead . LDS has two drawbacks . First , its computation is expensive . The derivative of w with respect to θ is computed by applying the chain rule on a recursive-dynamics surrogate of the inner optimization argmin . Applying the chain rule on this surrogate is equivalent to differentiating an RNN , which is either memory intensive if done in the reverse mode or time consuming if done in the forward mode , when unrolling a deep dynamics . Second , it is challenging to scale . The matrix θ has Θ ( n² ) entries to optimize and thus the method is hard to scale to increasingly more time series . In light of the challenges of LDS , we instead advocate a unilevel optimization :

min_w E_{A ∼ Ber(θ(w))} [ F ( A , w , X_train ) ] . ( 2 )

Formulation ( 2 ) trains the GNN model as usual , except that the probability matrix θ , which parameterizes the distribution from which A is sampled , is itself parameterized . We absorb these parameters , together with the GNN parameters , into the notation w. We still use a validation set X_val for the usual hyperparameter tuning , but these hyperparameters are not θ as treated by ( 1 ) . In fact , formulation ( 1 ) may need a second validation set to tune other hyperparameters . The major distinction of our approach from LDS is the parameterization θ ( w ) , as opposed to an inner optimization w ( θ ) . In our approach , a modeler has the freedom to design the parameterization and better control the number of parameters as n² increases . To this end , time series representation learning and link prediction techniques offer ample inspiration for modeling . In contrast , LDS is more agnostic as no modeling is needed . The effort , instead , lies in the nontrivial treatment of the inner optimization ( in particular , its differentiation ) . As such , our approach is advantageous in two regards .
First , its computation is less expensive , because the gradient computation of a unilevel optimization is straightforward and efficient , and implementations are mature . Second , it scales better , because the number of parameters does not grow quadratically with the number of time series . We coin our approach GTS ( short for “ graph for time series ” ) , signaling the usefulness of graph structure learning for enhancing time series forecasting . It is important to note that the end purpose of the graph is to improve forecasting quality , rather than identifying causal relationships among the series or recovering the ground-truth graph , if any . While causal discovery of multiple scalar variables is an established field , identifying causality among multiple multivariate time series requires a nontrivial extension that spans beyond the current study . On the other hand , the graph , either learned or preexisting , serves as additional information that helps the model better capture global signals that apply to each series . There does not exist a golden measure for the quality of the learned graph except forecasting accuracy . For example , the traffic network does not necessarily offer the best pairwise relationship a GNN can exploit for forecasting traffic series . Nevertheless , to robustify GTS we incorporate regularization that penalizes significant departure from one ’ s prior belief . If a certain “ ground-truth ” graph is believed , the learned graph will be a healthy variation of it for a more accurate forecast . 2 RELATED WORK . Time series forecasting has been studied for decades by statisticians . It is out of the scope of this paper to comprehensively survey the literature ; instead , we focus on recent developments in the deep learning context .
Early textbook methods include ( vector ) autoregressive models ( Hamilton , 1994 ) , autoregressive integrated moving average ( ARIMA ) ( Asteriou & Hall , 2011 ) , hidden Markov models ( HMM ) ( Baum & Petrie , 1966 ) , and Kalman filters ( Zarchan & Musoff , 2000 ) . Generally speaking , these are linear models that use a window of the past information to predict the next time step , although nonlinear versions with parameterization were subsequently developed . A notable nonlinear extension was the RNN ( Williams et al. , 1986 ) , which later evolved into the LSTM ( Hochreiter & Schmidhuber , 1997 ) , BiLSTM ( Schuster & Paliwal , 1997 ) , and GRU ( Cho et al. , 2014 ) , which addressed several limitations of the vanilla RNN , such as the vanishing gradient problem . These architectures are hard to parallelize because of the recurrent nature of the forward and backward computation . More recently , the Transformer ( Vaswani et al. , 2017 ) and BERT ( Devlin et al. , 2019 ) were developed to address parallelization , by introducing attention mechanisms that simultaneously digest past ( and future ) information . Although these models are more heavily used for sequence data in the context of natural language processing , they are readily applicable to time series as well ( Shih et al. , 2019 ; Li et al. , 2019 ) . Graph neural networks ( Zhang et al. , 2018 ; Zhou et al. , 2018 ; Wu et al. , 2019 ) emerged quickly in deep learning to handle graph-structured data . Typically , graph nodes are represented by feature vectors , but for the case of time series , a number of specialized architectures were recently developed ; see , e.g. , GCRN ( Seo et al. , 2016 ) , DCRNN ( Li et al. , 2018 ) , STGCN ( Yu et al. , 2018 ) , and TGCN ( Zhao et al. , 2019 ) . These architectures essentially combine the temporal recurrent processing with graph convolution to augment the representation learning of the individual time series .
Graph structure learning ( not necessarily for time series ) appears in various contexts , and thus methods span a broad spectrum . One field of study is probabilistic graphical models and causal inference , whereby the directed acyclic structure is enforced . Gradient-based approaches in this context include NOTEARS ( Zheng et al. , 2018 ) , DAG-GNN ( Yu et al. , 2019 ) , and GraN-DAG ( Lachapelle et al. , 2020 ) . On the other hand , a general graph may still be useful without resorting to causality . LDS ( Franceschi et al. , 2019 ) is a meta-learning approach that demonstrates improved performance on node classification tasks . MTGNN ( Wu et al. , 2020 ) parameterizes the graph as a degree-k graph , which is learned end-to-end with a GNN for forecasting time series . We , on the other hand , allow a more general structural prior for the graph . NRI ( Kipf et al. , 2018 ) adopts a latent-variable approach and learns a latent graph for forecasting system dynamics . Our approach is closely related to NRI , and we will compare with it in the following section after introducing the technical details . 3 METHOD . In this section , we present the proposed GTS method , elaborate the model parameterization , and describe the training technique . We also highlight the distinctions from NRI ( Kipf et al. , 2018 ) . Let us first settle the notations . Denote by X the training data , which is a three-dimensional tensor , with the three dimensions being feature , time , and the n series . Superscript refers to the series and subscript refers to time ; that is , X^i denotes the i-th series for all features and time , and X_t denotes the t-th time step for all features and series . There are in total S time steps for training . The model will use a window of T steps to forecast the next τ steps .
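The windowed setup just described can be sketched directly on the three-dimensional tensor (names and the axis order feature × time × series follow the text; the helper is illustrative):

```python
import numpy as np

def make_windows(X, T, tau):
    """Slice the full tensor X (feature, time, series) into training pairs:
    inputs X_{t+1 : t+T} and targets X_{t+T+1 : t+T+tau} for each valid t."""
    _, S, _ = X.shape  # S total time steps
    return [(X[:, t : t + T, :], X[:, t + T : t + T + tau, :])
            for t in range(S - T - tau + 1)]

X = np.random.RandomState(0).randn(2, 50, 5)  # 2 features, 50 steps, 5 series
pairs = make_windows(X, T=12, tau=3)
# each input window: (2, 12, 5); each target window: (2, 3, 5)
```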
[ Figure 1 : GTS architecture . A feature extractor and link predictor produce the learned structure θ from the entire data X ; a graph A is sampled from θ ; recurrent graph convolutions map the windowed data to the forecast . ] For each valid t , denote by X̂_{t+T+1 : t+T+τ} = f ( A , w , X_{t+1 : t+T} ) the model , which forecasts X̂_{t+T+1 : t+T+τ} from observations X_{t+1 : t+T} , exploiting the graph structure A and being parameterized by w . Using ℓ to denote the loss function between the prediction and the ground truth , a typical training objective reads

Σ_t ℓ ( f ( A , w , X_{t+1 : t+T} ) , X_{t+T+1 : t+T+τ} ) . ( 3 )

Three remaining details are the parameterization of A , the model f , and the loss ℓ .
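Objective ( 3 ) with a sampled graph can be sketched as follows. Here A is drawn via a binary-concrete (Gumbel) relaxation of the element-wise Bernoulli — an assumption on our part, not the paper's stated sampler — and the forecaster f is a trivial stand-in for the recurrent graph convolution:

```python
import numpy as np

def gumbel_bernoulli_sample(theta, temp=0.5, rng=np.random):
    """Relaxed sample from element-wise Bernoulli(theta) via the binary
    Gumbel (concrete) trick; differentiable in theta for temp > 0."""
    eps = 1e-9
    u = rng.uniform(eps, 1 - eps, size=theta.shape)
    logistic = np.log(u) - np.log(1 - u)                 # Logistic(0,1) noise
    logits = np.log(theta + eps) - np.log(1 - theta + eps)
    return 1.0 / (1.0 + np.exp(-(logits + logistic) / temp))

n = 4
theta = np.full((n, n), 0.5)                             # in GTS, produced by w
A = gumbel_bernoulli_sample(theta, rng=np.random.RandomState(0))

def f(A, X_in, tau):
    """Stand-in forecaster: graph-weighted average of the last step, repeated."""
    last = X_in[:, -1, :]                                # (feature, series)
    mixed = last @ (A / A.sum(1, keepdims=True)).T       # mix series over A
    return np.repeat(mixed[:, None, :], tau, axis=1)

X = np.random.RandomState(1).randn(2, 20, n)             # feature x time x series
T, tau = 8, 2
loss = sum(np.mean((f(A, X[:, t:t + T], tau) - X[:, t + T:t + T + tau]) ** 2)
           for t in range(20 - T - tau + 1))             # objective (3), squared loss
```

With temp > 0 the sampled entries lie strictly inside (0, 1), so gradients can flow back into theta; hardening to {0, 1} is a separate, optional step.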
The paper considers learning both graph structures and NNs for time series data, similar to the idea of LDS (Franceschi et al., 2019). Observing the computation and scalability issues with LDS, the authors propose a unilevel optimization w.r.t. the mean performance over the graph distribution. This is done via NNs that take the observed sequences as input and output a real matrix whose elements are then treated as weights for the Gumbel trick. NN structures, training procedure, etc. mostly follow existing works.
SP:de1248456feddd9a8f2617e49f913e6585a9951f
Discrete Graph Structure Learning for Forecasting Multiple Time Series
1 INTRODUCTION . Time series data are widely studied in science and engineering that involve temporal measurements . Time series forecasting is concerned with the prediction of future values based on observed ones in the past . It has played important roles in climate studies , market analysis , traffic control , and energy grid management ( Makridakis et al. , 1997 ) and has inspired the development of various predictive models that capture the temporal dynamics of the underlying system . These models range from early autoregressive approaches ( Hamilton , 1994 ; Asteriou & Hall , 2011 ) to the recent deep learning methods ( Seo et al. , 2016 ; Li et al. , 2018 ; Yu et al. , 2018 ; Zhao et al. , 2019 ) . Analysis of univariate time series ( a single longitudinal variable ) has been extended to multivariate time series and multiple ( univariate or multivariate ) time series . Multivariate forecasting models find strong predictive power in stressing the interdependency ( and even causal relationship ) among the variables . The vector autoregressive model ( Hamilton , 1994 ) is an example of multivariate analysis , wherein the coefficient magnitudes offer hints into the Granger causality ( Granger , 1969 ) of one variable to another . For multiple time series , pairwise similarities or connections among them have also been explored to improve the forecasting accuracy ( Yu et al. , 2018 ) . An example is the traffic network where each node denotes a time series recording captured by a particular sensor . The spatial connections of the roads offer insights into how traffic dynamics propagates along the network . Several graph neural ∗This work was done while C. Shang was an intern at MIT-IBM Watson AI Lab , IBM Research . †To whom correspondence should be addressed . network ( GNN ) approaches ( Seo et al. , 2016 ; Li et al. , 2018 ; Yu et al. , 2018 ; Zhao et al. 
, 2019) have been proposed recently to leverage the graph structure for forecasting all time series simultaneously. The graph structure, however, is not always available, or it may be incomplete. There could be several reasons, including the difficulty of obtaining such information or a deliberate shielding for the protection of sensitive information. For example, a data set comprising sensory readings of a nation-wide energy grid may be granted to specific users without disclosure of the grid structure. Such practical situations incentivize the automatic learning of the hidden graph structure jointly with the forecasting model. Because GNN approaches show promise in forecasting multiple interrelated time series, in this paper we are concerned with structure learning methods applicable to the downstream use of GNNs. A prominent example is the recent work of Franceschi et al. (2019) (named LDS), a meta-learning approach that treats the graph as a hyperparameter in a bilevel optimization framework (Franceschi et al., 2017). Specifically, let $X_{\text{train}}$ and $X_{\text{val}}$ denote the training and validation sets of time series, respectively, $A \in \{0,1\}^{n \times n}$ denote the graph adjacency matrix of the $n$ time series, $w$ denote the parameters used in the GNN, and $L$ and $F$ denote the loss functions used during training and validation, respectively (which may not be identical). LDS formulates the problem as learning the probability matrix $\theta \in [0,1]^{n \times n}$, which parameterizes the element-wise Bernoulli distribution from which the adjacency matrix $A$ is sampled:

$$\min_{\theta} \; \mathbb{E}_{A \sim \mathrm{Ber}(\theta)}\big[F(A, w(\theta), X_{\text{val}})\big], \quad \text{s.t.}\;\; w(\theta) = \operatorname*{argmin}_{w} \; \mathbb{E}_{A \sim \mathrm{Ber}(\theta)}\big[L(A, w, X_{\text{train}})\big]. \tag{1}$$

Formulation (1) gives a bilevel optimization problem. The constraint (which by itself is an optimization problem) defines the GNN weights as a function of the given graph, so that the objective is to optimize over such a graph only.
Note that for differentiability, one does not directly operate on the discrete graph adjacency matrix $A$, but on the continuous probabilities $\theta$ instead. LDS has two drawbacks. First, its computation is expensive. The derivative of $w$ with respect to $\theta$ is computed by applying the chain rule on a recursive-dynamics surrogate of the inner optimization argmin. Applying the chain rule on this surrogate is equivalent to differentiating an RNN, which is either memory intensive if done in the reverse mode or time consuming if done in the forward mode, when unrolling a deep dynamics. Second, it is challenging to scale: the matrix $\theta$ has $\Theta(n^2)$ entries to optimize, and thus the method is hard to scale to increasingly many time series. In light of the challenges of LDS, we instead advocate a unilevel optimization:

$$\min_{w} \; \mathbb{E}_{A \sim \mathrm{Ber}(\theta(w))}\big[F(A, w, X_{\text{train}})\big]. \tag{2}$$

Formulation (2) trains the GNN model as usual, except that the probabilities $\theta$ (which parameterize the distribution from which $A$ is sampled) are themselves parameterized. We absorb these parameters, together with the GNN parameters, into the notation $w$. We still use a validation set $X_{\text{val}}$ for the usual hyperparameter tuning, but these hyperparameters are not $\theta$ as treated by (1); in fact, formulation (1) may need a second validation set to tune other hyperparameters. The major distinction of our approach from LDS is the parameterization $\theta(w)$, as opposed to an inner optimization $w(\theta)$. In our approach, a modeler has the freedom to design the parameterization and can better control the number of parameters as $n^2$ increases. To this end, time series representation learning and link prediction techniques offer ample inspiration for modeling. In contrast, LDS is more agnostic, as no modeling is needed; the effort lies instead in the nontrivial treatment of the inner optimization (in particular, its differentiation). As such, our approach is advantageous in two regards.
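Formulation (2) leaves the modeler free to design $\theta(w)$. As a minimal illustrative sketch (not the paper's actual parameterization; the embedding-plus-sigmoid link predictor here is an assumption), one can derive link probabilities from learnable per-series embeddings and sample a binary graph from the element-wise Bernoulli distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def link_probabilities(Z):
    """theta_ij = sigmoid(z_i . z_j): pairwise link probabilities derived
    from per-series embeddings Z of shape (n, d). Because Z would be
    produced by a learnable feature extractor, theta is a function of
    the model parameters, i.e. theta(w)."""
    logits = Z @ Z.T
    return 1.0 / (1.0 + np.exp(-logits))

def sample_adjacency(theta, rng):
    """Draw A ~ element-wise Bernoulli(theta); self-loops are zeroed."""
    A = (rng.random(theta.shape) < theta).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

n, d = 5, 8
Z = rng.normal(size=(n, d))        # stand-in for learned series representations
theta = link_probabilities(Z)      # n x n probability matrix
A = sample_adjacency(theta, rng)   # binary graph fed to the GNN forecaster
```

Note that with this parameterization the number of learnable quantities is $O(nd)$ rather than $\Theta(n^2)$, which is the scaling advantage the text describes. (In a differentiable pipeline, the hard sampling step would be replaced by a relaxation such as Gumbel-softmax.)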
First, its computation is less expensive, because the gradient computation of a unilevel optimization is straightforward and efficient, and implementations are mature. Second, it scales better, because the number of parameters does not grow quadratically with the number of time series. We coin our approach GTS (short for "graph for time series"), signaling the usefulness of graph structure learning for enhancing time series forecasting. It is important to note that the end purpose of the graph is to improve forecasting quality, rather than to identify causal relationships among the series or to recover the ground-truth graph, if any. While causal discovery among multiple scalar variables is an established field, identifying causality among multiple multivariate time series requires a nontrivial extension that lies beyond the current study. On the other hand, the graph, either learned or preexisting, serves as additional information that helps the model better capture global signals and apply them to each series. There is no gold-standard measure for the quality of the learned graph other than forecasting accuracy. For example, the traffic network does not necessarily offer the best pairwise relationships a GNN can exploit for forecasting traffic series. Nevertheless, to robustify GTS we incorporate regularization that penalizes significant departure from one's prior belief. If a certain "ground-truth" graph is believed, the learned graph will be a healthy variation of it that yields a more accurate forecast. 2 RELATED WORK. Time series forecasting has been studied for decades by statisticians. It is beyond the scope of this paper to survey the literature comprehensively; we focus instead on recent developments in the deep learning context.
Early textbook methods include ( vector ) autoregressive models ( Hamilton , 1994 ) , autoregressive integrated moving average ( ARIMA ) ( Asteriou & Hall , 2011 ) , hidden Markov models ( HMM ) ( Baum & Petrie , 1966 ) , and Kalman filters ( Zarchan & Musoff , 2000 ) . Generally speaking , these are linear models that use a window of the past information to predict the next time step , although nonlinear versions with parameterization are subsequently developed . A notable nonlinear extension was the RNN ( Williams et al. , 1986 ) , which later evolved into LSTM ( Hochreiter & Schmidhuber , 1997 ) , BiLSTM ( Schuster & Paliwal , 1997 ) , and GRU ( Cho et al. , 2014 ) , which addressed several limitations of the vanilla RNN , such as the vanishing gradient problem . These architectures are hard to parallelize because of the recurrent nature of the forward and backward computation . More recently , Transformer ( Vaswani et al. , 2017 ) and BERT ( Devlin et al. , 2019 ) were developed to address parallelization , by introducing attention mechanisms that simultaneously digested past ( and future ) information . Although these models are more heavily used for sequence data under the context of natural language processing , they are readily applicable for time series as well ( Shih et al. , 2019 ; Li et al. , 2019 ) . Graph neural networks ( Zhang et al. , 2018 ; Zhou et al. , 2018 ; Wu et al. , 2019 ) emerged quickly in deep learning to handle graph-structured data . Typically , graph nodes are represented by feature vectors , but for the case of time series , a number of specialized architectures were recently developed ; see , e.g. , GCRN ( Seo et al. , 2016 ) , DCRNN ( Li et al. , 2018 ) , STGCN ( Yu et al. , 2018 ) , and TGCN ( Zhao et al. , 2019 ) . These architectures essentially combine the temporal recurrent processing with graph convolution to augment the representation learning of the individual time series . 
Graph structure learning (not necessarily for time series) appears in various contexts, and methods thus span a broad spectrum. One field of study is probabilistic graphical models and causal inference, where a directed acyclic structure is enforced. Gradient-based approaches in this context include NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), and GraN-DAG (Lachapelle et al., 2020). On the other hand, a general graph may still be useful without resorting to causality. LDS (Franceschi et al., 2019) is a meta-learning approach demonstrated to improve performance on node classification tasks. MTGNN (Wu et al., 2020) parameterizes the graph as a degree-k graph, which is learned end-to-end with a GNN for forecasting time series. We, on the other hand, allow a more general structural prior for the graph. NRI (Kipf et al., 2018) adopts a latent-variable approach and learns a latent graph for forecasting system dynamics. Our approach is closely related to NRI, and we compare with it in the following section after introducing the technical details. 3 METHOD. In this section, we present the proposed GTS method, elaborate on the model parameterization, and describe the training technique. We also highlight the distinctions from NRI (Kipf et al., 2018). Let us first settle the notation. Denote by $X$ the training data, a three-dimensional tensor whose dimensions are feature, time, and the $n$ series. A superscript refers to the series and a subscript refers to time; that is, $X^i$ denotes the $i$-th series for all features and time, and $X_t$ denotes the $t$-th time step for all features and series. There are in total $S$ time steps for training. The model uses a window of $T$ steps to forecast the next $\tau$ steps.
[Figure 1: GTS architecture. A feature extractor and link predictor over the entire data $X$ produce the learned structure $\theta$, from which a graph $A$ is sampled; recurrent graph convolutions then map the windowed data ($t+1, \dots, t+T$) to the forecast ($t+T+1, \dots, t+T+\tau$).] For each valid $t$, denote by

$$\hat{X}_{t+T+1:t+T+\tau} = f(A, w, X_{t+1:t+T})$$

the model, which forecasts $\hat{X}_{t+T+1:t+T+\tau}$ from the observations $X_{t+1:t+T}$ by exploiting the graph structure $A$ and being parameterized by $w$. Using $\ell$ to denote the loss function between the prediction and the ground truth, a typical training objective reads

$$\sum_{t} \ell\big(f(A, w, X_{t+1:t+T}),\; X_{t+T+1:t+T+\tau}\big). \tag{3}$$

Three remaining details are the parameterization of $A$, the model $f$, and the loss $\ell$.
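The training objective (3) sums the loss over every valid window of the training series. A compact numerical sketch (the forecaster below is a toy stand-in, not the paper's recurrent graph convolution; its form is an assumption for illustration):

```python
import numpy as np

def forecast(A, w, window, tau):
    """Toy stand-in for f(A, w, X_{t+1:t+T}): each future step is a
    graph-smoothed nonlinear map of the previous one. A real GTS model
    would use recurrent graph convolutions here."""
    x = window[-1].copy()              # last observed step, shape (n,)
    preds = []
    for _ in range(tau):
        x = np.tanh(w * (A @ x))       # propagate along the graph A
        preds.append(x.copy())
    return np.stack(preds)             # shape (tau, n)

def training_loss(A, w, X, T, tau):
    """Objective (3): sum over all valid t of the loss between the
    tau-step forecast and the ground-truth continuation."""
    S = X.shape[0]
    total = 0.0
    for t in range(S - T - tau + 1):
        pred = forecast(A, w, X[t:t + T], tau)
        total += np.mean((pred - X[t + T:t + T + tau]) ** 2)
    return total

rng = np.random.default_rng(1)
n, S, T, tau = 4, 30, 5, 3
X = rng.normal(size=(S, n))                    # S time steps of n series
A = (rng.random((n, n)) < 0.3).astype(float)   # a fixed sampled graph
loss = training_loss(A, 0.5, X, T, tau)
```

In the actual method the graph $A$ would be resampled from $\mathrm{Ber}(\theta(w))$ during training, so the loss is an expectation over graphs rather than a fixed-$A$ sum.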
This paper proposes an approach for time series forecasting that learns the graph structure among multiple (multivariate) time series simultaneously with the parameters of a Graph Neural Network (GNN). The problem is formulated as learning a probabilistic graphical model by optimizing the expectation over the graph distribution, which is parameterized by a neural network and encapsulated in a single differentiable objective. Empirical evidence suggests that the proposed GTS obtains superior forecasting performance to both deep and non-deep learning based, as well as graph and non-graph based, competitor forecasting models. In addition, GTS appears to be more computationally efficient compared to LDS, a recently proposed meta-learning graph-based approach.
Closing the Generalization Gap in One-Shot Object Detection
[Figure 1: (A) The search task. (B) Generalization to novel categories.] 1 INTRODUCTION. It's January 2021 and your long-awaited household robot finally arrives. Equipped with the latest "Deep Learning Technology", it can recognize over 21,000 objects. Your initial excitement quickly vanishes as you realize that your casserole is not one of them. When you contact customer service, they ask you to send some pictures of the casserole so they can fix this. They tell you that the fix will take some time, though, as they need to collect about a thousand images of casseroles to retrain the neural network. While you are making the call, your robot knocks over the olive oil because the steam coming from the pot of boiling water confused it. You start filling out the return form... While not 100% realistic, the above story highlights an important obstacle towards truly autonomous agents such as household robots: such systems should be able to detect novel, previously unseen objects and learn to recognize them based on (ideally) a single example. Solving this one-shot object detection problem can be decomposed into three subproblems: (1) designing a class-agnostic object proposal mechanism that detects both known and previously unseen objects; (2) learning a suitably general visual representation (metric) that supports recognition of the detected objects; (3) continuously updating the classifier to accommodate new object classes or training examples of existing classes. In this paper, we focus on the detection and representation learning parts of the pipeline, and we ask: what does it take to learn a visual representation that allows detection and recognition of previously unseen object categories based on a single example? We operationalize this question using an example-based visual search task (Fig. 1) that has been investigated before using handwritten characters (Omniglot; Michaelis et al.
(2018a)) and real-world image datasets (Pascal VOC, COCO; Michaelis et al. (2018b); Hsieh et al. (2019); Zhang et al. (2019); Fan et al. (2020); Li et al. (2020)). Our central hypothesis is that scaling up the number of object categories used for training should improve the generalization capabilities of the learned representation. This hypothesis is motivated by the following observations. On (cluttered) Omniglot (Michaelis et al., 2018a), recognition of novel characters works almost as well as for characters seen during training. In this case, sampling enough categories during training relative to the visual complexity of the objects is sufficient to learn a metric that generalizes to novel categories. In contrast, models trained on visually more complex datasets like Pascal VOC and COCO exhibit a large generalization gap: novel categories are detected much less reliably than ones seen during training. This result suggests that on the natural image datasets, the number of categories is too small given the visual complexity of the objects, and the models retreat to a shortcut (Geirhos et al., 2020): memorizing the training categories. To test the hypothesis that wider datasets improve generalization, we increase the number of object categories during training by using datasets (LVIS, Objects365) that have a larger number of annotated categories. Our experiments support this hypothesis and suggest the following conclusions:
• The generalization gap between training and novel categories is a key problem in one-shot object detection.
• This generalization gap can be almost closed by increasing the number of categories used for training: going from 80 classes in COCO to 1200 in LVIS improves relative performance from 45% to 89%.
• The number of categories, not the amount of data, is the driving force behind this effect.
• Closing the generalization gap allows us to use established methods from the object detection community (e.g., stronger backbones) to make further progress.
• We use these insights to improve state-of-the-art performance on COCO by 5.4% AP50 (from 22% AP50 to 27.5% AP50) using annotations from LVIS.
2 RELATED WORK. Object detection Object detection has seen huge progress since the widespread adoption of DNNs (Girshick et al., 2014; Ren et al., 2015; He et al., 2017; Lin et al., 2017a; Chen et al., 2019a; Wu et al., 2019b; Carion et al., 2020). Similarly, the number of datasets has grown steadily, fueled by the importance this task has for computer vision applications (Everingham et al., 2010; Russakovsky et al., 2015; Lin et al., 2014; Zhou et al., 2017; Neuhold et al., 2017; Krasin et al., 2017; Gupta et al., 2019; Shao et al., 2019). However, most models and datasets focus on scenarios where abundant examples per category are available. Few-shot learning The two most common approaches to few-shot learning have been, broadly speaking, based on metric learning (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017) and meta-learning, i.e., learning a good way to learn a new task (Finn et al., 2017; Rusu et al., 2018), or combinations thereof (Sun et al., 2019). However, recent work has shown that much simpler approaches based on transfer learning achieve competitive performance (Chen et al., 2019b; Nakamura & Harada, 2019; Dhillon et al., 2019). A particularly impressive example of this line of work is Big Transfer (Kolesnikov et al., 2019), which uses transfer learning from a huge architecture trained on a huge dataset to perform one-shot ImageNet classification. Few-shot & one-shot object detection Recently, several groups have started to tackle few-shot learning for object detection. Two training and evaluation paradigms have emerged.
The first is inspired by continual learning : incorporate a set of new categories with only a few labeled images per category into an existing classifier ( Kang et al. , 2018 ; Yan et al. , 2019 ; Wang et al. , 2019 ; 2020 ) . The second one phrases the problem as an example-based visual search : detect objects based on a single example image ( Fig . 1 left ; Michaelis et al. , 2018b ; Hsieh et al. , 2019 ; Zhang et al. , 2019 ; Fan et al. , 2020 ; Li et al. , 2020 ) . We refer to the former ( continual learning ) as few-shot object detection , since typically 10–30 images are used for experiments on COCO . In contrast , we refer to the latter ( visual search ) as one-shot object detection , since the focus is on the setting with a single example . In the present paper we work with this latter paradigm , since it focuses on the representation learning part of the problem and avoids the additional complexity of continual learning . Methods for one-shot object detection Existing methods for one-shot object detection employ a combination of a standard object detection architecture with a siamese backbone and various forms of feature attention and concatenation on the backbone output or in the heads ( Biswas & Milanfar , 2015 ; Michaelis et al. , 2018b ; Hsieh et al. , 2019 ; Zhang et al. , 2019 ; Fan et al. , 2020 ; Osokin et al. , 2020 ; Li et al. , 2020 ) . Spatially aware similarity measures ( Li et al. , 2020 ) or transformations ( Biswas & Milanfar , 2015 ; Osokin et al. , 2020 ) improve recognition in cases where the pose of the reference objects differs from that of the detected object . We here use one of the most straightforward models , Siamese Faster R-CNN ( Michaelis et al. , 2018b ) , to demonstrate that a change of the training data rather than the model architecture is sufficient to substantially reduce the generalization gap between known and novel categories . 
Related tasks A number of related pieces of work propose approaches to slightly different example-based search tasks. Examples include one-shot segmentation using handwritten characters (Michaelis et al., 2018a), natural textures (Ustyuzhaninov et al., 2018), and natural images (Shaban et al., 2017). In addition, several groups have suggested one-shot and few-shot detection tasks with slightly different focus and protocols (Dong et al., 2018; Chen et al., 2018; Schwartz et al., 2019; Wu et al., 2019a), including episodic evaluation (Wu et al., 2019a), transfer across datasets (Chen et al., 2018), and fine-grained detection (Schwartz et al., 2019). Also closely related are instance retrieval (Tolias et al., 2016) and co-segmentation (Rother et al., 2006; Hsu et al., 2019). The key difference of our work is that we do not propose a new architecture but instead investigate the relationship between the number of categories used during training and generalization to novel categories. Number of categories in few-shot learning Most of the few-shot learning literature focuses on developing new methods for existing benchmarks. The influence of the training data was mostly observed indirectly, e.g., through better performance on datasets with more categories such as tieredImageNet vs. miniImageNet. Two concurrent studies report that more categories help few-shot object detection (Fan et al., 2020) and investigate the influence of data diversity, image complexity, intra- and inter-category diversity, and other factors on few-shot classification (Jiang et al., 2020). Both publications are consistent with our result that the number of categories is a key factor for improving few-shot performance. 3 EXPERIMENTS. Models We mainly use Siamese Faster R-CNN, a one-shot detection version of Faster R-CNN (Ren et al., 2015) similar to Siamese Mask R-CNN (Michaelis et al., 2018b).
Briefly, it consists of a feature extractor, a matching step, and a standard region proposal network and bounding box head (Fig. 2). The feature extractor (called the backbone in object detection) is a standard ResNet-50 with feature pyramid networks (He et al., 2016; Lin et al., 2017a), which is applied to the image and the reference with weight sharing. In the matching step, the reference representation is compared to the image representation in a sliding-window fashion by computing a feature-wise L1 difference. The resulting similarity-encoding representation is concatenated to the image representation and passed on to the region proposal network (RPN). The RPN proposes a set of bounding boxes that potentially contain objects. These boxes are then classified as containing an object from the reference class or something else (another object or background). Box coordinates are refined by bounding box regression, and overlapping boxes are removed using non-maximum suppression. We additionally developed Siamese RetinaNet, a single-stage detector based on RetinaNet (Lin et al., 2017b). The feature extraction and matching steps are identical to those of Siamese Faster R-CNN, but it uses the unified RetinaHead to jointly propose and classify bounding boxes. To counter the effect of too many negative samples, the classifier is trained with focal loss (Lin et al., 2017b). Training & Evaluation During training, a reference category is randomly chosen for every image by picking a category with at least one instance in the image. A reference is retrieved by randomly selecting one instance of this category in another image and tightly cropping it. The labels for each bounding box are changed to 0 or 1 depending on whether the object is from the reference category or not. Annotations for objects from the held-out categories are removed from the dataset before training.
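The matching step described above can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: pooling the reference to a single vector is an assumption made here for brevity (the actual sliding-window comparison operates on feature maps).

```python
import numpy as np

def match_and_concat(image_feat, ref_feat):
    """Sketch of the Siamese matching step. image_feat: (C, H, W) backbone
    features of the scene; ref_feat: (C, h, w) features of the cropped
    reference. The reference is average-pooled to one C-vector, a
    feature-wise L1 difference is taken at every spatial position, and
    the similarity map is concatenated to the image features for the RPN."""
    ref_vec = ref_feat.mean(axis=(1, 2))                 # (C,) pooled reference
    l1 = np.abs(image_feat - ref_vec[:, None, None])     # (C, H, W) similarity encoding
    return np.concatenate([image_feat, l1], axis=0)      # (2C, H, W) RPN input

rng = np.random.default_rng(2)
img = rng.normal(size=(16, 8, 8))   # toy backbone features of the image
ref = rng.normal(size=(16, 4, 4))   # toy backbone features of the reference crop
fused = match_and_concat(img, ref)
```

The concatenation doubles the channel count, which is why the RPN and heads must accept 2C-channel input in such an architecture.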
At test time a similar procedure is used, but instead of picking one category for each image, all categories with at least one object in the image are chosen (Michaelis et al., 2018b), and one (1-shot) or five (5-shot) reference images are provided. Predictions are assigned their corresponding category label, and evaluation is performed using standard tools and metrics. Implementation We implemented Siamese Faster R-CNN and Siamese RetinaNet in mmdetection v1.0rc (Chen et al., 2019a), which improved performance by more than 30% over the original paper (Table 4; Michaelis et al., 2018b). We keep all hyperparameters the same as in the standard Faster R-CNN implementation of mmdetection (which achieves 36.4% mAP / 58.4% AP50 on regular COCO). Due to resource constraints, we reduce the number of samples per epoch to 120k for Objects365. Datasets We use the four datasets shown in Table 1: COCO (Lin et al., 2014), Objects365 (Shao et al., 2019), LVIS (Gupta et al., 2019), and Pascal VOC (Everingham et al., 2010). We use standard splits and test on the validation sets, except for Pascal VOC, where we test on the 2007 test set. Due to resource constraints, we evaluate Objects365 on a fixed subset of 10k images from the validation set. Following common protocol (Michaelis et al., 2018b; Shaban et al., 2017), we split the categories in each dataset into four splits, using every fourth category as the hold-out set and the other 3/4 of the categories for training. Thus on Pascal VOC there are 15 categories for training in each split, on COCO there are 60, on Objects365 274, and on LVIS 902. We train and test four models (one for each split) and report the mean over those four models, so performance is always measured across all categories. Computing performance in this way across all categories is preferable to using a fixed subset, as some categories may be harder than others. During evaluation, the reference images are chosen randomly.
We therefore run the evaluation five times, reporting the average AP50 over splits. The 95% confidence intervals for the average AP50 are below ±0.2% AP50 for all experiments.
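A confidence interval of this kind can be computed from the five evaluation runs; the normal-approximation estimator below is a common choice, though the paper does not specify which estimator it uses, so treat this as one plausible sketch.

```python
import math

def mean_ci95(values):
    """Mean and 95% normal-approximation half-width over repeated runs:
    half-width = 1.96 * s / sqrt(n), with s the sample std (ddof=1)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, 1.96 * math.sqrt(var / n)

# Hypothetical AP50 values from five evaluation runs.
mean, half = mean_ci95([27.3, 27.5, 27.4, 27.6, 27.2])
```

With five runs whose AP50 values spread over a few tenths of a percent, the half-width comes out well under ±0.2% AP50, consistent with the reported bound.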
This paper provides a variety of studies to understand the generalization gap between known and novel classes in one-shot object detection. The studies are carried out using the Siamese Faster R-CNN framework on four benchmark datasets. The most notable observation is that increasing the number of object categories matters more than increasing the number of instances per category for reducing the generalization gap. This observation is very useful to anyone planning to build a dataset for this task or to implement an appropriate method. Figure 5 is very important and well presented in support of the main claim.
SP:91139c0d3614e87c45be3e66110fb01813492e06
Closing the Generalization Gap in One-Shot Object Detection
Search Task Generalization to Novel CategoriesA B 1 INTRODUCTION . It ’ s January 2021 and your long awaited household robot finally arrives . Equipped with the latest “ Deep Learning Technology ” , it can recognize over 21,000 objects . Your initial excitement quickly vanishes as you realize that your casserole is not one of them . When you contact customer service they ask you to send some pictures of the casserole so they can fix this . They tell you that the fix will be some time , though , as they need to collect about a thousand images of casseroles to retrain the neural network . While you are making the call your robot knocks over the olive oil because the steam coming from the pot of boiling water confused it . You start filling out the return form ... While not 100 % realistic , the above story highlights an important obstacle towards truly autonomous agents such as household robots : such systems should be able to detect novel , previously unseen objects and learn to recognize them based on ( ideally ) a single example . Solving this one-shot object detection problem can be decomposed into three subproblems : ( 1 ) designing a class-agnostic object proposal mechanism that detects both known and previously unseen objects ; ( 2 ) learning a suitably general visual representation ( metric ) that supports recognition of the detected objects ; ( 3 ) continuously updating the classifier to accommodate new object classes or training examples of existing classes . In this paper , we focus on the detection and representation learning part of the pipeline , and we ask : what does it take to learn a visual representation that allows detection and recognition of previously unseen object categories based on a single example ? We operationalize this question using an example-based visual search task ( Fig . 1 ) that has been investigated before using handwritten characters ( Omniglot ; Michaelis et al . 
( 2018a ) ) and real-world image datasets ( Pascal VOC , COCO ; Michaelis et al . ( 2018b ) ; Hsieh et al . ( 2019 ) ; Zhang et al . ( 2019 ) ; Fan et al . ( 2020 ) ; Li et al . ( 2020 ) ) . Our central hypothesis is that scaling up the number of object categories used for training should improve the generalization capabilities of the learned representation . This hypothesis is motivated by the following observations . On ( cluttered ) Omniglot ( Michaelis et al. , 2018a ) , recognition of novel characters works almost as well as for characters seen during training . In this case , sampling enough categories during training relative to the visual complexity of the objects is sufficient to learn a metric that generalizes to novel categories . In contrast , models trained on visually more complex datasets like Pascal VOC and COCO exhibit a large generalization gap : novel categories are detected much less reliably than ones seen during training . This result suggests that on the natural image datasets , the number of categories is too small given the visual complexity of the objects and the models retreat to a shortcut ( Geirhos et al. , 2020 ) – memorizing the training categories . To test the hypothesis that wider datasets improve generalization , we increase the number of object categories during training by using datasets ( LVIS , Objects365 ) that have a larger number of categories annotated . Our experiments support this hypothesis and suggest the following conclusions : • The generalization gap between training and novel categories is a key problem in one-shot object detection . • This generalization gap can be almost closed by increasing the number of categories used for training : going from 80 classses in COCO to 1200 in LVIS improves relative performance from 45 % to 89 % . • The number of categories , not the amount of data , is the driving force behind this effect . 
• Closing the generalization gap allows us to use established methods from the object detec- tion community ( like e.g . stronger backbones ) to make further progress . • We use these insights to improve state-of-the-art performance on COCO by 5.4 % AP50 ( from 22 % AP50 to 27.5 % AP50 ) using annotations from LVIS . 2 RELATED WORK . Object detection Object detection has seen huge progress since the widespread adoption of DNNs ( Girshick et al. , 2014 ; Ren et al. , 2015 ; He et al. , 2017 ; Lin et al. , 2017a ; Chen et al. , 2019a ; Wu et al. , 2019b ; Carion et al. , 2020 ) . Similarly the number of datasets has grown steadily , fueled by the importance this task has for computer vision applications ( Everingham et al. , 2010 ; Russakovsky et al. , 2015 ; Lin et al. , 2014 ; Zhou et al. , 2017 ; Neuhold et al. , 2017 ; Krasin et al. , 2017 ; Gupta et al. , 2019 ; Shao et al. , 2019 ) . However most models and datasets focus on scenarios where abundant examples per category are available . Few-shot learning The two most common approaches to few-shot learning have been , broadly speaking , based on metric learning ( Koch et al. , 2015 ; Vinyals et al. , 2016 ; Snell et al. , 2017 ) and meta learning : Learn a good way to learn a new task ( Finn et al. , 2017 ; Rusu et al. , 2018 ) , or combinations thereof ( Sun et al. , 2019 ) . However , recent work has shown that much simpler approaches based on transfer learning achieve competitive performance ( Chen et al. , 2019b ; Nakamura & Harada , 2019 ; Dhillon et al. , 2019 ) . A particularly impressive example of this line of work is Big Transfer ( Kolesnikov et al. , 2019 ) , which uses transfer learning from a huge architecture trained on a huge dataset to perform one-shot ImageNet classification . Few-shot & one-shot object detection Recently , several groups have started to tackle few-shot learning for object detection . Two training and evaluation paradigms have emerged . 
The first is inspired by continual learning : incorporate a set of new categories with only a few labeled images per category into an existing classifier ( Kang et al. , 2018 ; Yan et al. , 2019 ; Wang et al. , 2019 ; 2020 ) . The second one phrases the problem as an example-based visual search : detect objects based on a single example image ( Fig . 1 left ; Michaelis et al. , 2018b ; Hsieh et al. , 2019 ; Zhang et al. , 2019 ; Fan et al. , 2020 ; Li et al. , 2020 ) . We refer to the former ( continual learning ) as few-shot object detection , since typically 10–30 images are used for experiments on COCO . In contrast , we refer to the latter ( visual search ) as one-shot object detection , since the focus is on the setting with a single example . In the present paper we work with this latter paradigm , since it focuses on the representation learning part of the problem and avoids the additional complexity of continual learning . Methods for one-shot object detection Existing methods for one-shot object detection employ a combination of a standard object detection architecture with a siamese backbone and various forms of feature attention and concatenation on the backbone output or in the heads ( Biswas & Milanfar , 2015 ; Michaelis et al. , 2018b ; Hsieh et al. , 2019 ; Zhang et al. , 2019 ; Fan et al. , 2020 ; Osokin et al. , 2020 ; Li et al. , 2020 ) . Spatially aware similarity measures ( Li et al. , 2020 ) or transformations ( Biswas & Milanfar , 2015 ; Osokin et al. , 2020 ) improve recognition in cases where the pose of the reference objects differs from that of the detected object . We here use one of the most straightforward models , Siamese Faster R-CNN ( Michaelis et al. , 2018b ) , to demonstrate that a change of the training data rather than the model architecture is sufficient to substantially reduce the generalization gap between known and novel categories . 
Related tasks A number of related pieces of work propose approaches to slightly different example-based search tasks. Examples include one-shot segmentation using handwritten characters (Michaelis et al., 2018a), natural textures (Ustyuzhaninov et al., 2018) and natural images (Shaban et al., 2017). In addition, several groups have suggested one-shot and few-shot detection tasks with slightly different focus and protocols (Dong et al., 2018; Chen et al., 2018; Schwartz et al., 2019; Wu et al., 2019a), including episodic evaluation (Wu et al., 2019a), transfer across datasets (Chen et al., 2018) and fine-grained detection (Schwartz et al., 2019). Also closely related are instance retrieval (Tolias et al., 2016) and co-segmentation (Rother et al., 2006; Hsu et al., 2019). The key difference of our work is that we do not propose a new architecture, but instead investigate the relationship between the number of categories used during training and the generalization to novel categories. Number of categories in few-shot learning Most of the few-shot learning literature focuses on developing new methods for existing benchmarks. The influence of the training data was mostly observed indirectly, e.g., through better performance on datasets with more categories such as tieredImageNet vs. miniImageNet. Two concurrent studies report that more categories help few-shot object detection (Fan et al., 2020) and investigate the influence of data diversity, image complexity, intra- and inter-category diversity and other factors on few-shot classification (Jiang et al., 2020). Both publications are consistent with our results that the number of categories is a key factor for improving few-shot performance. 3 EXPERIMENTS. Models We mainly use Siamese Faster R-CNN, a one-shot detection version of Faster R-CNN (Ren et al., 2015) similar to Siamese Mask R-CNN (Michaelis et al., 2018b).
Briefly, it consists of a feature extractor, a matching step, and a standard region proposal network and bounding box head (Fig. 2). The feature extractor (called backbone in object detection) is a standard ResNet-50 with feature pyramid networks (He et al., 2016; Lin et al., 2017a) which is applied to the image and the reference with weight sharing. In the matching step the reference representation is compared to the image representation in a sliding window approach by computing a feature-wise L1 difference. The resulting similarity encoding is concatenated to the image representation and passed on to the region proposal network (RPN). The RPN proposes a set of bounding boxes which potentially contain objects. These boxes are then classified as containing an object from the reference class or something else (other object or background). Box coordinates are refined by bounding box regression and overlapping boxes are removed using non-maximum suppression. We additionally developed Siamese RetinaNet, a single-stage detector based on RetinaNet (Lin et al., 2017b). The feature extraction and matching steps are identical to Siamese Faster R-CNN, but it uses the unified RetinaHead to jointly propose and classify bounding boxes. To counter the effect of too many negative samples, the classifier is trained with focal loss (Lin et al., 2017b). Training & Evaluation During training a reference category is randomly chosen for every image by picking a category with at least one instance in the image. A reference is retrieved by randomly selecting one instance from this category in another image and tightly cropping it. The labels for each bounding box are changed to 0 or 1 depending on whether the object is from the reference category or not. Annotations for objects from the held-out categories are removed from the dataset before training.
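The sliding-window L1 matching step described above can be sketched in a few lines of NumPy. This is a minimal illustration with our own function and variable names; the actual implementation operates on FPN feature pyramid levels inside mmdetection:

```python
import numpy as np

def siamese_match(image_feats, ref_embedding):
    """Feature-wise L1 matching between a pooled reference embedding and
    every spatial position of the image feature map.

    image_feats:   (C, H, W) backbone features of the scene image
    ref_embedding: (C,)      pooled backbone embedding of the reference crop
    Returns the (C, H, W) similarity encoding that is concatenated to the
    image features before the region proposal network.
    """
    return np.abs(image_feats - ref_embedding[:, None, None])

# toy example: C=4 channels on an 8x8 feature map
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
ref = rng.standard_normal(4)
sim = siamese_match(feats, ref)
rpn_input = np.concatenate([feats, sim], axis=0)  # channels doubled: (8, 8, 8)
```

Concatenating rather than replacing the image features lets the RPN see both where the image contains objects and where those objects resemble the reference.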
At test time a similar procedure is used, but instead of picking one category for each image, all categories with at least one object in the image are chosen (Michaelis et al., 2018b) and one (1-shot) or five (5-shot) reference images are provided. Predictions are assigned their corresponding category label and evaluation is performed using standard tools and metrics. Implementation We implemented Siamese Faster R-CNN and Siamese RetinaNet in mmdetection v1.0rc (Chen et al., 2019a), which improved performance by more than 30% over the original paper (Table 4; Michaelis et al., 2018b). We keep all hyperparameters the same as in the standard Faster R-CNN implementation of mmdetection (which achieves 36.4% mAP / 58.4% AP50 on regular COCO). Due to resource constraints we reduce the number of samples per epoch to 120k for Objects365. Datasets We use the four datasets shown in Table 1: COCO (Lin et al., 2014), Objects365 (Shao et al., 2019), LVIS (Gupta et al., 2019) and Pascal VOC (Everingham et al., 2010). We use standard splits and test on the validation sets, except for Pascal VOC where we test on the 2007 test set. Due to resource constraints, we evaluate Objects365 on a fixed subset of 10k images from the validation set. Following common protocol (Michaelis et al., 2018b; Shaban et al., 2017) we split the categories in each dataset into four splits, using every fourth category as hold-out set and the other 3/4 of the categories for training. So on Pascal VOC there are 15 categories for training in each split, on COCO there are 60, on Objects365 274 and on LVIS 902. We train and test four models (one for each split) and report the mean over those four models, so performance is always measured on all categories. Computing performance in this way across all categories is preferable to using a fixed subset, as some categories may be harder than others. During evaluation, the reference images are chosen randomly.
We therefore run the evaluation five times, reporting the average AP50 over splits. The 95% confidence interval for the average AP50 is below ±0.2% AP50 for all experiments.
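The four-way category split and the confidence-interval computation described above can be sketched as follows. The helper names are ours and the AP50 values are hypothetical, purely for illustration:

```python
import math
import statistics

def category_splits(categories, n_splits=4):
    """Every n-th category forms the hold-out set of split s; the
    remaining 3/4 of the categories are used for training."""
    splits = []
    for s in range(n_splits):
        holdout = [c for i, c in enumerate(categories) if i % n_splits == s]
        train = [c for i, c in enumerate(categories) if i % n_splits != s]
        splits.append((train, holdout))
    return splits

def mean_and_ci95(values):
    """Mean and half-width of a normal-approximation 95% confidence
    interval (1.96 * standard error) over repeated evaluation runs."""
    m = statistics.mean(values)
    return m, 1.96 * statistics.stdev(values) / math.sqrt(len(values))

# Pascal VOC: 20 categories -> 15 train / 5 hold-out per split
voc_splits = category_splits(list(range(20)))

# hypothetical AP50 values from five evaluation runs
mean_ap50, ci_half = mean_and_ci95([27.4, 27.5, 27.6, 27.5, 27.5])
```

Evaluating all four splits and averaging means every category appears in the hold-out role exactly once, so no category is permanently excluded from the novel-category measurement.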
The paper suggests that a major factor for increasing few-shot performance in the few-shot object detection task is the number of categories in the base training set, which is used to pre-train the few-shot model on a large amount of data before it is adapted to novel categories using only a few (or even one) examples. The authors measure this effect by applying an existing Siamese few-shot detector to four datasets (Pascal VOC, COCO, Objects365, and LVIS), showing that the gap in performance between the seen training categories and the unseen (novel) testing categories shrinks as the base dataset contains more classes (e.g., on LVIS, where there are more than 1K classes, this "generalization" gap is shown to be minimal). The authors also empirically quantify the effect of increasing the model size and of prolonging the training schedule on this gap, as well as testing on COCO classes while training on LVIS.
Gradient-based training of Gaussian Mixture Models for High-Dimensional Streaming Data
1 INTRODUCTION. This contribution focuses on Gaussian Mixture Models (GMMs), which represent a probabilistic unsupervised model for clustering and density estimation and also allow sampling and outlier detection. GMMs have been used in a wide range of scenarios, e.g., Melnykov & Maitra (2010). Commonly, the free parameters of a GMM are estimated by the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), as it does not require learning rates and automatically enforces all GMM constraints. A popular online variant is stochastic Expectation Maximization (sEM) (Cappé & Moulines, 2009), which can be trained mini-batch wise and is thus more suited for large datasets or streaming data. 1.1 MOTIVATION. Intrinsically, EM is a batch-type algorithm. Memory requirements can therefore become excessive for large datasets. In addition, streaming-data scenarios require data samples to be processed one by one, which is impossible for a batch-type algorithm. Moreover, data statistics may be subject to changes over time (concept drift/shift), to which the GMM should adapt. In such scenarios, an online, mini-batch type of optimization such as SGD is attractive, as it can process samples one by one, has modest, fixed memory requirements and can adapt to changing data statistics. 1.2 RELATED WORK. Online EM is a technique for performing EM mini-batch wise, allowing large datasets to be processed. One branch of previous research (Newton et al., 1986; Lange, 1995; Chen et al., 2018) has been devoted to the development of stochastic Expectation Maximization (sEM) algorithms that reduce to the original EM method in the limit of large batch sizes. The variant of Cappé & Moulines (2009) is widely used due to its simplicity and efficiency for large datasets. These approaches come at the price of additional hyper-parameters (e.g., learning rate or mini-batch size), thus removing a key advantage of EM over SGD.
Another common approach is to modify the EM algorithm itself by, e.g., including heuristics for adding, splitting and merging centroids (Vlassis & Likas, 2002; Engel & Heinen, 2010; Pinto & Engel, 2015; Cederborg et al., 2010; Song & Wang, 2005; Kristan et al., 2008; Vijayakumar et al., 2005). This allows GMM-like models to be trained by presenting one sample after another. The models work well in several application scenarios, but their learning dynamics are impossible to analyze mathematically, and they introduce a high number of parameters. Apart from these works, some authors avoid the issue of extensive datasets by determining smaller "core sets" of representative samples and performing vanilla EM (Feldman et al., 2011). SGD for training GMMs has, as far as we know, recently been treated only by Hosseini & Sra (2015; 2019). In this body of work, GMM constraint enforcement is ensured by using manifold optimization techniques and re-parameterization/regularization, thereby introducing additional hyper-parameters. The issue of local optima is sidestepped by a k-means type centroid initialization, and the image datasets used are low-dimensional (36 dimensions). Additionally, enforcing positive definiteness constraints by Cholesky decomposition is discussed. Annealing and approximation approaches for GMMs were proposed by Verbeek et al. (2005); Pinheiro & Bates (1995); Ormoneit & Tresp (1998); Dognin et al. (2009). However, the regularizers proposed by Verbeek et al. (2005) and Ormoneit & Tresp (1998) differ significantly from our scheme. GMM log-likelihood approximations, similar to the one used here, are discussed in, e.g., Pinheiro & Bates (1995) and Dognin et al. (2009), but only in combination with EM training. GMM training in high-dimensional spaces is discussed in several publications: a conceptually very interesting procedure is proposed by Ge et al. (2015).
It exploits the properties of high-dimensional spaces in order to achieve learning with a number of samples that is polynomial in the number of Gaussian components. This is difficult to apply in streaming settings, since higher-order moments need to be estimated beforehand, and also because the number of samples usually cannot be controlled in practice. Training GMM-like lower-dimensional factor analysis models by SGD on high-dimensional image data is successfully demonstrated in Richardson & Weiss (2018), avoiding numerical issues but, again, sidestepping the local optima issue by using k-means initialization. The numerical issues associated with log-likelihood computation in high-dimensional spaces are generally mitigated by using the "logsumexp" trick (Nielsen & Sun, 2016), which is, however, insufficient for ensuring numerical stability for particularly high-dimensional data, such as images. 1.3 GOALS AND CONTRIBUTIONS. The goals of this article are to establish GMM training by SGD as a simple and scalable alternative to sEM in streaming scenarios with potentially high-dimensional data. The main novel contributions are: • a proposal for numerically stable GMM training by SGD that outperforms sEM for high data dimensionalities • an automatic annealing procedure that ensures SGD convergence from a wide range of initial conditions without prior knowledge of the data (e.g., no k-means initialization), which is especially beneficial for streaming data • a computationally efficient method for enforcing all GMM constraints in SGD Apart from these contents, we provide a publicly available TensorFlow implementation.1 2 DATASETS. We use a variety of image-based datasets as well as a non-image dataset for evaluation purposes. All datasets are normalized to the [0, 1] range. MNIST (LeCun et al.
, 1998) contains gray scale images depicting handwritten digits from 0 to 9 at a resolution of 28×28 pixels, the common benchmark for computer vision systems. SVHN (Wang et al., 2012) contains color images of house numbers (0-9, resolution 32×32). FashionMNIST (Xiao et al., 2017) contains gray scale images of 10 clothing categories and is considered a more challenging classification task than MNIST. Fruits 360 (Murean & Oltean, 2018) consists of color pictures showing different types of fruits (100×100×3 pixels); the ten best-represented classes are selected from this dataset. Devanagari (Acharya et al., 2016) includes gray scale images of handwritten Devanagari letters with a resolution of 32×32 pixels; the first 10 classes are selected. NotMNIST (Yaroslav Bulatov, 2011) is a gray scale image dataset (resolution 28×28 pixels) of letters from A to J extracted from different publicly available fonts. ISOLET (Cole & Fanty, 1990) is a non-image dataset containing 7,797 samples of spoken letters recorded from 150 subjects; each sample was encoded and is represented by 617 float values. (Footnote 1: https://github.com/gmm-iclr21/sgd-gmm) 3 GAUSSIAN MIXTURE MODELS. GMMs are probabilistic models that intend to explain the observed data X = {x_n} by expressing their density as a weighted mixture of K Gaussian component densities N(x; μ_k, P_k) ≡ N_k(x): p(x) = ∑_{k=1}^{K} π_k N_k(x). We work with precision matrices P = Σ⁻¹ instead of covariances Σ. Training is realized by optimizing the (incomplete) log-likelihood L = E_n[log ∑_k π_k N_k(x_n)]. (1) 3.1 GMMS AND SGD. GMMs require the mixture weights to be normalized, ∑_k π_k = 1, and the precision matrices to be positive definite: x^T P_k x > 0 ∀ x ≠ 0.
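The density and the incomplete log-likelihood of Eq. (1), with the precision-matrix parameterization above, can be sketched in NumPy. This is a minimal illustration with our own helper names, not the authors' TensorFlow code:

```python
import numpy as np

def gmm_log_likelihood(X, pi, mu, P):
    """Incomplete log-likelihood  L = E_n[ log sum_k pi_k N_k(x_n) ]  of a GMM
    parameterized with precision matrices P_k = Sigma_k^{-1}.

    X: (N, D) data,  pi: (K,) weights,  mu: (K, D) centroids,  P: (K, D, D).
    """
    N, D = X.shape
    K = len(pi)
    log_comp = np.empty((N, K))
    for k in range(K):
        diff = X - mu[k]                                   # (N, D)
        maha = np.einsum('nd,de,ne->n', diff, P[k], diff)  # Mahalanobis terms
        _, logdet_P = np.linalg.slogdet(P[k])
        log_comp[:, k] = (np.log(pi[k]) + 0.5 * logdet_P
                          - 0.5 * D * np.log(2 * np.pi) - 0.5 * maha)
    # logsumexp over components, then average over samples
    m = log_comp.max(axis=1, keepdims=True)
    return np.mean(m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1)))

# sanity check: one standard normal component in 1D, evaluated at the origin
L0 = gmm_log_likelihood(np.zeros((1, 1)), np.array([1.0]),
                        np.zeros((1, 1)), np.eye(1)[None])
```

Working with precisions means the log-density needs +0.5 log det P_k rather than -0.5 log det Σ_k, and no matrix inversion is required at evaluation time.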
These constraints must be explicitly enforced after each SGD step. Weights π_k are adapted following Hosseini & Sra (2015), which replaces them by free parameters ξ_k from which the π_k are computed so that normalization is ensured: π_k = exp(ξ_k) / ∑_j exp(ξ_j). (2) Precision matrices need to be positive definite, so we re-parameterize them as P_k = D_k^T D_k, where the upper-triangular matrices D_k result from a Cholesky decomposition. Consequently, det Σ_k = det P_k⁻¹ = (det(D_k^T D_k))⁻¹ = (∏_i (D_k)_ii)⁻² can be computed efficiently. To avoid recomputing the costly Cholesky decomposition of the P_k at every iteration, we perform it on the initial precision matrices and just erase the elements below the diagonal in the D_k after each gradient step. 3.2 MAX-COMPONENT APPROXIMATION FOR GMMS. The log-likelihood Eq. (1) is difficult to optimize by SGD (see Sec. 3.3). This is why we intend to find a lower bound that we can optimize instead. A simple scheme is given by L = E_n[log ∑_k π_k N_k(x_n)] ≥ E_n[log max_k(π_k N_k(x_n))] = L̂ = E_n[log(π_{k*} N_{k*}(x_n))] (3) where k* = argmax_k π_k N_k(x_n). This is what we call the max-component approximation, given in Eq. (3). In contrast to the lower bound that is constructed for EM-type algorithms, this bound is usually not tight. The advantages of L̂ are the avoidance of local optima in SGD and the elimination of exponentials causing numerical instabilities for high data dimensions. The "logsumexp" trick is normally employed with GMMs to rectify this by factoring out the largest component probability N_{k*}. This mitigates, but does not avoid, numerical problems when distances are high. To give an example: we normalize the component probability N_k = e^{-101} (using 32-bit floats) by the highest probability N_{k*} = e^{3}, and we obtain N_k / N_{k*} = e^{-104}, which is numerically problematic, as it underflows to zero in 32-bit floating point.
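The max-component approximation of Eq. (3) and the float32 underflow example can be reproduced in a few lines. The helper name is ours and this is a sketch, not the reference implementation:

```python
import numpy as np

def max_component_loglik(log_pi, log_N):
    """Max-component lower bound  L^ = E_n[ max_k (log pi_k + log N_k(x_n)) ].
    Stays entirely in log space, so no exponential of a large negative
    distance is ever evaluated.  log_pi: (K,), log_N: (N, K)."""
    return np.mean(np.max(log_pi[None, :] + log_N, axis=1))

# The logsumexp trick still forms the ratio N_k / N_k* = e^{-104},
# which underflows to exactly 0 in 32-bit floats:
underflow = np.exp(np.float32(-104.0))

# The max-component bound only ever touches the best-matching component:
log_N = np.array([[3.0, -101.0]], dtype=np.float32)  # log N_k* = 3, log N_k = -101
lhat = max_component_loglik(np.log(np.float32([0.5, 0.5])), log_N)
```

Because the bound selects a single component per sample, the quantity e^{-104} is simply never computed, which is the source of the claimed numerical stability.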
An issue when performing SGD without k-means initialization concerns undesirable local optima. Degenerate Solutions occur when naively optimizing L by SGD (see Fig. 1a). All components have the same weight, centroid and covariance matrix: π_k ≈ 1/K, μ_k = E[X], Σ_k = Cov(X) ∀k, in which case all gradients vanish (see App. A.3 for a proof). These solutions are avoided by L̂, since only a subset of components is updated by SGD, thereby breaking the symmetry between components. Single/Sparse-Component Solutions occur when optimizing L̂ by SGD (see Fig. 1b). They are characterized by one or several components {k_i} that have large weights, with centroids and precision matrices given by the mean and covariance of a significant subset X_{k_i} ⊂ X of the data X: π_{k_i} ≫ 0, μ_{k_i} = E[X_{k_i}], Σ_{k_i} = Cov(X_{k_i}), whereas the remaining components k are characterized by π_k ≈ 0, μ_k = μ(t=0), P_k = P(t=0). Thus, these unconverged components are almost never best-matching components k*. The max-operation in L̂ causes gradients like ∂L̂/∂μ_k to contain a factor δ_{kk*} (see App. A.3). This implies that they are non-zero only for the best-matching component k*. Thus the gradients of unconverged components vanish, implying that they remain in their unconverged state.
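The δ_{kk*} structure of the gradients can be checked numerically on a toy single-sample example: perturbing a component that is not best-matching leaves L̂ completely unchanged, so its gradient is exactly zero. This is our own illustration, not code from the paper:

```python
import numpy as np

# L^ for a single sample: max_k (log pi_k + log N_k(x))
def lhat_single(log_pi, log_N):
    return np.max(log_pi + log_N)

log_pi = np.log(np.array([0.5, 0.5]))
log_N = np.array([3.0, -101.0])          # component 0 is best-matching (k*)
base = lhat_single(log_pi, log_N)

# perturbing the non-best-matching component does not change L^ at all,
# so its parameters receive zero gradient: it stays unconverged
delta = lhat_single(log_pi, log_N + np.array([0.0, 1e-3])) - base
```

This is precisely the mechanism behind the sparse-component local optimum: components that start far from the data never win the max and therefore never move.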
A major concern about the paper relates to unsupported claims about its contributions. For example, how the training copes with distribution shift or alleviates forgetting is not clear or elaborated on; beyond the abstract and before the empirical validation, no theory or justification is provided to substantiate this claim. The idea of the paper and the motivation are very interesting, and the experiments look convincing. The writing and presentation are a good starting point for improving the paper.
This paper presents a stochastic gradient descent approach to learning a non-stationary, high-dimensional Gaussian mixture model from online data. The authors identify three challenges (local optima, numerical instability, and catastrophic forgetting) and propose to address these challenges respectively with adaptive annealing, an exponential-free approximation, and an adaptive SGD learning rate. The proposed approach is demonstrated on several vision and non-vision tasks.
Learn2Weight: Weights Transfer Defense against Similar-domain Adversarial Attacks
1 INTRODUCTION. As machine learning models are applied to more and more real-world tasks, addressing machine learning safety is becoming an increasingly pressing issue. Deep learning algorithms have been shown to be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014; Papernot et al., 2016a). In particular, prior black-box adversarial attacks assume that the adversary is not aware of the target model architecture, parameters or training data, but is capable of querying the target model with supplied inputs and obtaining the output predictions. The phenomenon that adversarial examples generated from one model may also be adversarial to another model is known as adversarial transferability (Szegedy et al., 2013). Motivated by adversarial transferability, we conjecture another black-box attack pipeline where the adversary does not even need access to the target model, nor to query labels for crafted inputs. Instead, as long as the adversary knows the task of the target, he can choose a similar domain to build a substitute model and then attack the target model with adversarial examples that are generated from the attack domain. The similar-domain adversarial attack may be more practical than prior black-box attacks, as label querying from the target model is not needed. This attack can be illustrated with the following example (Figure 1b) in medical insurance fraud (Finlayson et al., 2019). Insurance companies may use hypothetical opioid risk models to classify the likelihood (high/low) of a patient abusing the opioids to be prescribed, based on the patient's medical history as text input. Physicians can run the original patient history through the attack pipeline to generate an adversarial patient history, where the original is more likely to be rejected ("high" risk) and the adversarial is more likely to be accepted ("low" risk).
Perturbations in the patient history could be, for example, a slight change from "alcohol abuse" to "alcohol dependence", and it may successfully fool the insurance company's model. Based on domain adaptation theory (Ben-David et al., 2010), we conjecture that it is the domain-variant features that cause the success of the similar-domain attack. Adversarial examples with domain-variant features are likely to reside in the low-density regions (far away from the decision boundary) of the empirical distribution of the target training data, which can fool the target model (Zhang et al., 2019b). The literature indicates that worsened generalizability is a tradeoff faced by existing defenses such as adversarial training (Raghunathan et al., 2019) and domain generalization techniques (Wang et al., 2019). In trying to increase robustness against adversarial inputs, a model faces a tradeoff of weakened accuracy on clean inputs. Given that an adversarial training loss function is composed of a loss against clean inputs and a loss against adversarial inputs, improper optimization where the latter is highly optimized and the former weakly optimized does not improve general performance in the real world. To curb this issue, methods have been proposed (Zhang et al., 2019b; Lamb et al., 2019; Schmidt et al., 2018), such as factoring in under-represented data points in the training set. To defend against this similar-domain adversarial attack, we propose a weight transfer network approach, Learn2Weight, so that the target model's decision boundary can adapt to examples from low-density regions. Experiments confirm the effectiveness of our approach against the similar-domain attack over other baseline defense methods. Moreover, our approach is able to improve robust accuracy without losing the target model's standard generalization accuracy.
Our contributions can be summarized as follows: • We are among the first to propose the similar-domain adversarial attack. This attack pipeline relaxes the previous black-box attack assumption that the adversary has access to the target model and can query it with crafted examples. • We propose a defensive strategy for this attack based on domain adaptation theory. Experimental results show the effectiveness of our approach over existing defense methods against the similar-domain attack. 2 RELATED WORK. Recent work on adversarial attacks for NLP systems has attracted attention; see the survey by Zhang et al. (2020) for an overview of adversarial attacks in NLP. Existing research proposes different attack methods for generating adversarial text examples (Moosavi-Dezfooli et al., 2016; Ebrahimi et al., 2018; Wallace et al., 2019). The crafted adversarial text examples have been shown to fool state-of-the-art NLP systems such as BERT (Jin et al., 2019). A large body of adversarial attack research focuses on black-box attacks in which the adversary builds a substitute model by querying the target model with supplied inputs and obtaining the output predictions. The key idea behind such black-box attacks is that adversarial examples generated from one model may also be misclassified by another model, which is known as adversarial transferability (Szegedy et al., 2013; Cheng et al., 2019). While prior work examines transferability between different models trained over the same dataset, or between the same or different models trained over disjoint subsets of a dataset, our work examines adversarial transferability between different domains, which we call the similar-domain attack. 3 SIMILAR-DOMAIN ADVERSARIAL ATTACK. 3.1 ADVERSARIAL ATTACK BACKGROUND. Adversarial attacks modify inputs to cause errors in machine learning inference (Szegedy et al., 2013).
We utilize the basic gradient-based attack method Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) and its variants, RAND-FGSM (Tramèr et al., 2017) and the Basic Iterative Method (BIM) (Kurakin et al., 2016a;b; Xie et al., 2018). Other NLP adversarial generation algorithms could also be used, such as DeepFool (Moosavi-Dezfooli et al., 2016), HotFlip (Ebrahimi et al., 2018), universal adversarial triggers (Wallace et al., 2019), and TextFooler (Jin et al., 2019). To perform gradient-based perturbations on discrete-space data, we follow Yang et al. (2018) to generate adversarial text. Our proposed similar-domain adversarial attack is invariant to the adversarial algorithm, meaning that the choice of adversarial algorithm does not affect the attack performance. Without loss of generality, we denote by Adv(f, x) an NLP adversarial text generation method, defined below. Definition 1. NLP Adversarial Generation. Given a deep neural network model f built on text data X, an NLP adversarial generation method produces an adversarial instance x′ ← Adv(f, x) for x ∈ X with x′ ≈ x. The goal of the adversarial attack is to change the predicted label to an incorrect one: f(x′) ≠ f(x). 3.2 SIMILAR-DOMAIN ADVERSARIAL ATTACK. We present the architecture of the similar-domain adversarial attack in Figure 1a. The defender, the target of the attack, constructs a target model trained on domain text data T (0). An attacker, having only a rough idea about the target's task but lacking direct access to the target data or target model parameters, collects attack data from a similar domain S and trains an attack model (1). He runs the attack model on the test data (2) to obtain correctly classified instances (3). He chooses an adversarial attack algorithm and generates a set of adversarial samples A (4). He exposes A to the target model, hoping A misleads the target model into producing an output of his choice (5).
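As a concrete illustration of the gradient-based perturbation in Definition 1, the following is a minimal FGSM sketch on a continuous embedding vector. The logistic classifier, the toy weights, and the ε value are all illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step on a continuous (embedding) vector x for a toy
    logistic classifier p(y=1|x) = sigmoid(w.x + b). Moves x in the
    direction that increases the loss for the true label y."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y) * w                            # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)                # x' = x + eps * sign(grad)

# Toy example: the classifier strongly prefers class 1 for this x.
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.0]); y = 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
# The perturbation pushes the logit toward the wrong class.
assert np.dot(w, x_adv) < np.dot(w, x)
```

In the paper's setting, the same sign-of-gradient step would be applied in the substitute model's embedding space rather than to this toy classifier.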
This type of attack works best against systems that base decision-making on a single instance. Definition 2. Similar-domain Adversarial Attack. A target model f, built on target domain data T, is a deep neural network model with parameter weights WT that maps a text instance to a label: y ← f(X, WT). An adversary chooses a source attack domain S, builds a substitute model fS, and generates a set of adversarial examples A from S using Adv(fS, S), so that during an attack f(A, WT) = fS(A). 3.3 DOMAIN SIMILARITY. Here, domain similarity refers to the similarity between the attacker's chosen domain and the defender's domain. SharedVocab measures the overlap of unique words in each of the datasets; a higher degree of overlapping vocabulary implies the two domains are more similar. We also use Transfer Loss, a standard metric for domain adaptation (Blitzer et al., 2007; Glorot et al., 2011), to measure domain similarity; lower loss indicates higher similarity. The test error from a target model trained on target domain T and evaluated on attack domain S gives the transfer error e(S, T). The baseline error e(T, T) is the test error obtained from the target model trained on target domain (train) data T and tested on target domain (evaluation) data T. The transfer loss is then tf(S, T) = e(S, T) − e(T, T). 4 IS THE ATTACK EFFECTIVE? 4.1 SETUP. Dataset. We simulate the similar-domain adversarial attack using Amazon's multi-domain sentiment classification dataset (Blitzer et al., 2007), a commonly used dataset in cross-domain sentiment classification1, with 1,000 positive and 1,000 negative reviews for each of the 25 product categories. Model. In practice, there are unlimited choices for the attack model and target model, such as different deep learning architectures and different training parameters.
To simplify the discussion, we choose a Long Short-Term Memory (LSTM) network as a suitable baseline sentiment classification model (Wang et al., 2018) for both our target model and attack model. The architecture consists of 64 LSTM cells with 80% dropout, using a sigmoid activation function. Metrics. We first report the accuracy of the target models on the target domain test samples before the attack as the original accuracy. Then we measure the accuracy of the target models against adversarial samples crafted from the attack domain samples, denoted the after-attack accuracy. Intra-attack accuracy denotes the after-attack accuracy where the attack domain is identical to the target domain. By comparing original and after-attack accuracy, we can evaluate the success of the attack: the greater the gap between them, the more successful the attack. Unperturbed accuracy measures the accuracy of the target model on the complete, unperturbed test set of the attack domain, to demonstrate that any drop in classification accuracy is not from domain shift alone but from adversarial transferability. 4.2 RESULTS. The similar-domain adversarial attack results are presented in Table 2. We see a significant gap between original accuracy and after-attack accuracy, indicating that this attack can pose a valid threat to a target NLP system. After the similar-domain adversarial attack, the accuracy drops dramatically by a large margin. Taking the book target domain for example, when the attack domain is magazine, the after-attack accuracy drops to 0.398, and when the attack domain is baby, the accuracy is 0.421. Moreover, we observe a positive correlation between transfer loss and after-attack accuracy, and a negative correlation between shared vocabulary and after-attack accuracy. 1Data is available from https://www.cs.jhu.edu/~mdredze/datasets/sentiment/
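The two domain-similarity measures from Section 3.3 can be computed directly. A minimal sketch follows; note that the Jaccard-style normalization for SharedVocab is our assumption, since the paper only specifies "overlap of unique words", and the error values passed to the transfer-loss function are made up for illustration.

```python
def shared_vocab(docs_s, docs_t):
    """Fraction of unique words shared between two corpora (Jaccard
    overlap; the exact normalization is an assumption -- the paper only
    specifies 'overlap of unique words')."""
    vs = {w for d in docs_s for w in d.lower().split()}
    vt = {w for d in docs_t for w in d.lower().split()}
    return len(vs & vt) / len(vs | vt)

def transfer_loss(err_s_on_t, err_t_on_t):
    """tf(S, T) = e(S, T) - e(T, T): transfer error minus the
    in-domain baseline error."""
    return err_s_on_t - err_t_on_t

overlap = shared_vocab(["great book fun read"], ["fun toy great gift"])
assert abs(overlap - 2/6) < 1e-9   # {great, fun} shared out of 6 unique words
assert abs(transfer_loss(0.35, 0.12) - 0.23) < 1e-9
```

Under this view, an attacker would pick the candidate domain S that maximizes SharedVocab (or minimizes estimated transfer loss) against a guess of the target domain.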
This paper is about generating adversarial examples for some target model and protecting against such attacks. The authors consider a setting in which an adversary has access to data from a domain "similar to the target" and can use this data to train a surrogate model. Using this surrogate model, the adversary can generate adversarial examples that apparently also fool the target model. The authors then propose a defense mechanism against this type of attack, Learn2Weight. This is a learned network that, for a given example, returns a perturbation of the target model's weights, which is applied to the target before inference. This model is trained by the defender on synthetic domains generated as perturbations of the target data.
SP:d489a94958b9f496aa7713249451c5ffe0c6892c
Learn2Weight: Weights Transfer Defense against Similar-domain Adversarial Attacks
The paper considers the adversarial attacks via a surrogate model constructed using data from a different domain. The authors propose a defense from such attacks by a special kind of adversarial training inspired by the idea of domain adaptation. The idea can be useful but raises a lot of questions, especially when looking at the evaluation of the proposed approach.
Normalizing Flows for Calibration and Recalibration
In machine learning, due to model misspecification and overfitting, estimates of the aleatoric uncertainty are often inaccurate. One approach to fix this is isotonic regression, in which a monotonic function is fit on a validation set to map the model's CDF to an optimally calibrated CDF. However, this makes it infeasible to compute additional statistics of interest on the model distribution (such as the mean). In this paper, through a reframing of recalibration as MLE, we replace isotonic regression with normalizing flows. This allows us to retain the ability to compute the statistical properties of the model (such as closed-form likelihoods, mean, correlation, etc.) and provides an opportunity for additional capacity at the cost of possible overfitting. Most importantly, the fundamental properties of normalizing flows allow us to generalize recalibration to conditional and multivariate distributions. To aid in detecting miscalibration and measuring our success at fixing it, we use a simple extension of the calibration Q-Q plot. 1 INTRODUCTION. Recent advances in deep learning have led to models with significantly higher overall accuracy on both classification and regression tasks compared to what was achievable in the past. However, an important component in conjunction with accuracy is a model's ability to accurately assess the uncertainty in its predictions. Most taxonomies classify uncertainty into three sources: approximation, aleatoric, and epistemic uncertainty (Der Kiureghian & Ditlevsen, 2009). Approximation uncertainty quantifies the error from fitting a simple model to complex data. Aleatoric uncertainty quantifies the uncertainty of the conditional distribution of the target variable given the features. This uncertainty arises from hidden variables or measurement errors and cannot be reduced by collecting more data under the same experimental conditions.
Epistemic uncertainty quantifies the uncertainty arising from fitting a model with finite data, i.e., it is inversely proportional to the density of the training examples and can be reduced by collecting data in the low-density regions. These different sources of uncertainty call for different handling techniques. Using high-capacity models such as neural networks removes a large part of the approximation uncertainty. By fitting a full distribution on the target conditional on the features, we can model the aleatoric uncertainty from observations. Inaccurate estimates of aleatoric uncertainty can be explained by underfitting (insufficient complexity in the conditional distributions) or overfitting (models with sufficient capacity can memorize the data, leading to the distributions collapsing to deltas). Though epistemic uncertainty is important for a model to know what it does not know, the focus of this paper is on improving estimates of the aleatoric uncertainty. Our approach in this paper is to handle both model fit and calibration using normalizing flows. Normalizing flows can be used in conjunction with amortized inference to improve the flexibility of the output distribution, and further, through a reframing of recalibration as maximum likelihood estimation (MLE), normalizing flows can be used to handle any miscalibration found on a validation set. Further, we use a simple extension of the calibration plot from Kuleshov et al. (2018) to help analyze the calibration of a model across different regions of the data. 2 RELATED WORK. One method for handling aleatoric uncertainty is amortized inference with Gaussians (Lakshminarayanan et al., 2017; Nix & Weigend, 1994; Kendall & Gal, 2017), where a model, such as a neural network, maps from features to the parameters of a Gaussian.
This approach models aleatoric uncertainty directly but suffers from approximation uncertainty, as a Gaussian cannot model complex targets. Another approach is Bayesian methods such as Bayesian Ridge Regression (Tipping, 2001) and MC Dropout (Gal & Ghahramani, 2016). Similar to amortized inference with Gaussians, the output distribution limits the capacity of the model. Full Bayesian techniques with neural networks are often too computationally expensive in practice, and approximate methods often fail to capture the full complexity of the uncertainty (Lakshminarayanan et al., 2017). Another family of methods uses quantile regression with non-linear techniques such as decision trees or neural networks. Some of these methods require a predefined set of quantiles (Takeuchi et al., 2006; Wen et al., 2017; Rodrigues & Pereira, 2018; Taylor, 2000). Simultaneous Quantile Regression (SQR; Tagasovska & Lopez-Paz, 2019) trains one model on all quantiles and is able to learn complex-shaped distributions. However, the training procedure requires the model to learn to be monotonic rather than being constrained to be so, and it is not trivial to extend to multidimensional outputs. Pearce et al. (2018) learn a finite set of quantiles by using quality metrics for predictive intervals. Normalizing flows have been used in the contexts of variational inference and generative modeling. Approaches to normalizing flows can be categorized into autoregressive methods (Kingma et al., 2016; Papamakarios et al., 2017; Huang et al., 2018; Cao et al., 2019), coupling layers (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018; Ho et al., 2019), residual networks (Rezende & Mohamed, 2015; van den Berg et al., 2018; Gopal, 2020), and continuous flows (Grathwohl et al., 2018). 3 BACKGROUND. 3.1 RECALIBRATION.
An important goal in modeling is to have well-calibrated distributions, as this allows users of the model to understand the confidence the model places on its predictions. In other words, with a well-calibrated model, we can better ascertain the uncertainty in the model's predictions and respond differently to those predictions depending on the uncertainty surrounding them. Guo et al. (2017) showed that, unlike techniques used decades ago such as Bayesian Ridge Regression, modern neural network-based classifiers are very poorly calibrated. A simple variant of Platt scaling and other histogram-based techniques applied to a validation set were shown to help alleviate the calibration problem, where perfect calibration is defined as P(Ŷ = Y | P̂ = p) = p for all p ∈ [0, 1], where Ŷ is a class prediction and P̂ is its predicted probability of correctness. Kuleshov et al. (2018) extended the analysis in Guo et al. (2017) to neural network-based regressors; isotonic regression, a method for learning monotonic univariate functions, was applied to map from F_{x_j}(y_j) = p̂_j to |{y_n | F_{x_n}(y_n) < p̂_j, n = 1, ..., N}| / N (the fraction of the data where the model CDF is less than p̂_j) to improve calibration, where perfect calibration is defined as P(Y < F_X^{-1}(p)) = p for all p ∈ [0, 1], with F_X the predicted CDF. Kuleshov et al. (2018) further introduced calibration error as a metric to quantitatively measure how well the quantiles are aligned:

p̂_j = |{y_n | F_{x_n}(y_n) < p_j, n = 1, ..., N}| / N

cal(y_1, ..., y_N) = Σ_{j=1}^{M} (p_j − p̂_j)²   (1)

where M is the number of quantiles that are evaluated. In this paper, we set this to 100 evenly spaced quantiles. 3.2 NORMALIZING FLOWS. A crucial component in improving uncertainty estimates in this paper is fitting normalizing flows, a generative model for density estimation using invertible functions.
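The calibration error in Eq. (1) is straightforward to compute once the model's CDF has been evaluated at each observation. Below is a minimal sketch; the specific test distributions are illustrative assumptions, not data from the paper.

```python
import numpy as np

def calibration_error(cdf_vals, n_quantiles=100):
    """Calibration error from Eq. (1): compare each target quantile p_j
    with the empirical fraction of points whose model CDF falls below
    p_j. Here cdf_vals[n] = F_{x_n}(y_n), the model CDF evaluated at
    each observation y_n."""
    cdf_vals = np.asarray(cdf_vals)
    p = np.linspace(0, 1, n_quantiles)                    # quantiles p_j
    p_hat = np.array([(cdf_vals < pj).mean() for pj in p])
    return float(np.sum((p - p_hat) ** 2))

# A well-calibrated model yields CDF values roughly uniform on [0, 1];
# a miscalibrated one piles its CDF mass in a narrow band.
uniform = np.linspace(0.005, 0.995, 100)
miscal = np.full(100, 0.9)
assert calibration_error(uniform) < calibration_error(miscal)
```

In the recalibration setting, this quantity would be computed on a held-out validation set both before and after fitting the recalibration map.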
Suppose that we wish to formulate a joint distribution on an n-dimensional real vector x. A flow-based approach treats x as the result of a transformation g applied to an underlying vector z sampled from a base distribution p_z(z). The generative process for flows is defined as z ∼ p_z(z), x = g(z), where p_z is often a Normal distribution and g is an invertible function. Notationally, we will use f = g⁻¹. Using the change-of-variables formula, the log likelihood of x is

log p_x(x) = log p_z(f(x)) + log |det(∂f(x)/∂x)|

To train flows (i.e., maximize the log likelihood of data points), we need to be able to compute the logarithm of the absolute value of the determinant of the Jacobian of f, also called the log-determinant. To construct large normalizing flows, we can compose smaller ones, as the composition is still invertible and its log-determinant is the sum of the individual log-determinants. An important observation is that every univariate distribution can be viewed as a flow in which the base distribution p_z(z) is a Uniform distribution over [0, 1] and g is the inverse CDF of the univariate distribution. This can be generalized to multivariate distributions using the chain rule. 3.3 CONDITIONAL QUAR FLOWS. In this paper, the specific normalizing flow used is Quasi-Autoregressive Residual Flows (QuAR Flows; Gopal, 2020). QuAR Flows were shown to model complex distributions while having nice optimization properties and computationally efficient log-likelihoods. Other commonly used normalizing flows have drawbacks. Coupling layers do not work for one-dimensional inputs. Autoregressive flows are often conditional Gaussians, which means that in the one-dimensional case the flow is simply a Gaussian. Residual Flows (Chen et al., 2019) are expressive but, though equivalent to QuAR Flows in the one-dimensional case, are computationally expensive for high-dimensional distributions.
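The change-of-variables formula can be checked on the simplest possible flow: a one-dimensional affine map x = g(z) = a·z + b with z ∼ N(0, 1), whose inverse is f(x) = (x − b)/a with df/dx = 1/a. This toy flow (not a QuAR Flow) should reproduce the N(b, a²) density exactly:

```python
import math

def affine_flow_logpdf(x, a, b):
    """log p_x(x) for the flow x = a*z + b with z ~ N(0, 1), computed
    via change of variables: log p_z(f(x)) + log |det df/dx|, where
    f(x) = (x - b) / a and df/dx = 1/a."""
    z = (x - b) / a
    log_pz = -0.5 * z * z - 0.5 * math.log(2 * math.pi)  # standard Normal
    log_det = -math.log(abs(a))                          # log |1/a|
    return log_pz + log_det

def normal_logpdf(x, mu, sigma):
    """Closed-form log density of N(mu, sigma^2), for comparison."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma) - 0.5 * math.log(2 * math.pi))

# x = a*z + b with z ~ N(0,1) implies x ~ N(b, a^2), so the two agree.
assert abs(affine_flow_logpdf(1.3, 2.0, 0.5) - normal_logpdf(1.3, 0.5, 2.0)) < 1e-12
```

Richer flows differ from this sketch only in that f and its log-determinant come from a learned invertible network rather than a fixed affine map.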
If we would like to condition a QuAR Flow on some features c, we could use a hypernetwork that maps c to the parameters of the QuAR Flow (Figure 1a). However, this can be computationally expensive if the QuAR Flow is parameterized with thousands of parameters. Instead, conditional information can be incorporated by concatenating the conditions c before applying the first layer in the residual connection within a QuAR Flow (Figure 1b). 4 REGRESSION USING NORMALIZING FLOWS. The goal of regression is to model the conditional probability distribution p(y|x). Frequently, this task is reduced to obtaining point estimates of the mean or median by minimizing the mean squared error or mean absolute error, respectively. In order to access uncertainties, we must model the full conditional distribution; in addition, we can still retrieve point estimates of the mean, median, or any other distributional statistic. In this section, we discuss how normalizing flows can be used to this end, as well as how recalibration can be reframed as maximum likelihood estimation (MLE). 4.1 CALIBRATION WITH NORMALIZING FLOWS. One approach to modeling aleatoric uncertainty is to use amortized inference with a Normal distribution as the output distribution, conditioned on features (Lakshminarayanan et al., 2017). However, if a Gaussian is not appropriate for the target, then the miscalibration can be attributed to approximation error, i.e., insufficient capacity to learn the distribution. A simple example is the case where we have no features to condition on (or all the features we have contain no relevant information) with a one-dimensional target. In Figure 2, we can see that since the target is bimodal, the Gaussian fit does not describe the data well. To improve this model, we replace the Gaussian with a normalizing flow conditioned on our data, similar to Figure 1b.
In this way, the distribution fit to the targets has sufficient capacity to learn complex distributions, including multimodal distributions (Figure 2).
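The concatenation-based conditioning of Figure 1b can be sketched as follows. This is a minimal illustration: the layer shape, the `0.5 * tanh` residual branch, and all names are our assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_residual_step(z, c, W, b):
    """Residual-style transform whose first layer sees [z; c] (Figure 1b idea).

    z : (d,) flow input, c : (k,) condition, W : (d, d + k), b : (d,).
    The small 0.5 * tanh(...) branch helps keep the update a contraction,
    which residual flows rely on for invertibility.
    """
    h = np.concatenate([z, c])          # condition enters by concatenation
    return z + 0.5 * np.tanh(W @ h + b)

d, k = 3, 2
W = 0.1 * rng.normal(size=(d, d + k))
b = np.zeros(d)
z = rng.normal(size=d)
out = conditional_residual_step(z, np.array([1.0, -1.0]), W, b)
assert out.shape == (d,)
```

Because c only enters through the concatenation, the per-step parameter count stays fixed regardless of how the conditioning features change, unlike the hypernetwork route.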
The paper proposes to use normalizing flows to improve estimates of aleatoric uncertainty in regression tasks. First, the paper suggests that since normalizing flows can improve the flexibility of output distribution, they can be used to mitigate issues of underfitting. Second, the paper proposes an approach that uses normalizing flows for recalibration, which allows people to still query the model’s statistical properties. The authors also introduce a plot that is useful for diagnosing calibration issues.
Normalizing Flows for Calibration and Recalibration
In machine learning, due to model misspecification and overfitting, estimates of the aleatoric uncertainty are often inaccurate. One approach to fix this is isotonic regression, in which a monotonic function is fit on a validation set to map the model's CDF to an optimally calibrated CDF. However, this makes it infeasible to compute additional statistics of interest on the model distribution (such as the mean). In this paper, through a reframing of recalibration as MLE, we replace isotonic regression with normalizing flows. This allows us to retain the ability to compute the statistical properties of the model (such as closed-form likelihoods, mean, correlation, etc.) and provides an opportunity for additional capacity, at the cost of possible overfitting. Most importantly, the fundamental properties of normalizing flows allow us to generalize recalibration to conditional and multivariate distributions. To aid in detecting miscalibration and measuring our success at fixing it, we use a simple extension of the calibration Q-Q plot.

1 INTRODUCTION. Recent advances in deep learning have led to models with significantly higher overall accuracy on both classification and regression tasks compared to what was achievable in the past. However, an important component alongside accuracy is a model's ability to accurately assess the uncertainty in its predictions. Most taxonomies classify uncertainty into three sources: approximation, aleatoric, and epistemic uncertainty (Der Kiureghian & Ditlevsen, 2009). Approximation uncertainty quantifies the error from fitting a simple model to complex data. Aleatoric uncertainty quantifies the uncertainty of the conditional distribution of the target variable given features. This uncertainty arises from hidden variables or measurement errors and cannot be reduced by collecting more data under the same experimental conditions.
Epistemic uncertainty quantifies the uncertainty arising from fitting a model on finite data, i.e., it is inversely proportional to the density of the training examples and can be reduced by collecting data in the low density regions. These different sources of uncertainty call for different techniques. Using high capacity models such as neural networks removes a large part of the approximation uncertainty. By fitting a full distribution on the target conditional on features, we can model the aleatoric uncertainty from observations. Inaccurate estimates of aleatoric uncertainty can be explained by underfitting (insufficient complexity in the conditional distributions) or overfitting (models with sufficient capacity can memorize the data, leading to the distributions collapsing to deltas). Though epistemic uncertainty is important for the model to answer what it does not know, the focus of this paper is on improving estimates of the aleatoric uncertainty. Our approach is to handle both model fit and calibration using normalizing flows. Normalizing flows can be used in conjunction with amortized inference to improve the flexibility of the output distribution, and further, through a reframing of recalibration as maximum likelihood estimation (MLE), normalizing flows can be used to handle any miscalibration found on a validation set. In addition, we use a simple extension of the calibration plot from Kuleshov et al. (2018) to help with the analysis of the calibration of a model across different regions of the data.

2 RELATED WORK. One method for handling aleatoric uncertainty is amortized inference with Gaussians (Lakshminarayanan et al., 2017; Nix & Weigend, 1994; Kendall & Gal, 2017), where a model, such as a neural network, maps from features to the parameters of a Gaussian.
This approach models aleatoric uncertainty directly but suffers from approximation uncertainty, as a Gaussian cannot model complex targets. Another approach is Bayesian methods such as Bayesian Ridge Regression (Tipping, 2001) and MC Dropout (Gal & Ghahramani, 2016). Similar to amortized inference with Gaussians, the output distribution limits the capacity of the model. Full Bayesian techniques with neural networks are often too computationally expensive in practice, and approximate methods often fail to capture the full complexity of the uncertainty (Lakshminarayanan et al., 2017). Another family of methods uses quantile regression with non-linear techniques such as decision trees or neural networks. Some of these methods require a predefined set of quantiles (Takeuchi et al., 2006; Wen et al., 2017; Rodrigues & Pereira, 2018; Taylor, 2000). Simultaneous Quantile Regression (SQR, Tagasovska & Lopez-Paz (2019)) trains one model on all quantiles and is able to learn complex shaped distributions. However, the training procedure requires the model to learn to be monotonic rather than being constrained to be so, and it is not trivial to extend to multidimensional outputs. Pearce et al. (2018) learn a finite set of quantiles by using quality metrics for predictive intervals. Normalizing flows have been used in the contexts of variational inference and generative modeling. Approaches to normalizing flows can be categorized into autoregressive methods (Kingma et al., 2016; Papamakarios et al., 2017; Huang et al., 2018; Cao et al., 2019), coupling layers (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018; Ho et al., 2019), residual networks (Rezende & Mohamed, 2015; van den Berg et al., 2018; Gopal, 2020), and continuous flows (Grathwohl et al., 2018).

3 BACKGROUND. 3.1 RECALIBRATION.
An important goal in modeling is to have well-calibrated distributions, as this allows users of the model to understand the confidence the model places on its predictions. In other words, with a well-calibrated model, we can better ascertain the uncertainty in the model's predictions and respond differently to those predictions depending on the uncertainty surrounding them. Guo et al. (2017) showed that, unlike techniques used decades ago such as Bayesian Ridge Regression, modern neural network-based classifiers are very poorly calibrated. A simple variant of Platt scaling and other histogram based techniques applied to a validation set were shown to help alleviate the calibration problem, where perfect calibration is defined as

$P(\hat{Y} = Y \mid \hat{P} = p) = p, \quad \forall p \in [0, 1],$

where $\hat{Y}$ is a class prediction and $\hat{P}$ is its predicted probability of correctness. Kuleshov et al. (2018) extended the analysis in Guo et al. (2017) to neural network-based regressors; isotonic regression, a method for learning monotonic univariate functions, was applied to map from $F_{x_j}(y_j) = \hat{p}_j$ to $|\{y_n \mid F_{x_n}(y_n) < \hat{p}_j, n = 1, \dots, N\}| / N$ (the fraction of the data where the model CDF is less than $\hat{p}_j$) to improve calibration, where perfect calibration is defined as

$P(Y < F_X^{-1}(p)) = p, \quad \forall p \in [0, 1],$

where $F_X$ is the predicted CDF. Kuleshov et al. (2018) further introduced calibration error as a metric to quantitatively measure how well the quantiles are aligned:

$\hat{p}_j = |\{y_n \mid F_{x_n}(y_n) < p_j,\ n = 1, \dots, N\}| / N, \qquad \mathrm{cal}(y_1, \dots, y_N) = \sum_{j=1}^{M} (p_j - \hat{p}_j)^2, \quad (1)$

where M is the number of quantiles that are evaluated. In this paper, we set this to 100 evenly spaced quantiles.

3.2 NORMALIZING FLOWS. A crucial component in improving uncertainty estimates in this paper is fitting normalizing flows, a class of generative models for density estimation built from invertible functions.
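The calibration error of Eq. (1) can be computed directly from the model CDF evaluated at each observed target. A short sketch (function and variable names are ours):

```python
import numpy as np

def calibration_error(cdf_vals, num_quantiles=100):
    """Eq. (1): cdf_vals[n] = F_{x_n}(y_n), the model CDF at each target y_n."""
    cdf_vals = np.asarray(cdf_vals)
    p = np.linspace(0.0, 1.0, num_quantiles)                 # quantiles p_j
    p_hat = np.array([(cdf_vals < pj).mean() for pj in p])   # empirical p_hat_j
    return float(np.sum((p - p_hat) ** 2))

# For a perfectly calibrated model, F_{x_n}(y_n) ~ Uniform[0, 1], so the
# error tends to 0 as N grows; a constant (degenerate) CDF value scores badly.
u = np.random.default_rng(0).uniform(size=100_000)
assert calibration_error(u) < 0.05
assert calibration_error(np.full(1000, 0.5)) > 1.0
```

This is also the quantity that the recalibration step tries to drive toward zero on the validation set.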
The authors propose an approach to calibrate conditional distribution estimation models. The approach uses normalizing flows to transform an existing model's predictions into a prediction that better matches the empirical quantiles to the theoretical quantiles. After this remapping procedure, the authors introduce a new plot to visualize calibration. Empirical benchmarks are run on a suite of UCI datasets.
Coordinated Multi-Agent Exploration Using Shared Goals
1 INTRODUCTION. Cooperative multi-agent reinforcement learning (MARL) is an increasingly important field. Indeed, many real-world problems are naturally modeled using MARL techniques. For instance, tasks from areas as diverse as robot fleet coordination (Swamy et al., 2020; Hüttenrauch et al., 2019) and autonomous traffic control (Bazzan, 2008; Sunehag et al., 2018) fit MARL formulations. To address MARL problems, early work followed the independent single-agent reinforcement learning paradigm (Tampuu et al., 2015; Tan, 1993; Matignon et al., 2012). However, more recently, specifically tailored techniques such as monotonic value function factorization (QMIX) (Rashid et al., 2018), multi-agent deep deterministic policy gradient (MADDPG) (Lowe et al., 2017), and counterfactual multi-agent policy gradients (COMA) (Foerster et al., 2018) have been developed. Those methods excel in a multi-agent setting because they address the non-stationarity issue of MARL and develop communication protocols between agents. Despite those advances and the resulting reported performance improvements, a common issue remained: all of the aforementioned methods use exploration techniques from classical algorithms. Specifically, these methods employ noise-based exploration, i.e., the exploration policy is a noisy version of the actor policy. For instance, Lowe et al. (2017) add Ornstein-Uhlenbeck (OU) (Uhlenbeck & Ornstein, 1930) noise or Gaussian noise to the actor policy. Foerster et al. (2016); Rashid et al. (2018); Yang et al. (2018); Foerster et al. (2017) use variants of ε-greedy exploration, where a random suboptimal action is selected with probability ε. It was recognized recently that the use of classical exploration techniques is sub-optimal in a multi-agent reinforcement learning setting. Specifically, Mahajan et al.
(2019) show that QMIX with ε-greedy exploration results in slow exploration and sub-optimality. Mahajan et al. (2019) improve exploration by conditioning an agent's behavior on a shared latent variable controlled by a hierarchical policy. Even more recently, Wang et al. (2020) encourage coordinated exploration by considering the influence of one agent's behavior on other agents' behaviors. While all of the aforementioned exploration techniques for multi-agent reinforcement learning significantly improve results, they suffer from a common challenge: agents struggle to identify states that are worth exploring and don't coordinate their exploration efforts toward those states. To give an example, consider a push-box task, where two agents need to jointly push a heavy box to a specific location before observing a reward. In this situation, instead of exploring the environment independently, the agents need to coordinate pushing the box through the environment to find the specific location. To address this issue, we propose coordinated multi-agent exploration (CMAE), where multiple agents share a common goal. We achieve this by first projecting the joint state space to multiple subspaces. We develop a normalized-entropy-based (Cover & Thomas, 1991) technique to select a goal from the under-explored subspaces. Then, exploration policies are trained to reach the goals in a coordinated manner. To show that CMAE improves results, we evaluate our approach on various sparse-reward environments from Wang et al. (2020) and on the sparse-reward version of the StarCraft multi-agent challenge (SMAC) (Samvelyan et al., 2019), which requires coordinated actions among agents over extended time steps before observing a reward. The experimental results show that our approach needs only 1%-5% of the environment steps to achieve similar or better average test episode returns than current state-of-the-art baselines. 2 PRELIMINARIES.
In this section, we define the multi-agent Markov decision process (MDP) in Sec. 2.1 and introduce the multi-agent reinforcement learning setting in Sec. 2.2.

2.1 MULTI-AGENT MARKOV DECISION PROCESS. We model a cooperative multi-agent system as a multi-agent Markov decision process (MDP). An n-agent MDP is defined by a tuple $G = (S, A, T, R, Z, O, n, \gamma, H)$. S is the global state space of the environment. A is the action space. At each time step t, each agent's policy $\pi_i$, $i \in \{1, \dots, n\}$, selects an action $a_i^t \in A$. All selected actions form a joint action $a^t \in A^n$. The transition function T maps the current state $s^t$ and the joint action $a^t$ to the next state $s^{t+1}$, i.e., $T : S \times A^n \to S$. All agents receive a collective reward $r^t \in \mathbb{R}$ according to the reward function $R : S \times A^n \to \mathbb{R}$. The goal of all agents' policies is to maximize the collective expected return $\sum_{t=0}^{H} \gamma^t r^t$, where $\gamma \in [0, 1]$ is the discount factor, H is the horizon, and $r^t$ is the collective reward obtained at timestep t. Each agent i observes a local observation $o_i^t \in Z$ according to the observation function $O : S \to Z$. Note, observations usually reveal only partial information about the global state. For instance, the global state may contain the locations of all agents, while the local observation of an agent may only contain the locations of other agents within a limited distance. All agents' local observations form a joint observation, denoted by $o^t$. The global state space S is the product of component spaces $V_i$, i.e., $S = \prod_{i=1}^{M} V_i$, where $V_i \subseteq \mathbb{R}$ (Samvelyan et al., 2019; Lowe et al., 2017; Rashid et al., 2018; Foerster et al., 2018; Mahajan et al., 2019). We refer to $V_i$ as a 'state component.' The set of all component spaces of a product space is referred to as the component set. For instance, the component set of S is $\{V_i \mid i \in \{1, \dots, M\}\}$. Each entity, e.g., agents, objects, etc.
, in the environment is described by a set of state components. We refer to a set of state components associated with an entity in the environment as an 'entity set.' For instance, in a 2-agent push-box environment, where two agents can only collaboratively push a box to a goal location, we have the global state space $S = \prod_{i=1}^{6} V_i$, where $\{V_1, V_2\}$, $\{V_3, V_4\}$, and $\{V_5, V_6\}$ represent the locations of agent one, agent two, and the box, respectively. Consequently, $\{V_1, V_2\}$, $\{V_3, V_4\}$, and $\{V_5, V_6\}$ are three entity sets.

2.2 MULTI-AGENT REINFORCEMENT LEARNING. In this paper, we follow the standard centralized training and decentralized execution (CTDE) paradigm (Lowe et al., 2017; Rashid et al., 2018; Foerster et al., 2018; Mahajan et al., 2019). That is, at training time, the learning algorithm has access to all agents' local observations, actions, and the global state. At execution time, i.e., at test time, each individual agent's policy only has access to its own local observation. The proposed CMAE is applicable to off-policy MARL methods (e.g., Rashid et al., 2018; Lowe et al., 2017; Sunehag et al., 2018; Matignon et al., 2012). In off-policy MARL, exploration policies $\mu_i$, $i \in \{1, \dots, n\}$, are responsible for collecting data from the environment. The data, in the form of transition tuples $(s^t, o^t, a^t, s^{t+1}, o^{t+1})$, is stored in a replay memory D, i.e., $D = \{(s^t, o^t, a^t, s^{t+1}, o^{t+1})\}_t$. The target policies are trained using transition tuples from the replay memory.

Algorithm 1: Training with Coordinated Multi-Agent Exploration (CMAE)
    Initialize exploration policies {µ_i}_{i=1}^n, target policies {π_i}_{i=1}^n, counters {c_k}_{k=1}^K;
    Initialize the environment and replay buffer D; initialize α = 1;
    for episode = 1 ... M do
        Reset the environment and observe global state s^t and observations o^t = (o_1^t, ..., o_n^t);
        for t = 1 ... T do
            UpdateCounter({c_k}_{k=1}^K, s^t, o^t);
            Select actions a^t using a mixture of exploration and target policies α µ_i + (1 − α) π_i, where α decreases linearly to 0;
            Apply a^t to the environment;
            Observe reward r^t, state s^{t+1}, and local observations o^{t+1};
            Add transition tuple {s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t} to D;
            TrainTarget({π_i}_{i=1}^n, D);
        end
        TrainExp({µ_i}_{i=1}^n, {c_k}_{k=1}^K, D);
    end

3 COORDINATED MULTI-AGENT EXPLORATION (CMAE). In the following, we first present an overview of CMAE before discussing the method more formally. Overview: The goal is to train the target policies $\{\pi_i\}_{i \in \{1, \dots, n\}}$ of n agents to maximize the environment episode return. Classical off-policy algorithms (Lowe et al., 2017; Rashid et al., 2018) typically use a noisy version of the target policies $\pi_i$ as the exploration policies $\mu_i$, i.e., to collect data, actions are selected based on the exploration policies $\mu_i$. In contrast, in CMAE, we propose to train the exploration policies with a modified reward. Specifically, the target policies are trained to maximize the usual external episode return, while the exploration policies are trained to collect data from subspaces that haven't been well explored. We find that this strategy significantly improves the training of target policies in the multi-agent reinforcement learning setting because it encourages multiple agents to jointly explore configurations of the state space. Alg. 1 summarizes this approach. At each step, a mixture of the exploration policies $\{\mu_i\}_{i=1}^n$ and target policies $\{\pi_i\}_{i=1}^n$ is used to select actions.
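One way to realize the α-mixture of exploration and target policies in Alg. 1 is to sample, per step, which policy acts. This is our reading of the mixture, not the authors' code; all names are illustrative:

```python
import random

def select_action(mu_i, pi_i, obs, alpha, rng=random):
    """With probability alpha act with the exploration policy mu_i,
    otherwise with the target policy pi_i. In Alg. 1, alpha is annealed
    linearly from 1 to 0 over training, shifting from exploration to
    exploitation."""
    return mu_i(obs) if rng.random() < alpha else pi_i(obs)

# Stand-in policies for illustration.
explore = lambda obs: "explore_action"
target = lambda obs: "target_action"

assert select_action(explore, target, obs=None, alpha=1.0) == "explore_action"
assert select_action(explore, target, obs=None, alpha=0.0) == "target_action"
```

At α = 1 only the exploration policies collect data; by the end of training the mixture is indistinguishable from the target policies.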
The resulting experience tuple is then stored in a replay memory D. The target policies are trained directly on the data in the replay memory D at each step. Note that the exploration policies are only updated at the end of each episode, using a reshaped reward that encourages them to explore under-explored subspaces in a collaborative manner. In the following, we provide details about how we propose to train the exploration policies.

3.1 TRAINING OF EXPLORATION POLICIES. To train the exploration policies $\mu_i$, $i \in \{1, \dots, n\}$, we use a modified reward $\hat{r}$. This modified reward specifies the goal of the exploration. For example, in the two-agent push-box task, we specify a specific joint location of both agents and the box as a goal. Note, the agents ignore all external rewards and only see a positive reward when the goal, i.e., the specified position, is reached. The reward for the goal is set to b, while the rewards for all other situations are zero. To find the goal, we use K counters $c_k$, $k \in \{1, \dots, K\}$. A counter $c_k$ operates on a low-dimensional subspace $S_k$ of the state space S, i.e., $S_k \subseteq S$. The occurrence of every configuration $s_k \in S_k$ within the low-dimensional subspace is recorded using the current replay buffer D.

Algorithm 2: Train Exploration Policies (TrainExp)
    Input: exploration policies {µ_i}_{i=1}^n, counters {c_k}_{k=1}^K, replay buffer D;
    Initialize bonus b;
    Compute normalized entropy η(k) of each subspace k based on the associated counter c_k;
    k* = argmin_k η(k);
    Sample a batch B = {s_i}_{i=1}^M from D;
    g = argmin_{s ∈ B} c_{k*}(proj_{k*}(s));
    for {s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t} ∈ D do
        if s^t == g then r^t = b; else r^t = 0; end
        Update {µ_i}_{i=1}^n using {s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t};
    end

Let $\mathrm{proj}_k$ denote the projection from the global state space to subspace k. Formally, we obtain $c_k(s_k) = \sum_{s \in D} \mathbb{1}[\mathrm{proj}_k(s) = s_k]$, where $\mathbb{1}[\cdot]$ is the indicator function (1 if the argument is true, 0 otherwise) and $\mathrm{proj}_k(s)$ denotes the restriction of state $s \in S$ to subspace $S_k$. Note that we successively increment the counts, i.e., the counters $c_k$ are not recomputed from scratch every time we train the exploration policies. We subsequently normalize the counters $c_k(s_k)$ into a probability distribution $p_k(s_k) = c_k(s_k) / \sum_{\hat{s}_k \in S_k} c_k(\hat{s}_k)$, which is then used to compute a normalized entropy $\eta_k = H / H_{\max} = -\left(\sum_{s \in S_k} p_k(s) \log p_k(s)\right) / \log(|S_k|)$. We select the subspace $k^*$ with the smallest normalized entropy. From this subspace, we choose the joint goal state g by first sampling a batch of states B from the replay buffer. From those states, we then select the state with the smallest count as the goal state g, i.e., $g = \mathrm{argmin}_{s \in B}\, c_{k^*}(s)$. Sampling of states is performed in order to avoid selecting unreachable states as a goal, i.e., we encourage exploring states that have been seen rarely, but at least once. Given the goal state g, we train the exploration policies $\mu_i$ using the replay buffer D modified with a revised reward $\hat{r} = b$ if $s_{k^*} = g$, and $\hat{r} = 0$ otherwise. Consequently, the exploration policies $\mu_i$ focus exclusively on achieving the desired goal g. This strategy is summarized in Alg. 2. As an alternative to the aforementioned subspace selection method, one could use probabilistic subspace selection, where the probability of a subspace being chosen is inversely proportional to its normalized entropy. The two subspace selection approaches result in different exploration behaviors: probabilistic selection encourages the exploration policies to explore more subspaces, while the smallest-normalized-entropy method focuses on the most under-explored subspace.
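The counter-and-entropy machinery of Alg. 2 can be sketched in a few lines. Note one simplification on our part: we normalize by the number of configurations seen so far rather than by the true subspace size $|S_k|$, and the counter layout is our own:

```python
import math
from collections import Counter

def normalized_entropy(counter):
    """eta_k = H / H_max for a subspace counter c_k. Simplification: we
    normalize by the number of configurations seen, not |S_k|."""
    total = sum(counter.values())
    probs = [c / total for c in counter.values()]
    H = -sum(p * math.log(p) for p in probs if p > 0)
    return H / math.log(len(counter)) if len(counter) > 1 else 0.0

# Two toy subspace counters: one visited uniformly, one stuck in one state.
c_uniform = Counter({"a": 10, "b": 10, "c": 10})
c_skewed = Counter({"a": 28, "b": 1, "c": 1})
etas = {"k1": normalized_entropy(c_uniform), "k2": normalized_entropy(c_skewed)}
k_star = min(etas, key=etas.get)   # smallest eta = most under-explored
assert k_star == "k2"

# Goal selection: among a sampled batch, pick the least-visited (but seen)
# configuration in subspace k*.
batch = ["a", "b", "a"]
g = min(batch, key=lambda s: c_skewed[s])
assert g == "b"
```

Restricting goal selection to states sampled from the replay buffer mirrors the paper's reachability argument: the goal must have been visited at least once.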
The paper introduces an exploration bonus tailored to multi-agent learning in the CTDE (centralized training, decentralizing execution) setting. The bonus works by: 1) dividing up the observation space into subspaces (in this case, corresponding to the entities, pairs of entities, triplets of entities, etc), 2) maintaining running counters within each subspace of every possible configuration within that subspace, 3) identifying the lowest entropy subspace, 4) sampling the replay buffer to find a rarely occurring (but importantly, reachable) “goal” state within the lowest entropy subspace, and finally, 5) rewarding the exploration policy for visiting the goal state. The authors test their method in two domains: 1) a series of coordinated multi-agent exploration challenges in grid worlds, and 2) the Starcraft Multi-Agent Challenge (SMAC). They demonstrate much faster learning over a series of baselines.
Coordinated Multi-Agent Exploration Using Shared Goals
1 INTRODUCTION . Cooperative multi-agent reinforcement learning ( MARL ) is an increasingly important field . Indeed , many real-world problems are naturally modeled using MARL techniques . For instance , tasks from areas as diverse as robot fleet coordination ( Swamy et al. , 2020 ; Hüttenrauch et al. , 2019 ) and autonomous traffic control ( Bazzan , 2008 ; Sunehag et al. , 2018 ) fit MARL formulations . To address MARL problems , early work followed the independent single-agent reinforcement learning paradigm ( Tampuu et al. , 2015 ; Tan , 1993 ; Matignon et al. , 2012 ) . However , more recently , specifically tailored techniques such as monotonic value function factorization ( QMIX ) ( Rashid et al. , 2018 ) , multi-agent deep deterministic policy gradient ( MADDPG ) ( Lowe et al. , 2017 ) , and counterfactual multi-agent policy gradients ( COMA ) ( Foerster et al. , 2018 ) have been developed . Those methods excel in a multi-agent setting because they address the non-stationary issue of MARL and develop communication protocols between agents . Despite those advances and the resulting reported performance improvements , a common issue remained : all of the aforementioned methods use exploration techniques from classical algorithms . Specifically , these methods employ noise-based exploration , i.e. , the exploration policy is a noisy version of the actor policy . For instance , Lowe et al . ( 2017 ) add Ornstein-Uhlenbeck ( OU ) ( Uhlenbeck & Ornstein , 1930 ) noise or Gaussian noise to the actor policy . Foerster et al . ( 2016 ) ; Rashid et al . ( 2018 ) ; Yang et al . ( 2018 ) ; Foerster et al . ( 2017 ) use variants of ǫ-greedy exploration , where a random suboptimal action is selected with probability ǫ . It was recognized recently that use of classical exploration techniques is sub-optimal in a multi-agent reinforcement learning setting . Specifically , Mahajan et al . 
( 2019 ) show that QMIX with ǫ-greedy exploration results in slow exploration and sub-optimality . Mahajan et al . ( 2019 ) improve exploration by conditioning an agent ’ s behavior on a shared latent variable controlled by a hierarchical policy . Even more recently , Wang et al . ( 2020 ) encourage coordinated exploration by considering the influence of one agent ’ s behavior on other agents ’ behaviors . While all of the aforementioned exploration techniques for multi-agent reinforcement learning significantly improve results , they suffer from a common challenge : agents struggle to identify states that are worth exploring , and don ’ t coordinate their exploration efforts toward those states . To give an example , consider a push-box task , where two agents need to jointly push a heavy box to a specific location before observing a reward . In this situation , instead of exploring the environment independently , agents need to coordinate pushing the box within the environment to find the specific location . To address this issue , we propose coordinated multi-agent exploration ( CMAE ) , where multiple agents share a common goal . We achieve this by first projecting the joint state space to multiple subspaces . We develop a normalized entropy ( Cover & Thomas. , 1991 ) -based technique to select a goal from the under-explored subspaces . Then , exploration policies are trained to reach the goals in a coordinated manner . To show that CMAE improves results , we evaluate our approach on various environments with sparse-rewards from Wang et al . ( 2020 ) , and the sparse-reward version of the Starcraft multi-agent challenge ( SMAC ) ( Samvelyan et al. , 2019 ) , which requires coordinated actions among agents over extended time steps before observing a reward . The experimental results show that our approach needs only 1 % − 5 % of environment steps to achieve similar or better average test episode returns than current state-of-the-art baselines . 2 PRELIMINARIES . 
In this section , we define the multi-agent Markov decision process ( MDP ) in Sec . 2.1 , and introduce the multi-agent reinforcement learning setting in Sec . 2.2 . 2.1 MULTI-AGENT MARKOV DECISION PROCESS . We model a cooperative multi-agent system as a multi-agent Markov decision process ( MDP ) . An n-agent MDP is defined by a tuple G = ( S , A , T , R , Z , O , n , γ , H ) . S is the global state space of the environment . A is the action space . At each time step t , each agent ’ s policy πi , i ∈ { 1 , . . . , n } , selects an action ati ∈ A . All selected actions form a joint action a t ∈ An . The transition function T maps the current state st and the joint action at to the next state st+1 , i.e. , T : S ×An → S . All agents receive a collective reward rt ∈ R according to the reward function R : S × An → R. The goal of all agents ’ policies is to maximize the collective expected return ∑H t=0 γ trt , where γ ∈ [ 0 , 1 ] is the discount factor , H is the horizon , and rt is the collective reward obtained at timestep t. Each agent i observes local observation oti ∈ Z according to the observation function O : S → Z . Note , observations usually reveal partial information about the global state . For instance , suppose the global state contains the location of agents , while the local observation of an agent may only contain the location of other agents within a limited distance . All agents ’ local observations form a joint observation , denoted by ot . A global state space S is the product of component spaces Vi , i.e. , S = ∏M i=1 Vi , where Vi ⊆ R ( Samvelyan et al. , 2019 ; Lowe et al. , 2017 ; Rashid et al. , 2018 ; Foerster et al. , 2018 ; Mahajan et al. , 2019 ) . We refer to Vi as a ‘ state component. ’ The set of all component spaces of a product space is referred to as the component set . For instance , the component set of S is { Vi|i ∈ { 1 , . . . , M } } . Each entity , e.g. , agents , objects , etc. 
, in the environment is described by a set of state components. We refer to the set of state components associated with an entity as an 'entity set.' For instance, in a 2-agent push-box environment, where the two agents can only collaboratively push a box to a goal location, the global state space is S = ∏_{i=1}^{6} V_i, where {V_1, V_2}, {V_3, V_4}, and {V_5, V_6} represent the locations of agent one, agent two, and the box, respectively. Consequently, {V_1, V_2}, {V_3, V_4}, and {V_5, V_6} are three entity sets.

2.2 MULTI-AGENT REINFORCEMENT LEARNING

In this paper, we follow the standard centralized training and decentralized execution (CTDE) paradigm (Lowe et al., 2017; Rashid et al., 2018; Foerster et al., 2018; Mahajan et al., 2019): at training time, the learning algorithm has access to all agents' local observations, actions, and the global state, while at execution time, i.e., at test time, each individual agent's policy only has access to its own local observation. The proposed CMAE is applicable to off-policy MARL methods (e.g., Rashid et al., 2018; Lowe et al., 2017; Sunehag et al., 2018; Matignon et al., 2012). In off-policy MARL, exploration policies µ_i, i ∈ {1, ..., n}, are responsible for collecting data from the environment. The data, in the form of transition tuples (s^t, o^t, a^t, s^{t+1}, o^{t+1}), is stored in a replay memory D, i.e., D = {(s^t, o^t, a^t, s^{t+1}, o^{t+1})}_t. The target policies are trained using transition tuples from the replay memory.

Algorithm 1: Training with Coordinated Multi-Agent Exploration (CMAE)
  Initialize exploration policies {µ_i}_{i=1}^{n}, target policies {π_i}_{i=1}^{n}, counters {c_k}_{k=1}^{K};
  Initialize the environment and replay buffer D; initialize α = 1;
  for episode = 1 ... M do
    Reset the environment; observe global state s^t and observations o^t = (o_1^t, ..., o_n^t);
    for t = 1 ... T do
      UpdateCounter({c_k}_{k=1}^{K}, s^t, o^t);
      Select actions a^t using a mixture of exploration and target policies αµ_i + (1 − α)π_i, where α decreases linearly to 0;
      Apply a^t to the environment; observe reward r^t, state s^{t+1}, and local observations o^{t+1};
      Add transition tuple {s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t} to D;
      TrainTarget({π_i}_{i=1}^{n}, D);
    end
    TrainExp({µ_i}_{i=1}^{n}, {c_k}_{k=1}^{K}, D);
  end

3 COORDINATED MULTI-AGENT EXPLORATION (CMAE)

In the following, we first present an overview of CMAE before discussing the method more formally.

Overview: The goal is to train the target policies {π_i}_{i∈{1,...,n}} of n agents to maximize the episode return provided by the environment. Classical off-policy algorithms (Lowe et al., 2017; Rashid et al., 2018) typically use a noisy version of the target policies π_i as the exploration policies µ_i, i.e., actions for data collection are selected by the exploration policies µ_i. In contrast, in CMAE we propose to train the exploration policies with a modified reward. Specifically, the target policies are trained to maximize the usual external episode return, while the exploration policies are trained to collect data from subspaces that have not been well explored. We find this strategy to significantly improve the training of target policies in the multi-agent reinforcement learning setting because it encourages multiple agents to jointly explore configurations of the state space. Alg. 1 summarizes this approach. At each step, a mixture of the exploration policies {µ_i}_{i=1}^{n} and target policies {π_i}_{i=1}^{n} is used to select actions.
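Algorithm 1's per-step action selection mixes exploration and target policies with a linearly decaying α. One common reading of the mixture αµ_i + (1 − α)π_i, used in the sketch below, is to act with µ_i with probability α and with π_i otherwise; this interpretation and all function names are illustrative assumptions, not the paper's implementation.

```python
import random

def select_action(mu_i, pi_i, obs, alpha, rng=random):
    """With probability alpha act with the exploration policy mu_i,
    otherwise with the target policy pi_i (one reading of alpha*mu + (1-alpha)*pi)."""
    return mu_i(obs) if rng.random() < alpha else pi_i(obs)

def linear_alpha(step, total_steps):
    """alpha decreases linearly from 1 to 0 over training."""
    return max(0.0, 1.0 - step / total_steps)

# Toy check: the schedule is deterministic at its endpoints, since
# random() lies in [0, 1): alpha=1 always explores, alpha=0 never does.
explore = lambda o: "explore"
exploit = lambda o: "exploit"
assert select_action(explore, exploit, None, linear_alpha(0, 100)) == "explore"
assert select_action(explore, exploit, None, linear_alpha(100, 100)) == "exploit"
```

Between the endpoints the acting policy is sampled per step, so on average a fraction α of the actions come from the exploration policies.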
The resulting experience tuples are then stored in a replay memory D. The target policies are trained directly on the data in the replay memory D at each step. Note that the exploration policies are only updated at the end of each episode, using a reshaped reward that encourages them to explore under-explored subspaces in a collaborative manner. In the following, we provide details on how we propose to train the exploration policies.

3.1 TRAINING OF EXPLORATION POLICIES

To train the exploration policies µ_i, i ∈ {1, ..., n}, we use a modified reward r̂. This modified reward specifies the goal of the exploration. For example, in the two-agent push-box task, we specify a particular joint location of both agents and the box as a goal. The agents ignore all external rewards and only see a positive reward when the goal, i.e., the specified configuration, is reached. The reward for reaching the goal is set to b, while the reward in all other situations is zero. To find the goal we use K counters c_k, k ∈ {1, ..., K}. A counter c_k operates on a low-dimensional subspace S_k of the state space S, i.e., S_k ⊆ S. The occurrence of every configuration s_k ∈ S_k within the low-dimensional subspace is recorded using the current replay buffer D.

Algorithm 2: Train Exploration Policies (TrainExp)
  Input: exploration policies {µ_i}_{i=1}^{n}, counters {c_k}_{k=1}^{K}, replay buffer D;
  Initialize bonus b;
  Compute the normalized entropy η(k) of each subspace k based on the associated counter c_k;
  k* = argmin_k η(k);
  Sample a batch B = {s_i}_{i=1}^{M} from D;
  g = argmin_{s∈B} c_{k*}(proj_{k*}(s));
  for {s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t} ∈ D do
    if s^t == g then r^t = b; else r^t = 0;
    Update {µ_i}_{i=1}^{n} with {s^t, o^t, a^t, s^{t+1}, o^{t+1}, r^t};
  end

Let proj_k be the projection from the global state space to subspace k.
Formally, we obtain c_k(s_k) = ∑_{s∈D} 𝟙[proj_k(s) = s_k], where 𝟙[·] is the indicator function (1 if the argument is true, 0 otherwise) and proj_k(s) denotes the restriction of a state s ∈ S to the subspace S_k. Note that we successively increment the counts, i.e., the counters c_k are not recomputed from scratch each time we train the exploration policies. We subsequently normalize the counts c_k(s_k) into a probability distribution p_k(s_k) = c_k(s_k) / ∑_{ŝ_k∈S_k} c_k(ŝ_k), which is then used to compute a normalized entropy η_k = H/H_max = −(∑_{s∈S_k} p_k(s) log p_k(s)) / log(|S_k|). We select the subspace k* with the smallest normalized entropy. From this subspace we choose the joint goal state g by first sampling a batch of states B from the replay buffer and then selecting the state with the smallest count as the goal, i.e., g = argmin_{s∈B} c_{k*}(proj_{k*}(s)). Sampling states from the buffer avoids selecting unreachable states as goals, i.e., we encourage exploration of states that have been seen rarely but at least once. Given the goal state g, we train the exploration policies µ_i on the replay buffer D with a revised reward r̂ = b if s_{k*} = g and r̂ = 0 otherwise. Consequently, the exploration policies µ_i focus exclusively on reaching the desired goal g. This strategy is summarized in Alg. 2. As an alternative to the aforementioned subspace selection method, one could use probabilistic subspace selection, where the probability of a subspace being chosen is inversely proportional to its normalized entropy. The two approaches result in different exploration behaviors: probabilistic selection encourages the exploration policies to visit more subspaces, while the smallest-normalized-entropy method focuses on the single most under-explored subspace.
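As a hedged toy sketch of the counting, normalized entropy η_k = H/H_max, and goal-selection steps described above (the state layout, subspace sizes, and all names are invented for illustration, not taken from the paper's code):

```python
from collections import Counter
from math import log

def normalized_entropy(counter, space_size):
    """eta_k = H / H_max = -(sum_s p(s) log p(s)) / log(|S_k|)."""
    total = sum(counter.values())
    h = -sum((c / total) * log(c / total) for c in counter.values())
    return h / log(space_size)

def select_goal(counters, space_sizes, batch, projections):
    """Pick the subspace with the smallest normalized entropy, then the
    rarest (but seen) projected state in a sampled batch as the goal."""
    k_star = min(counters, key=lambda k: normalized_entropy(counters[k], space_sizes[k]))
    return k_star, min(batch, key=lambda s: counters[k_star][projections[k_star](s)])

# Toy example: 2-component states; subspace 0 counts the first component,
# subspace 1 the second, each component taking values in {0, 1}.
counters = {0: Counter(), 1: Counter()}
proj = {0: lambda s: s[0], 1: lambda s: s[1]}
replay = [(0, 0), (0, 0), (0, 1), (1, 1)]
for s in replay:
    for k in counters:
        counters[k][proj[k](s)] += 1
# Subspace 0 has skewed counts {0: 3, 1: 1} (lower entropy), so it is
# selected, and (1, 1) is the rarest state under its projection.
k_star, goal = select_goal(counters, {0: 2, 1: 2}, replay, proj)
```

In practice the batch would be sampled from the replay buffer rather than being the whole buffer, but the selection logic is the same.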
The paper proposes to improve the exponential sample complexity of finding a coordinated multi-agent strategy by learning an exploration policy for each agent that conditions on a shared goal. The exploration policy is mixed with the normal RL policy according to a parameter alpha, which is scaled down over time. The shared goal that agents pursue is selected by using an explicit counter mechanism over objects in the environment.
SP:07924615714b3ca8e074692d1c0552c700951574
Learning Continuous-Time Dynamics by Stochastic Differential Networks
1 INTRODUCTION AND RELATED WORKS

Many real-world systems exhibit complicated stochastic dynamics over a continuous time period. The challenges in modeling these dynamics come mainly from two sources. First, the underlying state transitions of many systems are uncertain: the systems operate in unpredictable environments, with their states continuously affected by unknown disturbances. Second, the monitoring data collected may be sparse and irregularly spaced as a result of the sampling strategy or data corruption. Such sporadic data sequences lose a large amount of information about the system behavior hidden between the intervals of the observed data. In order to accurately model and analyze the dynamics of these systems, it is important to reliably and efficiently represent the continuous-time stochastic process based on discrete-time observations. In some domains, the derivation of the continuous-time stochastic model relies heavily on human knowledge, and many studies focus on its inference problem (Ryder et al., 2018; Särkkä et al., 2015). But in many more domains (e.g., video analysis (Vondrick et al., 2016) and human activity detection (Rubanova et al., 2019)), it is difficult and sometimes intractable to derive an accurate model of the underlying temporal evolution from the collected data. Although some studies approximate the stochastic process from the collected data, the majority of these methods define the system dynamics with a linear model (Macke et al., 2011; Yu et al., 2009b;a), which cannot represent high-dimensional data with nonlinear relationships well. Recently, Neural Ordinary Differential Equation (ODE) studies (Chen et al., 2018; Rubanova et al., 2019; Jia & Benson, 2019; De Brouwer et al., 2019; Yildiz et al., 2019; Kidger et al., 2020) introduced deep learning models that learn an ODE and apply it to approximate continuous-time dynamics.
Nevertheless, these methods generally neglect the randomness of the latent state trajectories and posit simplified assumptions on the data distribution (e.g., Gaussian), which strongly limits their capability to model complicated continuous-time stochastic processes. Compared to an ODE, a Stochastic Differential Equation (SDE) (Jørgensen et al., 2020) is a more practical tool for modeling a continuous-time stochastic process. Recently, several studies have bridged the gap between deep neural networks and SDEs (Ha et al., 2018). In (Hegde et al., 2019; Liu et al., 2020; Peluchetti & Favaro, 2020; Wang et al., 2019; Kong et al., 2020), SDEs are introduced to define more robust and accurate deep learning architectures for supervised learning problems (e.g., classification and regression). These studies focus on the design of neural network architectures and are orthogonal to our work on modeling sporadic time series. Tzen & Raginsky (2019a;b) studied theoretical guarantees for the optimization and inference problems of Neural SDEs, and Li et al. (2020) proposed a stochastic adjoint method to efficiently compute gradients for Neural SDEs. In this paper, we propose a new continuous-time stochastic recurrent network called the Variational Stochastic Differential Network (VSDN), which incorporates SDEs into a recurrent neural model to effectively capture continuous-time stochastic dynamics based only on sparse or irregular observations. Taking advantage of the capacity of deep neural networks, VSDN has high flexibility and generalizability in modeling nonlinear stochastic dependencies of high-dimensional observations. Compared to Neural ODEs, VSDN incorporates a latent state trajectory to capture the underlying factors of the system dynamics; this trajectory allows VSDN to model the data distribution more flexibly and to generate outputs more accurately than Neural ODEs.
Parallel to the theoretical analyses (Tzen & Raginsky, 2019a;b) and gradient computations (Li et al., 2020), our study focuses on a feasible variational loss and a flexible recurrent architecture that let Neural SDEs model sporadic data. The contributions of this paper are three-fold:
1. We incorporate continuous-time variants of the VAE and IWAE losses into VSDN to train continuous-time stochastic neural networks with latent state trajectories.
2. We propose an efficient and flexible network architecture for VSDN that can model complicated stochastic processes underlying high-dimensional sporadic data sequences.
3. We conduct comprehensive experiments showing that VSDN outperforms state-of-the-art deep learning methods at modeling continuous-time dynamics and achieves remarkable performance in the prediction and interpolation of irregular or sporadic time series.
The rest of this paper is organized as follows. In Section 2, we first present the continuous-time variant of the VAE loss and then derive a continuous-time IWAE loss to train continuous-time state-space models with deep neural networks. In Section 3, we propose the deep learning structures of VSDN. Comprehensive experiments are presented in Section 4 and the conclusion is given in Section 5.

2 CONTINUOUS-TIME VARIATIONAL BAYES

In this section, we first introduce basic notation and formulate our problem. We then define continuous-time variants of the Variational Auto-Encoding (VAE) and Importance-Weighted Auto-Encoding (IWAE) lower bounds to enable efficient training of our models. Due to the page limit, we present all derivations in Appendix A.

2.1 BASIC NOTATION AND PROBLEM FORMULATION

Throughout this paper, we define X_t ∈ R^{d_1} as the continuous-time latent state at time t and Y_n ∈ R^{d_2} as the n-th discrete-time observation at time t_n. d_1 and d_2 are the dimensions of the latent state and the observation, respectively.
X_{<t} is the continuous trajectory before time t and X_{≤t} is the path up to time t. Y_{n_1:n_2} is a sequence of data points and X_{t_{n_1}:t_{n_2}} is the continuous-time state trajectory from t_{n_1} to t_{n_2}. Y_{<t} = {Y_n | t_n < t} denotes the historical observations before t, and Y_{≥t} = {Y_n | t_n ≥ t} denotes the current and future observations. For simplicity, we assume that the initial value of the latent state is constant; the results in this paper extend easily to the situation where the initial state is also a random variable. Given K data sequences {y_{1:n_i}^{(i)}}, i = 1, ..., K, the target of our study is to learn an accurate continuous-time generative model G that maximizes the log-likelihood:

G = argmax_G (1/K) ∑_{i=1}^{K} log P_G(y_{1:n_i}^{(i)}).   (1)

For multivariate sequential data, there exists a complicated nonlinear relationship between the observed data and the unobservable latent state, which can be either the physical state of a dynamic system or a low-dimensional manifold of the data. In our study, the latent state evolves in the continuous time domain and generates the observations through some transformation.

2.2 CONTINUOUS-TIME VARIATIONAL INFERENCE

In order to capture the underlying stochastic process from sporadic data, we design the generative model as a neural continuous-time state-space model, which consists of a latent Stochastic Differential Equation (SDE) and a conditional distribution of the observations. The latent SDE describes the stochastic process of the latent states, and the conditional distribution captures the probabilistic dependency of the current data on the latent states and historical observations:

dX_t = H_G(X_t, Y_{<t}; t) dt + R_G(Y_{<t}; t) dW_t,   (2)
P_G(Y_n | Y_{1:n−1}, X_{t_n}) = Φ(Y_n | f(Y_{1:n−1}, X_{t_n})),   (3)

where H_G and R_G are the drift and diffusion functions of the latent SDE, and W_t denotes a Wiener process, also called standard Brownian motion.
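A latent SDE like Eq. (2) can be simulated with a simple Euler–Maruyama scheme. The sketch below is a 1-D toy in which the drift and diffusion are hand-written functions rather than the neural networks H_G and R_G the paper uses; all names are illustrative.

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, t_grid, rng=None):
    """Simulate dX_t = H(X_t, t) dt + R(t) dW_t on a time grid
    (1-D stand-in for the latent SDE of Eq. (2))."""
    rng = rng or random.Random(0)
    xs = [x0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        xs.append(xs[-1] + drift(xs[-1], t0) * dt + diffusion(t0) * dw)
    return xs

# Ornstein-Uhlenbeck toy: mean-reverting drift, constant diffusion.
path = euler_maruyama(lambda x, t: -x, lambda t: 0.5,
                      x0=1.0, t_grid=[i * 0.01 for i in range(101)])
```

In VSDN the drift would additionally condition on the history Y_{<t}, typically summarized by a recurrent network, but the integration loop is the same.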
To integrate the information of the observed data, H_G is a function of the current state X_t and the historical observations Y_{<t}, whereas R_G only uses the historical data as input. It is not beneficial to include X_t as an input of the diffusion function, as doing so injects more noise into the gradients of the network parameters; a detailed example and analysis of this noise-injection problem is given in Appendix B. Φ(·) is a parametric family of distributions over the data and f(·) is the function that computes the parameters of Φ. Following common deep learning practice, we parameterize H_G, R_G, and f(·) by deep neural networks.

Continuous-Time Auto-Encoding Variational Bayes: The exact log-likelihood of the generative model is

log P_G(y_{1:n}) = log ∫ P_G(X_{≤t_n}) ∏_{i=1}^{n} P_G(y_i | y_{1:i−1}, X_{t_i}) dX_{≤t_n},   (4)

which has no closed-form solution in general. Therefore, G cannot be trained by directly maximizing the log-likelihood. To overcome this difficulty, an inference model Q is introduced to capture the stochastic dependency of the latent state on the observed data. Similar to the generative model, Q consists of a posterior SDE:

dX_t = H_Q(X_t, Y_{<t}, Y_{≥t}; t) dt + R_G(Y_{<t}; t) dW_t,   (5)

where H_Q is the posterior drift function. Different from H_G, H_Q also uses the future observations Y_{≥t} as input, and therefore the inference model Q induces the posterior distribution P_Q(X_{≤t_n} | y_{1:n}). Based on Auto-Encoding Variational Bayes (Kingma & Welling, 2014), it is straightforward to introduce a continuous-time variant of the VAE lower bound on the log-likelihood:

L_VAE(y_{1:n}) = −β KL(P_Q || P_G) + ∑_{i=1}^{n} E_{P_Q(X_{t_i})} log P_G(y_i | y_{1:i−1}, X_{t_i}),   (6)

KL(P_Q || P_G) = (1/2) ∫_0^{t_n} E_{P_Q(X_t)} ( (H_Q − H_G)^T [R_G R_G^T]^{−1} (H_Q − H_G) ) dt,   (7)

where P_G(X_{≤t_n}) and P_Q(X_{≤t_n}) are the probability densities of the latent states induced by the prior SDE Eq. (2) and the posterior SDE Eq. (5).
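In one dimension, the KL term of Eq. (7) reduces to a time integral of ½(H_Q − H_G)²/R², which a Riemann sum can approximate. The sketch below is a toy stand-in (a single sampled trajectory replaces the expectation, and the drifts are hand-written functions rather than neural networks):

```python
def kl_sde(h_q, h_g, r, t_grid, xs):
    """Riemann-sum sketch of Eq. (7) for a 1-D latent state:
    KL ~ (1/2) * sum_j [(H_Q - H_G)^2 / R^2](x_j, t_j) * dt_j,
    with the expectation replaced by one sampled trajectory xs."""
    kl = 0.0
    for (t0, t1), x in zip(zip(t_grid[:-1], t_grid[1:]), xs):
        diff = h_q(x, t0) - h_g(x, t0)
        kl += 0.5 * (diff * diff) / (r(t0) ** 2) * (t1 - t0)
    return kl

# Toy drifts differing by a constant 1 with unit diffusion over [0, 1]:
# the integrand is 0.5 everywhere, so the sum is 0.5 regardless of xs.
t = [i * 0.01 for i in range(101)]
kl = kl_sde(lambda x, t: x + 1.0, lambda x, t: x, lambda t: 1.0, t, [0.0] * 100)
```

Note how the same quadratic form (H_Q − H_G)^T [R_G R_G^T]^{−1} (H_Q − H_G) reappears in the importance-weight SDE below, which is why both bounds require the prior and posterior SDEs to share a diffusion.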
KL(·||·) denotes the KL divergence between two distributions and β is a hyper-parameter that weights the KL term. In this paper, we fix β = 1.0, so L_VAE is the original VAE objective (Kingma & Welling, 2014). In β-VAE (Higgins et al., 2017; Burgess et al., 2018), it is shown that a larger β can encourage the model to learn a more efficient and disentangled representation of the data. Eq. (5) is restricted to having the same diffusion function as Eq. (2): without this restriction, a feasible L_VAE cannot be defined, because the KL divergence between two SDEs with different diffusions is infinite (Archambeau et al., 2008). The VAE objective has been widely used for discrete-time stochastic recurrent models, such as LFADS (Sussillo et al., 2016), VRNN (Chung et al., 2015), and SRNN (Fraccaro et al., 2016). The major difference between these models and our work is that we incorporate a continuous-time latent state, whereas the latent states of the discrete-time models evolve only at distinct, separate time steps.

Continuous-Time Importance-Weighted Variational Bayes: L_VAE(y_{1:n}) equals the exact log-likelihood when P_Q(X_{≤t_n}) of the inference model is identical to the exact posterior distribution induced by the generative model. Errors in the inference model make the VAE loss a loose bound for model training. Under the framework of the Importance-Weighted Auto-Encoder (IWAE) (Burda et al., 2016; Cremer et al., 2017), we can define a tighter evidence lower bound:

L̃^K_IWAE(y_{1:n}) = E_{x^1_{≤t_n}, ..., x^K_{≤t_n} ∼ P_Q(x_{≤t_n})} ( log (1/K) ∑_{k=1}^{K} w_k ∏_{i=1}^{n} P_G(y_i | y_{1:i−1}, X_{t_i}) ),   (8)

where the importance weights satisfy the following SDE:

d log w_k = d log [ P_G(x^k_{≤t_n}) / P_Q(x^k_{≤t_n}) ]
          = −(1/2) (H_Q − H_G)^T [R_G R_G^T]^{−1} (H_Q − H_G) dt − (H_Q − H_G)^T [R_G]^{−1} dW_t.
(9)

Given the variational auto-encoding lower bound L_VAE(·) and the importance-weighted auto-encoding lower bound L̃^K_IWAE(·) for the continuous-time generative model, the tightness of the lower bounds is given by the following inequality:

log P_G(y_{1:n}) ≥ L̃^{K+1}_IWAE(·) ≥ L̃^K_IWAE(·) ≥ L_VAE(·),   (10)

for any positive integer K. Note that L̃^K_IWAE(·) is likewise infinite if the diffusions of Eq. (2) and Eq. (5) differ. In our implementation, we notice that training our models with L̃^K_IWAE is not stable, possibly due to the drawbacks of importance sampling and the signal-to-noise problem (Rainforth et al., 2018). To alleviate this problem, we train our model with a convex combination of the VAE and IWAE losses:

L^K_IWAE(y_{1:n}) = (1 − α) L_VAE(y_{1:n}) + α L̃^K_IWAE(y_{1:n}),  α ∈ (0, 1).   (11)

With the use of the reparameterization trick (Kingma & Welling, 2014), both L_VAE(y_{1:n}) and L^K_IWAE(y_{1:n}) are differentiable with respect to the parameters of the generative and inference models; therefore, they can be used to train continuous-time stochastic models with deep learning components.
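The ordering L̃^K_IWAE ≥ L_VAE in Eq. (10) follows from Jensen's inequality: the log of an average of importance weights is at least the average of their logs. A minimal numerical sketch (Monte Carlo estimates on synthetic log-weights, not the paper's model) illustrates this, together with the convex combination of Eq. (11):

```python
import math
import random

def vae_bound(log_weights):
    """ELBO-style estimate: the average of the log-weights."""
    return sum(log_weights) / len(log_weights)

def iwae_bound(log_weights):
    """IWAE-style estimate: log of the average weight (log-sum-exp for stability)."""
    m = max(log_weights)
    return m + math.log(sum(math.exp(lw - m) for lw in log_weights) / len(log_weights))

rng = random.Random(0)
log_w = [rng.gauss(0.0, 1.0) for _ in range(64)]  # synthetic log importance weights
assert iwae_bound(log_w) >= vae_bound(log_w)      # ordering as in Eq. (10)

alpha = 0.5
combined = (1 - alpha) * vae_bound(log_w) + alpha * iwae_bound(log_w)  # Eq. (11)
```

Because the combined loss is a convex mixture, it always lies between the two bounds, trading the tightness of IWAE against the lower-variance gradients of the VAE term.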
This paper aims to model complicated continuous-time series by using an SDE to model the latent state trajectories. The authors claim that using an SDE instead of an ODE for the latent states gives higher flexibility to capture more complex dynamics. They then propose continuous-time versions of variational evidence lower bounds (ELBOs) that can be trained using ODE-RNNs as the inference networks. The proposed VSDN model captures latent state stochasticity not via the initial states but rather via SDEs, as opposed to other methods like latent ODE and latent SDE.
SP:a5b2b3f420b135829267204092e681f2b10d04fe
Learning Continuous-Time Dynamics by Stochastic Differential Networks
1 INTRODUCTION AND RELATED WORKS . Many real-world systems experience complicated stochastic dynamics over a continuous time period . The challenges on modeling the stochastic dynamics mainly come from two sources . First , the underlying state transitions of many systems are often uncertain , as they are placed in unpredictable environment with their states continuously affected by unknown disturbances . Second , the monitoring data collected may be sparse and at irregular intervals as a result of the sampling strategy or data corruption . The sporadic data sequence loses a large amount of information and system behaviors hidden behind the intervals of the observed data . In order to accurately model and analyze dynamics of these systems , it is important to reliably and efficiently represent the continuous-time stochastic process based on the discrete-time observations . In some domains , the derivation of the continuous-time stochastic model relies heavily on human knowledge and many studies focus on its inference problem ( Ryder et al. , 2018 ; Särkkä et al. , 2015 ) . But in more domains ( e.g. , video analysis ( Vondrick et al. , 2016 ) and human activity detection ( Rubanova et al. , 2019 ) ) , it is difficult and sometimes intractable to derive an accurate model to capture the underlying temporal evolution from the collected sequence of data . Although some studies have been made on approximating the stochastic process from the data collected , the majority of these methods define the system dynamics with a linear model ( Macke et al. , 2011 ; Yu et al. , 2009b ; a ) , which can not well represent high-dimensional data with nonlinear relationship . Recently , the Neural Ordinary Differential Equation ( ODE ) studies ( Chen et al. , 2018 ; Rubanova et al. , 2019 ; Jia & Benson , 2019 ; De Brouwer et al. , 2019 ; Yildiz et al. , 2019 ; Kidger et al. , 2020 ) introduce deep learning models to learn an ODE and apply it to approximate continuous-time dynamics . 
Nevertheless , these methods generally neglect the randomness of the latent state trajectories and posit simplified assumptions on the data distribution ( e.g . Gaussian ) , which strongly limits their capability of modeling complicated continuous-time stochastic processes . Compared to ODE , Stochastic Differential Equation ( SDE ) ( Jørgensen et al. , 2020 ) is a more practical solution in modeling the continuous-time stochastic process . Recently there have been some studies on bridging the gap between deep neural networks and SDEs ( Ha et al. , 2018 ) . In ( Hegde et al. , 2019 ; Liu et al. , 2020 ; Peluchetti & Favaro , 2020 ; Wang et al. , 2019 ; Kong et al. , 2020 ) , SDEs are introduced to define more robust and accurate deep learning architectures for supervised learning problems ( e.g . classification and regression ) . These studies focus on the design of neural network architectures , and are orthogonal to our work on the modeling of sporadic time series . In ( Tzen & Raginsky , 2019a ; b ) the authors studied the theoretical guarantees of the optimization and inference problems of Neural SDEs . In ( Li et al. , 2020 ) , a stochastic adjoint method is proposed to efficiently compute the gradients for neural SDEs . In this paper , we propose a new continuous-time stochastic recurrent network called Variational Stochastic Differential Network ( VSDN ) that incorporates SDEs into recurrent neural model to effectively model the continuous-time stochastic dynamics based only on sparse or irregular observations . Taking advantage of the capacity of deep neural networks , VSDN has higher flexibility and generalizability in modeling the nonlinear stochastic dependency from high-dimensional observations . Compared to Neural ODEs , VSDN incorporates the latent state trajectory to capture the underlying factors of the system dynamics . The trajectory helps to more flexibly model the data distribution and more accurately generate the output data than Neural ODEs . 
Parallel to the theoretical analysis ( Tzen & Raginsky , 2019a ; b ) and gradient computations ( Li et al. , 2020 ) , our study focuses more on exploring the feasible variational loss and flexible recurrent architecture for the Neural SDEs to model the sporadic data . The contributions of this paper are three-fold : 1 . We incorporate the continuous-time variants of VAE and IWAE losses into VSDN to train the continuous-time stochastic neural networks with latent state trajectories . 2 . We propose the efficient and flexible network architecture of VSDN which can model the complicated stochastic process under high-dimensional sporadic data sequences . 3 . We conduct comprehensive experiments to show that VSDN outperforms state-of-the-art deep learning methods on modeling the continuous-time dynamics and achieves remarkable performance in the prediction and interpolation of irregular or sporadic time series . The rest of this paper is organized as follows . In Section 2 , we first present the continuous-time variants of VAE loss , and then derive a continuous-time IWAE loss to train continuous-time statespace models with deep neural networks . In Section 3 , we propose the deep learning structures of VSDN . Comprehensive experiments are presented in section 4 and conclusion is given in section 5 . 2 CONTINUOUS-TIME VARIATIONAL BAYES . In this section , we first introduce the basic notations and formulate our problem . We then define the continuous-time variants of the Variational Auto-Encoding ( VAE ) and Importance-Weighted AutoEncoding ( IWAE ) lower bounds to enable the efficient training of our models . Due to the page limit , we present all deductions in Appendix A . 2.1 BASIC NOTATIONS AND PROBLEM FORMULATION . Throughout this paper , we define Xt ∈ Rd1 as the continuous-time latent state at time t and Yn ∈ Rd2 as the nth discrete-time observed data at time tn . d1 and d2 are the dimensions of the latent state and observation , respectively . 
X < t is the continuous trajectory before time t and X≤t is the path up to time t. Yn1 : n2 is the sequence of data points and Xtn1 : tn2 is the continuous-time state trajectory from tn1 to tn2 . Yt = { Yn|tn < t } is the historical observations before t and Yt = { Yn|tn ≥ t } is the current and future observations . For simplicity , we also assume that the initial value of the latent state is constant . The results in this paper can be easily extended to the situation that the initial states are also random variables . Given K data sequences { y ( i ) 1 : ni } , i = 1 , · · · , K , the target of our study is to learn an accurate continuous-time generative model G that maximizes the log-likelihood : G = arg max G 1 K K∑ i=1 logPG ( y ( i ) 1 : ni ) . ( 1 ) For Multivariate sequential data , there exists a complicated nonlinear relationship between the observed data and the unobservable latent state , which can be either the physical state of a dynamic system or the low-dimensional manifold of data . In our study , the latent state evolves in the continuous time domain and generates the observation through some transformation . 2.2 CONTINUOUS-TIME VARIATIONAL INFERENCE . In order to capture the underlying stochastic process from sporadic data , we design the generative model as a neural continuous-time state-space model , which consists of a latent Stochastic Differential Equation ( SDE ) and a conditional distribution of the observation . The latent SDE describes the stochastic process of the latent states and the conditional distribution depicts the probabilistic dependency of the current data with the latent states and historical observations : dXt =HG ( Xt , Yt ; t ) dt+RG ( Yt ; t ) dWt , ( 2 ) PG ( Yn|Y1 : n−1 , Xtn ) = Φ ( Yn|f ( Y1 : n−1 , Xtn ) ) , ( 3 ) where HG and RG are the drift and diffusion functions of the latent SDE . Wt denotes the a Wiener process , which is also called standard Brownian motion . 
To integrate the information of the observed data , HG is the function of the current state Xt and the historical observations Yt . However , RG only uses the historical data as input . It is not beneficial to include Xt as the input of the diffusion function , as it will inject more noise into gradients of the network parameters . A detailed example and analysis of the noise injection problem is given in Appendix B. Φ ( · ) is a parametric family of distributions over the data and f ( · ) is the function to compute the parameters of Φ . With the advance of deep learning methods , we parameterize HG , RG and f ( · ) by deep neural networks . Continuous-Time Auto-Encoding Variational Bayes : The exact log-likelihood of the generative model is given as logPG ( y1 : n ) = log ∫ PG ( X≤tn ) n∏ i=1 PG ( yi|y1 : n−1 , Xti ) dX≤tn ( 4 ) which does not have the closed-form solution in general . Therefore , G can not be directly trained by maximizing log-likelihood . To overcome this difficulty , an inference modelQ is introduced to depict the stochastic dependency of the latent state on observed data . Similar to the generative model , Q consists of a posterior SDE : dXt =HQ ( Xt , Yt , Yt ; t ) dt+RG ( Yt ; t ) dWt , ( 5 ) whereHQ is the posterior drift function . Different fromHG , HQ also uses the future observation Yt as the input and therefore the inference model Q induces the posterior distribution PQ ( X≤tn |y1 : n ) . Based on Auto-Encoding Variational Bayes ( Kingma & Welling , 2014 ) , it is straightforward to introduce a continuous-time variant of the VAE lower bound of the log-likelihood : LV AE ( y1 : n ) =− βKL ( PQ||PG ) + n∑ i=1 EPQ ( Xti ) logPG ( yi|y1 : n−1 , Xti ) , ( 6 ) KL ( PQ||PG ) = 1 2 ∫ tn 0 EPQ ( Xt ) ( ( HQ −HG ) T [ RGRTG ] −1 ( HQ −HG ) ) dt . ( 7 ) where PG ( X≤tn ) and PQ ( X≤tn ) are the probability density of the latent states induced by the prior SDE Eq . ( 2 ) and the posterior SDE Eq . ( 5 ) . 
KL ( ·||· ) denotes the KL divergence between two distributions and β is a hyper-parameter to weight the effect of the KL terms . In this paper , we fix β as 1.0 and LV AE is the original VAE objective ( Kingma & Welling , 2014 ) . In β-VAE ( Higgins et al. , 2017 ; Burgess et al. , 2018 ) , it is shown that a larger β can encourage the model to learn more efficient and disentangled representation from the data . Eq . ( 5 ) is restricted to having the same diffusion function as Eq . ( 2 ) . A feasible LV AE can not be defined to train VSDN-SDE without this restriction , as the KL divergence of two SDEs with different diffusions will be infinite ( Archambeau et al. , 2008 ) . The VAE objective has been widely used for discrete-time stochastic recurrent modals , such as LFADS ( Sussillo et al. , 2016 ) , VRNN ( Chung et al. , 2015 ) and SRNN ( Fraccaro et al. , 2016 ) . The major difference between these models and our work is that we incorporate a continuous-time latent state into our model while the latent states of the discrete-time models evolve only at distinct and separate time slots . Continuous-Time Importance Weighted Variational Bayes : LV AE ( y1 : n ) equals the exact loglikelihood when PQ ( X≤tn ) of the inference model is identical to the exact posterior distribution induced by the generative model . The errors of the inference model can result in the looseness of the VAE loss for the model training . Under the framework of Importance-Weighted Auto-Encoder ( IWAE ) ( Burda et al. , 2016 ; Cremer et al. , 2017 ) , we can define a tighter evidence lower bound : L̃KIWAE ( y1 : n ) =Ex1≤tn , ··· , xK≤tn∼PQ ( x≤tn ) ( log 1 K K∑ k=1 wk n∏ i=1 PG ( yi|y1 : n−1 , Xti ) ) , ( 8 ) where the importance weights satisfy the following SDE : d logwk =d log PG ( x k ≤tn ) PQ ( xk≤tn ) = −1 2 ( HQ −HG ) T [ RGRTG ] −1 ( HQ −HG ) dt − ( HQ −HG ) T [ RG ] −1dWt . 
\qquad (9)$$

Given the variational auto-encoding lower bound L_VAE(·) and the importance-weighted auto-encoding lower bound L̃^K_IWAE(·) for the continuous-time generative model, the tightness of the lower bounds is given by the following inequality:

$$\log P_G(y_{1:n}) \ge \tilde{\mathcal{L}}^{K+1}_{IWAE}(\cdot) \ge \tilde{\mathcal{L}}^{K}_{IWAE}(\cdot) \ge \mathcal{L}_{VAE}(\cdot), \qquad (10)$$

for any positive integer K. Similarly, L̃^K_IWAE(·) is infinite if the diffusions of Eq. (2) and Eq. (5) are different. In our implementation, we notice that training our models by L̃^K_IWAE is not stable, possibly due to the drawbacks of importance sampling and the signal-to-noise problem (Rainforth et al., 2018). To alleviate the problem, we train our model by a convex combination of the VAE and IWAE losses:

$$\mathcal{L}^{K}_{IWAE}(y_{1:n}) = (1 - \alpha) \, \mathcal{L}_{VAE}(y_{1:n}) + \alpha \, \tilde{\mathcal{L}}^{K}_{IWAE}(y_{1:n}), \quad \alpha \in (0, 1). \qquad (11)$$

With the use of reparameterization (Kingma & Welling, 2014), both L_VAE(y_{1:n}) and L^K_IWAE(y_{1:n}) are differentiable with respect to the parameters of the generative and inference models, and can therefore be used to train continuous-time stochastic models with deep learning components.
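Given per-path log importance weights and log-likelihood terms, the convex combination of Eq. (11) reduces to a few lines. The sketch below is illustrative only: it assumes the K sample paths, their log-weights from Eq. (9), and the KL value from Eq. (7) have already been computed, and it evaluates the IWAE bound of Eq. (8) with a numerically stable log-sum-exp.

```python
import numpy as np

def combined_loss(log_w, log_lik, kl, alpha=0.5, beta=1.0):
    """Convex combination of Eq. (6) and Eq. (8), as in Eq. (11).

    log_w   : (K,) log importance weights of K posterior sample paths
    log_lik : (K,) per-path sum_i log P_G(y_i | y_{1:i-1}, x^k_{t_i})
    kl      : scalar KL(P_Q || P_G) from Eq. (7)
    Returns the negated bound, suitable for minimization.
    """
    K = len(log_w)
    # Eq. (6): KL penalty plus expected log-likelihood (averaged over the K paths)
    l_vae = -beta * kl + np.mean(log_lik)
    # Eq. (8): log (1/K) sum_k w_k * prod_i P_G(...) via log-sum-exp for stability
    l_iwae = np.logaddexp.reduce(log_w + log_lik) - np.log(K)
    return -((1.0 - alpha) * l_vae + alpha * l_iwae)   # Eq. (11), negated
```

With uniform weights (log_w = 0) and identical per-path likelihoods, both bounds coincide, so the combined loss equals the negated common value regardless of α.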
This paper introduces a latent variable model for high-dimensional stochastic time series. The model is akin to a VAE with RNNs that incorporate time-series data. The authors introduce two variants of the model, one which only contains a feedforward RNN (filtering) and another that contains feedforward and feedback RNNs (smoothing). The authors use two inference procedures for the model, one the standard VAE objective and the other the importance-weighted (IWAE) objective. The work is reasonably clearly presented, and the experiments on multiple data sets are a nice addition.
Vision at A Glance: Interplay between Fine and Coarse Information Processing Pathways
1 INTRODUCTION . Imagine you are driving a car on a highway and suddenly an object appears in your visual field, crossing the road. Your initial reaction is to slam on the brakes even before recognizing the object. This highlights a core difference between human vision and current machine learning strategies for object recognition. In machine learning, visual object recognition is often viewed as a feedforward, bottom-up process, where object features are extracted from local to global in a hierarchical manner; in human vision, by contrast, we can capture the gist of a visual object at a glance without processing its details, a crucial ability for animals (including humans) to survive in competitive natural environments. This strategic difference has been demonstrated by a large volume of experimental data. For example, Sugase et al. (1999) found that neurons in the inferior temporal cortex (IT) of macaque monkeys convey the coarse information of an object much faster than its fine information; fMRI and MEG studies on humans showed that the activation of the orbitofrontal cortex (OFC) precedes that of the temporal cortex when a blurred object is shown to the subject (Bar et al., 2006); and Liu et al. (2017) further demonstrated that the dorsal pathway extracts the coarse information of an object in less than 100 ms after stimulus onset, and that this coarse information guides the subsequent local information processing. Indeed, the Reverse Hierarchy Theory of visual perception proposes that although the representation of image features along the ventral pathway goes from local to global, our perception of an object goes inversely, from global to local (Hochstein & Ahissar, 2002). How does this happen in the brain? Experimental studies have revealed that there exist two anatomically and functionally separated signal pathways for visual information processing (see Fig. 1).
One is called the parvocellular pathway (P-pathway), which starts from midget retinal ganglion cells (MRGCs), projects to layers 3-6 of the lateral geniculate nucleus (LGN), and then primarily goes downstream along the ventral stream. The other is called the magnocellular pathway (M-pathway), which starts from parasol retinal ganglion cells (PRGCs), projects to layers 1-2 of the LGN, and then goes along the dorsal stream or the subcortical pathway (the superior colliculus and downstream areas). The two pathways have different neural response characteristics and complementary computational roles. Experimental findings have shown that the P-pathway is sensitive to colors and responds primarily to visual inputs of high spatial frequency, whereas the M-pathway is color blind and responds primarily to visual inputs of low spatial frequency (Derrington & Lennie, 1984). It has been suggested that the M-pathway serves as a short-cut to extract the coarse information of images rapidly, while the P-pathway extracts the fine features of images slowly, and that the interplay between the two pathways endows the neural system with the capacity to process visual information rapidly, adaptively, and robustly (Bar, 2003; Wang et al., 2020; Bullier, 2001; Liu et al., 2017). For instance, by extracting the coarse information of an image, the M-pathway can generate predictions about what is expected in the visual field, and this knowledge subsequently modulates the fine information processing in the P-pathway (Fig. 1). Although the existence of the separated P- and M-pathways is well known in the neuroscience field, exactly how they cooperate with each other to facilitate information processing remains poorly understood. In this study, we build a two-pathway model to elucidate the computational properties associated with the interplay between the two pathways (Fig. 2).
We use convolutional neural networks (CNNs) as the building blocks, since recent studies have revealed that CNNs are effective in modeling the neuronal response variability along the visual pathway (Yamins et al., 2013; Kriegeskorte, 2015). Specifically, we model the P-pathway using a relatively deep CNN, which has small kernels and receives detailed visual inputs, referred to as FineNet hereafter. The M-pathway is modeled by a relatively shallow CNN, which has large kernels and receives blurred visual inputs, referred to as CoarseNet hereafter. Based on the proposed model, we investigate several computational issues associated with the interplay between the two pathways, including how CoarseNet learns from FineNet via imitation, and how FineNet benefits from the feedback of CoarseNet to improve its performance. We also use the two-pathway model to reproduce the backward masking phenomenon observed in human psychophysical experiments. 2 THE TWO-PATHWAY MODEL . The structure of our two-pathway model is illustrated in Fig. 2, where FineNet and CoarseNet mimic the P- and M-pathways, respectively. Notably, FineNet is deeper than CoarseNet, reflecting that the P-pathway goes through more feature-analyzing relays (e.g., V1-V2-V4-IT along the ventral pathway) than the M-pathway. FineNet also has smaller convolutional kernels than CoarseNet, reflecting that MRGCs in the retina have much smaller receptive fields than PRGCs. Furthermore, we consider that FineNet receives detailed and colorful visual inputs, reflecting that MRGCs have small receptive fields and are color sensitive, while CoarseNet receives blurred and gray inputs, reflecting that PRGCs have large receptive fields and are color blind. In the model, we consider that the two pathways interact with each other in three forms: 1) Imitation learning.
Since CoarseNet has a shallow structure and receives blurred inputs, it is hard to train CoarseNet well for object recognition directly. Hence, we consider that CoarseNet learns the feature representations of FineNet via an imitation process. Later we will argue that this has an important biological implication (Sec. 3.2). 2) Association. It is supposed that the M-pathway generates predictions about what might be in the visual scene, which guides the information processing in the P-pathway. We model this by considering that CoarseNet predicts the representation of FineNet through a memory association process. 3) Feedback. It is known that coarse information can serve as a cognitive bias guiding the extraction of the fine information of images. We model this by feeding the associated prediction back to an earlier layer of FineNet to enhance the fine feature extraction. The details of the two-pathway model are introduced below. 2.1 THE INFERENCE PROCESS OF THE MODEL . Denote the input to FineNet as x and the input to CoarseNet as x̂; x̂ is obtained by either filtering x with a 2D Gaussian filter or binarizing x. Denote the output of CoarseNet as p_C(x̂) = f_C[g_C(x̂; θ_C); w_C], where g_C(·; θ_C) and f_C(·; w_C) represent, respectively, the feature extractor and the linear classifier of CoarseNet, and {θ_C, w_C} the trainable parameters. The output of FineNet is similarly denoted as p_F(x) = f_F{g_F[x, O(x̂, x); θ_F]; w_F}, where the feature extractor g_F(·; θ_F) has an extra input component O(x̂, x) representing the feedback signal. To generate the feedback signal O(x̂, x) in FineNet, we consider a memory association process. Two types of association are exploited in this work: static memory association (SMA) and dynamic memory association (DMA). They have a similar effect of using the coarse features g_C(·; θ_C) as a cue to predict fine features.
SMA is simpler, but we also consider DMA, as it introduces the temporal dynamics into our two-pathway model that are necessary for reproducing the backward masking experiment (see Sec. 3.6). For clarity, we only introduce SMA here (see Fig. 2B); DMA is described in Appendix F. Specifically, we implement SMA with the cache memory model (Orhan, 2018), which performs a key-value association. The model stores a key matrix u ∈ R^{d×K} and a value matrix v ∈ R^{c×K} in the memory buffer, with K the number of memory items and d, c the dimensions of the key and value vectors, respectively. The columns u_k and v_k represent, respectively, the normalized g_C(x̂_k; θ_C) of CoarseNet and the flattened feature vector h_F(x_k, x̂_k) of the last convolution layer of FineNet. When a specific query vector g_C(x̂) of CoarseNet is presented, we first calculate its similarities with all key vectors stored in the memory buffer, given by Δ_k(x̂) = exp[β g_C(x̂)^T u_k], for k = 1, ..., K, with β controlling the sharpness of the similarity. After that, we calculate the associated result, i.e., the predicted fine features, given by C(x̂) = Σ_k v_k Δ_k(x̂) / [Σ_k Δ_k(x̂)]. The inference of FineNet forms a continuous loop, so the feedback signal is updated iteratively (see Fig. 2A). At time step t, the feedback signal in FineNet is calculated by O_t(x̂, x) = C(x̂) + h^F_{t−1}(x, x̂). Notably, at the first step t = 1, only the associated result from CoarseNet is available, which gives O_1(x̂, x) = C(x̂). This reflects the fact that the M-pathway is much faster than the P-pathway, so the first feedback signal is generated without interaction with higher visual areas in the P-pathway.
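The SMA read-out is essentially softmax attention over the cached keys. A minimal NumPy sketch of Δ_k, C(x̂), and the feedback rule O_t = C(x̂) + h^F_{t−1} is given below; the function names and the max-subtraction trick for numerical stability are our additions, not part of the paper.

```python
import numpy as np

def sma_associate(g_c, U, V, beta=1.0):
    """Static memory association: key-value attention over the cache memory.

    g_c : (d,)   normalized CoarseNet query feature g_C(x_hat)
    U   : (d, K) key matrix, columns are stored CoarseNet features u_k
    V   : (c, K) value matrix, columns are stored FineNet feature vectors v_k
    Returns C(x_hat) = sum_k v_k * Delta_k / sum_k Delta_k.
    """
    scores = beta * (g_c @ U)        # beta * g_C(x_hat)^T u_k, for each k
    scores -= scores.max()           # stabilize exp without changing the ratio
    delta = np.exp(scores)           # Delta_k(x_hat)
    return V @ (delta / delta.sum()) # convex combination of the value columns

def feedback(assoc, h_prev=None):
    """O_t = C(x_hat) + h^F_{t-1}; at the first step only the association exists."""
    return assoc if h_prev is None else assoc + h_prev
```

With a large β, the read-out concentrates on the single best-matching key, so the predicted fine features approach the value vector stored for that memory item.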
In summary, the inference of the model involves an interaction between the two pathways: in response to an image, CoarseNet first generates its output and meanwhile predicts the fine features of FineNet through association; the predicted result is then combined with the deep representations of FineNet to form a feedback signal, which modulates the shallow layers of FineNet for feature extraction; this feedback loop can go on iteratively to continuously improve the performance of FineNet. 2.2 THE TRAINING OF THE MODEL . During training, FineNet and CoarseNet are optimized jointly. To get the network output for an input, we run the feedback loop in FineNet iteratively for T steps (T = 2 in this study). FineNet is optimized by minimizing the cross-entropy loss,

$$\mathcal{L}_F = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{K} y_{i,j} \ln p^{F}_{j}(x_i), \qquad (1)$$

where p^F_j is the jth element of p^F, i.e., the likelihood of the jth class, and y_{i,j} is the jth element of the one-hot label y_i for the image x_i, which is 1 for the correct class and 0 otherwise. The summation runs over all N images and all K classes. Since CoarseNet receives coarse inputs and has a shallow structure, we optimize it via a combination of classification and imitation losses, written as

$$\mathcal{L}_C = \frac{1}{N} \sum_{i=1}^{N} \left[ -\alpha \sum_{j=1}^{K} y_{i,j} \ln p^{C}_{j}(\hat{x}_i) + \frac{1-\alpha}{2} \left\| g_C(\hat{x}_i) - g_F(x_i, \hat{x}_i) \right\|^2 \right], \qquad (2)$$

where ‖·‖ denotes the L2 norm and α is a hyper-parameter balancing the cross-entropy loss and the imitation loss. Since SMA aims to store the long-term correlation (association) between the feature representations of CoarseNet and FineNet, we update its key and value matrices after every two training epochs.
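For reference, the two training objectives, the cross-entropy of Eq. (1) and the classification-plus-imitation loss of Eq. (2), can be sketched in NumPy as follows; the sketch assumes the class likelihoods and feature vectors have already been computed by the two networks, and the function names are ours.

```python
import numpy as np

def fine_loss(p_f, y):
    """Eq. (1): cross-entropy of FineNet.
    p_f : (N, K) class likelihoods, y : (N, K) one-hot labels."""
    return -np.mean(np.sum(y * np.log(p_f), axis=1))

def coarse_loss(p_c, y, g_c, g_f, alpha=0.5):
    """Eq. (2): classification plus imitation loss for CoarseNet.
    g_c, g_f : (N, d) feature vectors of CoarseNet and FineNet."""
    ce = -np.sum(y * np.log(p_c), axis=1)       # per-sample cross-entropy
    imitate = np.sum((g_c - g_f) ** 2, axis=1)  # squared L2 distance
    return np.mean(alpha * ce + (1.0 - alpha) / 2.0 * imitate)
```

When the CoarseNet and FineNet features already coincide, the imitation term vanishes and the CoarseNet loss reduces to α times its cross-entropy, which is consistent with Eq. (2).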
This paper proposed a two-pathway neural network to mimic the interplay between the parvocellular (slow and fine-grained) and magnocellular (fast and coarse) pathways in neural systems. The two pathways are named FineNet and CoarseNet. During inference, FineNet receives recurrent feedback signals from CoarseNet via an attention layer and a memory. During training, cross-entropy losses are used for both pathways, and an "imitation" loss is used to encourage the CoarseNet pathway to mimic FineNet.
This paper proposes a dual-path CNN architecture with complementary roles (FineNet and CoarseNet), inspired by the parvocellular and magnocellular pathways of the primate brain. CoarseNet receives blurred inputs and has large kernels, while FineNet receives high-resolution inputs, is deep, and has small kernels. It is shown that this architecture improves robustness in object recognition over a single-pathway architecture and can replicate human behavioral responses in a classic psychological experiment (backward masking).
Defective Convolutional Networks
1 INTRODUCTION . Deep learning (LeCun et al., 1998; 2015), and especially deep Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012), has led to state-of-the-art results spanning many machine learning fields (Girshick, 2015; Chen et al., 2018; Luo et al., 2020). Despite the great success in numerous applications, recent studies show that deep CNNs are vulnerable to well-designed input samples known as adversarial examples (Szegedy et al., 2013; Biggio et al., 2013). Taking image classification as an example: for almost every commonly used, well-performing CNN, attackers are able to construct a small perturbation of an input image that is almost imperceptible to humans but makes the model give a wrong prediction. The problem is serious, as some well-designed adversarial examples can be transferred among different kinds of CNN architectures (Papernot et al., 2016b). As a result, a machine learning system can be easily attacked even if the attacker does not have access to the model parameters, which seriously affects its use in practical applications. There is a rapidly growing body of work on how to obtain a robust CNN, mainly based on adversarial training (Szegedy et al., 2013; Goodfellow et al., 2015; Madry et al., 2017; Buckman et al., 2018; Mao et al., 2019). However, those methods need a lot of extra computation to obtain adversarial examples at each training step and tend to overfit to the attacking method used in training (Buckman et al., 2018). In this paper, we tackle the problem from a perspective different from most existing methods. In particular, we explore the possibility of designing new CNN architectures that can be trained using standard optimization methods on standard benchmark datasets and enjoy robustness by themselves, without appealing to other techniques. Recent studies (Geirhos et al., 2017; 2018; Baker et al.
, 2018; Brendel & Bethge, 2019) show that the predictions of standard CNNs mainly depend on the texture of objects. However, textural information has a high degree of redundancy and may easily be injected with adversarial noise (Yang et al., 2019; Hosseini et al., 2019). Also, Cao et al. (2020) and Das et al. (2020) find that adversarial attack methods may perturb local patches to contain textural features of incorrect classes. All this literature suggests that the wrong predictions made by CNNs on adversarial examples mainly come from changes in textural information: the small perturbation of an adversarial example changes the textures and eventually affects the features extracted by the CNN. Therefore, a natural way to avoid adversarial examples is to let the CNN make predictions that rely less on textures and more on other information, such as shape, which cannot be severely distorted by small perturbations. In practice, a camera might have mechanical failures that cause the output image to have many defective pixels (such pixels are always black in all images). Nonetheless, humans can still recognize objects in an image with defective pixels, since we are able to classify objects even in the absence of local textural information. Motivated by this, we introduce the concept of defectiveness into convolutional neural networks: we call a neuron a defective neuron if its output value is fixed to zero no matter what input signal is received; similarly, a convolutional layer is a defective convolutional layer if it contains defective neurons. Before training, we replace the standard convolutional layers of a standard CNN with the defective version and train the network in the standard way.
As the defective neurons of a defective convolutional layer contain no information and are very different from their spatial neighbors, textural information cannot be accurately propagated from the bottom defective layers to the top layers. Therefore, we destroy local textural information to a certain extent and prompt the neural network to rely more on other information for classification. We call an architecture deployed with defective convolutional layers a defective convolutional network. We find that applying the defective convolutional layers to the bottom1 layers of the network and introducing various patterns for the arrangement of defective neurons across channels are critical. In summary, our main contributions are: • We propose defective CNNs and four pieces of empirical evidence to justify that, compared to standard CNNs, the defective ones rely less on textures and more on the shapes of the inputs when making predictions. • Experiments show that defective CNNs have superior defense performance compared to standard CNNs against transfer-based attacks, decision-based attacks, and additive Gaussian noise. • Using the standard training method, defective CNNs achieve state-of-the-art results against two transfer-based black-box attacks while maintaining high accuracy on clean test data. • Through proper implementation, defective CNNs can save considerable computation and storage costs, and thus may lead to a practical solution in the real world. 2 RELATED WORK . Various methods have been proposed to defend against adversarial examples. One line of research is to derive a meaningful optimization objective and optimize the model by adversarial training (Szegedy et al., 2013; Goodfellow et al., 2015; Huang et al., 2015; Madry et al., 2017; Buckman et al., 2018; Mao et al., 2019).
The high-level idea of these works is that if we can predict the potential attack on the model during optimization, then we can give the attacked sample a correct signal and use it during training. Another line of research is to adjust the input image before letting it go through the deep neural network (Liao et al., 2017; Song et al., 2017; Samangouei et al., 2018; Sun et al., 2018; Xie et al., 2019; Yuan & He, 2020). The basic intuition behind this is that if we can clean up the adversarial perturbation to a certain extent, then such attacks can be defended against. Although these methods achieve some success, a major difficulty is that they incur a large extra cost to collect adversarial examples and are hard to apply to large-scale datasets. Several studies (Geirhos et al., 2017; 2018; Baker et al., 2018; Brendel & Bethge, 2019) show that the predictions of CNNs are driven mainly by the texture of objects rather than their shape. Also, Cao et al. (2020) and Das et al. (2020) found that adversarial examples usually perturb a patch of the original image so that it contains the textural features of an incorrect class. For example, the adversarial example of a panda image is misclassified as a monkey because a patch of the panda's skin is perturbed adversarially so that, on its own, it looks like the face of a monkey (see Figure 11 in (Cao et al., 2020)). All the previous works above suggest that CNNs learn textural information more than shape and that the adversarial attack might come from textural-level perturbations. This is also related to robust features (Tsipras et al., 2018; Ilyas et al., 2019; Hosseini et al., 2019; Yang et al., 2019), which have attracted much interest recently. Pixels which encode textural information contain high redundancy and may easily be deteriorated toward the distribution of incorrect classes. However, shape information is more compact and thus may serve as a more robust feature for prediction.
1In this paper, bottom layer means a layer close to the input and top layer means a layer close to the output prediction.

3 DEFECTIVE CONVOLUTIONAL NEURAL NETWORK

3.1 DESIGN OF DEFECTIVE CONVOLUTIONAL LAYERS

In this subsection, we introduce our proposed defective convolutional neural networks and discuss the differences between the proposed method and related topics. First, we briefly introduce the notations. For one convolutional layer, denote x as the input and z as the output of the neurons in the layer. Note that x may be the input image or the output of the previous convolutional layer. The input x is usually an M×N×K tensor, in which M/N are the height/width of a feature map and K is the number of feature maps, or equivalently, channels. Denote w and b as the parameters (e.g., the weights and biases) of the convolutional kernel. Then a standard convolutional layer can be mathematically defined as below.

Standard convolutional layer:

x′ = w ∗ x + b, (1)
z = f(x′), (2)

where f(·) is a non-linear activation function such as ReLU2 and ∗ is the convolution operation. The convolutional filter receives signals in a patch and extracts local textural information from the patch. As mentioned in the introduction, recent works suggest that the prediction of standard CNNs strongly depends on such textural information, and noise imposed on the texture may lead to wrong predictions. Therefore, we hope to learn a feature extractor which does not solely rely on textural features but also considers other information. To achieve this goal, we introduce the defective convolutional layer, in which some neurons are purposely designed to be corrupted. Define Mdefect to be a binary matrix of size M×N×K. Our defective convolutional layer is defined as follows.

Defective convolutional layer:

x′ = w ∗ x + b, (3)
z′ = f(x′), (4)
z = Mdefect ◦ z′, (5)

where ◦ denotes the element-wise product.
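Equations (3)-(5) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the feature-map sizes and keep probability are illustrative, and `x_conv` stands in for the pre-activation output w ∗ x + b of an actual convolution.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, K = 8, 8, 4      # feature-map height, width, and channel count (illustrative)
p = 0.9                # keep probability for entries of M_defect

# M_defect is sampled ONCE before training and then frozen (see Section 3.1).
M_defect = (rng.random((M, N, K)) < p).astype(np.float32)

def defective_layer(x_conv):
    """Equations (3)-(5): activation followed by the fixed binary mask.
    `x_conv` plays the role of w * x + b (the pre-activation conv output)."""
    z_prime = np.maximum(x_conv, 0.0)   # f(x') with f = ReLU
    return M_defect * z_prime           # z = M_defect o z'

x_conv = rng.standard_normal((M, N, K)).astype(np.float32)
z = defective_layer(x_conv)

# Defective neurons output zero no matter what input signal is received.
assert np.all(z[M_defect == 0] == 0.0)
```

Because the mask is a constant, a real implementation could fold it into the layer and skip the masked computations entirely, which is where the computation and storage savings mentioned above come from.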
Mdefect is a fixed matrix and is not learnable during training and testing. We can see that Mdefect plays the role of "masking" out the values of some neurons in the layer. This disturbs the distribution of local textural information and decouples the correlation among neurons. With the masked output z as input, the feature extractor of the next convolutional layer cannot accurately capture the local textural features of x. As a consequence, the textural information is hard to pass through the defective CNN from bottom to top. To produce accurate predictions, the deep neural network has to find relevant signals other than the texture, e.g., the shape. The corrupted neurons have no severe impact on the extraction of shape information, since the neighbors of those neurons in the same filter are still capable of passing the shape information to the next layer. In this paper, we find that simply setting Mdefect by random initialization is already helpful for learning a robust CNN. Before training, we sample each entry in Mdefect using a Bernoulli distribution with keep probability p and then fix Mdefect during training and testing. More discussions and ablation studies are provided in Section 4. As can be seen from Equations (3)-(5), the implementation of our defective convolutional layer is similar to the dropout operation (Srivastava et al., 2014). To demonstrate the relationship and the differences, we mathematically define dropout as below.

Standard convolutional layer + dropout:

Mdropout ∼ Bernoulli(p), (6)
x′ = w ∗ x + b, (7)
z′ = f(x′), (8)
z = Mdropout ◦ z′. (9)

2Batch normalization is popularly applied to x′ before computing z. Here we simply omit this.

The shape of Mdropout is the same as that of Mdefect, and the value of each entry in Mdropout is sampled in each batch using some sampling strategy at each step during training.
Generally, entries in Mdropout are independently and identically sampled in an online fashion using a Bernoulli distribution with keep probability p. There are several significant differences between dropout and the defective convolutional layer. First, the binary matrix Mdropout is sampled online during training and is removed during testing, while the binary matrix Mdefect in defective convolutional layers is predefined and kept fixed in both training and testing. The predefined mask can help Defective CNNs save substantial computation and storage costs. Second, the motivations behind the two methods are quite different and lead to differences in the places where the methods are applied, the values of the keep probability p, and the shape of the masked unit. Dropout tries to reduce overfitting by preventing co-adaptation on the training data. When it comes to CNNs, those methods are applied to top layers, p is set to be large (e.g., 0.9), and the masked units are chosen to be a whole channel in Tompson et al. (2015) and a connected block in Ghiasi et al. (2018). In contrast, our method tries to prevent the model from extracting textural information of the inputs for making predictions. We thus apply the method to bottom layers, use a small p (e.g., 0.1), and choose a single neuron as the masked unit. Also, in our experiments, we will show that the proposed method can improve the robustness of CNNs against transfer-based attacks and decision-based attacks, while the dropout methods cannot.
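The first difference, fixed versus resampled masks, can be made concrete with a small NumPy sketch. This follows Equations (5)-(9) literally (in particular, like Equation (9) it omits the 1/p rescaling used in the common inverted-dropout implementation); the shape and keep probability are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4, 2)   # illustrative M x N x K
p = 0.5             # keep probability

# Defective layer: one mask, sampled once, fixed for training AND testing.
mask_defect = (rng.random(shape) < p).astype(np.float32)

def defective(z_prime):
    return mask_defect * z_prime                         # same mask every call

def dropout(z_prime, training=True):
    if not training:                                     # removed at test time
        return z_prime
    m = (rng.random(shape) < p).astype(np.float32)       # resampled each batch
    return m * z_prime

z_prime = np.ones(shape, dtype=np.float32)
# The defective mask is deterministic across calls ...
assert np.array_equal(defective(z_prime), defective(z_prime))
# ... while dropout leaves activations untouched at test time.
assert np.array_equal(dropout(z_prime, training=False), z_prime)
```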
Studies suggest that CNNs that overly rely on texture features are more vulnerable to adversarial attacks. The authors of this paper propose a simple yet effective method, "defective convolution", that randomly "disables" neurons in the convolutional layers. The authors argue that by doing so, the CNN is encouraged to rely less on object texture and more on features such as shape. The authors support this statement by empirically evaluating the proposed model under multiple perturbation methods.
SP:b7853999d2be6a4e637097094d8088e0229d651e
Defective Convolutional Networks
1 INTRODUCTION

Deep learning (LeCun et al., 1998; 2015), especially the deep Convolutional Neural Network (CNN) (Krizhevsky et al., 2012), has led to state-of-the-art results spanning many machine learning fields (Girshick, 2015; Chen et al., 2018; Luo et al., 2020). Despite the great success in numerous applications, recent studies show that deep CNNs are vulnerable to some well-designed input samples known as adversarial examples (Szegedy et al., 2013; Biggio et al., 2013). Take the task of image classification as an example: for almost every commonly used well-performing CNN, attackers are able to construct a small perturbation on an input image which is almost imperceptible to humans but can make the model give a wrong prediction. The problem is serious, as some well-designed adversarial examples can be transferred among different kinds of CNN architectures (Papernot et al., 2016b). As a result, a machine learning system can be easily attacked even if the attacker does not have access to the model parameters, which seriously affects its use in practical applications. There is a rapidly growing body of work on how to obtain a robust CNN, mainly based on adversarial training (Szegedy et al., 2013; Goodfellow et al., 2015; Madry et al., 2017; Buckman et al., 2018; Mao et al., 2019). However, those methods need a lot of extra computation to obtain adversarial examples at each time step and tend to overfit the attacking method used in training (Buckman et al., 2018). In this paper, we tackle the problem from a perspective different from most existing methods. In particular, we explore the possibility of designing new CNN architectures which can be trained using standard optimization methods on standard benchmark datasets and can enjoy robustness by themselves, without appealing to other techniques. Recent studies (Geirhos et al., 2017; 2018; Baker et al.
, 2018; Brendel & Bethge, 2019) show that the predictions of standard CNNs mainly depend on the texture of objects. However, the textural information has a high degree of redundancy and may be easily injected with adversarial noise (Yang et al., 2019; Hosseini et al., 2019). Also, Cao et al. (2020); Das et al. (2020) find that adversarial attack methods may perturb local patches to contain textural features of incorrect classes. All this literature suggests that the wrong predictions made by CNNs on adversarial examples mainly come from the change in the textural information. The small perturbation of adversarial examples will change the textures and eventually affect the features extracted by the CNNs. Therefore, a natural way to avoid adversarial examples is to let the CNN make predictions relying less on textures and more on other information, such as the shape, which cannot be severely distorted by small perturbations. In practice, a camera may sometimes have mechanical failures which cause the output image to have many defective pixels (such pixels are always black in all images). Nonetheless, humans can still recognize objects in an image with defective pixels, since we are able to classify objects even in the absence of local textural information. Motivated by this, we introduce the concept of defectiveness into convolutional neural networks: we call a neuron a defective neuron if its output value is fixed to zero no matter what input signal is received; similarly, a convolutional layer is a defective convolutional layer if it contains defective neurons. Before training, we replace the standard convolutional layers with the defective version on a standard CNN and train the network in the standard way.
The paper proposes a method to improve the adversarial robustness of current convolutional networks. The method is based on dropping the outputs of a fraction of neurons. However, unlike in dropout, the masks are kept fixed throughout training/inference and applied to the bottom layers of the network. This shifts the focus of the network away from texture and towards shape features, resulting in adversarial examples that tend to fool humans as well.
Attainability and Optimality: The Equalized-Odds Fairness Revisited
1 INTRODUCTION

As machine learning models become widespread in automated decision-making systems, apart from the efficiency and accuracy of the prediction, their potential social consequences also gain increasing attention. To date, there is ample evidence that machine learning models have resulted in discrimination against certain groups of individuals under many circumstances, for instance, the discrimination in ad delivery when searching for names that can be predictive of the race of an individual (Sweeney, 2013); the gender discrimination in the targeting of job-related ads (Datta et al., 2015); stereotypes associated with gender in word embeddings (Bolukbasi et al., 2016); the bias against certain ethnicities in the assessment of recidivism risk (Angwin et al., 2016). The call for accountability and fairness in machine learning has motivated various (statistical) notions of fairness. The Demographic Parity criterion (Calders et al., 2009) requires independence between the prediction (e.g., of a classifier) and the protected feature (sensitive attributes of an individual, e.g., gender, race). Equalized Odds (Hardt et al., 2016), also known as Error-rate Balance (Chouldechova, 2017), requires that the output of a model be conditionally independent of the protected feature(s) given the ground truth of the target. Predictive Rate Parity (Zafar et al., 2017a), on the other hand, requires that the actual proportion of positives (negatives) in the original data for positive (negative) predictions match across groups (well-calibrated). On the theoretical side, results have been reported regarding relationships among fairness notions. It has been independently shown that if the base rates of true positives differ among groups, then Equalized Odds and Predictive Rate Parity cannot be achieved simultaneously for non-perfect predictors (Kleinberg et al., 2016; Chouldechova, 2017).
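For binary classification, the Equalized-Odds requirement above reduces to matching each group's true positive rate (TPR) and false positive rate (FPR) to the population rates. A minimal NumPy sketch that measures the empirical violation; the helper name and toy data are illustrative, not from the paper:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, a):
    """Largest deviation of any group's TPR or FPR from the population rate.
    A value of 0 means the empirical Equalized-Odds condition holds exactly."""
    y_true, y_pred, a = map(np.asarray, (y_true, y_pred, a))
    def rate(mask):
        return y_pred[mask].mean() if mask.any() else 0.0
    tpr, fpr = rate(y_true == 1), rate(y_true == 0)
    gaps = []
    for g in np.unique(a):
        gaps.append(abs(rate((y_true == 1) & (a == g)) - tpr))  # group TPR gap
        gaps.append(abs(rate((y_true == 0) & (a == g)) - fpr))  # group FPR gap
    return max(gaps)

y  = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # ground truth
yh = np.array([1, 0, 0, 0, 1, 0, 0, 0])   # predictor with identical per-group rates
a  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected feature (two groups)

assert equalized_odds_gap(y, yh, a) == 0.0   # parity holds on this toy data
assert equalized_odds_gap(y, a, a) == 0.5    # "predict the group" violates it
```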
Any two out of the three among Demographic Parity, Equalized Odds, and Predictive Rate Parity are incompatible with each other (Barocas et al., 2017). At the interface of privacy and fairness, the impossibility of achieving both Differential Privacy (Dwork et al., 2006) and Equal Opportunity (Hardt et al., 2016) while maintaining non-trivial accuracy has also been established (Cummings et al., 2019). In practice, one can broadly categorize computational procedures for deriving a fair predictor into three types: pre-processing approaches (Calders et al., 2009; Dwork et al., 2012; Zemel et al., 2013; Zhang et al., 2018; Madras et al., 2018; Creager et al., 2019; Zhao et al., 2020), in-processing approaches (Kamishima et al., 2011; Pérez-Suay et al., 2017; Zafar et al., 2017a;b; Donini et al., 2018; Song et al., 2019; Mary et al., 2019; Baharlouei et al., 2020), and post-processing approaches (Hardt et al., 2016; Fish et al., 2016; Dwork et al., 2018). In accord with the fairness notion of interest, a pre-processing approach first maps the training data to a transformed space to remove discriminatory information between the protected feature and the target, and then passes on the data to make predictions. In direct contrast, a post-processing approach treats the off-the-shelf predictor(s) as uninterpretable black-box(es), and imposes fairness by outputting a function of the original prediction. For in-processing approaches, various kinds of regularization terms have been proposed so that one can optimize the utility function while suppressing the discrimination at the same time. Approaches based on estimating/bounding the causal effect between the protected feature and the final target have also been proposed (Kusner et al., 2017; Russell et al., 2017; Zhang et al., 2017; Nabi & Shpitser, 2018; Zhang & Bareinboim, 2018; Chiappa, 2019; Wu et al., 2019).
Focusing on the Equalized-Odds criterion, although various approaches have been proposed to impose the fairness requirement, whether or not it is always attainable is not well addressed. The attainability of Equalized Odds, namely, the existence of a predictor that can score zero violation of fairness in the large-sample limit, is an asymptotic property of the fairness criterion. This characterizes a completely different kind of fairness violation compared to the empirical error bound on discrimination in finite-sample cases. If one deploys a "fair" predictor which is actually biased, the discrimination becomes a snake in the grass, hard to detect and eliminate. Actually, as we illustrate in this paper, Equalized Odds is not always attainable for regression and even classification tasks if we use deterministic prediction functions. This calls for alternative definitions in the same spirit as Equalized Odds that can always be achieved under various circumstances. Our contributions are mainly:

• For regression and classification tasks with deterministic prediction functions, we show that Equalized Odds is not always attainable if certain (rather restrictive) conditions on the joint distribution of the features and the target variable are not met.
• Under mild assumptions, for binary classification we show that if randomized prediction is taken into consideration, one can always derive a non-trivial Equalized-Odds classifier.
• Considering the optimality of performance under fairness constraint(s), when exploiting all available features, we show that a predictor derived via an in-processing approach always outperforms one derived via a post-processing approach (unconstrained optimization followed by a post-processing step).

2 PRELIMINARIES
In this section, we first illustrate the difference between prediction fairness and procedure fairness, and then we present the formal definition of Equalized Odds (Hardt et al., 2016).

2.1 HIERARCHY OF FAIRNESS

Before presenting the formulation of fairness, it is important to see the distinction between different levels of fairness when discussing fair predictors. When evaluating the performance of a proposed fair predictor, it is common practice to compare the loss (with respect to the utility function of choice, e.g., accuracy for binary classification) computed on the target variable and the predicted value. There is an implicit assumption lying beneath this practice: the generating process of the data, which is just describing a real-world procedure, is not biased in any sense (Danks & London, 2017). Only when we treat the target variable (recorded in the dataset) as unbiased can we justify the practice of loss evaluation and the conditioning on the target variable when imposing fairness (as we shall see in the definition of Equalized Odds in Equation 1). One may consider a music school admission example. The music school committee decides whether to admit a student to the violin performance program based on the applicant's personal information, educational background, instrumental performance, and so on. When evaluating whether or not the admission is "fair", there are actually two levels of fairness. First, based on the information at hand, did the committee evaluate the qualifications of applicants without bias (how the committee evaluates the applicants)? And second, is the committee's procedure for evaluating applicants' qualifications reasonable (how other people view the evaluation procedure used by the committee)?
In this paper, we consider prediction fairness, namely, assuming the recorded data is unbiased, the prediction (made with respect to the current reality) itself should not include any biased utilization of information. Fairness with respect to the data generating procedure, as well as the potential future influence of the prediction, is beyond the scope of this paper.

2.2 EQUALIZED-ODDS FAIRNESS

Hardt et al. (2016) proposed Equalized Odds, which requires conditional independence between the prediction and the protected feature(s) given the ground truth of the target. Let us denote the protected feature by A, with domain of values A, additional (observable) feature(s) by X, with domain of values X, the target variable by Y, with domain Y, (not necessarily fair) predictors by Ŷ, and fair predictors by Ỹ. Equalized-Odds fairness requires

Ỹ ⊥ A | Y. (1)

For classification tasks, one can conveniently use the probability distribution form:

∀a ∈ A, t, y ∈ Y : P(Ỹ = t | A = a, Y = y) = P(Ỹ = t | Y = y), (2)

or more concisely,

PỸ|AY(t | a, y) = PỸ|Y(t | y). (3)

For better readability, we also use the formulation in Equation 3 in cases without ambiguity. In the context of binary classification (Y = {0, 1}), Equalized Odds requires that the True Positive Rate (TPR) and False Positive Rate (FPR) of each group match the corresponding population rates. Throughout the paper, without loss of generality we assume there is only one protected feature for the purpose of simplifying notation. However, considering the fact that the protected feature can be discrete (e.g., race, gender) or continuous (e.g., the ratio of an ethnic group in the population of a certain district of a city), we do not assume discreteness of the protected feature. Due to the space limit, we will focus on the illustration and implication of our results and defer all proofs to the appendix.

3 FAIRNESS IN REGRESSION MAY NOT BE ATTAINED
In this section, we consider the attainability of Equalized Odds for regression tasks, namely, whether or not it is possible to find a predictor that is conditionally independent of the protected feature given the true value of the target. For linearly Gaussian cases, one can attain Equalized Odds by constraining zero partial correlation between the prediction and the protected feature given the target variable (Woodworth et al., 2017). Various regularization terms have also been proposed to suppress discrimination when predicting a continuous target (Berk et al., 2017; Mary et al., 2019). However, whether or not one can always achieve 0-discrimination for regression, even with an unlimited amount of data, is not clear yet. If "fair" predictors are deployed without carefully checking the attainability of fairness, the discrimination would become a hidden hazard, making it hard to detect and eliminate. Actually, as we will show in this section, even in the simple setup of linearly correlated continuous data, Equalized Odds is not always attainable.

3.1 UNATTAINABILITY OF EQUALIZED ODDS IN LINEAR NON-GAUSSIAN REGRESSION

As stated in Section 2.1, in this paper we consider prediction fairness, and therefore any possible bias introduced by the data generating procedure itself is beyond the scope of the discussion. Consider the situation where the data is generated as follows (H is not measured in the dataset):

X = qA + EX,  H = bA + EH,  Y = cX + dH + EY,  (4)

where (A, EX, EH, EY) are mutually independent. In fact, if at most one of EX and E := EY + dEH is Gaussian, then any linear combination of A and X with non-zero coefficients will not be conditionally independent of A given Y, meaning that it is not possible to achieve Equalized-Odds fairness. Let Z be a linear combination of A and X, i.e., Z = αA + βX = (α + qβ)A + βEX, with linear coefficients α and β, where β ≠ 0.
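The generating process of Equation 4 is easy to simulate. The sketch below uses uniform (non-Gaussian) noises and illustrative coefficients, and measures the partial correlation of a generic linear predictor Z = αA + βX with A given Y via regression residuals; for this Z the partial correlation is clearly nonzero, so Z is conditionally dependent on A given Y. (The theory in this section makes the stronger claim that no choice of α and β with β ≠ 0 attains conditional independence when the noises are non-Gaussian.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
q, b, c, d = 1.0, 1.0, 1.0, 1.0          # illustrative coefficients

# Equation 4 with uniform (non-Gaussian) noises; H is latent.
A  = rng.uniform(-1, 1, n)
EX = rng.uniform(-1, 1, n)
EH = rng.uniform(-1, 1, n)
EY = rng.uniform(-1, 1, n)
X = q * A + EX
H = b * A + EH
Y = c * X + d * H + EY

def partial_corr(u, v, w):
    """Correlation of u and v after linearly regressing out w.
    Zero partial correlation would NOT imply u independent of v given w
    outside the Gaussian case; here it is a simple dependence witness."""
    ru = u - np.polyval(np.polyfit(w, u, 1), w)
    rv = v - np.polyval(np.polyfit(w, v, 1), w)
    return np.corrcoef(ru, rv)[0, 1]

Z = A + X                                 # a generic linear predictor (alpha = beta = 1)
print(partial_corr(Z, A, Y))              # far from zero: Z and A remain dependent given Y
```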
In Theorem 3.1, we present the general result for linear non-Gaussian cases, where one cannot achieve conditional independence between Z and A given Y.

Theorem 3.1. (Unattainability of Equalized Odds in the Linear Non-Gaussian Case) Assume that the feature X has a causal influence on Y, i.e., c ≠ 0 in Equation 4, and that the protected feature A and Y are not independent, i.e., qc + bd ≠ 0. Assume pEX and pE are positive on R. Let f1 := log pA, f2 := log pEX, and f3 := log pE. Further assume that f2 and f3 are third-order differentiable. Then, if at most one of EX and E is Gaussian, Z is always conditionally dependent on A given Y.

From Theorem 3.1, we see that in linear non-Gaussian cases, any non-zero linear combination of the features (which is a deterministic function of the input) will not satisfy Equalized Odds. One may wonder whether Equalized Odds can be achieved by nonlinear regression instead of a linear model. Although a proof with general nonlinear models is rather complicated, our simulation results in Section 5.1 strongly suggest that the unattainability of Equalized Odds persists in nonlinear regression cases. In light of the unattainability of Equalized Odds for prediction with deterministic functions of A and X, it is desirable to develop general, nonlinear prediction algorithms that produce a probabilistic prediction (i.e., with a certain type of randomness in the prediction). One possible way follows the framework of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014): we use random standard Gaussian noise, in addition to A and X, as input, such that the output will have a specific type of randomness. The parameters involved are learned by minimizing the prediction error and enforcing Equalized Odds on the "randomized" output at the same time.
Given that this approach is not essential to illustrate the claims made in this paper and that theoretical properties of such nonlinear regression algorithms with randomized output are not straightforward to establish , this is left as future work .
This paper studies the attainability of the Equalized Odds fairness criterion in both classification and regression problems. When the prediction function is deterministic, it shows that Equalized Odds may not be attainable under certain conditions. In contrast, if the prediction function is randomized, then Equalized Odds classifiers can always be achieved under some conditions. Moreover, it shows that the performance attained using the in-processing approach after exploiting all available features is always better than that attained using the post-processing approach.
Attainability and Optimality: The Equalized-Odds Fairness Revisited
1 INTRODUCTION

As machine learning models become widespread in automated decision-making systems, apart from the efficiency and accuracy of the prediction, their potential social consequences also gain increasing attention. To date, there is ample evidence that machine learning models have resulted in discrimination against certain groups of individuals under many circumstances, for instance, discrimination in ad delivery when searching for names that can be predictive of the race of an individual (Sweeney, 2013); gender discrimination in job-related ad delivery (Datta et al., 2015); stereotypes associated with gender in word embeddings (Bolukbasi et al., 2016); and bias against certain ethnicities in the assessment of recidivism risk (Angwin et al., 2016). The call for accountability and fairness in machine learning has motivated various (statistical) notions of fairness. The Demographic Parity criterion (Calders et al., 2009) requires independence between the prediction (e.g., of a classifier) and the protected feature (a sensitive attribute of an individual, e.g., gender or race). Equalized Odds (Hardt et al., 2016), also known as Error-rate Balance (Chouldechova, 2017), requires that the output of a model be conditionally independent of the protected feature(s) given the ground truth of the target. Predictive Rate Parity (Zafar et al., 2017a), on the other hand, requires that the actual proportion of positives (negatives) in the original data among positive (negative) predictions match across groups (i.e., that the predictor be well-calibrated). On the theoretical side, results have been reported regarding the relationships among fairness notions. It has been independently shown that if base rates of true positives differ among groups, then Equalized Odds and Predictive Rate Parity cannot be achieved simultaneously by non-perfect predictors (Kleinberg et al., 2016; Chouldechova, 2017).
Any two of the three criteria (Demographic Parity, Equalized Odds, and Predictive Rate Parity) are incompatible with each other (Barocas et al., 2017). At the interface of privacy and fairness, the impossibility of achieving both Differential Privacy (Dwork et al., 2006) and Equal Opportunity (Hardt et al., 2016) while maintaining non-trivial accuracy has also been established (Cummings et al., 2019). In practice, one can broadly categorize computational procedures for deriving a fair predictor into three types: pre-processing approaches (Calders et al., 2009; Dwork et al., 2012; Zemel et al., 2013; Zhang et al., 2018; Madras et al., 2018; Creager et al., 2019; Zhao et al., 2020), in-processing approaches (Kamishima et al., 2011; Pérez-Suay et al., 2017; Zafar et al., 2017a;b; Donini et al., 2018; Song et al., 2019; Mary et al., 2019; Baharlouei et al., 2020), and post-processing approaches (Hardt et al., 2016; Fish et al., 2016; Dwork et al., 2018). In accord with the fairness notion of interest, a pre-processing approach first maps the training data to a transformed space to remove discriminatory information between the protected feature and the target, and then passes the data on for prediction. In direct contrast, a post-processing approach treats the off-the-shelf predictor(s) as uninterpretable black-box(es) and imposes fairness by outputting a function of the original prediction. For in-processing approaches, various kinds of regularization terms have been proposed so that one can optimize the utility function while suppressing discrimination at the same time. Approaches based on estimating or bounding the causal effect between the protected feature and the final target have also been proposed (Kusner et al., 2017; Russell et al., 2017; Zhang et al., 2017; Nabi & Shpitser, 2018; Zhang & Bareinboim, 2018; Chiappa, 2019; Wu et al., 2019).
Focusing on the Equalized-Odds criterion, although various approaches have been proposed to impose the fairness requirement, whether it is always attainable is not well addressed. The attainability of Equalized Odds, namely, the existence of a predictor that scores zero violation of fairness in the large-sample limit, is an asymptotic property of the fairness criterion. This characterizes a completely different kind of fairness violation from the empirical error bound on discrimination in finite-sample cases. If one deploys a “fair” predictor that is actually biased, the discrimination becomes a snake in the grass, hard to detect and eliminate. Actually, as we illustrate in this paper, Equalized Odds is not always attainable for regression and even classification tasks if we use deterministic prediction functions. This calls for alternative definitions in the same spirit as Equalized Odds that can always be achieved under various circumstances. Our contributions are mainly:

• For regression and classification tasks with deterministic prediction functions, we show that Equalized Odds is not attainable unless certain (rather restrictive) conditions on the joint distribution of the features and the target variable are met.
• Under mild assumptions, for binary classification we show that if randomized prediction is taken into consideration, one can always derive a non-trivial Equalized Odds classifier.
• Considering the optimality of performance under fairness constraint(s), when exploiting all available features, we show that the predictor derived via an in-processing approach always outperforms the one derived via a post-processing approach (unconstrained optimization followed by a post-processing step).

2 PRELIMINARIES
In this section, we first illustrate the difference between prediction fairness and procedure fairness, and then we present the formal definition of Equalized Odds (Hardt et al., 2016).

2.1 HIERARCHY OF FAIRNESS

Before presenting the formulation of fairness, it is important to see the distinction between different levels of fairness when discussing fair predictors. When evaluating the performance of a proposed fair predictor, it is common practice to compare the loss (with respect to the utility function of choice, e.g., accuracy for binary classification) computed on the target variable and the predicted value. There is an implicit assumption lying beneath this practice: the generating process of the data, which just describes a real-world procedure, is not biased in any sense (Danks & London, 2017). Only when we treat the target variable (recorded in the dataset) as unbiased can we justify the practice of loss evaluation and the conditioning on the target variable when imposing fairness (as we shall see in the definition of Equalized Odds in Equation 1). Consider a music school admission example. The music school committee decides whether to admit a student to the violin performance program based on the applicant’s personal information, educational background, instrumental performance, and so on. When evaluating whether or not the admission is “fair”, there are actually two levels of fairness. First, based on the information at hand, did the committee evaluate the qualifications of applicants without bias (how the committee evaluates the applicants)? And second, is the committee’s procedure for evaluating applicants’ qualifications reasonable (how other people view the evaluation procedure used by the committee)?
In this paper, we consider prediction fairness, namely, assuming the data recorded is unbiased, the prediction (made with respect to the current reality) itself should not involve any biased utilization of information. Fairness with respect to the data-generating procedure, as well as the potential future influence of the prediction, is beyond the scope of this paper.

2.2 EQUALIZED-ODDS FAIRNESS

Hardt et al. (2016) proposed Equalized Odds, which requires conditional independence between the prediction and the protected feature(s) given the ground truth of the target. Let us denote the protected feature by A, with domain of values A, additional (observable) feature(s) by X, with domain of values X, the target variable by Y, with domain Y, (not necessarily fair) predictors by Ŷ, and fair predictors by Ỹ. Equalized-Odds fairness requires

Ỹ ⊥ A | Y. (1)

For classification tasks, one can conveniently use the probability distribution form:

∀ a ∈ A, t, y ∈ Y: P(Ỹ = t | A = a, Y = y) = P(Ỹ = t | Y = y), (2)

or more concisely,

P_{Ỹ|A,Y}(t | a, y) = P_{Ỹ|Y}(t | y). (3)

For better readability, we also use the formulation in Equation 3 in cases without ambiguity. In the context of binary classification (Y = {0, 1}), Equalized Odds requires that the True Positive Rate (TPR) and False Positive Rate (FPR) be equal across groups. Throughout the paper, without loss of generality we assume there is only one protected feature, for the purpose of simplifying notation. However, considering the fact that the protected feature can be discrete (e.g., race, gender) or continuous (e.g., the ratio of an ethnic group in the population of a certain district of a city), we do not assume discreteness of the protected feature. Due to the space limit, we focus on the illustration and implications of our results and defer all proofs to the appendix.

3 FAIRNESS IN REGRESSION MAY NOT BE ATTAINED
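For binary classification, the conditional-independence requirement of Equation 2 reduces to matching TPR and FPR across groups, which can be checked empirically. A minimal sketch (the function name and the toy arrays below are our own illustration, not from the paper):

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest difference across the two groups in P(Yhat = 1 | Y = y),
    i.e. in TPR (y = 1) and FPR (y = 0); 0 means Equation 2 holds empirically."""
    gaps = []
    for y in (0, 1):
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(equalized_odds_gap(y_true, y_true, group))  # perfect predictor: 0.0
print(equalized_odds_gap(y_true, np.array([0, 0, 1, 1, 1, 1, 1, 1]), group))  # group-1 FPR inflated: 1.0
```

Here a gap of 0 corresponds to an empirically Equalized-Odds-fair classifier on the sample, while the second call shows a predictor whose FPR differs across groups.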
In this section we consider the attainability of Equalized Odds for regression tasks, namely, whether it is possible to find a predictor that is conditionally independent of the protected feature given the true value of the target. For linear Gaussian cases, one can attain Equalized Odds by constraining the partial correlation between the prediction and the protected feature given the target variable to be zero (Woodworth et al., 2017). Various regularization terms have also been proposed to suppress discrimination when predicting a continuous target (Berk et al., 2017; Mary et al., 2019). However, whether one can always achieve 0-discrimination for regression, even with an unlimited amount of data, is not yet clear. If “fair” predictors are deployed without carefully checking the attainability of fairness, the discrimination becomes a hidden hazard, hard to detect and eliminate. Actually, as we will show in this section, even in the simple setup of linearly correlated continuous data, Equalized Odds is not always attainable.

3.1 UNATTAINABILITY OF EQUALIZED ODDS IN LINEAR NON-GAUSSIAN REGRESSION

As stated in Section 2.1, in this paper we consider prediction fairness, and therefore any possible bias introduced by the data-generating procedure itself is beyond the scope of the discussion. Consider the situation where the data is generated as follows (H is not measured in the dataset):

X = qA + E_X,  H = bA + E_H,  Y = cX + dH + E_Y, (4)

where (A, E_X, E_H, E_Y) are mutually independent. In fact, if at most one of E_X and E := E_Y + dE_H is Gaussian, then any linear combination of A and X with non-zero coefficients will not be conditionally independent of A given Y, meaning that it is not possible to achieve Equalized-Odds fairness. Let Z be a linear combination of A and X, i.e., Z = αA + βX = (α + qβ)A + βE_X, with linear coefficients α and β, where β ≠ 0.
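The linear-Gaussian attainability result cited above (Woodworth et al., 2017) can be illustrated numerically: simulating Equation 4 with Gaussian noise terms and assumed coefficients q = b = c = d = 1 (our own choice), one can pick α so that the sample partial correlation between Z = αA + βX and A given Y vanishes. A sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
q, b, c, d = 1.0, 1.0, 1.0, 1.0  # illustrative coefficient values (assumed)

# Data-generating process of Equation 4, here with Gaussian noise terms.
A = rng.normal(size=n)
E_X, E_H, E_Y = rng.normal(size=(3, n))
X = q * A + E_X
H = b * A + E_H
Y = c * X + d * H + E_Y

def cov(u, v):
    return np.mean((u - u.mean()) * (v - v.mean()))

def residual(v, y):
    # Residual of v after least-squares regression on y (with intercept).
    return v - v.mean() - cov(v, y) / cov(y, y) * (y - y.mean())

rA, rX = residual(A, Y), residual(X, Y)

# Choose alpha so that Z = alpha*A + beta*X has zero sample partial
# correlation with A given Y (the linear-Gaussian construction).
beta = 1.0
alpha = -beta * cov(rX, rA) / cov(rA, rA)
Z = alpha * A + beta * X

r_Z = residual(Z, Y)
pcorr = cov(r_Z, rA) / np.sqrt(cov(r_Z, r_Z) * cov(rA, rA))
print(f"alpha = {alpha:.3f}, partial corr(Z, A | Y) = {pcorr:.1e}")
```

With non-Gaussian noise the same construction would still zero the partial correlation, but, as this section argues, it would no longer imply conditional independence of Z and A given Y.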
In Theorem 3.1, we present the general result in linear non-Gaussian cases, where one cannot achieve conditional independence between Z and A given Y.

Theorem 3.1 (Unattainability of Equalized Odds in the Linear Non-Gaussian Case). Assume that feature X has a causal influence on Y, i.e., c ≠ 0 in Equation 4, and that the protected feature A and Y are not independent, i.e., qc + bd ≠ 0. Assume the densities p_{E_X} and p_E are positive on R. Let f_1 := log p_A, f_2 := log p_{E_X}, and f_3 := log p_E. Further assume that f_2 and f_3 are third-order differentiable. Then if at most one of E_X and E is Gaussian, Z is always conditionally dependent on A given Y.

From Theorem 3.1, we see that in linear non-Gaussian cases, any non-zero linear combination of the features (which is a deterministic function of the input) will not satisfy Equalized Odds. One may wonder whether Equalized Odds can be achieved by nonlinear regression instead of a linear model. Although a proof with general nonlinear models is rather complicated, our simulation results in Section 5.1 strongly suggest that the unattainability of Equalized Odds persists in nonlinear regression cases. In light of the unattainability of Equalized Odds for prediction with deterministic functions of A and X, it is desirable to develop general, nonlinear prediction algorithms that produce a probabilistic prediction (i.e., with a certain type of randomness in the prediction). One possible way follows the framework of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014): we use standard Gaussian random noise, in addition to A and X, as input, so that the output has a specific type of randomness. The parameters involved are learned by minimizing the prediction error while enforcing Equalized Odds on the “randomized” output at the same time.
Given that this approach is not essential to illustrate the claims made in this paper and that theoretical properties of such nonlinear regression algorithms with randomized output are not straightforward to establish , this is left as future work .
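As a toy illustration of the randomized-output idea discussed above, a prediction function that takes Gaussian noise as an extra input is stochastic even for fixed (A, X). The linear form and the weights below are purely hypothetical placeholders for a trained network:

```python
import numpy as np

def randomized_predictor(a, x, weights, rng):
    # Gaussian noise as an extra input makes the prediction stochastic for fixed (a, x).
    # `weights` is a hypothetical parameter vector that, in the GAN-style scheme sketched
    # above, would be learned by minimizing prediction error while penalizing
    # Equalized-Odds violations on the randomized output.
    w_a, w_x, w_eps = weights
    return w_a * a + w_x * x + w_eps * rng.normal()

rng = np.random.default_rng(0)
preds = [randomized_predictor(1.0, 2.0, (0.1, 0.8, 0.5), rng) for _ in range(3)]
print(preds)  # three different values for the same (a, x)
```

Setting the noise weight to zero recovers a deterministic predictor, which by the results above may be unable to satisfy Equalized Odds.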
This paper studies under what conditions a classifier can satisfy equalized odds. The authors first prove an impossibility result which shows that (in the linear non-Gaussian case) no deterministic classifier can achieve equalized odds across the protected groups. This leads to the question of randomized classifiers. We can always satisfy equalized odds with a trivial randomized classifier. However, the authors show two interesting results: (1) using the post-processing framework developed in Hardt et al. (2016), it is possible to obtain a non-trivial randomized classifier that satisfies EO, and (2) in-processing based classifiers are better than post-processing based fair classifiers when compared in terms of accuracy.
Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering
1 INTRODUCTION

Stochastic processes are widely used in many fields to model time series that exhibit random behaviour. In this work, we focus on processes that can be expressed as solutions of stochastic differential equations (SDEs) of the form

dX_t = µ(t, X_t) dt + σ(t, X_t) dW_t,

with certain assumptions on the drift µ and the diffusion σ. With respect to the L²-norm, the best prediction of a future value of the process is provided by the conditional expectation given the current value. If the drift and diffusion are known or a good estimate is available, the conditional expectation can be approximated by Monte Carlo (MC) simulation. However, since µ and σ are usually unknown, this approach strongly depends on the assumptions made about their parametric form. A more flexible approach is given by neural SDEs, where the drift µ and diffusion σ are modelled by neural networks (Tzen & Raginsky, 2019; Li et al., 2020; Jia & Benson, 2019). Nevertheless, modelling the diffusion can be avoided if one is only interested in forecasting the behaviour instead of sampling new paths. An alternative, widely used approach is Recurrent Neural Networks (RNNs), where a neural network dynamically updates a latent variable with the observations of a discrete input time series. RNNs are successfully applied to tasks in which time series are regularly sampled, for example speech or text recognition. However, observations are often irregularly spaced in time. The standard approach of dividing the time line into equally sized intervals and imputing or aggregating observations might lead to a significant loss of information (Rubanova et al., 2019). Frameworks that overcome this issue are GRU-ODE-Bayes (Brouwer et al., 2019) and ODE-RNN (Rubanova et al., 2019), which combine an RNN with a neural ODE (Chen et al., 2018). In standard RNNs, the hidden state is updated at each observation and constant in between.
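The Monte Carlo approximation of the conditional expectation mentioned above, for a known drift and diffusion, can be sketched with an Euler–Maruyama scheme. The function name and the Ornstein–Uhlenbeck example are our own illustration:

```python
import numpy as np

def mc_conditional_expectation(mu, sigma, x_t, s, n_paths=10_000, n_steps=100, seed=0):
    """Euler-Maruyama Monte Carlo estimate of E[X_{t+s} | X_t = x_t]
    when the drift mu and diffusion sigma are known."""
    rng = np.random.default_rng(seed)
    dt = s / n_steps
    x = np.full(n_paths, x_t, dtype=float)
    t = 0.0
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        x += mu(t, x) * dt + sigma(t, x) * dw
        t += dt
    return x.mean()

# Ornstein-Uhlenbeck example: dX_t = -X_t dt + 0.2 dW_t, so E[X_s | X_0 = 1] = exp(-s).
est = mc_conditional_expectation(lambda t, x: -x, lambda t, x: 0.2, x_t=1.0, s=1.0)
print(est)  # close to exp(-1), about 0.368
```

When µ and σ are unknown, this route is unavailable, which is exactly the motivation for the learning-based approaches discussed next.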
Conversely, in the GRU-ODE-Bayes and ODE-RNN frameworks, a neural ODE is trained to model the continuous evolution of the hidden state of the RNN between two observations. While GRU-ODE-Bayes and ODE-RNN both provide convincing empirical results, they lack thorough theoretical guarantees.

Contribution. In this paper, we introduce a mathematical framework to precisely describe the problem of online prediction and filtering of a stochastic process with temporally irregular observations. Based on this rigorous mathematical description, we introduce the Neural Jump ODE (NJ-ODE). The model architecture is very similar to that of GRU-ODE-Bayes and ODE-RNN; however, we introduce a novel training framework which, in contrast to them, allows us to prove convergence guarantees for the first time. Moreover, we demonstrate empirically the capabilities of our model.

Precise problem formulation. We emphasize that a precise definition of all ingredients is needed to be able to show theoretical convergence guarantees, which is the main purpose of this work. Since the objects of interest are stochastic processes, we use tools from probability theory and stochastic calculus. To make the paper more readable and comprehensible also for readers without a background in these fields, the precise formulations and proofs of all claims are given in the appendix, while the main part of the paper focuses on giving well-understandable heuristics.

2 PROBLEM STATEMENT

The problem we consider in this work is the online forecasting of temporal data. We assume that we make observations of a Markovian stochastic process described by the stochastic differential equation (SDE)

dX_t = µ(t, X_t) dt + σ(t, X_t) dW_t, (1)

at irregularly sampled time points. Between those observation times, we want to predict the stochastic process based only on the observations made previously in time, excluding the possibility of interpolating observations.
Due to the Markov property, only the last observation is needed for an optimal prediction. Hence, after each observation we extrapolate the current observation into the future until the next observation is made. The time at which the next observation will be made is random and assumed to be independent of the stochastic process itself. More precisely, we suppose we have a training set of N independent realisations of the R^{d_X}-dimensional stochastic process X defined in (1). Each realisation j is observed at n_j random observation times t_1^{(j)}, ..., t_{n_j}^{(j)} ∈ [0, T] with values x_1^{(j)}, ..., x_{n_j}^{(j)} ∈ R^{d_X}. We assume that all coordinates of the vector x_i^{(j)} are observed. We are interested in forecasting how a new independent realisation evolves in time, such that our predictions of X minimize the expected squared distance (L²-metric) to the true unknown path. The optimal prediction, i.e., the L²-minimizer, is the conditional expectation. Given that the value of the new realisation at time t is x_t, we are therefore interested in estimating the function

f(x_t, t, s) := E[X_{t+s} | X_t = x_t], s ≥ 0, (2)

which is the L²-optimal prediction until the next observation is made. To learn an approximation f̂ of f we make use of the N realisations of the training set. After training, f̂ is applied to the new realisation. Hence, this can be interpreted as a special type of filtering problem. The following example illustrates the considered problem.

Example. A vital parameter of patients in a hospital that is complicated to measure is measured multiple times during the first 48 hours of their stay. For each patient this happens at different times depending on the resources, hence the observation dates are irregular and exhibit some randomness.
Patient 1 has n_1 = 4 measurements at hours (t_1^{(1)}, t_2^{(1)}, t_3^{(1)}, t_4^{(1)}) = (1, 14, 27, 34), where the values (x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}) = (0.74, 0.65, 0.78, 0.81) are measured. Patient 2 only has n_2 = 2 measurements at hours (t_1^{(2)}, t_2^{(2)}) = (3, 28), where the values (x_1^{(2)}, x_2^{(2)}) = (0.56, 0.63) are measured. Similarly, the j-th patient has n_j measurements at times (t_1^{(j)}, ..., t_{n_j}^{(j)}) with measured values (x_1^{(j)}, ..., x_{n_j}^{(j)}). Based on this data, we want to forecast the vital parameter of new patients coming to the hospital. In particular, for a patient with measured value x_1 at time t_1, we want to predict what the value will likely be at any time t_1 + s > t_1. Importantly, we do not only focus on predicting the value at some t_2 > t_1; we want to know the entire evolution of the value.

3 BACKGROUND

Recurrent Neural Network. The input to an RNN is a discrete time series of observations {x_1, ..., x_n}. At each observation time t_{i+1}, a neural network, the RNNCell, updates the latent variable h using the previous latent variable h_i and the input x_{i+1} as h_{i+1} := RNNCell(h_i, x_{i+1}).

Neural Ordinary Differential Equation. Neural ODEs (Chen et al., 2018) are a family of continuous-time models defining a latent variable h_t := h(t) to be the solution of an ODE initial-value problem

h_t := h_0 + ∫_{t_0}^{t} f(h_s, s, θ) ds, t ≥ t_0, (3)

where f(·, ·, θ) = f_θ is a neural network with weights θ. Therefore, the latent variable can be updated continuously by solving the ODE (3). We can emphasize the dependence of h_t on a numerical ODE solver by rewriting (3) as

h_t := ODESolve(f_θ, h_0, (t_0, t)). (4)

ODE-RNN. ODE-RNN (Rubanova et al., 2019) is a mixture of an RNN and a neural ODE.
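The ODESolve operation of Equation 4 can be sketched with a simple explicit-Euler discretization (a placeholder for the adaptive solvers typically used with neural ODEs; the function name and step size are our own):

```python
import numpy as np

def odesolve(f_theta, h0, t_span, dt=0.01):
    """Explicit-Euler sketch of ODESolve in Equation 4: integrates dh/ds = f_theta(h, s)
    from t_span[0] to t_span[1] with a fixed step size dt."""
    t0, t1 = t_span
    h = np.asarray(h0, dtype=float)
    s = t0
    while s + dt <= t1 + 1e-12:
        h = h + dt * f_theta(h, s)
        s += dt
    return h

# Linear test ODE dh/ds = -h: the exact solution at t = 1 is h0 * exp(-1).
h1 = odesolve(lambda h, s: -h, [1.0], (0.0, 1.0))
print(h1)  # roughly [0.366], the Euler approximation of exp(-1) = 0.3679...
```

In the architectures below, f_theta is a trained neural network rather than a fixed vector field, but the solver plays exactly this role.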
In contrast to a standard RNN, we are not only interested in an output at the observation times t_i, but also in between those times. In particular, we want to have an output stream that is generated continuously in time. This is achieved by using a neural ODE to model the latent dynamics between two observation times, i.e., for t_{i−1} < t < t_i the latent variable is defined as in (3) and (4), with h_0 and t_0 replaced by h_{i−1} and t_{i−1}. At the next observation time t_i, the latent variable is updated by an RNN with the new observation x_i. Fixing h_0, the entire latent process can be computed by iteratively solving an ODE followed by applying an RNN. Rubanova et al. (2019) write this as

h'_i := ODESolve(f_θ, h_{i−1}, (t_{i−1}, t_i)),
h_i := RNNCell(h'_i, x_i). (5)

GRU-ODE-Bayes. The model architecture describing the latent variable in GRU-ODE-Bayes (Brouwer et al., 2019) is defined as a special case of the ODE-RNN architecture. In particular, a gated recurrent unit (GRU) is used for the RNN cell and a continuous version of the GRU for the neural ODE f_θ. Therefore, we focus on explaining the difference between our model architecture and the ODE-RNN architecture in the following section.

4 PROPOSED METHOD – NEURAL JUMP ODE

Markovian paths. Our assumptions on the stochastic process X imply that it is a Markov process. In particular, the optimal prediction of a future state of X depends only on the current state rather than on the full history. Hence, the previous values do not provide any additional useful information for the prediction.

JumpNN instead of RNN. Using a neural ODE between two observations has the advantage that it allows the hidden state to be modelled continuously between two observations. But since the underlying process is Markov, there is no need to use an RNN cell to model the updates of the hidden state at each new observation.
Instead, whenever a new observation is made, the new hidden state can be defined solely from this observation. Therefore, we replace the RNN cell used in ODE-RNN by a standard neural network mapping the observation to the hidden state. We call this the jumpNN, which can be interpreted as an encoder map. Compared to ODE-RNN, this architecture is easier to train.

Last observation and time increment as additional inputs for the neural ODE. The neural network f_θ used in the neural ODE takes two arguments as inputs, the hidden state h_t and the current time t. However, our theoretical problem analysis suggests that instead of t, the last observation time t_{i−1} and the time increment t − t_{i−1} should be used. Additionally, the last observation x_{i−1} should also be part of the input.

NJ-ODE. Combining the ODE-RNN architecture (5) with the previous considerations, we introduce the modified architecture of the Neural Jump ODE (NJ-ODE):

h'_i := ODESolve(f_θ, (h_{i−1}, x_{i−1}, t_{i−1}, t − t_{i−1}), (t_{i−1}, t_i)),
h_i := jumpNN(x_i). (6)

An implementable version of this method is presented in Algorithm 1. A neural ODE f_θ transforms the hidden state between observations, and the hidden state jumps according to jumpNN when a new observation is available. The outputNN, a standard neural network, maps any hidden state h_t to the output y_t. To implement the continuous-in-time ODE evaluation, a discretization scheme is provided by the inner loop. In the training process, the weights of all three neural networks, the jumpNN, the neural ODE f_θ, and the outputNN, are optimized.

Algorithm 1 The NJ-ODE. A small step size ∆t is fixed and we denote t_{n+1} := T.
Input: data points with timestamps {(x_i, t_i)}_{i=0,...,n}
for i = 0 to n do
    h_{t_i} = jumpNN(x_i)        ▷ update hidden state given next observation x_i
    y_{t_i} = outputNN(h_{t_i})  ▷ compute output
    s ← t_i
    while s + ∆t ≤ t_{i+1} do
        h_{s+∆t} = ODESolve(f_θ, h_s, x_i, t_i, s − t_i, (s, s + ∆t))  ▷ get next hidden state
        y_{s+∆t} = outputNN(h_{s+∆t})  ▷ compute output
        s ← s + ∆t
    end while
end for

Objective function. Our goal is to train the NJ-ODE model such that its output approximates the conditional expectation (2), which is the optimal prediction of the target process X with respect to the L²-norm. Therefore, we define a new objective function with which we can prove convergence. Let y_{i−} denote the output of the NJ-ODE at t_i before the jump and y_i the output at t_i after the jump. Note that the outputs depend on the parameters θ and the previously observed x_i, which are inputs to the model. Then the objective function is defined as

Φ̂_N(θ) := (1/N) Σ_{j=1}^{N} (1/n_j) Σ_{i=1}^{n_j} ( |x_i^{(j)} − y_i^{(j)}| + |y_i^{(j)} − y_{i−}^{(j)}| )², (7)

where the outer sum runs over paths, the inner sum over observation dates, the first term inside the parentheses is the jump part at the observations, and the second is the continuous part between two observations. We give an intuitive explanation for this definition. The “jump part” of the loss function forces the jumpNN to produce good updates based on new observations, while the other part forces the jump size to be small in (the empirical) L²-norm. Since the conditional expectation minimizes the jump size with respect to the L²-norm, this forces the neural ODE f_θ to continuously transform the hidden state such that the output approximates the conditional expectation. Moreover, both parts of the loss function force the outputNN to reasonably transform the hidden state h_t to the output y_t.
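A minimal single-path sketch of Algorithm 1 together with the objective of Equation 7. Scalar states, placeholder networks, and a simplified f_θ signature (dropping the last-observation inputs) are our own simplifications for illustration:

```python
def nj_ode_path(ts, xs, jump_nn, f_theta, output_nn, h0, dt=0.25):
    """Sketch of Algorithm 1 on one scalar path. Returns, for each observation i,
    the model output y_{i-} just before the jump and y_i just after it,
    i.e. the quantities entering Equation 7."""
    h = float(h0)
    s = 0.0
    y_before, y_after = [], []
    for t_i, x_i in zip(ts, xs):
        while s + dt <= t_i + 1e-12:     # Euler steps of the neural ODE between observations
            h = h + dt * f_theta(h, s)   # simplified input signature for the sketch
            s += dt
        y_before.append(output_nn(h))    # y_{i-}: output just before the jump
        h = jump_nn(x_i)                 # jump: encode the new observation
        y_after.append(output_nn(h))     # y_i: output just after the jump
    return y_before, y_after

def nj_ode_loss(xs, y_before, y_after):
    """Empirical objective of Equation 7 for a single path (the inner average)."""
    terms = [(abs(x - ya) + abs(ya - yb)) ** 2
             for x, yb, ya in zip(xs, y_before, y_after)]
    return sum(terms) / len(terms)

# Placeholder "networks": identity encoder/decoder and a frozen vector field.
yb, ya = nj_ode_path([0.5, 1.0], [1.0, 2.0],
                     jump_nn=lambda x: x,
                     f_theta=lambda h, s: 0.0,
                     output_nn=lambda h: h,
                     h0=0.0)
print(yb, ya, nj_ode_loss([1.0, 2.0], yb, ya))
```

In training, the loss would be averaged over the N paths as in Equation 7 and minimized over the parameters of jumpNN, f_θ, and outputNN; here the frozen placeholders only demonstrate the control flow.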
This paper introduces Neural Jump Ordinary Differential Equations as a method for learning models of continuous-time stochastic processes sampled at random time epochs. Specifically, the paper studies the problem of estimating the marginal conditional expectation (i.e., the L2 optimal approximation conditional on the available information) by estimating an auxiliary stochastic differential equation, parameterized by neural networks, that approximates the conditional expectation of the process of interest at each point in time. The neural networks are trained by using a “randomized” mean squared-loss objective. The main theoretical results in the paper include asymptotic consistency of the optimal objective value in the limit of a large neural network, as well as consistency of a Monte Carlo sample average estimator of the value. The paper also establishes the L2 convergence of the estimated auxiliary solution to the marginal conditional expectation.
Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering
1 INTRODUCTION . Stochastic processes are widely used in many fields to model time series that exhibit a random behaviour . In this work , we focus on processes that can be expressed as solutions of stochastic differential equations ( SDE ) of the form dXt = µ ( t , Xt ) dt+ σ ( t , Xt ) dWt , with certain assumptions on the drift µ and the diffusion σ . With respect to the L2-norm , the best prediction of a future value of the process is provided by the conditional expectation given the current value . If the drift and diffusion are known or a good estimation is available , the conditional expectation can be approximated by a Monte Carlo ( MC ) simulation . However , since µ and σ are usually unknown , this approach strongly depends on the assumptions made on their parametric form . A more flexible approach is given by neural SDEs , where the drift µ and diffusion σ are modelled by neural networks ( Tzen & Raginsky , 2019 ; Li et al. , 2020 ; Jia & Benson , 2019 ) . Nevertheless , modelling the diffusion can be avoided if one is only interested in forecasting the behaviour instead of sampling new paths . An alternative widely used approach is to use Recurrent Neural Networks ( RNN ) , where a neural network dynamically updates a latent variable with the observations of a discrete input time-series . RNNs are successfully applied to tasks for which time-series are regularly sampled , as for example speech or text recognition . However , often observations are irregularly observed in time . The standard approach of dividing the time-line into equally-sized intervals and imputing or aggregating observations might lead to a significant loss of information ( Rubanova et al. , 2019 ) . Frameworks that overcome this issue are the GRU-ODE-Bayes ( Brouwer et al. , 2019 ) and the ODE-RNN ( Rubanova et al. , 2019 ) , which combine a RNN with a neural ODE ( Chen et al. , 2018 ) . In standard RNNs , the hidden state is updated at each observation and constant in between . 
Conversely , in the GRU-ODEBayes and ODE-RNN framework , a neural ODE is trained to model the continuous evolution of the hidden state of the RNN between two observations . While GRU-ODE-Bayes and ODE-RNN both provide convincing empirical results , they lack thorough theoretical guarantees . Contribution . In this paper , we introduce a mathematical framework to precisely describe the problem statement of online prediction and filtering of a stochastic process with temporal irregular observations . Based on this rigorous mathematical description , we introduce the Neural Jump ODE ( NJ-ODE ) . The model architecture is very similar to the one of GRU-ODE-Bayes and ODE-RNN , however we introduce a novel training framework , which in contrast to them allows us to prove convergence guarantees for the first time . Moreover , we demonstrate empirically the capabilities of our model . Precise problem formulation . We emphasize that a precise definition of all ingredients is needed , to be able to show theoretical convergence guarantees , which is the main purpose of this work . Since the objects of interest are stochastic processes , we use tools from probability theory and stochastic calculus . To make the paper more readable and comprehensible also for readers without background in these fields , the precise formulations and demonstrations of all claims are given in the appendix , while the main part of the paper focuses on giving well understandable heuristics . 2 PROBLEM STATEMENT . The problem we consider in this work , is the online forecasting of temporal data . We assume that we make observations of a Markovian stochastic process described by the stochastic differential equation ( SDE ) dXt = µ ( t , Xt ) dt+ σ ( t , Xt ) dWt , ( 1 ) at irregularly-sampled time points . Between those observation times , we want to predict the stochastic process , based only on the observations that we made previously in time , excluding the possibility to interpolate observations . 
Due to the Markov property, only the last observation is needed for an optimal prediction. Hence, after each observation we extrapolate the current observation into the future until we make the next observation. The time at which the next observation will be made is random and assumed to be independent of the stochastic process itself.

More precisely, we suppose that we have a training set of N independent realisations of the $\mathbb{R}^{d_X}$-dimensional stochastic process X defined in (1). Each realisation j is observed at $n_j$ random observation times $t_1^{(j)}, \dots, t_{n_j}^{(j)} \in [0, T]$ with values $x_1^{(j)}, \dots, x_{n_j}^{(j)} \in \mathbb{R}^{d_X}$. We assume that all coordinates of the vector $x_i^{(j)}$ are observed. We are interested in forecasting how a new independent realization evolves in time, such that our predictions of X minimize the expected squared distance ($L^2$-metric) to the true unknown path. The optimal prediction, i.e. the $L^2$-minimizer, is the conditional expectation. Given that the value of the new realization at time t is $x_t$, we are therefore interested in estimating the function

$f(x_t, t, s) := \mathbb{E}[X_{t+s} \mid X_t = x_t], \quad s \ge 0$,   (2)

which is the $L^2$-optimal prediction until the next observation is made. To learn an approximation $\hat{f}$ of f, we make use of the N realisations of the training set. After training, $\hat{f}$ is applied to the new realization. Hence, this can be interpreted as a special type of filtering problem. The following example illustrates the considered problem.

Example. A vital parameter of patients in a hospital that is complicated to measure is measured multiple times during the first 48 hours of their stay. For each patient, this happens at different times depending on the resources, hence the observation dates are irregular and exhibit some randomness.
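For some SDEs the target function $f$ in (2) is available in closed form, which makes the learning goal tangible. The sketch below (an illustration, not from the paper) uses the Ornstein-Uhlenbeck process $dX = \theta(m - X)\,dt + \sigma\,dW$, whose conditional expectation is $\mathbb{E}[X_{t+s} \mid X_t = x] = m + (x - m)e^{-\theta s}$, and checks it against a Monte-Carlo estimate.

```python
import numpy as np

theta, m, sigma = 2.0, 0.5, 0.3

def f(x_t, t, s):
    # Closed-form conditional expectation E[X_{t+s} | X_t = x_t]
    # for the Ornstein-Uhlenbeck SDE dX = theta*(m - X) dt + sigma dW.
    return m + (x_t - m) * np.exp(-theta * s)

# Monte-Carlo check: simulate many paths forward from x_t with Euler steps.
rng = np.random.default_rng(1)
x_t, s, n_steps, n_paths = 0.9, 0.5, 200, 20000
dt = s / n_steps
x = np.full(n_paths, x_t)
for _ in range(n_steps):
    x = x + theta * (m - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
mc_estimate = x.mean()   # should be close to f(x_t, 0.0, s)
```

A trained $\hat f$ would be judged by exactly this kind of agreement with the conditional expectation.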
Patient 1 has $n_1 = 4$ measurements at hours $(t_1^{(1)}, t_2^{(1)}, t_3^{(1)}, t_4^{(1)}) = (1, 14, 27, 34)$, where the values $(x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}) = (0.74, 0.65, 0.78, 0.81)$ are measured. Patient 2 only has $n_2 = 2$ measurements at hours $(t_1^{(2)}, t_2^{(2)}) = (3, 28)$, where the values $(x_1^{(2)}, x_2^{(2)}) = (0.56, 0.63)$ are measured. Similarly, the j-th patient has $n_j$ measurements at times $(t_1^{(j)}, \dots, t_{n_j}^{(j)})$ with measured values $(x_1^{(j)}, \dots, x_{n_j}^{(j)})$. Based on this data, we want to forecast the vital parameter of new patients coming to the hospital. In particular, for a patient with measured value $x_1$ at time $t_1$, we want to predict what the value will likely be at any time $t_1 + s > t_1$. Importantly, we do not only focus on predicting the value at some $t_2 > t_1$; we want to know the entire evolution of the value.

3 BACKGROUND.

Recurrent Neural Network. The input to an RNN is a discrete time series of observations $\{x_1, \dots, x_n\}$. At each observation time $t_{i+1}$, a neural network, the RNNCell, updates the latent variable h using the previous latent variable $h_i$ and the input $x_{i+1}$ as $h_{i+1} := \mathrm{RNNCell}(h_i, x_{i+1})$.

Neural Ordinary Differential Equation. Neural ODEs (Chen et al., 2018) are a family of continuous-time models defining a latent variable $h_t := h(t)$ to be the solution to an ODE initial-value problem

$h_t := h_0 + \int_{t_0}^{t} f(h_s, s, \theta)\,ds, \quad t \ge t_0$,   (3)

where $f(\cdot, \cdot, \theta) = f_\theta$ is a neural network with weights θ. Therefore, the latent variables can be updated continuously by solving this ODE (3). We can emphasize the dependence of $h_t$ on a numerical ODE solver by rewriting (3) as

$h_t := \mathrm{ODESolve}(f_\theta, h_0, (t_0, t))$.   (4)

ODE-RNN. ODE-RNN (Rubanova et al., 2019) is a mixture of an RNN and a neural ODE.
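The ODESolve abstraction in (3)-(4) can be sketched with a fixed-step Euler scheme. The one-layer tanh vector field below stands in for a trained $f_\theta$; its random weights are purely illustrative.

```python
import numpy as np

def f_theta(h, t, W, b):
    # A one-layer "neural" vector field f(h, t, theta); weights are illustrative.
    z = np.concatenate([h, [t]])
    return np.tanh(W @ z + b)

def ode_solve(f, h0, t0, t1, n_steps, params):
    # Fixed-step Euler discretization of h_t = h_0 + int_{t0}^{t} f(h_s, s, theta) ds.
    h, t = h0.copy(), t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h, t, *params)
        t += dt
    return h

rng = np.random.default_rng(0)
d = 4
W, b = rng.normal(size=(d, d + 1)), rng.normal(size=d)
h0 = np.zeros(d)
h1 = ode_solve(f_theta, h0, 0.0, 1.0, 100, (W, b))   # latent state at t = 1
```

In practice an adaptive solver would replace the fixed Euler loop, but the interface (vector field, initial state, time interval) is the same.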
In contrast to a standard RNN, we are not only interested in an output at the observation times $t_i$, but also in between those times. In particular, we want to have an output stream that is generated continuously in time. This is achieved by using a neural ODE to model the latent dynamics between two observation times, i.e. for $t_{i-1} < t < t_i$ the latent variable is defined as in (3) and (4), with $h_0$ and $t_0$ replaced by $h_{i-1}$ and $t_{i-1}$. At the next observation time $t_i$, the latent variable is updated by an RNN with the new observation $x_i$. Fixing $h_0$, the entire latent process can be computed by iteratively solving an ODE followed by applying an RNN. Rubanova et al. (2019) write this as

$h'_i := \mathrm{ODESolve}(f_\theta, h_{i-1}, (t_{i-1}, t_i))$, $\quad h_i := \mathrm{RNNCell}(h'_i, x_i)$.   (5)

GRU-ODE-Bayes. The model architecture describing the latent variable in GRU-ODE-Bayes (Brouwer et al., 2019) is defined as a special case of the ODE-RNN architecture. In particular, a gated recurrent unit (GRU) is used for the RNN cell and a continuous version of the GRU for the neural ODE $f_\theta$. Therefore, in the following section we focus on explaining the difference between our model architecture and the ODE-RNN architecture.

4 PROPOSED METHOD – NEURAL JUMP ODE.

Markovian paths. Our assumptions on the stochastic process X imply that it is a Markov process. In particular, the optimal prediction of a future state of X only depends on the current state rather than on the full history. Hence, the previous values do not provide any additional useful information for the prediction.

JumpNN instead of RNN. Using a neural ODE between two observations has the advantage that it allows to continuously model the hidden state between two observations. But since the underlying process is Markov, there is no need to use an RNN cell to model the updates of the hidden state at each new observation.
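The ODE-RNN recursion (5) can be sketched as a loop that alternates Euler ODE evolution with an RNN-cell update at each observation. The vector field and RNN cell below are toy stand-ins, not trained networks.

```python
import numpy as np

def ode_solve(f, h, t0, t1, n_steps=20):
    # Fixed-step Euler solver playing the role of ODESolve in (5).
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        h = h + dt * f(h, t)
        t += dt
    return h

def ode_rnn(xs, ts, f, rnn_cell, h0):
    """Iterate (5): evolve h with the ODE between observation times,
    then update it with the RNN cell at each observation."""
    h, t_prev = h0, 0.0
    hs = []
    for x, t in zip(xs, ts):
        h_prime = ode_solve(f, h, t_prev, t)   # h'_i
        h = rnn_cell(h_prime, x)               # h_i
        hs.append(h)
        t_prev = t
    return hs

# Illustrative stand-ins for f_theta and the RNN cell (untrained toy maps).
f = lambda h, t: -0.5 * h
rnn_cell = lambda h, x: np.tanh(h + x)
hs = ode_rnn(xs=[0.2, -0.1, 0.4], ts=[1.0, 2.5, 3.0], f=f, rnn_cell=rnn_cell, h0=np.zeros(1))
```

Each entry of `hs` is the latent state just after an observation; evaluating `ode_solve` part-way between two observation times yields the continuous-in-time output stream.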
Instead, whenever a new observation is made, the new hidden state can be defined solely from this observation. Therefore, we replace the RNN cell used in ODE-RNN by a standard neural network mapping the observation to the hidden state. We call this the jumpNN, which can be interpreted as an encoder map. Compared to ODE-RNN, this architecture is easier to train.

Last observation and time increment as additional inputs for the neural ODE. The neural network $f_\theta$ used in the neural ODE takes two arguments as inputs, the hidden state $h_t$ and the current time t. However, our theoretical problem analysis suggests that instead of t, the last observation time $t_{i-1}$ and the time increment $t - t_{i-1}$ should be used. Additionally, the last observation $x_{i-1}$ should also be part of the input.

NJ-ODE. Combining the ODE-RNN architecture (5) with the previous considerations, we introduce the modified architecture of the Neural Jump ODE (NJ-ODE):

$h'_i := \mathrm{ODESolve}(f_\theta, (h_{i-1}, x_{i-1}, t_{i-1}, t - t_{i-1}), (t_{i-1}, t_i))$, $\quad h_i := \mathrm{jumpNN}(x_i)$.   (6)

An implementable version of this method is presented in Algorithm 1. A neural ODE $f_\theta$ transforms the hidden state between observations, and the hidden state jumps according to jumpNN when a new observation is available. The outputNN, a standard neural network, maps any hidden state $h_t$ to the output $y_t$. To implement the continuous-in-time ODE evaluation, a discretization scheme is provided by the inner loop. In the training process, the weights of all three neural networks, jumpNN, the neural ODE $f_\theta$ and outputNN, are optimized.

Algorithm 1 The NJ-ODE. A small step size $\Delta t$ is fixed and we denote $t_{n+1} := T$.
Input: data points with timestamps $\{(x_i, t_i)\}_{i=0,\dots,n}$
for i = 0 to n do
    $h_{t_i} = \mathrm{jumpNN}(x_i)$ ▷ update hidden state given observation $x_i$
    $y_{t_i} = \mathrm{outputNN}(h_{t_i})$ ▷ compute output
    $s \leftarrow t_i$
    while $s + \Delta t \le t_{i+1}$ do
        $h_{s+\Delta t} = \mathrm{ODESolve}(f_\theta, (h_s, x_i, t_i, s - t_i), (s, s + \Delta t))$ ▷ get next hidden state
        $y_{s+\Delta t} = \mathrm{outputNN}(h_{s+\Delta t})$ ▷ compute output
        $s \leftarrow s + \Delta t$
    end while
end for

Objective function. Our goal is to train the NJ-ODE model such that its output approximates the conditional expectation (2), which is the optimal prediction of the target process X with respect to the $L^2$-norm. Therefore, we define a new objective function, with which we can prove convergence. Let $y_{i-}$ denote the output of the NJ-ODE at $t_i$ before the jump and $y_i$ the output at $t_i$ after the jump. Note that the outputs depend on the parameters θ and the previously observed $x_i$, which are inputs to the model. Then the objective function is defined as

$\hat{\Phi}_N(\theta) := \frac{1}{N} \sum_{j=1}^{N} \frac{1}{n_j} \sum_{i=1}^{n_j} \left( |x_i^{(j)} - y_i^{(j)}| + |y_i^{(j)} - y_{i-}^{(j)}| \right)^2$,   (7)

where the outer sum runs over paths j, the inner sum over observation dates i, the first summand is the jump part (at observations) and the second summand is the continuous part (between two observations). We give an intuitive explanation for this definition. The "jump part" of the loss function forces the jumpNN to produce good updates based on new observations, while the other part forces the jump size to be small in (the empirical) $L^2$-norm. Since the conditional expectation minimizes the jump size with respect to the $L^2$-norm, this forces the neural ODE $f_\theta$ to continuously transform the hidden state such that the output approximates the conditional expectation. Moreover, both parts of the loss function force the outputNN to reasonably transform the hidden state $h_t$ to the output $y_t$.
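Algorithm 1 and the loss (7) can be sketched end-to-end with toy stand-ins for the three networks. Everything below is illustrative: the networks are untrained fixed maps, the ODE solve is a plain Euler step, and as a simplification the path starts at the first observation (no evolution of an initial $h_0$ before it), so the loss is averaged over observations $i \ge 1$.

```python
import numpy as np

# Illustrative stand-ins for the three networks (untrained toy maps).
jump_nn = lambda x: np.tanh(np.array([x, -x]))       # observation -> hidden state
output_nn = lambda h: float(h[0] - h[1])             # hidden state -> prediction y
f_theta = lambda h, x_last, t_last, dt_last: -0.2 * h  # vector field of the neural ODE

def nj_ode_forward(xs, ts, step=0.01):
    """Algorithm 1: jump at observations, Euler-evolve in between.
    Returns predictions y_i (after jump) and y_{i-} (before jump)."""
    y_after, y_before = [], []
    h, t_last, x_last = None, None, None
    for x, t in zip(xs, ts):
        if h is not None:
            s = t_last
            while s + step <= t:   # inner discretization loop of Algorithm 1
                h = h + step * f_theta(h, x_last, t_last, s - t_last)
                s += step
            y_before.append(output_nn(h))   # y_{i-}: output just before the jump
        h = jump_nn(x)                      # jump: hidden state from observation only
        y_after.append(output_nn(h))        # y_i: output just after the jump
        t_last, x_last = t, x
    return y_after, y_before

def loss_one_path(xs, y_after, y_before):
    # Eq. (7) for a single path: (|x_i - y_i| + |y_i - y_{i-}|)^2, averaged over i >= 1.
    terms = [(abs(x - ya) + abs(ya - yb)) ** 2
             for x, ya, yb in zip(xs[1:], y_after[1:], y_before)]
    return sum(terms) / len(terms)

xs, ts = [0.74, 0.65, 0.78], [1.0, 14.0, 27.0]
y_after, y_before = nj_ode_forward(xs, ts)
loss = loss_one_path(xs, y_after, y_before)
```

Training would average this loss over the N paths and backpropagate through all three maps; here the point is only the control flow of jumps, ODE evolution, and the two output streams $y_i$ and $y_{i-}$.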
The authors propose a method for learning the conditional expectation of a stochastic process in an online fashion. The paper contains a considerable theoretical treatment, derived from the stochastic filtering literature, which is present both in the main body of the paper and in the appendix. Besides the model, the paper also aims to provide a theoretical justification of the convergence of the method.
Experience Replay with Likelihood-free Importance Weights
1 INTRODUCTION. Deep reinforcement learning methods have achieved much success in a wide variety of domains (Mnih et al., 2016; Lillicrap et al., 2015; Horgan et al., 2018). While on-policy methods (Schulman et al., 2017) are effective, using off-policy data often yields better sample efficiency (Haarnoja et al., 2018; Fujimoto et al., 2018), which is critical when querying the environment is expensive and experiences are difficult to obtain. Experience replay (Lin, 1992) is a popular paradigm in off-policy reinforcement learning, where experiences stored in a replay memory can be reused to perform additional updates. When applied to temporal difference (TD) learning of the Q-value function (Mnih et al., 2015), the use of replay buffers avoids catastrophic forgetting of previous experiences and improves learning. Selecting experiences from the replay buffers using a prioritization strategy (instead of uniformly) can lead to large empirical improvements in terms of sample efficiency (Hessel et al., 2017). Existing prioritization procedures rely on certain choices of importance sampling; for instance, Prioritized Experience Replay (PER) selects experiences with high TD error more often, and then down-weights the experiences that are frequently sampled in order to be closer to uniform sampling over the experiences (Schaul et al., 2015). However, this might not work well in actor-critic methods, where the goal is to learn the value function (or Q-value function) induced by the current policy, and following off-policy experiences might be harmful. In this case, it might be more beneficial to perform importance sampling that reflects on-policy experiences instead. Based on this intuition, we investigate a new prioritization strategy for actor-critic methods based on the likelihood (i.e., the frequency) of experiences under the stationary distribution of the current policy (Tsitsiklis et al., 1997).
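The PER mechanics referenced above can be sketched in a few lines: priorities are a power of the absolute TD error, and the sampled transitions receive importance weights that correct back toward uniform sampling. The exponents α and β are the commonly used defaults from Schaul et al. (2015); the max-normalization of the weights is a common implementation convention, included here as an assumption.

```python
import numpy as np

def per_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6, rng=None):
    """Prioritized sampling in the style of Schaul et al. (2015):
    p_i proportional to (|delta_i| + eps)^alpha, with importance weights
    (N * P(i))^(-beta) that down-weight frequently sampled transitions."""
    rng = rng if rng is not None else np.random.default_rng()
    prios = (np.abs(td_errors) + eps) ** alpha
    probs = prios / prios.sum()
    idx = rng.choice(len(td_errors), size=batch_size, p=probs)
    weights = (len(td_errors) * probs[idx]) ** (-beta)
    weights /= weights.max()   # normalize for stability, as is common in practice
    return idx, weights

td = np.array([0.1, 2.0, 0.05, 1.0])
idx, w = per_sample(td, batch_size=32, rng=np.random.default_rng(0))
```

High-TD-error transitions (index 1 here) dominate the sampled batch, while their importance weights are the smallest, pulling the gradient estimate back toward the uniform-sampling one.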
In actor-critic methods (Konda & Tsitsiklis, 2000), we can estimate the value function of a policy by minimizing the expected squared difference between the critic network and its target value over a replay buffer; an appropriate replay buffer should properly reflect the discrepancy between critic value functions. We treat a discrepancy as "proper" if it preserves the contraction properties of the Bellman operator, and consider discrepancies measured by the expected squared distances under some state-action distribution. In Theorem 1 we prove that the stationary distribution of the current policy is the only distribution under which the Bellman operator is a contraction (i.e. is "proper"); this motivates the use of the stationary distribution as the underlying distribution for the replay buffer. Intuitively, optimizing the expected TD error under the stationary distribution addresses the TD-learning issue in actor-critic methods, as the TD errors in high-frequency states are given more weight. To use replay buffers derived from the stationary distribution with existing deep reinforcement learning methods, we need to be mindful of the following bias-variance trade-off. We have fewer experiences from the current policy (using only these results in high variance), but more experiences from other policies under the same environment (using these introduces high bias). We propose to find appropriate bias-variance trade-offs by using importance sampling over the replay buffer, which requires an estimate of the density ratio between the stationary policy distribution and the replay buffer. Inspired by recent advances in inverse reinforcement learning (Fu et al., 2017) and off-policy policy evaluation (Grover et al., 2019), we use a likelihood-free method to obtain an estimate of the density ratio from a classifier trained to distinguish different types of experiences.
We consider a smaller, "fast" replay buffer that contains near on-policy experiences, and a larger, "slow" replay buffer that contains additional off-policy experiences, and estimate density ratios between the two buffers. We then use these estimated density ratios as importance weights in the Q-value function update objective. This encourages more updates over state-action pairs that are more likely under the stationary distribution of the current policy, i.e., closer to the fast replay buffer. Our approach can be readily combined with existing approaches that learn value functions from replay buffers. We consider our approach over three competitive actor-critic methods, Soft Actor-Critic (SAC, Haarnoja et al. (2018)), Twin Delayed Deep Deterministic policy gradient (TD3, Fujimoto et al. (2018)) and Data-regularized Q (DrQ, Kostrikov et al. (2020)). We demonstrate the effectiveness of our approach on 11 environments from OpenAI gym (Dhariwal et al., 2017) and the DeepMind Control Suite (Tassa et al., 2018), where both low-dimensional state spaces and high-dimensional image spaces are considered; this results in 45 method-task combinations in total. Notably, our approach outperforms the respective baselines in 35 out of the 45 cases, while being competitive in the remaining 10 cases. This demonstrates that our method can be applied as a simple plug-and-play approach to improve existing actor-critic methods. 2 PRELIMINARIES. The reinforcement learning problem can be described as finding a policy for a Markov decision process (MDP) defined as the following tuple $(S, A, P, r, \gamma, p_0)$, where S is the state space, A is the action space, $P : S \times A \to P(S)$ is the transition kernel, $r : S \times A \to \mathbb{R}$ is the reward function, $\gamma \in [0, 1)$ is the discount factor and $p_0 \in P(S)$ is the initial state distribution.
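The classifier-based density-ratio idea can be sketched concretely: train a binary classifier $D$ to distinguish fast-buffer from slow-buffer samples; its odds $D(x)/(1-D(x)) = \exp(\text{logit})$ then estimate the ratio of the two densities. Everything below is a stand-in, not the paper's implementation: one-dimensional Gaussian "buffers" and a from-scratch logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for buffer contents: "slow" buffer from N(0,1), "fast" buffer from N(1,1).
slow = rng.normal(0.0, 1.0, size=(2000, 1))
fast = rng.normal(1.0, 1.0, size=(2000, 1))

# Train a logistic classifier D(x) = P(x came from the fast buffer).
X = np.vstack([slow, fast])
y = np.concatenate([np.zeros(len(slow)), np.ones(len(fast))])
w_, b_ = np.zeros(1), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w_ + b_)))
    g = p - y                              # gradient of the logistic loss w.r.t. logits
    w_ -= 0.1 * (X.T @ g) / len(y)
    b_ -= 0.1 * g.mean()

def density_ratio(x):
    # Likelihood-free estimate: p_fast(x) / p_slow(x) = D(x) / (1 - D(x)) = exp(logit).
    return np.exp(x @ w_ + b_)
```

For these two Gaussians the true log-ratio is $x - 0.5$, so the estimated ratio should grow with $x$; in the actual method, `density_ratio` evaluated on sampled transitions provides the importance weights for the Q-update.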
The goal is to learn a stationary policy $\pi : S \to P(A)$ that selects actions in A for each state $s \in S$, such that the policy maximizes the expected sum of rewards: $J(\pi) := \mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$, where the expectation is over trajectories sampled from $s_0 \sim p_0$, $a_t \sim \pi(\cdot|s_t)$, and $s_{t+1} \sim P(\cdot|s_t, a_t)$ for $t \ge 0$. For a fixed policy, the MDP becomes a Markov chain, so we define the state-action distribution at timestep t, $d_t^\pi(s, a)$, and the corresponding (unnormalized) stationary distribution over states and actions $d^\pi(s, a) = \sum_{t=0}^{\infty} \gamma^t d_t^\pi(s, a)$ (we assume this always exists for the policies we consider). We can then write $J(\pi) = \mathbb{E}_{d^\pi}[r(s, a)]$. For any stationary policy π, we define its corresponding state-action value function as $Q^\pi(s, a) := \mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a]$, its corresponding value function as $V^\pi(s) := \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^\pi(s, a)]$, and the advantage function $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$. A large variety of actor-critic methods (Konda & Tsitsiklis, 2000) have been developed in the context of deep reinforcement learning (Silver et al., 2014; Mnih et al., 2016; Lillicrap et al., 2015; Haarnoja et al., 2018; Fujimoto et al., 2018), where learning good approximations to the Q-function is critical to the success of any deep reinforcement learning method based on the actor-critic paradigm. The Q-function can be learned via temporal difference (TD) learning (Sutton, 1988) based on the Bellman equation $Q^\pi(s, a) = B^\pi Q^\pi(s, a)$, where $B^\pi$ denotes the Bellman evaluation operator

$B^\pi Q(s, a) := r(s, a) + \gamma \mathbb{E}_{s', a'}[Q(s', a')]$,   (1)

where in the expectation we sample the next step, $s' \sim P(\cdot|s, a)$ and $a' \sim \pi(\cdot|s')$.
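The discounted occupancy $d^\pi$ and the identity $J(\pi) = \mathbb{E}_{d^\pi}[r]$ can be verified directly on a small random MDP (a toy illustration, with the expectation under the unnormalized $d^\pi$ reducing to a plain sum).

```python
import numpy as np

gamma = 0.9
nS, nA = 3, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over s'
r = rng.uniform(size=(nS, nA))
pi = rng.dirichlet(np.ones(nA), size=nS)        # pi[s] is a distribution over a
p0 = np.ones(nS) / nS

# State transition matrix under pi: P_pi[s, s'] = sum_a pi(a|s) P(s'|s, a).
P_pi = np.einsum('sa,sax->sx', pi, P)

# Unnormalized discounted occupancy over states, d = sum_t gamma^t p_t,
# which solves d = p0 + gamma * P_pi^T d.
d_s = np.linalg.solve(np.eye(nS) - gamma * P_pi.T, p0)
d_sa = d_s[:, None] * pi                         # d(s, a) = d(s) * pi(a|s)

# J(pi) = E_{d^pi}[r]; with the unnormalized occupancy this is a plain sum.
J_occupancy = np.sum(d_sa * r)

# Cross-check against direct policy evaluation: V = (I - gamma P_pi)^{-1} r_pi.
r_pi = np.sum(pi * r, axis=1)
V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
J_eval = p0 @ V   # equals J_occupancy up to floating-point error
```

The agreement of `J_occupancy` and `J_eval` is exactly the identity $J(\pi) = \mathbb{E}_{d^\pi}[r(s, a)]$ stated above.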
Given some experience replay buffer D (collected by navigating the same environment, but with unknown and potentially different policies), one could optimize the following loss for a Q-network:

$L_Q(\theta; D) = \mathbb{E}_{(s,a) \sim D}\big[(Q_\theta(s, a) - \hat{B}^\pi Q_\theta(s, a))^2\big]$,   (2)

which fits $Q_\theta(s, a)$ to an estimate of the target value $\hat{B}^\pi[Q_\theta](s, a)$. In practice, the target values can be estimated either via on-policy experiences (Sutton et al., 1999) or via off-policy experiences (Precup, 2000; Munos et al., 2016). Ideally, we can learn $Q^\pi$ by optimizing $L_Q(\theta; D)$ to zero with over-parametrized neural networks. However, instead of minimizing the loss $L_Q(\theta; D)$ directly, prioritization over the sampled replay buffer D could lead to stronger performance. For example, prioritized experience replay (PER; Schaul et al., 2015) is a heuristic that assigns higher weights to transitions with higher TD errors, and is applied successfully in deep Q-learning (Hessel et al., 2017).

3 PRIORITIZED EXPERIENCE REPLAY BASED ON STATIONARY DISTRIBUTIONS.

Assume that d, the distribution the replay buffer D is sampled from, is supported on the entire space $S \times A$, and that we have infinite samples from π (so the Bellman target is unbiased). Let us define the TD-learning objective for Q with prioritization weights $w : S \times A \to \mathbb{R}_+$, under the sampling distribution $d \in P(S \times A)$:

$L_Q(\theta; d, w) = \mathbb{E}_d\big[w(s, a)(Q_\theta(s, a) - B^\pi Q_\theta(s, a))^2\big]$.   (3)

In practice, the expectation in $L_Q(\theta; d, w)$ can be estimated with Monte-Carlo methods, such as importance sampling, rejection sampling, or combinations of multiple methods (such as in PER; Schaul et al., 2015).
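A minimal sketch of the loss in Eq. (2), using a tabular Q and one-sample Bellman targets from buffered transitions (the tuple layout of the toy buffer is an assumption for illustration).

```python
import numpy as np

def td_loss(q_table, batch, gamma=0.99):
    """Mean squared TD error, Eq. (2), for a tabular Q and a sampled
    batch of (s, a, r, s_next, a_next) transitions from the buffer D."""
    err = 0.0
    for s, a, r, s_next, a_next in batch:
        target = r + gamma * q_table[s_next, a_next]   # one-sample estimate of B^pi Q(s, a)
        err += (q_table[s, a] - target) ** 2
    return err / len(batch)

q = np.zeros((4, 2))
buffer = [(0, 1, 1.0, 2, 0), (2, 0, 0.0, 3, 1), (1, 1, -1.0, 0, 0)]
loss = td_loss(q, buffer)   # with Q == 0 each target reduces to r, so loss = mean(r^2) = 2/3
```

With a function approximator, `q_table[s, a]` becomes a network evaluation and the squared error is minimized by gradient descent on θ, with the target typically held fixed (a target network).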
Without loss of generality, we can treat the problem as optimizing the mean squared TD error under some priority distribution $d^w \propto d \cdot w$, since

$\arg\min_\theta L_Q(\theta; d, w) = \arg\min_\theta L_Q(\theta; d^w)$,   (4)

so one could treat prioritized experience replay for TD learning as selecting a favorable priority distribution $d^w$ (under which the $L_Q$ loss is computed) in order to improve some notion of performance. In this paper, we propose to use as priority distribution $d^w = d^\pi$, where $d^\pi$ is the stationary distribution of state-action pairs under the current policy π. This reflects the intuition that TD errors in high-frequency state-action pairs are more problematic than in low-frequency ones, as they will negatively impact policy updates more severely. In the following subsection, we argue the importance of choosing $d^\pi$ from the perspective of maintaining desirable contraction properties of the Bellman operators under more general norms. If we consider Euclidean norms weighted under some distribution $d^w \in P(S \times A)$, the usual γ-contraction argument for Bellman operators holds only for $d^w = d^\pi$, and not for other distributions.
The paper proposes a generally applicable modification to experience sampling in the context of actor-critic algorithms that use a Q function as a critic. The modification is called "Likelihood-free Importance Weights" (LFIW). The authors describe the approach in Appendix A in the form of pseudocode. Compared to a generic actor-critic algorithm, the changes include keeping two replay buffers ("fast" and "slow") and an additional re-weighting function w, which in turn is used in the update of the Q function. The paper includes a thorough performance comparison on MuJoCo and the DM Control Suite.
This paper presents an experience replay approach, as applied to deep RL methods, that uses a density ratio between on-policy and off-policy experiences as the prioritization weights. The objective is to find appropriate bias-variance trade-offs for importance sampling from the replay buffer. In particular, there is a bias issue from replay experiences of other policies, and a variance issue from the scarce recent on-policy experiences.
Context-Agnostic Learning Using Synthetic Data
1 INTRODUCTION . Despite recent advances in deep learning , one central challenge is the large amount of labelled training data required to achieve state-of-the-art performance . Procuring such volumes of high quality , reliably annotated data can be costly or even close to impossible ( e.g. , obtaining data to train an autonomous navigation system for a lunar probe ) . Additional hurdles include hidden biases in large datasets ( Tommasi et al. , 2017 ) and maliciously perturbed training data ( Biggio et al. , 2012 ) . Synthetically generated data has seen growing adoption in response to these problems , since the marginal cost of producing new training data is generally very low , and one has full control over the generation process . This is particularly true for applications with a physical component , such as autonomous navigation ( Gaidon et al. , 2016 ) or robotics ( Todorov et al. , 2012 ) . However , training with purely synthetic data suffers from the so-called “ reality gap ” , whereby good performance on synthetic data does not necessarily yield good performance in the real world ( Jakobi et al. , 1995 ) . In particular , the difficulty of generating realistic training images scales not just with the objects of interest , but also the real-world contexts in which the learned model is expected to operate . This work begins with the simple observation that , for many classification tasks , the label of an input is determined entirely by the object ; however , this additional structure is discarded by current synthetic data pipelines . Our goal is to leverage this decomposition to develop more efficient methods for the related problems of generating training data and learning from a synthetic domain . Our contributions are two-fold : first , we formally introduce the setting of context-agnostic learning , where the input space is decomposed into object and context spaces , and the labels are independent of contexts when conditioned on the objects . 
Second , we propose an algorithm to efficiently train a classifier in the context-agnostic setting , which relies on the ability to sample from the object and context spaces independently . We apply our methods to train deep neural networks for real-world image classification using only a single synthetic example of each class , obtaining performance comparable to existing methods for domain adaptation and few-shot learning while using substantially less data . Our results show that it is possible to train classifiers in the absence of any contextual training data that nonetheless generalize to real world domains . 2 RELATED WORK . Domain shift refers to the problem that occurs when the training set ( source domain ) and test set ( target domain ) are drawn from different distributions . In this setting , a classifier which performs well on the source domain may not generalize well in the target domain . A standard method for addressing this challenge is domain adaptation , which leverages a small amount of data from the target domain to adapt a function that is learned over the source domain ( Blitzer et al. , 2006 ) . In the context of learning from synthetic data , the domain shift that occurs between synthetic and real world data is known as the reality gap ( Jakobi et al. , 1995 ) . State-of-the-art rendering engines , such as those used for video games , can help narrow this gap by generating photorealistic data for training ( Dosovitskiy et al. , 2017 ; Johnson-Roberson et al. , 2016 ; Qiu and Yuille , 2016 ) . Another technique is using domain randomization to generate the source domain with more variability than is expected in the target domain ( e.g. , extreme lighting conditions and camera angles ) , so as to make real images appear as just another variant ( Tobin et al. , 2017 ; Tremblay et al. , 2018 ) ; in particular , Torres et al . 
( 2019 ) apply domain randomization to traffic sign detection and find that arbitrary natural images suffice for the task . Another body of work exploits generative adversarial networks ( Goodfellow et al. , 2014a ) to generate synthetic domains ( Hoffman et al. , 2017 ; Liu et al. , 2017 ; Shrivastava et al. , 2016 ; Taigman et al. , 2016 ; Tzeng et al. , 2017 ) . Finally , several works have explored using synthetic data for natural image text recognition ( Gupta et al. , 2016 ; Jaderberg et al. , 2014 ) . These works use an approach that is roughly analogous to our baseline models , and test their techniques on the target domain of street signs rather than handwritten characters ( as we do ) . A different paradigm for the low-data regime is few-shot learning . In contrast to domain adaptation , few-shot learning operates under the assumption that the target and source distributions are the same , but the ability to sample certain classes is limited in the source domain . Early approaches emphasized capturing knowledge in a Bayesian framework ( Fe-Fei et al. , 2003 ) , which was later formulated as Bayesian program learning ( Lake et al. , 2015 ) . Another approach based on metric learning is to find a nonlinear embedding for objects where closeness in the geometry of the embedding generalizes to unseen classes ( Koch , 2015 ; Snell et al. , 2017 ; Sung et al. , 2018 ; Vinyals et al. , 2016 ) . Meta-learning approaches aim to extract higher level concepts which can be applied to learn new classes from a few examples ( Finn et al. , 2017 ; Munkhdalai and Yu , 2017 ; Nichol et al. , 2018 ; Ravi and Larochelle , 2016 ) . A conceptually-related method that leverages synthetic training data is learning how to generate new data from a few examples of unseen classes ; in contrast to our work , however , these methods still require a large number of samples to learn the synthesizer ( Schwartz et al. , 2018 ; Zhang et al. , 2019 ) . 
Finally, some works combine domain adaptation with few-shot learning to learn under domain shift and limited samples (Motiian et al., 2017). The main characteristic that differentiates our work from these approaches is that we are interested in learning classifiers that are context-agnostic, i.e., that do not rely on background signals. As such, while we find our approach is applicable to many of the same tasks as the aforementioned works, our theoretical setting and objectives differ significantly. From a practical perspective, we demonstrate our techniques when the entire training set consists solely of a single synthetic image of each class, though our techniques can certainly be applied when more data is available; however, we do not expect the reverse to hold for domain adaptation or few-shot learning in our setting. Indeed, we consider this work to be complementary in that we are concerned with exploiting the additional structure that is inherent in certain source domains, while the goal of domain adaptation and few-shot learning is to achieve good performance under various downstream domain-shift assumptions. 3 SETTING. The standard supervised learning setting consists of an input space X, an output space Y, and a hypothesis space H of functions mapping X to Y. A domain P_D is a probability distribution over (X, Y). Given a target domain P_T and a loss function ℓ, the goal is to learn a classifier h ∈ H that minimizes the risk, i.e., the expected loss R_{P_T}(h) := E_{P_T}[ℓ(h(x), y)]. The training procedure consists of n samples (x_1, y_1), ..., (x_n, y_n) from a source domain P_S. A standard approach is empirical risk minimization, which takes the classifier that minimizes R_emp(h) = (1/n) Σ_i ℓ(h(x_i), y_i); if P_S is close to P_T, then with enough samples, such a classifier also achieves low risk in the target domain. 3.1 CONTEXT-AGNOSTIC LEARNING.
In general , we can frame the goal of classification as learning to extract reliable signals for the label y from points x ∈ X . This task is often complicated by the presence of noise or other spurious signals . However , for input spaces generated by physical processes , such signals are generally produced by distinct physical entities and can thus be thought of as independent signals that become mixed via the observation process . We aim to capture this additional structure in our setting . Concretely , we have an object space O , a context space C , and an observation function γ on O × C. The input space X is defined as the image of γ : O × C → X . We will assume that points in O are associated with a unique label in Y , and require that γ preserves this property when passing to X . Note that this setting can be easily generalized to a case when the image of γ is a subdomain of X . In this work , we will consider the special case when X ⊆ C. Conceptually , the context space is an “ ambient space ” containing not only valid inputs , but also random noise or irrelevant classes ; the input space is a subset of the context space for which there exists a well-defined label . For example , in our experiments we explore such a decomposition for the task of traffic sign recognition , where the object space O consists of traffic signs viewed from different angles , the context space C is unconstrained pixel space , and the input space X is the set of images that contain a traffic sign . Recall that the standard objective of learning is to find a good classifier for an unknown subdomain XPT ⊆ X . We consider instead the task of learning a classifier on the entire input space X . To sample from X we are given oracle access to the observation function and draw ( labelled ) samples from O and C independently . Clearly , if this problem is realizable , i.e. 
, there exists h∗ ∈ H for which R_X(h∗) = 0, then we do not even need to know the target domain P_T, since X_{P_T} ⊆ X implies that R_X(h∗) = 0 ⟹ R_{P_T}(h∗) = 0. Assuming access to X through γ, we can learn h∗ simply by taking the number of samples to infinity. Unfortunately, learning a classifier on X generally requires many more samples than learning a classifier on X_{P_T}. Thus we aim to learn h∗ using as few samples as possible. Our new goal will be to learn a classifier over X which depends only on signals from O; more precisely, we have the following definitions: Definition 3.1. A function f on X is context-agnostic if Pr[f ◦ γ(o, c) = x] = Pr[f ◦ γ(o, c′) = x] for all c, c′ ∈ C, o ∈ O, x ∈ Im(f). Definition 3.2. Given a context-agnostic label function y∗, the objective of context-agnostic learning is to find h ∈ H such that h achieves the lowest risk of all context-agnostic classifiers. The hope is that, since y∗ is context-agnostic, we can learn y∗ through the lower-dimensional structure of O using fewer samples. Note, however, that while we only need max(|O|, |C|) samples to observe every object and context once, we need |O| · |C| samples to observe every object in every context. Hence the main challenge when the number of samples is low will be avoiding spurious signals, i.e., statistical correlations between contexts and objects (and by extension, labels) which are artifacts of the sampling process and do not generalize outside the training set. We conclude with some high-level remarks about this setting. First, note that if the problem is realizable, then the lowest-risk classifier is also context-agnostic. Second, we recover the standard supervised setting for the trivial context space C = ∅.
Conversely, classification remains well-defined even in the trivial object space O = {y_i}, the set of classes; however, this pushes all the complexity to the observation function γ, which may be hard to define or intractable to compute. Finally, we do not preclude the existence of useful signals originating from the context for certain domains. For instance, a great deal of information can often be gleaned from the backgrounds of photos; e.g., stop signs are more often found in cities than on highways. Our theoretical setting avoids this issue by assuming realizability and uniqueness of labels; more practically, we argue that a "good" classifier should nonetheless recognize stop signs on the highway, and our experimental results provide evidence that over-reliance on such background signals leads to brittle classifiers.
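To make the object/context decomposition concrete, here is a minimal sketch of an observation function γ that pastes an object patch into a context image. This construction is ours for illustration only (the paper's γ for traffic signs would also involve viewpoint, lighting, and rendering); the function name `observe` and all shapes are assumptions.

```python
import numpy as np

def observe(obj, ctx, top, left):
    """A toy observation function gamma: overlay an object patch onto a
    context image at a given position (names and mechanics are ours)."""
    x = ctx.copy()
    h, w = obj.shape[:2]
    x[top:top + h, left:left + w] = obj
    return x

# sample from the object and context spaces independently, then mix via gamma
rng = np.random.default_rng(0)
obj = np.full((8, 8, 3), 255, dtype=np.uint8)             # a white "sign" patch (object space O)
ctx = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)   # unconstrained pixels (context space C)
x = observe(obj, ctx, top=4, left=10)                     # a point in the input space X
assert (x[4:12, 10:18] == 255).all()                      # the label-determining object survives
```

Because the label depends only on `obj`, any number of contexts can be paired with the same object at negligible marginal cost, which is exactly the structure the sampling counts max(|O|, |C|) versus |O| · |C| refer to.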
This paper proposes a context-agnostic learning approach that combines an object region with a context image (e.g., a background image) to generate synthetic input images and train the model to be context-independent. The method is made more efficient by including this generation process in the training loop, compared to an exhaustive sampling method that randomly selects from the set of all combinations of object regions and context images. Applied to traffic sign recognition and character recognition, it improves performance in a setting where the model is trained on synthetic data and evaluated on real-world datasets.
SP:07440014f05a06c121a4e56f6af6e31b2190dc70
The paper defines the task of context-agnostic learning and proposes an algorithm to solve it, assuming the ability to sample objects and contexts independently. The authors decompose the factors contributing to the risk into two components: context bias and object error. Based on this interpretation, an algorithm is designed to 'greedily correct bias' while employing adversarial (robustness) training for 'local refinement'. The method achieves high accuracy on two synthetic visual tasks, digit and traffic sign classification, when a model is trained using one sample per class from the source domain and tested on an unseen target domain.
Compressing gradients in distributed SGD by exploiting their temporal correlation
1 INTRODUCTION. Distributed optimization has become the norm for training machine learning models on large datasets. With the need to train bigger models on ever-growing datasets, scalability of distributed optimization has become a key focus in the research community. While an obvious solution to growing dataset size is to increase the number of workers, the communication among workers has proven to be a bottleneck. For popular benchmark models such as AlexNet, ResNet, and BERT, communication can account for a significant portion of the overall training time (Alistarh et al., 2017; Seide et al., 2014; Lin et al., 2018). The BERT ("Bidirectional Encoder Representations from Transformers") architecture for language models (Devlin et al., 2018) comprises about 340 million parameters. If a 32-bit floating-point representation is used, one gradient update from a worker amounts to communicating around 1.3 GB (340×10^6 parameters × 32 bits per parameter × 2^{−33} gigabytes per bit ≈ 1.3 GB). Frequently communicating such large payloads can easily overwhelm the network, resulting in prolonged training times. In addition, large payloads may increase other forms of costs in distributed optimization. Novel approaches such as federated learning employ mobile devices as worker nodes. Exchanging information with mobile devices is heavily constrained due to communication bandwidth and budget limitations. Therefore, communication remains an important bottleneck in distributed optimization, and reducing communication is of utmost importance. Gradient compression alleviates the communication bottleneck. The idea is to apply a compression scheme to gradients before sending them over the network. There has been an increasing amount of literature on gradient compression within the last few years (Seide et al., 2014; Aji & Heafield, 2017; Alistarh et al., 2017; Wen et al., 2017; Wangni et al., 2018; Wu et al., 2018; Lin et al., 2018; Wang et al.
, 2018). Such compression schemes have been demonstrated to work well with distributed stochastic gradient descent (SGD) and its variants. However, SGD with arbitrary compression schemes may not converge. Karimireddy et al. (2019) give one example of non-convergence. The recently proposed error-feedback based algorithms (Stich et al., 2018; Karimireddy et al., 2019) circumvent the convergence issue. Error-feedback methods accumulate the compression error and feed it back to the input of the compression scheme so that the error gets transmitted over subsequent iterations. The dist-EF-SGD algorithm proposed by Zheng et al. (2019) applies error-feedback to two-way compression, in which both worker-to-master and master-to-worker communications are compressed. Theoretical guarantees provided by dist-EF-SGD are valid for all compression schemes that fall under the definition of 'δ-approximate compressors', also referred to as δ-compressors. The authors prove that error-feedback with two-way compression asymptotically achieves the O(1/√T) convergence rate of SGD. However, the analysis by Zheng et al. (2019) suggests that dist-EF-SGD converges slower than SGD by a constant factor. Our contributions in this paper are as follows. We propose SignXOR, a novel compression scheme that exploits the temporal correlation of gradients. We prove that SignXOR is a δ-compressor, and we provide convergence guarantees for SignXOR by employing dist-EF-SGD. We strengthen the convergence bound of Zheng et al. (2019) to show that dist-EF-SGD asymptotically converges at the same O(1/√T) rate as SGD. Consequently, we show that the proposed method asymptotically achieves the SGD convergence rate. We empirically validate the proposed method on the CIFAR100 and ImageNet datasets and demonstrate that the ratio between the total communication budgets of SignXOR and Scaled-sign is less than 50%.
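The back-of-the-envelope payload estimate from the introduction (340M parameters at 32 bits each) can be reproduced directly:

```python
params = 340_000_000          # approximate BERT parameter count
bits = params * 32            # 32-bit floating-point representation
gigabytes = bits * 2**-33     # 1 GB = 2^30 bytes = 2^33 bits
print(f"{gigabytes:.2f} GB")  # roughly 1.3 GB per full gradient exchange
```

A sign-based scheme such as Scaled-sign would shrink this by nearly a factor of 32 (one bit per parameter plus a scaling constant), which is why the compression ratios discussed below matter in practice.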
Notation: For x ∈ R^d, x[j] denotes the j-th entry of x, ‖x‖₁ denotes the ℓ1-norm, and ‖x‖ denotes the ℓ2-norm. For vector inputs, the sgn(·) function outputs the sign of the input element-wise. The index set {1, ..., n} is denoted by [n], and ⊙ denotes element-wise multiplication. 2 RELATED WORK. The most common gradient compression schemes can be categorized into those based on sparsification and those based on quantization. Methods based on sparsification, such as Top-k, Rand-k (Stich et al., 2018; Lin et al., 2018) and Spectral-ATOMO (Wang et al., 2018), preserve only the most significant gradient elements, effectively reducing the quantity of information-carrying gradient components. On the other hand, methods based on quantization, such as QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017) and SignSGD (Bernstein et al., 2018), reduce the overall floating-point precision of the gradient. Therefore, these two classes of methods can respectively be thought of as approaches that reduce the quantity versus the quality of the gradient. One can think of this in analogy to image compression. For example, JPEG image compression, which is based on the discrete cosine transform, determines both which transform coefficients to store (the quantity) and at what level of resolution to store those coefficients (the quality). Sign-based compression schemes such as Scaled-sign, SignSGD and Signum (Bernstein et al., 2018) sit at the far end of quantization-based algorithms. Such schemes quantize real values to only two levels, +1 and −1. For example, the compressing function of Scaled-sign takes in a vector x ∈ R^d, and the decompressing function outputs the vector (‖x‖₁/d) sgn(x). This means that the compressed representation needs to store only the sign of each entry x[j], along with the scaling constant ‖x‖₁/d. In practice one can avoid the 'zero' output of sgn by mapping it to +1 or −1.
This allows the two outcomes +1 and −1 to be represented using one bit per entry, making the size of the compressed representation d + 32 bits in total (assuming 32-bit single-precision representation of the scaling constant). As per Shannon's source coding theorem (MacKay, 2003, p. 81), the sequence of +1 and −1 values can be further compressed without any information loss if the probability of encountering +1 differs from that of −1. However, in our experiments on Scaled-sign compression we observe that both outputs are equally likely across all iterations. Any lossy gradient compression scheme introduces noise, also known as distortion, in addition to the measurement noise that is already present in the stochastic gradients computed by the workers. It is reasonable to expect that the additional compression error hurts the convergence rate of the algorithm. However, it has been empirically observed that significant compression ratios can be achieved before observing any impact on convergence (Seide et al., 2014; Alistarh et al., 2017). One can achieve even greater compression while keeping the convergence rate nearly the same by employing error-feedback (Stich et al., 2018; Karimireddy et al., 2019; Zheng et al., 2019). Algorithms based on error-feedback accumulate the compression error from past iterations and add it to the input of the compressing function. This allows all of the gradient information to get transmitted over a sequence of iterations, albeit with a delay. For smooth functions the gradient does not change considerably; therefore, the delay does not significantly impact the rate of convergence. The dist-EF-SGD algorithm proposed by Zheng et al. (2019) is based on error-feedback and offers two-way compression under the master-worker topology. The dist-EF-SGD algorithm provides convergence guarantees for δ-compressors.
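The Scaled-sign representation just described (d sign bits plus one 32-bit scaling constant) can be sketched in a few lines. This is our illustrative implementation, not the authors' code; it also checks numerically the 1/d-approximate-compressor property of Scaled-sign, with the 'zero' output of sgn mapped to +1 as the text suggests.

```python
import numpy as np

def scaled_sign_encode(x):
    # payload: d sign bits plus one 32-bit scaling constant ||x||_1 / d
    signs = x >= 0                                  # sgn with 0 mapped to +1
    scale = np.float32(np.abs(x).sum() / x.size)
    return np.packbits(signs), scale, x.size

def scaled_sign_decode(bits, scale, d):
    signs = np.unpackbits(bits)[:d].astype(np.float64) * 2 - 1  # bits -> {-1, +1}
    return scale * signs

rng = np.random.default_rng(0)
x = rng.normal(size=1024)
bits, scale, d = scaled_sign_encode(x)
x_hat = scaled_sign_decode(bits, scale, d)

assert bits.size * 8 == d                           # d bits, vs 32*d uncompressed
# Scaled-sign satisfies the delta-compressor bound with delta = 1/d:
# ||C(x) - x||^2 <= (1 - 1/d) ||x||^2
assert np.linalg.norm(x_hat - x) ** 2 <= (1 - 1 / d) * np.linalg.norm(x) ** 2
```

The bound holds because ‖C(x) − x‖² = ‖x‖² − ‖x‖₁²/d and ‖x‖₁ ≥ ‖x‖ for every x.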
An operator C : R^d → R^d is a δ-compressor if, for all x ∈ R^d, ‖C(x) − x‖² ≤ (1 − δ)‖x‖² for some δ ∈ (0, 1]. Here δ is a measure of the distortion due to applying C. A good compressor will have δ close to 1 and a large compression ratio. One can show that Scaled-sign is a 1/d-approximate compressor. The Scaled-sign compression scheme with dist-EF-SGD offers convergence guarantees under standard assumptions, even though Scaled-sign without error-feedback may not converge (Karimireddy et al., 2019). Zheng et al. (2019) generalize the dist-EF-SGD algorithm to include blockwise compression, in which the gradient is partitioned into blocks that are compressed separately. A natural partitioning method is to take as blocks the elements of a deep neural network such as tensors, matrices and vectors. Blockwise compression allows better exploitation of redundancy that may be present only within a block. Zheng et al. (2019) empirically demonstrate that Scaled-sign with dist-EF-SGD achieves almost the same performance as SGD with respect to training loss and test accuracy. This indicates that we can allow more distortion in the compression before observing a significant impact on training performance. The proposed SignXOR compression is based on allowing additional distortion in the interest of achieving a higher compression ratio. 3 PROPOSED ALGORITHM: SIGNXOR. Our proposal is motivated by the concept of delta encoding. Delta encoding refers to techniques that store data as the difference between successive samples, rather than directly storing the samples themselves (Smith et al., 1997). One often encounters delta encoding in applications such as integer compression and video compression. For example, it is more space-efficient to store the first-order differences of a digitized audio signal than to store the values of the original signal.
First-order differences have smaller magnitudes than the original sequence and can therefore be represented with fewer bits. A similar approach is used in the inter-picture prediction method of high-efficiency video coding (HEVC) (Sze et al., 2014). Inter-picture prediction exploits temporal correlations across video frames by encoding the differences between frames, which requires less storage than storing each video frame in full. Jiang et al. (2018) employ delta encoding in an application more closely related to distributed optimization: at the core of their algorithm, delta encoding is used to compress an increasing sequence of integers. Our SignXOR algorithm applies delta encoding to represent temporal changes in the gradient. In essence, SignXOR maintains a binary vector that indicates whether the sign of each gradient entry is equal (or not equal) to the sign of the corresponding entry in the previous gradient. The equality (or non-equality) can be represented by a binary 1 (or 0). This procedure resembles the binary XOR operation, hence the name SignXOR. We employ a generalized version of the original dist-EF-SGD by Zheng et al. (2019) (the original dist-EF-SGD is specified in Algorithm 2 therein) to provide convergence guarantees for the proposed compression scheme. The generalization makes dist-EF-SGD compatible with SignXOR. We outline the proposed method in Algorithm 1 and Algorithm 2. Generalized dist-EF-SGD: Algorithm 1 presents generalized dist-EF-SGD for a setup with a master and n workers. The three main differences between the generalized and original dist-EF-SGD versions are as follows. First, Algorithm 1 delegates the compression and decompression tasks to two separate functions, encode and decode. Second, Algorithm 1 maintains a vector ḡ_k at all nodes, including the master.
The vector ḡ_k is the average gradient that all workers used to update the parameter vector w_k in the last iteration. Third, the encode and decode functions each take the last gradient ḡ_k as the second argument. This is in contrast to the original dist-EF-SGD algorithm, in which compression is based only on the gradient of the current iteration. Since there are no differences affecting compression performance between the original and generalized dist-EF-SGD algorithms, they enjoy the same theoretical guarantees. The compressing function encode takes in two inputs and outputs G^i_k. This output is the actual payload sent to the master over the communication channel.

Algorithm 1: Generalized dist-EF-SGD compatible with SignXOR
input: initial parameter vector w_0; step sizes {η_0, ..., η_{T−1}}
initialize: let 0 be the all-zeros vector of dimension d; let ḡ_0 ∈ R^d be a vector with entries sampled uniformly from [−1, 1]; on the ith worker store ḡ_0, set e^i_0 to 0, and set the parameter vector to w_0; on the master store ḡ_0 and set e_0 to 0
1: for k ∈ {0, ..., T − 1} do
2:   on the ith worker
3:     compute ĝ^i_k = g^i_k + (1/η_k) e^i_k, where g^i_k is the stochastic gradient at w_k
4:     compute G^i_k = encode(ĝ^i_k, ḡ_k) and ḡ^i_k = decode(G^i_k, ḡ_k)
5:     send G^i_k to the master and update the error e^i_{k+1} = η_k (ĝ^i_k − ḡ^i_k)
6:     receive G_k from the master and compute ḡ_{k+1} = decode(G_k, ḡ_k)
7:     update the parameter vector w_{k+1} = w_k − η_k ḡ_{k+1}
8:   on the master
9:     receive G^i_k and compute ḡ^i_k = decode(G^i_k, ḡ_k) for all i ∈ [n]
10:    compute ĝ_k = (1/n) Σ_{i∈[n]} ḡ^i_k + (1/η_k) e_k
11:    compute G_k = encode(ĝ_k, ḡ_k) and ḡ_{k+1} = decode(G_k, ḡ_k)
12:    broadcast G_k to all workers and update e_{k+1} = η_k (ĝ_k − ḡ_{k+1})

In addition to G^i_k, the ith worker also computes ḡ^i_k, which is what the master will obtain by decompressing G^i_k. The vector ḡ^i_k is used to update e^i_{k+1}, the compression error fed back in the next iteration. The master collects G^i_k from all workers and decompresses each to obtain ḡ^i_k.
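The control flow of Algorithm 1 can be sketched as a single-process simulation (our illustrative toy in NumPy, not a distributed implementation; with the identity compressor the loop reduces to plain averaged SGD):

```python
import numpy as np

def run_dist_ef_sgd(grad_fns, w0, steps, lr, encode, decode):
    """Toy single-process simulation of the generalized dist-EF-SGD loop.

    grad_fns: one (stochastic) gradient function per worker.
    encode/decode take the last decoded average gradient g_bar as the
    second argument, mirroring the two-argument API of Algorithm 1.
    """
    n, d = len(grad_fns), w0.size
    w = w0.astype(float).copy()
    g_bar = np.random.default_rng(0).uniform(-1, 1, d)  # shared g_bar_0
    e_worker = [np.zeros(d) for _ in range(n)]          # worker feedback errors
    e_master = np.zeros(d)                              # master feedback error
    for _ in range(steps):
        decoded = []
        for i in range(n):
            g_hat = grad_fns[i](w) + e_worker[i] / lr   # add fed-back error
            g_dec = decode(encode(g_hat, g_bar), g_bar)
            e_worker[i] = lr * (g_hat - g_dec)          # store new error
            decoded.append(g_dec)
        g_hat_m = np.mean(decoded, axis=0) + e_master / lr
        g_next = decode(encode(g_hat_m, g_bar), g_bar)
        e_master = lr * (g_hat_m - g_next)
        w -= lr * g_next                                # parameter update
        g_bar = g_next                                  # next iteration's g_bar
    return w

# With the identity compressor the loop reduces to plain distributed SGD.
identity = (lambda x, y: x, lambda G, y: G)
grad = lambda w: 2.0 * w                                # gradient of ||w||^2
w_final = run_dist_ef_sgd([grad, grad], np.ones(3), 50, 0.1, *identity)
```

Minimizing ‖w‖² with two workers and the identity compressor contracts w by a factor 1 − 2·lr per step, so w_final is driven close to zero.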
All workers receive the master's broadcast G_k and feed it, along with ḡ_k, to the decode function to obtain ḡ_{k+1}. Note that in the kth iteration all nodes use the same ḡ_k vector as the second argument of the encode and decode functions. The encode and decode functions corresponding to the SignXOR compression scheme are specified in Algorithm 2. For ease of explanation we consider the specific case in which the master compresses ĝ_k to obtain G_k, and the workers decompress G_k to obtain ḡ_{k+1} as an approximation to ĝ_k.

Algorithm 2: SignXOR compression and decompression
input: hyperparameter 0 ≤ α < (1/4)(1 − √(1 − 1/d))² < 1
1: function encode(x ∈ R^d, y ∈ R^d):
2:   compute r, the fraction of +1's in sgn(x)
3:   compute q, the fraction of elements in x such that sgn(x[j]) = sgn(y[j])
4:   initialize the binary vector b ∈ {0, 1}^d to all zeros
5:   for all j ∈ [d], if sgn(x[j]) = sgn(y[j]) set b[j] = 1 with probability 1 − α
6:   compute p, the fraction of 1's in b
7:   compress b with a lossless scheme and compute the scalar a = ‖x‖_1 / d
8:   output G = {a, compressed representation of b}
9: function decode(G = {a ∈ R, compressed representation of a vector b ∈ {0, 1}^d}, y ∈ R^d):
10:  decompress G to obtain a and b
11:  output a sgn(y) ⊙ (2b − 1)

Compressing function: We consider the case in which the master calls encode with arguments x = ĝ_k and y = ḡ_k. Note that the scalars r, q and p are not used anywhere in Algorithm 2; they help us describe the algorithm and also become useful in our theoretical analysis. The output of the encode function is random when α ≠ 0, and deterministic when α = 0. Let us first consider α = 0, in which case b[j] = 1 if and only if sgn(ĝ_k[j]) = sgn(ḡ_k[j]). This implies that p is the fraction of entries in ĝ_k and ḡ_k that have the same sign, a measure of the (positive or negative) correlation between the two vectors.
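Algorithm 2 can be sketched as follows (our sketch: the function names are ours, and zlib stands in for the lossless compressor, which the pseudocode leaves unspecified):

```python
import zlib
import numpy as np

def sgn(v):
    return np.where(v >= 0, 1.0, -1.0)   # map sgn(0) to +1

def signxor_encode(x, y, alpha, rng):
    """Sketch of Algorithm 2's encode; zlib stands in for the lossless stage."""
    agree = sgn(x) == sgn(y)
    b = agree & (rng.random(x.size) >= alpha)  # keep agreement w.p. 1 - alpha
    a = np.abs(x).sum() / x.size               # scaling constant ||x||_1 / d
    payload = zlib.compress(np.packbits(b).tobytes())
    return a, payload, x.size

def signxor_decode(G, y):
    a, payload, d = G
    bits = np.unpackbits(np.frombuffer(zlib.decompress(payload), dtype=np.uint8))
    b = bits[:d].astype(np.float64)
    return a * sgn(y) * (2.0 * b - 1.0)        # a * sgn(y) (elementwise) (2b - 1)

rng = np.random.default_rng(0)
y = rng.standard_normal(8)
x = y.copy()                                   # perfectly sign-correlated input
G = signxor_encode(x, y, alpha=0.0, rng=rng)
xhat = signxor_decode(G, y)                    # equals the Scaled-sign output here
```

With α = 0 and perfectly correlated signs, b is all ones and the decoder returns a·sgn(y) = a·sgn(x), exactly the Scaled-sign reconstruction.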
The core idea in the proposed compression scheme is to compress the binary vector b using a lossless compression scheme. Shannon's source coding theorem (MacKay, 2003, p. 81) states that d i.i.d. random variables, each with entropy H(p) = −p log₂ p − (1 − p) log₂(1 − p), can be compressed into approximately dH(p) bits with negligible risk of information loss as d → ∞. In our case, while the length of the parameter vector d is well over a million for models of practical interest, the entries of b are not necessarily i.i.d. However, we demonstrate in Section 5 that readily available lossless compression algorithms can compress b to very close to dH(p) bits, the Shannon limit. Note that the binary entropy H(p) is symmetric around p = 0.5 and satisfies 0 ≤ H(p) ≤ 1 with H(0.5) = 1. When p ≈ 0.5 the size of the compressed representation gets close to d bits, which is the same as that offered by Scaled-sign. Further compression of b is possible only if p is away from 0.5. In the experiments presented in Section 5.2 we observe that when α = 0, p remains close to but slightly lower than 0.5, which implies a low correlation between ĝ_k and ḡ_k. We remedy this issue by making α > 0. Next we explain how making α > 0 yields compression gains and, at the same time, induces correlation between ĝ_k and ḡ_k. First, note that increasing α introduces distortion to b, driving p away from 0.5 and towards 0. A lower p decreases the entropy of b, yielding a higher compression ratio from the lossless compressor. Since we start with a p slightly below 0.5, driving p towards 1 would first increase H(p) before yielding any compression gains; for this reason, we design the encoder to push p towards zero. In summary, increasing α offers rate savings in the current iteration by adding distortion to the current gradient. Second, we explain how the added distortion induces correlation between ĝ_k and ḡ_k.
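The rate estimates above follow directly from the binary entropy function; a quick check (NumPy assumed) confirms the symmetry around p = 0.5 and the gain from pushing p towards zero:

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

d = 1_000_000  # entries in b
limit_bits = {p: d * binary_entropy(p) for p in (0.5, 0.3, 0.1)}
# p = 0.5 gives d bits (no gain over Scaled-sign); lower p shrinks the payload
```

At p = 0.1, for example, the Shannon limit is roughly 0.47 bits per entry, less than half the cost of Scaled-sign.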
Note that q is a measure of the correlation between the two vectors sgn(ĝ_k) and sgn(ḡ_k), measured prior to adding distortion via α. We emphasize that q does not depend on the errors introduced in the current iteration; rather, q depends on α only through the past iterations. In the experimental results presented in Section 5.2 we observe that increasing α also decreases q. This means that in a given iteration the inputs to the encoder are already correlated. Therefore, the vector b that encodes the equality (or non-equality) of sgn(ĝ_k) and sgn(ḡ_k) can be compressed even without adding distortion in the current iteration. The underlying mechanism that causes this temporal correlation in our encoder is error-feedback. Recall that the idea behind error-feedback is to keep track of the compression error of the last iteration and add it back to the input of the encoder in the current iteration (with a correction for the step size). Specifically, ĝ_k includes the compression error incurred in ḡ_k. This feedback system induces the temporal relation between the two vectors that we observe through q. In summary, our compression mechanism interacts with the error-feedback mechanism to increase temporal correlation, which we then exploit to realize further compression gains. The suggested upper bound for α ensures theoretical convergence of the SignXOR algorithm. We show in Section 5 that in practice α can be increased considerably beyond the suggested upper bound before seeing an impact on training performance. Decompressing function: Let us now consider the case in which a worker calls decode with arguments G = G_k and y = ḡ_k. The first argument G_k contains the scalar a and the compressed representation of the binary vector b, which can be recovered exactly since the compression of b is lossless.
Noting that b is computed with ḡ_k as the second argument to the encode function, the output of the decoder is obtained by inverting the sign of ḡ_k[j] whenever b[j] is 0, and by scaling the result by a. One can compactly express the output of this operation as a sgn(ḡ_k) ⊙ (2b − 1). Remarks: Note that we recover the Scaled-sign compression scheme by setting α = 0 in Algorithm 2. In comparison to SignXOR, the compressed representation of Scaled-sign stores the sign of each entry in ĝ_k together with the scaling constant a = ‖ĝ_k‖_1 / d. We demonstrate in Section 5.2 that r, the fraction of +1's in sgn(ĝ_k), is approximately 0.5 across all iterations. This means that the sequence of signs cannot be compressed further, so the encoded representation of Scaled-sign requires at least d bits. Algorithm 1 can easily be extended to accommodate blockwise compression, as explained by Zheng et al. (2019): the gradient ĝ_k is partitioned into blocks, and the blocks are processed separately by the encode and decode functions. In our numerical experiments we employ the blockwise extension of Algorithm 1. 4 THEORETICAL GUARANTEES In this section we summarize our theoretical results on Algorithm 1 and Algorithm 2. First, we prove that the SignXOR compression scheme presented in Algorithm 2 is a δ-compressor. Second, we show that for any δ-compressor the generalized dist-EF-SGD scheme in Algorithm 1 converges at the same O(1/√T) rate as SGD. Putting these two together yields the desired result. SignXOR is a δ-compressor: We consider the general form of the encode and decode functions, i.e., with inputs (x, y) for encode and inputs (G, y) for decode. Let us define the operator C^y_α : R^d → R^d with C^y_α(x) = a sgn(y) ⊙ (2b − 1), where a = ‖x‖_1 / d and the binary vector b is such that b[j] = 1 with probability 1 − α if sgn(x[j]) = sgn(y[j]), and b[j] = 0 otherwise.
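To see why the α = 0 case collapses to Scaled-sign for any reference vector y, the identity C^y_0(x) = (‖x‖_1/d) sgn(x) can be checked numerically (NumPy assumed; a sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
x = rng.standard_normal(d)
y = rng.standard_normal(d)            # arbitrary reference vector

sgn = lambda v: np.where(v >= 0, 1.0, -1.0)
a = np.abs(x).sum() / d
b = (sgn(x) == sgn(y)).astype(float)  # alpha = 0: b is deterministic
cx = a * sgn(y) * (2 * b - 1)         # C^y_0(x)

# Wherever the signs disagree, b = 0 flips sgn(y) back to sgn(x),
# so the output is always a*sgn(x), i.e. the Scaled-sign output.
```

Since the α = 0 output equals the Scaled-sign output, it inherits the (1/d)-compressor bound ‖C(x) − x‖² ≤ (1 − 1/d)‖x‖² regardless of y.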
Note that a and b have the same meaning as in Algorithm 2. The operator C^y_α(x) represents decode(encode(x, y), y); therefore, C^y_α is the SignXOR compressor. Since C^y_α is a randomized operator, we show that it is a δ-compressor in expectation, as stated in Theorem 1. Theorem 1. There exists a δ ∈ (0, 1] such that E[‖C^y_α(x) − x‖²] ≤ (1 − δ)‖x‖² for all y ∈ R^d if α < (1/4)(1 − √(1 − 1/d))². The proof is provided in Appendix A.1. Although the suggested upper bound for α is extremely small for large d, we demonstrate in Section 5 that in practice α can be set quite close to 1. Next we discuss the convergence rate of dist-EF-SGD for an arbitrary δ-compressor. Convergence rate of dist-EF-SGD: Zheng et al. (2019) compare the convergence rates of dist-EF-SGD and vanilla SGD in their Corollary 1. As the authors note, although the convergence rates of the two algorithms are the same in O notation, the former is slower by a constant factor. The differences between the bounds for dist-EF-SGD and vanilla SGD in Corollary 1 are as follows. In both cases, for large T the dominant terms are those with √T in the denominator. The ratio between the dominant terms of dist-EF-SGD and vanilla SGD is 1.5, suggesting that the former is always slower than the latter by a factor of 1.5. In our Theorem 2 we strengthen the bound for dist-EF-SGD so that the dominant terms of the two algorithms are the same. This means that dist-EF-SGD achieves the same convergence rate as vanilla SGD in the limit T → ∞. Our proof is for an arbitrary δ-compressor that is not necessarily SignXOR. To this end we consider Algorithm 1 and define C(x) = decode(encode(x, y), y). We assume that C is a δ-compressor for all y ∈ R^d, i.e., E[‖C(x) − x‖²] ≤ (1 − δ)‖x‖² for some δ ∈ (0, 1]. We set up the optimization problem next. Let F : R^d → R be a function that is lower bounded by F∗ and has gradient ∇F.
We consider a distributed optimization setup with a master and n workers. For some w_k ∈ R^d, the ith worker calculates the stochastic gradient g^i_k at w_k. We assume that g^i_k is an unbiased estimate of ∇F(w_k) with bounded variance; specifically, g^i_k satisfies the two properties E[g^i_k | w_k] = ∇F(w_k) and E[‖g^i_k − ∇F(w_k)‖²] ≤ σ². We also assume that F is L-Lipschitz smooth and that the gradient of F is bounded; the latter implies that E[‖g^i_k‖²] ≤ G² for some scalar G. The master and workers generate a sequence {w_0, ..., w_T} as per Algorithm 1. The convergence of this system is summarized in Theorem 2. Theorem 2. For a given w_0 and the step-size schedule η_k = (1/(L√T))(1 − 1/(2T^{1/4})), the convergence of the system outlined in Algorithm 1 after T iterations is given by

E[ min_{k=0,...,T−1} ‖∇F(w_k)‖² ] < ( 2L(F(w_0) − F∗) + σ²/n ) / ( 2√T − 1 ) + ( 2L(F(w_0) − F∗) + (8(1 − δ)G²/δ²)(1 + 1/δ²) ) / ( 2T^{3/4} − T^{1/4} ) + O(1/T).   (1)

We defer the proof of Theorem 2 to Section A.2 in the Appendix. In comparison, the bound for SGD contains only the first term in (1). Note that the last two terms in (1) converge to zero faster than the first term, so dist-EF-SGD asymptotically achieves the same convergence rate as SGD. Since SignXOR is a δ-compressor, we conclude that the proposed algorithm converges asymptotically at the same rate as SGD.
Compressing gradients in distributed SGD by exploiting their temporal correlation
1 INTRODUCTION. Distributed optimization has become the norm for training machine learning models on large datasets. With the need to train bigger models on ever-growing datasets, the scalability of distributed optimization has become a key focus in the research community. While an obvious solution to growing dataset size is to increase the number of workers, the communication among workers has proven to be a bottleneck. For popular benchmark models such as AlexNet, ResNet and BERT, communication can account for a significant portion of the overall training time (Alistarh et al., 2017; Seide et al., 2014; Lin et al., 2018). The BERT ("Bidirectional Encoder Representations from Transformers") architecture for language models (Devlin et al., 2018) comprises about 340 million parameters. If a 32-bit floating-point representation is used, one gradient update from a worker amounts to communicating around 1.3 GB (340 × 10⁶ parameters × 32 bits per parameter × 2⁻³³ gigabytes per bit ≈ 1.3 GB). Frequently communicating such large payloads can easily overwhelm the network, resulting in prolonged training times. In addition, large payloads may increase other forms of cost in distributed optimization. Novel approaches such as federated learning employ mobile devices as worker nodes, and exchanging information with mobile devices is heavily constrained by communication bandwidth and budget limitations. Therefore, communication remains an important bottleneck in distributed optimization, and reducing communication is of utmost importance. Gradient compression alleviates the communication bottleneck. The idea is to apply a compression scheme to gradients before sending them over the network. There has been an increasing amount of literature on gradient compression within the last few years (Seide et al., 2014; Aji & Heafield, 2017; Alistarh et al., 2017; Wen et al., 2017; Wangni et al., 2018; Wu et al., 2018; Lin et al., 2018; Wang et al., 2018). Such compression schemes have been demonstrated to work well with distributed stochastic gradient descent (SGD) and its variants. However, SGD with arbitrary compression schemes may not converge; Karimireddy et al. (2019) give one example of non-convergence. The recently proposed error-feedback based algorithms (Stich et al., 2018; Karimireddy et al., 2019) circumvent the convergence issue. Error-feedback methods accumulate the compression error and feed it back to the input of the compression scheme so that the error gets transmitted over subsequent iterations. The dist-EF-SGD algorithm proposed by Zheng et al. (2019) applies error-feedback to two-way compression, in which both worker-to-master and master-to-worker communications are compressed. The theoretical guarantees of dist-EF-SGD are valid for all compression schemes that fall under the definition of 'δ-approximate compressors', also referred to as δ-compressors. The authors prove that error-feedback with two-way compression asymptotically achieves the O(1/√T) convergence rate of SGD. However, the analysis by Zheng et al. (2019) suggests that dist-EF-SGD converges slower than SGD by a constant factor. Our contributions in this paper are as follows. We propose SignXOR, a novel compression scheme that exploits the temporal correlation of gradients. We prove that SignXOR is a δ-compressor, and we provide convergence guarantees for SignXOR by employing dist-EF-SGD. We strengthen the convergence bound by Zheng et al. (2019) to show that dist-EF-SGD asymptotically converges at the same O(1/√T) rate as SGD. Consequently, the proposed method asymptotically achieves the SGD convergence rate. We empirically validate the proposed method on the CIFAR-100 and ImageNet datasets and demonstrate that the ratio between the total communication budgets of SignXOR and Scaled-sign is less than 50%.
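The back-of-envelope payload calculation from the introduction can be reproduced directly (340 million is the approximate parameter count quoted above):

```python
params = 340_000_000        # approximate BERT parameter count
bits_per_param = 32         # single-precision floats
payload_gb = params * bits_per_param / 8 / 2**30  # bits -> bytes -> gibibytes
# payload_gb is roughly 1.27, i.e. about 1.3 GB per uncompressed update
```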
Notation: For x ∈ R^d, x[j] denotes the jth entry of x, ‖x‖_1 denotes the ℓ1-norm, and ‖x‖ denotes the ℓ2-norm. For vector inputs, the sgn(·) function outputs the sign of the input element-wise. The index set {1, ..., n} is denoted by [n], and ⊙ denotes element-wise multiplication. 2 RELATED WORK. The most common gradient compression schemes can be categorized into those based on sparsification and those based on quantization. Methods based on sparsification, such as Top-k, Rand-k (Stich et al., 2018; Lin et al., 2018) and Spectral-ATOMO (Wang et al., 2018), preserve only the most significant gradient elements, effectively reducing the quantity of information-carrying gradient components. On the other hand, methods based on quantization, such as QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017) and SignSGD (Bernstein et al., 2018), reduce the overall floating-point precision of the gradient. These two classes of methods can therefore be thought of as approaches that reduce the quantity versus the quality of the gradient. One can think of this in analogy to image compression: JPEG compression, which is based on the discrete cosine transform, determines both which transform coefficients to store (the quantity) and at what resolution to store them (the quality). Sign-based compression schemes such as Scaled-sign, SignSGD and Signum (Bernstein et al., 2018) sit at the far end of the quantization-based algorithms. Such schemes quantize real values to only two levels, +1 and −1. For example, the compressing function of Scaled-sign takes in a vector x ∈ R^d, and the decompressing function outputs the vector (‖x‖_1 / d) sgn(x). This means that the compressed representation needs to store only the sign of each entry x[j], along with the scaling constant ‖x‖_1 / d. In practice one can avoid the 'zero' output of sgn by mapping it to +1 or −1.
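Of the sparsification methods mentioned above, Top-k is the simplest to sketch (our illustrative version; a real implementation would also transmit the indices of the surviving entries):

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of k largest |x[j]|
    out[idx] = x[idx]
    return out

g = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
sparse_g = top_k(g, 2)   # only the entries -3.0 and 2.0 survive
```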
This allows the two outcomes +1 and −1 to be represented using one bit per entry , making the size of the compressed representation d + 32 bits in total ( assuming 32-bit single-precision representation of the scaling constant ) . As per Shannon ’ s source coding theorem ( MacKay , 2003 , p. 81 ) , the sequence of +1 and −1 can be further compressed without any information loss if the probability of encountering +1 is different from that for −1 . However , in our experiments on Scaled-sign compression we observe that both outputs are equally likely across all iterations . Any lossy gradient compression scheme introduces noise , also known as distortion , in addition to the measurement noise that is already present in the stochastic gradients computed by the workers . It is reasonable to expect the additional compression error hurts the convergence rate of the algorithm . However , it has been empirically observed that significant compression ratios can be achieved before observing any impact on convergence ( Seide et al. , 2014 ; Alistarh et al. , 2017 ) . One can achieve even greater compression while keeping the convergence rate nearly the same by employing errorfeedback ( Stich et al. , 2018 ; Karimireddy et al. , 2019 ; Zheng et al. , 2019 ) . Algorithms based on error-feedback accumulate the compression error in past iterations and add it to the input of the compressing function . This allows all of the gradient information to get transmitted over a sequence of iterations , albeit with a delay . For smooth functions the gradient does not change considerably , therefore , the delay does not significantly impact the rate of convergence . The dist-EF-SGD algorithm proposed by Zheng et al . ( 2019 ) is based on error-feedback and offers two-way compression under the master-worker topology . The dist-EF-SGD algorithm provides convergence guarantees for δ-compressors . 
An operator C : Rd → Rd is a δ-compressor for all x ∈ Rd if ‖C ( x ) − x‖2 ≤ ( 1− δ ) ‖x‖2 for some δ ∈ ( 0 , 1 ] . The δ is a measure of the distortion due to applying C. A good compressor will have δ close to 1 and a large compression ratio . One can show that Scaled-sign is a 1d -approximate compressor . The Scaled-sign compression scheme with dist-EF-SGD offers convergence guarantees under standard assumptions even though Scaled-sign without error-feedback may not converge ( Karimireddy et al. , 2019 ) . Zheng et al . ( 2019 ) generalize the dist-EF-SGD algorithm to include blockwise compression , in which the gradient is partitioned into blocks that are compressed separately . A natural partitioning method is to consider as blocks the elements of a deep neural network such as tensors , matrices and vectors . Blockwise compression allows better exploitation of redundancy that may be present only within a block . Zheng et al . ( 2019 ) empirically demonstrate that Scaled-sign with dist-EF-SGD achieves almost the same performance of SGD with respect to training loss and test accuracy . This indicates that we can allow more distortion in the compression before observing a significant impact on training performance . The proposed SignXOR compression is based on allowing additional distortion in the interest of achieving a higher compression ratio . 3 PROPOSED ALGORITHM : SIGNXOR . Our proposal is motivated by the concept of delta encoding . Delta encoding refers to techniques that store data as the difference between successive samples , rather than directly storing the samples themselves ( Smith et al. , 1997 ) . One often encounters delta encoding in applications such as integer compression and video compression . For example , it is more space efficient to store the first-order differences of a digitized audio signal than to store the values of the original signal . 
First-order differences have smaller magnitudes compared to the original sequence and , therefore , the differences can be represented by a comparatively smaller number of bits . A similar approach is used in the inter-picture prediction method in high efficiency video coding ( HEVC ) ( Sze et al. , 2014 ) . Interpicture prediction makes use of temporal correlations across video frames encoding the differences between frames . This requires less storage compared to storing each video frame . Jiang et al . ( 2018 ) employ delta encoding in a more related application to distributed optimization . At the core of their algorithm , delta encoding is employed to compress an increasing sequence of integers . Our SignXOR algorithm applies delta encoding to represent temporal changes in the gradient . In essence , SignXOR maintains a binary vector that indicates whether the sign of a gradient entry is equal ( or not equal ) to the sign of the corresponding entry in the previous gradient . The equality ( or non-equality ) can be represented by a binary 1 ( or 0 ) . This procedure resembles the binary XOR operation , hence the name SignXOR . We employ a generalized version of original dist-EF-SGD by Zheng et al . ( 2019 ) ( the original dist-EF-SGD is specified in Algorithm 2 therein ) to provide convergence guarantees for the proposed compression scheme . The generalization is to make dist-EF-SGD compatible with SignXOR . We outline the proposed method in Algorithm 1 and Algorithm 2 . Generalized dist-EF-SGD : Algorithm 1 presents generalized dist-EF-SGD for setup with a master and n workers . The three main differences between the generalized and original dist-EF-SGD versions are as follows . First , Algorithm 1 delegates compression and decompression tasks to two separate functions encode and decode . Second , Algorithm 1 maintains a vector ḡk at all nodes including the master . 
The vector ḡk is the average gradient all workers used to update the parameter vector wk in the last iteration . Third , the encode and decode functions each take the last gradient ḡk as the second argument . This is in contrast to the original dist-EF-SGD algorithm in which compression is based only on the gradient in the current iteration . Since there are no differences related to compression performance between the original and generalized dist-EF-SGD algorithms , they encompass the same theoretical guarantees . The compressing function encode takes in two inputs and outputs Gik . This output is the actual payload sent to master over the communication channel . In addition to Gik , the ith worker also Algorithm 1 : Generalized dist-EF-SGD compatible with SignXOR input : initial parameter vector w0 ; step sizes { η0 , . . . , ηT−1 } initialize : let 0 be the all-zeros vector of dimension d ; let ḡ0 ∈ Rd be a vector with entries sampled uniformly from [ −1 , 1 ] ; on ith worker store ḡ0 ; set ei0 to 0 ; set parameter vector to w0 ; on master store ḡ0 ; set e0 to 0 ; 1 for k ∈ { 0 , . . . , T − 1 } do 2 on ith worker 3 compute ĝik = g i k + 1 ηk eik where g i k is the stochastic gradient at wk ; 4 compute Gik = encode ( ĝik , ḡk ) and ḡik = decode ( Gik , ḡk ) ; 5 send Gik to master and update error eik+1 = ηk ( ĝik − ḡik ) ; 6 receive Gk from master and compute ḡk+1 = decode ( Gk , ḡk ) ; 7 update parameter vector wk+1 = wk − ηkḡk+1 ; 8 on master 9 receive Gik and compute ḡik = decode ( Gik , ḡk ) for all i ∈ [ n ] ; 10 compute ĝk = 1 n ∑ i∈ [ n ] ḡ i k + 1 ηk ek ; 11 compute Gk = encode ( ĝk , ḡk ) and ḡk+1 = decode ( Gk , ḡk ) ; 12 broadcast Gk to all workers and update ek+1 = ηk ( ĝk − ḡk+1 ) ; computes ḡik which is what the master will obtain by decompressing Gik . The vector ḡik is used to update eik+1 , the compression error fed back in the next iteration . The master collects Gik from all workers and decompresses each to obtain ḡik . 
All workers receive the master broadcast Gk , and input it , along with ḡk , to the decode function to decompress and obtain ḡk+1 . Note that in the kth iteration all nodes use the same ḡk vector as the second argument in the encode and decode functions . The encode and decode functions corresponding to SignXOR compression scheme are specified in Algorithm 2 . For the ease of explanation we consider the specific case when the master compresses ĝk to obtain Gk , and the workers decompress Gk to obtain ḡk+1 as an approximation to ĝk . Algorithm 2 : SignXOR compression and decompression input : hyperparameter 0 ≤ α < 14 ( 1− √ 1− 1d ) 2 < 1 ; 1 function encode ( x ∈ Rd , y ∈ Rd ) : 2 compute r , the fraction of +1 ’ s in sgn ( x ) ; 3 compute q , the fraction of elements in x such that sgn ( x [ j ] ) = sgn ( y [ j ] ) ; 4 initialize binary vector b ∈ { 0 , 1 } d to all-zeros ; 5 for all j ∈ [ d ] , if sgn ( x [ j ] ) = sgn ( y [ j ] ) set b [ j ] = 1 with probability 1− α ; 6 compute p , the fraction of 1 ’ s in b ; 7 compress b with a lossless scheme and compute scalar a = ‖x‖1/d ; 8 output G = { a , compressed representation of b } ; 9 function decode ( G = { a ∈ R , compressed representation of a vector b ∈ { 0 , 1 } d } , y ∈ Rd ) : 10 expand and decompress G to obtain a and b ; 11 output a sgn ( y ) ( 2b− 1 ) ; Compressing function : We consider the case when master calls encode with arguments x = ĝk and y = ḡk . Note that the scalars r , q and p are not used anywhere in Algorithm 2 . These scalars help us describe the algorithm and also become useful in our theoretical analysis . The output of encode function is random when α 6= 0 , and deterministic when α = 0 . Let us first consider α = 0 , in which case b [ j ] = 1 if and only if sgn ( ĝk [ j ] ) = sgn ( ḡk [ j ] ) . This implies that p is the fraction of entries in ĝk and ḡk that have the same signs . This is a measure of the ( positive or negative ) correlation between the two vectors . 
The core idea in the proposed compression scheme is to compress the binary vector b using a lossless compression scheme . Shannon ’ s source coding theorem ( MacKay , 2003 , p. 81 ) states that d i.i.d . random variables each with entropy H ( p ) = −p log2 p− ( 1− p ) log2 ( 1− p ) can be compressed into approximately dH ( p ) bits with negligible risk of information loss , as d → ∞ . In our case , while the length of the parameter vector d is well over a million for models of practical interest , the entries of b are not necessarily i.i.d .. However , we demonstrate in Section 5 that there exist readily available lossless compression algorithms that can compress b to very close to dH ( p ) bits , the Shannon limit . Note that binary entropy H ( p ) is symmetric around p = 0.5 , and satisfies 0 ≤ H ( p ) ≤ 1 with H ( 0.5 ) = 1 . When p ≈ 0.5 the size of the compressed representation gets close to d , which is same as that offered by Scaled-sign . Further compression of b is only possible if p is away from 0.5 . In our experiments presented in Section 5.2 we observe that when α = 0 , p remain close to but slightly lower than 0.5 . This implies a low correlation between ĝk and ḡk . We remedy this issue by making α > 0 . Next we explain how making α > 0 yields compression gains and , at the same time , induces correlation between ĝk and ḡk . First , note that increasing α introduces distortion to b , driving p away from 0.5 and towards 0 . A lower p decreases the entropy of b , yielding a higher compression ratio realized with the lossless compressor . Since we start with a p slightly less than 0.5 , if we were to drive p towards 1 we would first increase H ( p ) before starting to incur compression gains . For this reason , we design the encoder to push p towards zero . In summary , increasing α offers rate savings in the current iteration by adding distortion to the current gradient . Second , we explain how added distortion induces correlation between ĝk and ḡk . 
Note that q is a measure of correlation between the two vectors sgn(ĝk) and sgn(ḡk), measured prior to adding distortion via α. We emphasize that q does not depend on the errors introduced in the current iteration; rather, q depends on α only through the past iterations. In the experimental results presented in Section 5.2 we observe that increasing α also decreases q. This means that in a given iteration the inputs to the encoder are already correlated. Therefore, the vector b that encodes equality (or inequality) of sgn(ĝk) and sgn(ḡk) can be compressed even without adding distortion in the current iteration. The underlying mechanism that causes this temporal correlation in our encoder is error-feedback. Recall that the idea behind error-feedback is to keep track of the compression error in the last iteration and add it back to the input of the encoder in the current iteration (with correction for the step size). Specifically, ĝk includes the compression error incurred in ḡk. This feedback system induces the temporal relation between the two vectors that we observe through q. In summary, our compression mechanism interacts with the error-feedback mechanism to increase temporal correlation, which we then exploit to realize further compression gains. The suggested upper bound for α ensures theoretical convergence of the SignXOR algorithm. We show in Section 5 that in practice α can be increased considerably beyond the suggested upper bound before seeing an impact on training performance. Decompressing function: Let us now consider the case when a worker calls decode with arguments G = Gk and y = ḡk. The first argument Gk contains the scalar a and the compressed representation of the binary vector b, which can be recovered exactly since the compression of b is lossless.
Noting that b was computed with ḡk as the second argument to the encode function, the output of the decoder is obtained by inverting the sign of ḡk[j] whenever b[j] is 0, and by scaling the result by a. One can compactly express the output of this operation as a · sgn(ḡk)(2b − 1). Remarks: Note that we can recover the Scaled-sign compression scheme by setting α = 0 in Algorithm 2. In comparison to SignXOR, the compressed representation of Scaled-sign stores the sign of each entry in ĝk and the scaling constant a = ‖ĝk‖₁/d. We demonstrate in Section 5.2 that r, the fraction of +1's in sgn(ĝk), is approximately 0.5 across all iterations. This means that we cannot further compress the sequence of signs, and the encoded representation of Scaled-sign requires at least d bits. Algorithm 1 can be easily extended to accommodate blockwise compression, as explained by Zheng et al. (2019). In blockwise compression the gradient ĝk is partitioned into blocks, and the blocks are processed separately using the encode and decode functions. In our numerical experiments we employ the blockwise extension of Algorithm 1. 4 THEORETICAL GUARANTEES. In this section we summarize our theoretical results on Algorithm 1 and Algorithm 2. First, we prove that the SignXOR compression scheme presented in Algorithm 2 is a δ-compressor. Second, we show that for any δ-compressor the generalized dist-EF-SGD scheme in Algorithm 1 converges at the same O(1/√T) rate as SGD. Putting these two together yields the desired result. SignXOR is a δ-compressor: We consider the general form of the encode and decode functions, i.e., with inputs (x, y) for encode and inputs (G, y) for decode. Let us define the operator C_α^y : R^d → R^d where C_α^y(x) = a · sgn(y)(2b − 1), with a = ‖x‖₁/d and the binary vector b such that b[j] = 1 with probability 1 − α if sgn(x[j]) = sgn(y[j]), and b[j] = 0 otherwise.
Note that a and b have the same meaning as in Algorithm 2. The operator C_α^y(x) is representative of decode(encode(x, y), y); therefore, C_α^y is the SignXOR compressor. Since C_α^y is a randomized operator, we show that it is a δ-compressor in expectation, as stated in Theorem 1. Theorem 1. There exists a δ ∈ (0, 1] such that E[‖C_α^y(x) − x‖²] ≤ (1 − δ)‖x‖² for all y ∈ R^d if α < (1/4)(1 − √(1 − 1/d))². The proof is provided in Appendix A.1. Although the suggested upper bound for α is extremely small for large d, we demonstrate in Section 5 that in practice α can be set quite close to 1. Next we discuss the convergence rate of dist-EF-SGD for an arbitrary δ-compressor. Convergence rate of dist-EF-SGD: Zheng et al. (2019) compare the convergence rates of dist-EF-SGD and vanilla SGD in their Corollary 1. As the authors note, although the convergence rates of the two algorithms are the same in O notation, the former is slower by a constant factor. The differences between the bounds for dist-EF-SGD and vanilla SGD in Corollary 1 are as follows. In both cases, for large T the dominant terms are those with √T in the denominator. The ratio between the dominant terms corresponding to dist-EF-SGD and vanilla SGD is 1.5, suggesting that the former is always slower than the latter by a factor of 1.5. In our Theorem 2 we strengthen the bound for dist-EF-SGD so that the dominant terms in the two algorithms are the same. This means that dist-EF-SGD achieves the same convergence rate as vanilla SGD in the limit as T → ∞. Our proof is for an arbitrary δ-compressor that is not necessarily SignXOR. To this end we consider Algorithm 1 and define C(x) = decode(encode(x, y), y). We assume that C is a δ-compressor for all y ∈ R^d, i.e., E[‖C(x) − x‖²] ≤ (1 − δ)‖x‖² for some δ ∈ (0, 1]. We set up the optimization problem next. Let F : R^d → R be a function that is lower bounded by F∗ and has a gradient ∇F.
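For the deterministic case α = 0, where C_α^y(x) = (‖x‖₁/d)·sgn(x), the δ-compressor inequality of Theorem 1 holds with δ = 1/d when x has no zero entries, because ‖(‖x‖₁/d)sgn(x) − x‖² = ‖x‖² − ‖x‖₁²/d ≤ (1 − 1/d)‖x‖², using ‖x‖₁ ≥ ‖x‖₂. A quick numerical sanity check of this special case (our own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1000
delta = 1.0 / d  # delta for the alpha = 0 (Scaled-sign) case

for _ in range(100):
    x = rng.standard_normal(d)
    c = (np.abs(x).sum() / d) * np.sign(x)        # C(x) at alpha = 0
    # delta-compressor inequality: E||C(x) - x||^2 <= (1 - delta) ||x||^2
    assert np.sum((c - x) ** 2) <= (1 - delta) * np.sum(x ** 2)
```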
We consider a distributed optimization setup with a master and n workers. For some wk ∈ R^d the ith worker calculates the stochastic gradient g_k^i at wk. We assume that g_k^i is an unbiased estimate of ∇F(wk) and that g_k^i has bounded variance; specifically, E[g_k^i | wk] = ∇F(wk) and E[‖g_k^i − ∇F(wk)‖²] ≤ σ². We also assume that F is L-Lipschitz smooth and that the gradient of F is bounded; the latter implies that E[‖g_k^i‖²] ≤ G² for some scalar G. The master and workers generate a sequence {w0, ..., wT} as per Algorithm 1. Convergence results for this system are summarized in Theorem 2. Theorem 2. For a given w0 and the step size schedule ηk = (1/(L√T))(1 − 1/(2T^{1/4})), the convergence of the system outlined in Algorithm 1 after T iterations is given by

E[min_{k=0,...,T−1} ‖∇F(wk)‖²] < (2L(F(w0) − F∗) + σ²/n) / (2√T − 1) + (2L(F(w0) − F∗) + (8(1 − δ)G²/δ²)(1 + 1/δ²)) / (2T^{3/4} − T^{1/4}) + O(1/T).  (1)

We defer the proof of Theorem 2 to Section A.2 in the Appendix. In comparison, the bound for SGD has only the first term in (1). Note that the last two terms in (1) converge to zero faster than the first term. This means that dist-EF-SGD asymptotically achieves the same convergence rate as SGD. Since SignXOR is a δ-compressor, we conclude that the proposed algorithm converges asymptotically at the same rate as SGD.
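The claim that the last terms of (1) vanish faster than the first can be checked numerically: the first term decays like 1/√T while the second decays like 1/T^{3/4}, so their ratio shrinks roughly like T^{−1/4}. The constants below are placeholders, not values from the paper:

```python
def first_term(T, c1=1.0):
    """Dominant SGD-style term of the bound: c1 / (2*sqrt(T) - 1)."""
    return c1 / (2 * T ** 0.5 - 1)

def second_term(T, c2=1.0):
    """Compression-induced term: c2 / (2*T**0.75 - T**0.25)."""
    return c2 / (2 * T ** 0.75 - T ** 0.25)

# The ratio second/first shrinks roughly like T**-0.25 as T grows.
for T in (1e2, 1e4, 1e6):
    print(f"T={T:.0e}: ratio={second_term(T) / first_term(T):.4f}")
```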
This paper proposed an extension of the blockwise scaled-sign compressor in Zheng et al. (2019). The proposed method exploits the temporal correlation between two consecutive gradients. The authors show that one can achieve a higher compression rate by inserting distortion into the compressed gradient. A tightened bound is provided such that the asymptotic rate (including the constant) is exactly the same as that of the full-precision counterpart. The experiments show that the proposed compressor can achieve an additional 40%-50% reduction in communication compared to the scaled sign. Overall, the reviewer thinks the idea is interesting. The reviewer has a few comments:
Motif-Driven Contrastive Learning of Graph Representations
1 INTRODUCTION. Graph-structured data, such as molecules and social networks, is ubiquitous in many scientific research areas and real-world applications. To represent graph characteristics, graph motifs were proposed in Milo et al. (2002) as significant subgraph patterns that occur frequently in graphs and uncover graph structural principles. For example, functional groups are important motifs that can determine molecule properties: a hydroxide group (–OH) usually implies higher water solubility, and for proteins, Zif268 can mediate protein-protein interactions in sequence-specific DNA-binding proteins (Pabo et al., 2001). Graph motifs have been studied for years. Meaningful motifs can benefit many important applications such as quantum chemistry and drug discovery (Ramsundar et al., 2019). However, extracting motifs from large graph datasets remains a challenge. Traditional motif discovery approaches (Milo et al., 2002; Kashtan et al., 2004; Chen et al., 2006; Wernicke, 2006) rely on discrete counting or statistical estimation, which are hard to generalize to large-scale graph datasets with continuous and high-dimensional features, as is often the case in real-world applications. Recently, Graph Neural Networks (GNNs) have shown great expressive power for learning graph representations without explicit feature engineering (Kipf & Welling, 2016; Hamilton et al., 2017; Veličković et al., 2017; Xu et al., 2018). In addition, GNNs can be trained in a self-supervised manner without human annotations to capture important graph structural and semantic properties (Veličković et al., 2018; Hu et al., 2020c; Qiu et al., 2020; Bai et al., 2019; Navarin et al., 2018; Wang et al., 2020; Sun et al., 2019; Hu et al., 2020b).
This motivates us to rethink motifs as more general representations than exact structural matches and to ask the following research questions: • Can we use GNNs to automatically extract graph motifs from large graph datasets? • Can we leverage the learned graph motifs to benefit self-supervised GNN learning? In this paper, we propose MICRO-Graph: a framework for MotIf-driven Contrastive leaRning Of Graph representations. The key idea of this framework is to learn graph motifs as prototypical cluster centers of subgraph embeddings encoded by GNNs. In this way, the discrete counting problem is transferred to a fully-differentiable framework that can generalize to large-scale graph datasets with continuous and high-dimensional features. In addition, the learned motifs can help generate more informative subgraphs for graph-to-subgraph contrastive learning. Motif learning and contrastive learning mutually reinforce each other to pre-train a more generalizable GNN encoder. For motif learning, given a graph dataset, a motif-guided subgraph segmenter generates subgraphs from each graph, and a GNN encoder turns these subgraphs into vector representations. We then learn graph motifs through clustering, where we keep the K prototypical cluster centers as representations of motifs. Similar and significant subgraphs are assigned to the same motif and become closer to their corresponding motif representation. We train our model in an Expectation-Maximization (EM) fashion to update both the motif assignment of each subgraph and the motif representations. To leverage the learned motifs, we propose a graph-to-subgraph contrastive learning framework for GNN pre-training. One of the key components of contrastive learning is generating semantically meaningful views of each instance, for example a continuous span within a sentence (Joshi et al., 2020) or a random crop of an image (Chen et al., 2020).
For graph data, previous approaches leverage node-level views, which are not sufficient to capture high-level graph structural information (Sun et al., 2019). As motifs can represent key graph properties by their nature, we propose to leverage the learned motifs to generate more informative subgraph views. For example, an alpha helix and a beta sheet can come together as a simple ββα fold to form a zinc finger protein with unique properties. By learning such subgraph co-occurrence via contrastive learning, the pre-trained GNN can capture higher-level information about the graph that node-level contrast cannot capture. A GNN pre-trained using MICRO-Graph on the ogbg-molhiv molecule dataset successfully learns meaningful motifs, including benzene rings, nitro, and acetate groups. Meanwhile, fine-tuning this GNN on seven chemical property prediction benchmarks yields a 2.0% average improvement over non-pretrained GNNs and outperforms other self-supervised pre-training baselines. Extensive ablation studies also show the significance of the learned motifs for contrastive learning. 2 RELATED WORK. The goal of self-supervised learning is to train a model to capture significant characteristics of data without human annotations. This paper studies whether we can use such an approach to automatically extract graph motifs, i.e., significant subgraph patterns, and leverage the learned motifs to benefit self-supervised learning. In the following, we first review graph motifs, especially the challenges of motif mining, and then discuss approaches for pre-training GNNs in a self-supervised manner. Graph motifs are building blocks of complex graphs. They reveal the interconnections of graphs and represent graph characteristics. Mining motifs can benefit many tasks from exploratory analysis to transfer learning (Henderson et al., 2012). Over the years, various motif mining algorithms have been proposed.
There are generally two categories: exact counting, as in Milo et al. (2002); Kashtan et al. (2004); Schreiber & Schwöbbermeyer (2005); Chen et al. (2006), or sampling and statistical estimation, as in Wernicke (2006). However, neither approach can scale to large graph datasets with high-dimensional and continuous features, which are common in real-world applications. In this paper, we propose to turn the discrete motif mining problem into a GNN-based differentiable cluster learning problem that can generalize to large-scale datasets. Another GNN-based work related to graph motifs is GNNExplainer, which focuses on post-hoc model interpretation (Ying et al., 2019). It can identify substructures that are important for graph property prediction, e.g., motifs. The difference between GNNExplainer and MICRO-Graph is that the former identifies motifs at the single-graph level, while the latter learns motifs across the whole dataset. Contrastive learning is one of the state-of-the-art self-supervised representation learning algorithms, achieving great results for visual representation learning (Chen et al., 2020; He et al., 2019). Contrastive learning forces views generated from the same instance (e.g., different crops of the same image) to become closer, while pushing views from different instances apart. One key component of contrastive learning is generating informative and diverse views from each data instance. In computer vision, researchers use various techniques, including cropping, color distortion, and Gaussian blur, to generate views. However, when it comes to graphs, constructing informative views of a graph is a challenging task. In our framework, we utilize the learned motifs, which are significant subgraph patterns, to guide view (subgraph) generation, and conduct graph-to-subgraph contrastive learning. Self-supervised learning for GNNs has also drawn much attention recently.
For graphs, representations can be at different levels, e.g., the node level and the (sub)graph level. Veličković et al. (2018); Hu et al. (2020c); Qiu et al. (2020) mainly focus on node-level representation learning in a single large graph, as opposed to the focus of this paper, which is representation learning of whole graphs. Hu et al. (2020b) provide a systematic analysis of pre-training strategies on graphs at both the node level and the graph level. However, only the node-level learning is self-supervised, and annotated labels are utilized for supervised learning at the graph level. For graph-level self-supervised representation learning, Sun et al. (2019) proposed a contrastive framework, InfoGraph, to maximize the mutual information between graph representations and node representations. In Rong et al. (2020), the GROVER model and a motif-based self-supervised learning task were proposed, where discrete motifs are first extracted using professional software and then used as prediction labels for pre-training the model. The difference between motifs in GROVER and in MICRO-Graph is that GROVER uses discrete structures, while MICRO-Graph uses continuous vector embeddings. To alleviate these issues, we propose graph-to-subgraph-view self-supervised contrastive learning, with subgraph generation guided by the learned motifs. 3 METHODOLOGY. The goal of this paper is to train a GNN encoder that can automatically extract graph motifs, i.e., significant subgraph patterns. Motif discovery on discrete graph structures is a combinatorial problem and is hard to generalize to large datasets with continuous features. We thus propose to formalize this problem as a differentiable clustering problem and solve it via self-supervised GNN learning.
In this section, we formalize the problem and introduce the overall framework of MICRO-Graph in Section 3.1, and then describe each module in detail in the following sections. 3.1 THE OVERALL FRAMEWORK OF MICRO-Graph. Given a dataset with M graphs G = {G1, ..., GM}, the differentiable clustering problem is meant to learn two things. One is a GNN-based graph encoder E(·) that maps input (sub)graphs to an embedding vector. The other is a K-slot embedding table {m1, ..., mK}, where each slot is a motif vector m corresponding to a cluster center of embeddings of frequently occurring subgraphs. To tackle this problem, we introduce the MICRO-Graph framework, which consists of three modules: 1) a Motif-guided Segmenter to extract important subgraphs; 2) a Motif Learner to cluster sampled subgraphs and identify motifs; 3) a contrastive learning module for graph-to-subgraph contrastive learning. The overall framework is shown in Figure 1. We describe the details of each module in the following sections: the Motif Learner first in Section 3.2, and then the Motif-guided Segmenter and the contrastive learning module in Sections 3.3 and 3.4, respectively. 3.2 MOTIF LEARNER VIA EM CLUSTERING. The Motif Learner learns motifs by applying an Expectation-Maximization (EM) style clustering algorithm to sampled subgraphs. To start clustering, the Motif-guided Segmenter first extracts N subgraphs {gj}_{j=1}^N from the input whole graphs. For each subgraph gj, we generate its embedding ej = E(gj) and calculate the cosine similarity between ej and each of the K motif vectors {m1, ..., mK}. We denote the similarity between ej and the kth motif vector as S_{k,j} = φ(mk)ᵀφ(ej).
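The similarity computation above can be written in a few lines of NumPy. This is a sketch assuming φ denotes L2 normalization, so that S_{k,j} is the cosine similarity stated in the text; the function name is ours:

```python
import numpy as np

def similarity_matrix(motifs, embeddings):
    """S[k, j] = cosine similarity between motif vector m_k and subgraph embedding e_j.

    motifs: (K, D) motif embedding table; embeddings: (N, D) subgraph embeddings.
    """
    M = motifs / np.linalg.norm(motifs, axis=1, keepdims=True)          # phi(m_k)
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)  # phi(e_j)
    return M @ E.T  # K x N similarity matrix S
```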
In vector notation, the K-dimensional vector of similarities between gj and all K motif vectors is denoted sj, and the K-by-N motif-to-subgraph similarity matrix is denoted S, where the j-th column of S is sj and entry (k, j) of S is S_{k,j}. E-Step. The goal of the E-step is to produce motif-based cluster assignments for the subgraph embeddings {ej}_{j=1}^N. The assignments can be represented by a K-by-N matrix Q = [q1, ..., qN], where the j-th column qj contains the probabilities of assigning the j-th subgraph to each of the K motifs. Each qj can be a one-hot vector for hard clustering, or a probability vector whose entries sum to one for soft clustering. This vanilla clustering problem boils down to maximizing the objective Tr(QᵀS), which corresponds to an assignment Q that maximizes the similarities between embeddings and their assigned motifs. This objective works fine for a traditional EM clustering algorithm when the embeddings are fixed. However, since representations change during representation learning, this vanilla E-step objective can lead to a degenerate solution, i.e., all representations collapsing to a single cluster center. To avoid this issue, we follow YM. et al. (2020) and introduce an entropy term and an equal-size constraint on Q so that clusters have similar sizes. Our final objective is: max_{Q∈Q} Tr(QᵀS) + (1/λ)H(Q) (1) where H(Q) = −∑_{i,j} Q_{i,j} log Q_{i,j} is the entropy function, and the constraint set Q requires the marginal projections of Q onto its rows and columns to be uniform: Q = {Q ∈ R₊^{K×N} | Q1_N = (1/K)1_K, Qᵀ1_K = (1/N)1_N} (2) where 1_N and 1_K are all-ones vectors. This constrained optimization problem turns out to be an optimal transport problem with the closed-form solution (3), and can be solved efficiently using the fast Sinkhorn-Knopp algorithm.
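The Sinkhorn-Knopp iteration mentioned above can be sketched in a few lines of NumPy (our own minimal version; the regularization strength λ and iteration count are illustrative, not the paper's settings):

```python
import numpy as np

def sinkhorn(S, lam=5.0, n_iters=200):
    """Solve max Tr(Q^T S) + H(Q)/lam over Q with uniform row marginals (1/K)
    and column marginals (1/N), returning Q* = diag(u) exp(lam*S) diag(v)."""
    K, N = S.shape
    P = np.exp(lam * S)
    u = np.ones(K)
    v = np.ones(N)
    for _ in range(n_iters):
        u = (1.0 / K) / (P @ v)      # rescale rows to sum to 1/K
        v = (1.0 / N) / (P.T @ u)    # rescale columns to sum to 1/N
    return u[:, None] * P * v[None, :]
```

After the final column update the column marginals are matched exactly, while the row marginals converge geometrically over the iterations.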
Q∗ = diag(u) · exp(λS) · diag(v) (3) Here u and v are normalization vectors; the derivation can be found in Cuturi (2013). M-Step. The goal of the M-step is to maximize the log-likelihood of our data given the cluster assignment matrix Q estimated in the E-step. In the M-step we update the parameters of the GNN encoder and the motif embedding table. This step is equivalent to a supervised K-class classification problem with labels Q and prediction scores S. Thus, we first apply a columnwise softmax normalization with temperature τg to S to convert all entries of S to probabilities, i.e., S̃_{k,j} = softmax_k(S_{k,j}/τg). Then we use the negative log-likelihood as the loss function: L_m = −(1/N) ∑_{j=1}^N ∑_{k=1}^K Q_{k,j} log S̃_{k,j} (4)
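The M-step loss (4) amounts to a columnwise cross-entropy between the soft labels Q and the softmax-normalized scores S̃. A sketch of this computation (our own, using a numerically stable log-softmax; the temperature default is illustrative):

```python
import numpy as np

def motif_loss(S, Q, tau_g=0.1):
    """L_m = -(1/N) * sum_{j,k} Q[k, j] * log softmax_k(S[k, j] / tau_g)."""
    logits = S / tau_g
    # numerically stable columnwise log-softmax over the K motif slots
    logits = logits - logits.max(axis=0, keepdims=True)
    log_p = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -(Q * log_p).sum() / S.shape[1]
```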
This paper proposes to learn sub-graph patterns from a collection of training graphs. The key idea is to partition each graph into segments and enforce a global clustering of the subgraphs. The partitioning is also guided through contrastive learning, i.e., subgraphs should have a larger similarity with the graph they are drawn from, compared with other graphs. The learned GNN (that generates node embeddings) will then be used for downstream learning tasks with or without further fine-tuning.
SP:6ebd0f56ad29eeb2a152333873da0c5614607174
Motif-Driven Contrastive Learning of Graph Representations
1 INTRODUCTION . Graph-structured data , such as molecules and social networks , is ubiquitous in many scientific research areas and real-world applications . To represent graph characteristics , graph motifs were proposed in Milo et al . ( 2002 ) as significant subgraph patterns occurring frequently in graphs and uncovering graph structural principles . For example , functional groups are important motifs that can determine molecule properties . Like the hydroxide ( –OH ) usually implies higher water solubility , and for proteins , Zif268 can mediate protein-protein interactions in sequence-specific DNA-binding proteins . ( Pabo et al. , 2001 ) . Graph motifs has been studied for years . Meaningful motifs can benefit many important applications like quantum chemistry and drug discovery ( Ramsundar et al. , 2019 ) . However , extracting motifs from large graph datasets remains a challenging question . Traditional motif discovery approaches ( Milo et al. , 2002 ; Kashtan et al. , 2004 ; Chen et al. , 2006 ; Wernicke , 2006 ) rely on discrete counting or statistical estimation , which are hard to generalize to large-scale graph datasets with continuous and high-dimension features , as often the case in real-world applications . Recently , Graph Neural Networks ( GNNs ) have shown great expressive power for learning graph representations without explicit feature engineering ( Kipf & Welling , 2016 ; Hamilton et al. , 2017 ; Veličković et al. , 2017 ; Xu et al. , 2018 ) . In addition , GNNs can be trained in a self-supervised manner without human annotations to capture important graph structural and semantic properties ( Veličković et al. , 2018 ; Hu et al. , 2020c ; Qiu et al. , 2020 ; Bai et al. , 2019 ; Navarin et al. , 2018 ; Wang et al. , 2020 ; Sun et al. , 2019 ; Hu et al. , 2020b ) . 
This motivates us to rethink about motifs as more general representations than exact structure matches and ask the following research questions : • Can we use GNNs to automatically extract graph motifs from large graph datasets ? • Can we leverage the learned graph motif to benefit self-supervised GNN learning ? In this paper , we propose MICRO-Graph : a framework for MotIf-driven Contrastive leaRning Of Graph representations . The key idea of this framework is to learn graph motifs as prototypical cluster centers of subgraph embeddings encoded by GNNs . In this way , the discrete counting problem is transfered to a fully-differentiable framework that can generalize to large-scale graph datasets with continuous and high-dimensional features . In addition , the learned motifs can help generate more informative subgraphs for graph-to-subgraph contrastive learning . The motif learning and contrastive learning are mutually reinforced to pre-train a more generalizable GNN encoder . For motif learning , given a graph dataset , a motif-guided subgraph segmenter generates subgraphs from each graph , and a GNN encoder turns these subgraphs into vector representations . We then learn graph motifs through clustering , where we keep the K prototypical cluster centers as representations of motifs . Similar and significant subgraphs are assigned to the same motif and become closer to their corresponding motif representation . We train our model in an Expectation-Maximization ( EM ) fashion to update both the motif assignment of each subgraph and the motif representations . For leveraging learned motifs , we propose a graph-to-subgraph contrastive learning framework for GNN pre-training . One of the key components for contrastive learning is to generate semantically meaningful views of each instance . For example , a continuous span within a sentence ( Joshi et al. , 2020 ) or a random crop of an image ( Chen et al. , 2020 ) . 
For graph data , previous approaches leverage node-level views , which is not sufficient to capture high-level graph structural information Sun et al . ( 2019 ) . As motifs can represent the key graph properties by its nature , we propose to leverage the learned motifs to generate more informative subgraph views . For example , alpha helix and beta sheet can come together as a simple ββα fold to form a zinc finger protein with unique properties . By learning such subgraph co-occurrence via contrastive learning , the pre-trained GNN can capture higher-level information of the graph that node-level contrastive can ’ t capture . The pre-trained GNN using MICRO-Graph on the ogbg-molhiv molecule dataset can successfully learn meaningful motifs , including Benzene rings , nitro , acetate , and etc . Meanwhile , fine-tune this GNN on seven chemical property prediction benchmarks yielding 2.0 % average improvement over non-pretrained GNNs and outperforming other self-supervised pre-training baselines . Also , extensive ablation studies show the significance of the learned motifs for the contrastive learning . 2 RELATED WORK . The goal of self-supervised learning is to train a model to capture significant characteristics of data without human annotations . This paper studies whether we can use such approach to automatically extract graph motifs , i.e . the significant subgraph patterns , and leverage the learned motifs to benefit self-supervised learning . In the following , we first review graph motifs especially challenges for motif mining , and then discuss approaches for pre-training GNNs in a self-supervised manner . Graph motifs are building blocks of complex graphs . They reveal the interconnections of graphs and represent graph characteristics . Mining motifs can benefit many tasks from exploratory analysis to transfer learning ( Henderson et al. , 2012 ) . For many years , various motif mining algorithms have been proposed . 
There are generally two categories , either exact counting as in Milo et al . ( 2002 ) ; Kashtan et al . ( 2004 ) ; Schreiber & Schwöbbermeyer ( 2005 ) ; Chen et al . ( 2006 ) , or sampling and statistical estimation as in Wernicke ( 2006 ) . However , both approaches can not scale to large graph datasets with high-dimension and continuous features , which is common in real-world applications . In this paper , we proposes to turn the discrete motif mining problem into a GNN-based differentiable cluster learning problem that can generalize to large-scale datasets . Another GNN-based work related to graph motifs is the GNNExplainer , which focuses on post-process model interpretation ( Ying et al. , 2019 ) . It can identify substructures that are important for graph property prediciton , e.g . motifs . The difference between GNNExplainer and MICRO-Graph is that the former identify motifs at a single graph level , and the later learns motifs across the whole dataset . Contrastive learning is one of the state-of-the-art self-supervised representation learning algorithms . It achieves great results for visual representation learning ( Chen et al. , 2020 ; He et al. , 2019 ) . Contrastive learning forces views generated from the same instance ( e.g . different crops of the same image ) to become closer , while views from different instances apart . One key component in contrastive learning is to generate informative and diverse views from each data instance . In computer vision , researchers use various techniques , including cropping , color distortion , and Gaussian blurs to generate views . However , when it comes to graphs , constructing informative view of graph is a challenging task . In our framework , we utilize the learned motifs , which are significant subgraph patterns , to guide view ( subgraph ) generation , and conduct graph-to-subgraph contrastive learning . Self-supervised learning for GNNs also draws many attention recently . 
For graphs , representations can be at different levels , e.g . node level and ( sub ) graph level . Veličković et al . ( 2018 ) ; Hu et al . ( 2020c ) ; Qiu et al . ( 2020 ) mainly focus on node-level representation learning in a single large graph , as opposed to the focus of this paper , which is representation learning of whole graphs . Hu et al . ( 2020b ) provides a systematic analysis of pre-training strategies on graphs for both node-level and graph-level . However , only the node-level learning is self-supervised , and annotated labels are utilized for supervised learning at the graph level . For graph level self-supervised representation learning , Sun et al . ( 2019 ) proposed a contrastive framework , InfoGraph , to maximize the mutual information between graph representations and node representations . In Rong et al . ( 2020 ) , the GROVER model and a motif based self-supervised learning task was proposed , where the discrete motifs are first extracted using a professional software , and then these motifs are used as prediction labels for pre-training the model . The difference between motifs in GROVER and in MICRO-Graph is that GROVER uses discrete structures , but MICRO-Graph uses continuous vector embeddings To alleviate these issues , we propose graph-to-subgraph view self-supervised contrastive learning , and the subgraph generation is guided by the learned motifs . 3 METHODOLOGY . The goal of this paper is to train a GNN encoder that can automatically extract graph motifs , i.e . significant subgraph patterns . Motif discovery on discrete graph structures is a combinatorial problem , and it is hard to generalize to large datasets with continuous features . We thus propose to formalize this problem as a differentiable clustering learning problem and solve it via self-supervised GNN learning . 
In this section, we formalize the problem and introduce the overall framework of MICRO-Graph in Section 3.1, and then describe each module in detail in the following sections. 3.1 THE OVERALL FRAMEWORK OF MICRO-Graph Given a dataset with M graphs G = { G1 , ... , GM }, the differentiable clustering learning problem is meant to learn two things. One is a GNN-based graph encoder E(·) that maps an input (sub)graph to an embedding vector. The other is a K-slot embedding table { m1 , ... , mK }, where each slot is a motif vector m corresponding to a cluster center of the embeddings of frequently occurring subgraphs. To tackle this problem, we introduce the MICRO-Graph framework, which consists of three modules: 1) a Motif-guided Segmenter to extract important subgraphs; 2) a Motif Learner to cluster sampled subgraphs and identify motifs; 3) a contrastive learning module for graph-to-subgraph contrastive learning. The overall framework is shown in Figure 1. We describe the details of each module in the following sections. The Motif Learner is introduced first in Section 3.2, followed by the Motif-guided Segmenter and the contrastive learning module in Sections 3.3 and 3.4, respectively. 3.2 MOTIF LEARNER VIA EM CLUSTERING . The Motif Learner learns motifs by applying an Expectation-Maximization (EM) style clustering algorithm to sampled subgraphs. To start clustering, the Motif-guided Segmenter first extracts N subgraphs { gj } Nj=1 from the input whole graphs. For each subgraph gj, we generate its embedding ej = E(gj) and calculate the cosine similarity between ej and each of the K motif vectors { m1 , ... , mK }. We denote the similarity between ej and the k-th motif vector as $S_{k,j} = \phi(m_k)^T \phi(e_j)$.
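As a concrete illustration of the similarity computation above, the following is a minimal numpy sketch. The function name is ours; in the actual model the embeddings come from the GNN encoder E(·), and φ is taken here to be L2 normalization, so each entry is a cosine similarity.

```python
import numpy as np

def motif_similarities(subgraph_embs, motif_table):
    """Cosine similarities between N subgraph embeddings and K motif vectors.

    subgraph_embs: (N, d) array of embeddings e_j; motif_table: (K, d) array of
    motif vectors m_k. Returns the K-by-N matrix S with
    S[k, j] = phi(m_k)^T phi(e_j), where phi is L2 normalization.
    """
    E = subgraph_embs / np.linalg.norm(subgraph_embs, axis=1, keepdims=True)
    M = motif_table / np.linalg.norm(motif_table, axis=1, keepdims=True)
    return M @ E.T  # (K, N)
```

Because both operands are unit-normalized, every entry lies in [-1, 1], and the similarity of an embedding with itself is exactly 1.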
In vector notation, the K-dimensional vector of similarities between gj and all K motif vectors is denoted sj, and the K-by-N motif-to-subgraph similarity matrix is denoted S, where the j-th column of S is sj and entry (k, j) of S is $S_{k,j}$. E-Step . The goal of the E-step is to produce motif-based cluster assignments for the subgraph embeddings { ej } Nj=1 . The assignments can be represented by a K-by-N matrix Q = [ q1 , ... , qN ], where the j-th column qj contains the probabilities of assigning the j-th subgraph to each of the K motifs. Each qj can be a one-hot vector for hard clustering, or a probability vector whose entries sum to one for soft clustering. This vanilla clustering problem boils down to maximizing the objective $\mathrm{Tr}(Q^T S)$, which corresponds to an assignment Q that maximizes the similarity between each embedding and its assigned motif. This objective works fine for a traditional EM clustering algorithm when the embeddings are fixed. However, since representations change during representation learning, this vanilla E-step objective can lead to a degenerate solution, i.e. all representations collapse to a single cluster center. To avoid this issue, we follow YM. et al. (2020) and introduce an entropy term and an equal-size constraint on Q so that clusters have similar sizes. Our final objective is: $\max_{Q \in \mathcal{Q}} \mathrm{Tr}(Q^T S) + \frac{1}{\lambda} H(Q)$ (1) where $H(Q) = -\sum_{k,j} Q_{k,j} \log Q_{k,j}$ is the entropy function, and the constraint set $\mathcal{Q}$ requires the marginals of Q over its rows and columns to be uniform: $\mathcal{Q} = \{ Q \in \mathbb{R}^{K \times N}_{+} \mid Q \mathbf{1}_N = \tfrac{1}{K} \mathbf{1}_K , \; Q^T \mathbf{1}_K = \tfrac{1}{N} \mathbf{1}_N \}$ (2) where $\mathbf{1}_N$ and $\mathbf{1}_K$ are all-ones vectors. This constrained optimization problem turns out to be an optimal transportation problem with the closed-form solution (3), and it can be solved efficiently using a fast Sinkhorn-Knopp algorithm.
$Q^{*} = \mathrm{diag}(u) \cdot \exp(\lambda S) \cdot \mathrm{diag}(v)$ (3) Here u and v are normalization vectors. The derivation can be found in Cuturi (2013). M-Step . The goal of the M-step is to maximize the log-likelihood of the data given the cluster assignment matrix Q estimated in the E-step. We update the parameters of the GNN encoder and the motif embedding table in the M-step. This step is equivalent to a supervised K-class classification problem with labels Q and prediction scores S. Thus, we first apply a column-wise softmax with temperature $\tau_g$ to S to convert its entries to probabilities, i.e. $\tilde{S}_{k,j} = \mathrm{softmax}_k(S_{k,j}/\tau_g)$. Then we use the negative log-likelihood as the loss function: $\mathcal{L}_m = -\frac{1}{N} \sum_{j=1}^{N} \sum_{k=1}^{K} Q_{k,j} \log \tilde{S}_{k,j}$ (4)
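Both EM steps above admit a compact implementation. The following numpy sketch follows Eqs. (1)-(4): the E-step runs Sinkhorn-Knopp normalization iterations toward the closed form of Eq. (3), and the M-step loss is the column-wise softmax cross-entropy of Eq. (4). The values of λ, τ_g, and the iteration count here are illustrative, not values from the paper.

```python
import numpy as np

def sinkhorn_assignments(S, lam=10.0, n_iters=500):
    """E-step: entropy-regularized assignment Q* = diag(u) exp(lam*S) diag(v),
    with rows (motifs) summing to 1/K and columns (subgraphs) to 1/N."""
    K, N = S.shape
    P = np.exp(lam * (S - S.max()))  # subtract max for numerical stability
    for _ in range(n_iters):
        P *= (1.0 / K) / P.sum(axis=1, keepdims=True)  # match row marginals
        P *= (1.0 / N) / P.sum(axis=0, keepdims=True)  # match column marginals
    return P

def mstep_loss(S, Q, tau=1.0):
    """M-step: negative log-likelihood of Eq. (4), using a column-wise
    log-softmax of S / tau against the (soft or hard) labels Q."""
    logits = S / tau
    logits = logits - logits.max(axis=0, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -(Q * log_probs).sum() / S.shape[1]
```

In practice the Sinkhorn step would run without gradients to produce the targets Q, while the M-step loss would be backpropagated through S into the encoder and motif table.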
The paper describes a self-supervised framework that extracts graph motifs and uses them to guide downstream contrastive learning. The framework contains three components: (a) a motif-guided segmenter that derives subgraphs, (b) a motif learner, a clustering task over the subgraphs that identifies concrete graph motifs, and (c) a contrastive learner for downstream graph tasks. The global objective is defined as the sum of the losses of the three components. The framework is evaluated on a large-scale chemical compound graph dataset. The evaluation covers both transfer learning and the utility of the extracted features, and the method outperforms the tested competing approaches.
Ballroom Dance Movement Recognition Using a Smart Watch and Representation Learning
1 INTRODUCTION . Recent work has used low-cost smart watches to track the movement of human body parts . ArmTrak tracks arm movement , assuming that the body and torso are stationary ( Shen et al. , 2016 ) . In this paper , we perform whole body movement recognition using a single smart watch , which is a hard problem given that body movements need to be inferred using readings taken from a single location on the body ( the wrist ) . The movements in the study are from ballroom dancing , which engages tens of thousands of competitors in the U.S. and other countries . Competitors dance at different skill levels and each level is associated with an internationally recognized syllabus , set by the World Dance Sport Federation . The syllabus breaks each dance into smaller segments with well-defined body movements . Those segments are called figures . In the waltz , for example , each figure has a length of one measure of the waltz song being danced to ; the entire dance is a sequence of 40 to 60 figures ( depending on the length of the song ) . The sequence is random , but the figures themselves are well-defined . The sequence is illustrated in Fig . 1 . The International Standard ballroom dances are a subset of ballroom dances danced around the world , and they include the waltz , tango , foxtrot , quickstep and Viennese waltz . A unique characteristic of all these dances is that the couple is always in a closed-hold , meaning they never separate . Also , both dancers in the couple maintain a rigid frame , meaning the arms and torso move together as one unit . The head and the lower body , however , move independently of that arms-torso unit . Our hypothesis in this paper is that the figures in each of these dances can be recognized with high accuracy using deep learning representations of data obtained from a single smart watch worn by the lead in the couple . 
That is possible because the rigid frame makes it unnecessary to separately instrument the arms and torso, and because most figures are characterized by distinct movements (translations and rotations in space) of the arms and torso. We refer the interested reader to the website www.ballroomguide.com for free videos and details on the various syllabus figures in all the International Standard ballroom dance styles. In this paper, we validate our hypothesis on the quintessential ballroom dance: the waltz. We chose the 16 waltz figures that are most commonly danced by amateurs. The full names of the figures are included in Appendix A. Our goal is to accurately classify those figures in real time using data from a smart watch. That data can be pushed to mobile devices in the hands of spectators at ballroom competitions, providing them with real-time commentary on the moves that they will have just watched being performed. That is an augmented-reality platform serving laymen in the audience who want to become more engaged with the nuances of the dance that they are watching. The main beneficiaries of the analysis of dance movements would be the dancers themselves. The analysis will help them identify whether or not they are dancing the figures correctly. If a figure is confused for a different figure, it may be because the dancers have not sufficiently emphasized the difference in their dancing and need to improve their technique on that figure. Those confusion metrics could also be used by competition judges to mark competitors on how well they perform figures; that task is currently done by eye-balling multiple competitors on the floor, and is challenging when there are over ten couples to keep track of. We make three main contributions in detecting ballroom dance movements using learning representations.
• First , we show that representations using data from a single smart watch are sufficient for discriminating between complex dancing movements . • Second , we identify and evaluate six learning representations that can be used for classifying the figures with varying accuracies . The representations are 1 ) Gaussian Hidden Markov Model , 2 ) Extra Trees Classifier , 3 ) Feed-Forward Neural Network , 4 ) Recurrent Neural Network ( LSTM ) , 5 ) Convolution Neural Network , and 6 ) a Convolution Neural Network that feeds into a Recurrent Neural Network . • Finally , we model the sequence of figures as a Markov chain , using the fact that the transitions between figures are memoryless . We use the rules of the waltz to determine which transitions are possible and which are not . With that transition knowledge , we correct the immediately previous figure ’ s estimate . This leads to an average estimation accuracy improvement of 5.33 percentage points . 2 DATASET DESCRIPTION . 2.1 DATA COLLECTION . The data was collected using an Android app on a Samsung Gear Live smart watch . The app was developed for this work on top of the ArmTrak data collection app . We were able to reliably collect two derived sensor measurements from the Android API : • Linear Acceleration . This contains accelerometer data in the X , Y and Z directions of the smart watch , with the effect of gravity removed . • Rotation Vector . This provides the Euler angles ( roll , pitch and yaw ) by fusing accelerometer , gyroscope and magnetometer readings in the global coordinate space . We use only the yaw ( rotation about the vertical axis ) in this study , and that is based on prior knowledge that roll and pitch are insignificant in the waltz figures included in the study . In total , we collected readings from 4 sensor axes ( three from the Linear Acceleration and the yaw from the Rotation Vector sensors ) . 
The readings were reported by the watch operating system asynchronously, at irregular intervals, whenever a change was sensed. To facilitate signal processing, we downsampled the data so that each figure contained exactly 100 sensor samples, which was possible because the effective sampling rate was greater than that. The downsampling was done by taking the median (instead of the mean, which is sensitive to outliers) of the values in 100 evenly spaced time windows. From this point on in the paper, when we refer to "samples", we refer to one observation for a figure, of dimension 4 × 100, as one sample. The app was developed in such a way that the button on the watch that started recording the aforementioned sensor measurements also simultaneously started playing the music via Bluetooth speakers. That ensured that the music and the recording of the movements were time-synchronized. For all the data that was collected, we used the same rendition of the classic song "Moon River". We performed manual segmentation of the song using its beats offline, and that was used to segment the time-series data for the entire dance sequence into 2.1-second-long segments corresponding to the figures in the dance. We noted the length of the song intro (where no dancing was performed) and ignored all data in that period. For each figure, we extended the window of measurements equally at the beginning and at the end by 0.35 seconds to account for slight errors in dancer timing. That ensured that the window captured the figures even if the dancer was slightly early or late to begin/finish dancing the figure. The yaw data for 4 figure samples corresponding to two different figures are illustrated in Fig. 2. It can be seen that right-turning figures tended to record yaw readings with an upward trend, while left-turning figures recorded yaw readings with a downward trend.
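The median-based downsampling described above can be sketched as follows. Function and variable names are ours; the 100-window choice mirrors the paper, and the median makes each window robust to occasional outlier readings.

```python
import numpy as np

def downsample_to_fixed_length(t, x, n_out=100):
    """Median-downsample an irregularly sampled signal into n_out evenly
    spaced time windows (median, not mean, so outliers do not dominate).

    t: (T,) timestamps of the asynchronous readings; x: (T,) sensor values.
    """
    edges = np.linspace(t.min(), t.max(), n_out + 1)
    out = np.empty(n_out)
    for i in range(n_out):
        if i < n_out - 1:
            mask = (t >= edges[i]) & (t < edges[i + 1])
        else:
            mask = t >= edges[i]  # last window is inclusive of t.max()
        out[i] = np.median(x[mask]) if mask.any() else np.nan
    return out
```

Applying this per sensor axis turns one figure's raw readings into the fixed 4 × 100 sample used throughout the paper.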
Slight differences between the samples for each figure can be attributed to differences in the dancers' timing and execution. 2.2 CROSS-VALIDATION GROUPINGS . In total, we collected 818 figure samples across 16 different waltz figures, over 14 dances (figure sequences). Thus, the input data had dimension 818 × 4 × 100, for 818 figure samples, 4 sensors, and 100 measurements per sensor per figure sample. The small size of the dataset, and the subsequent difficulty of collecting additional data during the COVID-19 pandemic, made the learning problem more challenging. We had only 14 sequences in total, which is too small a dataset to learn dependencies between figures. Therefore, our focus is on independently classifying the figures using the 818 samples, and leveraging the Markov property to enforce dependencies between figures. The 818 samples came from 14 separate dances (figure sequences), and we performed 7-fold cross-validation with two dances per cross-validation group (assigned randomly). That ensured that sequences of figures (dances) were not split across different cross-validation groups. It also allowed us to test our representations' accuracy for each sequence as a whole. 2.3 LABELING GROUND TRUTH . Each dance was recorded on video so that labels (ground truth) could be assigned to the data segments corresponding to the figures. The labels are listed in Appendix A. 3 MARKOV TRANSITIONS . The sequence of figures in each dance can be modeled as a Markov chain. The probability of observing the next figure is dependent on the current figure, but independent of past figures given the current figure. The reason is as follows. Certain figures end on the right foot, while others end on the left foot. Similarly, certain figures begin on the left foot, while others begin on the right foot.
The probability of going from a figure ending on the right foot to another figure beginning on the right foot is zero (and the same applies to the left foot). That is because of the physics of the dance and the way weight is distributed between the feet. Similarly, some figures must be followed by figures that move forward, while others must be followed by figures that go backward. Therefore, each figure constrains the immediately next figure, but the sequence is memoryless. Using the above rules, we constructed a transition matrix for all figures, given in Table 3 in the Appendix. We essentially gave a zero probability to impossible transitions, and equal probability to all possible transitions. Therefore, our transition matrix is completely unbiased, and not based on real training data. The advantage of the unbiased transition matrix is that the same matrix can be used across different couples, since it is very general. It does not encode the unique habits of certain couples, who may tend to follow patterns. If a biased approach were taken, a unique transition matrix could be learned for each couple, but it would not generalize to other couples. 4 HIDDEN MARKOV MODEL REPRESENTATION . The dance can be represented as a Hidden Markov Model (HMM) where the states represent figures that emit sensor readings, as illustrated in Fig. 3. Although the state space is discrete, the emission space is continuous because the sensor readings are continuous. As a result, the HMM cannot be solved using a discrete emission probability matrix. Instead, we assumed Gaussian emission probabilities, resulting in a Gaussian HMM. We used the HMMLearn Python library (hmm) to estimate the transition and emission probabilities while fitting the input data.
We initialized the transition probabilities with the rule-based transition matrix described in Section 3, and initialized the state vector with the actual initial state obtained from the ground truth. The problem with the HMM approach for this task is that the HMM is a generative model, not a discriminative one. At no stage does the model use the actual known labels to perform classification. It simply estimates states using the probability information; we assigned labels to the states by fitting the training set and matching the states estimated by the HMM with the known labels. The approach achieved an accuracy of 35.93% on the validation sets, averaged across the 7 cross-validation groups.
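As a self-contained sketch of Sections 3 and 4, the snippet below builds a rule-based (unbiased) transition matrix and decodes a figure sequence with Viterbi over generic per-segment log-scores. It uses pure numpy rather than HMMLearn, and the foot rule shown is only the illustrative subset described above; the paper's full matrix also encodes the forward/backward constraints.

```python
import numpy as np

def unbiased_transition_matrix(ends_right, starts_right):
    """Equal probability for every allowed transition, zero otherwise.
    A transition i -> j is impossible when figure i ends on the same foot
    that figure j starts on."""
    ends_right = np.asarray(ends_right)
    starts_right = np.asarray(starts_right)
    T = (ends_right[:, None] != starts_right[None, :]).astype(float)
    return T / T.sum(axis=1, keepdims=True)  # rows sum to one

def viterbi_decode(log_emissions, log_trans, log_start):
    """Most likely figure sequence given per-segment classifier log-scores,
    the log transition matrix, and log initial-state probabilities."""
    n, K = log_emissions.shape
    delta = log_start + log_emissions[0]
    back = np.zeros((n, K), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_trans  # (previous figure, current figure)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emissions[t]
    path = np.empty(n, dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

With strongly alternating transitions, the decoder overrides weak per-segment scores, which is the same effect the paper exploits when correcting the previous figure's estimate.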
The authors propose an approach to classify ballroom dance movements (called figures) captured by the sensing mechanism of a smartwatch and discriminated via different ANN architectures. The sequence of figures is modelled as a Markov chain, which works in a generative-plus-discriminative fashion to output the final prediction of the model. The authors also present a dataset collected specifically for this work, used to evaluate the algorithms included in the comparison. Results show remarkable accuracy, but are not compared to any existing state of the art due to limited related work.
The paper presents classification results for ballroom dancing movements, as measured by inertial sensors on a smartwatch. The motivation is mixed: as a guide to the dancers themselves, and as an automatic grading mechanism for competition judges. However, the sensors used can only measure a very limited aspect of the dance, and there is no 'gold standard' data describing the full motion of the dancers that could be compared with the limited data actually measured, in order to contrast variations in full-body motion with variations in the inertial sensors on the smartwatch. Describing the project as a study of whole-body movement seems a bit brave given that only hand movements are measured. The restriction to the yaw axis will make results very sensitive to how the hand is held.
Importance-based Multimodal Autoencoder
Integrating information from multiple modalities ( e.g. , verbal , acoustic and visual data ) into meaningful representations has seen great progress in recent years . However , two challenges are not sufficiently addressed by current approaches : ( 1 ) computationally efficient training of multimodal autoencoder networks which are robust in the absence of modalities , and ( 2 ) unsupervised learning of important subspaces in each modality which are correlated with other modalities . In this paper we propose the IMA ( Importance-based Multimodal Autoencoder ) model , a scalable model that learns modality importances and robust multimodal representations through a novel cross-covariance based loss function . We conduct experiments on MNIST-TIDIGITS , a multimodal dataset of spoken and image digits , and on IEMOCAP , a multimodal emotion corpus . The IMA model is able to distinguish digits from uncorrelated noise , and word-level importances are learnt that correspond to the separation between function and emotional words . The multimodal representations learnt by IMA are also competitive with state-of-the-art baseline approaches on downstream tasks . 1 INTRODUCTION . With the ever-increasing amount of heterogeneous multimedia content present on the internet , machine learning approaches have been applied to automated perception problems such as object recognition ( Krizhevsky et al. , 2012 ) , image captioning ( Vinyals et al. , 2015 ) and automatic language translation ( Choi et al. , 2018 ) . An important research direction is the problem of learning representations from multiple modalities which represent our primary channels of communication and sensation , such as vision or touch ( Baltrušaitis et al. , 2018 ) . With respect to this area of research , there are two major challenges in this research area which our paper addresses . 
The first is the design of encoder networks to enable learning and inference of multimodal representations in the absence of modalities. This is useful for scenarios such as sensor failure or imputation/bidirectional generation of missing modalities from any combination of the observed ones. The caveat is that to have this property, recent approaches such as the JMVAE-KL model (Suzuki et al., 2016) and MVAE (Wu & Goodman, 2018) have encoders with high complexity for a large number of modalities. When M is the number of modalities, JMVAE-KL needs 2^M sub-networks to cover every combination of input modalities, while MVAE requires only M sub-networks but an additional O(2^M) subsampled loss terms to handle missing modalities. The second challenge is that multimodal data, such as emotional spoken utterances or web images with captions, are often generated not only by an underlying shared latent factor but also by modality-specific private latent factors. For example, in spoken utterances the verbal modality (words) is generated not only by emotion but also by syntax and semantics. Function words such as I and the are mostly syntactic and do not relate to emotion; similarly, not all recorded audio frames are indicative of emotion. The inference of how relevant a sample in each modality is to the shared latent factor (subsequently referred to as importance) matters for downstream tasks. For the remainder of this paper, non-relevant samples are referred to as uncorrelated noise. In a supervised setting, the latent factors and modality importance weights can be learnt from task labels. When labels are absent, in the unsupervised scenario, for the purpose of this paper we define the concept of modality importances based on correlations between the latent factor and each modality.
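To make the scaling contrast concrete, a tiny arithmetic sketch of how many encoder sub-networks are needed to cover every subset of observed modalities, comparing the combinatorial (2^M) and per-modality (M) designs discussed above:

```python
# One encoder per modality combination (as in JMVAE-KL) needs 2^M networks,
# while one encoder per modality (as in MVAE or IMA) needs only M.
for M in (3, 5, 10):
    print(M, 2 ** M, M)  # modalities, combinatorial encoders, per-modality encoders
```

Already at 10 modalities the combinatorial design needs 1024 sub-networks versus 10, which is the gap the linear O(M) construction avoids.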
In the important subspace of each modality, the multimodal and unimodal representations are maximally correlated, indicating that samples in that modality subspace can be attributed to a shared latent factor and not an independent private one. In contrast, for unimportant samples in a modality the correlation is minimal. The main contributions of our proposed approach are two-fold. The first is a multimodal autoencoder framework whose training requires additional loss terms that are O(M), i.e., linear in the number of modalities, and which thus requires only M per-modality encoders to handle missing modalities. Computationally, this is advantageous compared to JMVAE-KL and MVAE, which require an exponential number of sub-networks and loss terms, respectively. Secondly, we define the concept of importance in an unsupervised setting, and propose novel cross-correlation based loss terms to learn important regions in each modality's representation space. The importances are modeled by separate unimodal networks referred to as importance networks. A hyper-parameter ρ_j for the j-th modality controls the integration of prior domain knowledge about the degree of importance in that modality. While not trained on any supervised labels, the learnt importances from IMA are analyzed quantitatively and found to correspond to the separation between digit vs. noise labels and emotion vs. neutral categories. 2 RELATED WORK. Following the great success of deep neural networks for representation learning, the research area of multimodal machine learning is gaining interest (Baltrušaitis et al., 2018). Our proposed IMA model is relevant to two main research areas in this domain: Inter-modality Correlation Learning and Efficient Multimodal VAEs. The idea of learning acoustic embeddings for words has also been explored in Wang et al. (2018) and Jung et al. (2019); however, we attempt to map words to their affective rather than phonetic representations.
In this section, we describe each area and conclude with the similarities and differences between the IMA model and prior approaches. Inter-modality Correlation Learning: There have been several approaches which measure correlations between modalities/sources of data to understand how observed data in each modality can be explained by shared underlying concepts. IBFA (Inter-Battery Factor Analysis), introduced by Tucker (1958), and its successor MBFA (Multi-Battery Factor Analysis) (Browne, 1980) are among the earliest proposed techniques to study shared factors between score sets from batteries of tests. DeepCCA (Deep Canonical Correlation Analysis), proposed by Benton et al. (2017), learns a deep projection of each modality in a bimodal understanding scenario so that the projections are maximally correlated, effectively extending the classical CCA technique (Knapp, 1978) to deep neural networks. Our proposed model extends these approaches to also detect important regions of each modality correlated with the shared latent factor. Efficient Multimodal VAEs: VAEs (Variational Autoencoders) have been applied to multimodal data for applications such as inference and bidirectional generation of modalities. This poses the major challenge of constructing encoders to model the latent posterior which are efficient in training/inference under any combination of input modalities. Recent work addresses this by focusing on factorized models for efficient inference. Vedantam et al. (2017) employ a product-of-experts decomposition with modality-specific inference networks to train image generation models. Wu & Goodman (2018) propose MVAE (Multimodal Variational Autoencoder), where the latent posterior is modeled with a parameter-shared product-of-experts network. Shi et al. (2019) propose MMVAE (mixture-of-experts multimodal variational autoencoder), where the posterior is a mixture of experts instead.
These approaches have been extended more recently , for example in multi-source neural variational inference ( Kurle et al. , 2019 ) where the multimodal posterior is constructed using learnt and integrated beliefs from multiple posteriors , each being informed by a different source . Sutter et al . ( 2020 ) introduce a novel Jensen-Shannon divergence based objective function which can be used to approximate both unimodal and joint multimodal posteriors . While existing approaches attempt to efficiently learn multimodal representations through posterior modeling , our proposed IMA model aligns modalities during autoencoder training for projection to a common space which facilitates inference even in absence of modalities . Only M encoders and O ( M ) loss terms are required by the IMA model for inference with M modalities . Prior work has also not focused sufficiently on unsupervised learning of modality importances ( through detection of subspaces maximally correlated with shared latent factors ) which we address in this paper . 3 MODEL DESCRIPTION . The IMA model consists of two main components : ( 1 ) the multimodal autoencoder and ( 2 ) the unimodal importance networks . In Figure 1 we have provided an overview diagram of the proposed model , including the loss functions utilized in training . Multimodal Autoencoder : Assume that the input training examples consist of multimodal data , where each multimodal sample is denoted as x = { x1 , x2 , ... xM } . There are M modalities and the training set consists of N multimodal samples . The input data xj in the j-th modality is passed through an encoder for that modality and its output is denoted as uj ( xj ) . The latent multimodal representation z could be modeled with different approaches for example , concatenated fusion with fully connected layers ( Suzuki et al. , 2016 ) . We wish to model the latent multimodal representation z ( x1 , x2 ... 
xM) with a pooling weighted by distinct modality importances y_j(x_j), as given by:

z(x_1, x_2, \dots, x_M) = \sum_{j=1}^{M} y_j(x_j)\, u_j(x_j), \qquad \sum_{j} y_j(x_j) = 1 \quad (1)

The multimodal representation z is passed through the decoder networks to obtain the reconstruction in the j-th modality, x̂_j. L^{(j)}_{rec} is the reconstruction loss for the j-th modality. L_{glob} is a global regularization term which encourages z to be centered with zero mean; it expands to L_{glob}(z) = \|z(x_1, x_2, \dots, x_M)\|^2. Multimodal alignment in IMA: Our proposed model has one sub-network per modality, where each j-th unimodal sub-network is trained using the autoencoder reconstruction losses and an additional alignment loss L^{(j)}_{align}. By forcing the multimodal representation z and each modality's view u_j(x_j) to be similar, we also force each unimodal sub-network to learn its contribution to the latent factor z during autoencoder training. Per sample, this requires only M sub-networks and M additional loss terms, instead of a random subset of 2^M losses for sub-sampled training as in the MVAE model (Wu & Goodman, 2018) or 2^M sub-networks as in the JMVAE-KL model (Suzuki et al., 2016). Similar to MVAE, this model can also learn and infer in the absence of modalities: z can be expressed as a weighted average of the unimodal representations that are present and incorporated into the overall loss. If the j-th modality is missing, we can set y_j(x_j) = 0 in Equation 1. For the j-th modality, L^{(j)}_{align} is the sum-of-squared-errors (SSE) between the multimodal representation z and the unimodal encoder output u_j(x_j):

L^{(j)}_{align}(z, u_j) = \|z(x_1, x_2, \dots, x_M) - u_j(x_j)\|^2

When there are N training samples, multimodal autoencoder learning optimizes the loss:

L_{auto} = \lambda_{glob} \sum_{i=1}^{N} L_{glob}(z_i) + \sum_{i=1}^{N} \sum_{j=1}^{M} \lambda^{(j)}_{rec} L^{(j)}_{rec}(x_{ij}, \hat{x}_{ij}) + \sum_{i=1}^{N} \sum_{j=1}^{M} \lambda^{(j)}_{align} L^{(j)}_{align}(z_i, u_{ij}) \quad (2)

\lambda_{align}, \lambda_{rec} and \lambda_{glob} are the hyper-parameter weights for the alignment, reconstruction and global multimodal regularization terms in Equation (2). Since it is not straightforward to tune such parameters for an unsupervised model, we start with equal weights for all losses and observe whether that is sufficient to minimize all terms simultaneously during training. For the MNIST-TIDIGITS and IEMOCAP experiments, we found that weighting them equally works, thus \lambda_{align} = \lambda_{rec} = \lambda_{glob} = 1.0. Importance Network Training: Section 1 explains that the observed data in each modality contains samples which do not correlate with the multimodal representation z. For the scope of this paper, we consider such samples as uncorrelated noise in the data. We assume the presence of a subspace R_j inside each j-th modality's representation space such that if x_j ∈ R_j, u_j is minimally correlated with z. We seek to learn weights y_{ij} ∈ [0, 1] for the i-th sample in the j-th modality, so that y_{ij} denotes the importance of each sample x_{ij} (i.e., the degree to which x_{ij} does not belong to R_j). A unimodal neural network, called the importance network, is trained to map from x_{ij} to y_{ij} for the j-th modality. A multimodal autoencoder trained solely to reconstruct all modality inputs would not learn y_{ij} corresponding to uncorrelated noise; this motivates explicitly defining an importance-based loss. Cross-correlation Losses: To train the importance networks, we use a loss function explicitly capturing the cross-correlation between z and u_j, weighted by y_j.
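A minimal NumPy sketch of the importance-weighted pooling in Equation 1 and the per-modality alignment term in Equation 2; the encoder outputs and importance scores are assumed precomputed, and all names here are hypothetical:

```python
import numpy as np

def weighted_pool(u, y):
    """Equation 1: fuse M unimodal encodings with normalized importances.

    u: (M, D) array of unimodal encoder outputs u_j(x_j)
    y: (M,) array of non-negative importances; a missing modality gets y_j = 0
    """
    y = np.asarray(y, dtype=float)
    w = y / y.sum()              # enforce sum_j y_j = 1
    return w @ u                 # z = sum_j w_j * u_j

def alignment_loss(z, u_j):
    """L_align^(j): SSE between the fused representation and one modality's view."""
    return np.sum((z - u_j) ** 2)

u = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # M = 3 modalities, D = 2
z = weighted_pool(u, [2.0, 1.0, 1.0])               # z = [0.75, 0.5]
losses = [alignment_loss(z, u_j) for u_j in u]      # one alignment term per modality
```

Setting a modality's importance to zero simply drops it from the weighted sum, which is exactly how the model handles a missing modality at inference time.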
Minimizing this cost is equivalent to enforcing zero correlation between u_j and z based on mini-batch statistics during training, where the size of a mini-batch is B samples. The cost can be derived from the definition of cross-correlation, with each i-th sample in the mini-batch not weighted equally. Recent work has also applied other independence criteria, such as MMD (Maximum Mean Discrepancy), which act as auxiliary losses in a variational framework (Louizos et al., 2015). Assuming the latent variables are Gaussian, independence and uncorrelatedness are equivalent, and we define the following alternative loss term based on the Frobenius norm of the cross-covariance between z and u_j, where y'_{ij} = 1 - y_{ij} and w'_{ij} = y'_{ij} / \sum_{i=1}^{B} y'_{ij}:

L^{(j)}_{corr} = \left\| \sum_{i=1}^{B} w'_{ij} (z_i - \mu_z)(u_{ij} - \mu_u)^T \right\|_F^2, \qquad \mu_z = \sum_{i=1}^{B} w'_{ij} z_i, \quad \mu_u = \sum_{i=1}^{B} w'_{ij} u_{ij} \quad (3)

Importance Priors: Learning to predict y_{ij} ≈ 1.0 for all i, j trivially drives L_{corr} to zero, so we need to regularize importance network training through an additional loss function which we refer to as L^{(j)}_{local}. This loss uses the hyper-parameter ρ_j ∈ [0, 1], which serves as a prior on how much of the j-th modality is corrupted by uncorrelated noise, and is defined as the KL divergence D_{KL}(Bernoulli(ȳ_j) || Bernoulli(ρ_j)), where ȳ_j = \sum_i y_{ij} / B is the average of y_{ij} computed over a mini-batch of size B. Enforcing L_{local} at the sample level would force y_{ij} = ρ_j for all samples, which is why we define the loss on a mini-batch instead. The importance network for each modality minimizes L_{imp}, the weighted sum of the cross-covariance based loss and the regularization term, as defined below; λ^{(j)}_{local} and λ^{(j)}_{corr} are hyper-parameter weights for each loss term.
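The weighted cross-covariance loss of Equation 3 can be sketched over a mini-batch as follows; the batch of representations is assumed precomputed, and all names are hypothetical:

```python
import numpy as np

def cross_cov_loss(z, u, y):
    """Equation 3: squared Frobenius norm of the weighted cross-covariance.

    z: (B, Dz) multimodal representations for a mini-batch
    u: (B, Du) unimodal representations for modality j
    y: (B,) importance scores y_ij in [0, 1]
    """
    w = 1.0 - y                  # y'_ij = 1 - y_ij
    w = w / w.sum()              # w'_ij, normalized over the batch
    mu_z = w @ z                 # weighted batch means
    mu_u = w @ u
    # C = sum_i w'_ij (z_i - mu_z)(u_ij - mu_u)^T, shape (Dz, Du)
    C = (w[:, None] * (z - mu_z)).T @ (u - mu_u)
    return np.sum(C ** 2)        # squared Frobenius norm

z_batch = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
noise = np.ones((3, 2))          # a 'modality' with no variation tied to z
print(cross_cov_loss(z_batch, noise, np.zeros(3)))  # 0.0: zero covariance with z
```

A perfectly correlated modality (e.g. `u = z`) yields a strictly positive loss, so driving the loss down pushes high weights `w'` onto the samples whose unimodal view co-varies least with z, i.e. low importance y for uncorrelated noise.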
L_{imp} = \sum_{i=1}^{N} \sum_{j=1}^{M} \lambda^{(j)}_{local} L^{(j)}_{local} + \sum_{i=1}^{N} \sum_{j=1}^{M} \lambda^{(j)}_{corr} L^{(j)}_{corr} \quad (4)

It is important to note the difference between y_{ij} and y'_{ij} = 1 - y_{ij} in terms of notation: y_{ij} is the importance of the i-th sample in the j-th modality and is used in the joint representation, whereas its complement y'_{ij} is used in importance network training. The intuition is that a model minimizing Equation 3 will tend to learn high values of y'_{ij} for maximally uncorrelated samples, which translates to low importance (y_{ij}) for those same samples.
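The Bernoulli KL prior term L_local described above, computed on the mini-batch average rather than per sample, can be sketched in a few lines (the helper name and epsilon clamp are our own additions):

```python
import numpy as np

def local_loss(y_batch, rho, eps=1e-8):
    """KL( Bernoulli(y_bar) || Bernoulli(rho) ), y_bar the mini-batch mean of y_ij.

    rho is the prior on the fraction of the modality that is NOT noise.
    eps clamps the arguments away from 0 and 1 to keep the logs finite.
    """
    y_bar = float(np.clip(np.mean(y_batch), eps, 1 - eps))
    rho = float(np.clip(rho, eps, 1 - eps))
    return (y_bar * np.log(y_bar / rho)
            + (1 - y_bar) * np.log((1 - y_bar) / (1 - rho)))
```

Because only the batch mean is penalized, individual samples are free to take importances far from ρ_j as long as the average matches the prior, which is exactly the motivation given above for defining the loss on a mini-batch.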
The paper proposes the IMA model, a scalable model that learns modality importances and robust multimodal representations through a novel cross-covariance based loss function. The proposed model performs unimodal inference in the absence of modalities and also addresses the problem of detecting important subspaces in each modality through weighted cross-covariance loss terms, which are minimized by unimodal importance networks. Results show that the IMA model is able to distinguish digits from uncorrelated noise, and that the word-level importances it learns correspond to the separation between function and emotional words. The multimodal representations learned by IMA are also competitive with state-of-the-art baseline approaches on downstream tasks.
SP:4f201b7e397d5a6e30ca7cea4b379baa1a046899
This paper presents a multimodal autoencoder framework that learns multimodal latent representations along with the importance of regions in each modality's representation space in an unsupervised fashion. Multimodal fusion algorithms either use complex architecture representations or disentangle joint representations to improve generative auto-encoding architectures using VAEs, GANs, WAEs and variants of these. This paper presents an elegant importance-based model and architecture that takes into account various local and joint loss functions along with alignment factors to represent the autoencoder model.
SP:4f201b7e397d5a6e30ca7cea4b379baa1a046899
SOLAR: Sparse Orthogonal Learned and Random Embeddings
1 INTRODUCTION . Embedding models have been the mainstay algorithms for several machine learning applications like Information Retrieval ( IR ) ( 8 ; 2 ) and Natural Language Processing ( NLP ) ( 21 ; 16 ; 31 ; 9 ) in the last decade . Embedding models are learned spin-offs from the low-rank approximation and Matrix Factorization techniques that dominated the space of recommendation systems prior to the emergence of Deep Learning ( DL ) . The primary purpose of these models is to project a rather simple and intuitive representation of an input to an abstract low-dimensional dense vector space . This projection enables two things : 1 ) tailoring the vectors to specific downstream applications and 2 ) pre-processing and storing documents or products as vectors , thereby making the retrieval process computationally efficient ( often matrix multiplication followed by sorting , which are conducive to modern hardware like GPUs ) . Besides the computational advantage , embedding models capture the semantic relationship between queries and products . A good example is product prediction for a service like Amazon . A user-typed query has to be matched against millions of products and the best search results have to be displayed within a fraction of a second . With naive product data , it would be impossible to figure out that products with ‘ aqua ’ in their titles are actually relevant to the query ‘ water ’ . Rather , if we can project all the products to a dense low-dimensional vector space , a query can also be projected to the same space and an inner product computation can be performed with all the product vectors ( usually a dot product ) . We can then display the products with the highest inner product . These projections can be learned to encapsulate semantic information and can be continually updated to reflect temporal changes in customer preference . 
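The retrieval pipeline described above (project query and products into a shared space, score each product by an inner product, sort, display the best) can be sketched as follows; the toy embeddings and function names are hypothetical:

```python
import numpy as np

def retrieve(query_vec, product_matrix, k=3):
    """Return indices and scores of the k products with the highest inner product."""
    scores = product_matrix @ query_vec   # one dot product per product
    top = np.argsort(-scores)[:k]         # indices sorted by descending score
    return top, scores[top]

# Toy 2-D product embeddings; in practice these are learned, dense, and cached.
products = np.array([[0.9, 0.1],   # product 0
                     [0.2, 0.8],   # product 1
                     [0.7, 0.3]])  # product 2
query = np.array([1.0, 0.0])       # projected query, e.g. 'water'
idx, s = retrieve(query, products, k=2)
print(list(idx))  # [0, 2]
```

The matrix multiply plus sort is exactly the step that becomes the bottleneck with millions of products, which motivates the sparse inverted-index alternative argued for in this paper.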
To the best of our knowledge, embedding models are the most prevalent ones in the industry, particularly for product and advertisement recommendations (Amazon's DSSM (23), Facebook's DLRM (22)). However, the scale of these problems has blown out of proportion in the past few years, prompting research in extreme classification tasks, where the number of classes runs into several million. Consequently, approaches like Tree-based Models (26; 15; 1) and Sparse-linear Models (36; 39; 38) have emerged as powerful alternatives. In particular, Tree-based models are much faster to train and evaluate compared to the other methods. However, most real Information Retrieval systems have dynamically changing output classes, and all the extreme classification models fail to generalize to new classes with limited training data (e.g., new products being added to the catalogue every day). This has caused the resurgence of embedding models for large-scale Extreme Classification (5; 29; 3; 7). Our Contributions: In this paper, we argue that sparse, high dimensional, orthogonal embeddings are superior to their dense low dimensional counterparts. In this regard, we make two interesting design choices: 1) We design the label embeddings (e.g., products in the catalogue) to be high dimensional, super-sparse, and orthogonal vectors. 2) We fix the label embeddings throughout the training process and learn only the input embeddings (one-sided learning), unlike typical dense models, where both the input and label embeddings are learned. Since we use a combination of Sparse, Orthogonal, Learned and Random embeddings, we code-name our method SOLAR. We provide a theoretical premise for SOLAR by showing that one-sided and two-sided learning are mathematically equivalent.
Our choices manifest in a five-fold advantage over prior methods:
• Matrix Multiplication to Inverted-Index Lookup: Sparse high dimensional embeddings can obtain a subset of labels using a mere inverted-index (8) lookup and restrict the computation and sorting to those labels. This enhances the inference speed by a large margin.
• Load-balanced Inverted Index: By forcing the label embeddings to be near-orthogonal and equally sparse (and fixing them), we ensure that all buckets in an inverted index are equally filled and we sample approximately the same number of labels for each input. This avoids the well-known imbalanced-buckets issue, where we sub-sample almost all the labels for popular inputs and end up hurting the inference speed.
• Lower Embedding Memory: Dense embedding models need to hold all label embeddings in GPU memory to perform real-time inference. This is not a scalable solution with millions of labels (which is a practical industry requirement). On the contrary, SOLAR needs to store only a few integer indices per label, which is very memory efficient with modern sparse array support on all platforms. These vectors can also be used with Locality Sensitive Hashing based indexing systems like FLASH (34).
• Zero-communication: Our unique construction of label embeddings enables distributed training over multiple GPUs with zero communication. Hence, we can afford to train on a 1.67M book recommendation dataset and the three largest extreme classification datasets, and we outperform the respective baselines on all 4 of them on both precision and speed.
• Learning to Hash: An Inverted-Index can be perceived as a hash table where all the output classes are hashed into a few buckets (18; 33). By fixing the label buckets and learning to map the inputs to the corresponding label buckets, we are doing a 'partial learning to hash' task in hindsight (more on this in Appendix A).
2 RELATED WORK.
SNRM: While there has been a plethora of dense embedding models, there is only one prior work, called SNRM (Standalone Neural Ranking Model) (40), that trains sparse embeddings for the task of suggesting documents relevant to an input query (the classic web search problem). In SNRM, the authors propose to learn a high dimensional output layer and sparsify it using a typical L1 or L2 regularizer. However, imposing sparsity through regularization causes a lopsided inverted-index with imbalanced loads and high inference times. As we see in our experiments later, these issues lead to the poor performance of SNRM on our 1.67M product recommendation dataset. GLaS: Akin to SOLAR's construction of near-orthogonal label embeddings, another recent work from Google (11) also explores the idea of enforcing orthogonality to make the labels distinguishable and thereby easier for the classifier to learn. The authors enforce it in such a way that frequently co-occurring labels have high cosine similarity and the ones that rarely co-occur have low cosine similarity. This imposition was called a Graph Laplacian and Spreadout (GLaS) regularizer. However, this was done entirely in the context of dense embeddings and cannot be extended to our case due to the differentiability issue. We show the comparison of SOLAR against dense embedding models with and without the GLaS regularizer later in section 5.1. Fix your Classifier: The idea of fixing label vectors was explored in (13; 24). In (13), the authors propose to initialize the last weight matrix of popular CNN architectures with Hadamard matrices and only train the preceding layer weights. With minimal loss in precision, the number of trainable parameters can be greatly reduced. However, 'Fix your Classifier' does not scale to the huge number of labels in the tasks of our interest, as the model cannot be elegantly distributed across independent workers.
As shown later in section 5.2, we observe huge performance degradation too with network configurations similar to SOLAR. Sparsifying Dense Embeddings: Several prior works have proposed to project pre-trained dense embeddings to sparse high dimensional vectors using techniques like over-complete dictionaries (10), denoising k-sparse auto-encoders (28; 19), and permutation maps on unit spheres (4). While all these approaches vindicate the superiority of sparse vectors, none of them learn high dimensional sparse vectors end-to-end, and they have been confined to tasks with far fewer labels. All other embedding models (5; 29; 3; 7) primarily optimize a pairwise similarity-based loss function for query-label pairs, differing in the choice of projection functions. Pairwise training needs negative sampling (12) to avoid degenerate solutions and has large training times, as the number of training instances (query-label pairs, both relevant and irrelevant) effectively blows up. SOLAR, in addition to being sparse, also solves these challenges by learning a classifier instead of a similarity-based loss function, encapsulating all labels of an input at once. Since a classifier has intrinsic negative sampling, the number of effective training samples is much lower. 3 OUR METHOD: SOLAR. In this section, we describe in detail the workflow of our algorithm SOLAR. First, we discuss the pre-processing phase, where we construct random sparse label vectors (figure 1) and an inverted-index of the labels (figure 2). Then, we move to the training phase, where we split the label vectors into independent contiguous components and train each of them in parallel (figure 1). In the end, we show the inference procedure, where we obtain the piece-wise query vector in parallel and sparsify it by retaining only the top buckets from each piece.
We then look up the saved inverted index to retrieve and score the candidate labels, sorting them to predict the best ones (figure 3). Notations: N denotes the total number of labels. D is the sparse vector dimension. K is the number of non-zeros in label vectors. B = D/K is the number of buckets in each component of the vector. 1) Pre-processing: Construction of Label Embeddings and Inverted-Index: As presented in figure 1, let there be N labels (N is large, in the order of a million). We intend to construct K-sparse (having K non-zero indices) high dimensional vectors for each label. As noted earlier, a large output dimension makes training a cross-entropy loss prohibitively expensive. Therefore, inspired by recent work on zero-communication Model Parallelism (20), we partition the large dimensional vector into K subsets and train each one independently. Each subset of the partition comprises B buckets with exactly one non-zero index. The colored blocks on the right side in figure 1 denote the non-zero indices for each label vector. To adhere to our design principle of load-balancing, for each label, we pick the non-zero index uniformly at random in the range of B for each of the K components. To be precise, for any label, we randomly generate K integers in the range of B. In most of our experiments, we set K = 16 and B = 30K. This makes the overall dimension D = B × K = 480K, with a sparsity ratio of 0.000533 (0.0533%). As an example, let the generated integers be {18189, 8475, 23984, ...., 17924, 459}. Then the non-zero indices of the overall vector are simply B-shifted, i.e., {18189, 38475, 83984, ...., 437924, 450459}. Although any random number generator would work fine, we pick our non-zero indices using sklearn's murmurhash function. It is rather straightforward to see that these vectors are near-orthogonal.
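A minimal sketch of this label-vector construction (a seeded PRNG stands in for the sklearn murmurhash the paper uses; the function name is ours):

```python
import random

def make_label_indices(label_id, K=16, B=30000, seed=0):
    """Return the K non-zero indices of a label's K-sparse vector.
    One bucket in [0, B) is drawn per chunk, then shifted by k*B so
    the k-th index lands in the k-th B-sized chunk of the D = K*B
    dimensional vector (the 'B-shift' from the text)."""
    rng = random.Random(seed * 1_000_003 + label_id)  # deterministic per label
    return [k * B + rng.randrange(B) for k in range(K)]
```

Because the draw is deterministic per label, the "embedding table" never needs to be stored densely; each label costs only K integers.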
The expected dot-product between any two label vectors l_i and l_j is

E(l_i^T * l_j) = Σ_k p(h_k(i) = h_k(j)) = K/B ≈ 0. (1)

[Figure 2: Inverted-Index construction for the label vectors shown in figure 1. We construct one index for each of the K chunks. Each bucket will have the same number of labels by design (Load-Balanced).]

Figure 2 shows the toy inverted index for the label vectors shown in figure 1. Since we train K independent models, each model is expected to predict its own 'buckets of high relevance'. Hence we maintain K separate inverted-indexes. For any input, we accumulate the candidates from each of the K inverted-indexes and take a union of them for scoring and sorting. It is noteworthy that two unrelated labels might be pooled into the same bucket. While this sounds rather jarring from a learnability perspective, it is essential for the load-balance and also for learning a positive-only association of input tokens and true-label buckets (more on this in Appendix B). 2) Training: Figure 1 also depicts the training process (on the left side). In a multilabel learning problem, each input has a variable number of true labels. We look up all the true label vectors for an input and perform an 'OR' operation over the respective sparse label vectors. Please note that at the level of sparsity we are dealing with, even with zero pairwise collisions among the non-zero indices of label vectors, we still have a super-sparse representation for the resultant 'OR' vector.
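The 'OR' construction of training targets can be sketched as below; it also performs the K-way split described next. Here `label_indices` maps each label to its K shifted non-zero indices, as in figure 1; the names are illustrative, not the paper's code:

```python
def training_targets(true_labels, label_indices, K, B):
    """Build the K few-hot B-dimensional target vectors for one input:
    OR together the K-sparse vectors of all its true labels, then
    split the result into K independent per-chunk targets."""
    targets = [[0.0] * B for _ in range(K)]
    for label in true_labels:
        for k, idx in enumerate(label_indices[label]):
            targets[k][idx - k * B] = 1.0  # OR: a repeated hit stays 1
    return targets
```

Each of the K classifiers is then trained against its own B-dimensional few-hot target, with no cross-model communication.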
We partition this combined-label vector into K parts just like before and train individual classifiers (simple feed-forward neural networks with 1 hidden layer) with a binary cross entropy loss against the B dimensional few-hot vectors. Please note that these models do not communicate with each other. Since there is no overhead of parameter sharing, training can be embarrassingly parallelized across multiple GPUs (Zero-Communication Model Parallelism). Input Feature Hashing: Usually, naive input tokenization like bag-of-words (BoW) leads to a very high dimensional input. This in turn makes the first layer of the network intractable. Hence, an elegant solution for this problem is to hash the token indices to a lower dimension (called Feature Hashing (35)). In our case, we use a different random seed for each of the K models and hash the input indices to a feasible range. Although we lose some input information in each individual model (due to feature hash collisions), the variation in random seeds minimizes this loss when all the models are collectively taken into account. 3) Inference: One of the key advantages of SOLAR over dense embedding models is faster inference. As mentioned earlier, the primary reason for this is the replacement of matrix multiplication and sorting with simple lookups and aggregations. This workflow is depicted in figure 3. Given a tokenized input, we pass it through all the K models in parallel and obtain the respective B dimensional probability vectors. We then sort these probabilities and obtain the top-m (m varies between 50 and 100) buckets for each model. These m × K integers constitute the non-zero indices of the SOLAR embedding for the input query. We can query the K inverted-indexes with the respective top-m buckets for candidates. A union of all these candidates is our target set of labels.
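Candidate retrieval from the K inverted-indexes, together with the frequency filter d motivated in the next paragraph, might look like this sketch (`inverted_indexes[k]` maps a bucket id to the labels hashed there; names are ours):

```python
from collections import Counter

def retrieve_candidates(top_buckets, inverted_indexes, d=1):
    """top_buckets[k] holds the top-m bucket ids predicted by model k.
    Union the candidate labels over all K models, counting in how many
    models' top buckets each label appeared, and keep only the labels
    seen at least d times (the noisy-label filter)."""
    counts = Counter()
    for k, buckets in enumerate(top_buckets):
        seen = set()  # count each model at most once per label
        for b in buckets:
            seen.update(inverted_indexes[k].get(b, []))
        counts.update(seen)
    return {label for label, c in counts.items() if c >= d}
```

Note the per-model `set`: a label occupies exactly one bucket per chunk, so the count is the number of models that voted for it, which is what the threshold d compares against.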
For each of these candidates, we sum up the predicted probability scores from the corresponding buckets and sort for the top results. Filtering Noisy Labels: Random initialization of embeddings will inevitably result in unrelated labels being assigned the same bucket in some of the K subsets of the partition. Hence, a lot of the candidate labels obtained from the inverted-indexes would be irrelevant to the query. However, it is unlikely that an unrelated label would appear in the top-m buckets in more than one of the K models. Hence, we can omit the labels that appear in the top-m buckets fewer than a threshold d times. The larger d is, the more precise the retrieved candidates will be. However, setting a large d would mean fewer candidates and thereby lower recall. Later in section 5.1 (in table 3), we experiment with multiple values of d and corroborate that d = K/2 is an optimal trade-off between precision and candidate set size. Alternate Scoring Methods: It is quite customary to sum up the log of probabilities across the K models for every candidate label, as it represents the log-likelihood. Another potential strategy is to sum the logit values directly (as logits have a wider range and are more expressive). In our case, we chose to assign the sum of predicted probabilities to each candidate (as the sum is just a scaled version of the mean). The rationale behind this is shown in the following analysis, assuming a multi-class classification setting. For an input x, denote Pr(y = i) = p_i, i ∈ {0, 1, 2, ..., N}. Let Pr(y = b | θ_j) = P^j_b, b ∈ {0, 1, 2, ..., B} and j ∈ {0, 1, ..., K}. Since our hash functions are random, we have P^j_{h(i)} = p_i + Σ_{k ≠ i} 1[h(k) = h(i)] p_k. Hence E(P^j_{h(i)}) = p_i + (1/B) Σ_{k ≠ i} p_k = p_i + (1 − p_i)/B.
After rearrangement, we get

p_i = B · E(P^j_{h(i)}) / (B − 1) − 1 / (B − 1).

This analysis shows that the original label probabilities are linear and monotonic in the expected values of the respective bucket probabilities. And since the expected value is proportional to the sum across all K models, summing probabilities is a principled scheme to preserve ranking. While we primarily report the precision with the sum of probabilities, we also compare it against the other two heuristics (summing log-probabilities and logits) in our experiments (in table 2). Time-Complexity: Since our inverted indexes are load-balanced, each bucket accommodates N/B labels. Hence, the top-m buckets contribute mN/B candidates. The candidates retrieved from the K models are bound to have some overlapping labels, accompanied by some unrelated counterparts from the respective buckets (since we randomly initialized the label vectors). In the worst case of zero overlaps, the total number of candidates would be KmN/B. The aggregation of scores and frequencies of occurrence in the top-m buckets is a by-product of the candidate selection step. We then omit the candidates with frequency < d. Finally, we sort the aggregated scores for the remaining candidates. Including the two sorting steps, 1) to get the top-m buckets and 2) to get the top 5 labels, the total number of operations performed is B log m + KmN/B + (KmN/B) log 5. A dense embedding model, on the other hand, needs NmK + N log 5 steps (assuming dense vectors of dimension d = mK, since SOLAR also has mK non-zeros after inference). For the scale of N and mK we are dealing with (N = 1M, K = 16, m = 50, B = 30K), SOLAR supposedly has B/(1 + log 5) times faster inference. However, matrix multiplications enjoy specialized hardware acceleration, unlike SOLAR, and the practical gains are in the order of 5x (more in section 5.1).
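The rearranged identity is easy to verify numerically; a hypothetical helper pair following the derivation (function names are ours):

```python
def expected_bucket_prob(p_i, B):
    # E(P^j_{h(i)}) = p_i + (1 - p_i) / B, from the derivation above
    return p_i + (1.0 - p_i) / B

def recover_label_prob(e, B):
    # p_i = B * E / (B - 1) - 1 / (B - 1), the rearranged identity
    return B * e / (B - 1) - 1.0 / (B - 1)
```

Because the map from p_i to E(P^j_{h(i)}) is linear with positive slope (B − 1)/B, ranking candidates by summed bucket probabilities preserves the ranking by the true label probabilities.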
This submission addresses the problem of learning document embeddings for document retrieval/recommendation tasks. Such tasks are characterized by a large number of documents and a large set of semantic class labels. In contrast to the now-standard approach of representing documents and their labels as dense low dimensional embeddings, this work proposes to use sparse high dimensional embeddings for documents/labels and claims that the proposed approach has certain advantages over the state of the art.
SP:b3c08f134af295f65238e3ce338e941733858d7e
SOLAR: Sparse Orthogonal Learned and Random Embeddings
1 INTRODUCTION . Embedding models have been the mainstay algorithms for several machine learning applications like Information Retrieval ( IR ) ( 8 ; 2 ) and Natural Language Processing ( NLP ) ( 21 ; 16 ; 31 ; 9 ) in the last decade . Embedding models are learned spin-offs from the low-rank approximation and Matrix Factorization techniques that dominated the space of recommendation systems prior to the emergence of Deep Learning ( DL ) . The primary purpose of these models is to project a rather simple and intuitive representation of an input to an abstract low-dimensional dense vector space . This projection enables two things : 1 ) tailoring the vectors to specific downstream applications and 2 ) pre-processing and storing documents or products as vectors , thereby making the retrieval process computationally efficient ( often matrix multiplication followed by sorting , which are conducive to modern hardware like GPUs ) . Besides the computational advantage , embedding models capture the semantic relationship between queries and products . A good example is product prediction for a service like Amazon . A user-typed query has to be matched against millions of products and the best search results have to be displayed within a fraction of a second . With naive product data , it would be impossible to figure out that products with ‘ aqua ’ in their titles are actually relevant to the query ‘ water ’ . Rather , if we can project all the products to a dense low-dimensional vector space , a query can also be projected to the same space and an inner product computation can be performed with all the product vectors ( usually a dot product ) . We can then display the products with the highest inner product . These projections can be learned to encapsulate semantic information and can be continually updated to reflect temporal changes in customer preference . 
To the best of our knowledge , embedding models are the most prevalent ones in the industry , particularly for product and advertisement recommendations ( Amazon ’ s - DSSM ( 23 ) , Facebook ’ s DLRM ( 22 ) ) . However , the scale of these problems has blown out of proportion in the past few years prompting research in extreme classification tasks , where the number of classes runs into several million . Consequentially , approaches like Tree-based Models ( 26 ; 15 ; 1 ) and Sparse-linear Models ( 36 ; 39 ; 38 ) have emerged as powerful alternatives . Particularly , Tree-based models are much faster to train and evaluate compared to the other methods . However , most real Information Retrieval systems have dynamically changing output classes and all the extreme classification models fail to generalize to new classes with limited training data ( e.g. , new products being added to the catalogue every day ) . This has caused the resurgence of embedding models for large scale Extreme Classification ( 5 ; 29 ; 3 ; 7 ) . Our Contributions : In this paper , we argue that sparse , high dimensional , orthogonal embeddings are superior to their dense low dimensional counterparts . In this regard , we make two interesting design choices : 1 ) We design the label embeddings ( e.g.products in the catalogue ) to be high dimensional , super-sparse , and orthogonal vectors . 2 ) We fix the label embeddings throughout the training process and learn only the input embeddings ( one-sided learning ) , unlike typical dense models , where both the input and label embeddings are learned . Since we use a combination of Sparse , Orthogonal , Learned and Random embeddings , we code-name our method SOLAR . We provide a theoretical premise for SOLAR by showing that one-sided and two-sided learning are mathematically equivalent . 
Our choices manifest in a five-fold advantage over prior methods : • Matrix Multiplication to Inverted-Index Lookup : Sparse high dimensional embeddings can obtain a subset of labels using a mere inverted-index ( 8 ) lookup and restrict the computation and sorting to those labels . This enhances the inference speed by a large margin . • Load-balanced Inverted Index : By forcing the label embeddings to be near-orthogonal and equally sparse ( and fixing them ) , we ensure that all buckets in an inverted index are equally filled and we sample approximately the same number of labels for each input . This omits the well-known imbalanced buckets issue where we sub-sample almost all the labels for popular inputs and end up hurting the inference speed . • Lower Embedding Memory : Dense embedding models need to hold all label embeddings in GPU memory to perform real-time inference . This is not a scalable solution with millions of labels ( which is a practical industry requirement ) . On the contrary , SOLAR needs to store only few integers indices per label which is very memory efficient with modern sparse array support on all platforms . These vectors can also be used with Locality Sensitive Hashing based indexing systems like FLASH ( 34 ) . • Zero-communication : Our unique construction of label embeddings enables distributed training over multiple GPUs with zero-communication . Hence , we can afford to train on a 1.67 M book recommendation dataset and three largest extreme classification datasets and outperform the respective baselines on all 4 of them on both precision and speed . • Learning to Hash : An Inverted-Index can be perceived as a hash table where all the output classes are hashed into a few buckets ( 18 ; 33 ) . By fixing the label buckets and learning to map the inputs to the corresponding label buckets , we are doing a ‘ partial learning to hash ’ task in the hindsight ( more on this in Appendix A ) . 2 RELATED WORK . 
SNRM : While there have been a plethora of dense embedding models , there is only one prior work called SNRM ( Standalone Neural Ranking Model ) ( 40 ) that trains sparse embeddings for the task of suggesting documents relevant to an input query ( classic web search problem ) . In SNRM , the authors propose to learn a high dimensional output layer and sparsify it using a typical L1 or L2 regularizer . However , imposing sparsity through regularization causes lopsided inverted-index with imbalanced loads and high inference times . As we see in our experiments later , these issues lead to the poor performance of SNRM on our 1.67M product recommendation dataset . GLaS : Akin to SOLAR ’ s construction of near-orthogonal label embeddings , another recent work from Google ( 11 ) also explores the idea of enforcing orthogonality to make the labels distinguishable and thereby easier for the classifier to learn . The authors enforce it in such a way that frequently co-occurring labels have high cosine-similarity and the ones that rarely co-occur have low cosine similarity . This imposition was called a Graph Laplacian and Spreadout ( GLaS ) regularizer . However , this was done entirely in the context of dense embeddings and can not be extended to our case due to the differentiability issue . We show the comparison of SOLAR against dense embedding models with and without GLaS regularizer later on in section 5.1 . Fix your Classifier : The idea of fixing label vectors was explored in ( 13 ; 24 ) . In ( 13 ) , the authors propose to initialize the last weight matrix of popular CNN architectures with Hadamard matrices and only train the preceding layer weights . With minimal loss in precision , the number of trainable parameters can be greatly reduced . However , ‘ Fix your Classifier ’ does not scale to the the huge number of labels in the tasks of our interest as the model can not be elegantly distributed across independent workers . 
As shown later in section 5.2 , we observe huge performance degradation too with similar network configurations as SOLAR . Sparsifying Dense Embeddings : Several prior works have proposed to project pre-trained dense embeddings to a sparse high dimensional vectors using techniques like over-complete dictionaries ( 10 ) , denoising k-sparse auto-encoders ( 28 ; 19 ) , permutation maps on unit spheres ( 4 ) . While all these approaches vindicate the superiority of sparse vectors , none of them learn end-to-end high dimensional sparse vectors and have been confined to tasks with lot fewer labels . All other embedding models ( 5 ; 29 ; 3 ; 7 ) primarily optimize a pairwise similarity-based loss function for query-label pairs , differing in the choice of projection functions . Pairwise training needs negative sampling ( 12 ) to avoid degenerate solutions and has large training times as the number of training instances ( query-label pairs , both relevant and irrelevant ) effectively blows up . SOLAR , in addition to being sparse , also solves these challenges by learning a classifier instead of a similarity based loss function , encapsulating all labels of an input at once . Since a classifier has intrinsic negative sampling , the number of effective training samples is much lower . 3 OUR METHOD : SOLAR . In this section , we describe in detail the workflow of our algorithm SOLAR . First , we will discuss the pre-processing phase where we construct random sparse label vectors ( figure 1 ) and an inverted-index of the labels ( figure 2 ) . Then , we move to the training phase where we split the label vectors into independent contiguous components and train each of them in parallel ( figure 1 ) . In the end , we show the inference procedure where we obtain the piece-wise query vector in parallel and sparsify by retaining only top buckets from each piece . 
We then look up the saved inverted index to retrieve and score the candidate labels to sort and predict the best ones ( figure 3 ) . Notations : N denotes the total number of labels . D is the sparse vector dimension . K is the number of non-zeros in label vectors . B = DK is the number of buckets in each component of the vector . 1 ) Pre-processing : Construction of Label Embeddings and Inverted-Index : As presented in figure 1 , let there be N labels ( N is large , in the order of a million ) . We intend to construct K-sparse ( having K non-zero indices ) high dimensional vectors for each label . As noted earlier , a large output dimension makes training a cross-entropy loss prohibitively expensive . Therefore , inspired by recent work on zero-communication Model Parallelism ( 20 ) , we partition the large dimensional vector into K subsets and train each one independently . Each subset of the partition comprises of B buckets with exactly one non-zero index . The colored blocks on the right side in figure 1 denote the non-zero indices for each label vector . To adhere to our design principle of load-balancing , for each label , we pick the non-zero index randomly in the range of B for each of the K components . To be precise , for any label , we randomly generateK integers in the range ofB . As in most of our experiments , setK = 16 andB = 30K . This makes the overall dimension D = B ×K = 480K and a sparsity ratio of 0.000533 ( 0.0533 % ) . As an example , let the generated integers be { 18189 , 8475 , 23984 , .... , 17924 , 459 } . Then the non-zero indices of the overall vector are simply B-shifted , i.e. , { 18189 , 38475 , 83984 , .... , 437924 , 450459 } . Although any random number generator would work fine , we pick our non-zero indices using sklearn ’ s murmurhash function . It is rather straightforward to see that these vectors are near-orthogonal . 
The expected dot-product between any two label vectors li and lj is , E ( li T ∗ lj ) = ∑ k p ( hk ( i ) = hk ( j ) ) = K B ≈ 0 . ( 1 ) B Labels 0 1 2 3 Inverted Index0 0 N-1 1 2 3 N-2 N-4 N-3 B Labels 0 1 2 3 Inverted Index1 0 N-1 2 3 N-4 N-3 1 N-2 B Labels 0 1 2 3 Inverted Indexk-2 N-1 3 N-2N-4 1 0 N-3 2 B Labels 0 1 2 3 Inverted Indexk-1 N-13 N-2 N-4 1 0 2 N-3 Figure 2 : Inverted-Index construction for the label vectors shown in figure 1 . We construct one index for each of the K chunks . Each bucket will have the same number of labels by design ( Load-Balanced ) . Figure 2 shows the toy inverted index for the label vectors shown in figure 1 . Since we train K independent models , each model is expected to predict its own ‘ buckets of high relevance ’ . Hence we maintain K separate inverted-indexes . For any input , we accumulate the candidates from each of the K inverted-indexes and take a union of them for scoring and sorting . It is noteworthy that two unrelated labels might be pooled into the same bucket . While this sounds rather jarring from a learnability perspective , it is essential for the load-balance and also to learn a positive-only association of input tokens and true-label buckets ( more on this Appendix B ) . 2 ) Training : Figure 1 also depicts the training process ( on the left side ) . In a multilabel learning problem , each input has a variable number of true labels . We lookup all the true label vectors for an input and perform an ‘ OR ’ operation over the respective sparse label vectors . Please note that at the level of sparsity we are dealing , even with zero pairwise collisions among the non-zero indices of label vectors , we still have a super-sparse representation for the resultant ‘ OR ’ vector . 
We partition this combined-label vector into K parts just like before and train individual classifiers ( simple feed for- ward neural networks with 1 hidden layer ) with a binary cross entropy loss function with the B dimensional few-hot vectors . Please note that these models do not communicate with each other . Since there is no overhead of parameter sharing , training can be embarrassing parallellized across multiple GPUs ( Zero-Communication Model Parallellism ) . Input Feature Hashing : Usually , naive input tokenization like bag-of-words ( BoW ) leads to a very high dimensional input . This in turn makes the first layer of the network intractable . Hence , an elegant solution for this problem is to hash the token indices to a lower dimension ( called Feature Hashing ( 35 ) ) . In our case , we use a different random seed for each of the K models and hash the input indices to a feasible range . Although we lose some input-information in each individual model ( due to feature hash collisions ) , the variation in random seed minimizes this loss when all the models are collectively taken into account . 3 ) Inference One of the key advantages of SOLAR over dense embedding models is faster inference . As mentioned earlier , the primary reason for this is the replacement of matrix-multiplication and sorting with simple lookups and aggregations . This workflow is depicted in figure 3 . Given a tokenized input , we pass it through all the K models in parallel and obtain the respective B dimensional probability vectors . We then sort these probabilities and obtain the top-m ( m varies among 50 and 100 ) buckets for each model . These m×K integers constitute the non-zero indices of the SOLAR embedding for the input query . We can query the K inverted-indexes with the respective top-m buckets for candidates . A union of all these candidates is our target set of labels . 
For each of these candidates, we sum up the predicted probability scores from the corresponding buckets and sort for the top results.

Filtering Noisy Labels: Random initialization of embeddings will inevitably result in unrelated labels being assigned to the same bucket in some of the K subsets of the partition. Hence, many of the candidate labels obtained from the inverted indexes will be irrelevant to the query. However, it is unlikely that an unrelated label appears in the top-m buckets of more than one of the K models. Hence, we can omit the labels that appear in the top-m buckets fewer than a threshold of d times. The larger d is, the more precise the retrieved candidates will be. However, a large d means fewer candidates and thereby lower recall. Later, in Section 5.1 (Table 3), we experiment with multiple values of d and corroborate that d = K/2 is an optimal trade-off between precision and candidate-set size.

Alternate Scoring Methods: It is customary to sum the log-probabilities across the K models for every candidate label, as this represents the log-likelihood. Another potential strategy is to sum the logit values directly (as logits have a wider range and are more expressive). In our case, we chose to assign each candidate the sum of its predicted probabilities (the sum is just a scaled version of the mean). The rationale behind this is shown in the following analysis, assuming a multi-class classification setting. For an input x, denote Pr(y = i) = p_i, i ∈ {1, ..., N}. Let Pr(y = b | θ_k) = P^k_b, b ∈ {1, ..., B} and k ∈ {1, ..., K}. Since our hash functions are random, we have

P^k_{h(i)} = p_i + \sum_{j ≠ i} 1[h(j) = h(i)] p_j.

Hence E(P^k_{h(i)}) = p_i + (1/B) \sum_{j ≠ i} p_j = p_i + (1 − p_i)/B.
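The retrieve-filter-score pipeline above (union of top-m buckets, frequency threshold d, sum-of-probabilities ranking) can be sketched as follows; the function name and data layout are illustrative assumptions:

```python
from collections import defaultdict
import numpy as np

def retrieve_and_score(probs, inverted, top_m, d):
    """Candidate generation and scoring. `probs` is a (K, B) array of
    per-model bucket probabilities; `inverted` holds the K inverted indexes
    (bucket -> labels). Take the top-m buckets per model, union their labels,
    drop labels that appear in fewer than d models' top-m buckets, and rank
    the survivors by summed bucket probability."""
    score = defaultdict(float)
    freq = defaultdict(int)
    for k in range(len(inverted)):
        top_buckets = np.argsort(probs[k])[-top_m:]   # top-m buckets of model k
        for b in top_buckets:
            for lbl in inverted[k][b]:
                score[lbl] += float(probs[k][b])      # sum of probabilities
                freq[lbl] += 1                        # occurrence count
    candidates = [l for l in score if freq[l] >= d]   # filter noisy labels
    return sorted(candidates, key=lambda l: -score[l])
```

Setting `d = K // 2` reproduces the precision/recall trade-off the paper reports as optimal.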
After rearrangement, we get

p_i = B · E(P^k_{h(i)}) / (B − 1) − 1 / (B − 1).

This analysis shows that the original label probabilities are linearly monotonic in the expected values of the respective bucket probabilities. And since the expected value is proportional to the sum across all K models, summing probabilities is a principled scheme for preserving ranking. While we primarily report precision with the sum of probabilities, we also compare it against the other two heuristics (summing log-probabilities and logits) in our experiments (Table 2).

Time Complexity: Since our inverted indexes are load-balanced, each bucket accommodates N/B labels. Hence, the top-m buckets contribute mN/B candidates. The candidates retrieved from the K models are bound to have some overlapping labels, accompanied by some unrelated counterparts from the respective buckets (since we randomly initialized the label vectors). In the worst case of zero overlaps, the total number of candidates would be KmN/B. The aggregation of scores and frequencies of occurrence in top-m buckets is a by-product of the candidate selection step. We then omit the candidates with frequency < d. Finally, we sort the aggregated scores for the remaining candidates. Including the two sorting steps, 1) to get the top-m buckets and 2) to get the top 5 labels, the total number of operations performed is B log m + KmN/B + (KmN/B) log 5. A dense embedding model, on the other hand, needs NmK + N log 5 steps (assuming dense vectors of dimension d = mK, since SOLAR also has mK non-zeros after inference). For the scale of N and mK we are dealing with (N = 1M, K = 16, m = 50, B = 30K), SOLAR is nominally B/(1 + log 5) times faster at inference. However, matrix multiplications enjoy specialized hardware acceleration unlike SOLAR, and the practical gains are on the order of 5× (more in Section 5.1).
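The linear relationship between the true label probabilities and the expected bucket probabilities can be checked numerically. This sketch idealizes the hash as i.i.d. uniform bucket assignment (the paper's indexes are load-balanced permutations, but the expectation argument is the same):

```python
import numpy as np

# Monte-Carlo check of p_i = B * E(P_{h(i)}) / (B - 1) - 1 / (B - 1).
rng = np.random.default_rng(0)
N, B, trials = 200, 20, 3000
p = rng.dirichlet(np.ones(N))            # ground-truth label probabilities
E = np.zeros(N)
for _ in range(trials):
    h = rng.integers(0, B, size=N)       # random hash: label -> bucket
    bucket_mass = np.bincount(h, weights=p, minlength=B)
    E += bucket_mass[h]                  # bucket probability seen by each label
E /= trials                              # empirical E(P_{h(i)})
recovered = B * E / (B - 1) - 1.0 / (B - 1)
# `recovered` matches p up to Monte-Carlo noise, so ranking by summed
# bucket probabilities preserves the ranking by true label probabilities.
```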
The paper studies the problem of document retrieval using embedding based models. It argues that performing near-neighbour search on a large number of dense embeddings hurts performance and accuracy. As an alternative, the paper proposes SOLAR (SPARSE ORTHOGONAL LEARNED AND RANDOM EMBEDDINGS), a model which uses high-dimensional, ultra sparse embeddings, on which near-neighbour search can be done using simple lookup operations. In SOLAR the document labels are divided into equal chunks of sparse vector, and independent models are learned for mapping the query to each chunk. Hence, SOLAR could be trained on multiple GPUs in an embarrassingly parallel way without requiring any communication between the GPUs. The paper demonstrates the effectiveness of SOLAR by comparing it against strong baselines on various recommendation and extreme classification datasets. It also provides theoretical justification for SOLAR by showing how “one-sided” learning (i.e. fixing label embedding but learning mapping from query to label) is mathematically equivalent to “two-sided” learning (i.e. jointly mapping the label and query to a common space).
SP:b3c08f134af295f65238e3ce338e941733858d7e
Generative Scene Graph Networks
1 INTRODUCTION . Learning to discover and represent objects purely from observations is at the core of human cognition ( Spelke & Kinzler , 2007 ) . Recent advances in unsupervised object-centric representation learning have enabled decomposition of scenes into objects ( Greff et al. , 2019 ; Lin et al. , 2020b ; Locatello et al. , 2020 ) , inference and rendering of 3D object models ( Chen et al. , 2020 ) , and object tracking and future generation ( Crawford & Pineau , 2019a ; Jiang et al. , 2020 ; Lin et al. , 2020a ) . These neuro-symbolic approaches , where the discreteness of discovered objects provides the symbolic representation , facilitate various desired abilities such as out-of-distribution generalization , relational reasoning , and causal inference . In this paper , we seek to further discover and represent the structure within objects without supervision . Our motivation is that natural scenes frequently contain compositional objects—objects that are composed of primitive parts . We humans can easily identify the primitive parts and recognize the part-whole relationship . Representing objects as explicit composition of parts is expected to be more efficient , since a vast number of complex objects can often be compositionally explained by a small set of simple primitives . It also allows us to imagine and create meaningful new objects . A well-established representation for part-whole relationships in computer graphics is called the scene graph ( Foley et al. , 1996 ) . It is a tree whose leaf nodes store models of primitive parts , and whose edges specify affine transformations that compose parts into objects . While in computer graphics , the scene graph is manually constructed for rendering , in this paper , we are interested in inferring the scene graph from unlabeled images . To this end , we propose Generative Scene Graph Networks ( GSGNs ) . We formulate this model as a variational autoencoder ( Kingma & Welling , 2013 ; Rezende et al. 
, 2014 ) whose latent representation is a probabilistic scene graph . In the latent tree , each node is associated with an appearance variable that summarizes the composition up to the current level , and each edge with a pose variable that parameterizes the affine transformation from the current level to the upper level . The design of the GSGN decoder follows the rendering process of graphics engines , but with differentiable operations helping the encoder to learn inverse graphics ( Tieleman , 2014 ; Wu et al. , 2017 ; Romaszko et al. , 2017 ; Yao et al. , 2018 ; Deng et al. , 2019 ) . As a result , the pose variables inferred by GSGN are interpretable , and the probabilistic scene graph supports symbolic manipulation by configuring the pose variables . One major challenge is to infer the structure of the scene graph . This involves identifying the parts and grouping them into objects . Notice that unlike objects , parts are often stitched together and thus can have severe occlusion , making it hard to separate them . Existing methods for learning hierarchical scene representations circumvent this challenge by working on single-object scenes ( Kosiorek et al. , 2019 ) and also providing predefined or pre-segmented parts ( Li et al. , 2017 ; Huang et al. , 2020 ) . In contrast , GSGN addresses this challenge directly , and learns to infer the scene graph structure from multi-object scenes without knowledge of individual parts . Our key observation is that the scene graph has a recursive structure—inferring the structure of the tree should be similar to inferring the structure of its subtrees . Hence , we develop a top-down inference process that first decomposes the scene into objects and then further decomposes each object into its parts . This allows us to reuse existing scene decomposition methods such as SPACE ( Lin et al. , 2020b ) as an inference module shared at each level of the scene graph . 
However , we find that SPACE has difficulty separating parts that have severe occlusion , possibly due to its complete reliance on bottom-up image features . Therefore , simply applying SPACE for decomposition at each level will lead to suboptimal scene graphs . To alleviate this , GSGN learns a prior over plausible scene graphs that captures typical compositions . During inference , this prior provides top-down information which is combined with bottom-up image features to help reduce ambiguity caused by occlusion . For evaluation , we develop two datasets of scenes containing multiple compositional 2D and 3D objects , respectively . These can be regarded as compositional versions of Multi-dSprites ( Greff et al. , 2019 ) and CLEVR ( Johnson et al. , 2017 ) , two commonly used datasets for evaluating unsupervised object-level scene decomposition . For example , the compositional 3D objects in our dataset are made up of shapes similar to those in the CLEVR dataset , with variable sizes , colors , and materials . Hence , we name our 3D dataset the Compositional CLEVR dataset . The contributions of this paper are : ( i ) we propose the probabilistic scene graph representation that enables unsupervised and end-to-end scene graph inference and compositional scene generation , ( ii ) we develop and release the Compositional CLEVR dataset to facilitate future research on object compositionality , and ( iii ) we demonstrate that our model is able to infer the latent scene graph , shows decent generation quality and generalization ability , and improves data efficiency in downstream tasks . 2 RELATED WORK . Object-centric representations . Our model builds upon a line of recent work on unsupervised object-centric representation learning , which aims to eliminate the need for supervision in structured scene understanding . 
These methods learn a holistic model capable of decomposing scenes into objects , learning appearance representations for each object , and generating novel scenes—all without supervision and in an end-to-end trainable way . We believe such unsupervised and holistic models are more desirable , albeit more challenging to learn . These models can be categorized into scene-mixture models ( Greff et al. , 2017 ; 2019 ; Burgess et al. , 2019 ; Engelcke et al. , 2020 ; Locatello et al. , 2020 ) and spatial-attention models ( Eslami et al. , 2016 ; Crawford & Pineau , 2019b ; Lin et al. , 2020b ; Jiang & Ahn , 2020 ) . Compared to these models , we go a step further by also decomposing objects into parts . We use spatial-attention models as the inference module at each level of the scene graph , because they explicitly provide object positions , unlike scene-mixture models . We combine the inference module with a learned prior to help improve robustness to occlusion . This also allows sampling novel scenes from the prior , which is not possible with spatial-attention models . Hierarchical scene representations . Modeling the part-whole relationship in scenes has attracted growing interest , and has been utilized for improving image classification ( Sabour et al. , 2017 ; Hinton et al. , 2018 ; Kosiorek et al. , 2019 ) , parsing , and segmentation ( Zhu et al. , 2008 ) . However , these models have been applied to scenes with one dominant object only , and can not perform scene generation . Recent work on assembly-based 3D shape modeling also learns the part-whole relationship ( Tulsiani et al. , 2017 ; Li et al. , 2017 ; Zhu et al. , 2018 ; Huang et al. , 2020 ; Kania et al. , 2020 ) , but these methods require predefined or pre-segmented parts as input , and can only model single shapes with no background . By contrast , our model learns the part-whole relationship from multi-object scenes without knowledge of individual parts . 
There has also been work on 3D part decomposition ( Chen et al. , 2019 ; Deng et al. , 2020 ) , but they require voxels or point clouds as input , and typically focus on geometry ( e.g. , part occupancy ) without learning to represent appearance ( e.g. , color , material ) . Part hierarchies have also been used for shape generation ( Mo et al. , 2019 ) , where the hierarchy is provided as input rather than inferred from the input . Our approach infers compositional structures from static scenes , and is orthogonal to methods that use motion cues for decomposing dynamic scenes ( Xu et al. , 2019 ) and methods that infer physical interactions from dynamic scenes ( Li et al. , 2020 ; Stanić et al. , 2020 ) . Hinton ( 2021 ) recently proposed an iterative procedure that is expected to form the part hierarchy through multiple rounds of message passing among adjacent levels . While our model works without iterative message passing , we believe this is important for parsing more complex scenes . Hierarchical latent variable models . Our model can be regarded as a hierarchical latent variable model , and is inspired by several recent advances ( Bachman , 2016 ; Sønderby et al. , 2016 ; Zhao et al. , 2017 ; Maaløe et al. , 2019 ) that have achieved impressive generation quality . While these methods focus on designing the hierarchical structure and training method that harness the full expressive power of generative models , our goal is to learn the hierarchical structure from unlabeled images that captures the compositional relationship among symbolic entities like objects and parts . 3 GENERATIVE SCENE GRAPH NETWORKS . 3.1 GENERATIVE PROCESS . We assume that the image x is generated by a set of foreground variables collectively denoted zfg and a background variable zbg as follows : p ( x ) = ∫∫ p ( x |zfg , zbg ) p ( zbg |zfg ) p ( zfg ) dzfg dzbg . 
(1)

To represent the compositional structures within foreground objects, GSGN models z^fg as a tree-structured probabilistic scene graph, as shown in Figure 1A. Each leaf node represents a primitive entity that is not further decomposed. Each internal node represents an abstract entity that is composed from its children. As in graphics engines, the composition is modeled as affine transformations, specified by the relative pose (including rotation, scaling, and translation) of each child node v with respect to its parent pa(v). We use a pose variable z^pose_v to represent the relative pose, and associate it with the corresponding edge. We also associate an appearance variable z^appr_v with each node v. It is expected to represent the appearance of entity v in its canonical pose¹, summarizing all lower-level composition in the subtree rooted at v. In particular, the appearance variable z^appr_r at the root node r summarizes the full scene. Due to this summarization assumption, given z^appr_v, we can generate the pose and appearance variables for all children of v in a conditionally independent way. Hence, for a given tree structure with V being the set of all nodes, the prior over foreground variables can be factorized according to the tree structure:

p(z^fg) = p(z^appr_r) ∏_{v ∈ V \ {r}} p(z^pose_v | z^appr_{pa(v)}) p(z^appr_v | z^appr_{pa(v)}). (2)

Here we further assume conditional independence between z^pose_v and z^appr_v, since in graphics engines one should be able to specify the pose and appearance separately.

Representing tree structures. The above factorization only works for a given tree structure. To deal with variable tree structures, we need to include them in the latent representation as well. We start by setting a maximum out-degree for each node so that the total number of possible structures is bounded.
To determine the structure, it then suffices to specify the presence of each possible edge. Hence, for an arbitrary edge between node v and its parent, we introduce a Bernoulli variable z^pres_v to indicate its presence. If z^pres_v = 0, meaning the edge is not present, then the pose variable associated with the edge, along with all variables in the subtree rooted at v, are excluded from the probabilistic scene graph. More precisely, let us define z̄^pres_v to be the product of all the presence variables along the path from the root r to node v:

z̄^pres_r = 1,  z̄^pres_v = z^pres_v × z̄^pres_{pa(v)} for v ∈ V \ {r}. (3)

The foreground variables now become z^fg = {z^appr_r} ∪ {z^pres_v, z^pose_v, z^appr_v}_{v ∈ V \ {r}}, and the prior factorizes as follows:

p(z^fg) = p(z^appr_r) ∏_{v ∈ V \ {r}} [p(z^pres_v | z^appr_{pa(v)})]^{z̄^pres_{pa(v)}} [p(z^pose_v | z^appr_{pa(v)}) p(z^appr_v | z^appr_{pa(v)})]^{z̄^pres_v}. (4)

We implement the pose and appearance variables as Gaussian variables with p(z^appr_r) = N(0, 1), and the parameters of each conditional distribution are output by an MLP.

Differentiable decoder. We design the decoder to follow the recursive compositing process of graphics engines, helping the encoder to learn inverse graphics. First, for each leaf node v, we use a neural network g(·) to decode its appearance variable into a small image patch x̂_v and a (close to) binary mask m̂_v: (x̂_v, m̂_v) = g(z^appr_v). Here, g(·) is implemented as a spatial broadcast decoder (Watters et al., 2019), optionally followed by sub-pixel convolutions (Shi et al., 2016). We then recursively compose these primitive patches into whole objects and the full scene by applying affine transformations parameterized by the pose variables. Specifically, let u be an internal node and ch(u) the set of its children.
We compose the higher-level image patch x̂_u and mask m̂_u as follows:

x̂_u = ∑_{v ∈ ch(u)} z^pres_v · α_v ⊙ ST^{-1}(x̂_v, z^pose_v), (5)
m̂_u = ∑_{v ∈ ch(u)} z^pres_v · α_v ⊙ ST^{-1}(m̂_v, z^pose_v). (6)

Here, ⊙ denotes pixel-wise multiplication, and ST^{-1} is an inverse spatial transformer (Jaderberg et al., 2015) that differentiably places x̂_v and m̂_v into the coordinate frame of the parent node u. To deal with occlusion, we include a relative depth in z^pose_v and compute a transparency map α_v by a softmax over the negative depth values. This ensures that entities with smaller depth appear in front of entities with larger depth. See Figure 1C for an illustration. When we reach the root node, we obtain an image x̂_r and a mask m̂_r of all foreground objects. We then use a spatial broadcast decoder (Watters et al., 2019) to decode z^bg into a background image x̂_bg. The full scene x can now be modeled as a pixel-wise mixture of foreground and background, where m̂_r serves as the mixing weight:

p(x | z^fg, z^bg) = m̂_r ⊙ N(x | x̂_r, σ²_fg 1) + (1 − m̂_r) ⊙ N(x | x̂_bg, σ²_bg 1). (7)

Here, σ_fg and σ_bg are hyperparameters.

¹We use v to refer both to node v in the probabilistic scene graph and to the entity that node v represents.
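The two structural mechanisms described in this section, path-product presence pruning (Eq. 3) and depth-softmax compositing (Eqs. 5-6), can be sketched in NumPy. This is a toy illustration with hypothetical names; the inverse spatial transformer step is omitted, so children are assumed to be already warped into the parent's frame:

```python
import numpy as np

def effective_presence(parent, z_pres):
    """z̄^pres_v: product of the presence variables on the root->v path (Eq. 3).
    `parent` maps each node to its parent (the root maps to None); the root's
    presence entry is fixed to 1, so z̄^pres_v = 0 prunes the whole subtree."""
    z_bar = {}
    def walk(v):
        if v not in z_bar:
            up = 1 if parent[v] is None else walk(parent[v])
            z_bar[v] = z_pres[v] * up
        return z_bar[v]
    for v in parent:
        walk(v)
    return z_bar

def compose(children_rgb, children_mask, depths, z_pres):
    """Compose child patches into the parent patch and mask (Eqs. 5-6).
    alpha_v is a softmax over negative depths, so a child with smaller depth
    gets alpha close to 1 and occludes children with larger depth."""
    neg_d = -np.asarray(depths, dtype=np.float64)
    alpha = np.exp(neg_d - neg_d.max())
    alpha /= alpha.sum()
    x_u = sum(p * a * rgb for p, a, rgb in zip(z_pres, alpha, children_rgb))
    m_u = sum(p * a * m for p, a, m in zip(z_pres, alpha, children_mask))
    return x_u, m_u
```

At the root, the reconstructed scene mean is then the pixel-wise mixture m̂_r · x̂_r + (1 − m̂_r) · x̂_bg from Eq. (7).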
The paper presents a generative model for scenes that uses tree-structured latent variables to recursively decompose images into objects and parts, without any object or part supervision during training. The model is trained using variational inference. Experiments are performed on two new datasets (2D Shapes and Compositional CLEVR), demonstrating that the model is able to successfully uncover recursive scene/object/part decompositions in an unsupervised setting. The model is compared against prior work (SPACE) that performs non-hierarchical scene modeling.
SP:643597431db07482ab2de551f78064a102b16c6c
Generative Scene Graph Networks
1 INTRODUCTION . Learning to discover and represent objects purely from observations is at the core of human cognition ( Spelke & Kinzler , 2007 ) . Recent advances in unsupervised object-centric representation learning have enabled decomposition of scenes into objects ( Greff et al. , 2019 ; Lin et al. , 2020b ; Locatello et al. , 2020 ) , inference and rendering of 3D object models ( Chen et al. , 2020 ) , and object tracking and future generation ( Crawford & Pineau , 2019a ; Jiang et al. , 2020 ; Lin et al. , 2020a ) . These neuro-symbolic approaches , where the discreteness of discovered objects provides the symbolic representation , facilitate various desired abilities such as out-of-distribution generalization , relational reasoning , and causal inference . In this paper , we seek to further discover and represent the structure within objects without supervision . Our motivation is that natural scenes frequently contain compositional objects—objects that are composed of primitive parts . We humans can easily identify the primitive parts and recognize the part-whole relationship . Representing objects as explicit composition of parts is expected to be more efficient , since a vast number of complex objects can often be compositionally explained by a small set of simple primitives . It also allows us to imagine and create meaningful new objects . A well-established representation for part-whole relationships in computer graphics is called the scene graph ( Foley et al. , 1996 ) . It is a tree whose leaf nodes store models of primitive parts , and whose edges specify affine transformations that compose parts into objects . While in computer graphics , the scene graph is manually constructed for rendering , in this paper , we are interested in inferring the scene graph from unlabeled images . To this end , we propose Generative Scene Graph Networks ( GSGNs ) . We formulate this model as a variational autoencoder ( Kingma & Welling , 2013 ; Rezende et al. 
, 2014 ) whose latent representation is a probabilistic scene graph . In the latent tree , each node is associated with an appearance variable that summarizes the composition up to the current level , and each edge with a pose variable that parameterizes the affine transformation from the current level to the upper level . The design of the GSGN decoder follows the rendering process of graphics engines , but with differentiable operations helping the encoder to learn inverse graphics ( Tieleman , 2014 ; Wu et al. , 2017 ; Romaszko et al. , 2017 ; Yao et al. , 2018 ; Deng et al. , 2019 ) . As a result , the pose variables inferred by GSGN are interpretable , and the probabilistic scene graph supports symbolic manipulation by configuring the pose variables . One major challenge is to infer the structure of the scene graph . This involves identifying the parts and grouping them into objects . Notice that unlike objects , parts are often stitched together and thus can have severe occlusion , making it hard to separate them . Existing methods for learning hierarchical scene representations circumvent this challenge by working on single-object scenes ( Kosiorek et al. , 2019 ) and also providing predefined or pre-segmented parts ( Li et al. , 2017 ; Huang et al. , 2020 ) . In contrast , GSGN addresses this challenge directly , and learns to infer the scene graph structure from multi-object scenes without knowledge of individual parts . Our key observation is that the scene graph has a recursive structure—inferring the structure of the tree should be similar to inferring the structure of its subtrees . Hence , we develop a top-down inference process that first decomposes the scene into objects and then further decomposes each object into its parts . This allows us to reuse existing scene decomposition methods such as SPACE ( Lin et al. , 2020b ) as an inference module shared at each level of the scene graph . 
However , we find that SPACE has difficulty separating parts that have severe occlusion , possibly due to its complete reliance on bottom-up image features . Therefore , simply applying SPACE for decomposition at each level will lead to suboptimal scene graphs . To alleviate this , GSGN learns a prior over plausible scene graphs that captures typical compositions . During inference , this prior provides top-down information which is combined with bottom-up image features to help reduce ambiguity caused by occlusion . For evaluation , we develop two datasets of scenes containing multiple compositional 2D and 3D objects , respectively . These can be regarded as compositional versions of Multi-dSprites ( Greff et al. , 2019 ) and CLEVR ( Johnson et al. , 2017 ) , two commonly used datasets for evaluating unsupervised object-level scene decomposition . For example , the compositional 3D objects in our dataset are made up of shapes similar to those in the CLEVR dataset , with variable sizes , colors , and materials . Hence , we name our 3D dataset the Compositional CLEVR dataset . The contributions of this paper are : ( i ) we propose the probabilistic scene graph representation that enables unsupervised and end-to-end scene graph inference and compositional scene generation , ( ii ) we develop and release the Compositional CLEVR dataset to facilitate future research on object compositionality , and ( iii ) we demonstrate that our model is able to infer the latent scene graph , shows decent generation quality and generalization ability , and improves data efficiency in downstream tasks . 2 RELATED WORK . Object-centric representations . Our model builds upon a line of recent work on unsupervised object-centric representation learning , which aims to eliminate the need for supervision in structured scene understanding . 
These methods learn a holistic model capable of decomposing scenes into objects , learning appearance representations for each object , and generating novel scenes—all without supervision and in an end-to-end trainable way . We believe such unsupervised and holistic models are more desirable , albeit more challenging to learn . These models can be categorized into scene-mixture models ( Greff et al. , 2017 ; 2019 ; Burgess et al. , 2019 ; Engelcke et al. , 2020 ; Locatello et al. , 2020 ) and spatial-attention models ( Eslami et al. , 2016 ; Crawford & Pineau , 2019b ; Lin et al. , 2020b ; Jiang & Ahn , 2020 ) . Compared to these models , we go a step further by also decomposing objects into parts . We use spatial-attention models as the inference module at each level of the scene graph , because they explicitly provide object positions , unlike scene-mixture models . We combine the inference module with a learned prior to help improve robustness to occlusion . This also allows sampling novel scenes from the prior , which is not possible with spatial-attention models . Hierarchical scene representations . Modeling the part-whole relationship in scenes has attracted growing interest , and has been utilized for improving image classification ( Sabour et al. , 2017 ; Hinton et al. , 2018 ; Kosiorek et al. , 2019 ) , parsing , and segmentation ( Zhu et al. , 2008 ) . However , these models have been applied to scenes with one dominant object only , and can not perform scene generation . Recent work on assembly-based 3D shape modeling also learns the part-whole relationship ( Tulsiani et al. , 2017 ; Li et al. , 2017 ; Zhu et al. , 2018 ; Huang et al. , 2020 ; Kania et al. , 2020 ) , but these methods require predefined or pre-segmented parts as input , and can only model single shapes with no background . By contrast , our model learns the part-whole relationship from multi-object scenes without knowledge of individual parts . 
There has also been work on 3D part decomposition ( Chen et al. , 2019 ; Deng et al. , 2020 ) , but they require voxels or point clouds as input , and typically focus on geometry ( e.g. , part occupancy ) without learning to represent appearance ( e.g. , color , material ) . Part hierarchies have also been used for shape generation ( Mo et al. , 2019 ) , where the hierarchy is provided as input rather than inferred from the input . Our approach infers compositional structures from static scenes , and is orthogonal to methods that use motion cues for decomposing dynamic scenes ( Xu et al. , 2019 ) and methods that infer physical interactions from dynamic scenes ( Li et al. , 2020 ; Stanić et al. , 2020 ) . Hinton ( 2021 ) recently proposed an iterative procedure that is expected to form the part hierarchy through multiple rounds of message passing among adjacent levels . While our model works without iterative message passing , we believe this is important for parsing more complex scenes . Hierarchical latent variable models . Our model can be regarded as a hierarchical latent variable model , and is inspired by several recent advances ( Bachman , 2016 ; Sønderby et al. , 2016 ; Zhao et al. , 2017 ; Maaløe et al. , 2019 ) that have achieved impressive generation quality . While these methods focus on designing the hierarchical structure and training method that harness the full expressive power of generative models , our goal is to learn the hierarchical structure from unlabeled images that captures the compositional relationship among symbolic entities like objects and parts . 3 GENERATIVE SCENE GRAPH NETWORKS . 3.1 GENERATIVE PROCESS . We assume that the image x is generated by a set of foreground variables collectively denoted zfg and a background variable zbg as follows : p ( x ) = ∫∫ p ( x |zfg , zbg ) p ( zbg |zfg ) p ( zfg ) dzfg dzbg . 
( 1 ) To represent the compositional structures within foreground objects , GSGN models zfg as a treestructured probabilistic scene graph , as shown in Figure 1A . Each leaf node represents a primitive entity that is not further decomposed . Each internal node represents an abstract entity that is composed from its children nodes . Similar to graphics engines , the composition is modeled as affine transformations , and is specified by the relative pose ( including rotation , scaling , and translation ) of each child node v with respect to its parent pa ( v ) . We use a pose variable zposev to represent the relative pose , and associate it with the corresponding edge . We also associate an appearance variable zapprv with each node v. It is expected to represent the appearance of entity v in its canonical pose 1 , summarizing all lower-level composition in the subtree rooted at v. In particular , the appearance variable zapprr at the root node r summarizes the full scene . Due to this summarization assumption , given zapprv , we can generate the pose and appearance variables for all children nodes of v in a conditionally independent way . Hence , for a given tree structure with V being the set of all nodes , the prior over foreground variables can be factorized according to the tree structure : p ( zfg ) = p ( z appr r ) ∏ v∈V \ { r } p ( z pose v |z appr pa ( v ) ) p ( z appr v |z appr pa ( v ) ) . ( 2 ) Here we further assume conditional independence between zposev and z appr v , since in graphics engines , one should be able to specify the pose and appearance separately . Representing tree structures . The above factorization only works for a given tree structure . To deal with variable tree structures , we need to include them in the latent representation as well . We start by setting a maximum out-degree for each node so that the total number of possible structures is bounded . 
To determine the structure , it then suffices to specify the presence of each possible edge . Hence , for an arbitrary edge between node v and its parent , we introduce a Bernoulli variable zpresv to indicate its presence . If zpresv = 0 , meaning the edge is not present , then the pose variable associated with the edge along with all variables in the subtree rooted at v are excluded from the probabilistic scene graph . More precisely , let us define z̄presv to be the product of all the presence variables along the path from root r to node v : z̄presr = 1 , z̄ pres v = z pres v × z̄ pres pa ( v ) for v ∈ V \ { r } . ( 3 ) The foreground variables now become zfg = { zapprr } ∪ { zpresv , zposev , zapprv } v∈V \ { r } , and the prior factorizes as follows : p ( zfg ) = p ( z appr r ) ∏ v∈V \ { r } [ p ( z pres v |z appr pa ( v ) ) ] z̄pres pa ( v ) [ p ( zposev |z appr pa ( v ) ) p ( z appr v |z appr pa ( v ) ) ] z̄presv . ( 4 ) We implement the pose and appearance variables as Gaussian variables , p ( zapprr ) = N ( 0,1 ) , and the parameters of each conditional distribution are output by an MLP . Differentiable decoder . We design the decoder to follow the recursive compositing process in graphics engines , helping the encoder to learn inverse graphics . First , for each leaf node v , we use a neural network g ( · ) to decode its appearance variable into a small image patch x̂v and a ( close to ) binary mask m̂v : ( x̂v , m̂v ) = g ( zapprv ) . Here , g ( · ) is implemented as a spatial broadcast decoder ( Watters et al. , 2019 ) optionally followed by sub-pixel convolutions ( Shi et al. , 2016 ) . We then recursively compose these primitive patches into whole objects and the full scene by applying affine transformations parameterized by the pose variables . Specifically , let u be an internal node , and ch ( u ) be the set of its children . 
We compose the higher-level image patch $\hat{x}_u$ and mask $\hat{m}_u$ as follows:

$$\hat{x}_u = \sum_{v \in ch(u)} z^{\text{pres}}_v \cdot \alpha_v \odot \mathrm{ST}^{-1}(\hat{x}_v, z^{\text{pose}}_v), \quad (5)$$

$$\hat{m}_u = \sum_{v \in ch(u)} z^{\text{pres}}_v \cdot \alpha_v \odot \mathrm{ST}^{-1}(\hat{m}_v, z^{\text{pose}}_v). \quad (6)$$

Here, $\odot$ denotes pixel-wise multiplication, and $\mathrm{ST}^{-1}$ is an inverse spatial transformer (Jaderberg et al., 2015) that differentiably places $\hat{x}_v$ and $\hat{m}_v$ into the coordinate frame of the parent node $u$. To deal with occlusion, we include relative depth in $z^{\text{pose}}_v$, and compute a transparency map $\alpha_v$ by the softmax over negative depth values. This ensures that entities with smaller depth will appear in front of entities with larger depth. See Figure 1C for an illustration. When we reach the root node, we obtain an image $\hat{x}_r$ and a mask $\hat{m}_r$ of all foreground objects. We then use a spatial broadcast decoder (Watters et al., 2019) to decode $z_{bg}$ into a background image $\hat{x}_{bg}$. The full scene $x$ can now be modeled as a pixel-wise mixture of foreground and background, where $\hat{m}_r$ serves as the mixing weight:

$$p(x \mid \mathbf{z}_{fg}, z_{bg}) = \hat{m}_r\, \mathcal{N}(x \mid \hat{x}_r, \sigma^2_{fg}\mathbf{1}) + (1 - \hat{m}_r)\, \mathcal{N}(x \mid \hat{x}_{bg}, \sigma^2_{bg}\mathbf{1}). \quad (7)$$

Here, $\sigma_{fg}$ and $\sigma_{bg}$ are hyperparameters.

¹We use $v$ to refer to both node $v$ in the probabilistic scene graph and the entity that node $v$ represents.
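Two pieces of this decoder are easy to sanity-check numerically: the depth-based transparency (softmax over negative depths, so the nearer entity dominates) and the pixel-wise mixture likelihood of Eq. (7). The sketch below checks both for a single pixel; all shapes and values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Transparency: softmax over negative relative depths of two sibling entities.
depths = np.array([0.2, 1.5])
alpha = np.exp(-depths) / np.exp(-depths).sum()
assert alpha[0] > alpha[1]  # the entity with smaller depth gets more weight

# Pixel-wise foreground/background mixture of Eq. (7) at one pixel.
m_r = 0.9                        # foreground mask value (mixing weight)
x, x_fg, x_bg = 0.5, 0.6, 0.1    # observed / decoded pixel values
sigma_fg = sigma_bg = 0.1        # hyperparameters (illustrative)

def gauss(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

p_x = m_r * gauss(x, x_fg, sigma_fg) + (1 - m_r) * gauss(x, x_bg, sigma_bg)
print(p_x > 0)  # a valid (unnormalized over pixels) mixture density value
```

Because the mask $\hat{m}_r$ is close to binary, most pixels are explained almost entirely by either the foreground or the background component.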
Generative Scene Graph Networks (GSGN) is a variational auto-encoder whose intermediate representation is a tree-like scene graph. The leaf nodes stand for primitive parts, and edges stand for poses that compose parts into objects recursively. The experiments are done on two image datasets of single-color, simple-shape 2D/3D objects, Multi-dSprites and CLEVR, and the model is able to discover objects without supervision.
SP:643597431db07482ab2de551f78064a102b16c6c
Memory-Efficient Semi-Supervised Continual Learning: The World is its Own Replay Buffer
1 INTRODUCTION. Computer vision models in the real world are often frozen and not updated after deployment, yet they may encounter novel data in the environment. Unlike the typical supervised learning setting, class-incremental continual learning challenges the learner to incorporate new information as it sequentially encounters new object classes without forgetting previously-acquired knowledge (catastrophic forgetting). Research has shown that rehearsal of prior classes is a critical component for class-incremental continual learning (Hsu et al., 2018; van de Ven & Tolias, 2019). Unfortunately, rehearsal requires a substantial memory budget, either in the form of a coreset of stored experiences or a separate learned model to generate samples from past experiences. This is not acceptable for memory-constrained applications which cannot afford to increase the size of their memory as they encounter new classes. Instead, we consider a novel real-world setting where an incremental learner's labeled task data is a product of its environment and the learner encounters a vast stream of unlabeled data in addition to the labeled task data. In such a setting (visualized in Figure 1), the unlabeled datastream is intrinsically correlated with each learning task due to the underlying structure of the environment. We explore many ways in which this correlation may exist. For example, when an incremental learner is tasked to learn samples of the previously-unseen class ci at time i in the real world, examples of ci may be encountered in the environment (in unlabeled form) during some future task. In such a setting, an incremental learner could use the unlabeled data in its environment as a source of memory-free rehearsal, though it would need a method to determine which unlabeled data is relevant to the incremental task (i.e., detecting in-distribution data).
We formalize this realistic paradigm in the semi-supervised continual learning (SSCL) setting, wherein unlabeled and labeled data are not i.i.d. as they are correlated through the underlying structure of the environment. We propose and conduct experiments over a realistic setting in which this correlation may exist, in the form of label super-class structure (e.g., unlabeled examples of household furniture such as chairs, couches, and tables will appear while learning the labeled examples of household electrical devices such as lamp, keyboard, and television (Krizhevsky et al., 2009; Zhu & Bain, 2017)). We measure the final-task accuracy A, the accuracy over all tasks Ω, and the coreset memory required to attain a specific level of Ω accuracy over several realistic SSCL settings. Our experiments demonstrate that state-of-the-art continual learning methods (Lee et al., 2019) perform inconsistently in the novel SSCL paradigm, with no prior method performing "best" across all settings. This leads us to ask "How can an approach to catastrophic forgetting be robust to several realistic, memory-constrained continual learning scenarios?" To answer the above question, we propose a novel learning approach that works well in both the simple (i.e., no correlations) and realistic SSCL settings: DistillMatch. We leverage unlabeled data not only for knowledge distillation (in which the distilling model is fixed), but also for a semi-supervised loss (in which the supervisory signal can adapt during training on the new task). Key to our approach is that we address the distribution mismatch between the labeled and unlabeled data (Oliver et al., 2018) with out-of-distribution (OoD) detection (and are the first to do so in the continual learning setting). Compared to the nearest prior state-of-the-art (all methods from (Lee et al.
, 2019)) configured to work as well as possible in the novel SSCL setting, we outperform the state-of-the-art in all of our experiment scenarios by as much as a 54.5% increase in Ω and no less than an 8.7% increase. Furthermore, we find that our method can save up to 0.23 stored images per processed unlabeled image over naive rehearsal (compared to (Lee et al., 2019) which only saved 0.08 stored images per processed unlabeled image). In summary, we make the following contributions: 1. We propose the realistic semi-supervised continual learning (SSCL) setting, where object-object correlations between labeled and unlabeled sets are maintained through a label super-class structure. We show that state-of-the-art continual learning methods perform inconsistently in the SSCL setting (i.e., no baseline method is "best" across all settings). 2. We propose a novel continual learning method, DistillMatch, for the SSCL setting, leveraging pseudo-labeling, strong data augmentations, and out-of-distribution detection. Compared to the baselines, DistillMatch achieves superior performance on a majority of metrics for 8/8 experiments and results in substantial memory budget savings. 2 BACKGROUND AND RELATED WORK. Knowledge Distillation in Continual Learning: Several related methods leverage distillation losses on past tasks to mitigate catastrophic forgetting using soft labels from a frozen copy of the previous task's model (Castro et al., 2018; Hou et al., 2018; Li & Hoiem, 2017; Rebuffi et al., 2017). For example, learning using two teachers, with one teacher distilling knowledge from previous tasks and another distilling knowledge from the current task, has been found to increase adaptability to a new task while preserving knowledge on the previous tasks (Hou et al., 2018; Lee et al., 2019).
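The soft-label distillation these methods rely on can be sketched in a few lines: the loss is the cross-entropy between temperature-softened outputs of a frozen previous-task model (teacher) and the current model (student). This is a generic sketch of that idea, not DistillMatch's exact loss; the logits and temperature below are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax; T > 1 smooths the distribution so the
    # student also learns the teacher's relative class similarities.
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

teacher_logits = np.array([2.0, 0.5, -1.0])  # frozen model from the previous task
student_logits = np.array([1.8, 0.7, -0.9])  # model being trained on the new task
T = 2.0

p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)
kd_loss = -np.sum(p_teacher * np.log(p_student))  # cross-entropy H(teacher, student)
print(kd_loss > 0)  # True: cross-entropy is bounded below by the teacher's entropy
```

Minimizing this term on unlabeled or rehearsal inputs keeps the student's predictions close to the frozen teacher's, which is what mitigates forgetting of past tasks.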
Class-balancing and fine-tuning have been used to encourage the model's final predicted class distribution to be balanced across all tasks (Castro et al., 2018; Lee et al., 2019). These methods are related in that they rely on distillation losses to mitigate catastrophic forgetting, but the losses are designed to distill knowledge about specific local tasks and cannot discriminate between classes from different tasks (crucial for class-incremental learning). More context on where our work fits into the greater body of continual learning research is provided in Appendix H. Global distillation (GD) introduces a global teacher which provides a knowledge ensemble from both the past tasks and the current task (Lee et al., 2019). This addresses a crucial shortcoming of common knowledge distillation methods, which do not reconcile information from the local tasks (i.e., the groups of object classes presented sequentially to the learner) with the global task (i.e., all object classes seen at any time). GD leverages a large stream of uncorrelated unlabeled data from sources such as mining social media or web data (Lee et al., 2019) to boost its distillation performance. Similar to GD, we leverage an unlabeled datastream to mitigate forgetting, but we take the perspective that this datastream is from the agent's environment and reflects object-object correlation structures imposed by the world (i.e., correlations between the task data and the unlabeled data). Out-of-Distribution Detection: Leveraging unlabeled data for rehearsal is key to our work, but it can contain a mix of classes not in the distribution of the data seen by the learner thus far. Therefore, we include Out-of-Distribution (OoD) Detection (Hsu et al., 2020; Lee et al., 2018; Liang et al., 2017) to select unlabeled data corresponding to the classes our learner has seen so far with high confidence. Semantic OoD detection is a difficult challenge (Hsu et al.
, 2020) and we do not have access to any known OoD data to calibrate our confidence. We therefore build on a recent method, Decomposed Confidence (DeConf) (Hsu et al., 2020), which can be calibrated using only in-distribution training data. The method consists of decomposed confidence scoring with a learned temperature scaling in addition to input pre-processing. For further details, the reader is referred to (Hsu et al., 2020). Semi-Supervised Learning: In semi-supervised learning (which motivates the SSCL setting), models are given a (typically small) amount of labeled data and leverage unlabeled data to boost performance. This is an active area of research given that large, labeled datasets are expensive, but most applications have access to plentiful, cheap unlabeled data. There are several approaches to semi-supervised learning (Berthelot et al., 2019; Lee, 2013; Kingma et al., 2014; Kuo et al., 2019; Miyato et al., 2018; Oliver et al., 2018; Sohn et al., 2020; Springenberg, 2015; Tarvainen & Valpola, 2017) which involve balancing a supervised loss ℓs applied to the labeled data with an unsupervised loss ℓul applied to unlabeled data. Additional details on these methods are provided in Appendix H. 3 SSCL SETTING. In class-incremental continual learning, a model is gradually introduced to labeled data corresponding to M semantic object classes c1, c2, ..., cM over a series of N tasks, where tasks are non-overlapping subsets of classes. We use the notation Tn to denote the set of classes introduced in task n, with |Tn| denoting the number of object classes in task n. Each class appears in only a single task, and the goal is to incrementally learn to classify new object classes as they are introduced while retaining performance on previously learned classes. The class-incremental learning setting (Hsu et al.
(2018)) is a challenging continual learning setting because no task indexes are provided to the learner during inference and the learner must support classification across all classes seen up to task n. We extend the class-incremental continual learning setting to the realistic semi-supervised continual learning (SSCL) setting, where data distributions reflect existing object class correlations between, and among, the labeled and unlabeled data distributions. The amount of labeled data in this setting is drastically reduced, as is common in semi-supervised learning. For example, our experiments reduce the number of labeled examples per class by 80% compared to a prior setting (Lee et al., 2019). At task n, we denote batches of labeled data as Xn = {(xb, yb) : b ∈ (1, · · · , B) | yb ∈ Tn} and batches of unlabeled data as Un = {ub : b ∈ (1, · · · , µB)}. Here, B refers to batch size and µ is a hyperparameter describing the relative size of Xn to Un. The goal in task n is to learn a model θn which predicts object class labels for any query input over all classes seen in the current and previous tasks (T1 ∪ T2 ∪ · · · ∪ Tn). The index n on θn indicates that our model is updated each task; i.e., θn−1 refers to the model from the previous task and θn refers to the model from the current task. To simulate an environment where unlabeled and labeled data are naturally correlated, we leverage well-defined relationships between objects derived from a super-class structure (i.e., various animals within one super-class). We use the CIFAR-100 dataset (Krizhevsky et al., 2009) because object correlations among classes and parent classes, crucial to our experiments, are well defined and explored (Zhu & Bain, 2017). This dataset contains eight unbalanced super-classes, which we use to simulate realistic data environments. Each super-class contains a number of parent classes (e.g.
one super-class contains the parent classes flowers, fruits/vegetables, and trees). There are 20 parent classes in total, which form the 20 continual learning tasks, with each parent class consisting of five object classes (e.g., the flowers parent class consists of orchids, poppies, roses, sunflowers, tulips). For a single task, when our learner is being shown labeled training data from one of the parent classes (e.g., flowers, fruits/vegetables, or trees), the unlabeled data for this task will contain examples from the entire super-class (e.g., flowers, fruits/vegetables, and trees). SSCL with this realistic super-class "environment" structure is our main setting, but we also explore several other correlation combinations, including the simple SSCL setting without any super-class structure. We use the following terminology to describe the correlations of the tasks (i.e., labeled data): RandomClass Tasks, where no correlations exist in task classes, and ParentClass Tasks, where tasks are introduced by CIFAR-100 parent classes (i.e., each task is to learn the five classes of a single CIFAR-100 parent class). For the unlabeled data distribution we have: Uniform Unlabeled, where all classes are uniformly distributed in the unlabeled data for all tasks; PositiveSuperclass Unlabeled, where the unlabeled data of each task consists of the parent classes in the same super-class as the current task; NegativeSuperclass Unlabeled, where the unlabeled data of each task consists of parent classes from a different super-class than the current task; and RandomUnlabeled, where the unlabeled data of each task consists of 20 randomly sampled classes (roughly equal to the average class size in a super-class). Further discussion and details, including figures depicting example streams for each task sequence, are provided in Appendix F.
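The PositiveSuperclass construction above can be sketched as follows: a task's labeled batch Xn draws only from one parent class, while its unlabeled batch Un (of size µB) draws from the whole super-class containing that parent class. The class names, B, and µ below are illustrative assumptions, not the paper's exact configuration.

```python
import random

random.seed(0)
# One super-class with three parent classes, each holding five object classes.
superclass = {
    "flowers": ["orchid", "poppy", "rose", "sunflower", "tulip"],
    "fruits_vegetables": ["apple", "mushroom", "orange", "pear", "pepper"],
    "trees": ["maple", "oak", "palm", "pine", "willow"],
}
B, mu = 4, 2  # labeled batch size B and unlabeled-to-labeled ratio mu

def make_batches(task_parent):
    # Labeled data: classes of the current task's parent class only.
    labeled = random.choices(superclass[task_parent], k=B)
    # Unlabeled data: any class from the same super-class (PositiveSuperclass).
    pool = [c for classes in superclass.values() for c in classes]
    unlabeled = random.choices(pool, k=mu * B)
    return labeled, unlabeled

X_n, U_n = make_batches("flowers")
print(len(X_n), len(U_n))  # 4 8
```

Swapping the unlabeled pool for classes from a different super-class (or a uniform/random pool) yields the NegativeSuperclass, Uniform, and RandomUnlabeled variants.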
This paper proposes class-incremental learning with unlabeled data correlated to labeled data, and a method to tackle it. The task can be considered as a variant of [Lee et al.], which has no assumption on the unlabeled dataset, while this paper assumes the correlation between labeled and unlabeled dataset explicitly. The proposed method is inspired by state-of-the-art class-incremental learning, semi-supervised learning, and out-of-distribution (OoD) detection methods: local distillation [Li and Hoiem], OoD detection [Hsu et al.], consistency regularization and pseudo labeling (or hard distillation) [Sohn et al.], and loss balancing based on class statistics [Lee et al.]. Experimental results support that the proposed method outperforms prior works in the proposed task.
SP:3c564f60a942d2b56589ce292cc233d137560152
This paper investigates a semi-supervised continual learning (SSCL) setting and proposes a new method called DistillMatch for this setting. The major contributions are: (1) The authors carefully design a realistic SSCL setting where object-object correlations between labeled and unlabeled sets are maintained through a label super-class structure. They then develop the DistillMatch method, combining knowledge distillation, pseudo-labels, out-of-distribution detection, and consistency regularization. (2) They show that DistillMatch outperforms other existing methods on the CIFAR-100 dataset, along with ablation study results.
Can Kernel Transfer Operators Help Flow based Generative Models?
1 INTRODUCTION. A flow-based generative model refers to a deep generative model composed using a set of invertible transformations. While GANs and VAEs remain the two dominant generative models in the community, flow-based formulations have continually evolved and now offer competitive performance in applications including audio/speech synthesis Kim et al. (2019; 2020), text-to-speech Miao et al. (2020), photo-realistic image generation Kingma & Dhariwal (2018), and learning cross-domain mappings Mahajan et al. (2020). An important property of such models is the explicit use of a tractable likelihood function, which enables leveraging maximum likelihood principles during training as well as efficient/exact density estimation and sampling. The formulation is invertible by design, but this involves higher memory requirements. For example, permitting the bijective mapping to be expressive enough involves increases in the memory footprint Lee et al. (2020); Kim et al. (2019), an issue that is a focus of several recent results Jacobsen et al. (2018); Chen et al. (2016). Moreover, in these models we need to calculate the inverse and backpropagate through all invertible transformations during training. Calculating the inverse incurs a multiplicative increase in cost, usually as a function of the feature dimension, relative to the calculation of the likelihood, an issue addressed to some extent in Dinh et al. (2017); Kingma & Dhariwal (2018). At a high level, a flow-based generative model bijectively pushes the data density from a source to a target, i.e., from a known simple distribution to an unknown (possibly intractable) data distribution. During training, we seek to learn this bijective mapping by maximizing the likelihood of the mapped training samples.
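The tractable likelihood being maximized is the change-of-variables formula: for an invertible map f pushing data x to the base variable z, log p_X(x) = log p_Z(f(x)) + log|det J_f(x)|. A minimal 1-D sketch with a single affine bijection f(x) = a·x + b (the values of a, b, and x are illustrative assumptions):

```python
import numpy as np

a, b = 2.0, 0.5
f = lambda x: a * x + b          # forward map: data -> base distribution
f_inv = lambda z: (z - b) / a    # inverse map: base -> data (used for sampling)

def log_pz(z):
    # Standard normal base density, evaluated in log space.
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

x = 1.3
# Change of variables in 1-D: |det J_f| is just |f'(x)| = |a|.
log_px = log_pz(f(x)) + np.log(abs(a))

# Sanity check: generation inverts the training-time mapping exactly.
print(np.isclose(f_inv(f(x)), x))  # True
```

Deep flows stack many such bijections; the log-determinant terms add across layers, which is why architectures with cheap (e.g. triangular) Jacobians are preferred.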
In the generation step, we need the inverse of this mapping (given such an inverse exists) to map a sample drawn from the known distribution back to the input (data) space. When the Jacobian of the transformation mapping can be efficiently computed or estimated (e.g., having a lower triangular form), directly optimizing the likelihood of the training samples is possible. However, in training flow-based generative models, either we must restrict the expressiveness at each layer or fall back on more numerically heavy solutions, see (Chen et al., 2018). Next, we discuss how several existing results may provide a simplification strategy. 1.1 RELATED WORKS AND RATIONALE. Our starting point is the existing literature on Koopman and Perron-Frobenius operators Song et al. (2009); Fukumizu et al. (2013); Klus et al. (2020), which offers an arguably easier, optionally linear, procedure that can be used to analyze nonlinear dynamics of measurements that evolve temporally. For instance, as described in Arbabi (2018); Lusch et al. (2018), if we view the data/measurements as evaluations of functions of the state (of a dynamical system), where the functions are also called observables, then the entire set of such functions forms a linear vector space. Transfer operators on this space describe a linear evolution of the dynamics, i.e., finite-dimensional nonlinear dynamics are replaced by infinite-dimensional linear dynamics Brunton et al. (2017), perfectly evolving one set of measurements to another over time if the space can be well characterized. Of course, this is not practically beneficial because constructing such infinite-dimensional spaces could be intractable. Nonetheless, results in optimal control demonstrate that the idea can still be effective in specific cases, using approximations with either spectral analysis of a large but finite number of functions Williams et al.
(2015) or via a search for potential eigenfunctions of the operators using neural networks Li et al. (2017); Lusch et al. (2018). Within the last year, several results describe the potential benefits of such operators in machine learning problems as well Li et al. (2020); Azencot et al. (2020). If we consider the transformations that flow-based generative models learn as a nonlinear dynamics, a view also used in (Chen et al., 2018), a data-driven approximation strategy one can consider is to map the given data (or distribution) into an infinite-dimensional space of functions through the kernel trick, which may allow the use of well-known results based on kernel methods, including old and new results on powerful neural kernels Neal (1994); Jacot et al. (2018); Arora et al. (2019). Utilizing these results, a mean embedding in the corresponding Reproducing Kernel Hilbert Space (RKHS) would correspond to the distribution in the input space (the distribution from which input samples are drawn). Therefore, the problem of identifying a nonlinear mapping (or dynamics) in the input space (going from an intractable distribution to a known distribution or vice-versa) reduces to estimating a linear mapping operator between two empirical kernel mean embeddings, where recent results on kernel transfer operators Klus et al. (2020) could be relevant or applicable. However, due to the high variability of the data, estimation of the distribution directly in the input space, as we will see shortly, can be difficult. But if the input space is low-dimensional or otherwise structured, this problem could be mitigated. Fortunately, for many image datasets, one can identify a low-dimensional latent space such that, in theory, the above pipeline could be instantiated, enabling us to learn a transfer operator.
Conceptually, it is not difficult to see how the foregoing idea could potentially help (or simplify) flow-based generative models. In principle, using a transfer operator, one could push forward the input data distribution to a target distribution of our choice, if both have already been mapped to a sufficiently high dimensional space. If, additionally, the operator could also be inverted, this strategy may, at least, be viable. Of course, several key components are missing. We need to (a) assess if setting up a suitable infinite-dimensional space is possible, (b) identify if we can estimate the transfer operator, and then finally (c) check if the procedure works at all. In the following sections of the paper, we will verify these key components and show that using only a linear operator yields surprisingly competitive results on several image generation tasks. 2 PRELIMINARIES. Auto-encoders. Images often lie close to an (unknown) lower dimensional manifold M ⊂ R^m such that dim(M) ≪ m, and operating with densities in a lower dimensional setting is often much easier. VAEs leverage this explicitly via auto-encoders. If the parameterized empirical density is p_τ(x), in VAEs we write it as ∫ p_τ(x|z) p(z) dz, where z is the low dimensional representation with a suitable prior. We then use an encoder distribution q_τ′(z|x) and a decoder distribution p_τ(x|z) and train the parameters τ and τ′. In practice, p_τ(·) and q_τ′(·) are assumed to be Gaussian, but it is known that jointly fitting the manifold as well as regressing to a particular prior distribution can be challenging in VAEs Kingma et al. (2016); Dai & Wipf (2019). Now, consider the following approach, where we do not impose a distributional assumption on z.
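For reference, the VAE decomposition mentioned above is standardly trained through its evidence lower bound (ELBO); this is textbook background rather than a formula from this paper, with q denoting the encoder and p the decoder:

```latex
\log p_\tau(x) \;=\; \log \int p_\tau(x \mid z)\, p(z)\, dz
 \;\ge\; \mathbb{E}_{q_{\tau'}(z \mid x)}\!\left[ \log p_\tau(x \mid z) \right]
 \;-\; \mathrm{KL}\!\left( q_{\tau'}(z \mid x) \,\middle\|\, p(z) \right)
```

The KL term is precisely the part that forces the latent code toward the prior, which is the regression-to-prior difficulty the paragraph above refers to.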
If a well-regularized auto-encoder is able to capture information about the data generating density Alain & Bengio (2014), we can think of z as the input measurements (likely, meaningful representations of the input data), which are subsequently mapped to an infinite-dimensional space (of observables of these measurements). Now, if we could push forward the embedded RKHS distribution to the RKHS mapping of a simpler distribution, similar to flow-based generation, one could easily sample from the simple (e.g., standard normal) distribution and transform it via the learned mapping to samples in the latent space. In summary, instead of explicitly searching for the eigenfunctions, we propose to Step 1: embed the density from an auto-encoder into a RKHS, and Step 2: learn a kernel transfer operator in the RKHS in one step. In the remainder of this paper, we will provide the details to operationalize this idea and show that this simple approach, in fact, performs surprisingly well with highly favorable computational properties. We now introduce the definition of reproducing kernel Hilbert space (RKHS) and the kernel embedding of probability distributions, which are the building blocks of this paper. Definition 1 (RKHS, Aronszajn (1950)). Given a set X and H a set of functions φ : X → R, H is called a reproducing kernel Hilbert space (RKHS) with corresponding inner product 〈·,·〉_H if there exists a function k : X × X → R (called a reproducing kernel) such that (i) ∀X ∈ X, φ ∈ H, φ(X) = 〈φ, k(X,·)〉_H; (ii) H = cl(span({k(X,·), X ∈ X})), where cl(·) denotes the completion. The kernel mean embedding can be used to embed a probability measure in a RKHS. Definition 2 (Kernel Mean Embedding, Smola et al. (2007)).
Given a probability measure p on X with an associated RKHS H equipped with a reproducing kernel k such that sup_{X∈X} k(X,X) < ∞, the kernel mean embedding of p in the RKHS H, denoted by µ_H ∈ H, is defined as µ_H = ∫ k(X,·) dp(X), and the mean embedding operator E : L¹(X) → H is defined by µ_H = E p. For a characteristic kernel, the mapping from an input space distribution to its corresponding kernel mean embedding is one-to-one. Thus, two distributions in the input space are identical if and only if their kernel mean embeddings match exactly. This property enables the Maximum Mean Discrepancy (MMD) for distribution matching. For a finite number of samples {X_i}_{i=1}^N drawn from the probability measure µ, an empirical estimate of µ_H is µ̂_H = (1/N) Σ_{i=1}^N k(X_i,·). Definition 3 (Flow-based generative model, Dinh et al. (2014)). A flow-based generative model explicitly learns the data distribution by trying to bijectively map it to a tractable density via invertible transformations. Formally, given a random variable z following a tractable density, i.e., z ∼ p_θ(z), a flow-based model learns an invertible mapping f_φ such that a data sample x = f_φ(z) and the corresponding data distribution is p_θ̃(x) = f_* p_θ(z), where f_* = |dz/dx| is the push-forward operator.
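The empirical kernel mean embedding and the MMD it induces are easy to compute from Gram matrices, since ‖µ̂_X − µ̂_Y‖²_H expands into three kernel averages. A minimal sketch (illustrative only: it uses a Gaussian kernel as a stand-in, with toy sample sizes and bandwidth, not anything specified by this paper):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    """Squared distance between empirical kernel mean embeddings:
    ||mu_X - mu_Y||_H^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')]."""
    Kxx = gaussian_kernel(X, X, sigma).mean()
    Kxy = gaussian_kernel(X, Y, sigma).mean()
    Kyy = gaussian_kernel(Y, Y, sigma).mean()
    return Kxx - 2 * Kxy + Kyy

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(200, 2))
B = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution as A
C = rng.normal(3.0, 1.0, size=(200, 2))   # shifted distribution

# Matching distributions give a much smaller MMD than mismatched ones.
assert mmd_squared(A, B) < mmd_squared(A, C)
```

This is the sense in which matching mean embeddings in H matches distributions in the input space (for a characteristic kernel).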
The authors build a generator on top of the latent space of a “well-trained auto-encoder”. The generator consists of several steps: 1) sampling a latent element from the spherical latent space; 2) using a kernel Perron-Frobenius operator to embed the sampled latent element; 3) selecting latent representations of real samples based on the indices of the highest values of the newly calculated embedding; 4) calculating the geodesic interpolation of these latent representations; 5) using the decoder of the auto-encoder to generate new samples from the latent element resulting from the geodesic interpolation. The authors describe the related work and the methods they propose, and present experimental results on 4 datasets.
SP:b9fb99a65e598d0e0b1a97bc04dfc80865216541
This paper starts with an autoencoder trained on vision data. Autoencoders are not, strictly speaking, generative models, so to make the model generative the authors leverage a simple linear transformation over RKHSs. The kernel they use is the NTK. To generate, they construct a reduced-sample version of the kernel using the Nyström approximation, then use a geodesic interpolation algorithm to map the transferred sample from the prior in the RKHS back to the latent space for use in the decoder.
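The Nyström step mentioned in this review approximates a large n × n Gram matrix from m landmark points as K ≈ K_nm K_mm⁺ K_nmᵀ. A small sketch of the mechanics (illustrative only: the review says the paper uses the NTK, here replaced by a finite-rank polynomial kernel so the landmark set provably suffices; landmark selection and the rcond cutoff are assumptions):

```python
import numpy as np

def poly_kernel(X, Y):
    """Degree-2 polynomial kernel; its Gram matrix has rank <= 10 for 3-D inputs."""
    return (X @ Y.T + 1.0) ** 2

def nystrom_approx(X, m, seed=0):
    """Nystrom approximation of the n x n Gram matrix from m random landmarks:
    K ~= K_nm @ pinv(K_mm) @ K_nm.T, costing O(n*m) kernel evaluations."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    K_nm = poly_kernel(X, X[idx])         # n x m cross-kernel block
    K_mm = poly_kernel(X[idx], X[idx])    # m x m landmark kernel block
    return K_nm @ np.linalg.pinv(K_mm, rcond=1e-8) @ K_nm.T

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
K = poly_kernel(X, X)
K_50 = nystrom_approx(X, m=50)
# The kernel has rank <= 10, so 50 landmarks already reconstruct K up to
# numerical error while only ever factoring a 50 x 50 block.
assert np.allclose(K, K_50, rtol=1e-6, atol=1e-6)
```

For an infinite-rank kernel like the NTK, the reconstruction is approximate rather than exact, with error governed by how fast the kernel's spectrum decays.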
SP:b9fb99a65e598d0e0b1a97bc04dfc80865216541
Generalisation Guarantees For Continual Learning With Orthogonal Gradient Descent
1 INTRODUCTION. Continual Learning is a setting in which an agent is exposed to multiple tasks sequentially (Kirkpatrick et al., 2016). The core challenge lies in the ability of the agent to learn the new tasks while retaining the knowledge acquired from previous tasks. Too much plasticity (Nguyen et al., 2018) will lead to catastrophic forgetting, i.e., the degradation of the agent's ability to perform the past tasks (McCloskey & Cohen 1989, Ratcliff 1990, Goodfellow et al. 2014). On the other hand, too much stability will hinder the agent from adapting to new tasks. While there is a large literature on Continual Learning (Parisi et al., 2019), few works have addressed the problem from a theoretical perspective. Recently, Jacot et al. (2018) established the connection between overparameterized neural networks and kernel methods by introducing the Neural Tangent Kernel (NTK). They showed that at the infinite width limit, the kernel remains constant throughout training. Lee et al. (2019) also showed that, in the infinite width limit or Neural Tangent Kernel (NTK) regime, a network evolves as a linear model when trained on certain losses under gradient descent. In addition to these findings, recent works on the convergence of Stochastic Gradient Descent for overparameterized neural networks (Arora et al., 2019) have unlocked multiple mathematical tools to study the training dynamics of over-parameterized neural networks. We leverage these theoretical findings in order to propose a theoretical framework for Continual Learning in the NTK regime, then prove convergence and generalisation properties for the Orthogonal Gradient Descent for Continual Learning algorithm (Farajtabar et al., 2019). Our contributions are summarized as follows: 1. We present a theoretical framework to study Continual Learning algorithms in the Neural Tangent Kernel (NTK) regime.
This framework frames Continual Learning as a recursive kernel regression and comprises proxies for Transfer Learning, generalisation, task similarity, and Curriculum Learning (Thm. 1, Lem. 1 and Thm. 3). 2. In this framework, we prove that OGD is robust to forgetting with respect to an arbitrary number of tasks under an infinite memory (Sec. 5, Thm. 2). 3. We prove the first generalisation bound for Continual Learning with SGD and OGD. We find that generalisation through tasks depends on a task similarity with respect to the NTK (Sec. 5, Theorem 3). 4. We study the limits of this framework in practical settings, in which the Neural Tangent Kernel may vary. We find that the variation of the Neural Tangent Kernel negatively impacts the robustness of OGD to Catastrophic Forgetting in non-overparameterized benchmarks (Sec. 6). 2 RELATED WORKS. Continual Learning addresses the Catastrophic Forgetting problem, which refers to the tendency of agents to "forget" the previous tasks they were trained on over the course of training. It is an active area of research, and several heuristics were developed in order to characterise it (Ans & Rousset 1997, Ans & Rousset 2000, Goodfellow et al. 2014, French 1999, McCloskey & Cohen 1989, Robins 1995, Nguyen et al. 2019). Approaches to Continual Learning can be categorised into regularization methods, memory-based methods, and dynamic architectural methods. We refer the reader to the survey by Parisi et al. (2019) for an extensive overview of the existing methods. The idea behind memory-based methods is to store data from previous tasks in a buffer of fixed size, which can then be reused during training on the current task (Chaudhry et al. 2019, Van de Ven & Tolias 2018). Dynamic architectural methods, in contrast, rely on growing architectures which keep the past knowledge fixed and store new knowledge in new components, such as new nodes or layers (Lee et al.
2018, Schwarz et al. 2018). Finally, regularization methods regularize the objective in order to preserve the knowledge acquired from the previous tasks (Kirkpatrick et al. 2016, Aljundi et al. 2018, Farajtabar et al. 2019, Zenke et al. 2017). While there is a large literature on the field, there is a limited number of theoretical works on Continual Learning. Alquier et al. (2017) define a compound regret for lifelong learning, as the regret with respect to the oracle who would have known the best common representation for all tasks in advance. Knoblauch et al. (2020) show that optimal Continual Learning algorithms generally solve an NP-hard problem and will require perfect memory not to suffer from catastrophic forgetting. Benzing (2020) presents mathematical and empirical evidence that two methods, Synaptic Intelligence and Memory Aware Synapses, approximate a rescaled version of the Fisher Information. Continual Learning is not limited to Catastrophic Forgetting, but is also closely related to Transfer Learning. A desirable property of a Continual Learning algorithm is to enable the agent to carry the acquired knowledge through its lifetime, and transfer it to solve new tasks. A new theoretical study of the phenomenon was presented by Liu et al. (2019). They prove how task similarity contributes to generalisation when training with Stochastic Gradient Descent, in a two-task setting and for over-parameterised two-layer ReLU neural networks. The recent findings on the Neural Tangent Kernel (Jacot et al., 2018) and on the properties of overparameterized neural networks (Du et al. 2018, Arora et al. 2019) provide powerful tools to analyze their training dynamics. We build upon these advances to construct a theoretical framework for Continual Learning and study the generalisation properties of Orthogonal Gradient Descent. 3 PRELIMINARIES. Notation. We use bold-faced characters for vectors and matrices.
We use ‖·‖ to denote the Euclidean norm of a vector or the spectral norm of a matrix, and ‖·‖_F to denote the Frobenius norm of a matrix. We use 〈·,·〉 for the Euclidean dot product, and 〈·,·〉_H for the dot product in the Hilbert space H. We index the task ID by τ. The ≤ operator, if used with matrices, corresponds to the partial ordering over symmetric matrices. We denote by N the set of natural numbers, R the space of real numbers, and N* the set N \ {0}. We use ⊕ to refer to the direct sum over Euclidean spaces. 3.1 CONTINUAL LEARNING. Continual Learning considers a series of tasks {T_1, T_2, ...}, where each task can be viewed as a separate supervised learning problem. Similarly to online learning, data from each task is revealed only once. The goal of Continual Learning is to model each task accurately with a single model. The challenge is to achieve a good performance on the new tasks, while retaining knowledge from the previous tasks (Nguyen et al., 2018). We assume the data from each task T_τ, τ ∈ N*, is drawn from a distribution D_τ. Individual samples are denoted (x_{τ,i}, y_{τ,i}), where i ∈ [n_τ]. For a given task T_τ, the model is denoted f_τ; we use the superscript (t) to indicate the training iteration t ∈ N, and the superscript * to indicate the asymptotic convergence. For the regression case, given a ridge regularisation coefficient λ ∈ R+, for all t ∈ N, we write the train loss for a task T_τ as: L_τ(w_τ(t)) = Σ_{i=1}^{n_τ} ( f_τ^{(t)}(x_{τ,i}) − y_{τ,i} )² + λ ‖ w_τ(t) − w*_{τ−1} ‖². 3.2 OGD FOR CONTINUAL LEARNING. Let T_T be the current task, where T ∈ N*. For all i ∈ [n_T], let v_{T,i} = ∇_w f*_{T−1}(x_{T−1,i}), which is the Jacobian of task T_T. We define E_τ = vec({v_{τ,i}, i ∈ [n_τ]}), the subspace induced by the Jacobians. The idea behind OGD (Farajtabar et al.
, 2019) is to update the weights along the projection of the gradient on the orthogonal complement of the space induced by the Jacobians over the previous tasks, E_1 ⊕ ... ⊕ E_{τ−1}. The update rule at an iteration t ∈ N* for the task T_T is as follows: w_T(t+1) = w_T(t) − η Π_{E_{T−1}^⊥} ∇_w L_T(w_T(t)). The intuition behind OGD is to "preserve the previously acquired knowledge by maintaining a space consisting of the gradient directions of the neural networks predictions on previous tasks" (Farajtabar et al., 2019). Throughout the paper, we only consider the OGD-GTL variant, which stores the gradient with respect to the ground-truth logit. 3.3 NEURAL TANGENT KERNEL. In their seminal paper, Jacot et al. (2018) established the connection between deep networks and kernel methods by introducing the Neural Tangent Kernel (NTK). They showed that at the infinite width limit, the kernel remains constant throughout training. Lee et al. (2019) also showed that a network evolves as a linear model in the infinite width limit when trained on certain losses under gradient descent. Throughout our analysis, we make the assumption that the neural network is overparameterized, and consider the linear approximation of the neural network around its initialisation: f^{(t)}(x) ≈ f^{(0)}(x) + ∇_w f^{(0)}(x)^T ( w(t) − w(0) ). 4 CONVERGENCE - CONTINUAL LEARNING AS A RECURSIVE KERNEL REGRESSION. In this section, we derive a closed form expression for the Continual Learning models through tasks. We find that Continual Learning models can be expressed with a recursive kernel ridge regression across tasks. We also find that the NTK of OGD is recursive with respect to the projection of its feature map on the tasks' spaces. The result is presented in Theorem 1, a stepping stone towards proving the generalisation bound for OGD in Sec. 5. 4.1 CONVERGENCE THEOREM. Theorem 1 (Continual Learning as a recursive Kernel Regression). Given T_1, ...
, T_T a sequence of tasks. Fix a learning rate sequence (η_τ)_{τ∈[T]} and a ridge regularisation coefficient λ ∈ R+. If, for all τ, the learning rate satisfies η_τ < 1 / ‖κ_τ(X_τ, X_τ) + λI‖, then for all τ, w_τ(t) converges linearly to a limit solution w*_τ such that f*_τ(x) = f*_{τ−1}(x) + κ_τ(x, X_τ)^T ( κ_τ(X_τ, X_τ) + λI )^{−1} ỹ_τ, where κ_τ(x, x′) = φ̃_τ(x) φ̃_τ(x′)^T, ỹ_τ = y_τ − y_{τ−1→τ}, y_{τ−1→τ} = f*_{τ−1}(X_τ), and φ̃_τ(x) = ∇_w f*_0(x) ∈ R^d for SGD, while φ̃_τ(x) = T_τ ∇_w f*_0(x) ∈ R^{d−M_τ} for OGD, where {T_τ ∈ R^{(d−M_τ)×d}, τ ∈ [T]} are projection matrices from R^d to (⊕_{k=1}^{τ} E_k)^⊥ and M_τ = dim(⊕_{k=1}^{τ} E_k). The theorem describes how the model f*_τ evolves across tasks. It is recursive because the learning is incremental. For a given task T_τ, f*_{τ−1}(x) is the knowledge acquired by the agent up to task T_{τ−1}. At this stage, the model only fits the residual ỹ_τ = y_τ − y_{τ−1→τ}, which complements the knowledge acquired through previous tasks. This residual is also a proxy for task similarity: if the tasks are identical, the residual is equal to zero. The knowledge increment is captured by the term κ_τ(x, X_τ)^T ( κ_τ(X_τ, X_τ) + λI )^{−1} ỹ_τ. Finally, the task similarity is computed with respect to the most recent feature map φ̃_τ, and κ_τ is the NTK with respect to the feature map φ̃_τ. Remark 1. The recursive relation from Theorem 1 can also be written as a linear combination of kernel regressors as follows: f*_τ(x) = Σ_{k=1}^{τ} f̃*_k(x), where f̃*_k(x) = κ_k(x, X_k)^T ( κ_k(X_k, X_k) + λI )^{−1} ỹ_k.
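Theorem 1's recursion is easy to simulate: each task fits a kernel ridge regression on the residual of the running model and the increments are summed, exactly as in Remark 1. A minimal sketch (illustrative only: a fixed degree-2 polynomial feature map stands in for the frozen NTK feature map φ̃, and the toy data, λ, and sample sizes are assumptions, not the paper's setup):

```python
import numpy as np

def feature_map(X):
    """Toy stand-in for the frozen feature map: [1, x, x^2] features."""
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

def fit_increment(X, residual, lam=1e-3):
    """Kernel ridge regression on the residual y_tau - f*_{tau-1}(X_tau):
    returns x -> kappa(x, X)^T (kappa(X, X) + lam I)^{-1} residual."""
    Phi = feature_map(X)
    K = Phi @ Phi.T
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), residual)
    return lambda Xq: feature_map(Xq) @ Phi.T @ alpha

def continual_fit(tasks, lam=1e-3):
    """Recursive kernel regression: f*_tau = f*_{tau-1} + increment on residual."""
    increments = []
    for X, y in tasks:
        pred = sum(f(X) for f in increments) if increments else np.zeros(len(X))
        increments.append(fit_increment(X, y - pred, lam))
    return lambda Xq: sum(f(Xq) for f in increments)

rng = np.random.default_rng(0)
X1, X2 = rng.uniform(-1, 1, (30, 1)), rng.uniform(-1, 1, (30, 1))
target = lambda X: X[:, 0] ** 2 - 0.5
model = continual_fit([(X1, target(X1)), (X2, target(X2))])
# With identical tasks the second residual is ~0, so task 1 stays well fit.
assert np.max(np.abs(model(X1) - target(X1))) < 0.05
```

The identical-task case illustrates the residual-as-similarity remark: when ỹ_2 ≈ 0, the second increment is negligible and nothing is forgotten.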
The authors use a Neural Tangent Kernel (NTK) approximation of wide neural nets to establish generalization bounds for continual learning (CL) with stochastic gradient descent (SGD) and orthogonal gradient descent (OGD). In this regime, the authors prove that OGD does not suffer from catastrophic forgetting of training data. The authors additionally introduce a modification of OGD which yields significant performance improvements on the Rotated MNIST and Permuted MNIST problems. OGD involves storing feature maps of data points from previous tasks; the modified method (OGD+) additionally stores feature maps from the current task.
SP:1b467d99fd9fb26c374247e68873d34596705f75
Generalisation Guarantees For Continual Learning With Orthogonal Gradient Descent
1 INTRODUCTION . Continual Learning is a setting in which an agent is exposed to multiple tasks sequentially (Kirkpatrick et al., 2016). The core challenge lies in the ability of the agent to learn the new tasks while retaining the knowledge acquired from previous tasks. Too much plasticity (Nguyen et al., 2018) will lead to catastrophic forgetting, that is, the degradation of the ability of the agent to perform the past tasks (McCloskey & Cohen, 1989; Ratcliff, 1990; Goodfellow et al., 2014). On the other hand, too much stability will hinder the agent from adapting to new tasks. While there is a large literature on Continual Learning (Parisi et al., 2019), few works have addressed the problem from a theoretical perspective. Recently, Jacot et al. (2018) established the connection between overparameterized neural networks and kernel methods by introducing the Neural Tangent Kernel (NTK). They showed that in the infinite width limit, the kernel remains constant throughout training. Lee et al. (2019) also showed that, in the infinite width limit or Neural Tangent Kernel (NTK) regime, a network evolves as a linear model when trained on certain losses under gradient descent. In addition to these findings, recent works on the convergence of Stochastic Gradient Descent for overparameterized neural networks (Arora et al., 2019) have unlocked multiple mathematical tools to study the training dynamics of over-parameterized neural networks. We leverage these theoretical findings to propose a theoretical framework for Continual Learning in the NTK regime, then prove convergence and generalisation properties for the Orthogonal Gradient Descent algorithm for Continual Learning (Farajtabar et al., 2019). Our contributions are summarized as follows: 1. We present a theoretical framework to study Continual Learning algorithms in the Neural Tangent Kernel (NTK) regime.
This framework frames Continual Learning as a recursive kernel regression and comprises proxies for Transfer Learning, generalisation, task similarity and Curriculum Learning (Thm. 1, Lem. 1 and Thm. 3). 2. In this framework, we prove that OGD is robust to forgetting with respect to an arbitrary number of tasks under an infinite memory (Sec. 5, Thm. 2). 3. We prove the first generalisation bound for Continual Learning with SGD and OGD. We find that generalisation through tasks depends on a task similarity with respect to the NTK (Sec. 5, Theorem 3). 4. We study the limits of this framework in practical settings, in which the Neural Tangent Kernel may vary. We find that the variation of the Neural Tangent Kernel negatively impacts the robustness of OGD to Catastrophic Forgetting on non-overparameterized benchmarks (Sec. 6). 2 RELATED WORKS . Continual Learning addresses the Catastrophic Forgetting problem, which refers to the tendency of agents to “forget” the previous tasks they were trained on over the course of training. It is an active area of research, and several heuristics have been developed in order to characterise it (Ans & Rousset, 1997; Ans & Rousset, 2000; Goodfellow et al., 2014; French, 1999; McCloskey & Cohen, 1989; Robins, 1995; Nguyen et al., 2019). Approaches to Continual Learning can be categorised into regularization methods, memory-based methods and dynamic architectural methods. We refer the reader to the survey by Parisi et al. (2019) for an extensive overview of the existing methods. The idea behind memory-based methods is to store data from previous tasks in a buffer of fixed size, which can then be reused during training on the current task (Chaudhry et al., 2019; Van de Ven & Tolias, 2018). Dynamic architectural methods, in contrast, rely on growing architectures which keep the past knowledge fixed and store new knowledge in new components, such as new nodes or layers (Lee et al.
2018; Schwarz et al., 2018). Finally, regularization methods regularize the objective in order to preserve the knowledge acquired from the previous tasks (Kirkpatrick et al., 2016; Aljundi et al., 2018; Farajtabar et al., 2019; Zenke et al., 2017). While there is a large literature in the field, there is a limited number of theoretical works on Continual Learning. Alquier et al. (2017) define a compound regret for lifelong learning, as the regret with respect to the oracle who would have known the best common representation for all tasks in advance. Knoblauch et al. (2020) show that optimal Continual Learning algorithms generally solve an NP-hard problem and require perfect memory not to suffer from catastrophic forgetting. Benzing (2020) presents mathematical and empirical evidence that two methods, Synaptic Intelligence and Memory Aware Synapses, approximate a rescaled version of the Fisher Information. Continual Learning is not limited to Catastrophic Forgetting; it is also closely related to Transfer Learning. A desirable property of a Continual Learning algorithm is to enable the agent to carry the acquired knowledge through its lifetime, and to transfer it to solve new tasks. A new theoretical study of this phenomenon was presented by Liu et al. (2019). They prove how task similarity contributes to generalisation when training with Stochastic Gradient Descent, in a two-task setting and for over-parameterised two-layer ReLU neural networks. The recent findings on the Neural Tangent Kernel (Jacot et al., 2018) and on the properties of overparameterized neural networks (Du et al., 2018; Arora et al., 2019) provide powerful tools to analyze their training dynamics. We build on these advances to construct a theoretical framework for Continual Learning and study the generalisation properties of Orthogonal Gradient Descent. 3 PRELIMINARIES . Notation We use bold-faced characters for vectors and matrices.
We use ‖·‖ to denote the Euclidean norm of a vector or the spectral norm of a matrix, and ‖·‖F to denote the Frobenius norm of a matrix. We use 〈·, ·〉 for the Euclidean dot product, and 〈·, ·〉H for the dot product in the Hilbert space H. We index the task ID by τ. The ≤ operator, when used with matrices, corresponds to the partial ordering over symmetric matrices. We denote by N the set of natural numbers, by R the set of real numbers, and by N? the set N \ {0}. We use ⊕ to refer to the direct sum over Euclidean spaces. 3.1 CONTINUAL LEARNING . Continual Learning considers a series of tasks {T 1, T 2, ...}, where each task can be viewed as a separate supervised learning problem. Similarly to online learning, data from each task is revealed only once. The goal of Continual Learning is to model each task accurately with a single model. The challenge is to achieve a good performance on the new tasks, while retaining knowledge from the previous tasks (Nguyen et al., 2018). We assume the data from each task T τ, τ ∈ N?, is drawn from a distribution Dτ. Individual samples are denoted (x_{τ,i}, y_{τ,i}), where i ∈ [nτ]. For a given task T τ, the model is denoted fτ; we use the superscript (t) to indicate the training iteration t ∈ N, while we use the superscript ? to indicate the asymptotic convergence. For the regression case, given a ridge regularisation coefficient λ ∈ R+, for all t ∈ N, we write the train loss for a task T τ as: Lτ(wτ(t)) = Σ_{i=1}^{nτ} ( f_τ^{(t)}(x_{τ,i}) − y_{τ,i} )² + λ ‖wτ(t) − w?_{τ−1}‖². 3.2 OGD FOR CONTINUAL LEARNING . Let T T be the current task, where T ∈ N?. For all i ∈ [nT], let v_{T,i} = ∇w f?_{T−1}(x_{T−1,i}), which is the Jacobian of task T T. We define Eτ = vec({v_{τ,i}, i ∈ [nτ]}), the subspace induced by the Jacobians. The idea behind OGD (Farajtabar et al.
, 2019) is to update the weights along the projection of the gradient on the orthogonal space induced by the Jacobians over the previous tasks E1 ⊕ ... ⊕ E_{τ−1}. The update rule at an iteration t ∈ N? for the task T T is as follows: wT(t+1) = wT(t) − η Π_{E⊥_{T−1}} ∇w L^T(wT(t)). The intuition behind OGD is to “preserve the previously acquired knowledge by maintaining a space consisting of the gradient directions of the neural networks predictions on previous tasks” (Farajtabar et al., 2019). Throughout the paper, we only consider the OGD-GTL variant, which stores the gradient with respect to the ground truth logit. 3.3 NEURAL TANGENT KERNEL . In their seminal paper, Jacot et al. (2018) established the connection between deep networks and kernel methods by introducing the Neural Tangent Kernel (NTK). They showed that in the infinite width limit, the kernel remains constant throughout training. Lee et al. (2019) also showed that a network evolves as a linear model in the infinite width limit when trained on certain losses under gradient descent. Throughout our analysis, we make the assumption that the neural network is overparameterized, and consider the linear approximation of the neural network around its initialisation: f^{(t)}(x) ≈ f^{(0)}(x) + ∇w f^{(0)}(x)^T (w(t) − w(0)). 4 CONVERGENCE - CONTINUAL LEARNING AS A RECURSIVE KERNEL REGRESSION . In this section, we derive a closed form expression for the Continual Learning models through tasks. We find that Continual Learning models can be expressed as a recursive kernel ridge regression across tasks. We also find that the NTK of OGD is recursive with respect to the projection of its feature map on the tasks’ spaces. The result is presented in Theorem 1, a stepping stone towards proving the generalisation bound for OGD in Sec. 5. 4.1 CONVERGENCE THEOREM . Theorem 1 (Continual Learning as a recursive Kernel Regression) Given T 1, ...
, T T, a sequence of tasks. Fix a learning rate sequence (ητ)_{τ∈[T]} and a ridge regularisation coefficient λ ∈ R+. If, for all τ, the learning rate satisfies ητ < 1/‖κτ(Xτ, Xτ) + λI‖, then for all τ, wτ(t) converges linearly to a limit solution w?τ such that f?τ(x) = f?_{τ−1}(x) + κτ(x, Xτ)^T (κτ(Xτ, Xτ) + λI)^{−1} ỹτ, where κτ(x, x′) = φ̃τ(x) φ̃τ(x′)^T, ỹτ = yτ − y_{τ−1→τ}, y_{τ−1→τ} = f?_{τ−1}(Xτ), and φ̃τ(x) = ∇w f?0(x) ∈ R^d for SGD, φ̃τ(x) = Tτ ∇w f?0(x) ∈ R^{d−Mτ} for OGD, where {Tτ ∈ R^{(d−Mτ)×d}, τ ∈ [T]} are projection matrices from R^d to (⊕_{k=1}^{τ} Ek)⊥ and Mτ = dim(⊕_{k=1}^{τ} Ek). The theorem describes how the model f?τ evolves across tasks. It is recursive because the learning is incremental. For a given task T τ, f?_{τ−1}(x) is the knowledge acquired by the agent up to the task T τ−1. At this stage, the model only fits the residual ỹτ = yτ − y_{τ−1→τ}, which complements the knowledge acquired through previous tasks. This residual is also a proxy for task similarity: if the tasks are identical, the residual is equal to zero. The knowledge increment is captured by the term κτ(x, Xτ)^T (κτ(Xτ, Xτ) + λI)^{−1} ỹτ. Finally, the task similarity is computed with respect to the most recent feature map φ̃τ, and κτ is the NTK with respect to the feature map φ̃τ. Remark 1 The recursive relation from Theorem 1 can also be written as a linear combination of kernel regressors as follows: f?τ(x) = Σ_{k=1}^{τ} f̃?k(x), where f̃?k(x) = κk(x, Xk)^T (κk(Xk, Xk) + λI)^{−1} ỹk.
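The recursion in Theorem 1 and Remark 1 can be sketched numerically. The toy below is only an illustration under stated assumptions: it uses a fixed RBF kernel as a stand-in for the fixed tangent-feature kernel κτ (the theorem uses the network's NTK features, possibly projected for OGD), and fits each new task on the residual ỹτ left by the previous model:

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """Toy fixed kernel, standing in for the NTK kappa_tau."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class RecursiveKRR:
    """Fit each new task on the residual left by the previous model."""

    def __init__(self, lam=1e-6):
        self.lam = lam
        self.terms = []  # per task: (X_tau, alpha_tau)

    def predict(self, X):
        # f*_tau(x) = sum_k kappa(x, X_k) @ alpha_k  (Remark 1)
        out = np.zeros(len(X))
        for X_tau, alpha in self.terms:
            out = out + rbf(X, X_tau) @ alpha
        return out

    def fit_task(self, X, y):
        resid = y - self.predict(X)  # y_tilde = y_tau - f*_{tau-1}(X_tau)
        K = rbf(X, X)
        alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), resid)
        self.terms.append((X, alpha))
```

On identical tasks the residual vanishes, so later terms contribute almost nothing, matching the "proxy for task similarity" reading above.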
The paper provides a theoretical analysis of the OGD-based continual learning method. The method was in fact proposed by a previous paper (Farajtabar et al., 2019), and the current paper shows a generalization bound for the regression case. The result (Thm 3) compares the generalization bounds of SGD and OGD and shows that OGD leads to a tighter bound. The theorem is also based on a bound on the Rademacher Complexity (Lemma 1). The paper also suggests OGD+, which stores some data points from past tasks. They also present some experimental results on small benchmark datasets, and show that OGD+ outperforms SGD and OGD.
SP:1b467d99fd9fb26c374247e68873d34596705f75
PolyRetro: Few-shot Polymer Retrosynthesis via Domain Adaptation
Polymers appear everywhere in our daily lives (fabrics, plastics, rubbers, etc.), and we could hardly live without them. To make polymers, chemists develop processes that combine smaller building blocks (monomers) to form long chains or complex networks (polymers). These processes are called polymerizations and usually take a lot of human effort to develop. Although machine learning models for small molecules have generated many promising results, the prediction problem for polymerization is new and suffers from the scarcity of polymerization datasets available in the field. Furthermore, the problem is made even more challenging by the large size of the polymers and the additional recursive constraints, which are not present in the small molecule problem. In this paper, we make an initial step towards this challenge and propose a learning-based search framework that can automatically identify a sequence of reactions that lead to the polymerization of a target polymer with minimal polymerization data involved. Our method transfers models trained on small molecule retrosynthesis datasets to check the validity of polymerization reactions. Furthermore, our method also incorporates a template prior learned on a limited amount of polymer data into the framework to adapt the model from the small molecule to the polymer domain. We demonstrate that our method is able to propose high-quality polymerization plans for a dataset of 52 real-world polymers, of which more than 50% successfully recover the polymerization processes currently used in the real world. 1 INTRODUCTION . Human beings are living in a world of chemical products, among which a category of chemicals, called polymers, is playing an essential role. Ranging from fabrics to plastics to rubbers, polymers appear in every corner of our daily lives.
Polymers with different properties are desired in different circumstances, and chemists have been spending tremendous effort to design and synthesize new polymers in the pursuit of ones with better properties. To make polymers, chemists develop processes that combine small building blocks, which we call monomers, to form longer chains or complex networks. Such processes are called polymerization and take a significant amount of human effort to develop. Since the rise of deep learning (LeCun et al., 2015), applying these models to science problems such as those in biology and chemistry has gradually gathered attention. Specifically, the application of AI methods in the retrosynthetic design of chemical compounds has become very popular recently (Segler et al., 2018; Coley et al.). While most work focuses on synthesizing drug-like small molecules, the study of polymer retrosynthesis is still in its infancy. The reasons are multifold, but one of the most important is the lack of available polymerization datasets, which poses difficulties for existing learning-based methods to learn meaningful patterns for polymerization reactions. Moreover, polymers usually have a chain or network structure with repeat units, which is very different from small molecules. These additional constraints also introduce difficulties in the formulation and modeling of polymer design/retrosynthesis. In this paper, we focus specifically on the polymer retrosynthesis problem. While there has been a series of works focusing on small molecule retrosynthesis (Corey & Wipke, 1969; Gasteiger et al., 1992; Coley et al., 2017; Liu et al., 2017; Segler & Waller, 2017; Segler et al., 2018; Coley et al.; Karpov et al., 2019; Baylon et al.; Schreck et al.; Dai et al., 2019; Chen et al., 2020), the problem of polymerization is very different and challenging in the machine learning sense, in that: 1.
To predict synthesis routes for polymer repeat units, additional structural constraints, such as the recursive constraint, must be imposed to guarantee a potentially valid polymerization procedure. Such constraints do not exist in existing formulations for molecule retrosynthesis, so most methods cannot be directly applied. 2. Polymerization data for training is very limited. Compared with retrosynthesis models built for small molecules, where the accessible training data comprises at least tens of thousands of reactions, the amount of polymerization data is tiny; in our case it is even less than 100. This is far too small for most existing models to learn any synthesis patterns. In this paper, we formulate the problem of polymer retrosynthesis as a constrained optimization and present PolyRetro, a novel learning-based search framework to tackle the problem of polymer retrosynthesis. With beam search and rejection sampling, PolyRetro is able to propose monomer candidates with high polymerization probability while satisfying monomer synthesizability constraints. Our method is based on reaction templates collected from small molecule reactions, which capture the local structural properties of small molecule reactions. We leverage a one-step retrosynthesis model trained on a small molecule reaction dataset and adapt it to the polymer domain by incorporating a template prior learned on tiny-sized polymerization data. To verify whether the proposed monomers are synthesizable, we employ Retro* (Chen et al., 2020), a multi-step retrosynthesis model, to predict their synthesis routes. We demonstrate through experiments that PolyRetro is able to predict monomers accurately given target polymer repeat units. To our knowledge, we are the first to formulate, model and tackle the polymer retrosynthesis machine learning problem.
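The propose-then-check step described above can be illustrated with a minimal rejection-sampling sketch. Here `proposal_fn` and `is_synthesizable` are hypothetical stand-ins for the paper's candidate-generation model and the Retro*-based synthesizability check, not the actual PolyRetro implementation:

```python
def propose_monomers(proposal_fn, is_synthesizable, n_keep=5, max_tries=200):
    """Keep scored monomer candidates that pass a synthesizability check.

    proposal_fn() -> (candidate, score); is_synthesizable(candidate) -> bool.
    """
    kept = []
    for _ in range(max_tries):
        candidate, score = proposal_fn()
        if is_synthesizable(candidate):  # reject candidates that fail the check
            kept.append((score, candidate))
        if len(kept) == n_keep:
            break
    # Return surviving candidates ordered by proposal score, best first.
    return [c for _, c in sorted(kept, reverse=True)]
```

In PolyRetro the proposal side is a beam search over templates; the sketch only shows how the hard constraint filters the proposals.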
The approach we developed is general in the sense that it can also be applied to other machine learning problems, such as theorem proving and program synthesis, where the results we want to obtain involve recursion. For instance, the analogous problem in theorem proving is deriving a proof for a theorem which contains recursive relations; the analogous problem in program synthesis is generating programs containing loops and recursive calls. We choose to focus our application on polymer synthesis because of its importance and high societal impact. See Appendix E for more concrete discussions on the generality of PolyRetro. Our contributions are summarized below: • We formulate the problem of polymer retrosynthesis as a constrained optimization problem. To our knowledge, this is the first machine learning formulation that takes constraints in polymer retrosynthesis into consideration. • We propose PolyRetro, a learning-based search framework that tackles the problem of polymer retrosynthesis. To our knowledge, this is also the first learning-based method in this problem setting. • PolyRetro is able to recover 53% of ground truth monomers for a real-world polymer dataset using limited training data, significantly outperforming all existing algorithms. 2 RELATED WORKS . Computer-aided retrosynthetic planning for chemical molecules was first formalized by E. J. Corey (Corey & Wipke, 1969) and has been deployed over the past years. The task of retrosynthetic design is to identify a series of reactions that leads to the synthesis of the target molecule. This is one of the most fundamental problems in organic chemistry. Recently, many machine learning methods have been proposed for the easier but also important subproblem, where one is given the target molecule and the task is to predict the direct predicates (Coley et al.).
Methods to tackle such a ‘one-step version’ of retrosynthesis can be roughly divided into two categories: template-based and template-free ones. A template of a chemical reaction essentially describes how bonds and atoms change during the reaction, and can be applied in reverse to get reactants from products. Thus there has been a series of methods trying to predict the reaction templates given product molecules to get the corresponding reactants (Coley et al., 2017; Segler & Waller, 2017; Baylon et al.; Dai et al., 2019). While powerful, these methods are not applicable in the case where the training data comes without templates. To resolve this, there have been attempts to use sequence-to-sequence models to directly predict the SMILES 1 representation of reactants (Liu et al., 2017; Karpov et al., 2019). Such methods can be straightforwardly applied to any reaction data, but may need a large number of reactions for training to find meaningful reaction patterns without the help of reaction templates. On the other hand, there have been works trying to directly solve multi-step retrosynthesis prediction (Segler et al., 2018; Schreck et al.; Kishimoto et al., 2019; Chen et al., 2020). This multi-step procedure is usually decomposed into two modules: a one-step retrosynthesis module which proposes possible direct predicates given product molecules, and a planning algorithm to search for the best synthesis route with recursive applications of the one-step module. While there is such a rich literature in the retrosynthesis domain, a paucity of work has tackled the specific problem of polymer retrosynthesis. We focus on polymer retrosynthesis and bring knowledge from the general retrosynthesis literature to help tackle the problem. 3 BACKGROUND . In this section, we focus on providing the background knowledge about molecule retrosynthesis as well as defining notations. This serves as the building block of our polymer retrosynthesis modeling.
Given a molecule m ∈ M, where M indicates the space of molecules, the molecule retrosynthesis problem focuses on finding a set of reactants S ⊂ M that can be used to synthesize m. Before introducing the approaches for retrosynthesis, we first cover the background on reaction templates. 3.1 REACTION TEMPLATE . A reaction template T := oT → rT1 + rT2 + ... + rT|T| is a graph rewriting rule 2 that rewrites the subgraph pattern oT that is matched with the target molecule m into rTi, which appears in the i-th reactant si ∈ S. The set of templates T can be extracted from existing chemical reactions in the literature. Although applying templates involves expensive subgraph matching between oT and m, which is itself an NP-hard problem, such an approach provides a tractable way of finding the candidate set S with chemical rules. 3.2 LEARNING-BASED MOLECULE RETROSYNTHESIS . The molecule retrosynthesis problem has raised increasing interest in the machine learning (ML) community, due to its importance in chemistry and its difficulty as a structured prediction setting. We here mainly focus on the ML approaches for this problem, as some of them provide probabilistic interpretations that will be needed in our optimization framework. Depending on the number of reaction steps needed to synthesize m, the problem can be categorized into one-step and multi-step retrosynthesis. One-step molecule retrosynthesis: The one-step setting requires that R := S → m can be realized in one chemical reaction. It focuses on modeling p(S|m) with or without reaction templates. As the template-based approach guarantees the satisfaction of human-defined rules, we use the model proposed in NeuralSym (Segler & Waller, 2017) for one-step prediction. Specifically, in this model: p(S|m) ∝ Σ_{T∈T} p(T|m) I[SubgMatch(oT, m)] (1) where the SubgMatch(·) operator checks the subgraph matching between oT and m.
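Eq. (1) combines a learned template distribution with a hard applicability mask. A minimal numerical sketch of this masking-and-renormalising step is shown below; the boolean `applicable` vector stands in for the I[SubgMatch(oT, m)] indicator, which in practice requires actual subgraph matching on the molecular graph:

```python
import numpy as np

def template_posterior(template_scores, applicable):
    """p(T|m) from raw scores, masked by the subgraph-match indicator (Eq. 1 style).

    template_scores: unnormalised logits for p(T|m); applicable: bool mask I[SubgMatch].
    """
    # Softmax with the max subtracted for numerical stability.
    probs = np.exp(template_scores - template_scores.max())
    probs *= applicable.astype(float)  # zero out templates that do not match m
    return probs / probs.sum()
```

Inapplicable templates receive exactly zero probability, and the mass is redistributed over the matching ones, which is the effect of the indicator in Eq. (1).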
Multi-step molecule retrosynthesis The multi-step extension allows using multiple reactions Rm = {Rmi}_{i=1}^{|Rm|} to synthesize m, with the restriction that the reactants set S ⊂ I ⊂ M, where I is the set of starting molecules. This is essentially a planning problem that searches through the reaction space using one-step models as expansion proposals. In our paper, we use Retro* (Chen et al., 2020), which is the state-of-the-art approach that provably optimizes the synthesizability of Rm. 1 https://www.daylight.com/dayhtml/doc/theory/theory.smiles.html 2 https://www.daylight.com/dayhtml/doc/theory/theory.smarts.html
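The two-module decomposition (a one-step proposer plus a planner) can be sketched as a generic best-first search. This is not Retro* itself, just a toy planner over a hypothetical `one_step` proposal function, with molecules represented as opaque strings and `building_blocks` playing the role of the starting set I:

```python
import heapq

def plan(target, one_step, building_blocks, max_expansions=100):
    """Toy best-first retrosynthesis search (a sketch, not Retro* itself).

    one_step(mol) -> iterable of (reactant_list, step_cost) proposals.
    A route is complete when every frontier molecule is a building block.
    """
    counter = 0  # unique tie-breaker so heapq never compares lists
    queue = [(0.0, counter, [target], [])]
    while queue and max_expansions > 0:
        cost, _, frontier, route = heapq.heappop(queue)
        try:
            idx = next(i for i, m in enumerate(frontier) if m not in building_blocks)
        except StopIteration:
            return route, cost  # every frontier molecule is purchasable
        mol, rest = frontier[idx], frontier[:idx] + frontier[idx + 1:]
        for reactants, step_cost in one_step(mol):
            counter += 1
            heapq.heappush(queue, (cost + step_cost, counter,
                                   rest + list(reactants), route + [(mol, reactants)]))
        max_expansions -= 1
    return None, float("inf")
```

Retro* replaces the plain accumulated cost with a provably admissible value estimate; the skeleton of recursive one-step expansion is the same.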
This paper focuses on the polymer retrosynthesis problem. This is a novel problem and is very challenging because of the very small amount of training data available (<100 examples). The authors use reaction templates collected from small molecule reactions and formulate polymer retrosynthesis as a constrained optimization problem. The authors claim that this is the first learning-based method that takes the constraints of the polymer retrosynthesis problem into account.
SP:11a15c3a8b83e911ab4ce1193871b468656a63ac
PolyRetro: Few-shot Polymer Retrosynthesis via Domain Adaptation
Polymers appear everywhere in our daily lives – fabrics , plastics , rubbers , etc . – and we could hardly live without them . To make polymers , chemists develop processes that combine smaller building blocks ( monomers ) to form long chains or complex networks ( polymers ) . These processes are called polymerizations and will usually take lots of human efforts to develop . Although machine learning models for small molecules have generated lots of promising results , the prediction problem for polymerization is new and suffers from the scarcity of polymerization datasets available in the field . Furthermore , the problem is made even more challenging by the large size of the polymers and the additional recursive constraints , which are not present in the small molecule problem . In this paper , we make an initial step towards this challenge and propose a learning-based search framework that can automatically identify a sequence of reactions that lead to the polymerization of a target polymer with minimal polymerization data involved . Our method transfers models trained on small molecule datasets for retrosynthesis to check the validity of polymerization reaction . Furthermore , our method also incorporates a template prior learned on a limited amount of polymer data into the framework to adapt the model from small molecule to the polymer domain . We demonstrate that our method is able to propose high-quality polymerization plans for a dataset of 52 real-world polymers , of which more than 50 % successfully recovers the currently-in-used polymerization processes in the real world . 1 INTRODUCTION . Human beings are living in a world of chemical products , among which a category of chemicals , called polymers , is playing an essential role . Ranging from fabrics to plastics to rubbers , polymers are appearing in every corner of our daily lives . 
Polymers with different properties are desired when used in different circumstances , and chemists have been spending tremendous effort to design and synthesize new polymers in the pursuit of ones with better properties . To make polymers , chemists develop processes that combine small building blocks , which we call monomers , to form longer chains or complex networks . Such processes are called polymerization and will take a significant amount of human effort to develop . Since the rise of deep learning ( LeCun et al. , 2015 ) , applying these models to science problems like biology and chemistry ones have gradually gathered attentions . Specifically , the applications of AI methods in the retrosynthetic design of chemical compounds have become very popular recently ( Segler et al. , 2018 ; Coley et al. ) . While most work focuses on synthesizing drug-like small molecules , the study of polymer retrosynthesis is still at its infancy . The reasons are multifold , but one of the most important ones being the lack of available polymerization datasets , which poses difficulties for existing learning-based methods to learn meaningful pattern for polymerization reactions . Moreover , polymers usually have a chain or network structure with repeat units , which is very different from small molecules . This additional constraints also introduces difficulties in the formulation and modeling of polymer design/retrosynthesis . In this paper , we focus specifically on the polymer retrosynthesis problem . While there has been a series of work focusing on small molecule retrosynthesis ( Corey & Wipke , 1969 ; Gasteiger et al. , 1992 ; Coley et al. , 2017 ; Liu et al. , 2017 ; Segler & Waller , 2017 ; Segler et al. , 2018 ; Coley et al . ; Karpov et al. , 2019 ; Baylon et al . ; Schreck et al . ; Dai et al. , 2019 ; Chen et al. , 2020 ) , the problem of polymerization is very different and challenging in the machine learning sense that 1 . 
To predict synthesis routes for polymer repeat units , additional structural constraints such as recursive constraint should be imposed to guarantee a potentially valid polymerization procedure . Such constraints do not exist in existing formulation for molecule retrosynthesis , thus most methods could not be directly applied . 2 . Polymerization data for training is very limited . Compared with retrosynthesis models built for small molecules where accessible training data is at least tens of thousands , the size of polymerization data is tiny , and in our case it is even less than 100 . This size is meaningless for most existing models to learn any synthesis patterns . In this paper , we formulate the problem of polymer retrosynthesis as a constrained optimization and present PolyRetro , a novel learning-based search framework to tackle the problem of polymer retrosynthesis . With beam search and rejection sampling , PolyRetro is able to propose monomer candidates with high polymerization probability while satisfying monomer synthesizability constraints . Our method is based on reaction templates collected from small molecule reactions , which capture the local structural properties for small molecule reactions . We leverage an one-step retrosynthesis model trained on a small molecule reaction dataset and adapt it to the polymer domain by incorporating a template prior learned on tiny-sized polymerization data . To verify whether the proposed monomers are synthesizable , we employ Retro * ( Chen et al. , 2020 ) , a multi-step retrosynthesis model to predict their synthesis routes . We demonstrate PolyRetro through experiments that it is able to predict monomers accurately given target polymer repeat units . To our knowledge , we are the first to formulate , model and tackle the polymer retrosynthesis machine learning problem . 
The approach we developed is general in the sense that it can also be applied to other machine learning problem such as theorem proving and program synthesis , where the results we want to obtain involves recursion . For instance , the analogue problem in theorem proving is deriving proof for a theorem which contains recursive relations ; the analogue problem in program synthesis is generating programs containing loops and recursive calls . We choose to focus our application in polymer synthesis because its importance and high societal impact . See Appendix E for more concrete discussions on the generality of PolyRetro . Our contributions are summarized below : • We formulate the problem of polymer retrosynthesis as a constrained optimization problem . To our knowledge , this is the first machine learning formulation that takes constraints in polymer retrosynthesis into consideration . • We propose PolyRetro , a learning-based search framework that tackles the problem of polymer retrosynthesis . To our knowledge , this is also the first learning-based method in this problem setting . • PolyRetro is able to recover 53 % of ground truth monomers for a real-world polymer dataset using limited training data , significantly outperforming all existing algorithms . 2 RELATED WORKS . Computer-aided retrosynthetic planning for chemical molecules was first formalized by E. J. Corey ( Corey & Wipke , 1969 ) and have been deployed over the past years . The task of retrosynthetic design is to identify a series of reactions that leads to the synthesis of target molecule . This is one of the most fundamental problems in organic chemistry . Recently , many machine learning methods has been proposed to the easier but also important subproblem , where one is given target molecule and the task is to predict the direct predicates ( Coley et al. ) . 
Methods to tackle such ‘ one-step version ’ of retrosynthesis can be roughly divided into two categories , template-based and template-free ones . A template of a chemical reaction essentially describes how bonds and atoms change during the reaction , and can be applied in reverse to obtain reactants from products . Thus there have been a series of methods trying to predict the reaction templates given product molecules to get the corresponding reactants ( Coley et al. , 2017 ; Segler & Waller , 2017 ; Baylon et al . ; Dai et al. , 2019 ) . While powerful , these methods are not applicable in the case where training data comes without templates . To resolve this , there have been attempts to use sequence-to-sequence models to directly predict the SMILES 1 representation of reactants ( Liu et al. , 2017 ; Karpov et al. , 2019 ) . Such methods can be straightforwardly applied to any reaction data , but may need a large number of reactions for training to find meaningful reaction patterns without the help of reaction templates . On the other hand , there have been works trying to directly solve multi-step retrosynthesis prediction ( Segler et al. , 2018 ; Schreck et al . ; Kishimoto et al. , 2019 ; Chen et al. , 2020 ) . This multi-step procedure is usually decomposed into two modules , a one-step retrosynthesis module which proposes possible direct precursors given product molecules , and a planning algorithm to search for the best synthesis route with recursive application of the one-step module . While there is such a rich literature in the retrosynthesis domain , little work has tackled the specific problem of polymer retrosynthesis . We focus on polymer retrosynthesis and bring knowledge from the general retrosynthesis literature to help tackle the problem . 3 BACKGROUND . In this section , we focus on providing the background knowledge about molecule retrosynthesis as well as defining notations . This serves as the building block of our polymer retrosynthesis modeling .
Given a molecule $m \in \mathcal{M}$ , where $\mathcal{M}$ denotes the space of molecules , the molecule retrosynthesis problem focuses on finding a set of reactants $S \subset \mathcal{M}$ that can be used to synthesize m. Before introducing the approaches for retrosynthesis , we first cover the background on reaction templates . 3.1 REACTION TEMPLATE . A reaction template $T := o_T \rightarrow r_1^T + r_2^T + \ldots + r_{|T|}^T$ is a graph rewriting rule 2 that rewrites the subgraph pattern $o_T$ matched in the target molecule m into $r_i^T$ , which appears in the i-th reactant $s_i \in S$ . The set of templates $\mathcal{T}$ can be extracted from existing chemical reactions in the literature . Although applying templates involves expensive subgraph matching between $o_T$ and m , which is itself an NP-hard problem , this approach provides a tractable way of finding a candidate set S that respects chemical rules . 3.2 LEARNING-BASED MOLECULE RETROSYNTHESIS . The molecule retrosynthesis problem has attracted increasing interest in the machine learning ( ML ) community , due to its importance in chemistry and its difficulty as a structured prediction problem . We here mainly focus on ML approaches for this problem , as some of these provide probabilistic interpretations that will be needed in our optimization framework . Depending on the number of reaction steps needed to synthesize m , the problem can be categorized into one-step and multi-step retrosynthesis . One-step molecule retrosynthesis : The one-step setting requires that $R := S \rightarrow m$ can be realized in one chemical reaction . It focuses on modeling $p(S|m)$ with or without reaction templates . As the template-based approach guarantees the satisfaction of human-defined rules , we use the model proposed in NeuralSym ( Segler & Waller , 2017 ) for one-step prediction . Specifically , in this model : $p(S|m) \propto \sum_{T \in \mathcal{T}} p(T|m)\, \mathbb{I}[\mathrm{SubgMatch}(o_T , m)]$ ( 1 ) where the $\mathrm{SubgMatch}(\cdot)$ operator checks the subgraph matching between $o_T$ and m.
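The template-based score in ( 1 ) can be sketched directly: a template's probability contributes only if its product-side pattern matches the target. This is a minimal illustration, not chemistry code; the substring test stands in for a real chemical subgraph isomorphism check, and the template names and priors are invented for the example.

```python
# Minimal sketch of Eq. (1): p(S|m) ∝ sum_T p(T|m) * I[SubgMatch(o_T, m)].
# The matcher here is a stand-in (substring test on SMILES-like strings),
# NOT a real chemical subgraph isomorphism.
def one_step_scores(m, templates, p_template):
    """Return normalized template scores for product molecule m."""
    scores = {}
    for name, o_T in templates.items():
        match = o_T in m                      # stand-in for SubgMatch(o_T, m)
        scores[name] = p_template[name] * (1.0 if match else 0.0)
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()} if total > 0 else scores

# Hypothetical templates and prior; only the matching template survives.
templates = {"esterification": "C(=O)O", "amide": "C(=O)N"}
p_template = {"esterification": 0.7, "amide": 0.3}
print(one_step_scores("CCOC(=O)OCC", templates, p_template))
```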
Multi-step molecule retrosynthesis : The multi-step extension allows using multiple reactions $\mathcal{R}_m = \{R_i^m\}_{i=1}^{|\mathcal{R}_m|}$ to synthesize m , with the restriction that the reactants set $S \subset \mathcal{I} \subset \mathcal{M}$ , where $\mathcal{I}$ is the set of starting molecules . This is essentially a planning problem that searches through the reaction space using one-step models as expansion proposals . In our paper , we use Retro* ( Chen et al. , 2020 ) , the state-of-the-art approach that provably optimizes the synthesizability of $\mathcal{R}_m$ . 1 https://www.daylight.com/dayhtml/doc/theory/theory.smiles.html 2 https://www.daylight.com/dayhtml/doc/theory/theory.smarts.html
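The planning view of multi-step retrosynthesis can be illustrated with a toy search: repeatedly expand open molecules with a one-step proposal until everything lies in the starting set $\mathcal{I}$. The expansion table below is hypothetical; real planners such as Retro* use learned one-step models and value estimates rather than a lookup.

```python
# Toy sketch of multi-step retrosynthesis as search: expand open molecules
# (shallowest first) using a one-step proposal table, stopping when all open
# molecules are in the starting-material set. Purely illustrative.
import heapq

def plan(target, one_step, starting):
    """Return the list of applied one-step expansions, or None if stuck."""
    frontier, route = [(0, target)], []
    while frontier:
        depth, mol = heapq.heappop(frontier)
        if mol in starting:
            continue                          # already purchasable: nothing to do
        if mol not in one_step:
            return None                       # no known expansion: dead end
        reactants = one_step[mol]
        route.append((mol, reactants))
        for r in reactants:
            heapq.heappush(frontier, (depth + 1, r))
    return route

# Hypothetical chemistry: D <- B + C, B <- A, with A and C purchasable.
one_step = {"D": ["B", "C"], "B": ["A"]}
print(plan("D", one_step, starting={"A", "C"}))  # [('D', ['B', 'C']), ('B', ['A'])]
```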
This paper proposes a method for the retrosynthesis prediction of polymers. A challenge in this problem is the lack of synthetic data for polymers. The method attempts to leverage models for small molecule retrosynthesis predictions (where there is more abundant data), as well as domain specific constraints derived from the chemistry of a particular class of polymerization reactions. The method is shown to outperform some baselines that are commonly used in small molecule retrosynthesis.
SP:11a15c3a8b83e911ab4ce1193871b468656a63ac
CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients
1 INTRODUCTION . At present , the healthcare system is unable to sufficiently leverage the large , unlabelled datasets that it generates on a daily basis . This is partially due to the dependence of deep learning algorithms on high quality labels for good generalization performance . However , arriving at such high quality labels in a clinical setting where physicians are squeezed for time and attention is increasingly difficult . To overcome such an obstacle , self-supervised techniques have emerged as promising methods . These methods exploit the unlabelled dataset to formulate pretext tasks such as predicting the rotation of images ( Gidaris et al. , 2018 ) , their corresponding colourmap ( Larsson et al. , 2017 ) , and the arrow of time ( Wei et al. , 2018 ) . More recently , contrastive learning was introduced as a way to learn representations of instances that share some context . By capturing this high-level shared context ( e.g. , medical diagnosis ) , representations become invariant to the differences ( e.g. , input modalities ) between the instances . Contrastive learning can be characterized by three main components : 1 ) a positive and negative set of examples , 2 ) a set of transformation operators , and 3 ) a variant of the noise contrastive estimation loss . Most research in this domain has focused on curating a positive set of examples by exploiting data temporality ( Oord et al. , 2018 ) , data augmentations ( Chen et al. , 2020 ) , and multiple views of the same data instance ( Tian et al. , 2019 ) . These methods predominantly cater to the image domain , and central to their implementation is the notion that shared context arises from the same instance . We believe this precludes their applicability to the medical domain where physiological time-series are plentiful . Moreover , their interpretation of shared context is limited to data from a common source where that source is the individual data instance .
In medicine , however , shared context can occur at a higher level , the patient level . This idea is central to our contributions and will encourage the development of representations that are patient-specific . Such representations have the potential to be used in tasks that exploit patient similarity such as disease subgroup clustering and discovery . As a result of the process , medical practitioners may receive more interpretable outputs from networks . In this work , we leverage electrocardiogram ( ECG ) signals to learn patient-specific representations in a self-supervised manner via contrastive learning . To do so , we exploit the fact that ECG signals summarize both temporal and spatial information . The latter can be understood in terms of projections of the same electrical signal onto multiple axes , also known as leads . Contributions . Our contributions are the following : 1 . We propose a family of patient-specific contrastive learning methods , entitled CLOCS , that exploit both temporal and spatial information present within ECG signals . 2 . We show that CLOCS outperforms state-of-the-art methods , BYOL and SimCLR , when performing a linear evaluation of , and fine-tuning on , downstream tasks involving cardiac arrhythmia classification . 2 RELATED WORK . Contrastive Learning . In contrastive predictive coding , Oord et al . ( 2018 ) use representations of current segments to predict those of future segments . More recently , Tian et al . ( 2019 ) propose contrastive multi-view coding where multiple views of the same image are treated as ‘ shared context ’ . He et al . ( 2019 ) ; Chen et al . ( 2020 ) ; Grill et al . ( 2020 ) exploit the idea of instance discrimination ( Wu et al. , 2018 ) and interpret multiple views as stochastically augmented forms of the same instance . They explore the benefit of sequential data augmentations and show that cropping and colour distortions are the most important . 
These augmentations , however , do not trivially extend to the time-series domain . Shen et al . ( 2020 ) propose to create mixtures of images to smooth the output distribution and thus prevent the model from being overly confident . Time Contrastive Learning ( Hyvarinen & Morioka , 2016 ) performs contrastive learning over temporal segments in a signal and illustrates the relationship between this approach and ICA . In contrast to our work , they formulate their task as prediction of the segment index within a signal and perform limited experiments that do not exploit the noise contrastive estimation ( NCE ) loss . Bachman et al . ( 2019 ) maximize mutual information across multiple views of the same image . Time Contrastive Networks ( Sermanet et al. , 2017 ) attempt to learn commonalities across views and differences across time . In contrast , our work focuses on identifying commonalities across both spatial and temporal components of data . Self-Supervision for Medical Time-Series . Miotto et al . ( 2016 ) propose DeepPatient , a 3-layer stacked denoising autoencoder that attempts to learn a patient representation using electronic health record ( EHR ) data . Although performed on a large proprietary dataset , their approach is focused on EHRs and does not explore contrastive learning for physiological signals . Sarkar & Etemad ( 2020 ) apply existing self-supervised methods to ECG recordings in the context of affective computing . The methods implemented include defining pretext classification tasks such as temporal inversion , negation , time-warping , etc . Their work is limited to affective computing , does not explore contrastive learning , and does not exploit multi-lead data as we do . Lyu et al . ( 2018 ) explore a sequence-to-sequence model to learn representations from EHR data in the eICU dataset . In the process , they minimize the reconstruction error of the input time-series . Li et al .
( 2020 ) leverage the aforementioned unsupervised learning technique on a large clinical dataset , CPRD , to obtain uncertainty estimates for predictions . 3 BACKGROUND . 3.1 CONTRASTIVE LEARNING . Assume the presence of a learner $f_\theta : x \in \mathbb{R}^D \to h \in \mathbb{R}^E$ , parameterized by θ , which maps a D-dimensional input , x , to an E-dimensional representation , h. Further assume the presence of an unlabelled dataset , $X \in \mathbb{R}^{N \times D}$ , where N is the total number of instances . Each unlabelled instance , $x^i \in X$ , is exposed to a set of transformations , $T_A$ and $T_B$ , such that $x_A^i = T_A(x^i)$ and $x_B^i = T_B(x^i)$ . Such transformations can consist of two different data augmentation procedures such as random cropping and flipping . These transformed instances now belong to an augmented dataset , $X' \in \mathbb{R}^{N \times D \times V}$ , where V is equal to the number of applied transformations . In contrastive learning , the representations $h_A^i = f_\theta(x_A^i)$ and $h_B^i = f_\theta(x_B^i)$ are said to share context . As a result of this shared context , these representations constitute a positive pair because ( a ) they are derived from the same original instance , $x^i$ , and ( b ) the transformations applied to the original instance were class-preserving . Representations within a positive pair are encouraged to be similar to one another and dissimilar to the representations of all other instances , $h_A^j , h_B^j \ \forall j \neq i$ . The similarity of these representations , $s(h_A^i , h_B^i)$ , is quantified via a metric , s , such as cosine similarity . By encouraging high similarity between representations in the positive pair , the goal is to learn representations that are invariant to different transformations of the same instance . 4 METHODS . 4.1 POSITIVE AND NEGATIVE PAIRS OF REPRESENTATIONS . Representations that are derived from the same instance are typically assumed to share context . This approach , however , fails to capture commonalities present across instances .
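The contrastive objective described above (cosine similarity between paired representations, each positive contrasted against all other instances) can be sketched with an InfoNCE-style loss in NumPy. This is a generic illustration of the background material, not the paper's exact loss; the temperature value and shapes are arbitrary.

```python
# Minimal NumPy sketch of an InfoNCE-style contrastive loss: row i's positive
# is column i of the similarity matrix; all other columns act as negatives.
import numpy as np

def contrastive_loss(h_A, h_B, tau=0.1):
    """h_A, h_B: (N, E) representations of two views of the same N instances."""
    h_A = h_A / np.linalg.norm(h_A, axis=1, keepdims=True)
    h_B = h_B / np.linalg.norm(h_B, axis=1, keepdims=True)
    sim = h_A @ h_B.T / tau                  # (N, N) scaled cosine similarities
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))    # negative log-likelihood of positives

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
aligned = contrastive_loss(h, h)             # correctly matched pairs
shuffled = contrastive_loss(h, h[::-1])      # mismatched pairs score worse
print(aligned < shuffled)  # True
```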
In the medical domain , for example , multiple physiological recordings from the same patient may share context . It is important to note that if the multitude of physiological recordings associated with a patient were collected over large time-scales ( e.g. , on the order of years ) and in drastically different scenarios ( e.g. , at rest vs. during a stress test ) , then the shared context across these recordings is likely to diminish . This could be due to changing patient demographics and disease profiles . With the previous caveat in mind , we propose to leverage commonalities present in multiple physiological recordings by redefining a positive pair to refer to representations of transformed instances that belong to the same patient . We outline how to arrive at these transformed instances next . 4.2 TRANSFORMATION OPERATORS . When choosing the transformation operators , T , that are applied to each instance , the principal desideratum is that they capture invariances in the ECG recording . Motivated by the observation that ECG recordings reflect both temporal and spatial information , we propose to exploit both temporal and spatial invariance . We provide an intuition for such invariances in Fig . 1 . As it pertains to temporal invariance ( Fig . 1 left ) , we assume that upon splitting an ECG recording , associated with Class 1 , into several segments , each of them remains associated with Class 1 . We justify this assumption based on human physiology where abrupt changes in cardiac function ( on the order of seconds ) are unlikely to occur . If these segments were collected years apart , for example , our assumption may no longer hold . As for spatial invariance ( Fig . 1 right ) , we leverage the hexaxial diagram which illustrates the location of the leads relative to the heart . We assume that temporally-aligned ECG recordings from different leads ( views ) are associated with the same class .
This is based on the idea that multiple leads ( collected at the same time ) will reflect the same underlying cardiac function . Occasionally , this assumption may not hold , if , for example , a cardiac condition afflicts a specific part of the heart , making it detectable by only a few leads . We now describe how to exploit these invariances for contrastive learning . Contrastive Multi-segment Coding ( CMSC ) . Given an ECG recording , $x^i$ , with duration S seconds , we can extract V non-overlapping temporal segments , each with duration S/V seconds . If V = 2 , for example , $x_{t_1}^i = T_{t_1}(x^i)$ and $x_{t_2}^i = T_{t_2}(x^i)$ , where t indicates the timestamp of the temporal segment ( see Fig . 1 left ) . We exploit temporal invariances in the ECG by defining representations of these adjacent and non-overlapping temporal segments as positive pairs . Contrastive Multi-lead Coding ( CMLC ) . Different projections of the same electrical signal emanating from the heart are characterized by different leads , L. For example , with two leads , $L_1$ and $L_2$ , then $x_{L_1}^i = T_{L_1}(x^i)$ and $x_{L_2}^i = T_{L_2}(x^i)$ ( see Fig . 1 right ) . We exploit spatial invariances in the ECG by defining temporally-aligned representations of these different projections as positive pairs . Contrastive Multi-segment Multi-lead Coding ( CMSMLC ) . We simultaneously exploit both temporal and spatial invariances in the ECG by defining representations of non-overlapping temporal segments and different projections as positive pairs . For example , in the presence of two temporal segments with timestamps , $t_1$ and $t_2$ , that belong to two leads , $L_1$ and $L_2$ , then $x_{t_1 , L_1}^i = T_{t_1 , L_1}(x^i)$ and $x_{t_2 , L_2}^i = T_{t_2 , L_2}(x^i)$ .
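The CMSC construction above (split each recording into V non-overlapping segments; adjacent segments from the same patient form a positive pair) can be sketched as follows. Shapes and the per-patient dictionary are illustrative assumptions, not the paper's data pipeline.

```python
# Sketch of CMSC-style positive-pair construction: each recording is split into
# V non-overlapping segments of S/V samples; adjacent segments from the same
# patient become a positive pair.
import numpy as np

def make_segment_pairs(recordings, V=2):
    """recordings: dict patient_id -> 1-D signal whose length is divisible by V."""
    pairs = []
    for pid, x in recordings.items():
        segs = np.split(x, V)                # V equal, non-overlapping segments
        for a, b in zip(segs[:-1], segs[1:]):
            pairs.append((pid, a, b))        # adjacent segments share context
    return pairs

recs = {"patient_0": np.arange(8.0), "patient_1": np.arange(8.0) * 2}
pairs = make_segment_pairs(recs, V=2)
print(len(pairs), pairs[0][1], pairs[0][2])  # 2 [0. 1. 2. 3.] [4. 5. 6. 7.]
```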
This paper proposes to use contrastive learning to learn representations from cardiac signals (ECGs). The model incorporates ECG domain knowledge, patient-specific, and relationships between multiple leads (channels), in the learning process. The targeted task is very important, enormous ECGs are collected and stored, but seldom people are mining them as they are unlabelled. This paper might reinvigorate them.
SP:971a6f6ec230b9804ce5c14fa75eb1a2cf516249
This work presents a new self-supervised training framework for multi-channel ECG signals. The authors use contrastive learning by exploiting the fact that a single patient can generate multiple ECG signals, and there are multiple views (i.e. leads) for the same ECG signals. Compared to popular self-supervised training methods BYOL and SimCLR, the proposed method shows superior performance on the arrhythmia classification task for 4 different datasets in various scenarios.
SP:971a6f6ec230b9804ce5c14fa75eb1a2cf516249
Generative Learning With Euler Particle Transport
1 INTRODUCTION . The ability to efficiently sample from complex distributions plays a key role in a variety of prediction and inference tasks in machine learning and statistics ( Salakhutdinov , 2015 ) . The long-standing methodology for learning an underlying distribution relies on an explicit statistical data model , which can be difficult to specify in many applications such as image analysis , computer vision and natural language processing . In contrast , implicit generative models do not assume a specific form of the data distribution , but rather learn a nonlinear map to transform a reference distribution to the target distribution . This modeling approach has been shown to achieve impressive performance in many machine learning tasks ( Reed et al. , 2016 ; Zhu et al. , 2017 ) . Generative adversarial networks ( GAN ) ( Goodfellow et al. , 2014 ) , variational auto-encoders ( VAE ) ( Kingma & Welling , 2014 ) and flow-based methods ( Rezende & Mohamed , 2015 ) are important representatives of implicit generative models . In this paper , we propose an Euler particle transport ( EPT ) approach for learning a generative model by integrating ideas from optimal transport , numerical ODE , density-ratio estimation and deep neural networks . We formulate the problem of generative learning as that of finding a nonlinear transform that pushes forward a reference to the target based on the quadratic Wasserstein distance . Since it is challenging to solve the resulting Monge-Ampère equation , we consider the continuity equation derived from the linearization of the Monge-Ampère equation , which is a gradient flow converging to the target distribution . We solve the McKean-Vlasov equation associated with the gradient flow using the forward Euler method . The resulting EPT that pushes forward the reference distribution to the target distribution is a composition of a sequence of simple residual maps , which are computationally stable and easy to train .
The residual maps are completely determined by the density ratios between the distributions at the current iterations and the target distribution . We estimate density ratios based on the Bregman divergence with a gradient regularizer using deep density-ratio fitting . We establish bounds on the approximation errors due to linearization of the Monge-Ampère equation , Euler discretization of the McKean-Vlasov equation , and deep density-ratio estimation . Our result on the error rate for the proposed density-ratio estimators improves on the minimax rate of nonparametric estimation by exploiting the low-dimensional structure of the data , circumventing the “ curse of dimensionality ” . Experimental results on multi-mode synthetic data and comparisons with state-of-the-art GANs on benchmark data support our theoretical findings and demonstrate that EPT is computationally more stable and easier to train than GANs . Using simple ReLU ResNets without batch normalization and spectral normalization , we obtained results that are better than or comparable with those using GANs trained with such tricks . 2 EULER PARTICLE TRANSPORT . Let $X \in \mathbb{R}^m$ be a random vector with distribution ν , and let Z be a random vector with distribution µ . We assume that µ has a known and simple form . Our goal is to construct a transformation T such that $T_\# \mu = \nu$ , where $T_\# \mu$ denotes the push-forward distribution of µ by T , that is , the distribution of $T(Z)$ . Then we can sample from ν by first generating $Z \sim \mu$ and calculating $T(Z)$ . In practice , ν is unknown and only a random sample $\{X_i\}_{i=1}^n \overset{\text{i.i.d.}}{\sim} \nu$ is available . We must construct T based on the sample . There may exist multiple transports T with $T_\# \mu = \nu$ .
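The push-forward idea just described is easy to make concrete: to sample from $\nu = T_\#\mu$, draw $Z \sim \mu$ and return $T(Z)$. A toy sketch with a known affine map (an assumption for illustration; EPT learns T from data):

```python
# Push-forward sampling: if mu = N(0, 1) and T(z) = 2z + 3, then T_# mu = N(3, 4),
# so the empirical mean/std of T(Z) should be close to 3 and 2.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal(100_000)            # Z ~ mu
T = lambda z: 2.0 * z + 3.0                 # a simple, known transport map
X = T(Z)                                    # samples from the push-forward T_# mu

print(round(X.mean(), 1), round(X.std(), 1))  # approximately 3.0 2.0
```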
The optimal transport is the one that minimizes the quadratic Wasserstein distance between µ and ν, defined by W2(µ, ν) = { inf_{γ∈Γ(µ,ν)} E_{(Z,X)∼γ}[ ‖Z − X‖₂² ] }^{1/2}, (1) where Γ(µ, ν) denotes the set of couplings of (µ, ν) (Villani, 2008; Ambrosio et al., 2008). Suppose that µ and ν have densities q and p with respect to the Lebesgue measure, respectively. Then the optimal transport map T such that T#µ = ν is characterized by the Monge-Ampère equation (Brenier, 1991; McCann, 1995; Santambrogio, 2015). Specifically, the minimization problem in (1) admits a unique solution γ = (1, T)#µ with T = ∇Ψ, µ-a.e., where 1 is the identity map and ∇Ψ is the gradient of the potential function Ψ : Rm → R. This function is convex and satisfies the Monge-Ampère equation det(∇²Ψ(z)) = q(z) / p(∇Ψ(z)), z ∈ Rm. (2) Therefore, to find the optimal transport T, it suffices to solve (2) for Ψ. However, it is challenging to solve this degenerate elliptic equation due to its highly nonlinear nature. Below we describe the proposed EPT method for obtaining an approximate solution of the Monge-Ampère equation (2). It consists of the following steps: (a) linearizing (2) via residual maps, (b) determining the velocity fields governing the stochastic McKean-Vlasov equation resulting from the linearization, (c) calculating the forward Euler particle transport map, and (d) training the EPT map by estimating the velocity fields from data. Since the velocity fields are completely determined by density ratios, this step amounts to nonparametric density-ratio estimation. We also provide bounds on the errors due to linearization, discretization and estimation. Mathematical details and proofs are given in the appendix. Linearization via residual map A basic approach to addressing the difficulty due to nonlinearity is linearization.
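As a quick numerical illustration of the quadratic Wasserstein distance in (1): in one dimension the optimal coupling matches sorted order statistics, so W2 between two equal-size empirical samples has a simple closed form. This is a minimal sketch (the function name and sample sizes are our own choices, not from the paper):

```python
import numpy as np

def w2_empirical_1d(x, y):
    """Quadratic Wasserstein distance W2 between two equal-size 1-D samples.

    In 1-D the optimal coupling pairs the i-th smallest of x with the
    i-th smallest of y, so W2^2 is the mean squared difference of the
    sorted samples.
    """
    x, y = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, 10_000)   # sample from the reference mu = N(0, 1)
x = rng.normal(2.0, 1.0, 10_000)   # sample from the target nu = N(2, 1)
# For N(m1, s) and N(m2, s), W2 = |m1 - m2| = 2 here.
print(w2_empirical_1d(z, x))       # close to 2.0
```

In higher dimensions no such sorting trick exists, which is one reason the paper works with the Monge-Ampère characterization instead of computing W2 directly.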
We use a linearization method based on the residual map T_{t,Φt} = 1 + t∇Φt (a linearization of the optimal map T = ∇Ψ), t ≥ 0, (3) where Φt : Rm → R is a function to be chosen such that the law of T_{t,Φt}(Z) approaches ν as t increases (Villani, 2008). We give the specific form of Φt below; see Theorem B.1 in the appendix for details. This linearization scheme leads to the stochastic process Xt : Rm → Rm satisfying the McKean-Vlasov equation (d/dt) Xt(x) = vt(Xt(x)), t ≥ 0, with X0 ∼ µ, µ-a.e. x ∈ Rm, (4) where vt is the velocity vector field of Xt. In addition, we have vt = ∇Φt. Thus vt also determines the residual map (3). The details of the derivation are given in Theorems B.1 and B.2 in the appendix. Therefore, estimating the residual map (3) is equivalent to estimating vt. The movement of Xt along t is completely governed by vt, given the initial value. We choose vt to decrease the discrepancy between the distribution of Xt at time t, say µt, and the target ν with respect to a properly chosen measure. An equivalent formulation of (4) is through the gradient flow {µt}_{t≥0} with {vt}_{t≥0} as its velocity fields; see Proposition B.1 in the appendix. Computationally it is more convenient to work with (4). Determining the velocity field The basic intuition is that we should move in the direction that decreases the difference between µt and the target ν. We use an energy functional L[µt] to measure this difference. An important energy functional is the f-divergence (Ali & Silvey, 1966), L[µt] = Df(µt‖ν) = ∫_{Rm} p(x) f( qt(x)/p(x) ) dx, (5) where qt is the density of µt, p is the density of ν, and f : R+ → R is assumed to be a twice-differentiable convex function with f(1) = 0. We choose Φt such that L[µt] is minimized. We show in Theorem B.1 in the appendix that Φt(x) = −f′(rt(x)) and vt(x) = ∇Φt(x).
Therefore, vt(x) = −f′′(rt(x)) ∇rt(x), where rt(x) = qt(x)/p(x), x ∈ Rm. For example, if we use the χ²-divergence with f(c) = (c − 1)²/2, then vt(x) = −∇rt(x) is simply the negative gradient of the density ratio. Other types of velocity fields can be obtained by using different energy functionals, such as the squared Lebesgue norm of the density difference, i.e., L[µt] = ∫_{Rm} |qt(x) − p(x)|² dx; see Section B.2 for details. The forward Euler method Numerically, we need to discretize the McKean-Vlasov equation (4). Let s > 0 be a small step size. We use the forward Euler method defined iteratively by: Tk = 1 + s vk, (6) Xk+1 = Tk(Xk), (7) µk+1 = (Tk)#µk, (8) where X0 ∼ µ, µ0 = µ, and vk is the velocity field at the kth step, k = 0, 1, ..., K for some large K. The particle process {Xk}_{k≥0} is a discretized version of the continuous process {Xt}_{t≥0} in (4). The final transport map is the composition of a sequence of simple residual maps T0, T1, ..., TK, i.e., T = TK ◦ TK−1 ◦ · · · ◦ T0. This updating scheme is based on the forward Euler method for solving equation (4), which is why we refer to the proposed method as Euler particle transport (EPT). Training EPT When the target ν is unknown and only a random sample is available, it is natural to learn ν by first estimating the discrete velocity fields vk at the sample level and then plugging the estimator of vk into (6). For example, if we use the f-divergence as the energy functional, estimating vk(x) = −f′′(rk(x)) ∇rk(x) boils down to estimating the density ratios rk(x) = qk(x)/p(x) dynamically at each iteration k. Nonparametric density-ratio estimation using Bregman divergences and a gradient regularizer is discussed in Section 4 below. Let v̂k be the estimated velocity field at the kth iteration. The kth estimated residual map is T̂k = 1 + s v̂k. Finally, the trained map is T̂ = T̂K ◦ T̂K−1 ◦ · · · ◦ T̂0. (9)
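The forward Euler scheme (6)–(8) with the χ²-divergence velocity v = −f′′(r)∇r can be sketched in one dimension. In this toy, the current density q_k is crudely approximated by a Gaussian fitted to the particles, standing in for the paper's deep density-ratio estimator; the step size, clipping thresholds, and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

def ept_chi2_1d(x0, m_p, s_p, step=0.05, iters=600, vmax=5.0):
    """Toy 1-D Euler particle transport with the chi^2 energy functional.

    q_k is approximated by a Gaussian fitted to the particles.  With
    f(c) = (c - 1)^2 / 2, the velocity is v_k = -f''(r_k) grad r_k
    = -grad r_k, where r_k = q_k / p.  Each update applies the residual
    map T_k = 1 + s v_k, i.e. X_{k+1} = X_k + s v_k(X_k).
    """
    x = x0.copy()
    for _ in range(iters):
        m_q, s_q = x.mean(), x.std()
        # log r(x) = log q(x) - log p(x) for two Gaussians
        log_r = (np.log(s_p / s_q)
                 - 0.5 * ((x - m_q) / s_q) ** 2
                 + 0.5 * ((x - m_p) / s_p) ** 2)
        r = np.exp(np.clip(log_r, -10.0, 10.0))
        # grad r = r * (grad log q - grad log p)
        grad_r = r * (-(x - m_q) / s_q**2 + (x - m_p) / s_p**2)
        v = np.clip(-grad_r, -vmax, vmax)   # clip velocities for stability
        x = x + step * v
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 5000)           # reference mu = N(0, 1)
xK = ept_chi2_1d(x0, m_p=1.0, s_p=1.0)    # target nu = N(1, 1)
print(xK.mean(), xK.std())                # both should approach 1.0
```

The Gaussian fit works here only because the target overlaps the reference well; with little support overlap the ratio r blows up, which is exactly the difficulty the gradient regularizer in Section 4 is meant to mitigate.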
Theoretical guarantees We establish the following bound on the approximation error due to the linearization of the Monge-Ampère equation under appropriate conditions: W2(µt, ν) = O(e^{−λt}), (10) for some λ > 0; see Proposition B.1 in the appendix. Therefore, µt converges to ν exponentially fast as t → ∞. For an integer K ≥ 1 and a small s > 0, let {µ^s_t : t ∈ [ks, (k+1)s), k = 0, ..., K} be a piecewise-constant interpolation between µ_{ks} and µ_{(k+1)s}, k = 0, 1, ..., K. Under the assumption that the velocity fields vt are Lipschitz continuous with respect to (x, µt), the discretization error of µ^s_t can be bounded on a finite time interval [0, T) as follows: sup_{t∈[0,T)} W2(µt, µ^s_t) = O(s). (11) The proof of (11) is given in Proposition B.2 in the appendix. The error bounds (10) and (11) imply that the distributions of the particles Xk generated by the EPT map defined in (7), with a small s and a sufficiently large k, converge to the target ν at the rate of the discretization size s. When training the EPT map, we use deep neural networks to estimate the density ratios (density differences) from samples. In Theorem 4.1, we provide an estimation error bound that improves the minimax rate of deep nonparametric estimation by exploiting the low-dimensional structure of the data and circumvents the “ curse of dimensionality. ” Thus this result is of independent interest in nonparametric estimation using deep neural networks.
This paper considers generative learning by discretizing a Wasserstein gradient flow with the Euler method. More precisely, samples from a target distribution are given and the goal is to push forward samples from an initial distribution to the target distribution. The proposed method minimizes the f-divergence between the initial distribution and the target distribution by following the Wasserstein gradient flow of the f-divergence w.r.t. the target distribution (the objective function). This Wasserstein gradient flow is discretized via the Euler method to obtain the proposed algorithm. The Euler method involves the Wasserstein gradient of the objective, which is intractable. The authors describe a statistical methodology to estimate this Wasserstein gradient based on samples from the target distribution and samples from the current distribution. They also prove a bound on the error of the estimated Wasserstein gradient w.r.t. the true Wasserstein gradient. Finally, the paper presents relevant numerical experiments.
SP:701fd2e93907ea7c0c9c6e70d8eac8d91250e023
Generative Learning With Euler Particle Transport
This paper tackles generative modeling (sampling, in particular) by finding push-forward maps T (equivalently, velocity fields v) that iteratively move particles from a reference distribution toward the target data distribution. The velocity fields are obtained by minimizing the f-divergence between the particle density at iteration k and the target data density, and are shown to take the form of the gradient of a density ratio. Based on this, the training stage becomes estimating the density ratio via neural networks for each iteration k = 1, ..., K. However, estimating the density ratio can be quite difficult when the two densities have little overlapping support, so the authors propose adding a gradient regularizer to the density-ratio estimating function. The experiments on real-world computer vision benchmarks demonstrate reasonable sampling quality, and the FID score on CIFAR-10 is comparable to some GAN baselines in the generative modeling literature.
Identifying Treatment Effects under Unobserved Confounding by Causal Representation Learning
As an important problem of causal inference, we discuss the estimation of treatment effects under the existence of unobserved confounding. By representing the confounder as a latent variable, we propose Counterfactual VAE, a new variant of variational autoencoder, based on recent advances in identifiability of representation learning. Combining the identifiability and classical identification results of causal inference, under mild assumptions on the generative model and with small noise on the outcome, we theoretically show that the confounder is identifiable up to an affine transformation and then the treatment effects can be identified. Experiments on synthetic and semi-synthetic datasets demonstrate that our method matches the state-of-the-art, even under settings violating our formal assumptions. 1 INTRODUCTION . Causal inference (Imbens & Rubin, 2015; Pearl, 2009), i.e., estimating causal effects of interventions, is a fundamental problem across many domains. In this work, we focus on the estimation of treatment effects, e.g., effects of public policies or a new drug, based on a set of observations consisting of binary labels for treatment / control (non-treated), outcomes, and other covariates. The fundamental difficulty of causal inference is that we never observe counterfactual outcomes, which would have been realized had we made the other decision (treatment or control). While the ideal protocol for causal inference is randomized controlled trials (RCTs), they often have ethical and practical issues, or are prohibitively expensive. Thus, causal inference from observational data is indispensable, though it introduces other challenges. Perhaps the most crucial one is confounding: there might be variables (called confounders) that causally affect both the treatment and the outcome, and spurious correlation follows.
Most works in causal inference rely on the unconfoundedness assumption that appropriate covariates are collected so that confounding can be controlled by conditioning on, or adjusting for, those variables. This is still challenging, due to the systematic difference between the distributions of the covariates in the treatment and control groups. One classical way of dealing with this difference is re-weighting (Horvitz & Thompson, 1952). There are semi-parametric methods with better finite-sample performance, e.g., TMLE (Van der Laan & Rose, 2011), and also non-parametric, tree-based methods, e.g., Causal Forests (CF) (Wager & Athey, 2018). Notably, there is a recent rise of interest in representation learning for causal inference, starting from Johansson et al. (2016). A few lines of work tackle the difficult but important problem of causal inference under unobserved confounding. Without covariates to adjust for, many of them assume special structures among the variables, such as instrumental variables (IVs) (Angrist et al., 1996), proxy variables (Miao et al., 2018), network structure (Ogburn, 2018), and multiple causes (Wang & Blei, 2019). Among them, instrumental variables and proxy (or surrogate) variables are the most commonly exploited. Instrumental variables are not affected by unobserved confounders and influence the outcome only through the treatment. On the other hand, proxy variables are causally connected to unobserved confounders, but do not confound the treatment and outcome by themselves. Other methods use restrictive parametric models (Allman et al., 2009), or only give interval estimation (Manski, 2009; Kallus et al., 2019). In this work, we address the problem of estimating treatment effects under unobserved confounding.
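The classical re-weighting idea (Horvitz & Thompson, 1952) under unconfoundedness can be illustrated on simulated data. All coefficients and the data-generating process below are hypothetical, chosen only to make the bias of the naive contrast visible:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)                        # observed confounder
p_t = 1 / (1 + np.exp(-1.5 * x))              # true propensity e(x)
t = rng.binomial(1, p_t)
y = 2.0 * t + 3.0 * x + rng.normal(size=n)    # true ATE = 2

# Naive contrast is biased: x drives both t and y
naive = y[t == 1].mean() - y[t == 0].mean()

# Horvitz-Thompson / inverse-propensity weighting with the true e(x):
# E[t*y/e(x)] = E[y(1)] and E[(1-t)*y/(1-e(x))] = E[y(0)]
ipw = np.mean(t * y / p_t) - np.mean((1 - t) * y / (1 - p_t))
print(naive, ipw)   # naive is biased upward; ipw is close to 2.0
```

In practice e(x) must itself be estimated, and extreme weights inflate variance, which is what motivates the semi-parametric refinements (e.g. TMLE) cited above.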
We further discuss the individual-level treatment effect, which measures the treatment effect conditioned on the covariate, for example, on a patient's personal data. To model the problem, we regard the covariate as a proxy variable and the confounder as a latent variable in representation learning. Our method particularly exploits the recent advances in identifiability of representation learning for VAEs (Khemakhem et al., 2020). The hallmark of deep neural networks (NNs) might be that they can learn representations of data. It is desirable that the learned representations be interpretable, that is, stand in approximately the same relationship to the latent sources across down-stream tasks. A principled approach to this is identifiability: when optimizing our learning objective w.r.t. the representation function, only a unique optimum will be returned. Our method builds on this and further provides the stronger identifiability of representations that is needed in causal inference. The proposed method is also based firmly on well-established results in causal inference. In many works exploiting proxies, it is assumed that the proxies are independent of the outcome given the confounder (Greenland, 1980; Rothman et al., 2008; Kuroki & Pearl, 2014). This also motivates our method. Further, our method naturally combines a new VAE architecture with the classical result of Rosenbaum & Rubin (1983) regarding the sufficient information for identification of treatment effects, yielding identifiability proofs for both the latent representations and the treatment effects. The main contributions of this paper are as follows: 1) interpretable, causal representation learning via a new VAE architecture for estimating treatment effects under unobserved confounding; 2) theoretical analysis of the identifiability of the representation and the treatment effects; 3) an experimental study on diverse settings showing state-of-the-art performance.
2 RELATED WORK . Identifiability of representation learning . With recent advances in nonlinear ICA, identifiability of representations has been proved under a number of settings, e.g., auxiliary-task representation learning (Hyvärinen & Morioka, 2016; Hyvärinen et al., 2019) and VAE (Khemakhem et al., 2020). Recently, Roeder et al. (2020) extended the result to a wide class of state-of-the-art deep discriminative models. These results are exploited in bivariate causal discovery (Wu & Fukumizu, 2020) and structure learning (Yang et al., 2020). To the best of our knowledge, this work is the first to explore this new possibility in causal inference. Representation learning for causal inference . Recently, researchers have started to design representation learning methods for causal inference, but mostly limited to unconfounded settings. Some methods focus on learning a balanced covariate representation, e.g., BLR/BNN (Johansson et al., 2016) and TARnet/CFR (Shalit et al., 2017). Adding to this, Yao et al. (2018) also exploits the local similarity between data points. Shi et al. (2019) uses an architecture similar to TARnet, considering the importance of the treatment probability. There are also methods using GANs (Yoon et al., 2018, GANITE) and Gaussian processes (Alaa & van der Schaar, 2017). Our method adds to these by also tackling the harder problem of unobserved confounding. Causal inference with auxiliary structures . Both our method and CEVAE (Louizos et al., 2017) are motivated by exploiting proxies and use VAE as a learning method. However, CEVAE assumes a specific causal graph where the covariates must be independent of the treatment given the confounder. Further, CEVAE relies on the assumption that VAE can recover the true latent distribution. Kallus et al. (2018) uses matrix factorization to infer the confounders from proxy variables, and gives a consistent ATE estimator and its error bound.
Miao et al. (2018) established conditions for identification using more general proxies, but without a practical estimation method. Note that two active lines of work in machine learning exist in their own right, exploiting IVs (Hartford et al., 2017) and network structure (Veitch et al., 2019). 3 SETUP AND PRELIMINARIES . 3.1 TREATMENT EFFECTS AND CONFOUNDERS . Following Imbens & Rubin (2015), we begin by introducing the potential outcomes (or counterfactual outcomes) y(t), t = 0, 1. Here y(t) is the outcome we would observe if we applied treatment value t. Note that, for a unit under research, we can observe only one of y(0) or y(1), corresponding to which factual treatment we have applied. This is the fundamental problem of causal inference. We write the expected potential outcome, conditioned on covariate(s) x = x, as µt(x) = E(y(t) | x = x). The estimands in this work are the causal effects, namely the Conditional Average Treatment Effect (CATE) and the Average Treatment Effect (ATE), defined by τ(x) = µ1(x) − µ0(x), ATE = E(τ(x)). (1) CATE can be understood as an individual-level treatment effect if conditioned on high-dimensional and highly diverse covariates. In general, we need three assumptions for identification (Rubin, 2005). There should exist a variable z ∈ Rn satisfying ignorability (y(0), y(1) ⊥ t | z) and positivity (∀z, t : p(t = t | z = z) > 0), together with consistency of counterfactuals (y = y(t) if t = t); see the Appendix for explanations. Then, treatment effects can be identified by: µt(x) = E(E(y(t) | z, x = x)) = E(E(y | z, x = x, t = t)) = ∫ ( ∫ p(y | z, x, t) y dy ) p(z | x) dz. (2) The second equality uses the three conditions. We say that strong ignorability holds when we have both ignorability and positivity.
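The identification formula (2) has a simple discrete analogue: stratify on z, take the treated-minus-control contrast within each stratum, and average over the distribution of z. A minimal sketch with a hypothetical binary confounder (all numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
z = rng.binomial(1, 0.5, n)                       # confounder giving ignorability
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))   # positivity: both probs in (0, 1)
y = t + 2 * z + rng.normal(size=n)                # true ATE = 1

# Naive contrast mixes the treatment effect with the effect of z
naive = y[t == 1].mean() - y[t == 0].mean()

# Discrete analogue of (2): ATE = sum_v [E(y|z=v,t=1) - E(y|z=v,t=0)] p(z=v)
adj = sum((y[(z == v) & (t == 1)].mean()
           - y[(z == v) & (t == 0)].mean()) * np.mean(z == v)
          for v in (0, 1))
print(naive, adj)   # naive is biased (about 2.2 here); adj is close to 1
```

The paper's harder setting is exactly the case where z in this sketch is unobserved, so the inner conditional expectations in (2) cannot be computed directly.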
In this work, we consider unobserved confounding; that is, we assume the existence of confounder(s) z satisfying the three conditions, but z is (partially) unobserved. The following theorem, adapted from Rosenbaum & Rubin (1983), is central to causal inference and we will use it to motivate and justify our method. Theorem 1 (Balancing score) . Let b(z) be a function of the random variable z. Then t ⊥ z | b(z) if and only if f(b(z)) = p(t = 1 | z) := e(z) for some function f (or, more formally, e(z) is b(z)-measurable). Assume further that z satisfies strong ignorability; then so does b(z). Such a function b(z) is called a balancing score (of z). Obviously, the propensity score e(z) := p(t = 1 | z), the propensity of assigning the treatment given z, is a balancing score (with f being the identity function). 3.2 VARIATIONAL AUTOENCODERS . Variational autoencoders (VAEs) (Kingma et al., 2019) are a class of latent variable models with latent variable z, in which the observed variable y is generated by the decoder pθ(y | z). The variational lower bound of the log-likelihood is written as: log p(y) ≥ log p(y) − DKL(q(z | y) ‖ p(z | y)) = E_{z∼q} log pθ(y | z) − DKL(qφ(z | y) ‖ p(z)) =: L_VAE(y; θ, φ), (3) where the encoder qφ(z | y) is introduced to approximate the true posterior p(z | y) and DKL denotes the KL divergence. The decoder pθ and encoder qφ are usually parametrized by NNs. We will omit the parameters θ, φ in the notation when appropriate. Using the reparameterization trick (Kingma & Welling, 2014) and optimizing the evidence lower bound (ELBO) E_{y∼D}(L(y)) over the data D, we train the VAE efficiently. Conditional VAE (CVAE) adds a conditioning variable c to (3) (see the Appendix for details). As mentioned, identifiable VAE (iVAE) (Khemakhem et al., 2020) provides the first identifiability result for VAE, using an auxiliary variable u.
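The balancing property in Theorem 1 can be checked numerically: marginally, z differs between treated and controls, but within thin strata of the propensity score e(z) the two groups have nearly the same z. A small simulation sketch (the logistic propensity and stratum count are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
z = rng.normal(size=n)
e = 1 / (1 + np.exp(-z))        # propensity score e(z), a balancing score
t = rng.binomial(1, e)

# Marginally, z is imbalanced between the groups
raw_gap = z[t == 1].mean() - z[t == 0].mean()

# Within thin strata of e(z), treated and control z distributions match:
# this is the t ⊥ z | e(z) property of Theorem 1
bins = np.quantile(e, np.linspace(0, 1, 51))
idx = np.clip(np.digitize(e, bins) - 1, 0, 49)
gaps = [z[(idx == b) & (t == 1)].mean() - z[(idx == b) & (t == 0)].mean()
        for b in range(50)]
print(raw_gap, np.nanmean(np.abs(gaps)))  # large raw gap, tiny within-stratum gaps
```

The within-stratum gaps shrink further as the strata get thinner, consistent with exact conditional independence given e(z).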
It assumes y ⊥⊥ u | z, that is, p(y|z, u) = p(y|z). The variational lower bound is log p(y|u) ≥ Ez∼q log pf(y|z) − DKL(q(z|y, u) ‖ pT,λ(z|u)) =: L_iVAE(y, u) (4) where y = f(z) + ε, with ε additive noise, and z has an exponential family distribution with sufficient statistics T and parameter λ(u). Note that, unlike CVAE, the decoder does not depend on u, due to the independence assumption. Here identifiability means that the functional parameters (f, T, λ) can be identified (learned) up to a simple transformation.
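For the common case where both the encoder distribution and the prior in the KL terms of (3) and (4) are diagonal Gaussians (a Gaussian prior is one special case of the exponential-family prior in (4)), the KL divergence has a closed form. A generic sketch, not specific to this paper's architecture:

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """DKL( N(mu_q, diag(e^logvar_q)) || N(mu_p, diag(e^logvar_p)) ),
    summed over latent dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Reduces to the standard-normal-prior case of (3) when mu_p = 0, var_p = 1.
print(kl_diag_gaussians(np.array([1.0]), np.zeros(1),
                        np.zeros(1), np.zeros(1)))  # 0.5
```

The KL is zero exactly when the two Gaussians coincide, and grows with the mean shift and variance mismatch.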
The present paper introduces Counterfactual VAE (CFVAE), a generative learning method to estimate treatment effects under a latent unconfoundedness assumption. It builds on variational autoencoders (VAE) to learn causal representations. The authors provide identification results using recent results on nonlinear ICA (Khemakhem et al., 2020). They show that the confounder is identifiable up to an affine transformation.
Identifying Treatment Effects under Unobserved Confounding by Causal Representation Learning
As an important problem of causal inference, we discuss the estimation of treatment effects under the existence of unobserved confounding. By representing the confounder as a latent variable, we propose Counterfactual VAE, a new variant of variational autoencoder, based on recent advances in the identifiability of representation learning. Combining this identifiability with classical identification results of causal inference, under mild assumptions on the generative model and with small noise on the outcome, we theoretically show that the confounder is identifiable up to an affine transformation and that the treatment effects can then be identified. Experiments on synthetic and semi-synthetic datasets demonstrate that our method matches the state-of-the-art, even under settings violating our formal assumptions. 1 INTRODUCTION . Causal inference (Imbens & Rubin, 2015; Pearl, 2009), i.e., estimating the causal effects of interventions, is a fundamental problem across many domains. In this work, we focus on the estimation of treatment effects, e.g., the effects of public policies or a new drug, based on a set of observations consisting of binary labels for treatment / control (non-treated), outcomes, and other covariates. The fundamental difficulty of causal inference is that we never observe counterfactual outcomes, i.e., what would have happened had we made the other decision (treatment or control). While the ideal protocol for causal inference is randomized controlled trials (RCTs), they often have ethical and practical issues, or are prohibitively expensive. Thus, causal inference from observational data is indispensable, though it introduces other challenges. Perhaps the most crucial one is confounding: there might be variables (called confounders) that causally affect both the treatment and the outcome, and spurious correlation follows.
Most works in causal inference rely on the unconfoundedness assumption: that appropriate covariates are collected so that confounding can be controlled by conditioning on, or adjusting for, those variables. This is still challenging, due to the systematic difference between the distributions of the covariates in the treatment and control groups. One classical way of dealing with this difference is re-weighting (Horvitz & Thompson, 1952). There are semi-parametric methods with better finite-sample performance, e.g., TMLE (Van der Laan & Rose, 2011), and also non-parametric, tree-based methods, e.g., Causal Forests (CF) (Wager & Athey, 2018). Notably, there is a recent rise of interest in representation learning for causal inference, starting from Johansson et al. (2016). A few lines of work tackle the difficult but important problem of causal inference under unobserved confounding. Without covariates we can adjust for, many of them assume special structures among the variables, such as instrumental variables (IVs) (Angrist et al., 1996), proxy variables (Miao et al., 2018), network structure (Ogburn, 2018), and multiple causes (Wang & Blei, 2019). Among these, instrumental variables and proxy (or surrogate) variables are the most commonly exploited. Instrumental variables are not affected by unobserved confounders and influence the outcome only through the treatment. Proxy variables, on the other hand, are causally connected to the unobserved confounders, but do not themselves confound the treatment and outcome. Other methods use restrictive parametric models (Allman et al., 2009), or only give interval estimates (Manski, 2009; Kallus et al., 2019). In this work, we address the problem of estimating treatment effects under unobserved confounding.
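As background, the classical re-weighting mentioned above (Horvitz & Thompson, 1952) can be sketched as an inverse-propensity-weighted ATE estimator. Propensities `e` are assumed known here for illustration; in practice they are estimated:

```python
import numpy as np

def ipw_ate(y, t, e):
    """Horvitz-Thompson estimator: E[t*y/e] - E[(1-t)*y/(1-e)]."""
    y, t, e = map(np.asarray, (y, t, e))
    return float(np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e)))

# Toy data with known propensity 0.5 for every unit.
y = np.array([3.0, 1.0, 2.0, 0.0])
t = np.array([1, 0, 1, 0])
e = np.full(4, 0.5)
print(ipw_ate(y, t, e))  # 2.0
```

Weighting each observed outcome by the inverse probability of receiving its factual treatment makes both group averages unbiased for the corresponding potential-outcome means, under strong ignorability.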
We further discuss the individual-level treatment effect, which measures the treatment effect conditioned on the covariate, for example on a patient's personal data. To model the problem, we regard the covariate as a proxy variable and the confounder as a latent variable in representation learning. Our method particularly exploits recent advances in the identifiability of representation learning for VAEs (Khemakhem et al., 2020). The hallmark of deep neural networks (NNs) might be that they can learn representations of data. It is desirable that the learned representations be interpretable, that is, stand in approximately the same relationship to the latent sources across down-stream tasks. A principled approach to this is identifiability: when optimizing our learning objective w.r.t. the representation function, only a unique optimum will be returned. Our method builds on this and further provides the stronger identifiability of representations that is needed in causal inference. The proposed method is also based firmly on well-established results in causal inference. In many works exploiting proxies, it is assumed that the proxies are independent of the outcome given the confounder (Greenland, 1980; Rothman et al., 2008; Kuroki & Pearl, 2014); this also motivates our method. Further, our method naturally combines a new VAE architecture with the classical result of Rosenbaum & Rubin (1983) on the information sufficient for identifying treatment effects, yielding identifiability proofs for both the latent representations and the treatment effects. The main contributions of this paper are as follows: 1) interpretable, causal representation learning via a new VAE architecture for estimating treatment effects under unobserved confounding; 2) theoretical analysis of the identifiability of the representation and the treatment effects; 3) an experimental study on diverse settings showing state-of-the-art performance.
2 RELATED WORK . Identifiability of representation learning. With recent advances in nonlinear ICA, identifiability of representations has been proved under a number of settings, e.g., auxiliary-task representation learning (Hyvärinen & Morioka, 2016; Hyvärinen et al., 2019) and VAEs (Khemakhem et al., 2020). Recently, Roeder et al. (2020) extended the result to a wide class of state-of-the-art deep discriminative models. These results have been exploited in bivariate causal discovery (Wu & Fukumizu, 2020) and structure learning (Yang et al., 2020). To the best of our knowledge, this work is the first to explore this new possibility in causal inference. Representation learning for causal inference. Recently, researchers have started to design representation learning methods for causal inference, but mostly limited to unconfounded settings. Some methods focus on learning a balanced covariate representation, e.g., BLR/BNN (Johansson et al., 2016) and TARnet/CFR (Shalit et al., 2017). Adding to this, Yao et al. (2018) also exploits the local similarity between data points. Shi et al. (2019) uses an architecture similar to TARnet, considering the importance of the treatment probability. There are also methods using GANs (Yoon et al., 2018, GANITE) and Gaussian processes (Alaa & van der Schaar, 2017). Our method adds to these by also tackling the harder problem of unobserved confounding. Causal inference with auxiliary structures. Both our method and CEVAE (Louizos et al., 2017) are motivated by exploiting proxies and use a VAE as the learning method. However, CEVAE assumes a specific causal graph where the covariates are independent of the treatment given the confounder. Further, CEVAE relies on the assumption that the VAE can recover the true latent distribution. Kallus et al. (2018) uses matrix factorization to infer the confounders from proxy variables, and gives a consistent ATE estimator and its error bound.
This paper provides a method for using a VAE with proxy variables to estimate CATE in a model with latent confounding by recovering a conditional distribution over the latent confounders. Building upon results from Khemakhem et.al. 2020, the confounding can be identified if the latent variable is parameterized by an exponential family distribution dependent on the proxy variable, and the outcome variable is an injective function of the latents with (small) additive noise. Conceptually, the paper is similar in goal to Louizos et al. 2017, with the main difference seeming to be a stronger theoretical base.
Contrastive Syn-to-Real Generalization
1 INTRODUCTION . Deep neural networks have pushed the boundaries of many visual recognition tasks. However, their success often hinges on the availability of both training data and labels. Obtaining data and labels can be difficult or expensive in many applications such as semantic segmentation, correspondence, 3D reconstruction, pose estimation, and reinforcement learning. In these cases, learning with synthetic data can greatly benefit the applications, since large amounts of data and labels are available at relatively low cost. For this reason, synthetic training has recently gained significant attention (Wu et al., 2015; Richter et al., 2016; Shrivastava et al., 2017; Savva et al., 2019). Despite many benefits, synthetically trained models often generalize poorly to the real domain due to large domain gaps between synthetic and real images. Limitations of simulation and rendering can lead to degraded synthesis quality, such as aliased boundaries, unrealistic textures, fake appearance, over-simplified lighting conditions, and unreasonable scene layouts. These issues result in domain gaps between synthetic and real images, preventing the synthetically trained models from capturing meaningful representations and limiting their generalization ability on real images. To mitigate these issues, domain generalization and adaptation techniques have been proposed (Li et al., 2017; Pan et al., 2018; Yue et al., 2019). Domain adaptation assumes the availability of target data (labeled, partially labeled, or unlabeled) during training. On the other hand, domain generalization considers zero-shot generalization without seeing the target data of real images, and is therefore more challenging. An illustration of the domain generalization protocol on the VisDA-17 dataset (Peng et al., 2017) is shown in Figure 1. (∗Work done during a research internship with NVIDIA. †Corresponding author.)
Considering that the ImageNet pre-trained representation is widely used for model initialization, recent efforts on domain generalization show that such knowledge can be used to prevent overfitting to the synthetic domain (Chen et al., 2018; 2020c). Specifically, they impose a distillation loss to regularize the distance between the synthetically trained and the ImageNet pre-trained representations, which improves synthetic-to-real generalization. The above approaches still face limitations due to the challenging nature of this problem. Taking a closer look, we observe the following pitfalls in training on synthetic data. First, obtaining photorealistic appearance features at the micro-level, such as texture and illumination, is challenging due to the limits of simulation complexity and rendering granularity. Without special treatment, CNNs tend to be biased towards textures (Geirhos et al., 2019) and suffer from badly learned representations on synthetic data. Second, the common lack of texture and shape variations in synthetic images often leads to collapsed and trivial representations without any diversity. This is unlike training with natural images, where models get sufficiently trained by seeing enough variations. Such a lack of diversity in the representation makes the learned models vulnerable to natural variations in the real world. Summary of contributions and results: • We observe that the diversity of the learned feature embedding plays an important role in synthetic-to-real generalization. We show an example of collapsed representations learned by a synthetic model, in sharp contrast to features learned from real data (Section 2). • Motivated by the above observation, we propose a contrastive synthetic-to-real generalization framework that simultaneously regularizes the synthetically trained representation while promoting the diversity of the learned representation to improve generalization (Section 3.1).
• We further enhance the CSG framework with attentional pooling (A-pool), where feature representations are guided by model attention. This allows the model to localize its attention to semantically more important regions, and thus improves synthetic-to-real generalization (Section 3.4). • We benchmark CSG on various synthetic training tasks including image classification (VisDA-17) and semantic segmentation (GTA5 → Cityscapes). We show that CSG considerably improves the generalization performance without seeing target data. Our best model reaches 64.05% accuracy on VisDA-17, compared to the previous state-of-the-art (Chen et al., 2020c) at 61.1% (Section 4). 2 A MOTIVATING EXAMPLE . We give a motivating example to show the significant differences between the features learned on synthetic and real images. Specifically, we use a ResNet-101 backbone and extract the l2-normalized feature embedding after global average pooling (denoted v̄). We consider the following three models: 1) a model pre-trained on ImageNet, 2) a model trained on the VisDA-17 validation set (real images), and 3) a model trained on the VisDA-17 training set (synthetic images)1. Both 2) and 3) are initialized with ImageNet pre-training and fine-tuned on the 12 classes defined in VisDA-17. [Figure 2: (a) ImageNet pre-trained (real), Es = 0.2541; (b) trained on VisDA-17 validation set (real), Es = 0.3355; (c) trained on VisDA-17 training set (synthetic), Es = 0.4408.] Visualization of feature diversity. We visualize the normalized representations on a 2-dim sphere. A Gaussian kernel with bandwidth estimated by Scott's Rule (Scott, 2015) is applied to estimate the probability density function.
Darker areas have more concentrated features; if the feature space (the 2-dim sphere) is widely covered by dark areas, the features are more diversely placed. In Figure 2, we can see that the ImageNet pre-trained model spans its representations widely over the 2-dim feature space. The model trained on the VisDA-17 validation set can also generate diverse features, although slightly affected by the class imbalance. However, when the model is trained on the training set (synthetic images), the features largely collapse to a narrow subspace, i.e., the model fails to fully leverage the whole feature space. It is clear that training on synthetic images can easily introduce poor bias to the model, and the collapsed representations will fail to generalize to the real domain. Quantitative measurement of feature diversity. Inspired by (Liu et al., 2018), we also quantitatively measure the diversity of the feature embeddings using the following hyperspherical potential energy: Es(v̄i|Ni=1) = Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} es(‖v̄i − v̄j‖) = { Σ_{i≠j} ‖v̄i − v̄j‖^{−s}, s > 0; Σ_{i≠j} log(‖v̄i − v̄j‖^{−1}), s = 0 } (1) where N is the number of examples. The lower the hyperspherical energy (HSE), the more diversely the feature vectors are scattered on the unit sphere. s is the power factor, and we choose s = 0 in this example. The three training strategies exhibit energies of 0.2541, 0.3355, and 0.4408, respectively. This validates that models trained on real images capture diverse features, whereas synthetic training leads the model to a highly collapsed feature space. Remarks. A conclusion can be drawn from the above examples: though assisted with ImageNet initialization, fine-tuning on synthetic images tends to give collapsed features with poor diversity, in sharp contrast to training with real images. This indicates that the diversity of the learned representation could play an important role in synthetic-to-real generalization.
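Equation (1) can be computed directly from a matrix of l2-normalized embeddings; a minimal NumPy sketch:

```python
import numpy as np

def hse(v, s=0):
    """Hyperspherical energy of Eq. (1) over the rows of v
    (rows are assumed l2-normalized embeddings)."""
    n = len(v)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)  # pairwise dists
    off = d[~np.eye(n, dtype=bool)]                             # i != j terms
    return float(np.sum(off**-s if s > 0 else np.log(1.0 / off)))

# Two antipodal unit vectors: both ordered pairs have distance 2,
# so Es = 2 * log(1/2) for s = 0.
v = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(hse(v))  # ≈ -1.386
```

Spreading points apart increases pairwise distances and therefore lowers the energy, matching the statement that lower HSE means more diverse features.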
3 CONTRASTIVE SYNTHETIC-TO-REAL GENERALIZATION . We consider the synthetic-to-real domain generalization problem following the protocols of Chen et al. (2020c). More specifically, the objective is to achieve the best zero-shot generalization on the unseen target domain of real images without having access to them during synthetic training. 3.1 NOTATION AND FRAMEWORK . Our model design considers the following two aspects with a “push and pull” strategy. Pull: Without access to real images, the ImageNet pre-trained model presents the only source of real-domain knowledge that can implicitly guide our training. As a result, we hope to impose some form of similarity between the features obtained by the synthetic model and those of the ImageNet pre-trained one. This helps to overcome the domain gaps caused by the unrealistic appearance of synthetic images. Push: Section 2 shows that synthetic training tends to generate collapsed features, whereas models trained on natural images give diverse ones. We treat this as an inductive bias to improve synthetic training, by pushing the feature embeddings away from each other across different images. The above “push and pull” strategy can be exactly formulated with a contrastive loss. This motivates us to propose a contrastive synthetic-to-real generalization framework, partly inspired by recent popular contrastive learning methods (He et al., 2020). Figure 3(b) illustrates our CSG framework. Specifically, we denote the frozen ImageNet pre-trained model as fe,o and the synthetically trained model as fe, where fe is supervised by the task loss Lsyn for the defined downstream task. We denote the input synthetic image as xa and treat it as an anchor. We treat the embeddings of xa obtained by fe and fe,o as the anchor and positive embeddings, denoting them za and z+, respectively.
Following a typical contrastive approach, we define K negative images {x−1, · · ·, x−K} for every anchor xa, and denote their corresponding embeddings as {z−1, · · ·, z−K}. Similar to the design in (Chen et al., 2020d), we define h/h̃ : RC → Rc as nonlinear projection heads with two MLP layers and a ReLU layer between them. The CSG framework regularizes fe in a contrastive manner: pulling za and z+ closer while pushing za and {z−1, · · ·, z−K} apart. This regularizes the model by preventing its representation from deviating too far from that of a pre-trained ImageNet model, while still encouraging it to learn task-specific information from the synthetic data. Figure 3: (a) Previous work (Chen et al., 2018; 2020c) considers “learning without forgetting”, which minimizes a distillation loss between a synthetic model and an ImageNet pre-trained one (either on features or model parameters) to avoid catastrophic forgetting. (b) The proposed CSG framework with a “push and pull” strategy. Even though it has connections to recent self-supervised contrastive representation learning methods (Oord et al., 2018; Wu et al., 2018; Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; Jiang et al., 2020), our work differs in the following aspects: 1) Self-supervised learning and the addressed task are ill-posed in different manners: the former lacks the constraints from semantic labels, whereas the latter lacks the support of the data distribution. 2) As a result, the motivations for contrastive learning are different.
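The "pull za toward z+, push za away from the K negatives" objective is the standard InfoNCE form used in the contrastive methods cited above; a minimal NumPy sketch (the temperature name `tau` and its value are hypothetical, and embeddings are l2-normalized as in Section 2):

```python
import numpy as np

def info_nce(za, zpos, znegs, tau=0.1):
    """-log[ exp(za·z+/tau) / (exp(za·z+/tau) + sum_k exp(za·z-_k/tau)) ]."""
    za = za / np.linalg.norm(za)
    zpos = zpos / np.linalg.norm(zpos)
    znegs = [z / np.linalg.norm(z) for z in znegs]
    logits = np.array([za @ zpos] + [za @ z for z in znegs]) / tau
    logits -= logits.max()  # numerical stability before exponentiating
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

# Loss is near zero when the positive aligns with the anchor
# and the negative is orthogonal to it.
za = np.array([1.0, 0.0])
print(info_nce(za, np.array([1.0, 0.0]), [np.array([0.0, 1.0])]))  # close to 0
```

Swapping the positive and negative drives the loss up, which is the "push" direction of the objective.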
Our work is also related to the contrastive distillation framework in (Tian et al., 2020a). Again, the two works differ in both task and motivation despite the converging techniques.
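For contrast, the plain distillation baseline of Figure 3(a) — the "pull" without the "push" — can be sketched as a distance between the synthetic model's features and the frozen ImageNet model's features. This is a generic L2 form; the cited works differ in where and how the loss is applied:

```python
import numpy as np

def feature_distillation_loss(v, v_ref):
    """Mean squared distance between current features and the
    frozen ImageNet reference features."""
    v, v_ref = np.asarray(v), np.asarray(v_ref)
    return float(np.mean((v - v_ref) ** 2))

print(feature_distillation_loss([1.0, 2.0], [1.0, 2.0]))  # 0.0
print(feature_distillation_loss([1.0, 2.0], [0.0, 2.0]))  # 0.5
```

Minimizing this term alone keeps the representation near the pre-trained one but, unlike the contrastive loss, does nothing to prevent the feature collapse observed in Section 2.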
Synthetic-to-real generalization is an important topic of extensive practical interest. This paper motivates its work from an observation of the difference in feature diversity between synthetic and real training: synthetic training tends to generate less diverse or even collapsed features, whereas models trained on natural images give much more diverse ones. Based on that, the authors develop a contrastive “push and pull” framework to: (1) keep proximity between learned and ImageNet-pretrained features; and (2) push the feature embeddings away from each other across different images, using the feature diversity observation as the inductive bias.
Contrastive Syn-to-Real Generalization
1 INTRODUCTION . Deep neural networks have pushed the boundaries of many visual recognition tasks . However , their success often hinges on the availability of both training data and labels . Obtaining data and labels can be difficult or expensive in many applications such as semantic segmentation , correspondence , 3D reconstruction , pose estimation , and reinforcement learning . In these cases , learning with synthetic data can greatly benefit the applications since large amounts of data and labels are available at relatively low costs . For this reason , synthetic training has recently gained significant attention ( Wu et al. , 2015 ; Richter et al. , 2016 ; Shrivastava et al. , 2017 ; Savva et al. , 2019 ) . Despite many benefits , synthetically trained models often have poor generalization on the real domain due to large domain gaps between synthetic and real images . Limitations on simulation and rendering can lead to degraded synthesis quality , such as aliased boundaries , unrealistic textures , fake appearance , over-simplified lighting conditions , and unreasonable scene layouts . These issues result in domain gaps between synthetic and real images , preventing the synthetically trained models from capturing meaningful representations and limiting their generalization ability on real images . To mitigate these issues , domain generalization and adaptation techniques have been proposed ( Li et al. , 2017 ; Pan et al. , 2018 ; Yue et al. , 2019 ) . Domain adaptation assumes the availability of target data ( labeled , partially labeled , or unlabeled ) during training . On the other hand , domain generalization considers zero-shot generalization without seeing the target data of real images , and is therefore more challenging . An illustration of the domain generalization protocol on the ∗Work done during the research internship with NVIDIA . †Corresponding author . VisDA-17 dataset ( Peng et al. , 2017 ) is shown in Figure 1 . 
Considering that ImageNet pre-trained representation is widely used as model initialization , recent efforts on domain generalization show that such knowledge can be used to prevent overfitting to the synthetic domain ( Chen et al. , 2018 ; 2020c ) . Specifically , they impose a distillation loss to regularize the distance between the synthetically trained and the ImageNet pre-trained representations , which improves synthetic-to-real generalization . The above approaches still face limitations due to the challenging nature of this problem . Taking a closer look , we observe the following pitfalls in training on synthetic data . First , obtaining photorealistic appearance features at the micro-level , such as texture and illumination , is challenging due to the limits of simulation complexity and rendering granularity . Without special treatment , CNNs tend to be biased towards textures ( Geirhos et al. , 2019 ) and suffer from badly learned representations on synthetic data . Second , the common lack of texture and shape variations on synthetic images often leads to collapsed and trivial representations without any diversity . This is unlike training with natural images where models get sufficiently trained by seeing enough variations . Such a lack of diversity in the representation makes the learned models vulnerable to natural variations in the real world . Summary of contributions and results : • We observe that the diversity of learned feature embedding plays an important role in syntheticto-real generalization . We show an example of collapsed representations learned by a synthetic model , which is in sharp contrast to features learned from real data ( Section 2 ) . • Motivated by the above observation , we propose a contrastive synthetic-to-real generalization framework that simultaneously regularizes the synthetically trained representation while promoting the diversity of the learned representation to improve generalization ( Section 3.1 ) . 
• We further enhance the CSG framework with attentional pooling ( A-pool ) where feature representations are guided by model attention . This allows the model to localize its attention to semantically more important regions , and thus improves synthetic-to-real generalization ( Section 3.4 ) . • We benchmark CSG on various synthetic training tasks including image classification ( VisDA-17 ) and semantic segmentation ( GTA5→ Cityscapes ) . We show that CSG considerably improves the generalization performance without seeing target data . Our best model reaches 64.05 % accuracy on VisDA-17 compared to previous state-of-the-art ( Chen et al. , 2020c ) with 61.1 % ( Section 4 ) . 2 A MOTIVATING EXAMPLE . We give a motivating example to show the significant differences between the features learned on synthetic and real images . Specifically , we use a ResNet-101 backbone and extract the l2 normalized feature embedding after global average pooling ( defined as v̄ ) . We consider the following three models : 1 ) model pre-trained on ImageNet , 2 ) model trained on VisDA-17 validation set ( real images ) , and 3 ) model trained on VisDA-17 training set ( synthetic images ) 1 . Both 2 ) and 3 ) are initialized with ImageNet pre-training , and fine-tuned on the 12 classes defined in VisDA-17 . 1.0 0.5 0.0 0.5 1.0 1.0 0.5 0.0 0.5 1.0 0.0 0.2 0.4 0.6 0.8 1.0 ( a ) ImageNet pre-trained ( real ) . ( Es = 0.2541 ) 1.0 0.5 0.0 0.5 1.0 1.0 0.5 0.0 0.5 1.0 0.0 0.2 0.4 0.6 0.8 1.0 ( b ) Trained on VisDA-17 validation set ( real ) . ( Es = 0.3355 ) 1.0 0.5 0.0 0.5 1.0 1.0 0.5 0.0 0.5 1.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 ( c ) Trained on VisDA-17 training set ( synthetic ) . ( Es = 0.4408 ) Visualization of feature diversity . We visualize the normalized representations on a 2-dim sphere . A Gaussian kernel with bandwidth estimated by Scott ’ s Rule ( Scott , 2015 ) is applied to estimate the probability density function . 
Darker areas have more concentrated features ; when the feature space ( the 2-dim sphere ) is widely covered by dark areas , the features are more diversely placed . In Figure 2 , we can see that the ImageNet pre-trained model widely spans its representations over the 2-dim feature space . The model trained on the VisDA-17 validation set can also generate diverse features , although slightly affected by the class imbalance . However , when the model is trained on the training set ( synthetic images ) , the features largely collapse to a narrow subspace , i.e. , the model fails to fully leverage the whole feature space . It is clear that training on synthetic images can easily introduce a poor bias to the model , and the collapsed representations will fail to generalize to the real domain . Quantitative measurement of feature diversity . Inspired by ( Liu et al. , 2018 ) , we also quantitatively measure the diversity of the feature embeddings using the following hyperspherical potential energy :

E_s ( \{ \bar{v}_i \}_{i=1}^{N} ) = \sum_{i=1}^{N} \sum_{j=1 , j \neq i}^{N} e_s ( \| \bar{v}_i - \bar{v}_j \| ) = \begin{cases} \sum_{i \neq j} \| \bar{v}_i - \bar{v}_j \|^{-s} , & s > 0 \\ \sum_{i \neq j} \log ( \| \bar{v}_i - \bar{v}_j \|^{-1} ) , & s = 0 \end{cases} \quad ( 1 )

where N is the number of examples . The lower the hyperspherical energy ( HSE ) , the more diversely the feature vectors are scattered on the unit sphere . s is the power factor , and we choose s = 0 in this example . The three training strategies exhibit energies of 0.2541 , 0.3355 , and 0.4408 , respectively . This validates that models trained on real images capture diverse features , whereas synthetic training leads the model to a highly collapsed feature space . Remarks . A conclusion can be drawn from the above example : though assisted with ImageNet initialization , fine-tuning on synthetic images tends to give collapsed features with poor diversity , in sharp contrast to training with real images . This indicates that the diversity of the learned representation could play an important role in synthetic-to-real generalization .
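As an illustrative sketch ( not the authors' released code ) , the hyperspherical potential energy of Eq. ( 1 ) can be computed as below for s = 0 . The averaging over ordered pairs is our own choice for scale comparability ; Eq. ( 1 ) itself is a plain sum .

```python
# Illustrative sketch of the hyperspherical potential energy in Eq. (1).
import numpy as np

def hyperspherical_energy(feats, s=0):
    """feats: (N, D) array of raw embeddings; returns the pair-averaged energy Es.
    Lower energy means the normalized features are spread more diversely."""
    v = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # l2-normalize onto the sphere
    dist = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    n = len(v)
    d = dist[~np.eye(n, dtype=bool)]          # all pairwise distances, i != j
    e = d ** (-s) if s > 0 else np.log(1.0 / d)
    return e.sum() / (n * (n - 1))            # average over ordered pairs

# Collapsed features (clustered directions) score higher energy than spread ones,
# mirroring the synthetic-vs-real contrast reported in the text.
rng = np.random.default_rng(0)
spread = rng.normal(size=(64, 16))
collapsed = 1.0 + 0.01 * rng.normal(size=(64, 16))
assert hyperspherical_energy(collapsed) > hyperspherical_energy(spread)
```

This reproduces the qualitative ordering of the paper ( real-trained features yield lower energy than synthetically trained ones ) , though the absolute values depend on the embedding dimension and sample count .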
3 CONTRASTIVE SYNTHETIC-TO-REAL GENERALIZATION . We consider the synthetic-to-real domain generalization problem following the protocols of Chen et al . ( 2020c ) . More specifically , the objective is to achieve the best zero-shot generalization on the unseen target domain real images without having access to them during synthetic training . 3.1 NOTATION AND FRAMEWORK . Our design of the model considers the following two aspects with a “ push and pull ” strategy : Pull : Without access to real images , the ImageNet pre-trained model presents the only source of real domain knowledge that can implicitly guide our training . As a result , we hope to impose some form of similarity between the features obtained by the synthetic model and the ImageNet pre-trained one . This helps to overcome the domain gaps from the unrealistic appearance of synthetic images . Push : Section 2 shows that synthetic training tends to generate collapsed features whereas models trained on natural images produce diverse ones . We treat this as an inductive bias to improve synthetic training , by pushing the feature embeddings away from each other across different images . The above “ push and pull ” strategy can be naturally formulated as a contrastive loss . This motivates us to propose a contrastive synthetic-to-real generalization framework , partly inspired by recent popular contrastive learning methods ( He et al. , 2020 ) . Figure 3 ( b ) illustrates our CSG framework . Specifically , we denote the frozen ImageNet pre-trained model as fe , o and the synthetically trained model as fe , where fe is supervised by the task loss Lsyn for the defined downstream task . We denote the input synthetic image as xa and treat it as an anchor . We treat the embeddings of xa obtained by fe and fe , o as the anchor and positive embeddings , denoting them as za and z+ , respectively .
Following a typical contrastive approach , we define K negative images { x_1^- , · · · , x_K^- } for every anchor x_a , and denote their corresponding embeddings as { z_1^- , · · · , z_K^- } . Similar to the design in ( Chen et al. , 2020d ) , we define h / h̃ : R^C → R^c as the nonlinear projection heads , each with two MLP layers and a ReLU layer between them . The CSG framework regularizes fe in a contrastive manner : pulling z_a and z^+ closer while pushing z_a and { z_1^- , · · · , z_K^- } apart . This regularizes the model by preventing its representation from deviating too far from that of a pre-trained ImageNet model , yet encouraging it to learn task-specific information from the synthetic data . Figure 3 : ( a ) Previous work ( Chen et al. , 2018 ; 2020c ) considers “ learning without forgetting ” , which minimizes a distillation loss between a synthetic model and an ImageNet pre-trained one ( either on features or model parameters ) to avoid catastrophic forgetting . ( b ) The proposed CSG framework with a “ push and pull ” strategy . Even though it has connections to recent self-supervised contrastive representation learning methods ( Oord et al. , 2018 ; Wu et al. , 2018 ; Chen et al. , 2020a ; He et al. , 2020 ; Chen et al. , 2020b ; Jiang et al. , 2020 ) , our work differs in the following aspects : 1 ) Self-supervised learning and the addressed task are ill-posed in different manners - the former lacks the constraints from semantic labels , whereas the latter lacks the support of data distribution . 2 ) As a result , the motivations of contrastive learning are different .
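A minimal numpy sketch of this “ push and pull ” regularizer , written in the InfoNCE form popularized by He et al . ( 2020 ) . The temperature `tau` and the embedding sizes are illustrative assumptions , not the paper's exact settings .

```python
# Hedged sketch of the CSG "push and pull" contrastive regularizer (InfoNCE form).
import numpy as np

def csg_contrastive_loss(z_a, z_pos, z_negs, tau=0.07):
    """z_a: (c,) anchor embedding of x_a from the synthetic model f_e;
    z_pos: (c,) positive embedding of x_a from the frozen ImageNet model f_{e,o};
    z_negs: (K, c) embeddings of the K negative images."""
    unit = lambda z: z / np.linalg.norm(z, axis=-1, keepdims=True)
    z_a, z_pos, z_negs = unit(z_a), unit(z_pos), unit(z_negs)
    pos = z_a @ z_pos / tau                               # pull: anchor toward its ImageNet feature
    logits = np.concatenate(([pos], z_negs @ z_a / tau))  # push: anchor away from negatives
    m = logits.max()                                      # log-sum-exp for numerical stability
    return np.log(np.exp(logits - m).sum()) + m - pos     # cross-entropy, positive as target
```

Minimizing this loss increases the anchor-positive similarity while decreasing anchor-negative similarities , so an anchor aligned with its ImageNet feature incurs a lower loss than a misaligned one .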
Our work is also related to the contrastive distillation framework in ( Tian et al. , 2020a ) . Again , the two works differ in both task and motivation despite the converging techniques .
This paper focuses on the domain generalization problem where the source domain contains synthetic data. An interesting phenomenon is observed in this paper: the diversity of the learned feature embeddings plays an important role in the generalization performance. The paper then presents a method to address the synthetic-to-real generalization problem by combining augmentation, a contrastive loss, and attention pooling techniques. In general, the observed phenomenon is interesting, but the proposed method is not a well-motivated solution. Besides, the researched problem is not a general one, which limits the impact of this paper.
Uniform-Precision Neural Network Quantization via Neural Channel Expansion
1 INTRODUCTION . Deep neural networks ( DNNs ) have reached human-level performance in a wide range of domains including image processing ( He et al . ( 2016 ) ; Tan & Le ( 2019 ) ) , object detection ( Ren et al . ( 2015 ) ; Liu et al . ( 2016 ) ; Tan et al . ( 2020 ) ) , machine translation ( Wu et al . ( 2016 ) ; Devlin et al . ( 2018 ) ) , and speech recognition ( Zhang et al . ( 2016 ) ; Nassif et al . ( 2019 ) ) . However , tremendous computation and memory costs of these state-of-the-art DNNs make them challenging to deploy on resource-constrained devices such as mobile phones , edge sensors , and drones . Therefore , several edge hardware accelerators specifically optimized for intensive DNN computation have emerged , including Google ’ s edge TPU ( Google ( 2019 ) ) and NVIDIA ’ s NVDLA ( NVIDIA ( 2019 ) ) . One of the central techniques innovating these edge DNN accelerators is the quantization of deep neural networks ( QDNN ) . QDNN reduces the complexity of DNN computation by quantizing network weights and activations to low-bit precision . Since the area and energy consumption of the multiply-accumulate ( MAC ) unit can be significantly reduced with the bit-width reduction ( Sze et al . ( 2017 ) ) , thousands of them can be packed in a small area . Therefore , the popular edge DNN accelerators are equipped with densely integrated MAC arrays to boost their performance in compute-intensive operations such as matrix multiplication ( MatMul ) and convolution ( Conv ) . Early studies of QDNN focused on the quantization of weights and activations of MatMul and Conv to the same bit-width ( Hubara et al . ( 2016 ) ; Rastegari et al . ( 2016 ) ; Zhou et al . ( 2016 ) ) . This uniform-precision QDNN gained popularity because it simplifies the dense MAC array design for edge DNN accelerators . However , uniform bit allocation did not account for the properties of individual layers in a network .
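To make uniform-precision quantization concrete , here is a generic b-bit symmetric uniform quantizer sketch ( an illustrative scheme of our own , not the specific quantizer of any work cited above ) :

```python
# Generic b-bit symmetric uniform quantizer: every layer shares the same bit-width.
import numpy as np

def quantize_uniform(w, bits):
    """Map w onto 2**(bits-1) - 1 symmetric levels per sign; returns dequantized values.
    Assumes bits >= 2 so that at least the levels {-1, 0, 1} exist."""
    levels = 2 ** (bits - 1) - 1              # e.g. bits = 2 -> q in { -1 , 0 , 1 }
    scale = np.abs(w).max() / levels          # per-tensor scale from the max magnitude
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale

w = np.array([-0.9, -0.2, 0.05, 0.4, 0.9])
assert len(np.unique(quantize_uniform(w, bits=2))) <= 3        # aggressive 2-bit
assert np.allclose(quantize_uniform(w, bits=8), w, atol=0.01)  # near-lossless 8-bit
```

The 2-bit case shows why ultra-low-bit quantization is challenging : most weight magnitudes collapse onto only three representable values .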
Sakr & Shanbhag ( 2018 ) showed that the optimal bit-precision varies within a neural network from layer to layer . As a result , uniform-precision quantization may lead to sub-optimal inference accuracy for a given network . Mixed-precision networks address this limitation by optimizing bit-widths at each layer . In this approach , the sensitivity of the layer to the quantization error is either numerically estimated ( Zhou et al . ( 2017 ) ; Dong et al . ( 2019 ) ) or automatically explored under the framework of neural architecture search ( NAS , Wang et al . ( 2019 ) ; Elthakeb et al . ( 2018 ) ) to allocate bit-precision properly . However , mixed-precision representation requires specific variable precision support in hardware , restricting computation units ’ density and power efficiency ( Camus et al . ( 2019 ) ) . Therefore , mixed-precision support imposes a significant barrier for the low-profile edge accelerators with stringent hardware constraints . In this work , we propose a novel NAS-based , hardware-friendly DNN quantization method that addresses the layer-wise heterogeneous sensitivity under uniform-precision quantization . The proposed method explores the network structure in terms of the number of channels . Different from previous work that only includes pruning of channels in its search space ( Dong & Yang ( 2019 ) ) , we further incorporate the expansion of channels , hence the name neural channel expansion ( NCE ) . During an NCE search , search parameters associated with different numbers of channels are updated based on each layer ’ s sensitivity to uniform-precision quantization and the hardware constraints ; the more sensitive a layer is to quantization errors , the larger the number of channels preferred in that layer . When the preference for a larger number of channels in a layer exceeds a certain threshold , we expand the channels in that layer ’ s search space so that a larger number of channels can be explored .
Therefore , NCE allows both pruning and expansion of each layer ’ s channels , finding the sweet-spot for the trade-off between the robustness against the quantization error and the hardware cost . We analytically and empirically demonstrate that NCE can adequately facilitate the search to adapt the target model ’ s structure for better quantization accuracy . The experimental results on CIFAR10 and ImageNet show that the network structures adapted from the popular convolutional neural networks ( CNNs ) achieve superior accuracy when the challenging 2-bit quantization is uniformly applied to MatMul and Conv layers . In particular , we achieve the best-to-date accuracy of 74.03/91.63 % ( Top-1/Top-5 ) for NCE-ResNet50 on ImageNet with slightly lower FLOPs and 30 % reduced number of parameters . Our contributions can be summarized as follows : • We propose a new NAS-based quantization algorithm called neural channel expansion ( NCE ) , which is equipped with a simple yet innovative channel expansion mechanism to balance the number of channels across the layers under uniform-precision quantization . • We provide an in-depth analysis of NCE , shedding light on understanding the impact of channel expansion for compensation of quantization errors . • We demonstrate that the proposed method can adapt the structure of target neural networks to significantly improve the quantization accuracy . 2 RELATED WORK . Neural architecture search : The goal of NAS is to find a network architecture that can achieve the best test accuracy . Early studies ( Zoph & Le ( 2016 ) ; Zoph et al . ( 2018 ) ) often employed meta-learners such as reinforcement learning ( RL ) agents to learn the policy for accurate network architectures . However , RL-based approaches may incur prohibitive search costs ( e.g. , thousands of GPU hours ) . As a relaxation , differentiable neural architecture search ( DNAS ) has been proposed ( Liu et al . 
( 2018 ) ) , which updates the search parameters and the weights via bi-level optimization . Recent DNAS approaches considered hardware constraints such as latency , the number of parameters , and FLOPs so that the search explored the trade-off between the cross-entropy loss and the hardware constraint loss . This search resulted in the discovery of light-weight models . As an example , Dong & Yang ( 2019 ) explored channel pruning that satisfies the target hardware constraints . In this work , we adopt the successful NAS framework in the domain of QDNN , for which we devise a novel channel expansion search to robustify networks against the quantization errors . Low-precision quantization of deep neural network : QDNN has been actively studied in the literature . Early work on QDNN ( Hubara et al . ( 2016 ) ; Rastegari et al . ( 2016 ) ; Zhou et al . ( 2016 ) ) introduced the concept of a straight-through estimator ( STE ) for the approximation of gradients of the non-differentiable rounding operation . This approximation enabled uniform-precision ( 1- or multi-bit ) quantization during the model training procedure , which fine-tunes the weight parameters towards lower training loss . QDNN techniques have evolved to adaptively find the quantization step size ( Choi et al . ( 2018 ) ; Zhang et al . ( 2018 ) ; Jung et al . ( 2019 ) ; Esser et al . ( 2020 ) ) , which significantly enhanced the accuracy of the uniform-precision quantization . However , this line of research lacks consideration of the heterogeneous quantization sensitivity for individual layers in a network . On the other hand , mixed-precision quantization allows layer-specific bit-precision optimization ; the higher bit-precision is assigned to the more quantization sensitive layers . Zhou et al . ( 2017 ) ; Dong et al . ( 2019 ) numerically estimated the sensitivity via approximating the impact of quantization errors on model prediction accuracy . Wang et al . ( 2019 ) ; Elthakeb et al . 
( 2018 ) employed a reinforcement learning framework to learn the bit-allocation policy . Wu et al . ( 2018 ) ; Cai & Vasconcelos ( 2020 ) adopted DNAS with various bit-precision operators in the search space . However , mixed-precision representation requires specific variable precision support in hardware , restricting computation units ’ density and power efficiency ( Camus et al . ( 2019 ) ) . Therefore , mixed-precision support imposes a significant barrier for the low-profile edge accelerators with stringent hardware constraints . In this work , we exploit NAS to address the layer-wise heterogeneous sensitivity under uniform-precision quantization . Channel expansion for accurate DNN quantization : Researchers have actively studied channel expansion for accurate DNN quantization . Pioneering work by Mishra et al . ( 2018 ) ( WRPN ) demonstrated that an increased number of channels during training helped regain QDNN accuracy , but this work lacks a discussion of the detailed mechanism by which channel expansion compensates for the quantization error . Zhao et al . ( 2019 ) and Park & Choi ( 2019 ) further attempted to split the channels with large-magnitude weights in pre-trained models . This channel splitting reduced the dynamic range of weights to be represented with lower bit-precision ( 6-8 bits ) . Regarding the control over the dynamic range , Meller et al . ( 2019 ) also adjusted the scale factors of the weight parameters after training to balance the dynamic range across the layers . However , these approaches focused on a numerical remedy for the quantization of pre-trained models ( with relatively high bit-precision ) . Thus , it is not straightforward to extend their work to quantization-aware training , which is necessary for ultra-low-bit QDNN . We provide insights with empirical support that the structure with the channel-expanded layers itself matters , reducing the dynamic range of activations during quantization-aware training .
These exciting insights become a pivotal motivation for us to explore channel expansion in the NAS framework . 3 NEURAL CHANNEL EXPANSION . In this section , we explain the details of our neural channel expansion method . Similar to TAS ( Dong & Yang ( 2019 ) ) , we construct the search space over the number of channels C = { 1 : c_{out} } with the search parameters α ∈ R^{|C|} . Then the output activation is computed as the weighted sum of sampled activations with different numbers of channels , aligned via channel-wise interpolation ( CWI ) :

\hat{O} = \sum_{j \in I} \mathrm{Softmax} ( \alpha_j ; \{ \alpha_k \}_{k \in I} ) \times \mathrm{CWI} ( O_{1 : C_j} , \max \{ c_{out}^k \}_{k \in I} ) , \quad ( 1 )

where the output activation O_j , 1 ≤ j ≤ c_{out} , is computed as O_j = \sum_{k=1}^{c_{in}} Q ( X_{ \{ k , : , : \} } ) * Q ( W_{ \{ j , k , : , : \} } ) with input activation X and weight parameters W quantized by the quantizer Q , and I is the sampled subset of C . During the search , the search parameters are updated via channel selection based on the trade-off between the cross-entropy loss and the hardware constraint loss ( e.g. , FLOPs ) . In TAS , the number of channels ( |C| ) is fixed , limiting the exploration scope to pruning . In NCE , we enable channel expansion of individual layers when the search parameter associated with the maximum number of channels exceeds a certain threshold . The intuition is that if one layer is susceptible to quantization errors , its search parameters are updated toward the preference for a larger number of channels to decrease the cross-entropy loss . With this simple expansion condition , we can expand channels in those layers affected most by the quantization errors and prune channels of the other layers that are robust to quantization ; therefore , the overall hardware constraints are met . Algorithm 1 summarizes the overall procedure . NCE consists of three phases : warm-up , search , and train . As advocated by Wu et al . ( 2018 ) and Bender et al .
( 2020 ) , we first perform a warm-up of the entire super-net so that all the super-net weight parameters can be reasonably initialized . The search phase consists of the iterative updates of the weights ( w ) and the search parameters ( α ) via bi-level optimization . The updated search parameter associated with the maximum number of channels is compared with a threshold ( pre-determined as a hyper-parameter ) to identify whether a layer needs a channel expansion . When a channel expansion happens ( = Expand ) , additional weight parameters are added to that layer ( and the search parameter is also copied ) , increasing the number of channels . Once the search is done , the candidate model is derived by the `` winner-takes-all '' strategy ; i.e. , for each layer , the number of channels with the largest-magnitude search parameter is selected .

Algorithm 1 : Neural Channel Expansion
Input : split the training set into two disjoint sets Dweight and Darch ( n ( Dweight ) = n ( Darch ) )
Search parameters : { αl1 , αl2 , ... , αln } ∈ Al , { A1 , A2 , ... , AL } ⊂ A , L = number of layers
Expand threshold : T
1 : for each warm-up epoch do
2 :     sample batch data Dw from Dweight and a network from A ∼ U ( 0 , 1 )
3 :     calculate Lossweight on Dw to update the network weights
4 : end for
5 : for each search epoch do
6 :     sample batch data Dw from Dweight and a network from Softmax ( A )
7 :     calculate Lossweight on Dw to update the network weights
8 :     sample batch data Da from Darch and a network from Softmax ( A )
9 :     calculate Lossarch on Da to update A
10 :    for each layer l do
11 :        j ← |Al|
12 :        if Softmax ( αlj ; { αlk } k∈j ) ≥ T then
13 :            expand the search space ( add αlj+1 )
14 :            αlj+1 ← αlj # copy the search parameter
15 :        end if
16 :    end for
17 : end for
18 : derive the searched network from A
19 : randomly initialize the searched network and optimize it on the training set
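The channel-weighted aggregation of Eq. ( 1 ) and the expansion test from the search loop can be sketched as below . CWI is approximated here with nearest-neighbor channel interpolation , and `candidates` is assumed to be sorted by increasing channel count so the last entry is the widest ; both are illustrative assumptions , not the paper's exact implementation .

```python
# Minimal sketch of NCE's channel-weighted aggregation (Eq. (1)) and expansion check.
import numpy as np

def cwi(o, c_max):
    """Align an output of shape (c, H, W) to c_max channels (nearest-neighbor)."""
    idx = np.round(np.linspace(0, o.shape[0] - 1, c_max)).astype(int)
    return o[idx]

def nce_output(candidates, alpha, threshold=0.9):
    """candidates: list of (c_j, H, W) outputs for the sampled channel counts,
    sorted by increasing channel count; alpha: one search parameter per candidate.
    Returns the mixed output and whether the layer's search space should expand."""
    w = np.exp(alpha - alpha.max())
    w /= w.sum()                                  # softmax over sampled candidates
    c_max = max(o.shape[0] for o in candidates)
    mixed = sum(wj * cwi(o, c_max) for wj, o in zip(w, candidates))
    expand = bool(w[-1] >= threshold)             # widest candidate dominates -> expand
    return mixed, expand

a, b = np.ones((2, 4, 4)), 2 * np.ones((4, 4, 4))
mixed, expand = nce_output([a, b], np.array([0.0, 5.0]))
assert mixed.shape == (4, 4, 4) and expand        # preference for the widest candidate
```

When the softmax weight of the widest candidate crosses the threshold , a wider candidate would be appended to the layer's search space and its search parameter copied , mirroring lines 12-14 of Algorithm 1 .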
The authors propose neural channel expansion (NCE), a neural architecture search (NAS) and quantization method. Existing NAS+Q methods typically search for the architecture of the DNN along with the precision at each layer, maximizing accuracy while respecting some kind of hardware constraint. The result is a DNN with mixed precision, which is challenging for most existing hardware (which only supports one or a few precisions). NCE keeps the precision the same in each layer, and instead uses the precision-sensitivity signal in the NAS to adjust the width of the layer (expand or shrink). The result is a uniform-precision, hardware-friendly DNN.
Uniform-Precision Neural Network Quantization via Neural Channel Expansion
1 INTRODUCTION . Deep neural networks ( DNNs ) have reached human-level performance in a wide range of domains including image processing ( He et al . ( 2016 ) ; Tan & Le ( 2019 ) ) , object detection ( Ren et al . ( 2015 ) ; Liu et al . ( 2016 ) ; Tan et al . ( 2020 ) ) , machine translation ( Wu et al . ( 2016 ) ; Devlin et al . ( 2018 ) ) , and speech recognition ( Zhang et al . ( 2016 ) ; Nassif et al . ( 2019 ) ) . However , tremendous computation and memory costs of these state-of-the-art DNNs make them challenging to deploy on resourceconstrained devices such as mobile phones , edge sensors , and drones . Therefore , several edge hardware accelerators specifically optimized for intensive DNN computation have emerged , including Google ’ s edge TPU ( Google ( 2019 ) ) and NVIDIA ’ s NVDLA ( NVIDIA ( 2019 ) ) . One of the central techniques innovating these edge DNN accelerators is the quantization of deep neural networks ( QDNN ) . QDNN reduces the complexity of DNN computation by quantizing network weights and activations to low-bit precision . Since the area and energy consumption of the multiplyaccumulate ( MAC ) unit can be significantly reduced with the bit-width reduction ( Sze et al . ( 2017 ) ) , thousands of them can be packed in a small area . Therefore , the popular edge DNN accelerators are equipped with densely integrated MAC arrays to boost their performance in compute-intensive operations such as matrix multiplication ( MatMul ) and convolution ( Conv ) . Early studies of QDNN focused on the quantization of weights and activations of MatMul and Conv to the same bit-width ( Hubara et al . ( 2016 ) ; Rastegari et al . ( 2016 ) ; Zhou et al . ( 2016 ) ) . This uniformprecision QDNN gained popularity because it simplifies the dense MAC array design for edge DNN accelerators . However , uniform bit allocation did not account for the properties of individual layers in a network . 
Sakr & Shanbhag ( 2018 ) showed that the optimal bit-precision varies within a neural network from layer to layer . As a result , uniform-precision quantization may lead to sub-optimal inference accuracy for a given network . Mixed-precision networks address this limitation by optimizing bit-widths at each layer . In this approach , the sensitivity of the layer to the quantization error is either numerically estimated ( Zhou et al . ( 2017 ) ; Dong et al . ( 2019 ) ) or automatically explored under the framework of neural architecture search ( NAS , Wang et al . ( 2019 ) ; Elthakeb et al . ( 2018 ) ) to allocate bit-precision properly . However , mixed-precision representation requires specific variable precision support in hardware , restricting computation units ’ density and power efficiency ( Camus et al . ( 2019 ) ) . Therefore , mixed-precision support imposes a significant barrier for the low-profile edge accelerators with stringent hardware constraints . In this work , we propose a novel NAS based hardware-friendly DNN quantization method that can address the layer-wise heterogeneous sensitivity under uniform-precision quantization . The proposed method explores network structure in terms of the number of channels . Different from the previous work that only includes pruning of the channels in its search space ( Dong & Yang ( 2019 ) ) , we further incorporate the expansion of the channels , thus called neural channel expansion ( NCE ) . During a search of NCE , search parameters associated with different numbers of channels are updated based on each layer ’ s sensitivity to the uniform-precision quantization and the hardware constraints ; the more sensitive to quantization errors , the larger number of channels preferred in that layer . When the preference to the larger number of channels in a layer exceeds a certain threshold , we expand the channels in that layer ’ s search space so that the more number of channels can be explored . 
Therefore , NCE allows both pruning and expansion of each layer ’ s channels , finding the sweet-spot for the trade-off between the robustness against the quantization error and the hardware cost . We analytically and empirically demonstrate that NCE can adequately facilitate the search to adapt the target model ’ s structure for better quantization accuracy . The experimental results on CIFAR10 and ImageNet show that the network structures adapted from the popular convolutional neural networks ( CNNs ) achieve superior accuracy when the challenging 2-bit quantization is uniformly applied to MatMul and Conv layers . In particular , we achieve the best-to-date accuracy of 74.03/91.63 % ( Top-1/Top-5 ) for NCE-ResNet50 on ImageNet with slightly lower FLOPs and 30 % reduced number of parameters . Our contributions can be summarized as follows : • We propose a new NAS-based quantization algorithm called neural channel expansion ( NCE ) , which is equipped with a simple yet innovative channel expansion mechanism to balance the number of channels across the layers under uniform-precision quantization . • We provide an in-depth analysis of NCE , shedding light on understanding the impact of channel expansion for compensation of quantization errors . • We demonstrate that the proposed method can adapt the structure of target neural networks to significantly improve the quantization accuracy . 2 RELATED WORK . Neural architecture search : The goal of NAS is to find a network architecture that can achieve the best test accuracy . Early studies ( Zoph & Le ( 2016 ) ; Zoph et al . ( 2018 ) ) often employed meta-learners such as reinforcement learning ( RL ) agents to learn the policy for accurate network architectures . However , RL-based approaches may incur prohibitive search costs ( e.g. , thousands of GPU hours ) . As a relaxation , differentiable neural architecture search ( DNAS ) has been proposed ( Liu et al . 
( 2018 ) ) , which updates the search parameters and the weights via bi-level optimization . Recent DNAS approaches considered hardware constraints such as latency , the number of parameters , and FLOPs so that the search explored the trade-off between the cross-entropy loss and the hardware constraint loss . This search resulted in the discovery of light-weight models . As an example , Dong & Yang ( 2019 ) explored channel pruning that satisfies the target hardware constraints . In this work , we adopt the successful NAS framework in the domain of QDNN , for which we devise a novel channel expansion search to robustify networks against the quantization errors . Low-precision quantization of deep neural network : QDNN has been actively studied in the literature . Early work on QDNN ( Hubara et al . ( 2016 ) ; Rastegari et al . ( 2016 ) ; Zhou et al . ( 2016 ) ) introduced the concept of a straight-through estimator ( STE ) for the approximation of gradients of the non-differentiable rounding operation . This approximation enabled uniform-precision ( 1- or multi-bit ) quantization during the model training procedure , which fine-tunes the weight parameters towards lower training loss . QDNN techniques have evolved to adaptively find the quantization step size ( Choi et al . ( 2018 ) ; Zhang et al . ( 2018 ) ; Jung et al . ( 2019 ) ; Esser et al . ( 2020 ) ) , which significantly enhanced the accuracy of the uniform-precision quantization . However , this line of research lacks consideration of the heterogeneous quantization sensitivity for individual layers in a network . On the other hand , mixed-precision quantization allows layer-specific bit-precision optimization ; the higher bit-precision is assigned to the more quantization sensitive layers . Zhou et al . ( 2017 ) ; Dong et al . ( 2019 ) numerically estimated the sensitivity via approximating the impact of quantization errors on model prediction accuracy . Wang et al . ( 2019 ) ; Elthakeb et al . 
( 2018 ) employed a reinforcement learning framework to learn the bit-allocation policy . Wu et al . ( 2018 ) ; Cai & Vasconcelos ( 2020 ) adopted DNAS with the various bit-precision operators in the search space . However , mixed-precision representation requires specific variable precision support in hardware , restricting computation units ’ density and power efficiency ( Camus et al . ( 2019 ) ) . Therefore , mixed-precision support imposes a significant barrier for the low-profile edge accelerators with stringent hardware constraints . In this work , we exploit NAS to address the layer-wise heterogeneous sensitivity under uniform-precision quantization . Channel expansion for accurate DNN quantization : Researchers have actively studied channel expansion for accurate DNN quantization . Pioneering work by Mishra et al . ( 2018 ) ( WRPN ) demonstrated that an increased number of channels during the training helped regain QDNN accuracy , but this work lack discussion about the detailed mechanism of channel expansion compensating the quantization error . Zhao et al . ( 2019 ) and Park & Choi ( 2019 ) further attempted to split the channels with large magnitude weights in the pre-trained models . This channel splitting reduced the dynamic range of weights to be represented with lower bit-precision ( 6-8-bits ) . Regarding the control over the dynamic range , Meller et al . ( 2019 ) also adjusted the scale factors of the weight parameters after training to balance the dynamic range across the layers . However , these approaches focused on the numerical remedy for quantization of pre-trained models ( with relatively high bit-precision ) . Thus , it not straightforward to extend their work for quantization-aware training , which is necessary for ultra-low bit QDNN . We provide insights with empirical supports that the structure with the channel expanded layers itself matters , reducing the dynamic range of activation during the quantization-aware training . 
These exciting insights become a pivotal motivation for us to explore channel expansion in the NAS framework . 3 NEURAL CHANNEL EXPANSION . In this section , we explain the detail of our neural channel expansion method . Similar to TAS ( Dong & Yang ( 2019 ) ) , we construct the search space over the number of channels C = { 1 : cout } with the search parameters α ∈ R|C| . Then the output activation is computed as the weighted sum of sampled activations with a different number of channels aligned via channel-wise interpolation ( CWI ) : Ô = ∑ j∈I Softmax ( αj ; { αk } k∈I ) × CWI ( O1 : Cj , max { ckout } k∈I ) , ( 1 ) where output activation Oj:1≤j≤cout = ∑cin k=1Q ( X { k , : , : } ) ∗ Q ( W { j , k , : , : } ) is computed with input activation X and weight parameters W quantized by the quantizer Q , and I is the sampled subset of C. During the search , the search parameters are updated via channel selection based on the trade-off between the cross-entropy loss and the hardware constraint loss ( e.g. , FLOPs ) . In TAS , the number of channels ( |C| ) is fixed , limiting the exploration scope to the pruning . In NCE , we enable channel expansion of individual layers when the search parameter associated with the maximum number of channels exceeds a certain threshold . The intuition is that if one layer is susceptible to the quantization errors , its search parameters are updated toward the preference for a larger number of channels to decrease the cross-entropy loss . With this simple expansion condition , we can expand channels to those layers affected most by the quantization errors and prune channels of the other layers robust to quantization ; therefore , the overall hardware constraints are met . Algorithm 1 summarizes the overall procedure . NCE consists of three phases : warm-up , search , and train . As advocated by Wu et al . ( 2018 ) and Bender et al . 
( 2020 ) , we first perform a warm-up of the entire super-net so that all of its weight parameters are reasonably initialized . The search phase consists of iterative updates of the weights ( w ) and the search parameters ( α ) via bi-level optimization . The updated search parameter associated with the maximum number of channels is compared with a threshold ( pre-determined as a hyper-parameter ) to decide whether each layer needs a channel expansion . When a channel expansion happens ( = Expand ) , additional weight parameters are added to that layer ( and the search parameter is also copied ) , increasing the number of channels . Once the search is done , the candidate model is derived by the `` winner-takes-all '' strategy ; i.e. , for each layer , the number of channels with the largest-magnitude search parameter is selected .

Algorithm 1 : Neural Channel Expansion
Input : split the training set into two disjoint sets Dweight and Darch ( n(Dweight) = n(Darch) )
Search parameters : { α^l_1 , α^l_2 , ... , α^l_n } ∈ A^l , { A^1 , A^2 , ... , A^L } ⊂ A , L = number of layers
Expand threshold : T
1:  for each warm-up epoch do
2:      sample batch data Dw from Dweight and a network from A ∼ U(0, 1)
3:      calculate Loss_weight on Dw to update the network weights
4:  end for
5:  for each search epoch do
6:      sample batch data Dw from Dweight and a network from Softmax(A)
7:      calculate Loss_weight on Dw to update the network weights
8:      sample batch data Da from Darch and a network from Softmax(A)
9:      calculate Loss_arch on Da to update A
10:     for each layer l do
11:         j ← |A^l|
12:         if Softmax( α^l_j ; { α^l_k }_{k ≤ j} ) ≥ T then
13:             expand the search space ( add α^l_{j+1} )
14:             α^l_{j+1} ← α^l_j   # copy the search parameter
15:         end if
16:     end for
17: end for
18: derive the searched network from A
19: randomly initialize the searched network and optimize it on the training set
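As a rough plain-Python sketch (our own illustration, not the authors' implementation; the channel-wise interpolation is replaced by a simple nearest-neighbor stand-in), the softmax-weighted aggregation of Eq. (1) and the expansion condition of lines 10-15 in Algorithm 1 might look like:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of search parameters
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cwi(channels, target):
    # stand-in for channel-wise interpolation (CWI): align a per-channel
    # vector to `target` channels by nearest-neighbor indexing
    c = len(channels)
    return [channels[min(int(i * c / target), c - 1)] for i in range(target)]

def mixed_output(activations, alphas):
    # Eq. (1): softmax-weighted sum of sampled activations with different
    # channel counts, aligned to the largest sampled count
    target = max(len(a) for a in activations)
    weights = softmax(alphas)
    out = [0.0] * target
    for w, act in zip(weights, activations):
        out = [o + w * x for o, x in zip(out, cwi(act, target))]
    return out

def maybe_expand(alphas, threshold):
    # lines 10-15 of Algorithm 1: if the softmax weight of the largest
    # channel choice reaches the threshold T, append a new choice whose
    # search parameter is copied from the current maximum
    if softmax(alphas)[-1] >= threshold:
        alphas = alphas + [alphas[-1]]
    return alphas

# two candidates (2 and 4 channels) with equal search parameters
print(mixed_output([[1.0, 1.0], [2.0, 2.0, 2.0, 2.0]], [0.0, 0.0]))
# a layer whose largest-channel option dominates gets expanded
print(len(maybe_expand([0.0, 0.0, 3.0], threshold=0.5)))  # 4
```

The threshold and toy activations are illustrative only; the paper's actual super-net operates on quantized convolution outputs.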
In this paper, the authors propose neural channel expansion (NCE) to adjust the network structure to compensate for the performance degradation from uniform-precision quantization. Given a hardware constraint, the proposed NCE selectively expands the width of the quantization-sensitive layers. Experiments on CIFAR-10 and ImageNet show the effectiveness of the proposed NCE. However, the novelty of this paper is limited since the proposed method is an extension of TAS. Besides, comparisons between NCE and existing methods are missing. My detailed comments are as follows.
Improved Denoising Diffusion Probabilistic Models
1 INTRODUCTION . Sohl-Dickstein et al . ( 2015 ) introduced diffusion probabilistic models ( `` diffusion models '' for brevity ) , a class of generative models which match a data distribution by learning to reverse a gradual , multi-step noising process . More recently , Ho et al . ( 2020 ) showed an equivalence between these models and score based generative models ( Song & Ermon , 2019 ; 2020 ) , which learn a gradient of the log-density of the data distribution using denoising score matching ( Hyvärinen , 2005 ) . It has recently been shown that this class of models can produce high-quality images ( Ho et al. , 2020 ; Song & Ermon , 2020 ; Jolicoeur-Martineau et al. , 2020 ) and audio ( Chen et al. , 2020b ; Kong et al. , 2020 ) , but it has yet to be shown that diffusion models can achieve competitive log-likelihoods . Furthermore , while Ho et al . ( 2020 ) showed extremely good results on the CIFAR-10 ( Krizhevsky , 2009 ) and LSUN ( Yu et al. , 2015 ) datasets , it is unclear how well diffusion models scale to datasets with higher diversity such as ImageNet . Finally , while Chen et al . ( 2020b ) found that diffusion models can efficiently generate audio using a small number of sampling steps , it has yet to be shown that the same is true for images . In this paper , we show that diffusion models can achieve competitive log-likelihoods while maintaining good sample quality , even on high-diversity datasets like ImageNet . Additionally , we show that our improved models can produce competitive samples an order of magnitude faster than those from Ho et al . ( 2020 ) . We achieve these results by combining a simple reparameterization of the reverse process variance , a hybrid learning objective that combines the variational lower-bound with the simplified objective from Ho et al . ( 2020 ) , and a novel noise schedule which allows the model to better leverage the entire diffusion process . 
Surprisingly , we find that , with our hybrid objective , our models obtain better log-likelihoods than those obtained by optimizing the log-likelihood directly , and discover that the latter objective has much more gradient noise during training . We show that a simple importance sampling technique reduces this noise and allows us to achieve better log-likelihoods than with the hybrid objective . Using our trained models , we study how sample quality and log-likelihood change as we adjust the number of diffusion steps used at sampling time . We demonstrate that our improved models allow us to use an order of magnitude fewer steps at test time with only a modest change in sample quality and log-likelihood , thus speeding up sampling for use in practical applications . Finally , we evaluate the performance of these models as we increase model size , and observe trends that suggest predictable improvements in performance as we increase training compute . 2 DENOISING DIFFUSION PROBABILISTIC MODELS . We briefly review the formulation of diffusion models from Ho et al . ( 2020 ) . This formulation makes various simplifying assumptions , such as a fixed noising process q which adds diagonal Gaussian noise at each timestep . For a more general derivation , see Sohl-Dickstein et al . ( 2015 ) . 2.1 DEFINITIONS . Given a data distribution x_0 ∼ q(x_0) , we define a forward noising process q which produces latents x_1 through x_T by adding Gaussian noise at time t with variance β_t ∈ (0, 1) as follows :

q(x_1, ..., x_T | x_0) := ∏_{t=1}^{T} q(x_t | x_{t−1})  ( 1 )
q(x_t | x_{t−1}) := N( x_t ; √(1 − β_t) x_{t−1} , β_t I )  ( 2 )

Given a sufficiently large T and a well-behaved schedule of β_t , the latent x_T is nearly an isotropic Gaussian distribution . Thus , if we knew the exact reverse distribution q(x_{t−1} | x_t) , we could sample x_T ∼ N(0, I) and run the process in reverse to get a sample from q(x_0) .
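As a small numerical sanity check (our own sketch with an assumed linear β_t schedule, not taken from the paper), one can verify that the signal coefficient of x_0 decays toward zero after many forward steps, so x_T is dominated by noise:

```python
import math

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    # an assumed linear variance schedule, for illustration only
    return [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]

def signal_and_noise_after(betas):
    # iterating q(x_t | x_{t-1}) = N(sqrt(1 - b_t) x_{t-1}, b_t I) gives
    # x_T = (prod_t sqrt(1 - b_t)) x_0 + noise of variance 1 - prod_t (1 - b_t)
    prod = 1.0
    for b in betas:
        prod *= 1.0 - b
    return math.sqrt(prod), 1.0 - prod

scale, var = signal_and_noise_after(linear_beta_schedule(1000))
print(scale < 0.05, var > 0.99)  # True True: x_T is essentially pure noise
```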
However , since q(x_{t−1} | x_t) depends on the entire data distribution , we approximate it using a neural network :

pθ(x_{t−1} | x_t) := N( x_{t−1} ; µθ(x_t, t) , Σθ(x_t, t) )  ( 3 )

The combination of q and p is a variational auto-encoder ( Kingma & Welling , 2013 ) , and we can write the variational lower bound ( VLB ) as follows :

L_vlb := L_0 + L_1 + ... + L_{T−1} + L_T  ( 4 )
L_0 := − log pθ(x_0 | x_1)  ( 5 )
L_{t−1} := D_KL( q(x_{t−1} | x_t, x_0) || pθ(x_{t−1} | x_t) )  ( 6 )
L_T := D_KL( q(x_T | x_0) || p(x_T) )  ( 7 )

Aside from L_0 , each term of Equation 4 is a KL divergence between two Gaussian distributions and can thus be evaluated in closed form . To evaluate L_0 for images , we assume that each color component is divided into 256 bins , and we compute the probability of pθ(x_0 | x_1) landing in the correct bin ( which is tractable using the CDF of the Gaussian distribution ) . Also note that while L_T does not depend on θ , it will be close to zero if the forward noising process adequately destroys the data distribution so that q(x_T | x_0) ≈ N(0, I) . It is useful to define and derive several other quantities relevant to the forward noising process , so we repeat them here from Ho et al . ( 2020 ) :

α_t := 1 − β_t  ( 8 )
ᾱ_t := ∏_{s=0}^{t} α_s  ( 9 )
β̃_t := ( (1 − ᾱ_{t−1}) / (1 − ᾱ_t) ) β_t  ( 10 )
µ̃_t(x_t, x_0) := ( √ᾱ_{t−1} β_t / (1 − ᾱ_t) ) x_0 + ( √α_t (1 − ᾱ_{t−1}) / (1 − ᾱ_t) ) x_t  ( 11 )
q(x_t | x_0) = N( x_t ; √ᾱ_t x_0 , (1 − ᾱ_t) I )  ( 12 )
q(x_{t−1} | x_t, x_0) = N( x_{t−1} ; µ̃_t(x_t, x_0) , β̃_t I )  ( 13 )

2.2 TRAINING IN PRACTICE . Equation 12 provides an efficient way to jump directly to an arbitrary step of the forward noising process . This makes it possible to randomly sample t during training . Ho et al . ( 2020 ) uniformly sample t for each image in each mini-batch . There are many different ways to parameterize µθ(x_t, t) .
The most obvious option is to predict µθ(x_t, t) directly with a neural network ; alternatively , the network could predict x_0 , and this output could then be fed through µ̃_t(x_t, x_0) ; finally , the network could predict the noise ε added to x_0 , which yields the model mean via

µθ(x_t, t) = (1/√α_t) ( x_t − ( β_t / √(1 − ᾱ_t) ) ε_θ(x_t, t) )  ( 14 )

Ho et al . ( 2020 ) found that predicting ε worked best , especially when combined with a reweighted loss function :

L_simple = E_{t, x_0, ε} [ || ε − ε_θ(x_t, t) ||² ]  ( 15 )

This objective can be seen as a reweighted form of L_vlb ( without the terms affecting Σθ ) . The authors found that optimizing this reweighted objective resulted in much better sample quality than optimizing L_vlb directly , and explain this by drawing a connection to generative score matching ( Song & Ermon , 2019 ; 2020 ) . One subtlety is that L_simple provides no learning signal for Σθ(x_t, t) . This is irrelevant , however , since Ho et al . ( 2020 ) achieved their best results by fixing the variance to σ²_t I rather than learning it . They found that they achieve similar sample quality using either σ²_t = β_t or σ²_t = β̃_t , which are the two extremes given by q(x_0) being either isotropic Gaussian noise or a delta function , respectively . 3 IMPROVING THE LOG-LIKELIHOOD . While Ho et al . ( 2020 ) found that diffusion models can generate high-fidelity samples according to FID ( Heusel et al. , 2017 ) and Inception Score ( Salimans et al. , 2016 ) , they were unable to achieve competitive log-likelihoods with these models . Log-likelihood is a widely used metric in generative modeling , and it is generally believed that optimizing log-likelihood forces generative models to capture all of the modes of the data distribution ( Razavi et al. , 2019 ) . Additionally , recent work ( Henighan et al. , 2020 ) has shown that small improvements in log-likelihood can have a dramatic impact on sample quality and learnt feature representations .
Thus , it is important to explore why diffusion models seem to perform poorly on this metric , since this may suggest a fundamental shortcoming such as bad mode coverage . This section explores several modifications to the algorithm described in Section 2 that , when combined , allow diffusion models to achieve much better log-likelihoods on image datasets , suggesting that these models enjoy the same benefits as other likelihood-based generative models . To study the effects of different modifications , we train fixed model architectures with fixed hyperparameters ( Appendix A ) on the ImageNet 64 × 64 ( van den Oord et al. , 2016a ) and CIFAR-10 ( Krizhevsky , 2009 ) datasets . While CIFAR-10 has seen more usage for this class of models , we chose to study ImageNet 64× 64 as well because it provides a good trade-off between diversity and resolution , allowing us to train models quickly without worrying about overfitting . Additionally , ImageNet 64×64 has been studied extensively in the context of generative modeling ( van den Oord et al. , 2016b ; Menick & Kalchbrenner , 2018 ; Child et al. , 2019 ; Roy et al. , 2020 ) , allowing us to compare diffusion models directly to many other generative models . The setup from Ho et al . ( 2020 ) ( optimizing Lsimple while setting σ2t = βt and T = 1000 ) achieves a log-likelihood of 3.99 bits/dim on ImageNet 64 × 64 after 200K training iterations . We found in early experiments that we could get a boost in log-likelihood by increasing T from 1000 to 4000 ; with this change , the log-likelihood improves to 3.77 bits/dim . For the remainder of this section , we use T = 4000 , but we explore this choice in Section 4 . 3.1 LEARNING Σθ ( xt , t ) In Ho et al . ( 2020 ) , the authors set Σθ ( xt , t ) = σ2t I , where σt is not learned . Oddly , they found that fixing σ2t to βt yielded roughly the same sample quality as fixing it to β̃t . 
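As a quick numerical aside (our own check, with an assumed linear schedule, not taken from the paper), Eq. (10) lets one compute how close β̃_t stays to β_t over the diffusion process:

```python
def beta_tilde_over_beta(betas):
    # Eq. (10): beta_tilde_t / beta_t = (1 - abar_{t-1}) / (1 - abar_t),
    # with abar_t the running product of alpha_t = 1 - beta_t
    # (the convention abar_{-1} = 1 is assumed for the first step)
    abar = 1.0
    ratios = []
    for b in betas:
        prev = abar
        abar *= 1.0 - b
        ratios.append((1.0 - prev) / (1.0 - abar))
    return ratios

# assumed linear schedule over 1000 steps
betas = [1e-4 + (0.02 - 1e-4) * t / 999 for t in range(1000)]
ratios = beta_tilde_over_beta(betas)
print(ratios[0], ratios[-1] > 0.999)  # 0.0 at t = 0, essentially 1 at t = T
```

The ratio is exactly 0 at the first step and stays just below 1 for the rest of the process, matching the qualitative behavior described in the text.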
Considering that β_t and β̃_t represent two opposite extremes , it is reasonable to ask why this choice doesn't affect samples . One clue is given by Figure 1a , which shows that β_t and β̃_t are almost equal except near t = 0 , i.e. , where the model is dealing with imperceptible details . Furthermore , as we increase the number of diffusion steps , β_t and β̃_t seem to remain close to one another for more of the diffusion process .

[ Figure 1a : The ratio β̃_t/β_t for every diffusion step , for diffusion processes of different lengths . ]
[ Figure 2 : Latent samples from linear ( top ) and cosine ( bottom ) schedules , respectively , at linearly spaced values of t from 0 to T . The latents in the last quarter of the linear schedule are almost purely noise , whereas the cosine schedule adds noise more slowly . ]

This suggests that , in the limit of infinite diffusion steps , the choice of σ_t might not matter at all for sample quality . In other words , as we add more diffusion steps , the model mean µθ(x_t, t) determines the distribution much more than Σθ(x_t, t) . While the above argument suggests that fixing σ_t is a reasonable choice for the sake of sample quality , it says nothing about log-likelihood . In fact , Figure 1b shows that the first few steps of the diffusion process contribute the most to the variational lower bound . Thus , it seems likely that we could improve log-likelihood by using a better choice of Σθ(x_t, t) . To achieve this , we must learn Σθ(x_t, t) without the instabilities encountered by Ho et al . ( 2020 ) . Since Figure 1a shows that the reasonable range for Σθ(x_t, t) is very small , it would be hard for a neural network to predict Σθ(x_t, t) directly , even in the log domain , as observed by Ho et al . ( 2020 ) . Instead , we found it better to parameterize the variance as an interpolation between β_t and β̃_t in the log domain .
In particular , our model outputs a vector v containing one component per dimension , and we turn this output into variances as follows :

Σθ(x_t, t) = exp( v log β_t + (1 − v) log β̃_t )  ( 16 )

We did not apply any constraints on v , theoretically allowing the model to predict variances outside of the interpolated range . However , we did not observe the network doing this in practice , suggesting that the bounds for Σθ(x_t, t) are indeed expressive enough . Since L_simple doesn't depend on Σθ(x_t, t) , we define a new hybrid objective :

L_hybrid = L_simple + λ L_vlb  ( 17 )

For our experiments , we set λ = 0.001 to prevent L_vlb from overwhelming L_simple . Along the same line of reasoning , we also apply a stop-gradient to the µθ(x_t, t) output for the L_vlb term . This way , L_vlb can guide Σθ(x_t, t) while L_simple remains the main source of influence over µθ(x_t, t) .
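A minimal numeric sketch (ours, not the released code) of Eqs. (16)-(17): the log-domain interpolation recovers β_t at v = 1 and β̃_t at v = 0, and the hybrid loss down-weights the VLB term:

```python
import math

def sigma_theta(v, beta_t, beta_tilde_t):
    # Eq. (16): interpolate between beta_t and beta_tilde_t in the log domain
    return math.exp(v * math.log(beta_t) + (1.0 - v) * math.log(beta_tilde_t))

def l_hybrid(l_simple, l_vlb, lam=0.001):
    # Eq. (17): lambda = 0.001 keeps L_vlb from overwhelming L_simple
    return l_simple + lam * l_vlb

# v = 1 recovers beta_t and v = 0 recovers beta_tilde_t (toy values)
print(round(sigma_theta(1.0, 0.02, 0.01), 12))  # 0.02
print(round(sigma_theta(0.0, 0.02, 0.01), 12))  # 0.01
```

Values of v between 0 and 1 give variances between the two extremes; the stop-gradient on µθ mentioned above has no analogue in this scalar sketch.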
The paper builds upon the recent work from Ho et al. (2020) on generative models that use noise diffusion. The authors show that the approach of Ho et al. can not only be used for good-quality sample generation (as already shown by Ho et al.) but also leads to reasonable improvements in likelihood. Overall, some of the ideas presented in the paper are interesting and useful, but the paper overall needs work.
The paper presents several methods to improve the log-likelihood of diffusion models while maintaining their sample quality, including a cosine instead of a linear noise schedule, using a hybrid objective to learn the parameters of the covariance function, and using importance sampling to reduce gradient noise. The authors also explore how sample quality and log-likelihood scale with the number of diffusion steps and model capacity. Experiments on the 64x64 ImageNet dataset show competitive log-likelihood while maintaining sample quality.
In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
1 INTRODUCTION . The recent extraordinary success of deep learning methods can be mostly attributed to advancements in learning algorithms and the availability of large-scale labeled datasets . However , constructing large labeled datasets for supervised learning tends to be costly and is often infeasible . Several approaches have been proposed to overcome this dependency on huge labeled datasets ; these include semi-supervised learning ( Berthelot et al. , 2019 ; Tarvainen & Valpola , 2017 ; Miyato et al. , 2018 ; Lee , 2013 ) , self-supervised learning ( Doersch et al. , 2015 ; Noroozi & Favaro , 2016 ; Chen et al. , 2020a ) , and few-shot learning ( Finn et al. , 2017 ; Snell et al. , 2017 ; Vinyals et al. , 2016 ) . Semi-supervised learning ( SSL ) is one of the most dominant approaches for solving this problem , where the goal is to leverage a large unlabeled dataset alongside a small labeled dataset . One common assumption for SSL is that decision boundaries should lie in low density regions ( Chapelle & Zien , 2005 ) . Consistency-regularization based methods achieve this by making the network outputs invariant to small input perturbations ( Verma et al. , 2019 ) . However , one issue with these methods is that they often rely on a rich set of augmentations , like affine transformations , cutout ( DeVries & Taylor , 2017 ) , and color jittering in images , which limits their capability for domains where these augmentations are less effective ( e.g . videos and medical images ) . Pseudo-labeling based methods select unlabeled samples with high confidence as training targets ( pseudo-labels ) ; this can be viewed as a form of entropy minimization , which reduces the density of data points at the decision boundaries ( Grandvalet & Bengio , 2005 ; Lee , 2013 ) . One advantage of pseudo-labeling over consistency regularization is that it does not inherently require augmentations and can be generally applied to most domains . 
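As a concrete illustration (our own sketch with a hypothetical threshold, not the authors' code), conventional confidence-based pseudo-labeling keeps the argmax class as a hard target only when its predicted probability is high enough:

```python
def confidence_pseudo_label(probs, tau=0.95):
    # conventional pseudo-labeling (Lee, 2013 style): return the argmax
    # class if its probability reaches the threshold tau, else no label
    p = max(probs)
    return probs.index(p) if p >= tau else None

print(confidence_pseudo_label([0.97, 0.02, 0.01]))  # 0
print(confidence_pseudo_label([0.55, 0.40, 0.05]))  # None
```

With a poorly calibrated network, the 0.97 prediction above may still be wrong, which is exactly the failure mode the paper targets.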
However , recent consistency-regularization approaches tend to outperform pseudo-labeling on SSL benchmarks . This work is in defense of pseudo-labeling : we demonstrate that pseudo-labeling based methods can perform on par with consistency-regularization methods . Although the selection of unlabeled samples with high-confidence predictions moves decision boundaries to low-density regions in pseudo-labeling based approaches , many of these selected predictions are incorrect due to the poor calibration of neural networks ( Guo et al. , 2017 ) . Calibration measures the discrepancy between the confidence level of a network 's individual predictions and its overall accuracy ( Dawid , 1982 ; Degroot & Fienberg , 1983 ) ; for poorly calibrated networks , an incorrect prediction might have high confidence . We argue that conventional pseudo-labeling based methods achieve poor results because poor network calibration produces incorrectly pseudo-labeled samples , leading to noisy training and poor generalization . To remedy this , we empirically study the relationship between output prediction uncertainty and calibration . We find that selecting predictions with low uncertainty greatly reduces the effect of poor calibration , improving generalization . Motivated by this , we propose an uncertainty-aware pseudo-label selection ( UPS ) framework that leverages the prediction uncertainty to guide the pseudo-label selection procedure . We believe pseudo-labeling has been impactful due to its simplicity , generality , and ease of implementation ; to this end , our proposed framework attempts to maintain these benefits while addressing the issue of calibration to drastically improve pseudo-labeling performance . UPS does not require modality-specific augmentations and can leverage most uncertainty estimation methods in its selection process . Furthermore , the proposed framework allows for the creation of negative pseudo-labels ( i.e .
labels which specify the absence of specific classes). If a network predicts the absence of a class with high confidence and high certainty, then a negative label can be assigned to that sample. This generalization is beneficial for both single-label and multi-label learning. In the single-label case, networks can use these labels for negative learning (Kim et al., 2019)¹; in the multi-label case, class presence is independent, so both positive and negative labels are necessary for training. Our key contributions include the following: (1) We introduce UPS, a novel uncertainty-aware pseudo-label selection framework which greatly reduces the effect of poor network calibration on the pseudo-labeling process; (2) While prior SSL methods focus on single-label classification, we generalize pseudo-labeling to create negative labels, allowing for negative learning and multi-label classification; and (3) Our comprehensive experimentation shows that the proposed method achieves strong performance on the commonly used benchmark datasets CIFAR-10 and CIFAR-100. In addition, we highlight our method's flexibility by outperforming previous state-of-the-art approaches on the video dataset UCF-101 and the multi-label Pascal VOC dataset. 2 RELATED WORKS. Semi-supervised learning is a heavily studied problem. In this work, we mostly focus on pseudo-labeling and consistency regularization based approaches as, currently, these are the dominant approaches for SSL. Following (Berthelot et al., 2019), we refer interested readers to other SSL approaches, which include: "transductive" models (Gammerman et al., 1998; Joachims, 1999; 2003), graph-based methods (Zhu et al., 2003; Bengio et al., 2006; Liu et al., 2019), and generative modeling (Belkin & Niyogi, 2002; Lasserre et al., 2006; Kingma et al., 2014; Pu et al., 2016). Furthermore, several recent self-supervised approaches (Grill et al., 2020; Chen et al.
, 2020b; Caron et al., 2020) have shown strong performance when applied to the SSL task. For a general overview of SSL, we point to (Chapelle et al., 2010; Zhu, 2005). Pseudo-labeling The goal of pseudo-labeling (Lee, 2013; Shi et al., 2018) and self-training (Yarowsky, 1995; McClosky et al., 2006) is to generate pseudo-labels for unlabeled samples with a model trained on labeled data. In (Lee, 2013), pseudo-labels are created from the predictions of a trained neural network. Pseudo-labels can also be assigned to unlabeled samples based on neighborhood graphs (Iscen et al., 2019). Shi et al. (2018) extend the idea of pseudo-labeling by incorporating confidence scores for unlabeled samples based on the density of a local neighborhood. Inspired by noise correction work (Yi & Wu, 2019), Wang & Wu (2020) attempt to update the pseudo-labels through an optimization framework. Recently, (Xie et al., 2019) show self-training can be used to improve the performance of benchmark supervised classification tasks. A concurrent work (Haase-Schutz et al., 2020) partitions an unlabeled dataset and trains re-initialized networks on each partition. They use previously trained networks to filter the labels used for training newer networks. However, most of their experiments involve learning from noisy data. ¹The motivations for using negative learning (NL) in this work differ greatly from Kim et al. (2019). In this work, NL is used to incorporate more unlabeled samples into training and to generalize pseudo-labeling to the multi-label classification setting, whereas Kim et al. (2019) use negative learning primarily to obtain good network initializations to learn with noisy labels. Further discussion about NL can be found in Appendix K.
Although previous pseudo-labeling based SSL approaches are general and domain-agnostic, they tend to under-perform due to the generation of noisy pseudo-labels; our approach greatly reduces this noise by minimizing the effect of poor network calibration, allowing for competitive state-of-the-art results. Consistency Regularization The main objective of consistency regularization methods is to obtain an output distribution that is invariant to input perturbations/augmentations. In (Sajjadi et al., 2016), random max-pooling, dropout, and random data augmentation are used as input perturbations. In (Miyato et al., 2018), perturbations are applied to the input that change the output predictions maximally. Temporal ensembling (Laine & Aila, 2017) forces the output class distribution for a sample to be consistent over multiple epochs. Tarvainen & Valpola (2017) reformulate temporal ensembling as a teacher-student problem. Recently, the Mixup augmentation (Zhang et al., 2018) has been used for consistency regularization in (Verma et al., 2019). Several SSL works combine ideas from both consistency regularization and pseudo-labeling (Berthelot et al., 2019; 2020; Zhou et al., 2020). In (Berthelot et al., 2019), pseudo-labels are generated by averaging different predictions of augmented versions of the same sample, and the Mixup augmentation is used to train with these pseudo-labels. The authors in (Berthelot et al., 2020) extend this idea by dividing the set of augmentations into strong and weak augmentations. Also, (Zhou et al., 2020) incorporate a time-consistency metric to effectively select time-consistent samples for consistency regularization. The success of recent consistency regularization methods can be attributed to domain-specific augmentations; our approach does not inherently rely on these augmentations, which allows for application to various modalities.
Also, our pseudo-labeling method is orthogonal to consistency regularization techniques; therefore, these existing techniques can be applied alongside UPS to further improve network performance. Uncertainty and Calibration Estimating network prediction uncertainty has been a deeply studied topic (Graves, 2011; Blundell et al., 2015; Louizos & Welling, 2016; Lakshminarayanan et al., 2017; Malinin & Gales, 2018; Maddox et al., 2019; Welling & Teh, 2011). In the SSL domain, (Yu et al., 2019; Xia et al., 2020) use uncertainty to improve consistency regularization learning for the segmentation of medical images. A concurrent work (Mukherjee & Awadallah, 2020) selects pseudo-labels predicted by a pretrained language model using uncertainty for a downstream SSL task. One difference between our works is the treatment of hard samples. Whereas Mukherjee & Awadallah select a certain number of hard samples (i.e., those which are not confident or certain) and learn from these using positive learning, we use negative learning on these samples, which reduces the amount of noise seen by the network. Zheng & Yang (2020) show strong performance on the domain adaptive semantic segmentation task by leveraging uncertainty. However, to the best of our knowledge, uncertainty has not been used to reduce the effect of poor network calibration in the pseudo-labeling process. In this work, instead of improving the calibration of the network (Guo et al., 2017; Xing et al., 2020), we present a general framework which can leverage most uncertainty estimation methods to select a better calibrated subset of pseudo-labels. 3 PROPOSED METHOD. 3.1 PSEUDO-LABELING FOR SEMI-SUPERVISED LEARNING. Notation Let $D_L = \{(x^{(i)}, y^{(i)})\}_{i=1}^{N_L}$ be a labeled dataset with $N_L$ samples, where $x^{(i)}$ is the input and $y^{(i)} = [y^{(i)}_1, \ldots, y^{(i)}_C] \in \{0, 1\}^C$ is the corresponding label over $C$ class categories (note that multiple elements in $y^{(i)}$ can be non-zero in multi-label datasets). For a sample $i$, $y^{(i)}_c = 1$ denotes that class $c$ is present in the corresponding input and $y^{(i)}_c = 0$ represents the class's absence. Let $D_U = \{x^{(i)}\}_{i=1}^{N_U}$ be an unlabeled dataset with $N_U$ samples, which does not contain labels corresponding to its input samples. For the unlabeled samples, pseudo-labels $\tilde{y}^{(i)}$ are generated. Pseudo-labeling based SSL approaches involve learning a parameterized model $f_\theta$ on the dataset $\tilde{D} = \{(x^{(i)}, \tilde{y}^{(i)})\}_{i=1}^{N_L+N_U}$, with $\tilde{y}^{(i)} = y^{(i)}$ for the $N_L$ labeled samples. Generalizing Pseudo-label Generation There are several approaches to create the pseudo-labels $\tilde{y}^{(i)}$, which have been described in Section 2. We adopt the approach where hard pseudo-labels are obtained directly from network predictions. Let $p^{(i)}$ be the probability outputs of a trained network on the sample $x^{(i)}$, such that $p^{(i)}_c$ represents the probability of class $c$ being present in the sample. Using these output probabilities, the pseudo-label can be generated for $x^{(i)}$ as: $\tilde{y}^{(i)}_c = \mathbb{1}[p^{(i)}_c \geq \gamma]$, (1) where $\gamma \in (0, 1)$ is a threshold used to produce hard labels. Note that conventional single-label pseudo-labeling can be derived from equation 1 when $\gamma = \max_c p^{(i)}_c$. For the multi-label case, $\gamma = 0.5$ would lead to binary pseudo-labels, in which multiple classes can be present in one sample.
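The thresholding rule of equation (1) can be sketched in a few lines of NumPy; this is an illustrative implementation of the generic rule, not the authors' code, and the example probabilities are made up.

```python
import numpy as np

def generate_pseudo_labels(probs, gamma=None):
    """Hard pseudo-labels per equation (1): y_c = 1[p_c >= gamma].
    probs: (n_samples, n_classes) array of predicted probabilities.
    gamma=None recovers the single-label rule gamma = max_c p_c
    (ties would yield multiple positives; ignored in this sketch)."""
    probs = np.asarray(probs)
    if gamma is None:  # single-label: only the argmax class is labeled
        gamma = probs.max(axis=1, keepdims=True)
    return (probs >= gamma).astype(int)

p = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.3, 0.4]])
single = generate_pseudo_labels(p)        # one positive per row
multi = generate_pseudo_labels(p, 0.5)    # multi-label: zero or more positives
```

Note how the multi-label case with $\gamma = 0.5$ can produce a row with no positive labels at all, which is exactly where the negative pseudo-labels described above become useful.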
This paper is in defense of simple semi-supervised learning (SSL) with pseudo-labeling (PL): the authors demonstrate with experiments on four vision datasets (CIFAR-10, CIFAR-100, Pascal VOC, and UCF-101) that pseudo-labeling can perform on par with consistency regularization methods. The authors argue that PL underperforms because of poor network calibration: highly confident predictions can be wrong, leading to noisy training and poor generalization. The main contribution of the paper is the use of prediction uncertainty, in addition to confidence-based selection, which yields more accurate pseudo-labels for subsequent training. In addition, PL is generalized to create negative labels, with which the authors demonstrate the effectiveness of negative learning and multi-label classification. The proposed approach performs in the same ballpark as state-of-the-art methods on CIFAR-10 and CIFAR-100, while achieving new state-of-the-art results on a video dataset and a multi-label task. It is worth noting that the proposed approach is domain-independent, whereas consistency regularization methods rely heavily on augmentation techniques specific to vision datasets.
In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
As noted in (Guo et al., 2017), modern neural networks are often miscalibrated. Pseudo-labeling based semi-supervised learning schemes are predicated on high-confidence predictions from these neural networks. This paper posits that this miscalibration may lead to inferior results in confidence-based pseudo-labeling approaches. By taking into account model uncertainty and only using pseudo-labels from high-confidence instances with low uncertainty, this work presents a model that significantly improves on other PL strategies and is competitive with the consistency-based regularization strategies that comprise the current state of the art.
Adversarial Synthetic Datasets for Neural Program Synthesis
1 INTRODUCTION. Program synthesis has long been a key goal of AI research. In particular, researchers have become increasingly interested in the task of programming by example (PBE), where the goal is to generate a program consistent with a given set of input-output (I/O) pairs. Recent studies have achieved impressive results, solving PBE problems that humans would find difficult (e.g., Sharma et al. (2017); Zohar & Wolf (2018); Ellis et al. (2019)). However, these studies have a concerning weakness: since large, naturally occurring datasets of program synthesis problems do not exist, they train and test their models on synthetic datasets of randomly generated programs and I/O pairs. The justification for using these synthetic datasets is that if a model can correctly predict programs for arbitrary PBE problems, then it has likely learned the semantics of the programming language and can generalize to problems outside the synthetic data distribution (Devlin et al., 2017). While this justification is plausible, a model might also perform well because it has learned specific aspects of the synthetic data distribution, and recent studies have found this to be the case for several state-of-the-art models (Shin et al., 2019; Clymo et al., 2019). These studies find that current PBE models often perform poorly on distributions different from that of the training data, and they propose methods to mitigate this issue by generating synthetic data with more varied distributions. The idea behind these methods is that a model trained on more varied synthetic data should generalize to a wider variety of distributions, hopefully including those of real-world PBE problems. Nevertheless, we find that these methods are often insufficient. Previous studies differ on what constitutes a “varied distribution” of synthetic data, creating definitions based on problem-specific heuristics.
While generating training data based on these heuristics does help models generalize to certain distributions, we find that models trained using these methods still fail to generalize to many other distributions, including those resembling distributions of real-world problems. Moreover, different methods fail to generalize to different distributions, raising the question of how one should construct test sets to evaluate these methods. While previous studies have arbitrarily picked test sets that they believe present a reasonable challenge for state-of-the-art methods, this approach may lead to overly optimistic evaluations. A study may report that a method performed well because the researchers failed to find those distributions on which the method performs poorly. In this paper, we propose an adversarial method to generate a training set. Our adversarial approach builds a training set iteratively, finding data distributions on which a given model performs poorly and adding data drawn from those distributions to the training set on each iteration. We test this method by using it to generate training data for the PCCoder model from Zohar & Wolf (2018), and we show that models trained using our method generalize to a variety of distributions better than previously proposed methods. Moreover, we propose using a variation of our adversarial approach to generate test sets to evaluate PBE methods. We create test sets for different versions of PCCoder using this approach and show that these test sets reveal weaknesses in models that are not obvious when using other test sets. This paper makes the following key contributions: 1. We propose a new, adversarial method to generate desirable distributions on which to train models for PBE. 2. We show that models trained using our method generalize to a variety of datasets better than models trained using previously proposed methods. 3.
We show that our adversarial approach may also be used to generate test sets that are less likely to overestimate the performance of a model. 2 RELATED WORK. Most studies on PBE methods generate I/O pairs using random sampling schemes, filtering out invalid I/O pairs for each program by constraining the sample space and rejecting sets of I/O pairs that do not meet these constraints. Balog et al. (2016) construct a dataset of PBE problems for DeepCoder by enumerating programs up to a given length and removing programs with easily detectable issues (e.g., redundant variables). For each program generated, they then create I/O pairs by sampling inputs uniformly from a restricted range of values guaranteed to yield valid outputs for the given program. Feng et al. (2018) and Zohar & Wolf (2018) also create datasets using the DeepCoder DSL (domain-specific language) in a similar manner. Bunel et al. (2018) generate PBE problems for Karel (Pattis, 1981) by randomly sampling programs from the Karel DSL and removing programs with obvious problems, similar to Balog et al. They then generate I/O pairs for each program by sampling random inputs and running the program to obtain the corresponding outputs. However, Bunel et al. do not specify what sampling distributions are used for the programs and I/O pairs. Parisotto et al. (2016) create a dataset for the Flashfill domain (Gulwani et al., 2012) by enumerating programs up to 13 expressions long and then randomly sampling inputs to create I/O pairs. They report that while their model achieves 97% accuracy on their synthetic data, they achieve only 38% accuracy on a small dataset of 238 real-world problems. Devlin et al. (2017) use a data generation approach similar to Parisotto et al. but with an improved model and are more successful, achieving 92% accuracy on the same real-world dataset used by Parisotto et al.
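The random sampling scheme these studies share, sample inputs from a restricted space, run the program to obtain outputs, and reject invalid pairs, can be sketched as follows. The toy DSL-style program and the sampler are illustrative placeholders, not any paper's actual domain.

```python
import random

def make_pbe_problem(program, sample_input, n_pairs=5, rng=None):
    """Sketch of the common random data-generation scheme: sample inputs
    from a restricted range, run the program to obtain outputs, and
    reject inputs on which the program is undefined."""
    rng = rng or random.Random(0)
    pairs = []
    while len(pairs) < n_pairs:
        x = sample_input(rng)
        try:
            y = program(x)  # invalid inputs raise and are discarded
        except Exception:
            continue
        pairs.append((x, y))
    return pairs

# Toy list-DSL program: reverse the list, then take the head
# (fails on the empty list, so empty inputs get rejected).
prog = lambda xs: list(reversed(xs))[0]
sample = lambda r: [r.randint(-5, 5) for _ in range(r.randint(0, 3))]
pairs = make_pbe_problem(prog, sample)
```

The rejection step is exactly the filtering described above: the sampled input space is only *mostly* valid, and pairs that do not yield a valid output are dropped.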
All of the papers above focus on advancing models for PBE, but they do so largely using synthetic data to train those models. Shin et al. (2019) report that even minor differences between the synthetic data distributions used for training and evaluation can drastically decrease a model's performance. To solve this problem, they propose a data generation method to improve a model's ability to generalize to other data distributions. They first choose a set of “salient variables” for the domain, defined as a mapping from I/O pairs in the synthetic dataset to a finite, discrete set. They then sample I/O pairs such that the salient variables will be approximately uniformly distributed in the resulting dataset. Shin et al. find that the model proposed by Bunel et al. (2018) generalizes better to a variety of distributions when trained with data generated with this method. However, this method has two major disadvantages. First, it requires the user to determine the correct salient variables, which may be difficult for complex domains. Second, if the domain of valid I/O pairs is highly dependent on the program, it is often prohibitively complex to enforce uniformity across salient variables. Recently, Clymo et al. (2019) proposed a method to generate PBE problems using an SMT solver. They impose constraints on the I/O pairs to ensure that pairs selected for the dataset are not too similar to each other and then select I/O pairs that satisfy these constraints using an SMT solver. However, when testing an implementation of this method on the DeepCoder domain, the reported improvement of the constraint-based methods over simpler sampling methods is marginal, with the best constraint-based method performing only 2.4% better than the best sampling method. Moreover, many of their constraints are highly specific to the DeepCoder domain, and Clymo et al. do not offer a way to adapt their method to other problem spaces.
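One simple way to realize the salient-variable balancing idea of Shin et al. (2019) is rejection sampling: accept a candidate I/O pair only if its salient value is currently among the least frequent. The sketch below is our illustrative reading of the idea, not their implementation; `sample_io`, `salient`, and the parity toy are all made up.

```python
import random

def sample_uniform_salient(sample_io, salient, n_samples, rng=None):
    """Sketch of salient-variable balancing via rejection sampling:
    keep a candidate only if its salient value's count does not exceed
    the minimum count seen so far, pushing the empirical salient
    distribution toward uniform."""
    rng = rng or random.Random(0)
    counts, kept = {}, []
    while len(kept) < n_samples:
        io = sample_io(rng)
        s = salient(io)
        if counts.get(s, 0) <= min(counts.values(), default=0):
            kept.append(io)
            counts[s] = counts.get(s, 0) + 1
    return kept

# Toy domain: raw sampler is skewed 80/20 toward even numbers;
# the salient variable is parity.
biased = lambda r: (r.randrange(0, 100, 2) if r.random() < 0.8
                    else r.randrange(1, 100, 2))
data = sample_uniform_salient(biased, lambda x: x % 2, 100)
```

This also makes the second disadvantage above concrete: when valid I/O pairs depend strongly on the program, the rejection loop may rarely (or never) see the under-represented salient values, and the loop becomes prohibitively slow.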
Nevertheless , they present a method that does not require the salient variables of Shin et al . ( 2019 ) , and they show that their approach can be applied to domains such as DeepCoder , for which it is difficult to enforce uniformity across salient variables . Our adversarial method is closely related to the literature on adversarial training . Most similar to our work is that of Volpi et al . ( 2018 ) , who propose an adversarial training procedure to learn models resistant to covariate shift . Given a model M trained on data from a given domain X , Volpi et al . generate synthetic training data by sampling data points for which M performs poorly , under the constraint that the sampled data points must be within a certain Wassertein distance ρ from X . By retraining M on the synthetic data , Volpi et al . create a new model resistant to covariate shifts of magnitude ρ . However , a key difficulty in applying their method is that the user must predict the magnitude of a covariate shift before it actually occurs in order to select ρ . Moreover , our approach differs from that of Volpi et al . in that our goal is to train a model to generalize to virtually any distribution of data valid for a given programming problem , where as Volpi et al . try to train a model to be resistant to a single , anticipated covariate shift . Our method also does not require the user to predict a hyperparameter such as ρ to train a model . Other studies have also proposed adversarial methods to generate synthetic training data ( e.g. , Goodfellow et al . ( 2015 ) ; Sinha et al . ( 2020 ) ) . However , these methods try to train a model to be resistant to imperceptible adversarial attacks , which make small perturbations to training examples such that the perturbed inputs cause the target model to output incorrect answers with high confidence . 
Our method tries to train a model to generalize to all valid data distributions , not just the small perturbations found in adversarial attacks . Some studies have proposed iteratively constructing training sets based on the concept of “ surprise. ” In the field of program synthesis , Pu et al . ( 2017 ) generate I/O pairs to specify a program by first generating a large set of candidate I/O pairs and then iteratively selecting the I/O pairs from this set that are most surprising , where surprise is estimated using a neural network trained to predict which I/O pairs will be selected ( I/O pairs assigned lower probabilities by the neural network are more surprising ) . Pu et al . use this approach to ensure that their I/O pairs are sufficient to specify desired programs unambiguously . In contrast , our approach tries to find and generalize to out-ofdistribution data , and therefore generates I/O pairs to maximize the difficulty of the resulting PBE problems rather than maximizing surprise . Our adversarial method uses an evolutionary algorithm to find desirable training distributions . This approach is similar to that of studies on competitive co-evolution ( e.g. , Rosin & Belew ( 1997 ) ; Arcuri & Yao ( 2007 ) ) . For example , Arcuri & Yao ( 2014 ) co-evolve populations of programs and populations of unit tests given a formal specification of a desired program . The fitness value of a candidate program depends on how many unit tests it passes , while the fitness value of a candidate unit test depends on how many programs it causes to fail . However , the goal of Arcuri & Yao ( 2014 ) and similar works is to simply generate a vanilla program synthesis algorithm capable of generating a program given its formal specification . They do not try to ensure that their algorithms generalize to different data distributions as we do ( e.g. , their algorithm might only work for certain distributions of formal specifications ) . 
Furthermore , in order to compute fitness values , co-evolutionary methods require the user to provide a formal specification of the desired program ( e.g. , using Z notation ( Spivey & Abrial , 1992 ) ) . Our approach only requires the user to provide I/O pairs describing the behavior of a desired program , which tend to be easier to provide than a formal specification ( Palshikar , 2001 ) .
Synthesis models trained on synthetic datasets of randomly generated programs and corresponding I/O pairs often fail to generalize to real-world PBE problems. The models often learn specific aspects of the synthetic data distribution rather than the semantics of the programming language. This work proposes a more principled, adversarial approach to generating synthetic data. The training set is built iteratively by finding data distributions on which a given model performs poorly and adding data drawn from these distributions to the training set, so the model and the training dataset are built in tandem. Candidate distributions are generated by mutating the current population of distributions. Models trained using this method are shown to generalize to a variety of distributions.
SP:2a886848c0832643ba2bd13804fe3fe8104a33bb
Adversarial Synthetic Datasets for Neural Program Synthesis
1 INTRODUCTION. Program synthesis has long been a key goal of AI research. In particular, researchers have become increasingly interested in the task of programming by example (PBE), where the goal is to generate a program consistent with a given set of input-output (I/O) pairs. Recent studies have achieved impressive results, solving PBE problems that humans would find difficult (e.g., Sharma et al. (2017); Zohar & Wolf (2018); Ellis et al. (2019)). However, these studies have a concerning weakness: since large, naturally occurring datasets of program synthesis problems do not exist, they train and test their models on synthetic datasets of randomly generated programs and I/O pairs. The justification for using these synthetic datasets is that if a model can correctly predict programs for arbitrary PBE problems, then it has likely learned the semantics of the programming language and can generalize to problems outside the synthetic data distribution (Devlin et al., 2017). While this justification is plausible, a model might also perform well because it has learned specific aspects of the synthetic data distribution, and recent studies have found this to be the case for several state-of-the-art models (Shin et al., 2019; Clymo et al., 2019). These studies find that current PBE models often perform poorly on distributions different from that of the training data, and they propose methods to mitigate this issue by generating synthetic data with more varied distributions. The idea behind these methods is that a model trained on more varied synthetic data should generalize to a wider variety of distributions, hopefully including those of real-world PBE problems. Nevertheless, we find that these methods are often insufficient. Previous studies differ on what constitutes a "varied distribution" of synthetic data, creating definitions based on problem-specific heuristics.
While generating training data based on these heuristics does help models generalize to certain distributions, we find that models trained using these methods still fail to generalize to many other distributions, including those resembling distributions of real-world problems. Moreover, different methods fail to generalize to different distributions, raising the question of how one should construct test sets to evaluate these methods. While previous studies have arbitrarily picked test sets that they believe present a reasonable challenge for state-of-the-art methods, this approach may lead to overly optimistic evaluations: a study may report that a method performed well simply because the researchers failed to find the distributions on which the method performs poorly. In this paper, we propose an adversarial method for generating a training set. Our adversarial approach builds a training set iteratively, on each iteration finding data distributions on which a given model performs poorly and adding data drawn from those distributions to the training set. We test this method by using it to generate training data for the PCCoder model from Zohar & Wolf (2018), and we show that models trained using our method generalize to a variety of distributions better than models trained with previously proposed methods. Moreover, we propose using a variation of our adversarial approach to generate test sets for evaluating PBE methods. We create test sets for different versions of PCCoder using this approach and show that these test sets reveal weaknesses in models that are not obvious when using other test sets. This paper makes the following key contributions: 1. We propose a new, adversarial method to generate desirable distributions on which to train models for PBE. 2. We show that models trained using our method generalize to a variety of datasets better than models trained using previously proposed methods. 3.
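The iterative adversarial loop described above can be sketched as follows. The interface names (`accuracy`, `retrain`, `sample`, `mutate`) are hypothetical stand-ins for the model-evaluation step, the model update, the distribution sampler, and the mutation operator; none of them come from the paper, which builds these around PCCoder and an evolutionary algorithm.

```python
import random

def adversarial_training_set(accuracy, retrain, sample, mutate,
                             init_population, n_iters=3, n_examples=10):
    """Sketch of the adversarial idea: repeatedly find data distributions
    the current model handles poorly, add data drawn from them to the
    training set, retrain, and mutate the hardest distributions to form
    the next candidate population."""
    population = list(init_population)
    train_set = []
    for _ in range(n_iters):
        # Lower accuracy on a distribution's samples => harder distribution.
        scored = sorted(population, key=lambda d: accuracy(sample(d, n_examples)))
        hardest = scored[: max(1, len(population) // 2)]
        for d in hardest:
            train_set.extend(sample(d, n_examples))
        retrain(train_set)
        # Next generation: keep the hardest distributions plus mutants of them.
        population = hardest + [mutate(random.choice(hardest))
                                for _ in range(len(population) - len(hardest))]
    return train_set
```

In the toy usage below, a "distribution" is just a number and the model is assumed to struggle on large values, so the loop keeps drifting the population toward harder instances.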
We show that our adversarial approach may also be used to generate test sets that are less likely to overestimate the performance of a model. 2 RELATED WORK. Most studies on PBE methods generate I/O pairs using random sampling schemes, filtering out invalid I/O pairs for each program by constraining the sample space and rejecting sets of I/O pairs that do not meet these constraints. Balog et al. (2016) construct a dataset of PBE problems for DeepCoder by enumerating programs up to a given length and removing programs with easily detectable issues (e.g., redundant variables). For each program generated, they then create I/O pairs by sampling inputs uniformly from a restricted range of values guaranteed to yield valid outputs for the given program. Feng et al. (2018) and Zohar & Wolf (2018) also create datasets using the DeepCoder DSL (domain-specific language) in a similar manner. Bunel et al. (2018) generate PBE problems for Karel (Pattis, 1981) by randomly sampling programs from the Karel DSL and removing programs with obvious problems, similar to Balog et al. They then generate I/O pairs for each program by sampling random inputs and running the program to obtain the corresponding outputs. However, Bunel et al. do not specify what sampling distributions are used for the programs and I/O pairs. Parisotto et al. (2016) create a dataset for the Flashfill domain (Gulwani et al., 2012) by enumerating programs up to 13 expressions long and then randomly sampling inputs to create I/O pairs. They report that while their model achieves 97% accuracy on their synthetic data, it achieves only 38% accuracy on a small dataset of 238 real-world problems. Devlin et al. (2017) use a data generation approach similar to that of Parisotto et al. but with an improved model and are more successful, achieving 92% accuracy on the same real-world dataset used by Parisotto et al.
All of the papers above focus on advancing models for PBE, but they do so largely using synthetic data to train those models. Shin et al. (2019) report that even minor differences between the synthetic data distributions used for training and evaluation can drastically decrease a model's performance. To solve this problem, they propose a data generation method that improves a model's ability to generalize to other data distributions. They first choose a set of "salient variables" for the domain, defined as a mapping from I/O pairs in the synthetic dataset to a finite, discrete set. They then sample I/O pairs such that the salient variables are approximately uniformly distributed in the resulting dataset. Shin et al. find that the model proposed by Bunel et al. (2018) generalizes better to a variety of distributions when trained with data generated by this method. However, this method has two major disadvantages. First, it requires the user to determine the correct salient variables, which may be difficult for complex domains. Second, if the domain of valid I/O pairs is highly dependent on the program, it is often prohibitively complex to enforce uniformity across salient variables. Recently, Clymo et al. (2019) proposed a method to generate PBE problems using an SMT solver. They impose constraints on the I/O pairs to ensure that pairs selected for the dataset are not too similar to each other and then select I/O pairs that satisfy these constraints using an SMT solver. However, when an implementation of this method is tested on the DeepCoder domain, the reported improvement of the constraint-based methods over simpler sampling methods is marginal, with the best constraint-based method performing only 2.4% better than the best sampling method. Moreover, many of their constraints are highly specific to the DeepCoder domain, and Clymo et al. do not offer a way to adapt their method to other problem spaces.
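The salient-variable idea of Shin et al. can be illustrated with a simple rejection-sampling sketch: keep drawing I/O pairs and discard any whose salient bucket is already full, so buckets end up exactly uniform. The names `gen_io_pair` and `salient`, and the explicit bucket list, are placeholders of our own, not Shin et al.'s implementation.

```python
def sample_uniform_salient(gen_io_pair, salient, buckets, per_bucket):
    """Draw I/O pairs until each salient-variable bucket holds exactly
    `per_bucket` pairs, rejecting pairs whose bucket is already full.
    Assumes every bucket is reachable under `gen_io_pair`; otherwise this
    loop never terminates, mirroring the practical difficulty noted above
    when valid I/O pairs depend heavily on the program."""
    counts = {b: 0 for b in buckets}
    dataset = []
    while len(dataset) < per_bucket * len(buckets):
        pair = gen_io_pair()
        b = salient(pair)
        if counts.get(b, per_bucket) < per_bucket:  # reject full or unknown buckets
            counts[b] += 1
            dataset.append(pair)
    return dataset
```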
Nevertheless, they present a method that does not require the salient variables of Shin et al. (2019), and they show that their approach can be applied to domains such as DeepCoder, for which it is difficult to enforce uniformity across salient variables. Our adversarial method is closely related to the literature on adversarial training. Most similar to our work is that of Volpi et al. (2018), who propose an adversarial training procedure to learn models resistant to covariate shift. Given a model M trained on data from a given domain X, Volpi et al. generate synthetic training data by sampling data points on which M performs poorly, under the constraint that the sampled data points must be within a certain Wasserstein distance ρ of X. By retraining M on the synthetic data, Volpi et al. create a new model resistant to covariate shifts of magnitude ρ. However, a key difficulty in applying their method is that the user must predict the magnitude of a covariate shift before it actually occurs in order to select ρ. Moreover, our approach differs from that of Volpi et al. in that our goal is to train a model to generalize to virtually any distribution of data valid for a given programming problem, whereas Volpi et al. aim to make a model resistant to a single, anticipated covariate shift. Our method also does not require the user to predict a hyperparameter such as ρ in order to train a model. Other studies have also proposed adversarial methods to generate synthetic training data (e.g., Goodfellow et al. (2015); Sinha et al. (2020)). However, these methods aim to make a model resistant to imperceptible adversarial attacks, which make small perturbations to training examples such that the perturbed inputs cause the target model to output incorrect answers with high confidence.
Our method instead tries to train a model that generalizes to all valid data distributions, not just the small perturbations found in adversarial attacks. Some studies have proposed iteratively constructing training sets based on the concept of "surprise." In the field of program synthesis, Pu et al. (2017) generate I/O pairs to specify a program by first generating a large set of candidate I/O pairs and then iteratively selecting the I/O pairs from this set that are most surprising, where surprise is estimated using a neural network trained to predict which I/O pairs will be selected (I/O pairs assigned lower probabilities by the neural network are more surprising). Pu et al. use this approach to ensure that their I/O pairs are sufficient to specify desired programs unambiguously. In contrast, our approach tries to find and generalize to out-of-distribution data, and therefore generates I/O pairs to maximize the difficulty of the resulting PBE problems rather than to maximize surprise. Our adversarial method uses an evolutionary algorithm to find desirable training distributions. This approach is similar to that of studies on competitive co-evolution (e.g., Rosin & Belew (1997); Arcuri & Yao (2007)). For example, Arcuri & Yao (2014) co-evolve populations of programs and populations of unit tests given a formal specification of a desired program. The fitness value of a candidate program depends on how many unit tests it passes, while the fitness value of a candidate unit test depends on how many programs it causes to fail. However, the goal of Arcuri & Yao (2014) and similar works is simply to produce a program synthesis algorithm capable of generating a program given its formal specification. They do not try to ensure that their algorithms generalize to different data distributions as we do (e.g., their algorithm might only work for certain distributions of formal specifications).
Furthermore, in order to compute fitness values, co-evolutionary methods require the user to provide a formal specification of the desired program (e.g., using Z notation (Spivey & Abrial, 1992)). Our approach only requires the user to provide I/O pairs describing the behavior of a desired program, which tend to be easier to provide than a formal specification (Palshikar, 2001).
The paper proposes to evolve datasets of (input, output) pairs in the context of programming by example (PBE), where the goal is to infer a computer program consistent with the given (input, output) pairs. The motivation is to find instances that allow a PBE model to generalize better than one trained on randomly generated synthetic datasets. The approach uses an evolutionary algorithm to generate this new dataset, with the learned model as a guide, by picking the points on which the current model performs poorly. This adversarial approach proceeds iteratively, adding extra (input, output) pairs to the training set so that the model improves in the situations where it has trouble.
SP:2a886848c0832643ba2bd13804fe3fe8104a33bb
Understanding the role of importance weighting for deep learning
1 INTRODUCTION. Importance weighting is a standard tool for estimating a quantity under a target distribution when only samples from some source distribution are accessible. It has been drawing extensive attention in the statistics and machine learning communities. Causal inference for deep learning relies heavily on propensity score weighting, which is applied in off-policy optimization with counterfactual estimators (Gilotte et al., 2018; Jiang & Li, 2016), modelling with observational feedback (Schnabel et al., 2016; Xu et al., 2020), and learning from controlled interventions (Swaminathan & Joachims, 2015). Importance weighting methods are also applied to characterize distribution shifts for deep learning models (Fang et al., 2020), with modern applications in domain adaptation (Azizzadenesheli et al., 2019; Lipton et al., 2018) and learning from noisy labels (Song et al., 2020). Other usages include curriculum learning (Bengio et al., 2009) and knowledge distillation (Hinton et al., 2015), where the weights characterize the model's confidence on each sample. To reduce the discrepancy between the source and target distributions during model training, a standard routine is to minimize a weighted risk (Rubinstein & Kroese, 2016). Many techniques have been developed to this end, and a common strategy is re-weighting the classes proportionally to the inverse of their frequencies (Huang et al., 2016; 2019; Wang et al., 2017). For example, Cui et al. (2019) propose re-weighting by the inverse of the effective number of samples. The focal loss (Lin et al., 2017) down-weights well-classified examples, and the work by Li et al. (2019) suggests an improved technique which down-weights examples based on the magnitude of their gradients. ∗The work was done when the author was with Walmart Labs.
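The two class re-weighting schemes just mentioned can be sketched as follows: plain inverse-frequency weights, and the effective-number weights of Cui et al. (2019), where the effective number of samples for a class with n examples is E_n = (1 − β^n)/(1 − β). The normalization convention (weights summing to the number of classes) is our own choice, not prescribed by the papers.

```python
import numpy as np

def inverse_frequency_weights(class_counts):
    """Weight each class by the inverse of its frequency, normalized so
    the weights sum to the number of classes (a common convention)."""
    counts = np.asarray(class_counts, dtype=float)
    w = 1.0 / counts
    return w * len(counts) / w.sum()

def effective_number_weights(class_counts, beta=0.999):
    """Re-weighting by the inverse *effective number* of samples
    (Cui et al., 2019): E_n = (1 - beta**n) / (1 - beta). As beta -> 1
    this approaches inverse frequency; as beta -> 0 all classes become
    equally weighted."""
    counts = np.asarray(class_counts, dtype=float)
    eff = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / eff
    return w * len(counts) / w.sum()
```

For a 10-vs-90 class imbalance, `inverse_frequency_weights([10, 90])` gives `[1.8, 0.2]`, so the rare class contributes nine times the per-sample weight of the common one.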
Despite the empirical successes of various re-weighting methods, it is ultimately unclear how importance weighting exerts its influence from a theoretical standpoint. The recent study of Byrd & Lipton (2019) observes from experiments that importance weights have little impact on the converged deep neural network if the data can be separated by the model using gradient descent. They connect this phenomenon to the implicit bias of gradient descent (Soudry et al., 2018), a novel topic that studies why over-parameterized models trained on separable data are biased toward solutions that generalize well. The implicit bias of gradient descent has been observed and studied for linear models (Soudry et al., 2018; Ji & Telgarsky, 2018b), linear neural networks (Ji & Telgarsky, 2018a; Gunasekar et al., 2018), two-layer neural networks with homogeneous activations (Chizat & Bach, 2020), and smooth neural networks (Nacson et al., 2019; Lyu & Li, 2019). To summarize, these works reveal that the direction of the parameters (for linear predictors) and the normalized margin (for nonlinear predictors), regardless of the initialization, respectively converge to those of a max-margin solution. The pivotal role of the margin for deep learning models has been explored actively throughout the long journey of understanding the generalization of over-parameterized neural networks (Bartlett et al., 2017; Golowich et al., 2018; Neyshabur et al., 2018). For instance, Wei et al. (2019) study the margin of neural networks for separable data under weak regularization. They show that the normalized margin also converges to the max-margin solution, and provide a generalization bound for a neural network that hinges on its margin.
Although there are rich understandings of the implicit bias of gradient descent and margin-based generalization, very few efforts are dedicated to studying how they adapt to the weighted empirical-risk minimization (ERM) setting. The established results do not directly transfer, since importance weighting can change both the optimization geometry and how the generalization is measured. In this paper, we fill in the gap by showing the impact of importance weighting on the implicit bias of gradient descent as well as on the generalization performance. By studying the optimization dynamics of linear models, we first reveal the effect of importance weighting on the convergence speed under linearly separable data. When the data is not linearly separable, we characterize the unique role of importance weighting in defining the intercept term upon the implicit bias. We then investigate the non-linear neural network under a weak regularization, as in Wei et al. (2019). We provide a novel generalization bound that reflects how importance weighting leads to an interplay between the empirical risk and a compounding term that consists of the model complexity as well as the deviation between the source and target distributions. Based on our theoretical results, we discuss several exploratory developments on importance weighting that are worthy of further investigation. • A good set of weights for learning can be inversely proportional to the hard-to-classify extent. For example, a sample that is close to (far from) the oracle decision boundary should have a large (small) weight. • If the importance weights are jointly trained according to a weighting model, the impact of the weighting model eventually diminishes after showing strong correlation with the hard-to-classify extent, such as the margin.
• The usefulness of explicit regularization on weighted ERM can be studied, via its impact on the margin, in balancing the empirical loss and the distribution divergence. In summary, our contributions are threefold. • We characterize the impact of importance weighting on the implicit bias of gradient descent. • We find a generalization bound that hinges on the importance weights. For finite-step training, the role of importance weighting in the generalization bound is reflected in how the margin is affected, and in how it balances the source and target distributions. • We propose several exploratory topics on importance weighting that are worth further investigation from both the application and the theoretical perspectives. The rest of the paper is organized as follows. In Section 2, we introduce the background, preliminary results, and the experimental setup. In Sections 3 and 4, we demonstrate the influence of importance weighting for linear and non-linear models in terms of the implicit bias of gradient descent and the generalization performance. We then discuss the extended investigations in Section 5. 2 PRELIMINARIES. We use bold-font letters for vectors and matrices, uppercase letters for random variables and distributions, and ‖·‖ to denote the ℓ2 norm when no confusion arises. We denote the training data by D = {(w_i, x_i, y_i)}_{i=1}^n, where x_i ∈ X denotes the features, y_i is binary or categorical, and the importance weights are bounded such that w_i ∈ [1/M, M] for some M > 1. We mention that importance weights are often defined with respect to the source distribution P_s from which the training data is drawn, and the target distribution P_t. We do not make this assumption here, because importance weighting is often applied for more general purposes; therefore, w_i can be defined arbitrarily. We use f(θ, x) to denote the predictor and define F = {f(θ, ·) | θ ∈ Θ ⊂ R^d}.
For ease of notation, we focus on the binary setting: y_i ∈ {−1, +1} with f(θ, x) ∈ R. However, it will become clear later that our results can easily be extended to the multi-class setting. Consider the weighted empirical risk minimization (ERM) task with the risk given by L(θ; w) = (1/n) Σ_{i=1}^n w_i ℓ(y_i f(θ, x_i)) for some non-negative loss function ℓ(·). The weight-agnostic counterpart is denoted by L(θ) = (1/n) Σ_{i=1}^n ℓ(y_i f(θ, x_i)). We focus particularly on the exponential loss ℓ(u) = exp(−u) and the log loss ℓ(u) = log(1 + exp(−u)). For the multi-class problem where y_i ∈ [k], we extend our setup using the softmax function, where the logits are now given by {f_j(θ, x)}_{j=1}^k. For optimization, we consider using gradient descent to minimize the total loss: θ^(t+1)(w) = θ^(t)(w) − η_t ∇L(θ; w)|_{θ=θ^(t)(w)}, where the learning rate η_t can be constant or step-dependent. From parameter norm divergence to support vectors. Suppose D is separated by f(θ^(t), x) after some point during training. The key factor that contributes to the implicit bias for both linear and non-linear predictors under a weak regularization¹ is that the norm of the parameters diverges after separation, i.e., lim_{t→∞} ‖θ^(t)‖_2 = ∞, as a consequence of using gradient descent. Now we examine ‖θ^(t)(w)‖_2. The heuristic is that if ℓ(·) is exponential-like, multiplying it by w_i only changes its tail property up to a constant, while the asymptotic behavior is not affected. In particular, the necessary conditions for norm divergence under gradient descent can be summarized as: • C1. The loss function ℓ(·) has an exponential tail behavior (which we formalize in Appendix A.1) such that lim_{u→∞} ℓ(u) = lim_{u→∞} ∇ℓ(u) = 0; • C2. The predictor f(θ, x) is α-homogeneous, i.e., f(c·θ, x) = c^α f(θ, x) for all c > 0.
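The weighted risk and its gradient-descent minimization defined above can be made concrete for a linear predictor f(θ, x) = ⟨θ, x⟩ with the log loss; the function name and toy data below are illustrative, not from the paper.

```python
import numpy as np

def weighted_logistic_gd(X, y, w, lr=0.1, steps=1000):
    """Gradient descent on the weighted empirical risk
    L(theta; w) = (1/n) * sum_i w_i * log(1 + exp(-y_i <theta, x_i>)),
    matching the paper's setup with the log loss and a linear predictor.
    Labels are in {-1, +1}."""
    X, y, w = np.asarray(X, float), np.asarray(y, float), np.asarray(w, float)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ theta)                       # y_i * f(theta, x_i)
        # d/du log(1 + exp(-u)) = -1 / (1 + exp(u))
        grad = -(w / (1.0 + np.exp(margins)) * y) @ X / n
        theta -= lr * grad
    return theta
```

On separable data, the learned θ separates the training set, and (consistent with the norm-divergence discussion below) its norm keeps growing as the number of steps increases.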
In addition, we need certain regularities of f(θ, x) to ensure the existence of critical points and the convergence of gradient descent: • C3. For any x ∈ X, f(·, x) is β-smooth and l-Lipschitz on R^d. C1 is satisfied by the exponential loss, the log loss, and the cross-entropy loss under the multi-class setting. For standard deep learning models such as the multilayer perceptron (MLP), C2 implies that the activation functions are homogeneous, such as ReLU and LeakyReLU, and that bias terms are disallowed. C3 is a common technical assumption whose practical implications are discussed in Appendix A.1. Among the three necessary conditions, importance weighting only affects C1 up to a constant, so its impact on the norm divergence diminishes in the asymptotic regime. The formal statement is provided below. Claim 1. There exists a constant learning rate for gradient descent such that for any w ∈ [1/M, M]^n, with a weak regularization, lim_{t→∞} ‖θ^(t)(w)‖ = ∞ under C1–C3. Compared with previous work, we extend the norm divergence result not only to weighted ERM but to a more general setting where a weak regularization is considered. We defer the proof to Appendix A.1. A direct consequence of parameter norm divergence is that both the risk and the gradient are dominated by the terms with the smallest margin, i.e., arg min_i y_i f(θ, x_i), which are also referred to as the "support vectors". To see this, notice that both the risk and the gradient have the form Σ_i C_i exp(−y_i f(θ, x_i)), where the C_i are low-order terms. Since f(θ, x_i) = ‖θ‖_2^α f(θ/‖θ‖_2, x_i) by the homogeneity assumption in C2, it holds that exp(−y_i f(θ^(t)(w), x_i)) → 0 as t → ∞. ¹The regularized loss is given by L_λ(θ; w) = L(θ; w) + λ‖θ‖^r for a fixed r > 0. The weak regularization refers to the case where λ → 0.
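The homogeneity condition C2 can be checked numerically. A bias-free two-layer ReLU network is 2-homogeneous in its parameters (α equals the number of layers), since relu(c·W1 x) = c·relu(W1 x) for c > 0. The snippet below is a toy verification, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_mlp(params, x):
    """Bias-free two-layer ReLU network: f(theta, x) = W2 @ relu(W1 @ x).
    With no bias terms and ReLU activations, f is 2-homogeneous in the
    parameters, as required by condition C2."""
    W1, W2 = params
    return W2 @ np.maximum(W1 @ x, 0.0)

W1, W2 = rng.normal(size=(5, 3)), rng.normal(size=(1, 5))
x = rng.normal(size=3)
c = 3.0
lhs = relu_mlp((c * W1, c * W2), x)       # f(c * theta, x)
rhs = c ** 2 * relu_mlp((W1, W2), x)      # c^alpha * f(theta, x), alpha = 2
assert np.allclose(lhs, rhs)
```

Adding bias terms or using a non-homogeneous activation (e.g., sigmoid) breaks this identity, which is exactly why C2 disallows them.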
Therefore, the decision boundaries may share certain characteristics with the support vector machine (SVM), since they rely on the same support vectors. As a matter of fact, current understandings of the implicit bias of gradient descent are mostly established through the connection with the hard-margin SVM: min_{θ∈R^d} ‖θ‖_2 s.t. y_i f(θ, x_i) ≥ 1 for all i = 1, 2, …, n, (1) whose optimization path coincides with that of the max-margin problem max_{‖θ‖_2≤1} min_{i=1,…,n} y_i f(θ, x_i), as shown by Nacson et al. (2019). Define γ(θ) := min_i y_i f(θ, x_i). We use θ* to denote the optimal solution and γ* = γ(θ*) := min_i y_i f(θ*, x_i) to denote the corresponding margin. Implicit bias of gradient descent. We start by considering the weight-agnostic setting. When D is linearly separable, it is reasonable to conjecture that the separating hyperplane under a linear f(θ, ·) overlaps with the solution of the hard-margin SVM. Soudry et al. (2018) and Ji & Telgarsky (2018b) first showed that θ^(t) converges in direction to θ*, i.e., lim_{t→∞} θ^(t)/‖θ^(t)‖_2 = θ*. For nonlinear predictors, however, the parameter direction is less meaningful. Instead, it has been pointed out that neural networks often achieve perfect separation of the training data (Zhang et al., 2016). Therefore, we are more interested in the margin, whose pivotal role in the generalization of neural networks has been studied extensively (Neyshabur et al., 2017; Bartlett et al., 2017; Golowich et al., 2018). Specifically, it has been shown by Nacson et al. (2019) and Lyu & Li (2019) that the normalized margin, defined by γ̃(θ^(t)) := γ(θ^(t)/‖θ^(t)‖_2), converges to the maximum margin γ* without regularization. It becomes clear at this point that to understand the role of importance weighting for deep learning, we must characterize the impact of the weights on the implicit bias, since it reveals the optimization geometry and the generalization performance.
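The convergence of the normalized margin can be observed directly on a toy problem: running gradient descent with the exponential loss on linearly separable 2D data and tracking γ̃(θ) = min_i y_i ⟨θ/‖θ‖_2, x_i⟩. The dataset and hyperparameters below are illustrative and not from the paper.

```python
import numpy as np

# Linearly separable toy data; the max-margin direction is (1, 1)/sqrt(2).
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def normalized_margin(theta):
    return np.min(y * (X @ theta)) / np.linalg.norm(theta)

theta = np.array([0.5, -0.1])            # arbitrary initialization
margins_over_time = []
for t in range(2000):
    m = y * (X @ theta)                   # margins y_i * <theta, x_i>
    grad = -(np.exp(-m) * y) @ X / len(y) # gradient of (1/n) sum_i exp(-m_i)
    theta -= 0.1 * grad
    if t % 500 == 0:
        margins_over_time.append(normalized_margin(theta))

# The normalized margin grows toward its maximum, illustrating the
# implicit-bias results of Soudry et al. (2018) and follow-up work.
assert margins_over_time[-1] > margins_over_time[0]
```

The convergence of γ̃(θ^(t)) toward γ* is known to be slow (the parameter norm only grows logarithmically), which is why the margin keeps creeping upward rather than jumping to its limit.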
Formally, we address the following critical questions. • Q1. Does importance weighting modify the convergence results (convergence in direction for linear predictors and in normalized margin for nonlinear predictors)? • If the convergence results remain unchanged, then: – Q2. In what way does importance weighting affect the optimization process? – Q3. How does importance weighting influence the generalization from the source distribution to the target distribution?
It is now well understood that when the data are linearly separable, gradient descent over the linear class of functions converges toward the hard-margin solution. This highlights the implicit bias of gradient descent: among all solutions interpolating the dataset, gradient descent selects the one with the largest margin, partly explaining why over-parameterized models may generalize. The picture for non-linear classes is a bit more complicated.
Understanding the role of importance weighting for deep learning
1 INTRODUCTION . Importance weighting is a standard tool for estimating a quantity under a target distribution while only samples from some source distribution are accessible . It has been drawing extensive attention in the communities of statistics and machine learning . Causal inference for deep learning relies heavily on the propensity score weighting method , which applies off-policy optimization with counterfactual estimators ( Gilotte et al. , 2018 ; Jiang & Li , 2016 ) , modelling with observational feedback ( Schnabel et al. , 2016 ; Xu et al. , 2020 ) and learning from controlled intervention ( Swaminathan & Joachims , 2015 ) . Importance weighting methods are also applied to characterize distribution shifts for deep learning models ( Fang et al. , 2020 ) , with modern applications such as domain adaptation ( Azizzadenesheli et al. , 2019 ; Lipton et al. , 2018 ) and learning from noisy labels ( Song et al. , 2020 ) . Other usages include curriculum learning ( Bengio et al. , 2009 ) and knowledge distillation ( Hinton et al. , 2015 ) , where the weights characterize the model confidence on each sample . To reduce the discrepancy between the source and target distributions for model training , a standard routine is to minimize a weighted risk ( Rubinstein & Kroese , 2016 ) . Many techniques have been developed to this end , and a common strategy is re-weighting the classes proportionally to the inverse of their frequencies ( Huang et al. , 2016 ; 2019 ; Wang et al. , 2017 ) . For example , Cui et al . ( 2019 ) proposes re-weighting by the inverse of the effective number of samples . The focal loss ( Lin et al. , 2017 ) down-weights the well-classified examples , and the work by Li et al . ( 2019 ) suggests an improved technique which down-weights examples based on the magnitude of the gradients .
∗The work was done when the author was with Walmart Labs .
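The inverse-frequency re-weighting strategy mentioned above can be sketched in a few lines. This is an illustrative snippet (the function name and the normalization to unit mean are our own choices, not code from any of the cited works):

```python
import numpy as np

def inverse_frequency_weights(y):
    """Weight each sample by the inverse of its class frequency,
    normalized so that the weights average to one."""
    classes, counts = np.unique(y, return_counts=True)
    freq = dict(zip(classes, counts / len(y)))
    w = np.array([1.0 / freq[label] for label in y])
    return w / w.mean()

# imbalanced toy labels: four positives, one negative
y = np.array([1, 1, 1, 1, -1])
w = inverse_frequency_weights(y)
# the single minority sample receives a larger weight than each majority sample
```

The effective-number weighting of Cui et al. (2019) and the focal loss replace the raw inverse frequency with smoother functions of the class counts or the per-sample loss, but they enter the weighted risk in the same way.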
Despite the empirical successes of various re-weighting methods , it remains unclear , from a theoretical standpoint , how importance weighting exerts its influence . The recent study of Byrd & Lipton ( 2019 ) observes from experiments that importance weights have little impact on the converged deep neural network if the data can be separated by the model using gradient descent . They connect this phenomenon to the implicit bias of gradient descent ( Soudry et al. , 2018 ) , a novel topic that studies why over-parameterized models trained on separable data are biased toward solutions that generalize well . The implicit bias of gradient descent has been observed and studied for linear models ( Soudry et al. , 2018 ; Ji & Telgarsky , 2018b ) , linear neural networks ( Ji & Telgarsky , 2018a ; Gunasekar et al. , 2018 ) , two-layer neural networks with homogeneous activations ( Chizat & Bach , 2020 ) and smooth neural networks ( Nacson et al. , 2019 ; Lyu & Li , 2019 ) . To summarize , these works reveal that the direction of the parameters ( for linear predictors ) and the normalized margin ( for nonlinear predictors ) , regardless of the initialization , respectively converge to those of a max-margin solution . The pivotal role of the margin for deep learning models has been explored actively throughout the long journey of understanding the generalization of over-parameterized neural networks ( Bartlett et al. , 2017 ; Golowich et al. , 2018 ; Neyshabur et al. , 2018 ) . For instance , Wei et al . ( 2019 ) studies the margin of neural networks for separable data under weak regularization . They show that the normalized margin also converges to the max-margin solution , and provide a generalization bound for a neural network that hinges on its margin .
Although there are rich understandings of the implicit bias of gradient descent and margin-based generalization , very few efforts are dedicated to studying how they adjust to the weighted empirical risk minimization ( ERM ) setting . The established results do not directly transfer , since importance weighting can change both the optimization geometry and how the generalization is measured . In this paper , we fill this gap by showing the impact of importance weighting on the implicit bias of gradient descent as well as on the generalization performance . By studying the optimization dynamics of linear models , we first reveal the effect of importance weighting on the convergence speed under linearly separable data . When the data is not linearly separable , we characterize the unique role of importance weighting in defining the intercept term upon the implicit bias . We then investigate the non-linear neural network under a weak regularization , as in Wei et al . ( 2019 ) . We provide a novel generalization bound that reflects how importance weighting leads to an interplay between the empirical risk and a compounding term that consists of the model complexity as well as the deviation between the source and target distributions . Based on our theoretical results , we discuss several exploratory developments on importance weighting that are worthy of further investigation . • A good set of weights for learning can be inversely proportional to the hard-to-classify extent . For example , a sample that is close to ( far from ) the oracle decision boundary should have a large ( small ) weight . • If the importance weights are jointly trained according to a weighting model , the impact of the weighting model eventually diminishes after showing a strong correlation with the hard-to-classify extent , such as the margin .
• The usefulness of explicit regularization on weighted ERM can be studied , via its impact on the margin , in balancing the empirical loss and the distribution divergence . In summary , our contributions are threefold . • We characterize the impact of importance weighting on the implicit bias of gradient descent . • We find a generalization bound that hinges on the importance weights . For finite-step training , the role of importance weighting on the generalization bound is reflected in how the margin is affected , and how it balances the source and target distributions . • We propose several exploratory topics for importance weighting that are worth further investigation from both the application and theoretical perspectives . The rest of the paper is organized as follows . In Section 2 , we introduce the background , preliminary results and the experimental setup . In Sections 3 and 4 , we demonstrate the influence of importance weighting for linear and non-linear models in terms of the implicit bias of gradient descent and the generalization performance . We then discuss the extended investigations in Section 5 . 2 PRELIMINARIES . We use bold-font letters for vectors and matrices , uppercase letters for random variables and distributions , and $\|\cdot\|$ to denote the $\ell_2$ norm when no confusion arises . We denote the training data by $\mathcal{D} = \{ ( w_i , x_i , y_i ) \}_{i=1}^n$ , where $x_i \in \mathcal{X}$ denotes the features , $y_i$ is binary or categorical , and the importance weight is bounded such that $w_i \in [ 1/M , M ]$ for some $M > 1$ . We mention that importance weights are often defined with respect to the source distribution $P_s$ , from which the training data is drawn , and the target distribution $P_t$ . We do not make this assumption here , because importance weighting is often applied for more general purposes ; therefore , $w_i$ can be defined arbitrarily . We use $f ( \theta , x )$ to denote the predictor and define $\mathcal{F} = \{ f ( \theta , \cdot ) \mid \theta \in \Theta \subset \mathbb{R}^d \}$ .
For the sake of notation , we focus on the binary setting : $y_i \in \{ -1 , +1 \}$ with $f ( \theta , x ) \in \mathbb{R}$ . However , it will become clear later that our results can be easily extended to the multi-class setting . Consider the weighted empirical risk minimization ( ERM ) task with the risk given by $L ( \theta ; \mathbf{w} ) = \frac{1}{n} \sum_{i=1}^n w_i \ell ( y_i f ( \theta , x_i ) )$ for some non-negative loss function $\ell ( \cdot )$ . The weight-agnostic counterpart is denoted by $L ( \theta ) = \frac{1}{n} \sum_{i=1}^n \ell ( y_i f ( \theta , x_i ) )$ . We focus particularly on the exponential loss $\ell ( u ) = \exp ( -u )$ and the log loss $\ell ( u ) = \log ( 1 + \exp ( -u ) )$ . For the multi-class problem where $y_i \in [ k ]$ , we extend our setup using the softmax function , where the logits are now given by $\{ f_j ( \theta , x ) \}_{j=1}^k$ . For optimization , we consider using gradient descent to minimize the total loss : $\theta^{ ( t+1 ) } ( \mathbf{w} ) = \theta^{ ( t ) } ( \mathbf{w} ) - \eta_t \nabla L ( \theta ; \mathbf{w} ) \big|_{ \theta = \theta^{ ( t ) } ( \mathbf{w} ) }$ , where the learning rate $\eta_t$ can be constant or step-dependent . From parameter norm divergence to support vectors . Suppose $\mathcal{D}$ is separated by $f ( \theta^{ ( t ) } , x )$ after some point during training . The key factor that contributes to the implicit bias for both linear and non-linear predictors under a weak regularization¹ is that the norm of the parameters diverges after separation , i.e . $\lim_{t \to \infty} \| \theta^{ ( t ) } \|_2 = \infty$ , as a consequence of using gradient descent . Now we examine $\| \theta^{ ( t ) } ( \mathbf{w} ) \|_2$ . The heuristic is that if $\ell ( \cdot )$ is exponential-like , multiplying by $w_i$ only changes its tail property up to a constant , while the asymptotic behavior is not affected . In particular , the necessary conditions for norm divergence under gradient descent can be summarized by : • C1 . The loss function $\ell ( \cdot )$ has an exponential tail behavior ( that we formalize in Appendix A.1 ) such that $\lim_{u \to \infty} \ell ( u ) = \lim_{u \to \infty} \nabla \ell ( u ) = 0$ ; • C2 . The predictor $f ( \theta , x )$ is $\alpha$-homogeneous , such that $f ( c \cdot \theta , x ) = c^{\alpha} f ( \theta , x )$ for all $c > 0$ .
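The weighted gradient-descent dynamics above can be sketched numerically for a linear predictor $f(\theta, x) = \langle \theta, x \rangle$ with the exponential loss. The toy data, weights and hyperparameters below are our own assumptions for illustration:

```python
import numpy as np

def weighted_gd(X, y, w, lr=0.1, steps=2000):
    """Gradient descent on L(theta; w) = (1/n) * sum_i w_i * exp(-y_i * <theta, x_i>)."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ theta)
        grad = -(w * np.exp(-margins) * y) @ X / n  # gradient of the weighted risk
        theta -= lr * grad
    return theta

# linearly separable toy data
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
theta_unif = weighted_gd(X, y, np.ones(4))               # weight-agnostic ERM
theta_skew = weighted_gd(X, y, np.array([4.0, 1.0, 1.0, 1.0]))  # weighted ERM
# both weightings separate the data, and on separable data the parameter
# norm keeps growing with more steps, consistent with the divergence claim
```

Running the same loop for more steps only increases $\|\theta\|$, which is the norm-divergence behavior the conditions C1 and C2 are meant to guarantee.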
In addition , we need certain regularities from $f ( \theta , x )$ to ensure the existence of critical points and the convergence of gradient descent : • C3 . For any $x \in \mathcal{X}$ , $f ( \cdot , x )$ is $\beta$-smooth and $l$-Lipschitz on $\mathbb{R}^d$ . C1 can be satisfied by the exponential loss , log loss and cross-entropy loss under the multi-class setting . For standard deep learning models such as the multilayer perceptron ( MLP ) , C2 implies that the activation functions are homogeneous , such as ReLU and LeakyReLU , and that bias terms are disallowed . C3 is a common technical assumption whose practical implications are discussed in Appendix A.1 . Among the three necessary conditions , importance weighting only affects C1 up to a constant , so its impact on the norm divergence diminishes in the asymptotic regime . The formal statement is provided below . Claim 1 . There exists a constant learning rate for gradient descent such that for any $\mathbf{w} \in [ 1/M , M ]^n$ , with a weak regularization , $\lim_{t \to \infty} \| \theta^{ ( t ) } ( \mathbf{w} ) \| = \infty$ under C1-C3 . Compared with previous work , we extend the norm divergence result not only to weighted ERM but to a more general setting where a weak regularization is considered . We defer the proof to Appendix A.1 . A direct consequence of parameter norm divergence is that both the risk and the gradient are dominated by the terms with the smallest margin , i.e . $\arg\min_i y_i f ( \theta , x_i )$ , which are also referred to as the `` support vectors '' . To see this , notice that both the risk and the gradient have the form $\sum_i C_i \exp ( - y_i f ( \theta , x_i ) )$ , where the $C_i$ are low-order terms . Since $f ( \theta , x_i ) = \| \theta \|_2^{\alpha} f ( \theta / \| \theta \|_2 , x_i )$ due to the homogeneity assumption in C2 , it holds that $\lim_{t \to \infty} \exp ( - y_i f ( \theta^{ ( t ) } ( \mathbf{w} ) , x_i ) ) = 0$ , and the terms with the smallest normalized margin vanish the slowest and thus dominate the sum .
¹The regularized loss is given by $L_{\lambda} ( \theta ; \mathbf{w} ) = L ( \theta ; \mathbf{w} ) + \lambda \| \theta \|^r$ for a fixed $r > 0$ . Weak regularization refers to the case where $\lambda \to 0$ .
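The support-vector domination argument can be checked numerically. The margins below are made-up numbers for illustration only:

```python
import numpy as np

# normalized margins y_i * f(theta/||theta||, x_i); the first entry is the
# smallest one, i.e. it plays the role of the "support vector"
margins = np.array([1.0, 1.5, 3.0])

def support_share(c, margins):
    """Fraction of the total loss sum_i exp(-c * m_i) carried by the
    smallest-margin term, at parameter scale c = ||theta||^alpha."""
    terms = np.exp(-c * margins)
    return float(terms[margins.argmin()] / terms.sum())

shares = [support_share(c, margins) for c in (1.0, 5.0, 20.0)]
# the share tends to 1 as the norm grows: the risk and the gradient are
# asymptotically dominated by the support vectors
```

Because every term decays like $\exp(-c\, m_i)$, the ratio of any non-support term to the support term is $\exp(-c\,(m_i - m_{\min})) \to 0$ as $c$ grows.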
Therefore , the decision boundaries may share certain characteristics with the support vector machine ( SVM ) , since they rely on the same support vectors . As a matter of fact , the current understanding of the implicit bias of gradient descent is mostly established through the connection with the hard-margin SVM : $\min_{\theta \in \mathbb{R}^d} \| \theta \|_2 \ \text{s.t.}\ y_i f ( \theta , x_i ) \geq 1 \ \forall i = 1 , 2 , \ldots , n$ , ( 1 ) whose optimization path coincides with that of the max-margin problem $\max_{\| \theta \|_2 \leq 1} \min_{i=1 , \ldots , n} y_i f ( \theta , x_i )$ , as shown by Nacson et al . ( 2019 ) . Define $\gamma ( \theta ) := \min_i y_i f ( \theta , x_i )$ . We use $\theta^*$ to denote the optimal solution and $\gamma^* = \gamma ( \theta^* ) := \min_i y_i f ( \theta^* , x_i )$ to denote the corresponding margin . Implicit bias of gradient descent . We start by considering the weight-agnostic setting . When $\mathcal{D}$ is linearly separable , it is reasonable to conjecture that the separating hyperplane under a linear $f ( \theta , \cdot )$ overlaps with the solution of the hard-margin SVM . Soudry et al . ( 2018 ) and Ji & Telgarsky ( 2018b ) first show that $\theta^{ ( t ) }$ converges in direction to $\theta^*$ , i.e . $\lim_{t \to \infty} \theta^{ ( t ) } / \| \theta^{ ( t ) } \|_2 = \theta^*$ . For nonlinear predictors , however , the parameter direction is less meaningful . Instead , it has been pointed out that neural networks often achieve perfect separation of the training data ( Zhang et al. , 2016 ) . Therefore , we are more interested in the margin , whose pivotal role in the generalization of neural networks has been studied extensively ( Neyshabur et al. , 2017 ; Bartlett et al. , 2017 ; Golowich et al. , 2018 ) . Specifically , it has been shown in Nacson et al . ( 2019 ) and Lyu & Li ( 2019 ) that the normalized margin , defined by $\tilde{\gamma} ( \theta^{ ( t ) } ) := \gamma ( \theta^{ ( t ) } / \| \theta^{ ( t ) } \|_2 )$ , converges to the maximum margin $\gamma^*$ without regularization . It becomes clear at this point that to understand the role of importance weighting for deep learning , we must characterize the impact of the weights on the implicit bias , since the implicit bias reveals the optimization geometry and the generalization performance .
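The convergence of the normalized margin can be watched on a toy linear problem (our own example, not an experiment from the paper). For the three points below, solving problem (1) by hand gives the max-margin direction $(0.8, 0.6)$ with maximum margin $\gamma^* = 1.6$:

```python
import numpy as np

# three linearly separable points; the hand-computed max-margin direction
# is (0.8, 0.6) with maximum margin 1.6 (both supports active)
X = np.array([[2.0, 1.0], [0.5, 2.0], [-2.0, 0.0]])
y = np.array([1.0, 1.0, -1.0])

theta = np.zeros(2)
for _ in range(20000):
    m = y * (X @ theta)
    theta += 0.1 * (np.exp(-m) * y) @ X / len(y)  # gradient step, exp loss

direction = theta / np.linalg.norm(theta)
normalized_margin = float((y * (X @ direction)).min())  # gamma-tilde(theta)
# the normalized margin approaches the maximum margin 1.6 from below,
# and the direction approaches (0.8, 0.6)
```

The approach is slow (logarithmic in $t$), which is why so many iterations are used even on three data points.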
Formally , we address the following critical questions .
• Q1 . Does importance weighting modify the convergence results ( convergence in direction for linear predictors and in normalized margin for nonlinear predictors ) ?
• If the convergence results remain unchanged , then :
– Q2 . In what way is importance weighting affecting the optimization process ?
– Q3 . How does importance weighting influence the generalization from the source distribution to the target distribution ?
This paper studies the inductive bias of gradient descent (GD) on smooth non-linear models when optimizing a weighted ERM. The authors provide several novel results for the linear and non-linear model cases. For linear models and linearly separable data, they show that GD converges to the hard-margin SVM solution and the convergence rate upper bound is lower for weighted ERMs that have higher weight on low margin points. They further characterize the inductive bias for non-linearly separable data, on a unique non-linearly separable subspace defined by Ji and Telgarsky (2018). For nonlinear models they consider a weak regularization setup. They show that asymptotically GD converges to a max margin predictor, which is similar to the non-weighted ERM case. They prove a generalization bound for weighted ERM and together with experiments provide insights on the generalization performance of GD in this case.
Provable More Data Hurt in High Dimensional Least Squares Estimator
1 INTRODUCTION . More data hurt refers to the phenomenon that training on more data can hurt the prediction performance of the learned model , especially for some deep learning tasks . Loog et al . ( 2019 ) shows that various standard learners can exhibit sample-wise non-monotonicity . Nakkiran et al . ( 2019 ) experimentally confirms the sample-wise non-monotonicity of the test accuracy of deep neural networks . This challenges the conventional understanding of large-sample properties : if an estimator is consistent , more data makes the estimator more stable and improves its finite-sample performance . Nakkiran ( 2019 ) considers adding a single data point to a linear regression task and analyzes its marginal effect on the test risk . Dereziński et al . ( 2019 ) gives an exact non-asymptotic risk of the high-dimensional least squares estimator and observes the sample-wise non-monotonicity of the mean squared error . For adversarially robust models , Min et al . ( 2020 ) proves that more data may increase the gap between the generalization error of adversarially trained models and standard models . Chen et al . ( 2020 ) shows that more training data causes the generalization error to increase in the strong-adversary regime . In this work , we derive the finite-sample distribution of the prediction risk under linear models and prove the “ more data hurt ” phenomenon from an asymptotic point of view . Intuitively , “ more data hurt ” stems from the “ double descent ” risk curve : as the model complexity increases , the prediction risk of the learned model first decreases , then increases , and then decreases again . The double descent phenomenon can be precisely quantified for certain simple models ( Hastie et al . ( 2019 ) ; Mei & Montanari ( 2019 ) ; Ba et al . ( 2019 ) ; Belkin et al . ( 2019 ) ; Bartlett et al . ( 2020 ) ; Xing et al . ( 2019 ) ) . Among these works , Hastie et al .
( 2019 ) and Mei & Montanari ( 2019 ) use tools from random matrix theory to explicitly derive the double descent curve of the asymptotic risk of linear regression and random features regression in the high-dimensional setup . Ba et al . ( 2019 ) gives the asymptotic risk of two-layer neural networks when either the first or the second layer is trained using a gradient flow . The second decline of the prediction risk in the double descent curve is closely related to the more data hurt phenomenon . In the over-parameterized regime , when the model complexity is fixed while the sample size increases , the degree of over-parameterization decreases and approaches the interpolation boundary ( for example p/n = 1 in Hastie et al . ( 2019 ) ) , where a high prediction risk is attained . However , the existing asymptotic results , which focus on the first-order limit of the prediction risk , cannot fully describe the more data hurt phenomenon . In fact , the “ double descent ” curve is a function of the limiting ratio lim p/n , which may not be able to characterize the empirical prediction risk in finite-sample situations . There will be a non-negligible discrepancy between the empirical prediction risk and its limit , especially when the sample size or dimension is small . Fine-grained second-order results are thus needed to fully characterize this discrepancy ; furthermore , a confidence band for the prediction risk can be constructed to evaluate its finite-sample performance . We take Figure 1 as an example to illustrate this . According to the first-order limit , given a fixed dimension p = 100 , the prediction risks at sample sizes n = 90 and n = 98 are about 10.20 and 49.02 . More data hurt seems true . However , the 95 % confidence interval of the prediction risk at sample size 98 is [ 4.91 , 142.12 ] , which contains the risk for n = 90 . Hence more data hurt is not statistically significant .
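The statistical-significance point can be reproduced qualitatively in a small simulation (our own toy setup with p = 40 and isotropic Gaussian features, not the Figure 1 configuration): near the interpolation boundary, the finite-sample conditional risk of the minimum-norm least squares estimator is extremely spread out, so comparing point estimates of the risk can be misleading.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_risks(n, p, sigma=1.0, reps=300):
    """Per-draw conditional prediction risk ||beta_hat - beta||^2 + sigma^2
    of the min-norm least squares estimator, isotropic Gaussian features."""
    beta = np.ones(p) / np.sqrt(p)                 # ||beta|| = 1
    out = []
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        yv = X @ beta + sigma * rng.standard_normal(n)
        beta_hat = np.linalg.pinv(X) @ yv          # min-norm least squares
        out.append(float(np.sum((beta_hat - beta) ** 2) + sigma**2))
    return np.array(out)

p = 40
far = conditional_risks(20, p)    # well inside the over-parameterized regime
near = conditional_risks(36, p)   # close to the boundary n = p
width_far = np.quantile(far, 0.975) - np.quantile(far, 0.025)
width_near = np.quantile(near, 0.975) - np.quantile(near, 0.025)
# the empirical 95% interval is far wider near the boundary
```

The widening interval near n = p is exactly why second-order results, and the confidence bands they yield, are needed on top of the first-order limit.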
Hence , in this work , we characterize the second-order fluctuations of the prediction risk and attempt to fill this gap . We adopt the linear regression task of Hastie et al . ( 2019 ) and Nakkiran ( 2019 ) , and introduce new tools from random matrix theory , e.g . the central limit theorems for linear spectral statistics in Bai & Silverstein ( 2004 ) ; Bai et al . ( 2007 ) , to derive the central limit theorem of the prediction risk . For a linear regression task with n data points and p features , the setup of more data hurt is similar to that in the classical asymptotic analysis of Van der Vaart ( 2000 ) . According to the classical asymptotic analysis with p fixed and n → ∞ , the least squares estimator is unbiased and √n-consistent for the ground truth . This implies that more data will not hurt and will even improve the prediction performance . However , the story is very different in the over-parameterized regime . The prediction risk does not decrease monotonically with n when p > n : more data does hurt in the over-parameterized case . In the following , we justify this phenomenon by developing second-order asymptotic results as both n and p tend to infinity . We assume p/n → c , and denote 0 < n1 < n2 < +∞ , c1 = p/n1 and c2 = p/n2 . Then the direct comparison of the prediction risk between sample sizes n1 and n2 can be decomposed into three parts : ( i ) the gap between the finite-sample risk under n = n1 and the asymptotic risk with c = c1 ; ( ii ) the gap between the finite-sample risk under n = n2 and the asymptotic risk with c = c2 ; ( iii ) the comparison between the two asymptotic risks under c = c1 and c = c2 . Theorems 1 and 2 of Hastie et al . ( 2019 ) answer task ( iii ) . For ( i ) and ( ii ) , we develop in this paper the convergence rate and the limiting distribution of the prediction risk as n , p → +∞ , p/n → c. Furthermore , the confidence interval of the finite-sample risk can be obtained as well .
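A quick Monte Carlo check of the sample-wise behavior described above (again an illustrative setup of our own, with isotropic Gaussian features and ||β|| = 1): for fixed p, the average risk of the minimum-norm least squares estimator rises as n approaches p from below, which is the "more data hurt" regime.

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_risk(n, p, sigma=1.0, reps=300):
    """Monte Carlo average of ||beta_hat - beta||^2 + sigma^2 for the
    min-norm least squares estimator beta_hat = X^+ y."""
    beta = np.ones(p) / np.sqrt(p)
    total = 0.0
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        yv = X @ beta + sigma * rng.standard_normal(n)
        beta_hat = np.linalg.pinv(X) @ yv
        total += float(np.sum((beta_hat - beta) ** 2) + sigma**2)
    return total / reps

p = 40
risks = {n: avg_risk(n, p) for n in (15, 20, 30, 36)}
# for n < p, the average risk grows as n moves toward the boundary n = p,
# so an n-grid straddling the boundary exhibits sample-wise non-monotonicity
```

Comparing, say, n = 20 against n = 36 at fixed p = 40 reproduces the direction of the effect; whether any single comparison is statistically significant is exactly what the confidence intervals developed in the paper are for.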
The main goal of this paper is to study the second-order asymptotic behavior of two different types of conditional prediction risk in the linear regression model . One is $R_{X , \beta} ( \hat{\beta} , \beta )$ , conditional on both the training data and the regression coefficient , while the other is $R_X ( \hat{\beta} , \beta )$ , conditional on the training data only . We summarize our main results as follows : ( 1 ) The regression coefficient is set to be either random or nonrandom to cover more cases . Different convergence rates and limiting distributions of both prediction risks are derived under various scenarios . ( 2 ) In particular , the finite-sample distribution of the conditional prediction risk given both the training data and the regression coefficient is derived , and the sample-wise double descent is characterized in Theorems 4.2 and 4.5 ( see Figure 1 ) . Under certain assumptions , the more data hurt phenomenon can be confirmed by comparing the confidence intervals built via the central limit theorems . ( 3 ) Our results incorporate non-Gaussian observations . For Gaussian data , the limiting mean and variance in the central limit theorems have simpler forms ; see Sections 4.2 and 4.3 for more details . The rest of this paper is organized as follows . Section 3 introduces the model settings and the two different prediction risks . Section 4 presents the main results on CLTs for the two types of risk , with discussion . Section 5 conducts simulation experiments to verify the main results . All technical proofs and lemmas are relegated to the appendix in the supplementary file . 2 RELATED WORK . Double Descent The double descent curve describes how generalization ability changes as model capacity increases . It subsumes the classical bias-variance trade-off , a U-shaped curve , and further shows that the test error exhibits a second drop when the model capacity exceeds the interpolation threshold ( Belkin et al . ( 2018 ) ; Geiger et al . ( 2019 ) ; Spigler et al . ( 2019 ) ; Advani & Saxe ( 2017 ) ) .
The double descent phenomenon has been quantified for certain models , including two-layer neural networks , via non-asymptotic bounds or asymptotic risk ( Belkin et al . ( 2019 ) ; Muthukumar et al . ( 2020 ) ; Hastie et al . ( 2019 ) ; Mei & Montanari ( 2019 ) ; Ba et al . ( 2019 ) ) . As our results are based on linear regression , we mainly focus on the literature on linear models . Muthukumar et al . ( 2020 ) and Bartlett et al . ( 2020 ) derive generalization bounds for over-parametrized linear models and show the benefits of interpolation . Hastie et al . ( 2019 ) gives the first-order limit of the generalization error for linear regression as n , p → +∞ . Dereziński et al . ( 2019 ) provides an exact non-asymptotic expression for the double descent of the high-dimensional least squares estimator . Wu & Xu ( 2020 ) extends the first-order limit of the prediction error of the generalized weighted ridge estimator to a more general case with anisotropic features and signals . Montanari et al . ( 2019 ) , Deng et al . ( 2019 ) and Kini & Thrampoulidis ( 2020 ) investigate the sharp asymptotics of binary classification tasks with the max-margin solution and the maximum likelihood solution . Emami et al . ( 2020 ) and Gerbelot et al . ( 2020a ) consider double descent in generalized linear models . Furthermore , the double descent phenomenon has also been observed in linear tasks under various problems and assumptions , e.g . LeJeune et al . ( 2020 ) ; Gerbelot et al . ( 2020b ) ; Javanmard et al . ( 2020 ) ; Dar & Baraniuk ( 2020 ) ; Xu & Hsu ( 2019 ) ; Dar et al . ( 2020 ) . Xing et al . ( 2019 ) sharply quantifies the benefit of interpolation in the nearest neighbors algorithm . Mei & Montanari ( 2019 ) derives the limiting risk of the random features model and shows that the minimum generalization error is achieved by highly over-parametrized interpolators . Ba et al . ( 2019 ) gives the limiting risk of the regression problem under two-layer neural networks .
However , the existing asymptotic results focus on the first-order limit of the prediction risk and do not indicate the convergence rate . There are very few second-order results in the literature : Shen & Bellec ( 2020 ) establishes the asymptotic normality of the derivatives of a two-layer neural network , but not the exact limiting distribution of the risk . In this work , we are the first to develop results on the second-order fluctuations of the prediction risk in linear regression and to provide the corresponding confidence intervals . The more data hurt phenomenon is further justified from the asymptotic point of view . Random Matrix Theory The primary tool for analyzing the second-order fluctuations of the prediction risk comes from random matrix theory . In particular , Bai & Silverstein ( 2004 ) refines the central limit theorem for linear spectral statistics of large-dimensional sample covariance matrices with a general population , which need not be Gaussian . Similar central limit theorems have also been developed for other random matrix ensembles ; see Sinai & Soshnikov ( 1998 ) ; Bai & Yao ( 2005 ) ; Zheng ( 2012 ) . Beyond the central limit theorem for linear spectral statistics , Bai et al . ( 2007 ) and Pan & Zhou ( 2008 ) study the asymptotic fluctuation of eigenvectors of sample covariance matrices . Bai & Yao ( 2008 ) considers the fluctuation of quadratic forms . All these technical tools and results are adopted and fully utilized in this paper , especially those related to the Stieltjes transform , which is closely connected to the prediction risk studied in this paper .
In this article, the authors characterized the second-order fluctuation of the prediction risk of the (min-norm) least square estimator, by assuming an underlying noisy teacher model $y_i = \beta^T x_i + \epsilon_i$, in the regime where the data dimension $p$ and the number of training samples $n$ grow large at the same pace. Results in both the under-parameterized (Theorem 4.1 and 4.2) and over-parameterized regimes (Theorem 4.3-4.5) were provided, under the statistical model where the data $x_i$ are zero-mean random vectors with **generic** i.i.d. entries and then "rotated" to have some possible covariance structures. Numerical experiments for relatively small values of $n,p$ were conducted to support the theoretical assessment.
SP:c57d966aea7e3a714f81845afed92f4ffa730626
Provable More Data Hurt in High Dimensional Least Squares Estimator
1 INTRODUCTION . More data hurt refers to the phenomenon that training on more data can hurt the prediction performance of the learned model , especially for some deep learning tasks . Loog et al . ( 2019 ) shows that various standard learners can lead to sample-wise non-monotonicity . Nakkiran et al . ( 2019 ) experimentally confirms the sample-wise non-monotonicity of the test accuracy on deep neural networks . This challenges the conventional understanding in large sample properties : if an estimator is consistent , more data makes the estimator more stable and improves its finite-sample performance . Nakkiran ( 2019 ) considers adding one single data point to a linear regression task and analyzes its marginal effect to the test risk . Dereziński et al . ( 2019 ) gives an exact non-asymptotic risk of the high-dimensional least squares estimator and observes the sample-wise non-monotonicity on mean square error . For adversarially robust models , Min et al . ( 2020 ) proves that more data may increase the gap between the generalization error of adversarially-trained models and standard models . Chen et al . ( 2020 ) shows that more training data causes the generalization error to increase in the strong adversary regime . In this work , we derive the finite-sample distribution of the prediction risk under linear models and prove the “ more data hurt ” phenomenon from an asymptotic point of view . Intuitively , the “ more data hurt ” stems from the “ double descent ” risk curve : as the model complexity increases , the prediction risk of the learned model first decreases and then increases , and then decreases again . The double descent phenomenon can be precisely quantified for certain simple models ( Hastie et al . ( 2019 ) ; Mei & Montanari ( 2019 ) ; Ba et al . ( 2019 ) ; Belkin et al . ( 2019 ) ; Bartlett et al . ( 2020 ) ; Xing et al . ( 2019 ) ) . Among these works , Hastie et al . 
( 2019 ) and Mei & Montanari ( 2019 ) use the tools from random matrix theory and explicitly prove the double descent curve of the asymptotic risk of linear regression and random features regression in high dimensional setup . Ba et al . ( 2019 ) gives the asymptotic risk of two-layer neural networks when either the first or the second layer is trained using a gradient flow . The second decline of the prediction risk in the double descent curve is highly related to the more data hurt phenomenon . In the over-parameterized regime when the model complexity is fixed while the sample size increases , the degree of over-parameterization decreases and becomes close to the interpolation boundary ( for example p/n = 1 in Hastie et al . ( 2019 ) ) , in which a high prediction risk is achieved . However , the existing asymptotic results , which focus on the first-order limit of the prediction risk , can not fully describe the more data hurt phenomenon . In fact , the “ double descent ” curve is a function of the limiting ratio lim p/n , which may not be able to characterize the empirical prediction risk in finite sample situations . There will be a non-negligible discrepancy between the empirical prediction risk and its limit , especially when the sample size or dimension is small . Finegrained second-order results are thus needed to fully characterize such discrepancy and further , a confidence band for the prediction risk can be constructed to evaluate its finite sample performance . We take Figure 1 as an example to illustrate this . According to the first-order limit , given a fixed dimension p = 100 , the prediction risks at sample size n = 90 and n = 98 are about 10.20 and 49.02 . More data hurt seems true . However , the 95 % confidence interval of the prediction risks with sample size 98 is [ 4.91 , 142.12 ] , which contains the risk for n = 90 . Then more data hurt is not statistically significant . 
Hence , in this work , we characterize the second-order fluctuations of the prediction risk and make attempts to fill this gap . We employ the linear regression task in Hastie et al . ( 2019 ) and Nakkiran ( 2019 ) , and introduce new tools from the random matrix theory , e.g . the central limit theorems for linear spectral statistics in Bai & Silverstein ( 2004 ) ; Bai et al . ( 2007 ) , to derive the central limit theorem of the prediction risk . Consider a linear regression task with n data points and p features , the setup of the more data hurt is similar to that in the classical asymptotic analysis in Van der Vaart ( 2000 ) . According to the classical asymptotic analysis with p fixed and n → ∞ , the least square estimator is unbiased and√ n-consistent to the ground truth . This implies that the more data will not hurt and even improve the prediction performance . However , the story is very different in the over-parameterized regime . The prediction risk doesn ’ t decrease monotonously with n when p > n. More data does hurt in the over-parameterized case . In the following , we will justify this phenomenon by developing the second-order asymptotic results as both n and p tend to infinity . We assume p/n → c , and denote 0 < n1 < n2 < +∞ , c1 = p/n1 and c2 = p/n2 . Then the direct comparison of the prediction risk between sample sizes n1 and n2 can be decomposed into three parts : ( i ) the gap between the finitesample risk under n = n1 and the asymptotic risk with c = c1 ; ( ii ) the gap between the finite-sample risk under n = n2 and the asymptotic risk with c = c2 ; ( iii ) the comparison between two asymptotic risks under c = c1 and c = c2 . Theorem 1 and 2 of Hastie et al . ( 2019 ) give answers to the task ( iii ) . For ( i ) and ( ii ) , we develop in this paper the convergence rate and the limiting distribution of the prediction risk as n , p → +∞ , p/n → c. Furthermore , the confidence interval of the finite-sample risk can be obtained as well . 
The main goal of this paper is to study the second-order asymptotic behavior of two different types of conditional prediction risk in the linear regression model . One is R_{X,β}( β̂ , β ) , conditional on both the training data and the regression coefficient , while the other is R_X( β̂ , β ) , conditional on the training data only . We summarize our main results as follows : ( 1 ) The regression coefficient is set to be either random or nonrandom to cover more cases . Different convergence rates and limiting distributions of both prediction risks are derived under various scenarios . ( 2 ) In particular , the finite-sample distribution of the conditional prediction risk given both the training data and the regression coefficient is derived , and the sample-wise double descent is characterized in Theorem 4.2 and Theorem 4.5 ( see Figure 1 ) . Under certain assumptions , the more data hurt phenomenon can be confirmed by comparing the confidence intervals built via the central limit theorems . ( 3 ) Our results incorporate non-Gaussian observations . For Gaussian data , the limiting mean and variance in the central limit theorems have simpler forms ; see Sections 4.2 and 4.3 for more details . The rest of this paper is organized as follows . Section 3 introduces the model settings and the two different prediction risks . Section 4 presents the main results on CLTs for the two types of risk with discussion . Section 5 conducts simulation experiments to verify the main results . All the technical proofs and lemmas are relegated to the appendix in the supplementary file . 2 RELATED WORK . Double Descent The double descent curve describes how generalization ability changes as model capacity increases . It subsumes the classical bias-variance trade-off , a U-shaped curve , and further shows that the test error exhibits a second drop when the model capacity exceeds the interpolation threshold ( Belkin et al . ( 2018 ) ; Geiger et al . ( 2019 ) ; Spigler et al . ( 2019 ) ; Advani & Saxe ( 2017 ) ) .
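A plausible formalization of the two conditional risks just introduced, with the expectation taken over a fresh test point $x_0$ (the exact definitions are given in Section 3 of the paper; this rendering is an assumption for the reader's orientation):

```latex
R_{X,\beta}(\hat\beta, \beta)
  = \mathbb{E}_{x_0}\!\left[\bigl(x_0^\top \hat\beta - x_0^\top \beta\bigr)^2 \,\middle|\, X, \beta\right],
\qquad
R_{X}(\hat\beta, \beta)
  = \mathbb{E}_{\beta}\!\left[\, R_{X,\beta}(\hat\beta, \beta) \,\middle|\, X \right]
```

The first conditions on both the design matrix $X$ and the coefficient $\beta$; the second averages out a random $\beta$ and conditions on $X$ alone.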
The double descent phenomenon has been quantified for certain models , including two-layer neural networks , via non-asymptotic bounds or asymptotic risk ( Belkin et al . ( 2019 ) ; Muthukumar et al . ( 2020 ) ; Hastie et al . ( 2019 ) ; Mei & Montanari ( 2019 ) ; Ba et al . ( 2019 ) ) . As our results are based on linear regression , we mainly focus on the literature on linear models . Muthukumar et al . ( 2020 ) and Bartlett et al . ( 2020 ) derive generalization bounds for over-parametrized linear models and show the benefits of interpolation . Hastie et al . ( 2019 ) gives the first-order limit of the generalization error for linear regression as n , p → +∞ . Dereziński et al . ( 2019 ) provides an exact non-asymptotic expression for the double descent of the high-dimensional least squares estimator . Wu & Xu ( 2020 ) extends the first-order limit of the prediction error of the generalized weighted ridge estimator to a more general case with anisotropic features and signals . Montanari et al . ( 2019 ) , Deng et al . ( 2019 ) and Kini & Thrampoulidis ( 2020 ) investigate the sharp asymptotics of binary classification tasks with the max-margin solution and the maximum likelihood solution . Emami et al . ( 2020 ) and Gerbelot et al . ( 2020a ) consider double descent in generalized linear models . Furthermore , the double descent phenomenon has also been observed on linear tasks under various problems and assumptions , e.g . LeJeune et al . ( 2020 ) ; Gerbelot et al . ( 2020b ) ; Javanmard et al . ( 2020 ) ; Dar & Baraniuk ( 2020 ) ; Xu & Hsu ( 2019 ) ; Dar et al . ( 2020 ) . Xing et al . ( 2019 ) sharply quantifies the benefit of interpolation in the nearest neighbors algorithm . Mei & Montanari ( 2019 ) derives the limiting risk of the random features model and shows that the minimum generalization error is achieved by highly overparametrized interpolators . Ba et al . ( 2019 ) gives the limiting risk of the regression problem under two-layer neural networks .
However , the existing asymptotic results focus on the first-order limit of the prediction risk and do not indicate the convergence rate . There are very few second-order results in the literature : Shen & Bellec ( 2020 ) establishes asymptotic normality for the derivatives of two-layer neural networks , but not the exact limiting distribution of the risk . In this work , we are the first to develop results on the second-order fluctuations of the prediction risk in linear regression and to provide the corresponding confidence intervals . The more data hurt phenomenon is thereby further justified from the asymptotic point of view . Random Matrix Theory The primary tool for analyzing the second-order fluctuations of the prediction risk comes from random matrix theory . In particular , Bai & Silverstein ( 2004 ) refines the central limit theorem for linear spectral statistics of large-dimensional sample covariance matrices with a general population , which is not required to be Gaussian . Similar central limit theorems have also been developed for other random matrix ensembles ; see Sinai & Soshnikov ( 1998 ) ; Bai & Yao ( 2005 ) ; Zheng ( 2012 ) . Beyond the central limit theorem for linear spectral statistics , Bai et al . ( 2007 ) and Pan & Zhou ( 2008 ) study the asymptotic fluctuation of eigenvectors of sample covariance matrices , and Bai & Yao ( 2008 ) considers the fluctuation of quadratic forms . All these technical tools and results are adopted and fully utilized in this paper , especially those related to the Stieltjes transform , which is closely connected to the prediction risk studied here .
This paper investigates the phenomenon of double descent, also referred to as "more data hurts", in high-dimensional linear regression with the least squares estimator. In the same setup, sharp results were previously established in the asymptotic regime; non-asymptotic results are also known but are less precise. The authors of this paper try to provide a new type of result that fills the gap between the two regimes (asymptotic vs. non-asymptotic). To do so, they derive second-order (CLT-type) asymptotic results for different risks based on more refined random matrix theory results.
SP:c57d966aea7e3a714f81845afed92f4ffa730626
Reducing Class Collapse in Metric Learning with Easy Positive Sampling
1 INTRODUCTION . Metric learning aims to learn an embedding function into a lower-dimensional space , in which semantic similarity translates to neighborhood relations in the embedding space ( Lowe , 1995 ) . Deep metric learning approaches achieve promising results in a large variety of tasks such as face identification ( Chopra et al. , 2005 ; Taigman et al. , 2014 ; Sun et al. , 2014 ) , zero-shot learning ( Frome et al. , 2013 ) , image retrieval ( Hoffer & Ailon , 2015 ; Gordo et al. , 2016 ) and fine-grained recognition ( Wang et al. , 2014 ) . In this work we investigate the family of losses which optimize for an embedding representation that forces all modes of intra-class appearance variation to project to a single point in embedding space . Learning such an embedding is very challenging when classes have a diverse appearance , especially in real-world scenarios where a class consists of multiple modes with diverse visual appearance . Pushing all these modes to a single point in the embedding space requires the network to memorize the relations between the different class modes , which can reduce the generalization capabilities of the network and result in sub-par performance . Recently , researchers observed that this phenomenon , where all modes of class appearance “ collapse ” to the same center , occurs with the classification SoftMax loss ( Qian et al. ) . They proposed a multi-center approach , where multiple centers for each class are used with the SoftMax loss to capture the hidden distribution of the data and solve this issue . Instead of using SoftMax , it was shown that the triplet loss may offer some relief from class collapsing ( Wang et al. , 2014 ) , and this is certainly true in noise-free environments . However , in this paper , we show that under real-world conditions with modest noise assumptions , the triplet and other metric learning losses still suffer from class collapse .
Rather than refine the loss , we argue the key lies in an improved strategy for sampling and selecting the examples . Early work ( Malisiewicz & Efros , 2008 ) proposed a per-exemplar distance representation as a means to overcome class collapsing ; inspired by this , we introduce a simple sampling method for selecting positive pairs of training examples . Our method can be combined naturally with other popular sampling methods . In each training iteration , given an anchor and a batch of samples in the same category , our method selects the sample closest to the anchor in the current embedding space as the positive sample . The metric learning loss is then computed based on the anchor and its positive paired sample . We demonstrate the class-collapsing phenomenon on a real-world dataset , and show that our method is able to create more diverse embeddings which result in better generalization performance . We evaluate our method on three standard zero-shot benchmarks : CARS196 ( Krause et al. , 2013 ) , CUB200-2011 ( Wah et al. , 2011 ) and Omniglot ( Lake et al. , 2015 ) . Our method achieves a consistent performance enhancement with respect to various baseline combinations of sampling methods and embedding losses . 2 RELATED WORK . Sampling methods . Designing a good sampling strategy is a key element in deep metric learning . Researchers have proposed strategies for sampling both the negative examples and the positive pairs . For negative samples , studies have focused on sampling hard negatives to make training more efficient ( Simo-Serra et al. , 2015 ; Schroff et al. , 2015 ; Wang & Gupta , 2015 ; Oh Song et al. , 2016 ; Parkhi et al. , 2015 ) . Recently , it has been shown that increasing the number of negative examples in training can significantly help unsupervised representation learning with contrastive losses ( He et al. , 2020 ; Wu et al. , 2018 ; Chen et al. , 2020 ) .
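The easy-positive selection step described above can be sketched in a few lines; the function name and plain-list embeddings are illustrative choices, not the authors' implementation:

```python
def easy_positive(anchor_emb, anchor_label, batch_embs, batch_labels):
    """Return the same-class sample closest to the anchor in the current embedding space.

    Assumes the batch contains at least one sample with the anchor's label.
    """
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    candidates = [e for e, y in zip(batch_embs, batch_labels) if y == anchor_label]
    return min(candidates, key=lambda e: dist(anchor_emb, e))
```

The returned sample is then paired with the anchor inside whatever embedding loss is in use; negatives can still be chosen by any of the hard-negative schemes cited above.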
Besides negative examples , methods for sampling hard positive examples have been developed for classification and detection tasks ( Loshchilov & Hutter , 2015 ; Shrivastava et al. , 2016 ; Arandjelovic et al. , 2016 ; Cubuk et al. , 2019 ; Singh & Lee , 2017 ; Wang et al. , 2017 ) . The central idea is to perform better augmentation to improve generalization at test time ( Cubuk et al. , 2019 ) . Apart from learning with SoftMax classification , Arandjelovic et al . ( 2016 ) propose to perform metric learning by assigning the nearest instance from the same class as the positive instance . As the positive training set is noisy in their setting , this method leads to features invariant to different perspectives . Different from this approach , we use this method in a clean setting , where the purpose is the opposite : maintaining the inner-class modalities in the embedding space . Xuan et al . ( 2020 ) also propose to use this positive sampling method with the N-pair loss ( Sohn , 2016 ) in order to relax the constraints of the loss on the intra-class relations . From a theoretical perspective , we prove that in a clean setting this relaxation is redundant for other popular metric losses such as the triplet loss ( Chechik et al. , 2010 ) and the margin loss ( Wu et al. , 2017 ) . We formulate the noisy-environment setting and prove that in this case the triplet and margin losses also suffer from class collapsing , and that using our proposed positive sampling method optimizes for solutions without class collapsing . We also provide an empirical study that supports the theoretical analysis . Noisy label problem . Learning with noisy labels is a practical problem when models are applied in the real world ( Scott et al. , 2013 ; Natarajan et al. , 2013 ; Shen & Sanghavi , 2019 ; Reed et al. , 2014 ; Jiang et al. , 2017 ; Khetan et al. , 2017 ; Malach & Shalev-Shwartz , 2017 ) , especially when training with large-scale data ( Sun et al. , 2017 ) .
One line of work applies a data-driven curriculum learning approach where the data most likely to be labeled correctly are used for learning at the beginning , and harder data are brought into learning during a later phase ( Jiang et al. , 2017 ) . Researchers have also tried applying the loss only to the easiest top-k elements in the batch , determined by the lowest current loss ( Shen & Sanghavi , 2019 ) . Inspired by these works , our method focuses on selecting only the easiest positive relations in the batch . Beyond memorization . Deep networks are known to easily memorize and over-fit the training data ( Zhang et al. , 2016 ; Recht et al. , 2018 ; 2019 ) . For example , a network can be trained with randomly assigned labels on the ImageNet data and obtain 100 % training accuracy if augmentations are not adopted . Moreover , even when a CIFAR-10 classifier performs well on the validation set , it has been shown that it does not really generalize to newly collected data which is visually similar to the training and validation sets ( Recht et al. , 2018 ) . In this paper , we show that when the network is given the freedom not to learn inner-class relations between different class modes , we can achieve much better generalization , and the representation can be applied in a zero-shot setting . 3 PRELIMINARIES . Let X = { x1 , .. , xn } be a set of samples with labels yi ∈ { 1 , .. , m } . The objective of metric learning is to learn an embedding f ( · , θ ) : X −→ R^k , in which the neighbourhood of each sample in the embedding space contains samples only from the same class . One of the common approaches to metric learning is to use embedding losses in which , at each iteration , samples from the same class and samples from different classes are chosen according to some sampling heuristic . The objective of the loss is to push apart the projections of samples from different classes , and to pull closer the projections of samples from the same class .
In this section , we introduce a few popular embedding losses . Notation : for xi , xj ∈ X , define D^f_{xi,xj} = ‖f(xi) − f(xj)‖₂ . In cases where there is no ambiguity we omit f and simply write D_{xi,xj} . We also define the function δ_{xi,xj} = 1 if yi = yj , and δ_{xi,xj} = 0 otherwise . Lastly , for every a ∈ R , denote (a)+ := max(a, 0) . The Contrastive loss ( Hadsell et al . ) takes sample embeddings and pushes samples from different classes apart while pulling samples from the same class together : L^f_con(xi, xj) = δ_{xi,xj} · D^f_{xi,xj} + (1 − δ_{xi,xj}) · (α − D^f_{xi,xj})+ . Here α is the margin parameter which defines the desired minimal distance between samples from different classes . While the Contrastive loss imposes a constraint on a pair of samples , the Triplet loss ( Chechik et al. , 2010 ) operates on a triplet of samples . Given a triplet xa , xp , xn ∈ X , the triplet loss is defined by L^f_trip(xa, xp, xn) = δ_{xa,xp} · (1 − δ_{xa,xn}) · (D^f_{xa,xp} − D^f_{xp,xn} + α)+ . The Margin loss ( Wu et al. , 2017 ) aims to exploit the flexibility of the Triplet loss while maintaining the computational efficiency of the Contrastive loss . This is done by adding a variable which determines the boundary between positive and negative pairs ; given an anchor xa ∈ X the loss is defined by L^{f,β}_margin(xa, x) = δ_{xa,x} · (D^f_{xa,x} − β_{xa} + α)+ + (1 − δ_{xa,x}) · (β_{xa} − D^f_{xa,x} + α)+ . 4 CLASS-COLLAPSING . The objective of the contrastive loss is to pull all samples of the same class to a single point in the embedding space . We call this the Class-collapsing property . Formally , an embedding f : X −→ R^m has the class-collapsing property if there exists a label y and a point p ∈ R^m such that { f(xi) | yi = y } = { p } . 4.1 EMBEDDING LOSSES OPTIMAL SOLUTION . It is easy to see that an embedding function f that minimizes O_con(f) = (1/n²) Σ_{xi,xj∈X} L^f_con(xi, xj) has the class-collapsing property with respect to all classes .
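A minimal scalar sketch of the three losses above, taking precomputed pairwise distances as inputs (function and argument names are mine, not the paper's):

```python
def contrastive(d, same_class, alpha):
    """Contrastive loss for a pair at distance d with margin alpha."""
    return d if same_class else max(alpha - d, 0.0)

def triplet(d_ap, d_pn, alpha):
    """Triplet loss given the anchor-positive and positive-negative distances."""
    return max(d_ap - d_pn + alpha, 0.0)

def margin(d, same_class, beta, alpha):
    """Margin loss with per-anchor boundary beta and margin alpha."""
    if same_class:
        return max(d - beta + alpha, 0.0)
    return max(beta - d + alpha, 0.0)
```

All three are zero exactly when the pair (or triple) already satisfies the corresponding margin constraint, which is what makes their global minima differ in richness, as the next subsection discusses.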
However , this is not necessarily true for the Triplet loss and the Margin loss . For simplicity , in the rest of this subsection we assume there are only two classes . Let A ⊂ X be a subset of elements such that all elements in A belong to one class and all elements in A^c belong to the other class . Recall some basic set definitions . Definition 1 . For all sets Y , Z ⊂ R^m define : 1 . The diameter of Y : diam(Y) = sup { ‖y − z‖ | y , z ∈ Y } . 2 . The distance between Y and Z : ‖Y − Z‖ = inf { ‖y − z‖ | y ∈ Y , z ∈ Z } . It is easy to see that if f : X −→ R^m is an embedding such that diam(f(A)) < 2·α + ‖f(A) − f(A^c)‖ , then O_trip(f) = (1/n³) Σ_{xi,xj,xk∈X} L^f_trip(xi, xj, xk) = 0 . Moreover , fixing β_{xi} = α for every xi ∈ X , we have O_margin(f, β) = (1/n²) Σ_{xi,xj∈X} L^{f,β}_margin(xi, xj) = 0 . It can thus be seen that the family of embeddings which attain the global minimum of the Triplet loss and the Margin loss is rich and diverse . However , as we prove in the next subsection , this does not remain true in a noisy-environment scenario .
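To see concretely that well-separated but non-collapsed clusters can already attain zero triplet loss, here is a brute-force check, using the standard anchor-based triplet form as a stand-in for the paper's definition (a sketch under that assumption, not the authors' code):

```python
def triplet_objective(points, labels, alpha):
    """Average triplet loss over all (anchor, positive, negative) index triples."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    n, total = len(points), 0.0
    for a in range(n):
        for p in range(n):
            for k in range(n):
                if labels[a] == labels[p] and labels[a] != labels[k]:
                    total += max(dist(points[a], points[p])
                                 - dist(points[a], points[k]) + alpha, 0.0)
    return total / n ** 3

# Two spread-out clusters far apart: zero loss is reached without class collapse.
pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
labs = [0, 0, 1, 1]
```

With a small margin the objective is exactly zero even though each class occupies a segment rather than a single point, illustrating why the triplet global minimum is "rich and diverse" in the clean setting.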
This paper proposes a simple positive sampling scheme for metric learning: sampling only the easiest positive for each anchor. The authors give a theoretical analysis of how the proposed sampling scheme can reduce class collapse. Experiments on fine-grained retrieval datasets show the effectiveness of the sampling scheme. Using the sampled easiest positive, nearly all current metric learning methods achieve improved performance.
SP:152f5c76e34d2b7acdafa37763be4bb51aa9ce6f
The authors find that the popular triplet loss forces all same-class instances to a single center in a noisy scenario, which is not optimal for dealing with diverse and distinct sub-classes. After some analysis, the authors propose a simple sampling strategy, EPS, where anchors pull only the most similar instances. The method achieves good visualization results on MNIST and promising performance on benchmarks.
SP:152f5c76e34d2b7acdafa37763be4bb51aa9ce6f
On the Reproducibility of Neural Network Predictions
1 INTRODUCTION . Deep neural networks ( DNNs ) have seen remarkable success on a range of complex tasks , and significant effort has been spent on further improving their predictive accuracy . However , an equally important desideratum of any machine learning system is stability , or reproducibility , of its predictions . In practice , machine learning models are continuously ( re- ) trained as new data arrives , or to incorporate architectural and algorithmic changes . A model that changes its predictions on a significant fraction of examples after each update is undesirable , even if each model instantiation attains high accuracy . Reproducibility of predictions is a challenge even if the architecture and training data are fixed across different training runs , which is the focus of this paper . Unfortunately , two key ingredients that help deep networks attain high accuracy — over-parameterization , and the randomization of their training algorithms — pose significant challenges to reproducibility . The former refers to the fact that NNs typically have many solutions that minimize the training objective ( Neyshabur et al. , 2015 ; Zhang et al. , 2017 ) . The latter refers to the fact that standard training of NNs involves several sources of randomness , e.g. , initialization , mini-batch ordering , non-determinism in training platforms and in some cases data augmentation . Put together , these imply that NN training can find vastly different solutions in each run even when the training data is the same , leading to a reproducibility challenge . The prediction disagreement between two models is referred to as churn ( Cormier et al. , 2016 ) . Concretely , given two models , churn is the fraction of test examples on which the predictions of the two models disagree . Clearly , churn is zero if both models have perfect accuracy — an unattainable goal in most practical settings of interest .
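As a concrete illustration of the definition, churn between two models' hard predictions on a shared test set can be computed as follows (a sketch; the function name is mine):

```python
def churn(preds_a, preds_b):
    """Fraction of test examples on which two models' predicted labels disagree."""
    assert len(preds_a) == len(preds_b), "both models must be scored on the same test set"
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)
```

Note that churn is computed against the other model's predictions, not against the ground-truth labels, so two equally accurate models can still exhibit high churn.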
Similarly , one can mitigate churn by eliminating all sources of randomness in the underlying training setup . However , even if one controls the seed used for random initialization and the order of the data , inherent non-determinism in current computation platforms is hard to avoid ( see §2.3 ) . Moreover , it is desirable to have stable models with predictions unaffected by such factors in training . ( Madani et al . ( 2004 ) referred to churn as disagreement and used it as an estimate for generalization error and for model selection . ) Thus , it is critical to quantify churn , and to develop methods that reduce it . In this paper , we study the problem of churn in NNs in the classification setting . We demonstrate the presence of churn , and investigate the role of different training factors causing it . Interestingly , our experiments show that churn is not avoidable on the computing platforms commonly used in machine learning , further highlighting the necessity of developing techniques to mitigate it . We then analyze the relation between churn and predicted class probabilities . Based on this , we develop a novel regularized co-distillation approach for reducing churn . Our key contributions are summarized below : ( i ) Besides the disagreement in the final predictions of models , we propose alternative soft metrics to measure churn . We demonstrate the existence of churn on standard image classification tasks ( CIFAR-10 , CIFAR-100 , ImageNet , SVHN and iNaturalist ) , and identify the components of learning algorithms that contribute to the observed churn . Furthermore , we analyze the relationship between churn and model prediction confidences ( cf . § 2 ) . ( ii ) Motivated by our analysis , we propose a regularized co-distillation approach to reduce churn that both improves prediction confidences and reduces prediction variance ( cf . §3 ) . Our approach consists of two components : a ) minimum entropy regularizers that improve prediction confidences ( cf .
§3.1), and b) a new variant of co-distillation (Anil et al., 2018) to reduce prediction variance across runs. Specifically, we use a symmetric KL divergence based loss to reduce model disagreement, with a linear warmup and joint updates across multiple models (cf. §3.2). (iii) We empirically demonstrate the effectiveness of the proposed approach in reducing churn and (sometimes) increasing accuracy. We present ablation studies over its two components to show their complementary nature in reducing churn (cf. §4). 1.1 RELATED WORK. Reproducibility in machine learning. There is a broad field studying the problem of reproducible research (Buckheit & Donoho, 1995; Gentleman & Lang, 2007; Sonnenburg et al., 2007; Kovacevic, 2007; Mesirov, 2010; Peng, 2011; McNutt, 2014; Braun & Ong, 2014; Rule et al., 2018), which identifies best practices to facilitate the reproducibility of scientific results. Henderson et al. (2018) analysed the reproducibility of methods in reinforcement learning, showing that the performance of certain methods is sensitive to the random seed used in training. While the performance of NNs on image classification tasks is fairly stable (Table 2), we focus on analyzing and improving the reproducibility of individual predictions. Thus, churn can be seen as a specific technical component of this reproducibility challenge. Cormier et al. (2016) defined the disagreement between predictions of two models as churn. They proposed an MCMC approach to train an initial stable model A so that it has small churn with its future version, say model B. Here, future versions are based on slightly modified training data with possibly additional features. In Goh et al. (2016); Cotter et al. (2019), constrained optimization is utilized to reduce churn across different model versions.
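The two components just listed can be sketched concretely. The following is a minimal NumPy illustration, not the paper's implementation: `entropy_regularizer` is a minimum-entropy penalty on a model's predicted class probabilities, `symmetric_kl` is a symmetrized KL agreement term between two co-trained models, and the combined objective with its weighting and warmup names (`alpha`, `beta`, `warmup`) is an illustrative assumption.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_regularizer(logits):
    # Minimum-entropy term: penalizes diffuse predictions,
    # encouraging confident (low-entropy) class probabilities.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()

def symmetric_kl(logits_a, logits_b):
    # Symmetric KL between the two models' predictive distributions,
    # used to penalize disagreement between co-trained models.
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    kl_qp = (q * (np.log(q + 1e-12) - np.log(p + 1e-12))).sum(axis=-1)
    return (kl_pq + kl_qp).mean()

def codistillation_loss(ce_a, ce_b, logits_a, logits_b, alpha, beta, warmup):
    # Joint objective for one step: each model's supervised loss, plus the
    # entropy regularizer and the (linearly warmed up) agreement penalty.
    agree = warmup * symmetric_kl(logits_a, logits_b)
    ent = entropy_regularizer(logits_a) + entropy_regularizer(logits_b)
    return ce_a + ce_b + alpha * ent + beta * agree
```

In an actual training loop the two models would be updated jointly on mini-batches, with `warmup` ramping linearly from 0 to 1 over the early epochs.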
In contrast, we are interested in capturing the contribution of factors other than training data modification that cause churn. More recently, Madhyastha & Jain (2019) studied instability in the interpretation mechanisms and average performance of deep NNs due to changes in the random seed, and proposed a stochastic weight averaging (Izmailov et al., 2018) approach to promote robust interpretations. In contrast, we are interested in the robustness of individual predictions. Ensembling and online distillation. Ensemble methods (Dietterich, 2000; Lakshminarayanan et al., 2017) that combine the predictions from multiple (diverse) models naturally reduce churn by averaging out the randomness in the training procedure of the individual models. However, such methods incur a large memory footprint and high computational cost at inference time. Distillation (Hinton et al., 2015; Bucilua et al., 2006) aims to train a single model from the ensemble to alleviate these costs. Even though the distilled model aims to recover the accuracy of the underlying ensemble, it is unclear whether the distilled model also leads to churn reduction. Furthermore, distillation is a two-stage process, involving first training an ensemble and then distilling it into a single model. To avoid this two-stage training process, multiple recent works (Anil et al., 2018; Zhang et al., 2018; Lan et al., 2018; Song & Chai, 2018; Guo et al., 2020) have focused on online distillation, where multiple identical or similar models (with different initializations) are trained while regularizing the distance between their prediction probabilities. At the end of training, any of the participating models can be used for inference. Notably, Anil et al. (2018), while referring to this approach as co-distillation, also empirically pointed out its utility for churn reduction on the Criteo Ad dataset (https://www.kaggle.com/c/criteo-display-ad-challenge).
In contrast, we develop a deeper understanding of the co-distillation framework as a churn reduction mechanism by providing a theoretical justification for its ability to reduce churn. We experimentally show that using a symmetric KL divergence objective instead of the cross entropy loss for co-distillation (Anil et al., 2018) leads to lower churn and better accuracy, even improving over the expensive ensembling-distillation approach. Entropy regularizer. Minimum entropy regularization was earlier explored in the context of semi-supervised learning (Grandvalet & Bengio, 2005). Such techniques have also been used to combat label noise (Reed et al., 2015). In contrast, we utilize minimum entropy regularization in fully supervised settings for the distinct purpose of reducing churn, and experimentally show its effectiveness. 1.2 NOTATION. Multi-class classification. We consider a multi-class classification setting where, given an instance $x \in \mathcal{X}$, the goal is to classify it as a member of one of $K$ classes, indexed by the set $\mathcal{Y} \triangleq [K]$. Let $\mathcal{W}$ be the set of parameters that define the underlying classification models. In particular, for $w \in \mathcal{W}$, the associated classification model $f(\cdot\,; w) : \mathcal{X} \to \Delta_K$ maps the instance $x \in \mathcal{X}$ to the $K$-dimensional simplex $\Delta_K \subset \mathbb{R}^K$. Given $f(x; w)$, $x$ is classified as an element of class $\hat{y}_{x;w}$ such that $\hat{y}_{x;w} = \arg\max_{j \in \mathcal{Y}} f(x; w)_j$. (1) This gives the misclassification error $\ell_{01}(y, \hat{y}_{x;w}) = \mathbf{1}\{\hat{y}_{x;w} \neq y\}$, where $y$ is the true label for $x$. Let $P_{X,Y}$ be the joint distribution over the instance and label pairs. We learn a classification model by minimizing the risk $L(w) \triangleq \mathbb{E}_{X,Y}\big[\ell(Y, f(X; w))\big]$ for some valid surrogate loss $\ell$ of the misclassification error $\ell_{01}$. In practice, since we only have finite samples $S \in (\mathcal{X} \times \mathcal{Y})^n$, we minimize the corresponding empirical risk $L(w; S) \triangleq \frac{1}{|S|} \sum_{(x,y) \in S} \ell\big(y, f(x; w)\big)$. (2) 2 CHURN: MEASUREMENT AND ANALYSIS.
In this section we define churn and demonstrate its existence on the CIFAR and ImageNet datasets. We also propose and measure alternative soft metrics that quantify churn while mitigating its discontinuity. Subsequently, we examine the influence of different factors in the learning algorithm on churn. Finally, we present a relation between churn and the prediction confidences of the model. We begin by defining churn as the expected disagreement between the predictions of two models (Cormier et al., 2016). Definition 1 (Churn between two models). Let $w_1, w_2 \in \mathcal{W}$ define classification models $f(\cdot\,; w_1), f(\cdot\,; w_2) : \mathcal{X} \to \Delta_K$, respectively. Then the churn between the two models is $\mathrm{Churn}(w_1, w_2) = \mathbb{E}_X\big[\mathbf{1}\{\hat{Y}_{X;w_1} \neq \hat{Y}_{X;w_2}\}\big] = P_X\big[\hat{Y}_{X;w_1} \neq \hat{Y}_{X;w_2}\big]$, (3) where $\hat{Y}_{x;w_1} \triangleq \arg\max_{j \in \mathcal{Y}} f(x; w_1)_j$ and $\hat{Y}_{x;w_2} \triangleq \arg\max_{j \in \mathcal{Y}} f(x; w_2)_j$. Note that if the models have perfect test accuracy, then their predictions always agree with the true label, which corresponds to zero churn. In practice, however, this is rarely the case. The following rather straightforward result shows that churn is upper bounded by the sum of the test errors of the models. See Appendix B for the proof. We note that a similar result was shown in Theorem 1 of Madani et al. (2004). Lemma 1. Let $P_{\mathrm{Err},w_1} \triangleq P_{X,Y}[Y \neq \hat{Y}_{X;w_1}]$ and $P_{\mathrm{Err},w_2} \triangleq P_{X,Y}[Y \neq \hat{Y}_{X;w_2}]$ be the misclassification errors of the models $w_1$ and $w_2$, respectively. Then $\mathrm{Churn}(w_1, w_2) \leq P_{\mathrm{Err},w_1} + P_{\mathrm{Err},w_2}$. Despite the worst-case bound in Lemma 1, imperfect accuracy does not imply nonzero churn. In the best case, two imperfect models can agree on the predictions for every example (whether correct or incorrect), making the churn zero. For example, multiple runs of a deterministic learning algorithm produce models with zero churn, independent of their accuracy.
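As a concrete illustration of Definition 1 and Lemma 1, the following NumPy sketch (a toy example, not from the paper) computes the empirical churn between two models' hard predictions and checks the error-sum bound:

```python
import numpy as np

def churn(preds_a, preds_b):
    # Definition 1, empirically: the fraction of examples on which
    # the two models' predicted classes disagree.
    return float(np.mean(preds_a != preds_b))

def error_rate(preds, labels):
    # Empirical misclassification error of one model.
    return float(np.mean(preds != labels))

# Toy predictions: each model makes one error, on different examples.
labels  = np.array([0, 1, 2, 1, 0, 2])
preds_a = np.array([0, 1, 2, 1, 1, 2])   # wrong on index 4
preds_b = np.array([0, 1, 1, 1, 0, 2])   # wrong on index 2

c = churn(preds_a, preds_b)              # disagree on indices 2 and 4
# Lemma 1: churn is at most the sum of the two error rates.
assert c <= error_rate(preds_a, labels) + error_rate(preds_b, labels)
```

Here the bound holds with equality (churn 2/6, errors 1/6 each), which is the worst case: the models err on disjoint examples.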
This shows that, in general, one cannot infer churn from test accuracy, and understanding the churn of an algorithm requires independent exploration.
The paper proposes methods to address churn in deep neural networks for classification, defined as the extent of disagreements in predictions of two models trained on the same data with the same algorithm. In addition to an existing measure of churn that is based on exact match of predicted classes, the paper introduces a soft measure of churn that measures disagreement by comparing the two models' class probability distributions. The paper proposes three regularization terms that can be added to the primary loss function used during training to reduce churn: two single-model regularization terms (based on cross entropy and KL divergence respectively) that encourage the model to output a more uneven probability distribution for an example, and a divergence-based term that is used by training two models simultaneously and that imposes a KL-divergence-based penalty that encourages the two models to output probability distributions that are as similar as possible. Experiments with ResNet architectures on CIFAR-10/100 and ImageNet indicate that the proposed approaches and their combination indeed reduce churn and do so to a larger extent than the two-model divergence-based approach applied in conjunction with cross-entropy that was proposed in work by Anil et al. in 2018 (although the improvement seems quite minor on ImageNet).
SP:23746625f66c6cd7b2a4cc8e0e452d81a948d34b
On the Reproducibility of Neural Network Predictions
The paper investigates two methods to reduce churn in neural network classification prediction. Churn is when two networks trained on the same data produce outputs that disagree, due to randomness in the training process. The authors identify several sources of randomness, from underlying hardware differences to parameter initialization and more. The authors propose two ways to mitigate churn. One is to use entropy minimization to favor more confident predictions. The second is to use co-distillation, a form of online ensemble learning. The authors show that both together do a good job of reducing churn on three data sets.
Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples
1 INTRODUCTION. While reinforcement learning (RL) has been shown to successfully solve problems with careful reward design (Rajeswaran et al., 2018; OpenAI et al., 2019), RL in its most general form, with no assumptions on the dynamics or reward function, requires solving a challenging uninformed search problem in which rewards are sparsely observed. Techniques that explicitly provide "reward shaping" (Ng et al., 1999), or modify the reward function to guide learning, can help take some of the burden off of exploration, but shaped rewards can be difficult to obtain without domain knowledge. In this paper, we aim to reformulate the reinforcement learning problem to make it easier for the user to specify the task and to provide a tractable reinforcement learning objective. Instead of requiring a reward function designed for an objective, our method assumes a user-provided set of successful outcome examples: states in which the desired task has been accomplished successfully. The algorithm aims to estimate the distribution over these states and to maximize the probability of reaching states that are likely under this distribution. Prior work on learning from success examples (Fu et al., 2018b; Zhu et al., 2020) focused primarily on alleviating the need for manual reward design. In our work, we focus on the potential for this mode of task specification to produce more tractable RL problems and to solve more challenging classes of tasks. Intuitively, when provided with explicit examples of successful states, the RL algorithm should be able to direct its exploration, rather than simply hoping to randomly chance upon high-reward states. The main challenge in instantiating this idea as a practical algorithm is performing appropriate uncertainty quantification when estimating whether a given state corresponds to a successful outcome.
Our approach trains a classifier to distinguish successful states, provided by the user, from those generated by the current policy, analogously to generative adversarial networks (Goodfellow et al., 2014) and previously proposed methods for inverse reinforcement learning (Fu et al., 2018a). In general, such a classifier is not guaranteed to provide a good optimization landscape for learning the policy. We discuss how a particular form of uncertainty quantification based on the normalized maximum likelihood (NML) distribution produces better reward guidance for learning. We also connect our approach to count-based exploration methods, showing that a classifier with suitable uncertainty estimates reduces to a count-based exploration method in the absence of any generalization across states, while also discussing how it improves over count-based exploration in the presence of good generalization. We then propose a practical algorithm to train success classifiers in a computationally efficient way with NML, and show how this form of reward inference allows us to solve difficult problems more efficiently, providing experimental results that outperform existing algorithms on a number of navigation and robotic manipulation domains. 2 RELATED WORK. A number of techniques have been proposed to improve exploration. These techniques either add reward bonuses that encourage a policy to visit novel states in a task-agnostic manner (Wiering and Schmidhuber, 1998; Auer et al., 2002; Schaul et al., 2011; Houthooft et al., 2016; Pathak et al., 2017; Tang et al., 2017; Stadie et al., 2015; Bellemare et al., 2016; Burda et al., 2018a; O'Donoghue, 2018) or perform Thompson sampling or approximate Thompson sampling based on a prior over value functions (Strens, 2000; Osband et al., 2013; 2016).
While these techniques are uninformed about the actual task, we consider a constrained set of problems where examples of successes allow for more task-directed exploration. In real-world problems, designing well-shaped reward functions makes exploration easier but often requires significant domain knowledge (Andrychowicz et al., 2020), access to privileged information about the environment (Levine et al., 2016), and/or a human in the loop providing rewards (Knox and Stone, 2009; Singh et al., 2019b). Prior work has considered specifying rewards by providing example demonstrations and inferring rewards with inverse RL (Abbeel and Ng, 2004; Ziebart et al., 2008; Ho and Ermon, 2016; Fu et al., 2018a). This requires expensive expert demonstrations to be provided to the agent. In contrast, our work has the minimal requirement of simply providing successful outcome states, which can be done cheaply and more intuitively. This subclass of problems is also related to goal-conditioned RL (Kaelbling, 1993; Schaul et al., 2015; Zhu et al., 2017; Andrychowicz et al., 2017; Nair et al., 2018; Veeriah et al., 2018; Rauber et al., 2018; Warde-Farley et al., 2018; Colas et al., 2019; Ghosh et al., 2019; Pong et al., 2020) but is more general, since it allows for a more abstract notion of task success. A core idea behind our work is using a Bayesian classifier to learn a suitable reward function. Bayesian inference with expressive models and high-dimensional data can often be intractable, requiring assumptions on the form of the posterior (Hoffman et al., 2013; Blundell et al., 2015; Maddox et al., 2019). In this work, we build on the concept of normalized maximum likelihood (Rissanen, 1996; Shtar'kov, 1987), or NML, to learn Bayesian classifiers.
Although NML is typically considered from the perspective of optimal coding (Grünwald, 2007; Fogel and Feder, 2018), we show how it can be used to learn success classifiers, and discuss its connections to exploration and reward shaping in RL. 3 PRELIMINARIES. In this paper, we study a modified reinforcement learning problem where, instead of the standard reward function, the agent is provided with successful outcome examples. This reformulation not only provides a modality for task specification that may be more natural for users in some settings (Fu et al., 2018b; Zhu et al., 2020; Singh et al., 2019a), but, as we will show, can also make learning easier. We also derive a meta-learned variant of the conditional normalized maximum likelihood (CNML) distribution for representing our reward function, in order to make evaluation tractable. We discuss background on successful outcome examples and CNML in this section. 3.1 REINFORCEMENT LEARNING WITH EXAMPLES OF SUCCESSFUL OUTCOMES. We follow the framework proposed by Fu et al. (2018b) and assume that we are provided with a Markov decision process (MDP) without a reward function, given by $M = (\mathcal{S}, \mathcal{A}, \mathcal{T}, \gamma, \mu_0)$, as well as successful outcome examples $S^+ = \{s_k^+\}_{k=1}^{K}$, a set of states in which the desired task has been accomplished. This formalism is easiest to describe in terms of the control-as-inference framework (Levine, 2018). The relevant graphical model (Figure 9) consists of states and actions, as well as binary success variables $e_t$ that represent the occurrence of a particular event. The agent's objective is to cause this event to occur (e.g., a robot that is cleaning the floor must cause the "floor is clean" event to occur). Formally, we assume that the states in $S^+$ are sampled from the distribution $p(s_t \mid e_t = \mathrm{True})$, that is, from states where the desired event has taken place.
In this work, we focus on efficient methods for solving this reformulation of the RL problem by utilizing a novel uncertainty quantification method to represent the distribution $p(e_t \mid s_t)$. In practice, prior methods that build on this and similar reformulations of the RL problem (Fu et al., 2018b) derive an algorithm where the reward function in RL is produced by a classifier that estimates $p(e_t = \mathrm{True} \mid s_t)$. Following the adversarial inverse reinforcement learning (AIRL) derivation (Fu et al., 2018a; Finn et al., 2016), it is possible to show that the correct source of negative examples for training this classifier is the state distribution of the policy itself, $\pi(s)$. This insight results in a simple algorithm: at each iteration, the policy is updated to maximize the current reward, given by $\log p(e_t = \mathrm{True} \mid s_t)$; then samples from the policy are added to the set of negative examples $S^-$, and the classifier is retrained on the original positive set $S^+$ and the updated negative set $S^-$. 3.2 CONDITIONAL NORMALIZED MAXIMUM LIKELIHOOD. Our method builds on the principle of conditional normalized maximum likelihood (CNML) (Rissanen and Roos, 2007; Grünwald, 2007; Fogel and Feder, 2018), which we review briefly. CNML is a method for performing $k$-way classification given a model class $\Theta$ and a dataset $D = \{(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)\}$, and has been shown to provide better calibrated predictions and uncertainty estimates with minimax regret guarantees (Bibas et al., 2019). To predict the class of a query point $x_q$, CNML constructs $k$ augmented datasets by adding $x_q$ with a different label in each dataset, which we write as $D \cup (x_q, y = i)$, $i \in \{1, 2, \ldots, k\}$.
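The iterative scheme just described can be sketched in a few lines. Everything below is a toy stand-in rather than the paper's method: states are 1-D scalars, the success classifier is a logistic regression fit by gradient descent, and the policy is replaced by a fixed random sampler; only the loop structure (maximize $\log p(e = \mathrm{True} \mid s)$, append policy states to $S^-$, retrain on $S^+$ and $S^-$) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_classifier(pos, neg, steps=500, lr=0.5):
    # Logistic regression on 1-D states: positives are the user-provided
    # success states S+, negatives are states visited by the policy, S-.
    x = np.concatenate([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = sigmoid(w * x + b)
        g = p - y                       # gradient of the log loss
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b

def reward(s, w, b):
    # Reward handed to the RL learner: log p(e = True | s).
    return np.log(sigmoid(w * s + b) + 1e-12)

# Outer loop: retrain the classifier, roll out the "policy", grow S-.
success_states = rng.normal(loc=5.0, scale=0.2, size=32)    # S+
negatives = rng.normal(loc=0.0, scale=0.5, size=32)         # initial S-
for _ in range(3):
    w, b = fit_classifier(success_states, negatives)
    rollout = rng.normal(loc=1.0, scale=1.0, size=32)       # stand-in policy samples
    negatives = np.concatenate([negatives, rollout])
```

With positives clustered around 5 and negatives near 0, the learned reward increases toward the success states, giving the policy a signal to move right.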
CNML then defines the class distribution by solving the maximum likelihood estimation problem at query time for each of these augmented datasets to convergence, and normalizing the likelihoods as follows: $p_{\mathrm{CNML}}(y = i \mid x_q) = \frac{p_{\theta_i}(y = i \mid x_q)}{\sum_{j=1}^{k} p_{\theta_j}(y = j \mid x_q)}$, where $\theta_i = \arg\max_{\theta \in \Theta} \mathbb{E}_{(x,y) \sim D \cup (x_q, y=i)}\big[\log p_\theta(y \mid x)\big]$. (1) Intuitively, if $x_q$ is close to other datapoints in $D$, then the model will struggle to assign a high likelihood to labels that differ substantially from those of nearby points. However, if $x_q$ is far from all datapoints in $D$, then the different augmented MLE problems can easily classify $x_q$ as an arbitrary class, providing us with a likelihood closer to uniform. We refer readers to Grünwald (2007) for an in-depth discussion. A major limitation of CNML is that it requires training an entire neural network to convergence on the entire augmented dataset every time we want to evaluate a test point's class probabilities. We address this issue in Section 5. 4 BAYESIAN SUCCESS CLASSIFIERS FOR REWARD INFERENCE. Ideally, training a classifier with the policy samples as negative examples, as described in Section 3.1, should yield a smooth decision boundary between the well-separated negative and positive examples. For example, Figure 2 depicts a simple 1-D scenario, where the agent starts at the left ($s_0$) and the positive outcomes are at the right ($s^+$) side of the environment. Since the positives are on the right and the negatives are on the left, one might expect a classifier to gradually increase its prediction of success as we move to the right (Figure 2a), which would provide a dense reward signal for the policy to move to the right. However, this idealized scenario rarely happens in practice. Without suitable regularization, the decision boundary between the positive and negative examples may not be smooth.
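To make Eq. (1) concrete, here is a minimal NumPy sketch of CNML for binary labels, using a tiny logistic-regression model class as a stand-in for the neural networks discussed in the text; the 1-D data and helper names are illustrative assumptions. For each candidate label, the model is refit to convergence on the augmented dataset, and the refit model's likelihood of that label is normalized across labels.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mle_fit(x, y, steps=2000, lr=0.5):
    # Maximum-likelihood logistic regression on 1-D inputs; this plays
    # the role of the arg max over theta in Eq. (1) for a tiny model class.
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = sigmoid(w * x + b)
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

def cnml_predict(x_train, y_train, x_q):
    # For each candidate label i, refit on D u {(x_q, i)}, evaluate the
    # refit model's likelihood of label i at x_q, then normalize.
    liks = []
    for i in (0.0, 1.0):
        xa = np.append(x_train, x_q)
        ya = np.append(y_train, i)
        w, b = mle_fit(xa, ya)
        p1 = sigmoid(w * x_q + b)
        liks.append(p1 if i == 1.0 else 1.0 - p1)
    liks = np.array(liks)
    return liks / liks.sum()   # [p_CNML(y=0 | x_q), p_CNML(y=1 | x_q)]

x = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
near = cnml_predict(x, y, 1.8)   # query near the positive cluster
```

A query near the positive cluster yields a confident $p_{\mathrm{CNML}}(y=1 \mid x_q)$; for expressive model classes, a query far from all the data instead yields a near-uniform distribution, which is the property the paper exploits for exploration.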
In fact , the decision boundary of an optimal classifier may take on the form of a sharp boundary anywhere between the positive and negative examples in the early stages of training ( Figure 2b ) . As a result , the classifier might provide little to no reward signal for the policy , since it can assign arbitrarily small probabilities to the states sampled from the policy . We note that this issue is not pathological : our experiments in Section 6 show that this poor reward signal issue happens in practice and can greatly hinder learning . In this section , we will discuss how an appropriate classifier training method can avoid these uninformative rewards .
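The CNML prediction rule in Eq. (1) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: to keep each augmented MLE closed-form (rather than a neural network trained to convergence), the model class here is assumed to be 1-D class-conditional Gaussians with unit variance, and the data and query points are invented for the example.

```python
import numpy as np

def gauss_mle_predict(X, y, x_q):
    """MLE for class-conditional Gaussians (shared unit variance); returns p(y=1|x_q)."""
    mu0, mu1 = X[y == 0].mean(), X[y == 1].mean()
    pi1 = y.mean()
    l0 = (1 - pi1) * np.exp(-0.5 * (x_q - mu0) ** 2)
    l1 = pi1 * np.exp(-0.5 * (x_q - mu1) ** 2)
    return l1 / (l0 + l1)

def p_cnml(X, y, x_q):
    """Eq. (1) for binary labels: refit the MLE on D ∪ {(x_q, i)} for each i, then normalize."""
    liks = []
    for i in (0, 1):
        Xa, ya = np.append(X, x_q), np.append(y, i)
        p1 = gauss_mle_predict(Xa, ya, x_q)       # likelihood of the appended label
        liks.append(p1 if i == 1 else 1.0 - p1)
    return liks[1] / (liks[0] + liks[1])          # normalized p(y=1 | x_q)

X = np.array([-1.2, -1.0, -0.8, 0.8, 1.0, 1.2])
y = np.array([0, 0, 0, 1, 1, 1])
print(p_cnml(X, y, 1.0))    # near the positive cluster: clearly above 0.5
print(p_cnml(X, y, 30.0))   # far from all data: close to uniform (0.5)
```

The far query comes out near 0.5: with the query included in the fit, either augmented problem can accommodate an arbitrary label for a distant point, which is exactly the uncertainty behavior described above.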
This paper considers the problem of learning a policy for an MDP with an unspecified reward, given user-provided goal states. To this end, a reward model and a policy are jointly learned: the reward model is a conditional normalized maximum likelihood (CNML) classifier learned from a training set consisting of the example goal states as positive examples and the policy trajectories as negative examples; the policy is trained to optimize the MDP using the learned reward. Meta-learning is applied to reduce the cost of learning the CNML models.
SP:edf6b1f46c66ca835d3ab608b17a07bed0aeef36
Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples
1 INTRODUCTION . While reinforcement learning ( RL ) has been shown to successfully solve problems with careful reward design ( Rajeswaran et al. , 2018 ; OpenAI et al. , 2019 ) , RL in its most general form , with no assumptions on the dynamics or reward function , requires solving a challenging uninformed search problem in which rewards are sparsely observed . Techniques which explicitly provide “ reward shaping ” ( Ng et al. , 1999 ) , or modify the reward function to guide learning , can help take some of the burden off of exploration , but shaped rewards can be difficult to obtain without domain knowledge . In this paper , we aim to reformulate the reinforcement learning problem to make it easier for the user to specify the task and to provide a tractable reinforcement learning objective . Instead of requiring a reward function designed for an objective , our method assumes a user-provided set of successful outcome examples : states in which the desired task has been accomplished successfully . The algorithm aims to estimate the distribution over these states and maximize the probability of reaching states that are likely under the distribution . Prior work on learning from success examples ( Fu et al. , 2018b ; Zhu et al. , 2020 ) focused primarily on alleviating the need for manual reward design . In our work , we focus on the potential for this mode of task specification to produce more tractable RL problems and solve more challenging classes of tasks . Intuitively , when provided with explicit examples of successful states , the RL algorithm should be able to direct its exploration , rather than simply hope to randomly chance upon high reward states . The main challenge in instantiating this idea into a practical algorithm is performing appropriate uncertainty quantification in estimating whether a given state corresponds to a successful outcome .
Our approach trains a classifier to distinguish successful states , provided by the user , from those generated by the current policy , analogously to generative adversarial networks ( Goodfellow et al. , 2014 ) and previously proposed methods for inverse reinforcement learning ( Fu et al. , 2018a ) . In general , such a classifier is not guaranteed to provide a good optimization landscape for learning the policy . We discuss how a particular form of uncertainty quantification based on the normalized maximum likelihood ( NML ) distribution produces better reward guidance for learning . We also connect our approach to count-based exploration methods , showing that a classifier with suitable uncertainty estimates reduces to a count-based exploration method in the absence of any generalization across states , while also discussing how it improves over count-based exploration in the presence of good generalization . We then propose a practical algorithm to train success classifiers in a computationally efficient way with NML , and show how this form of reward inference allows us to solve difficult problems more efficiently , providing experimental results which outperform existing algorithms on a number of navigation and robotic manipulation domains . 2 RELATED WORK . A number of techniques have been proposed to improve exploration . These techniques either add reward bonuses that encourage a policy to visit novel states in a task-agnostic manner ( Wiering and Schmidhuber , 1998 ; Auer et al. , 2002 ; Schaul et al. , 2011 ; Houthooft et al. , 2016 ; Pathak et al. , 2017 ; Tang et al. , 2017 ; Stadie et al. , 2015 ; Bellemare et al. , 2016 ; Burda et al. , 2018a ; O ’ Donoghue , 2018 ) or perform Thompson sampling or approximate Thompson sampling based on a prior over value functions ( Strens , 2000 ; Osband et al. , 2013 ; 2016 ) .
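The reduction to count-based exploration mentioned above can be made concrete in the tabular case. The sketch below (illustrative, not the paper's derivation) assumes a per-state Bernoulli model class with no generalization across states: the CNML success probability then collapses to Laplace-smoothed visit counts.

```python
from fractions import Fraction

def tabular_cnml_success(n_pos, n_neg):
    """CNML p(e=True|s) for a per-state Bernoulli model class.

    Appending the query state with label 1 gives the MLE likelihood
    (n_pos + 1) / (n + 1); appending it with label 0 gives the MLE
    likelihood (n_neg + 1) / (n + 1) for label 0. Normalizing the two
    yields (n_pos + 1) / (n + 2) -- a Laplace-smoothed, count-based rule."""
    n = n_pos + n_neg
    lik1 = Fraction(n_pos + 1, n + 1)
    lik0 = Fraction(n_neg + 1, n + 1)
    return lik1 / (lik0 + lik1)

assert tabular_cnml_success(0, 5) == Fraction(1, 7)   # heavily visited failure state
assert tabular_cnml_success(3, 1) == Fraction(4, 6)   # mostly-successful state
print(tabular_cnml_success(0, 0))                     # unvisited state: 1/2
```

An unvisited state gets probability 1/2 rather than 0, so the inferred reward carries an exploration bonus for rarely visited states, which is the count-based connection in question.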
While these techniques are uninformed about the actual task , we consider a constrained set of problems where examples of successes can allow for more task-directed exploration . In real world problems , designing well-shaped reward functions makes exploration easier but often requires significant domain knowledge ( Andrychowicz et al. , 2020 ) , access to privileged information about the environment ( Levine et al. , 2016 ) and/or a human in the loop providing rewards ( Knox and Stone , 2009 ; Singh et al. , 2019b ) . Prior work has considered specifying rewards by providing example demonstrations and inferring rewards with inverse RL ( Abbeel and Ng , 2004 ; Ziebart et al. , 2008 ; Ho and Ermon , 2016 ; Fu et al. , 2018a ) . This requires expensive expert demonstrations to be provided to the agent . In contrast , our work has the minimal requirement of simply providing successful outcome states , which can be done cheaply and more intuitively . This subclass of problems is also related to goal conditioned RL ( Kaelbling , 1993 ; Schaul et al. , 2015 ; Zhu et al. , 2017 ; Andrychowicz et al. , 2017 ; Nair et al. , 2018 ; Veeriah et al. , 2018 ; Rauber et al. , 2018 ; Warde-Farley et al. , 2018 ; Colas et al. , 2019 ; Ghosh et al. , 2019 ; Pong et al. , 2020 ) but is more general , since it allows for a more abstract notion of task success . A core idea behind our work is using a Bayesian classifier to learn a suitable reward function . Bayesian inference with expressive models and high dimensional data can often be intractable , requiring assumptions on the form of the posterior ( Hoffman et al. , 2013 ; Blundell et al. , 2015 ; Maddox et al. , 2019 ) . In this work , we build on the concept of normalized maximum likelihood ( Rissanen , 1996 ; Shtar ’ kov , 1987 ) , or NML , to learn Bayesian classifiers . 
Although NML is typically considered from the perspective of optimal coding ( Grünwald , 2007 ; Fogel and Feder , 2018 ) , we show how it can be used to learn success classifiers , and discuss its connections to exploration and reward shaping in RL . 3 PRELIMINARIES . In this paper , we study a modified reinforcement learning problem , where instead of the standard reward function , the agent is provided with successful outcome examples . This reformulation not only provides a modality for task specification that may be more natural for users to provide in some settings ( Fu et al. , 2018b ; Zhu et al. , 2020 ; Singh et al. , 2019a ) , but , as we will show , can also make learning easier . We also derive a meta-learned variant of the conditional normalized maximum likelihood ( CNML ) distribution for representing our reward function , in order to make evaluation tractable . We discuss background on successful outcome examples and CNML in this section . 3.1 REINFORCEMENT LEARNING WITH EXAMPLES OF SUCCESSFUL OUTCOMES . We follow the framework proposed by Fu et al . ( 2018b ) and assume that we are provided with a Markov decision process ( MDP ) without a reward function , given by M , where M = ( S , A , T , γ , µ0 ) , as well as successful outcome examples $S^+ = \{ s_k^+ \}_{k=1}^{K}$ , which is a set of states in which the desired task has been accomplished . This formalism is easiest to describe in terms of the control as inference framework ( Levine , 2018 ) . The relevant graphical model in Figure 9 consists of states and actions , as well as binary success variables et which represent the occurrence of a particular event . The agent ’ s objective is to cause this event to occur ( e.g. , a robot that is cleaning the floor must cause the “ floor is clean ” event to occur ) . Formally , we assume that the states in S+ are sampled from the distribution p ( st|et = True ) – that is , states where the desired event has taken place .
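The iterative classifier-training scheme of Section 3.1 (update the policy on the classifier reward, append policy states to S−, retrain on S+ and S−) can be sketched on a toy 1-D chain. This is a minimal sketch, not the paper's implementation: the classifier is a hand-rolled logistic regression, and the policy update is replaced by a stand-in that simply drifts the visited-state distribution; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_classifier(S_pos, S_neg, lr=0.5, steps=500):
    """Fit a 1-D logistic-regression success classifier p(e=True|s) by gradient descent."""
    X = np.concatenate([S_pos, S_neg])
    y = np.concatenate([np.ones(len(S_pos)), np.zeros(len(S_neg))])
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * X + b)))
        w -= lr * np.mean((p - y) * X)
        b -= lr * np.mean(p - y)
    return lambda s: 1.0 / (1.0 + np.exp(-(w * s + b)))

# toy 1-D chain: success states near s = +1, the policy starts near s = -1
S_pos = rng.normal(1.0, 0.05, size=50)    # user-provided outcome examples S+
S_neg = rng.normal(-1.0, 0.05, size=50)   # states visited by the initial policy

for it in range(3):                       # a few outer iterations
    reward = train_classifier(S_pos, S_neg)
    # stand-in for the policy update: visited states drift toward higher reward
    new_states = rng.normal(-1.0 + 0.5 * it, 0.1, size=50)
    S_neg = np.concatenate([S_neg, new_states])   # policy states become negatives

print(reward(1.0) > reward(-1.0))  # the success region should score higher
```

Adding the policy's own states as negatives is what keeps the classifier from trivially labeling everything a success once the policy starts reaching high-reward regions.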
This paper studies how to solve RL problems with a set of success states instead of a standard reward function. The central idea is to firstly train a Bayesian classifier from both the input success examples and the on-policy sampling using the conditional normalized maximum likelihood (CNML) and then use the learned classifier as a reward function to guide exploration. It is proved that in a tabular case, the success classifier trained with CNML is equivalent to a version of count-based exploration and it is claimed that with function approximation, the classifier attains non-negligible generalization. Empirically, it is claimed that this approach outperforms existing algorithms on a number of navigation and robotic manipulation domains.
Necessary and Sufficient Conditions for Compositional Representations
1 INTRODUCTION . Humans recognize the world and create imaginations in a flexible way by leveraging systematic compositionality to achieve compositional generalization , the algebraic capacity to understand and produce a large number of novel combinations from known components ( Chomsky , 1957 ; Montague , 1970 ) . This is a key element of human intelligence ( Minsky , 1986 ; Lake et al. , 2017 ) , and we hope to equip machines with such an ability . Conventional machine learning has been mainly developed under the assumption that training and test distributions are identical . Compositional generalization , however , is a type of out-of-distribution generalization ( Bengio , 2017 ) in which the training and test distributions differ . In compositional generalization , a sample is a combination of several components . For example , an image object may have two factor components of color and rotation . In language , a sentence is composed of the lexical meanings and the grammatical structure . The generalization is enabled by recombining seen components into an unseen combination during inference . One approach to compositional generalization is to learn compositional representations1 , or disentangled representations ( Bengio , 2013 ) , which contain several component representations . Each of them depends only on the corresponding underlying factor , and does not change when other factors change . Please see Section 3 for details . Multiple methods have been proposed to learn compositional representations . However , little discussion has been made of some fundamental questions . What kinds of factor combinations can be expressed in a compositional representation ? Though there are some common factor components such as color and size , what properties enable them ? When a set of components satisfies the conditions , what kinds of mappings are available between the entangled and compositional representations ?
Can we use the conditions to explain compositionality in conventional models such as attention ? In this paper , we mathematically prove two propositions ( Proposition 1.1 and Proposition 1.2 ) for necessary and sufficient conditions regarding compositional representations . We construct groups for changes on representations , and relate compositional representation to the group direct product , and compositional mapping to group action equivalence ( Higgins et al. , 2018 ) . Then , we use theorems and propositions in group theory to prove the conditions . 1The word “ representation ” in this paper refers to variables , not group representation . Proposition 1.1 ( Compositional representation ) . A set of components can be expressed compositionally if and only if the subgroup product equals the original group , each component subgroup is a normal subgroup of the original group , and the subgroups intersect only at the identity element . Proposition 1.2 ( Compositional mapping ) . Given a compositional representation , a mapping is compositional if and only if each component has an equivalent action in the compositional and entangled representations , and for each element of the entangled representation , the orbits intersect only at the element . Please see Proposition 4.2 and Proposition 4.10 for symbolic statements . We also provide examples to better understand the conditions and how to use them ( Section 5 ) . For representations , we see that whether the components can be expressed with a compositional representation does not depend only on each component itself , but also on their combination , and the possible values they take . We use the condition for compositional mapping to explain some existing neural network models and tasks , e.g. , the attention mechanism , the spatial transformer and grammar tree nodes .
We hope , with these examples , the conditions will be used for validating different compositional representations and mappings , and guiding designs of tasks and algorithms with compositionality . Our contributions can be summarized as follows . • We propose and prove necessary and sufficient conditions for compositional representation and compositional mapping . • We provide examples to understand and use the conditions , such as a new explanation of attention models . 2 RELATED WORK . Human-level compositional learning ( Marcus , 2003 ; Lake & Baroni , 2018 ) has been an important open challenge ( Yang et al. , 2019 ; Keysers et al. , 2020 ) . There has been recent progress on measuring compositionality ( Andreas , 2019 ; Lake & Baroni , 2018 ; Keysers et al. , 2020 ) and learning language compositionality for compositional generalization ( Lake , 2019 ; Russin et al. , 2019 ; Li et al. , 2019 ; Gordon et al. , 2020 ; Liu et al. , 2020 ) and continual learning ( Jin et al. , 2020 ; Li et al. , 2020 ) . Another line of related but different work is statistically and marginally independent disentangled representation learning ( Burgess et al. , 2018 ; Locatello et al. , 2019 ) . This setting assumes marginal independence between underlying factors and hence does not face the compositional generalization problem . On the other hand , compositional factors may not be marginally independent . Understanding of compositionality has been discussed over time . Some discussions following Montague ( 1970 ) use homomorphisms to define a composition operation between representations . Recently , Higgins et al . ( 2018 ) proposed a group-theoretic definition of disentangled representation . That definition is the basis of this paper , and we focus on proving the conditions . Li et al . ( 2019 ) define compositionality probabilistically without discussing conditions to achieve it . Gordon et al .
( 2020 ) find that compositionality in the SCAN task can be expressed as permutation group action equivalence . This equivalent action is on a component subgroup , but they do not discuss the equivalent action on the whole group and the relations between them . There are also other works related to group theory in machine learning ( Kondor , 2008 ; Cohen & Welling , 2016 ; Ravanbakhsh et al. , 2017 ; Kondor & Trivedi , 2018 ) . However , the previous works do not prove conditions for compositional representation or mapping . In this paper , we provide and theoretically prove necessary and sufficient conditions for compositional representations and compositional mappings . We use definitions , propositions and theorems from group theory . Please refer to Appendix A . Some of them are summarized in books , such as Dummit & Foote ( 2004 ) and Gallian ( 2012 ) , and we refer to them in the later sections . 3 REPRESENTATIONS . In this section , we introduce the definitions of representation and compositional representation used in this paper . Representation in this paper is consistent with the concept in the neural network literature . It is a variable , and its value depends on each sample . For example , it can be the activations of a layer in a neural network . The values of the activations depend on the network input . Network input and output are also called representations . Compositional representation in this paper means a representation with several separated component representations . It is also called disentangled representation in some literature . “ Separated ” means that the representation is the concatenation of the component representations . Each component representation corresponds to an underlying component , or generative factor . When a representation is not compositional , it is an entangled representation . In the examples in Figure 1 , the components are color and shape .
The upper images are entangled representations , where color and shape are in the same image . However , it is not a compositional representation , because an image is not a concatenation of a color part and a shape part . The lower vectors are compositional representations , where the left vector is for color and the right vector is for shape . 4 NECESSARY AND SUFFICIENT CONDITIONS . In this section , we derive necessary and sufficient conditions for compositionality step by step . We first construct groups for representations . We then describe compositionality with group properties , and study the conditions for them . Based on that , we further study the conditions for mappings between two representations . 4.1 GROUPS ON REPRESENTATIONS . Compositionality arises when we compare different samples , where some components are the same but others are not . This means compositionality is related to the changes between samples . These changes can be regarded as mappings , and since the changes are invertible , the mappings are bijective . To study compositionality we consider the set of all bijections from a set of possible representation values to the set itself , and construct a group with the following Proposition 4.1 . Proposition 4.1 . Let X be any nonempty set and SX be the set of all bijections from X to itself . The set SX is a group under function composition2 ( Dummit & Foote , 2004 , p. 29 ) . Since SX contains all bijections , the group SX acts on the set X ( Definition A.9 ) , and the action is transitive ( Definition A.12 ) . We consider two representations and corresponding sets . X is the original entangled representation , and Y is the compositional representation . We create group G on set X , and group H on set Y . 4.2 COMPOSITIONAL REPRESENTATION . When multiple hidden variables live in the same representation , and cannot be separated by simply splitting the representation , then these variables are entangled in the representation .
2Function composition is different from the compositionality being discussed . For example , rotation and color are two hidden variables and they are both in a representation of an image . We hope to extract the hidden variables by disentangling the representation . Suppose X is a set of entangled representations , and Y is a set of compositional representations . Y is the Cartesian product of K smaller sets Y1 , . . . , YK . We hope to find the conditions under which the changes on X can be expressed by the changes on the components in Y . A component corresponds to a set . For example , the color component can take blue , green , etc. , from a set of colors . With Proposition 4.1 , we can construct a group for each component . With Definition A.2 , each of these groups is a subgroup of the original group . We consider K subgroups . We hope the changes on the entangled representation X are equally expressed by changes on the compositional representation Y . This means group G should be isomorphic to the external direct product ( Proposition A.1 ) of the subgroups H = N1 × · · · × NK . The following Proposition 4.2 gives the necessary and sufficient conditions . Proposition 4.2 . N1 , . . . , NK are subgroups of group G. G is isomorphic to the external direct product of the subgroups if and only if G is the internal direct product of the subgroups . From Definition A.8 , we have the following : $$G \cong N_1 \times \cdots \times N_K \iff \begin{cases} G = N_1 N_2 \cdots N_K & (A1) \\ N_i \trianglelefteq G , \ \forall i = 1 , \ldots , K & (A2) \\ ( N_1 \cdots N_i ) \cap N_{i+1} = \{ e \} , \ \forall i = 1 , \ldots , K-1 & (A3) \end{cases}$$ Proof . “ ⇐= ” : Theorem A.2 . “ =⇒ ” : G and N1 × · · · × NK are isomorphic , and N1 × · · · × NK satisfies the conditions by construction in the definition . ( A1 ) means the subgroup product should cover the original group . ( A2 ) means all the component subgroups are normal subgroups of the original group . ( A3 ) means the intersection of a subgroup and the previous subgroups contains only the identity element . This corresponds to Proposition 1.1 .
We will provide examples and look into more details in the discussion section .
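Conditions (A1)-(A3) can be checked mechanically on a small example. The sketch below (illustrative; not from the paper) verifies that Z6 under addition is the internal direct product of its subgroups {0, 3} ≅ Z2 and {0, 2, 4} ≅ Z3, so changes on Z6 decompose into two independent component changes, mirroring Proposition 4.2 with K = 2.

```python
from itertools import product

# Z6 under addition mod 6, with two candidate component subgroups
G  = set(range(6))
N1 = {0, 3}         # ≅ Z2
N2 = {0, 2, 4}      # ≅ Z3
op = lambda a, b: (a + b) % 6

# (A1) the subgroup product N1 N2 covers the original group
assert {op(a, b) for a, b in product(N1, N2)} == G
# (A2) normality: g + n - g ∈ Ni for all g in G (trivial here, Z6 is abelian)
assert all(op(op(g, n), (-g) % 6) in N for N in (N1, N2) for g in G for n in N)
# (A3) the subgroups intersect only at the identity element
assert N1 & N2 == {0}

print("Z6 is the internal direct product of N1 and N2")  # hence Z6 ≅ Z2 × Z3
```

By contrast, replacing N2 with {0, 3} would fail both (A1) and (A3), showing that whether components admit a compositional representation depends on the combination of subgroups, not on each subgroup alone.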
The paper attempts to formally explore the necessary and sufficient conditions for compositional representations, leveraging the formal tools from group theory. While the ideas look potentially promising, the presentation is fundamentally flawed, with certain key notions left without formal definitions. As a consequence, it becomes impossible to estimate the theoretical significance of the contribution or use the obtained results in practice.
SP:f4fc78afa84a20e4b7ceee32e9e1d2bf3bf0edb0
Necessary and Sufficient Conditions for Compositional Representations
1 INTRODUCTION . Humans recognize the world and create imaginations in a supple way by leveraging systematic compositionality to achieve compositional generalization , the algebraic capacity to understand and produce large amount of novel combinations from known components ( Chomsky , 1957 ; Montague , 1970 ) . This is a key element of human intelligence ( Minsky , 1986 ; Lake et al. , 2017 ) , and we hope to equip machines with such ability . Conventional machine learning has been mainly developed with an assumption that training and test distributions are identical . Compositional generalization , however , is a type of out-of-distribution generalization ( Bengio , 2017 ) which has different training and test distributions . In compositional generalization , a sample is a combination of several components . For example , an image object may have two factor components of color and rotation . In language , a sentence is composed of the lexical meanings and the grammatical structure . The generalization is enabled by recombining seen components for an unseen combination during inference . One approach for compositional generalization is to learn compositional representations1 , or disentangled representation ( Bengio , 2013 ) , which contain several component representations . Each of them depends only on the corresponding underlying factor , and does not change when other factors change . Please see Section 3 for details . Multiple methods have been proposed to learn compositional representations . However , little discussion has been made for some fundamental questions . What kind of factor combinations can be expressed in compositional representation ? Though there are some common factor components such as colors and size , what property enable them ? When a set of components satisfy the conditions , what kind of mappings are available between the entangled and compositional representations ? 
Can we use the conditions to explain compositionality in conventional models such as attention ? In this paper , we mathematically prove two propositions ( Proposition 1.1 and Proposition 1.2 ) for necessary and sufficient conditions regarding compositional representations . We construct groups for changes on representations , and relate compositional representation with group direct product , and compositional mapping with group action equivalence ( Higgins et al. , 2018 ) . Then , we use theorems and propositions in group theory to prove the conditions . 1The word “ representation ” in this paper refers to variables , not group representation . Proposition 1.1 ( Compositional representation ) . A set of components can be expressed compositionally if and only if the subgroup product equals to the original group , each component subgroup is normal subgroup of the original group , and the group elements intersect only at identity element . Proposition 1.2 ( Compositional mapping ) . Given compositional representation , a mapping is compositional if and only if each component has equivalent action in compositional and entangled representations , and for each element of the entangled representation , the orbits intersect only at the element . Please see Proposition 4.2 and Proposition 4.10 for symbolic statements . We also provide examples to better understand the conditions and how to use them ( Section 5 ) . For representations , we see that whether the components can be expressed with compositional representation does not depend only on each component itself , but also on their combination , and the possible values to take . We use the condition for compositional mapping to explain some existing neural network models and tasks , e.g. , attention mechanism , spacial transformer and grammar tree nodes . 
We hope , with these examples , the conditions will be used for validating different compositional representations and mappings , and guiding designs of tasks and algorithms with compositionality . Our contributions can be summarized as follows . • We propose and prove necessary and sufficient conditions for compositional representation and compositional mapping . • We provide examples to understand and use the conditions , such as new explanation of attention models . 2 RELATED WORK . Human-level compositional learning ( Marcus , 2003 ; Lake & Baroni , 2018 ) has been an important open challenge ( Yang et al. , 2019 ; Keysers et al. , 2020 ) . There are recent progress on measuring compositionality ( Andreas , 2019 ; Lake & Baroni , 2018 ; Keysers et al. , 2020 ) and learning language compositionality for compositional generalization ( Lake , 2019 ; Russin et al. , 2019 ; Li et al. , 2019 ; Gordon et al. , 2020 ; Liu et al. , 2020 ) and continual learning ( Jin et al. , 2020 ; Li et al. , 2020 ) . Another line of related but different work is statistically and marginally independent disentangled representation learning ( Burgess et al. , 2018 ; Locatello et al. , 2019 ) . This setting assumes marginal independence between underlying factors hence does not have compositional generalization problem . On the other hand , compositional factors may not be marginally independent . Understanding of compositionality has been discussed over time . Some discussions following Montague ( 1970 ) uses homomorphism to define composition operation between representations . Recently , Higgins et al . ( 2018 ) proposes definition of disentangled representation with group theory . The definition is the base of this paper , and we focus on proving the conditions . Li et al . ( 2019 ) defines compositionality probabilistically without discussing conditions to achieve it . Gordon et al . 
( 2020 ) finds compositionality in SCAN task can be expressed as permutation group action equivalence . This equivalent action is on a component subgroup , but it does not discuss equivalent action on the whole group and the relations between them . There are also other works related to group theory in machine learning ( Kondor , 2008 ; Cohen & Welling , 2016 ; Ravanbakhsh et al. , 2017 ; Kondor & Trivedi , 2018 ) . However , the previous works do not prove conditions for compositional representation or mapping . In this paper , we provide and theoretically prove necessary and sufficient conditions for compositional representations and compositional mappings . We use definitions , propositions and theorems from group theory . Please refer to Appendix A . Some of them are summarized in books , such as Dummit & Foote ( 2004 ) and Gallian ( 2012 ) , and we refer to them in the later sections . 3 REPRESENTATIONS . In this section , we introduce the definitions of representation and compositional representation used in this paper . Representation in this paper is consistent with the concept in neural network literature . It is a variable , and its value depends on each sample . For example , it can be activations for a layer in a neural network . The values of the activations depend on the network input . Network input and output are also called representations . Compositional representation in this paper means a representation with several separated component representations . It is also called disentangled representation in some literature . “ Separated ” means that the representation is the concatenation of the component representations . Each component representation corresponds to a underlying component , or a generative factor . When a representation is not compositional , it is an entangled representation . In the examples in Figure 1 , the components are color and shape . 
The upper images are entangled representations, where color and shape are in the same image. Such an image is not a compositional representation, because it is not a concatenation of a color part and a shape part. The lower vectors are compositional representations, where the left vector is for color and the right vector is for shape. 4 NECESSARY AND SUFFICIENT CONDITIONS . In this section, we derive necessary and sufficient conditions for compositionality step by step. We first construct groups for representations. We then describe compositionality with group properties and study the conditions for them. Based on that, we further study the conditions for mappings between two representations. 4.1 GROUPS ON REPRESENTATIONS . Compositionality arises when we compare different samples in which some components are the same but others are not. This means compositionality is related to the changes between samples. These changes can be regarded as mappings, and since the changes are invertible, the mappings are bijective. To study compositionality, we consider the set of all bijections from a set of possible representation values to the set itself, and construct a group with the following Proposition 4.1. Proposition 4.1. Let X be any nonempty set and S_X be the set of all bijections from X to itself. The set S_X is a group under function composition (Dummit & Foote (2004), p. 29). Since S_X contains all bijections, the group S_X acts on the set X (Definition A.9), and the action is transitive (Definition A.12). We consider two representations and their corresponding sets: X is the original entangled representation, and Y is the compositional representation. We construct a group G on the set X, and a group H on the set Y. 4.2 COMPOSITIONAL REPRESENTATION . When multiple hidden variables live in the same representation and cannot be separated by simply splitting the representation, these variables are entangled in the representation.
For example, rotation and color are two hidden variables, and they are both in a representation of an image. (Note that function composition is different from the compositionality being discussed.) We hope to extract the hidden variables by disentangling the representation. Suppose X is a set of entangled representations and Y is a set of compositional representations, where Y is the Cartesian product of K smaller sets Y_1, . . . , Y_K. We hope to find the conditions under which the changes on X can be expressed by the changes on the components in Y. A component corresponds to a set; for example, the color component can take blue, green, etc., from a set of colors. With Proposition 4.1, we can construct a group for each component, and with Definition A.2, each of these groups is a subgroup of the original group. We consider K such subgroups. We hope the changes on the entangled representation X are equally expressed by changes on the compositional representation Y. This means the group G should be isomorphic to the external direct product (Proposition A.1) of the subgroups, $H = N_1 \times \cdots \times N_K$. The following Proposition 4.2 gives the necessary and sufficient conditions. Proposition 4.2. Let $N_1, \ldots, N_K$ be subgroups of a group $G$. $G$ is isomorphic to the external direct product of the subgroups if and only if $G$ is the internal direct product of the subgroups. From Definition A.8, this means $G \cong N_1 \times \cdots \times N_K$ if and only if (A1) $G = N_1 N_2 \cdots N_K$, (A2) $N_i \trianglelefteq G$ for all $i = 1, \ldots, K$, and (A3) $(N_1 \cdots N_i) \cap N_{i+1} = \{e\}$ for all $i = 1, \ldots, K-1$. Proof. “⇐”: Theorem A.2. “⇒”: $G$ and $N_1 \times \cdots \times N_K$ are isomorphic, and $N_1 \times \cdots \times N_K$ satisfies the conditions by construction in the definition. (A1) means the subgroup product must cover the original group. (A2) means all the component subgroups are normal subgroups of the original group. (A3) means the intersection of each subgroup with the product of the previous subgroups contains only the identity element. This corresponds to Proposition 1.1.
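Conditions (A1)–(A3) can be checked mechanically on a small finite group. The following sketch (illustrative only, not from the paper) verifies them for Z_6 under addition mod 6, with candidate component subgroups N1 = {0, 3} ≅ Z_2 and N2 = {0, 2, 4} ≅ Z_3:

```python
# Check (A1)-(A3) for G = Z_6 under addition mod 6, with candidate
# component subgroups N1 = {0, 3} and N2 = {0, 2, 4}.
from itertools import product

G = set(range(6))
N1, N2 = {0, 3}, {0, 2, 4}

def op(a, b):
    return (a + b) % 6

# (A1) the subgroup product N1 N2 covers G
A1 = {op(a, b) for a, b in product(N1, N2)} == G

# (A2) each Ni is normal in G: g + n - g lies in Ni for all g, n
def is_normal(N):
    return all(op(op(g, n), (-g) % 6) in N for g in G for n in N)

A2 = is_normal(N1) and is_normal(N2)

# (A3) the subgroups intersect only in the identity element
A3 = (N1 & N2) == {0}

print(A1, A2, A3)  # True True True, so Z_6 is the internal direct
                   # product of N1 and N2, i.e. Z_6 ≅ Z_2 × Z_3
```

Here (A2) holds trivially because Z_6 is abelian; in a non-abelian group, normality is a genuine restriction on which subgroups can serve as components.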
We will provide examples and look into more details in the discussion section.
This paper applies concepts from group theory to help find necessary and sufficient conditions for the presence of compositionality in representations and in mappings between them. This topic is very important, as people tend to use "compositional" in many ways, often not explicitly defined. The paper, however, is hard to follow, because the main concepts and theorems are not adequately illustrated with examples. In the final section, when examples are provided, it's still not clear how they apply to representation learning, the topic of this conference. Thus, while the topic is very relevant and the approach welcome, it is hard to know what lesson we have learned from the theorems in the paper.
A framework for learned CountSketch
1 INTRODUCTION . In recent years , we have seen the influence of machine learning extend far beyond the field of artificial intelligence . The underlying paradigm , which assumes that a given algorithm has an input distribution for which algorithm parameters can be optimized , has even been applied to classical algorithms . Examples of classical problems that have benefitted from ML include cache eviction strategies , online algorithms for job scheduling , frequency estimation of data stream elements , and indexing strategies for data structures ( Lykouris & Vassilvitskii , 2018 ; Purohit et al. , 2018 ; Hsu et al. , 2019 ; Kraska et al. , 2018 ) . This input distribution assumption is often realistic . For example , many real-world applications use data streaming to track things like product purchasing statistics in real time . Consecutively streamed datapoints are usually tightly correlated and closely fit certain distributions . We are interested in how this distributional paradigm can be applied to sketching , a data compression technique . With the dramatic increase in the dimensions of data collected in the past decade , compression methods are more important than ever . Thus , it is of practical interest to improve the accuracy and efficiency of sketching algorithms . We study a sketching scheme in which the input matrix is compressed by multiplying it with a “ sketch ” matrix with a small dimension . This smaller , sketched input is then used to compute an approximate solution . Typically , the sketch matrix and the approximation algorithm are designed to satisfy worst-case bounds on approximation error for arbitrary inputs . With the ML perspective in mind , we examine if it is possible to construct sketches which also have low error in expectation over an input distribution . Essentially , we aim for the best of both worlds : good performance in practice with theoretical worst-case guarantees . 
Further, we are interested in methods that work for multiple sketching applications. Typically, sketching is very application-specific: the sketch construction and approximation algorithm are tailored to individual applications, like robust regression or clustering (Sarlos, 2006; Clarkson & Woodruff, 2009; 2014; 2017; Cohen et al., 2015; Makarychev et al., 2019). Instead, we consider three applications at once (regression, LRA, k-means) and propose generalizable methods, as well as extend previous application-specific work. Our results. At a high level, our work's aim is to make sketch learning more effective, general, and, ultimately, practical. We propose a framework for constructing and using learned CountSketch. We chose CountSketch because it is a sparse, input-independent sketch (Charikar et al., 2002). Specifically, it has one non-zero entry (±1) per column and does not need to be constructed anew for each input matrix it is applied to. These qualities enable CountSketch to be applied quickly, since sparse matrix multiplication is fast and we can reuse the same CountSketch for different inputs. Our “learned” CountSketch will retain this characteristic sparsity pattern and input-independence, but its non-zero entries may take any value in R. We list our main contributions and follow this with a discussion. • Two-stage sketch optimization: first place the non-zero entries, then learn their values. • Theoretical worst-case guarantees, two ways: we derived a time-optimal method which applies to MRR, LRA, k-means, and more. We also proved that a simpler method works for k-means. • SOTA experimental results: we showed the versatility of our method on 5 data sets of 3 types. Our method dominated on the majority of experiments. • Theoretical analysis of the necessity of two stages: we proved that including the first stage is strictly better for LRA and two common input distributions.
• Empirical demonstration of the necessity of two stages: including the first stage gives a 12 % and 20 % boost for MRR and LRA, respectively. Our sketch learning algorithm first places the sparse non-zero entries using a greedy strategy, and then learns their values using gradient descent. The resulting learned CountSketch is very different from the classical CountSketch: the non-zero entries no longer have random positions and ±1 values. As a result, the usual worst-case guarantees do not hold. We sought a way to obtain worst-case guarantees that was fast and reasonably general. Our solution is a fast comparison step which performs an approximate evaluation of the learned and classical sketches and takes the better of the two. Importantly, we can run this step before the approximation algorithm without increasing its overall time complexity. As such, this solution is time-optimal and applies to MRR, LRA, k-means, and more. An alternate method was proposed by a previous work, but it was only proved for LRA (Indyk et al., 2019). This “sketch concatenation” method simply involves sketching with the concatenation of a learned and a classical sketch. Since it is somewhat simpler, we wanted to extend its applicability. In a novel theoretical result, we proved this works for k-means as well. We also ran a diverse set of experiments to demonstrate the versatility and practicality of our approach. We chose five data sets spanning three categories (image, text, and graph) to test our method on three applications (MRR, LRA, k-means). Importantly, these experiments have real-world counterparts. For example, LRA and k-means can be used to compress images, applying SVD (LRA) to text data is the basis of a natural language processing technique, and LRA can be used to compute approximate max cuts on graph adjacency matrices.
Ultimately, our method dominated on the vast majority of tests, giving a 31 % and 70 % improvement over classical CountSketch for MRR and LRA, respectively. Finally, we conducted an ablation study of the components of our algorithm. In another novel theoretical result, we proved that including the time-consuming first optimization stage is strictly better than omitting it for LRA and two input distributions (spiked covariance and Zipfian). Empirically, this is the case for all 3 applications. Related work. In the last few years, there has been much work on leveraging ML to improve classical algorithms; we only mention a few examples here. One related body of work is data-dependent dimensionality reduction, such as an approach for pair-wise/multi-wise similarity preservation for indexing big data (Wang et al., 2017) and a method for learning linear projections for general applications (Hegde et al., 2015). (While learned CountSketch is data-dependent, in that it is optimized using sample input matrices, it is still considered input-independent because it is applied to unseen input matrices, i.e., test samples.) We note that multiplying an input matrix on the left with a sparse sketch is equivalent to hashing its rows to a small number of bins. Thus, we also find connections with the body of work on learned hashes, most of which addresses the nearest neighbor search problem (see Wang et al. for a survey). However, in order to obtain approximation guarantees, our “hash function” (a sparse sketch) must satisfy properties which these learned hashes usually do not, such as being an affine ε-embedding (Def. A.1). In particular, we build off of the work of Indyk et al. (2019), which introduced gradient descent optimization for the LRA application. It also gave an LRA-specific method for worst-case guarantees. We have surpassed the sketching performance and breadth of this work.
Namely, we introduce a sparsity pattern optimization step, which is clearly crucial for sparse sketches. We also provide a more general method for worst-case guarantees and extend their method to k-means. 2 PRELIMINARIES . Our learned sketches have the sparsity pattern of the classical CountSketch, whose construction is described below. We also define affine $\varepsilon$-embeddings, a class of sketches that includes CountSketch. This $\varepsilon$-embedding property is desirable because it allows us to prove that certain sketching algorithms give $(1 + \varepsilon)$-approximations. Definition 2.1 (Classical CountSketch). The CountSketch (abbreviated as CS) matrix $S$ has one non-zero entry in each column, with a random location and a random value in $\{\pm 1\}$. Definition 2.2 (Affine Embedding). Given a pair of matrices $A$ and $B$, a matrix $S$ is an affine $\varepsilon$-embedding if for all $X$ of the appropriate shape, $\|S(AX - B)\|_F^2 = (1 \pm \varepsilon)\|AX - B\|_F^2$. Notation. We denote the singular value decomposition (SVD) of $A$ by $A = U \Sigma V^\top$ with orthogonal $U, V$ and diagonal $\Sigma$. Relatedly, the Moore-Penrose pseudo-inverse of $A$ is $A^\dagger = V \Sigma^{-1} U^\top$, where $\Sigma^{-1}$ is constructed by inverting the non-zero diagonal entries.
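As a concrete illustration of Definition 2.1 and the pseudo-inverse notation, the following sketch builds a classical CountSketch and uses it in sketch-and-solve regression. The helper name `count_sketch` and all dimensions are illustrative assumptions, not the paper's code:

```python
import numpy as np

def count_sketch(m, n, rng):
    """Classical CountSketch: one random +/-1 entry per column (Def. 2.1)."""
    S = np.zeros((m, n))
    S[rng.integers(0, m, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return S

rng = np.random.default_rng(0)
n, d, d2, m = 500, 10, 3, 100
S = count_sketch(m, n, rng)

# The sketch has the CS sparsity pattern: one non-zero per column.
assert (np.count_nonzero(S, axis=0) == 1).all()

# Sketch-and-solve least squares: compare A†B with (SA)†(SB).
A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d2))
X_opt = np.linalg.pinv(A) @ B            # exact solution
X_hat = np.linalg.pinv(S @ A) @ (S @ B)  # sketched solution
ratio = np.linalg.norm(A @ X_hat - B) / np.linalg.norm(A @ X_opt - B)
# ratio is always >= 1 (X_opt is optimal) and near 1 when S embeds well
print(round(ratio, 3))
```

Applying S to A hashes A's n rows into m signed bins, which is why sparse multiplication by S is fast.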
For example, least-squares regression solves $\min_X \|AX - B\|_F^2$ given $A \in \mathbb{R}^{n \times d}$, $B \in \mathbb{R}^{n \times d'}$. Of course, the optimal solution is a function of the inputs: $X^* = \arg\min_X \|AX - B\|_F^2 = A^\dagger B$. However, in the sketching paradigm, we compute an approximately optimal solution as a function of the sketched (compressed) inputs. Taking a CountSketch $S \in \mathbb{R}^{m \times n}$ for $m \ll n$, we have: $\hat{X}^* = (SA)^\dagger (SB)$. Our goal is to minimize the expected approximation error with respect to $S$, which is constrained to the set of CountSketch-sparse matrices ($\mathcal{CS}$). However, this is simply equivalent to minimizing the objective value of the approximate solution: $S^* = \arg\min_{S \in \mathcal{CS}} \mathbb{E}_{(A,B) \sim \mathcal{D}}[L_{A,B}((SA)^\dagger(SB)) - L_{A,B}(X^*)] = \arg\min_{S \in \mathcal{CS}} \mathbb{E}_{(A,B) \sim \mathcal{D}}[L_{A,B}((SA)^\dagger(SB))]$. For ease of notation, we will define $G(\cdot)$ as a function which maps a sketch and inputs to an approximate solution. $G(\cdot)$ is defined by the application-specific approximation algorithm. For MRR, $G(S, (A, B)) = (SA)^\dagger(SB)$. More generally, the sketch optimization objective is: $S^* = \arg\min_{S \in \mathcal{CS}} \mathbb{E}_{A \sim \mathcal{D}}[L_A(G(S, A))]$ (3.1). If the application is regression, we let $A$ stand for $(A, B)$. We will solve this constrained optimization in two stages. For both stages, we approximate the expectation in empirical risk minimization (ERM) fashion. That is, we approximate the expectation over the true distribution by averaging over a sampled batch of the training set. In the first optimization stage, we compute positions for the CountSketch-sparse nonzero entries. Then, in the second stage, we fix the positions and optimize the nonzero values. Stage 1: Placing the nonzero entries. We want to maintain the sparsity pattern of CS (one nonzero entry per column), but we are free to place that nonzero entry wherever we like for each column. A naïve method would be to evaluate the objective for the exponential number of full placements.
This is clearly intractable, so we consider a greedy alternative. In essence, we construct the sketch one nonzero entry at a time, and we choose the location of the next entry by minimizing (3.1) over the discrete set of possibilities. More precisely, we build the sketch $S \in \mathbb{R}^{m \times n}$ iteratively, placing one nonzero entry at a time. For each nonzero entry, we consider $m$ locations and 2 values ($\pm 1$) for each location. We evaluate the sketch optimization objective (3.1) for all $2m$ incremental updates to $S$ and choose the minimizing update. In the pseudo-code below, we iterate through the $n$ columns of $S$, each of which contains one non-zero entry. Note that $S_{w,j} = S + w\,\vec{e}_j \vec{e}_i^\top$ adds a single entry $w$ in the $i$-th column, $j$-th row of the current, partially-constructed $S$.

Algorithm 1 GREEDY STAGE
Require: $A_{\text{train}} = \{A_1, \ldots, A_N\}$ with $A_i \in \mathbb{R}^{n \times d}$; sketch dimension $m$
1: initialize $S = O_{m \times n}$
2: for $i = 1$ to $n$ do
3: $w^*, j^* = \arg\min_{w \in \{\pm 1\},\, j \in [m]} \sum_{A_i \in A_{\text{train}}} L_{A_i}(G(S_{w,j}, A_i))$ where $S_{w,j} = S + w\,\vec{e}_j \vec{e}_i^\top$
4: $S[j^*, i] = w^*$
5: end for

For some applications it can be inefficient to evaluate (3.1), since it requires computing the approximate solution. For MRR and LRA, the approximate solution has a closed form, but for k-means, it must be computed iteratively. This is prohibitively expensive, since we perform many evaluations. In this case, we recommend finding a surrogate $L(\cdot)$ with a closed-form solution, as we illustrate in later sections. Stage 2: Optimizing the nonzero values. We now fix the positions of the nonzero entries and optimize their values using gradient descent. To fix the positions, we represent $S$ as just the vector of its nonzero entries, $\vec{v} \in \mathbb{R}^n$. We denote by $H(\vec{v}) : \mathbb{R}^n \to \mathbb{R}^{m \times n}$ the function which maps this concise representation of $S$ to the full matrix. $H(\cdot)$ depends on the positions computed in the last stage, which are fixed.
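For the regression objective, the greedy stage can be sketched in a few lines. This is a toy-scale illustration under assumed dimensions; the names `mrr_loss` and `greedy_countsketch` are ours, not the paper's:

```python
import numpy as np

def mrr_loss(S, A, B):
    """Objective value of the sketched regression solution (SA)†(SB)."""
    X = np.linalg.pinv(S @ A) @ (S @ B)
    return np.linalg.norm(A @ X - B) ** 2

def greedy_countsketch(train, m):
    """Greedy stage: place one +/-1 entry per column of S, column by column."""
    n = train[0][0].shape[0]
    S = np.zeros((m, n))
    for i in range(n):                        # one non-zero entry per column
        best = (np.inf, 0, 1.0)
        for j in range(m):                    # m candidate rows ...
            for w in (-1.0, 1.0):             # ... times 2 candidate signs
                S[j, i] = w
                loss = sum(mrr_loss(S, A, B) for A, B in train)
                if loss < best[0]:
                    best = (loss, j, w)
                S[j, i] = 0.0                 # undo the trial update
        S[best[1], i] = best[2]               # commit the minimizing update
    return S

rng = np.random.default_rng(0)
train = [(rng.normal(size=(20, 3)), rng.normal(size=(20, 2))) for _ in range(2)]
S = greedy_countsketch(train, m=5)
assert (np.count_nonzero(S, axis=0) == 1).all()  # CS sparsity is preserved
```

Each column costs 2m objective evaluations, matching the $2m$ incremental updates described above.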
Now, we simply differentiate $\mathbb{E}_{A \sim \mathcal{D}}[L_A(G(H(\vec{v}), A))]$ (3.1) with respect to $\vec{v}$.

Algorithm 2 GRADIENT STAGE
Require: $A_{\text{train}} = \{A_1, \ldots, A_N\}$ with $A_i \in \mathbb{R}^{n \times d}$; $H(\cdot)$ from Alg. 1; learning rate $\alpha$
1: for $i = 1$ to $n_{\text{iter}}$ do
2: $S = O_{m \times n}$
3: sample $A_{\text{batch}}$ from $A_{\text{train}}$
4: $\vec{v} \leftarrow \vec{v} - \alpha \left( \sum_{A \in A_{\text{batch}}} \frac{\partial L_A(G(H(\vec{v}), A))}{\partial \vec{v}} \right)$
5: end for
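The gradient stage can be mimicked numerically with finite differences in place of the autodiff an implementation would use. All dimensions and the fixed positions below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, d2, m = 20, 3, 2, 5
A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d2))
rows = rng.integers(0, m, size=n)       # positions fixed by the greedy stage

def H(v):
    """Map the vector of non-zero values v to the full m x n sketch S."""
    S = np.zeros((m, n))
    S[rows, np.arange(n)] = v
    return S

def loss(v):
    S = H(v)
    X = np.linalg.pinv(S @ A) @ (S @ B)
    return np.linalg.norm(A @ X - B) ** 2

v = rng.choice([-1.0, 1.0], size=n)     # start from classical +/-1 values
eps, alpha = 1e-5, 1e-3
for _ in range(50):                      # plain gradient descent on v
    grad = np.array([(loss(v + eps * e) - loss(v - eps * e)) / (2 * eps)
                     for e in np.eye(n)])
    v = v - alpha * grad
# the training loss typically decreases as the entries leave {-1, +1}
```

Only the n non-zero values are free parameters, which is what makes this stage cheap relative to optimizing a dense m x n matrix.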
The authors consider the problem of “sketching”, a popular compression technique in machine learning used for reducing the size of the data, enabling one to quickly compute an approximate solution using this compressed input. This paper introduces a general framework for learning and applying sparse sketching matrices. A two-stage procedure for learning sketches with the same sparsity pattern as CountSketch is proposed, which involves first placing the sketch’s non-zero entries and then optimizing their values. They then show how to apply the obtained sketch so that it has worst-case approximation error guarantees. This procedure is applied to three applications, namely least-squares regression, low-rank approximation (LRA) and k-means clustering. Experimental results demonstrate a substantial reduction in the approximation error compared to other baseline approaches (including classically learned sketches). On the theoretical front, it is shown for regression and LRA that the proposed approach obtains improved error guarantees for fixed time complexity. Additionally, it is shown for LRA (under certain input distributions) that including the first stage is strictly better than not including it. Finally, a more straightforward way of retaining worst-case approximation guarantees for k-means is shown.
A framework for learned CountSketch
1 INTRODUCTION . In recent years , we have seen the influence of machine learning extend far beyond the field of artificial intelligence . The underlying paradigm , which assumes that a given algorithm has an input distribution for which algorithm parameters can be optimized , has even been applied to classical algorithms . Examples of classical problems that have benefitted from ML include cache eviction strategies , online algorithms for job scheduling , frequency estimation of data stream elements , and indexing strategies for data structures ( Lykouris & Vassilvitskii , 2018 ; Purohit et al. , 2018 ; Hsu et al. , 2019 ; Kraska et al. , 2018 ) . This input distribution assumption is often realistic . For example , many real-world applications use data streaming to track things like product purchasing statistics in real time . Consecutively streamed datapoints are usually tightly correlated and closely fit certain distributions . We are interested in how this distributional paradigm can be applied to sketching , a data compression technique . With the dramatic increase in the dimensions of data collected in the past decade , compression methods are more important than ever . Thus , it is of practical interest to improve the accuracy and efficiency of sketching algorithms . We study a sketching scheme in which the input matrix is compressed by multiplying it with a “ sketch ” matrix with a small dimension . This smaller , sketched input is then used to compute an approximate solution . Typically , the sketch matrix and the approximation algorithm are designed to satisfy worst-case bounds on approximation error for arbitrary inputs . With the ML perspective in mind , we examine if it is possible to construct sketches which also have low error in expectation over an input distribution . Essentially , we aim for the best of both worlds : good performance in practice with theoretical worst-case guarantees . 
Further , we are interested in methods that work for multiple sketching applications . Typically , sketching is very application-specific . The sketch construction and approximation algorithm are tailored to individual applications , like robust regression or clustering ( Sarlos , 2006 ; Clarkson & Woodruff , 2009 ; 2014 ; 2017 ; Cohen et al. , 2015 ; Makarychev et al. , 2019 ) . Instead , we consider three applications at once ( regression , LRA , k-means ) and propose generalizable methods , as well as extending previous application-specific work . Our results . At a high level , our work ’ s aim is to make sketch learning more effective , general , and ultimately , practical . We propose a framework for constructing and using learned CountSketch . We chose CountSketch because it is a sparse , input-independent sketch ( Charikar et al. , 2002 ) . Specifically , it has one non-zero entry ( ±1 ) per column and does not need to be constructed anew for each input matrix it is applied to . These qualities enable CountSketch to be applied quickly , since sparse matrix multiplication is fast and we can reuse the same CountSketch for different inputs . Our “ learned ” CountSketch will retain this characteristic sparsity pattern and input-independence1 , but its non-zero entries will range in R. We list our main contributions and follow this with a discussion . • Two-stage sketch optimization : to first place the non-zero entries and then learn their values . • Theoretical worst-case guarantees , two ways : we derived a time-optimal method which applies to MRR , LRA , k-means , and more . We also proved a simpler method works for k-means . • SOTA experimental results : we showed the versatility of our method on 5 data sets with 3 types . Our method dominated on the majority of experiments . • Theoretical analysis on the necessity of two stages : we proved that including the first stage is strictly better for LRA and two common input distributions . 
• Empirical demonstration of the necessity of two stages : showed that including the first stage gives a 12 , 20 % boost for MRR , LRA . Our sketch learning algorithm first places the sparse non-zero entries using a greedy strategy , and then learns their values using gradient descent . The resulting learned CountSketch is very different from the classical CountSketch : the non-zero entries no longer have random positions and ±1 values . As a result , the usual worst-case guarantees do not hold . We sought a way to obtain worst-case guarantees that was fast and reasonably general . Our solution is a fast comparison step which performs an approximate evaluation of learned and classical sketches and takes the better of the two . Importantly , we can run this step before the approximation algorithm without increasing its overall time complexity . As such , this solution is time-optimal and applies to MRR , LRA , k-means , and more . An alternate method was proposed by a previous work , but it was only proved for LRA ( Indyk et al. , 2019 ) . This “ sketch concatenation ” method just involves sketching with the concatenation of a learned and a classical sketch . Since it is somewhat simpler , we wanted to extend its applicability . In a novel theoretical result , we proved this works for k-means as well . We also ran a diverse set of experiments to demonstrate the versatility and practicality of our approach . We chose five data sets spanning three categories ( image , text , and graph ) to test our method on three applications ( MRR , LRA , k-means ) . Importantly , these experiments have real-world counterparts . For example , LRA and k-means can be used to compress images , applying SVD ( LRA ) to text data is the basis of a natural language processing technique , and LRA can be used to compute approximate max cuts on graph adjacency matrices . 
Ultimately , our method dominated on the vast majority of tests , giving a 31 , 70 % improvement over classical CountSketch for MRR , LRA . Finally , we conducted ablation study of the components of our algorithm . In another novel theoretical result , we proved that including the time-consuming first optimization stage is strictly better than not to for LRA and two input distributions ( spiked covariance and Zipfian ) . Empirically , this is case for all 3 applications . Related work . In the last few years , there has been much work on leveraging ML to improve classical algorithms ; we only mention a few examples here . One related body of work is data-dependent 1While learned CountSketch is data-dependent ( it is optimized using sample input matrices ) , it is still considered input-independent because it is applied to unseen input matrices ( test samples ) . dimensionality reduction , such as an approach for pair-wise/multi-wise similarity preservation for indexing big data ( Wang et al. , 2017 ) and a method for learning linear projections for general applications ( Hegde et al. , 2015 ) . We note that multiplying an input matrix on the left with a sparse sketch is equivalent to hashing its rows to a small number of bins . Thus , we also find connections with the body of work on learned hashes , most of which addresses the nearest neighbor search problem ( see Wang et al . for a survey ) . However , in order to obtain approximation guarantees , our “ hash function ” ( sparse sketch ) must satisfy properties which these learned hashes usually do not , such as affine -embedding ( Def . A.1 ) . In particular , we build off of the work of Indyk et al . ( 2019 ) , which introduced gradient descent optimization for the LRA application . It also gave an LRA-specific method for worst-case guarantees . We have surpassed the sketching performance and breadth of this work . 
Namely , we introduce a sparsity pattern optimization step which is clearly crucial for sparse sketches . We also provide a more general method for worst-case guarantees and extend their method to k-means . 2 PRELIMINARIES . Our learned sketches have the sparsity pattern of the classical CountSketch . The construction of this sketch is described below . We also define affine -embeddings , which is a class of sketches that includes CountSketch . This -embedding property is desirable because it allows us to prove that certain sketching algorithms give ( 1 + ) -approximations . Definition 2.1 ( Classical CountSketch ) . The CountSketch ( abbreviated as CS ) matrix S has one non-zero entry in each column with a random location and random value in { ±1 } . Definition 2.2 ( Affine Embedding ) . Given a pair of matrices A and B , a matrix S is an affine -embedding if for all X of the appropriate shape , ‖S ( AX −B ) ‖2F = ( 1± ) ‖AX −B‖ 2 F . Notation . We denote the singular value decomposition ( SVD ) of A by A = UΣV > with orthogonal U , V and diagonal Σ. Relatedly , the Moore-Penrose pseudo-inverse of A is A† = V Σ−1U > , where Σ−1 is constructed by inverting the non-zero diagonal entries . 3 FRAMEWORK . We describe a framework for learned CountSketch that can be adopted by many different applications , including least-squares regression ( MRR ) , low-rank approximation ( LRA ) , and k-means clustering . We will return to these applications in the next section . In this section , we first describe how to optimize a CountSketch over a set of training samples . Then , we explain how to use the learned CountSketch to achieve good expected performance with worst-case guarantees . By running a “ fast comparison ” step before the approximation algorithm , we can do this in optimal time . SKETCH OPTIMIZATION . Most applications are optimization problems . That is , they are defined by an objective function , L ( · ) . 
For example , least-squares regression solves min X ‖AX −B‖2F given A ∈ Rn×d , B ∈ Rn×d′ . Of course , the optimal solution is a function of the inputs : X∗ = arg min X ‖AX −B‖2F = A †B However , in the sketching paradigm , we compute an approximately optimal solution as a function of the sketched ( compressed ) inputs . Taking CountSketch S ∈ Rm×n for m n , we have : X̂∗ = ( SA ) † ( SB ) Our goal is to minimize the expected approximation error with respect to S , which is constrained to the set of CountSketch-sparse matrices ( CS ) . However , this is simply equivalent to minimizing the objective value of the approximate solution . S∗ = arg min S∈CS E ( A , B ) ∼D [ LA , B ( ( SA ) † ( SB ) ) − LA , B ( X∗ ) ] = arg min S∈CS E ( A , B ) ∼D [ LA , B ( ( SA ) † ( SB ) ) ] For ease of notation , we will define G ( · ) as a function which maps a sketch and inputs to an approximate solution . G ( · ) is defined by the application-specific approximation algorithm . For MRR , G ( S , ( A , B ) ) = ( SA ) † ( SB ) . More generally , the sketch optimization objective is : S∗ = arg min S∈CS E A∼D [ LA ( G ( S , A ) ) ] ( 3.1 ) If the application is regression , we let A be ( A , B ) . We will solve this constrained optimization in two stages . For both stages , we approximate the expectation in empirical risk minimization ( ERM ) fashion . That is , we approximate the expectation over the true distribution by averaging over a sampled batch of the training set . Now , in the first optimization stage , we compute positions for the CountSketch-sparse nonzero entries . Then , in the second stage , we fix the positions and optimize the nonzero values . Stage 1 : Placing the nonzero entries . We want to maintain the sparsity pattern of CS ( one nonzero entry per column ) , but we are free to place that nonzero entry wherever we like for each column . A naı̈ve method would be to evaluate the objective for the exponential number of full placements . 
This is clearly intractable , so we consider a greedy alternative . In essence , we construct the sketch one nonzero entry at a time , and we choose the location of the next entry by minimizing ( 3.1 ) over the discrete set of possibilities . More precisely , we build the sketch S ∈ Rm×n iteratively , placing one nonzero entry at a time . For each nonzero entry , we consider m locations and 2 values for each location ( ±1 ) . We evaluate the sketch optimization objective ( 3.1 ) for all 2m incremental updates to S and choose the minimizing update . In the pseudo-code below , we iterate through the n columns of S , each of which contains a non-zero entry . Note that Sw , j = S + w ( ~ej ~ei > ) adds a single entry w to the i-th column , j-th row of the current , partially-constructed S. Algorithm 1 GREEDY STAGE Require : Atrain = { A1 , ... , AN } with Ai ∈ Rn×d ; sketch dimension m 1 : initialize S = Om×n 2 : for i = 1 to n do 3 : w∗ , j∗ = arg min w∈ { ±1 } , j∈ [ m ] ∑ Ai∈Atrain LAi ( G ( Sw , j , Ai ) ) where Sw , j = S + w ( ~ej ~ei > ) 4 : S [ j∗ , i ] = w∗ 5 : end for For some applications it can be inefficient to evaluate ( 3.1 ) , since it requires computing the approximate solution . For MRR and LRA , the approximate solution has a closed form , but for k-means , it must be computed iteratively . This is prohibitively expensive , since we perform many evaluations . In this case , we recommend finding a surrogate L ( · ) with a closed-form solution , as we illustrate in later sections . Stage 2 : Optimizing the nonzero values . We now fix the positions of the nonzero entries and optimize their values using gradient descent . To fix the positions , we represent S as just a vector of its nonzero entries , ~v ∈ Rn . We will denote H ( ~v ) : Rn → Rm×n as the function which maps this concise representation of S to the full matrix . H ( · ) depends on the positions computed in the last stage , which are fixed . 
Now, we simply differentiate the objective (3.1), with S = H(v), with respect to v:

E_{A∼D}[L_A(G(H(v), A))].

Algorithm 2 GRADIENT STAGE
Require: A_train = {A_1, ..., A_N} with A_i ∈ R^{n×d}; H(·) from Alg. 1; learning rate α
1: for i = 1 to n_iter do
2:   sample A_batch from A_train
3:   v ← v − α ( Σ_{A ∈ A_batch} ∂L_A(G(H(v), A)) / ∂v )
4: end for
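As a sketch of the gradient stage, the following toy fixes the positions, represents S by its value vector v, and descends the loss; finite-difference gradients stand in for automatic differentiation, and the sizes and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, d2, m = 20, 3, 2, 6

A = rng.standard_normal((n, d))
B = A @ rng.standard_normal((d, d2)) + 0.1 * rng.standard_normal((n, d2))

rows = rng.integers(0, m, size=n)  # positions fixed by stage 1 (here: random)

def H(v):
    # maps the concise value vector v to the full m x n sketch matrix
    S = np.zeros((m, n))
    S[rows, np.arange(n)] = v
    return S

def loss(v):
    S = H(v)
    X_hat = np.linalg.pinv(S @ A) @ (S @ B)
    return np.linalg.norm(A @ X_hat - B, "fro") ** 2

def num_grad(v, eps=1e-5):
    # central finite differences in place of autodiff (illustrative only)
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = eps
        g[i] = (loss(v + e) - loss(v - e)) / (2 * eps)
    return g

v = rng.choice([-1.0, 1.0], size=n)  # start from random signs
l0 = loss(v)
best = l0
for _ in range(50):
    v -= 1e-3 * num_grad(v)
    best = min(best, loss(v))
print(l0, best)  # loss typically decreases from the sign-only initialization
```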
The authors consider a specific sketching method, CountSketch, and three objective functions defined over the data design matrix: multiple-response regression (MRR), low-rank approximation (LRA), and k-means clustering. They compare the classical CountSketch, with a random choice of the {-1,+1}-valued sketching matrix, against: (1) gradient-descent optimization of the CountSketch weights, which was previously introduced for LRA and which the authors extend to MRR and k-means; (2) greedy optimization of the positions of the CountSketch nonzero entries.
SP:3bec1083d30b42438c1d66b4f382b80d34032b7e
Taming GANs with Lookahead-Minmax
Generative Adversarial Networks are notoriously challenging to train. The underlying minmax optimization is highly susceptible to the variance of the stochastic gradient and to the rotational component of the associated game vector field. To tackle these challenges, we extend the Lookahead algorithm, originally developed for single-objective minimization only, to minmax optimization. The backtracking step of our Lookahead-minmax naturally handles the rotational game dynamics, a property which was identified as key for enabling gradient descent ascent methods to converge on challenging examples often analyzed in the literature. Moreover, it implicitly handles high variance without using large mini-batches, known to be essential for reaching state-of-the-art performance. Experimental results on MNIST, SVHN, CIFAR-10, and ImageNet demonstrate a clear advantage of combining Lookahead-minmax with Adam or extragradient, in terms of performance and improved stability, for negligible memory and computational cost. Using 30-fold fewer parameters and 16-fold smaller minibatches, we outperform the reported performance of the class-dependent BigGAN on CIFAR-10 by obtaining an FID of 12.19 without using the class labels, bringing state-of-the-art GAN training within reach of common computational resources. Our source code is available: https://github.com/Chavdarova/LAGAN-Lookahead_Minimax. 1 INTRODUCTION. Gradient-based methods are the workhorse of machine learning. These methods optimize the parameters of a model with respect to a single objective f: X → R. However, increasing interest in multi-objective optimization arises in various domains—such as mathematics, economics, and multi-agent reinforcement learning (Omidshafiei et al., 2017)—where several agents aim at optimizing their own cost functions f_i: X_1 × ··· × X_N → R simultaneously.
A particularly successful class of algorithms of this kind are Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which consist of two players referred to as a generator and a discriminator. GANs were originally formulated as minmax optimization of f: X × Y → R (Von Neumann & Morgenstern, 1944), where the generator and the discriminator aim at minimizing and maximizing the same value function, see § 2. A natural generalization of gradient descent for minmax problems is the gradient descent ascent algorithm (GDA), which alternates between a gradient descent step for the min-player and a gradient ascent step for the max-player. This minmax training aims at finding a Nash equilibrium, where no player has an incentive to change its parameters. Despite the impressive quality of the samples generated by GANs, relative to classical maximum-likelihood-based generative models, these models remain notoriously difficult to train. In particular, poor performance (sometimes manifesting as "mode collapse"), brittle dependency on hyperparameters, or divergence are often reported. Consequently, obtaining state-of-the-art performance was shown to require large computational resources (Brock et al., 2019), making well-performing models unavailable for common computational budgets. It was empirically shown that: (i) GANs often converge to a locally stable stationary point that is not a differential Nash equilibrium (Berard et al., 2020); (ii) increased batch size improves GAN performance (Brock et al., 2019), in contrast to minimization (Defazio & Bottou, 2019; Shallue et al., 2018). A principal reason is attributed to the rotations arising due to the adversarial component of the vector field associated with the gradients of the two players' parameters (Mescheder et al., 2018; Balduzzi et al.
, 2018), which are atypical for minimization. More precisely, the Jacobian of the associated vector field (see def. in § 2) can be decomposed into a symmetric and an antisymmetric component (Balduzzi et al., 2018), which behave as a potential game (Monderer & Shapley, 1996) and a Hamiltonian game, resp. Games are often a combination of the two, making this general case harder to solve. In the context of single-objective minimization, Zhang et al. (2019) recently proposed the Lookahead algorithm, which intuitively obtains an update direction by "looking ahead" at a sequence of parameters (the fast weights) generated by an inner optimizer; these change with higher variance due to the stochasticity of the gradient estimates. Lookahead was shown to improve stability during training and to reduce the variance of the so-called slow weights. Contributions. Our contributions can be summarized as follows: • We propose Lookahead-minmax for optimizing minmax problems, which applies extrapolation in the joint parameter space (see Alg. 1), so as to account for the rotational component of the associated game vector field (defined in § 2). • In the context of: (i) single-objective minimization: building on insights of Wang et al. (2020), who argue that Lookahead can be interpreted as an instance of local SGD, we derive improved convergence guarantees for the Lookahead algorithm; (ii) two-player games: we explain why Lookahead-minmax suppresses the rotational part in a simple bilinear game, and prove its convergence for a given convergent base optimizer; in § 3 and 4, resp.
• We motivate the use of Lookahead-minmax for games by considering the extensively studied toy bilinear example (Goodfellow, 2016) and show that: (i) the use of lookahead allows for convergence of the otherwise diverging GDA on the classical bilinear game in the full-batch setting (see § 4.2.1), and (ii) it yields good performance on challenging stochastic variants of this game, despite the high variance (see § 4.2.2). • We empirically benchmark Lookahead-minmax on GANs on four standard datasets—MNIST, CIFAR-10, SVHN and ImageNet—on two different models (DCGAN & ResNet), with standard optimization methods for GANs, GDA and extragradient, called LA-AltGAN and LA-ExtraGradient, resp. We consistently observe both stability and performance improvements at a negligible additional cost that does not require additional forward and backward passes, see § 5. 2 BACKGROUND. GAN formulation. Given the data distribution p_d, the generator is a mapping G: z ↦ x, where z is sampled from a known distribution z ∼ p_z and ideally x ∼ p_d. The discriminator D: x ↦ D(x) ∈ [0, 1] is a binary classifier whose output represents a conditional probability estimate that an x, sampled from a balanced mixture of real data from p_d and G-generated data, is actually real. The optimization of a GAN is formulated as a differentiable two-player game where the generator G with parameters θ and the discriminator D with parameters ϕ aim at minimizing their own cost functions L^θ and L^ϕ, respectively, as follows:

θ* ∈ arg min_{θ∈Θ} L^θ(θ, ϕ*) and ϕ* ∈ arg min_{ϕ∈Φ} L^ϕ(θ*, ϕ). (2P-G)

When L^ϕ = −L^θ the game is called zero-sum and equation 2P-G is a minmax problem. Minmax optimization methods.
As GDA does not converge for some simple convex-concave games, Korpelevich (1976) proposed the extragradient method, where a "prediction" step is performed to obtain an extrapolated point (θ_{t+1/2}, ϕ_{t+1/2}) using GDA, and the gradients at the extrapolated point are then applied to the current iterate (θ_t, ϕ_t) as follows:

Extrapolation: θ_{t+1/2} = θ_t − η ∇_θ L^θ(θ_t, ϕ_t), ϕ_{t+1/2} = ϕ_t − η ∇_ϕ L^ϕ(θ_t, ϕ_t)
Update: θ_{t+1} = θ_t − η ∇_θ L^θ(θ_{t+1/2}, ϕ_{t+1/2}), ϕ_{t+1} = ϕ_t − η ∇_ϕ L^ϕ(θ_{t+1/2}, ϕ_{t+1/2}) (EG)

where η denotes the step size. In the context of zero-sum games, the extragradient method converges for any convex-concave function L and any closed convex sets Θ and Φ (Facchinei & Pang, 2003). The joint vector field. Mescheder et al. (2017) and Balduzzi et al. (2018) argue that the vector field obtained by concatenating the gradients of the two players gives more insight into the dynamics than studying the loss surface. The joint vector field (JVF) and the Jacobian of the JVF are defined as:

v(θ, ϕ) = (∇_θ L^θ(θ, ϕ), ∇_ϕ L^ϕ(θ, ϕ)), and v′(θ, ϕ) = [ ∇²_θ L^θ(θ, ϕ), ∇_ϕ∇_θ L^θ(θ, ϕ) ; ∇_θ∇_ϕ L^ϕ(θ, ϕ), ∇²_ϕ L^ϕ(θ, ϕ) ], resp. (JVF)

Rotational component of the game vector field. Berard et al. (2020) show empirically that GANs converge to a locally stable stationary point (Verhulst, 1990, LSSP) that is not a differential Nash equilibrium – defined as a point where the norm of the Jacobian is zero and where the Hessians of both players are positive definite, see § C. An LSSP is defined as a point (θ*, ϕ*) where:

v(θ*, ϕ*) = 0, and ℜ(λ) > 0, ∀λ ∈ Sp(v′(θ*, ϕ*)), (LSSP)

where Sp(·) denotes the spectrum of v′(·) and ℜ(·) the real part. In summary, (i) if all the eigenvalues of v′(θ_t, ϕ_t) have positive real part, the point (θ_t, ϕ_t) is an LSSP, and (ii) if the eigenvalues of v′(θ_t, ϕ_t) have an imaginary part, the dynamics of the game exhibit rotations.
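The EG update can be checked on the classical zero-sum bilinear toy game f(θ, ϕ) = θϕ, where the min-player controls θ and the max-player controls ϕ (so the ϕ-step is an ascent on f). A minimal sketch, with an illustrative step size and step count:

```python
import numpy as np

eta, steps = 0.5, 100

def gda(theta, phi):
    # simultaneous gradient descent ascent on f(theta, phi) = theta * phi
    for _ in range(steps):
        theta, phi = theta - eta * phi, phi + eta * theta
    return theta, phi

def extragradient(theta, phi):
    for _ in range(steps):
        # extrapolation ("prediction") step via GDA
        th_half, ph_half = theta - eta * phi, phi + eta * theta
        # apply the gradients at the extrapolated point to the current iterate
        theta, phi = theta - eta * ph_half, phi + eta * th_half
    return theta, phi

gda_norm = np.hypot(*gda(1.0, 1.0))              # spirals outward
eg_norm = np.hypot(*extragradient(1.0, 1.0))     # contracts to (0, 0)
print(gda_norm, eg_norm)
```

Per step, GDA multiplies the distance to the equilibrium by sqrt(1 + η²) > 1, whereas EG multiplies it by sqrt((1 − η²)² + η²) < 1 for 0 < η < 1, which is exactly the convergent-vs-divergent behavior the text describes.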
Impact of noise due to stochastic gradient estimates on games. Chavdarova et al. (2019) point out that, relative to minimization, noise impedes game optimization more, and show that there exists a class of zero-sum games for which the stochastic extragradient method diverges. Intuitively, bounded noise of the stochastic gradient hurts convergence because, with higher probability, the noisy gradient points in a direction that makes the algorithm diverge from the equilibrium, due to the properties of v′(·) (see Fig. 1, Chavdarova et al., 2019). 3 LOOKAHEAD FOR SINGLE OBJECTIVE. In the context of single-objective minimization, Zhang et al. (2019) recently proposed the Lookahead algorithm, where at every step t: (i) a copy of the current iterate ω̃_t is made: ω̃_t ← ω_t, (ii) ω̃_t is then updated k ≥ 1 times, yielding ω̃_{t+k}, and finally (iii) the actual update ω_{t+1} is obtained as a point that lies on a line between the two iterates, the current ω_t and the predicted one ω̃_{t+k}:

ω_{t+1} ← ω_t + α(ω̃_{t+k} − ω_t), where α ∈ [0, 1]. (LA)

Algorithm 1 General Lookahead-Minmax pseudocode.
1: Input: stopping time T, learning rates η_θ, η_ϕ, initial weights θ_0, ϕ_0, lookahead hyperparameters k and α, losses L^θ, L^ϕ, base-optimizer updates d^θ(·), d^ϕ(·) defined in § 4, real and noise data distributions p_d and p_z, resp.
2: θ̃_{0,0}, ϕ̃_{0,0} ← θ_0, ϕ_0
3: for t ∈ 0, ..., T−1 do
4:   for i ∈ 0, ..., k−1 do
5:     Sample x, z ∼ p_d, p_z
6:     ϕ̃_{t,i+1} = ϕ̃_{t,i} − η_ϕ d^ϕ_{t,i}(θ̃_{t,i}, ϕ̃_{t,i}, x, z)
7:     Sample z ∼ p_z
8:     θ̃_{t,i+1} = θ̃_{t,i} − η_θ d^θ_{t,i}(θ̃_{t,i}, ϕ̃_{t,i}, z)
9:   end for
10:  ϕ_{t+1} = ϕ_t + α_ϕ(ϕ̃_{t,k} − ϕ_t)
11:  θ_{t+1} = θ_t + α_θ(θ̃_{t,k} − θ_t)
12:  θ̃_{t+1,0}, ϕ̃_{t+1,0} ← θ_{t+1}, ϕ_{t+1}
13: end for
14: Output: θ_T, ϕ_T

Lookahead uses two additional hyperparameters: (i) k, the number of steps used to obtain the prediction ω̃_{t+k}, and (ii) α, which controls how large a step we make towards the predicted iterate ω̃: the larger α, the closer to ω̃, and when α = 1, equation LA is equivalent to regular optimization (Lookahead has no impact). Besides the extra hyperparameters, LA was shown to make the base optimizer more resilient to the choice of its hyperparameters, to achieve faster convergence across different tasks, and to reduce the variance of the gradient estimates (Zhang et al., 2019). Theoretical analysis. Zhang et al. (2019) study LA on quadratic functions, and Wang et al. (2020) recently provided an analysis for general smooth non-convex functions. One of their main observations is that LA can be viewed as an instance of local SGD (or parallel SGD; Stich, 2019; Koloskova et al., 2020; Woodworth et al., 2020b), which allows us to further tighten prior results. Theorem 1. Let f: R^d → R be L-smooth (possibly non-convex) and assume access to unbiased stochastic gradients with σ²-bounded variance. Then the LA optimizer with hyperparameters (k, α) converges to a stationary point, E‖∇f(ω_out)‖² ≤ ε (for the proof refer to Appendix A), after at most

O( σ²/ε² + 1/ε + ((1 − α)/α) ( σ√(k − 1)/ε^{3/2} + k/ε ) )

iterations. Here ω_out denotes a uniformly at random chosen iterate of LA. Remark 1. When in addition f is also quadratic, the complexity estimate improves to O(σ²/ε² + 1/ε).
The asymptotically most significant term, O(σ²/ε²), matches the corresponding term in the SGD convergence rate for all choices of α ∈ (0, 1], and when α → 1, the same convergence guarantees as for SGD can be attained. When σ² = 0, the rate improves to O(1/ε), in contrast to O(1/ε²) in (Wang et al., 2020). For small values of α, the worst-case complexity estimates of LA can in general be k times worse than for SGD (except for quadratic functions, where the rates match). Deriving tighter analyses for LA that corroborate the observed practical advantages is still an open problem.
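The full-batch bilinear claim of § 4.2.1 can be illustrated with a minimal sketch on f(θ, ϕ) = θϕ: simultaneous GDA spirals outward, while the Lookahead-minmax backtracking step pulls the joint iterate back toward the slow weights and converges. The values of η, k, and α below are illustrative choices, not the paper's settings.

```python
import numpy as np

eta, k, alpha, outer = 0.1, 10, 0.5, 100

def gda_step(theta, phi):
    # simultaneous GDA on the bilinear game f(theta, phi) = theta * phi
    return theta - eta * phi, phi + eta * theta

# plain GDA for the same total number of inner steps: diverges
theta, phi = 1.0, 1.0
for _ in range(outer * k):
    theta, phi = gda_step(theta, phi)
gda_norm = np.hypot(theta, phi)

# Lookahead-minmax: k fast GDA steps, then a backtracking step
# in the joint parameter space toward the slow weights
theta, phi = 1.0, 1.0
for _ in range(outer):
    th_fast, ph_fast = theta, phi
    for _ in range(k):
        th_fast, ph_fast = gda_step(th_fast, ph_fast)
    theta = theta + alpha * (th_fast - theta)
    phi = phi + alpha * (ph_fast - phi)
la_norm = np.hypot(theta, phi)

print(gda_norm, la_norm)  # GDA blows up; Lookahead-minmax contracts to (0, 0)
```

Intuitively, the k fast GDA steps mostly rotate the joint iterate around the equilibrium; averaging the start and end points of that arc lands inside it, which is why the slow update contracts even though each fast step expands.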
This work extends the recently proposed lookahead optimizer (which was designed for single-objective optimization) to minimax optimization, particularly GAN training. The authors claim that the backtracking step in lookahead optimizer alleviates the notorious rotational behavior in GAN dynamics. Moreover, the authors argue that the lookahead optimizer implicitly handles the high variance in the small-batch setting. Both arguments are backed up by toy experiments on stochastic bilinear games. Finally, on standard image datasets, the lookahead minimax algorithm outperforms some popular algorithms and achieves state-of-the-art performance on CIFAR-10.
SP:ebf3053dcae6ca7e0bdbc98e5a71151c55eb384f
This paper proposes a Lookahead-minmax algorithm for optimizing minmax problems such as GANs, which updates the parameters (of both the generator and the discriminator) with extrapolation in the joint parameter space. With a bilinear example, the authors show that the use of Lookahead-minmax allows for convergence in cases where other methods do not, and yields good performance under high variance. Experiments on generative performance on several well-known public datasets demonstrate the effectiveness of the proposed method.
Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms
1 INTRODUCTION. The multi-armed bandit (MAB) problem is to maximize the cumulative reward of a player throughout a bandit game by choosing different arms at each time step. It is equivalent to minimizing the regret, defined as the difference between the best reward that can be achieved and the actual reward gained by the player. Formally, given time horizon T, at time step t ≤ T the player chooses one arm a_t among K arms, receives r^t_{a_t} among rewards r^t = (r^t_1, r^t_2, ..., r^t_K), and maximizes the total reward Σ_{t=1}^T r^t_{a_t}, or, equivalently, minimizes the regret. EXP-type MAB algorithms are computationally efficient and come with abundant theoretical analyses. In EXP3.P, each arm has a trust coefficient (weight). The player samples each arm with probability equal to the sum of its normalized weight and a bias term, receives the reward of the sampled arm, and exponentially updates the weights based on the corresponding reward estimates. It achieves regret of order O(√T) with high probability. In EXP4, there is an arbitrary number of experts; each has a sample rule over actions and a weight. The player samples according to the weighted average of the experts' sample rules and updates the weights respectively. Contextual bandit is a variant of MAB obtained by adding a context or state space S. At time step t, the player observes context s_t ∈ S, with s_{1:T} = (s_1, s_2, ..., s_T) being independent. Rewards r^t follow F(μ(s_t)), where F is any distribution and μ(s_t) is the mean vector that depends on state s_t. Reinforcement Learning (RL) generalizes the contextual bandit, where state and reward transitions follow a Markov Decision Process (MDP) represented by a transition kernel P(s_{t+1}, r_t | a_t, s_t). A key challenge in RL is the trade-off between exploration and exploitation. Exploration is to encourage the player to try new arms in MAB, or new actions in RL, to understand the game better.
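The weight-based sampling and exponential update just described can be sketched as a simplified EXP3-style bandit loop; this is an illustrative variant (mixing coefficient γ, Bernoulli arms, and the renormalization step are assumptions), not the exact EXP3.P algorithm with its bias terms.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, gamma = 2, 2000, 0.1
means = np.array([0.9, 0.2])     # Bernoulli arms; arm 0 is best

w = np.ones(K)                   # trust coefficients (weights)
counts = np.zeros(K, dtype=int)
for t in range(T):
    # sampling probability: normalized weight plus a uniform bias term
    p = (1 - gamma) * w / w.sum() + gamma / K
    a = rng.choice(K, p=p)
    r = float(rng.random() < means[a])
    counts[a] += 1
    # importance-weighted reward estimate and exponential weight update
    x_hat = r / p[a]
    w[a] *= np.exp(gamma * x_hat / K)
    w /= w.max()                 # renormalize to avoid overflow

print(counts)  # the better arm is pulled far more often
```

The γ/K bias term keeps every arm's sampling probability bounded away from zero, which is what keeps the importance-weighted estimates well behaved and forces continued exploration.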
Exploration helps the player plan for the future, but at the price of potentially lowering the current reward. Exploitation aims to exploit currently known states and arms to maximize the current reward, but it potentially prevents the player from gaining the information needed to increase future reward. To maximize the cumulative reward, the player needs to learn the game by exploration, while guaranteeing the current reward by exploitation. How to incentivize exploration has been a main focus in RL. Since RL is built on MAB, it is natural to extend MAB techniques to RL, and UCB is such a success. UCB (Auer et al. (2002a)) motivates count-based exploration (Strehl and Littman, 2008) in RL and the subsequent pseudo-count exploration (Bellemare et al., 2016). New deep RL exploration algorithms have been proposed recently. Using deep neural networks to keep track of the Q-values in RL by means of Q-networks is called DQN (Mnih et al. (2013)). This combination of deep learning and RL has shown great success. ε-greedy in Mnih et al. (2015) is a simple exploration technique using DQN. Besides ε-greedy, intrinsic-model exploration computes intrinsic rewards by focusing on experiences. Intrinsic rewards directly measure and incentivize exploration when added to the extrinsic (actual) rewards of RL, e.g., DORA (Fox et al., 2018) and (Stadie et al., 2015). Random Network Distillation (RND) (Burda et al., 2018) is a more recent suggestion relying on a fixed target network. A drawback of RND is its local focus without global exploration. In order to address the weak points of these various exploration algorithms in the RL context, the notion of experts is natural, and thus EXP-type MAB algorithms are appropriate. The allowance of arbitrary experts provides exploration for harder contextual bandits and hence exploration possibilities for RL. We develop an EXP4 exploration algorithm for RL that relies on several general experts.
This is the first RL algorithm using several exploration experts, enabling global exploration. In the computational study we focus on DQN with two agents consisting of RND and ε-greedy DQN. We implement the RL EXP4 algorithm on the hard-to-explore RL game Montezuma's Revenge and compare it with the benchmark algorithm RND (Burda et al. (2018)). The numerical results show that the algorithm achieves more exploration than RND and gains the ability of global exploration by not getting stuck in the local maxima of RND. Its total reward also increases with training. Overall, our algorithm improves exploration and exploitation on the benchmark game and demonstrates a learning process in RL. Reward in RL is in many cases unbounded, which relates to unbounded MAB rewards. There are three major versions of MAB: adversarial, stochastic, and the herein introduced Gaussian. For adversarial MAB, the rewards $r^t$ of the K arms can be chosen arbitrarily by adversaries at step t. For stochastic MAB, the rewards at different steps are assumed to be i.i.d. and the rewards across arms are independent; it is assumed that $0 \le r^t_i \le 1$ for any arm i and step t. For Gaussian MAB, rewards $r^t$ follow the multivariate normal $N(\mu, \Sigma)$, with $\mu$ the mean vector and $\Sigma$ the covariance matrix of the K arms. Here the rewards are neither bounded nor independent among the arms. For this reason the introduced Gaussian MAB reflects the RL setting and is the subject of our MAB analyses of EXP3.P. EXP-type algorithms (Auer et al. (2002b)) are optimal in the two classical MABs: Auer et al. (2002b) show lower and upper bounds on regret of order $O(\sqrt{T})$ for adversarial MAB and of order $O(\log T)$ for stochastic MAB. All of the proofs of these regret bounds for EXP-type algorithms rely on the bounded-reward assumption, which does not hold for Gaussian MAB.
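The Gaussian MAB setting above is easy to simulate. The sketch below (function and parameter names are ours; it assumes a Cholesky factor of the covariance matrix Σ is supplied) draws correlated, unbounded rewards for all K arms at each step and measures regret against the single best arm in hindsight, the standard definition used in EXP-type analyses.

```python
import random

def simulate_gaussian_mab(mu, cov_chol, policy, T, seed=0):
    """Simulate a Gaussian MAB: at each step, rewards for all K arms are
    drawn from N(mu, Sigma) via the Cholesky factor `cov_chol` (so rewards
    may be correlated across arms and are unbounded). `policy(t)` returns
    the arm index chosen at step t. Returns the regret: the cumulative
    reward of the best single arm in hindsight minus the reward gained."""
    rng = random.Random(seed)
    K = len(mu)
    totals = [0.0] * K  # per-arm cumulative rewards, for the hindsight benchmark
    gained = 0.0
    for t in range(T):
        z = [rng.gauss(0.0, 1.0) for _ in range(K)]
        r = [mu[i] + sum(cov_chol[i][j] * z[j] for j in range(K))
             for i in range(K)]
        a = policy(t)
        gained += r[a]
        for i in range(K):
            totals[i] += r[i]
    return max(totals) - gained
```

For a policy that always plays one fixed arm, the regret is non-negative by construction, since the benchmark maximum includes that arm's own cumulative reward.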
Therefore, the regret bounds for Gaussian MAB with unbounded rewards studied herein differ significantly from prior works. We show both lower and upper bounds on the regret of Gaussian MAB under certain assumptions; some analyses even hold for more generally distributed MAB. The upper bounds carry over ideas from the analysis of the EXP3.P algorithm in Auer et al. (2002b) for bounded MAB to our unbounded MAB, while the lower bounds rest on a brand-new construction of instances. Precisely, we derive lower bounds of order $\Omega(T)$ for certain fixed T and upper bounds of order $O^*(\sqrt{T})$ for T large enough. The question of bounds for any value of T remains open. The main contributions of this work are as follows. On the analytical side, we introduce Gaussian MAB with the unique aspect and challenge of unbounded rewards. We provide the very first regret lower bound in such a case by constructing a novel family of Gaussian bandits, and we are able to analyze the EXP3.P algorithm for Gaussian MAB; the unbounded reward poses a non-trivial challenge in the analyses. We also provide the very first extension of EXP4 to RL exploration and show its superior performance on two hard-to-explore RL games. A literature review is provided in Section 2. Then in Section 3 we exhibit upper bounds for unbounded MAB under the EXP3.P algorithm and lower bounds, respectively. Section 4 discusses the EXP4 algorithm for RL exploration. Finally, in Section 5, we present numerical results related to the proposed algorithm. 2 LITERATURE REVIEW . The importance of exploration in RL is well understood. Count-based exploration in RL relies on UCB. Strehl and Littman (2008) develop the Bellman value iteration $V(s) = \max_a \hat{R}(s,a) + \gamma E[V(s')] + \beta N(s,a)^{-1/2}$, where $N(s,a)$ is the number of visits to $(s,a)$ for state s and action a. The value $N(s,a)^{-1/2}$ is positively correlated with the curiosity about $(s,a)$ and encourages exploration.
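The count-based bonus in the Bellman iteration of Strehl and Littman can be sketched in a tabular toy form. This is a simplified illustration under our own assumptions: plain dictionaries stand in for the estimated rewards, transition model, and visit counts, and the default constants are illustrative.

```python
def bonus_q_update(Q, R_hat, P_hat, counts, gamma=0.99, beta=1.0):
    """One sweep of tabular value iteration with a count-based exploration
    bonus, in the spirit of V(s) = max_a R(s,a) + gamma E[V(s')]
    + beta * N(s,a)^(-1/2). Q maps state -> {action: value}, R_hat and
    counts map (state, action) -> float/int, and P_hat maps
    (state, action) -> {next_state: probability}."""
    V = {s: max(Q[s].values()) for s in Q}
    newQ = {}
    for s in Q:
        newQ[s] = {}
        for a in Q[s]:
            n = counts[(s, a)]
            bonus = beta * n ** -0.5 if n > 0 else beta  # rare (s,a) => big bonus
            expected = sum(p * V[s2] for s2, p in P_hat[(s, a)].items())
            newQ[s][a] = R_hat[(s, a)] + gamma * expected + bonus
    return newQ
```

Because the bonus decays as $N(s,a)^{-1/2}$, an action tried only once receives a much larger optimism term than one tried a hundred times, which is exactly what steers the agent toward under-visited pairs.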
This method is limited to tabular model-based MDPs with small state spaces, while Bellemare et al. (2016) introduce Pseudo-Count exploration for non-tabular MDPs with density models. In conjunction with DQN, ε-greedy in Mnih et al. (2015) is a simple exploration technique. Besides ε-greedy, intrinsic-model exploration computes intrinsic rewards from the accuracy of a model trained on experiences. Intrinsic rewards directly measure and incentivize exploration when added to the extrinsic (actual) rewards of RL, e.g. DORA in Fox et al. (2018) and Stadie et al. (2015). Intrinsic rewards in Stadie et al. (2015) are defined as $e(s,a) = \|\sigma(s') - M_\phi(\sigma(s), a)\|_2^2$, where $M_\phi$ is a parametric model, $s'$ is the next state, and $\sigma$ is input extraction. The intrinsic reward $e(s,a)$ relies on the stochastic transition from s to $s'$ and thus brings noise into exploration. Random Network Distillation (RND) in Burda et al. (2018) addresses this by defining $e(s,a) = \|\hat{f}(s') - f(s')\|_2^2$, where $\hat{f}$ is a parametric model and $f$ is a randomly initialized but fixed model. Here $e(s,a)$, independent of the transition, depends only on state $s'$, which drives RND to outperform other algorithms on Montezuma's Revenge. None of these algorithms use several experts, which is a significant departure from our work. In terms of MAB regret analyses focusing on EXP-type algorithms, Auer et al. (2002b) first introduce EXP3.P for bounded adversarial MAB and EXP4 for contextual bandits. Under the EXP3.P algorithm, an upper bound on regret of order $O(\sqrt{T})$ is achieved, which matches the lower bound and hence establishes that EXP3.P is optimal. However, these regret bounds are not applicable to Gaussian MAB since rewards can be infinite. Meanwhile, for unbounded MAB, Srinivas et al. (2010) demonstrate a regret bound of order $O(\sqrt{T \gamma_T})$ for noisy Gaussian-process bandits, where each reward observation contains noise.
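The RND intrinsic reward $e(s,a) = \|\hat{f}(s') - f(s')\|_2^2$ can be illustrated with tiny random linear maps standing in for the target and predictor networks. This is a deliberate simplification under our own naming: the actual method uses deep networks and trains the predictor on visited states so the error shrinks where the agent has been, but the prediction-error structure is the same.

```python
import random

def make_random_net(din, dout, seed):
    """A fixed random linear map (a weight matrix), standing in for the
    randomly initialized RND target network f or the predictor f_hat."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(din)] for _ in range(dout)]

def apply_net(W, x):
    """Apply the linear map W to input vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rnd_bonus(predictor, target, s):
    """RND-style intrinsic reward e(s') = ||f_hat(s') - f(s')||_2^2:
    the squared prediction error of the predictor against the fixed,
    randomly initialized target. It depends only on the state s',
    not on the transition that produced it."""
    f_hat, f = apply_net(predictor, s), apply_net(target, s)
    return sum((a - b) ** 2 for a, b in zip(f_hat, f))
```

A state the predictor has been trained on yields a small bonus, while a novel state yields a large one; in the limit where the predictor matches the target exactly, the bonus vanishes.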
The information gain $\gamma_T$ is not well-defined in a noiseless Gaussian setting. For noiseless Gaussian bandits, Grünewälder et al. (2010) show both optimal lower and upper bounds on regret, but their regret definition is not consistent with the one used in Auer et al. (2002b). We establish a lower bound of order $\Omega(T)$ for certain T and an asymptotic upper bound of order $O^*(\sqrt{T})$ on the regret of unbounded noiseless Gaussian MAB, following the standard definition of regret.
This paper contributes to the study of EXP-based algorithms in two aspects. One is theoretical: it analyzes lower and upper regret bounds of EXP3.P in the Gaussian multi-armed bandit setting, where the reward can be unbounded. The other is empirical: it applies EXP4, originally developed for MAB, to RL applications and demonstrates its strong empirical performance.
Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms
The authors analyze the EXP3.P algorithm for the case of unbounded rewards, in the sense that the rewards are governed by a Gaussian distribution. They first demonstrate a regret lower bound on Gaussian MABs when the time horizon is bounded from above. Then they proceed to the analysis of the EXP3.P algorithm on Gaussian MABs and establish a regret bound similar to that of Auer et al. (2002b). Finally, they apply EXP-type updates, where an expert corresponds to a Q-learning network, in the EXP4-RL algorithm and evaluate it on multiple RL instances.
MLR-SNet: Transferable LR Schedules for Heterogeneous Tasks
The learning rate (LR) is one of the most important hyper-parameters in stochastic gradient descent (SGD) for deep neural network (DNN) training and generalization. However, current hand-designed LR schedules need a manually pre-specified fixed form, which limits their ability to adapt to non-convex optimization problems given the significant variation of training dynamics. Moreover, a proper LR schedule always needs to be searched from scratch for new tasks. To address these issues, we propose to parameterize LR schedules with an explicit mapping formulation, called MLR-SNet. The learnable structure gives MLR-SNet more flexibility to learn a proper LR schedule that complies with the training dynamics of DNNs. Image and text classification benchmark experiments substantiate the capability of our method for achieving proper LR schedules. Moreover, the meta-learned MLR-SNet is plug-and-play and generalizes to new heterogeneous tasks. We transfer our meta-trained MLR-SNet to tasks with different training epochs, network architectures, and datasets, especially the large-scale ImageNet dataset, and achieve comparable performance with hand-designed LR schedules. Finally, MLR-SNet achieves better robustness when training data are biased with corrupted noise. 1 INTRODUCTION . Stochastic gradient descent (SGD) and its many variants (Robbins & Monro, 1951; Duchi et al., 2011; Zeiler, 2012; Tieleman & Hinton, 2012; Kingma & Ba, 2015) have served as the cornerstone of modern machine learning with big data. It has been empirically shown that DNNs achieve state-of-the-art generalization performance on a wide variety of tasks when trained with SGD (Zhang et al., 2017). Several recent studies observe that SGD tends to select so-called flat minima (Hochreiter & Schmidhuber, 1997a; Keskar et al., 2017), which seem to generalize better in practice.
Scheduling the learning rate (LR) for SGD is one of the most widely studied ways to improve SGD training of DNNs. Specifically, it has been experimentally studied how the LR influences the minima found by SGD (Jastrzebski et al., 2017). Theoretically, Wu et al. (2018a) show that the LR plays an important role in minima selection from a dynamical-stability perspective, and He et al. (2019) provide a PAC-Bayes generalization bound for DNNs trained by SGD that is correlated with the LR. In short, finding a proper LR schedule strongly influences the generalization performance of DNNs, which has been widely studied recently (Bengio, 2012; Schaul et al., 2013; Nar & Sastry, 2018). There are mainly three kinds of hand-designed LR schedules: (1) Pre-defined LR policies, such as decaying or cyclic LR (Gower et al., 2019; Loshchilov & Hutter, 2017), are mostly used in current DNN training and bring large improvements in training efficiency. Some theoretical works suggest that a decaying schedule can yield faster convergence (Ge et al., 2019; Davis et al., 2019) or avoid strict saddles (Lee et al., 2019; Panageas et al., 2019) under mild conditions. (2) LR search methods from traditional convex optimization (Nocedal & Wright, 2006) can be extended to DNN training by searching the LR adaptively in each step, such as Polyak's update rule (Rolinek & Martius, 2018), the Frank-Wolfe algorithm (Berrada et al., 2019), and Armijo line search (Vaswani et al., 2019). (3) Adaptive gradient methods like Adam (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2015) adapt the LR for each parameter separately according to gradient information. Although the above LR schedules (as depicted in Fig. 1(a) and 1(b)) can achieve competitive results on their learning tasks, they still have evident deficiencies in practice.
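The pre-defined policies of kind (1) are simple closed forms. The sketch below shows a step-decay schedule and a cosine-annealed cyclic schedule in the spirit of warm restarts (Loshchilov & Hutter, 2017); function names and constants are our own illustrative choices, not the paper's.

```python
import math

def step_decay(lr0, step_size, factor, t):
    """Pre-defined decaying LR: multiply the initial LR by `factor`
    once every `step_size` epochs (e.g. x0.1 every 30 epochs)."""
    return lr0 * factor ** (t // step_size)

def cosine_cyclic(lr_min, lr_max, period, t):
    """Cyclic LR: cosine annealing from lr_max down to lr_min over each
    period, then restarting, as in SGDR-style warm restarts."""
    phase = (t % period) / period
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * phase))
```

Both are fixed functions of the epoch index t alone, which is exactly the limitation the paper targets: they cannot react to the observed training loss.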
On the one hand, these policies need a manually pre-specified form of LR schedule and thus have limited flexibility to adapt to non-convex optimization problems given the significant variation of training dynamics. On the other hand, when solving new heterogeneous tasks, a proper LR schedule always needs to be searched from scratch, and the involved hyper-parameters need to be tuned. This process is expensive in time and computation, which further raises the difficulty of applying these methods in real problems. To alleviate the aforementioned issues, this paper presents a model that learns a plug-and-play LR schedule. The main idea is to parameterize the LR schedule as an LSTM network (Hochreiter & Schmidhuber, 1997b), which is capable of dealing with such a problem with long-term information dependencies. As shown in Fig. 1(c), the proposed Meta-LR-Schedule-Net (MLR-SNet) learns an explicit loss-LR dependency. In a nutshell, this paper makes the following three contributions. (1) We propose MLR-SNet to learn an adaptive LR schedule, which can adjust the LR based on the current training loss as well as information delivered from past training histories stored in the MLR-SNet. Due to its parameterized form, MLR-SNet can be more flexible than hand-designed policies in finding a proper LR schedule for a specific learning task. Fig. 1(d) and 1(e) show our learned LR schedules, which have a similar tendency to pre-defined policies but more variation at their locality. This validates the efficacy of our method for adaptively adjusting the LR according to training dynamics. (2) With an explicit parameterized structure, the meta-trained MLR-SNet can be transferred to new heterogeneous tasks (the meta-test stage), including different training epochs, network architectures, and datasets.
Experimental results verify that our plug-and-play LR schedules can achieve comparable performance while introducing no hyper-parameters, in contrast to traditional LR schedules. This potentially saves substantial labor and computation cost in real-world applications. (3) MLR-SNet is meta-learned to improve generalization performance on unseen data. We validate that, with the guidance of clean data, our MLR-SNet achieves better robustness than hand-designed LR schedules when training data are biased with corrupted noise. 2 RELATED WORK . Meta learning for optimization. Meta learning has a long history in psychology (Ward, 1937; Lake et al., 2017). Meta learning for optimization dates back to the 1980s and 1990s (Schmidhuber, 1992; Bengio et al., 1991), aiming to meta-learn the optimization process of learning itself. Recently, Andrychowicz et al. (2016); Ravi & Larochelle (2017); Chen et al. (2017); Wichrowska et al. (2017); Li & Malik (2017); Lv et al. (2017) have attempted to scale this idea to larger DNN optimization problems. The main idea is to construct a meta-learner as the optimizer, which takes gradients as input and outputs the whole updating rule. These approaches aim to select appropriate training algorithms, schedule the LR, and tune other hyper-parameters automatically. Beyond continuous optimization, some works apply these ideas to other optimization problems, such as black-box functions (Chen et al., 2017), few-shot learning (Li et al., 2017), a model's curvature (Park & Oliva, 2019), evolution strategies (Houthooft et al., 2018), and combinatorial functions (Rosenfeld et al., 2018). Though faster than traditional optimizers at decreasing the training loss in some cases, the learned optimizers may not always generalize well to diverse problems, especially longer horizons (Lv et al., 2017) and large-scale optimization problems (Wichrowska et al., 2017). Moreover, they cannot be guaranteed to output a proper descent direction in each iteration of DNN training, since they let all parameters share one small net and ignore the relationships among parameters. Our proposed method attempts to learn an adaptive LR schedule rather than the whole update rule. This makes it easy to learn, and the meta-learned LR schedule can be transferred to new heterogeneous tasks. HPO and LR schedule adaptation. Hyper-parameter optimization (HPO) has historically been investigated as selecting proper values of algorithm hyper-parameters to obtain better performance on a validation set (see Hutter et al. (2019) for an overview). Typical methods include grid search, random search (Bergstra & Bengio, 2012), Bayesian optimization (Snoek et al., 2012), and gradient-based methods (Franceschi et al., 2017; Shu et al., 2020a;b). Recently, some works attempt to find a proper LR schedule within the framework of gradient-based HPO, which can be solved by bilevel optimization (Franceschi et al., 2017; Baydin et al., 2018). However, most HPO techniques tend to suffer from short-horizon bias and easily find a bad minimum (Wu et al., 2018b). Our MLR-SNet has an explicit functional form, which makes the optimization of LR schedules more robust and effective. Transfer to heterogeneous tasks. Transfer learning (Pan & Yang, 2009) aims to transfer knowledge obtained from a source task to help learning on a target task. Most transfer learning methods assume the source and target tasks share the same instance, feature, or model spaces (Yang et al., 2020), which greatly limits their applications. Recently, meta learning (Finn et al., 2017) aims to learn common knowledge shared over a distribution of tasks, such that the learned knowledge can transfer to unseen heterogeneous tasks.
Most meta learning approaches focus on the few-shot learning framework, while we attempt to extend it to a standard learning framework. Hand-designed LR schedules and HPO methods just try to find a proper LR schedule for given tasks and need to be learned from scratch for new tasks. In contrast, our meta-learned MLR-SNet is plug-and-play: it can directly transfer how to schedule the LR for SGD to heterogeneous tasks without additional learning. 3 THE PROPOSED META-LR-SCHEDULE-NET ( MLR-SNET ) METHOD . The problem of training a DNN can be formulated as the following non-convex optimization problem: $\min_{w \in \mathbb{R}^n} L^{Tr}(D^{Tr}; w) := \frac{1}{N} \sum_{i=1}^{N} L_i^{Tr}(w)$, (1) where $L_i^{Tr}$ is the training loss for data sample $i \in D^{Tr} = \{1, 2, \cdots, N\}$, which characterizes the deviation of the model prediction from the data, and $w \in \mathbb{R}^n$ represents the parameters of the model (e.g., the weight matrices of a DNN) to be optimized. SGD (Robbins & Monro, 1951; Polyak, 1964) and its variants, including Momentum (Tseng, 1998), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2015), are often used for training DNNs. In general, these algorithms can be summarized by the formulation $w_{t+1} = w_t + \Delta w_t, \ \Delta w_t = O_t(\nabla L^{Tr}(w_t), H_t; \Theta_t)$, (2) where $w_t$ denotes the model parameters at the t-th update, $\nabla L^{Tr}(w_t)$ denotes the gradient of $L^{Tr}$ at $w_t$, $H_t$ represents the historical gradient information, and $\Theta_t$ is the hyper-parameter of the optimizer $O$, e.g., the LR. To present our method's efficiency, we focus on the following vanilla SGD formulation: $w_{t+1} = w_t - \alpha_t \big( \frac{1}{|B_t|} \sum_{i \in B_t} \nabla L_i^{Tr}(w_t) \big)$, (3) where $B_t \subset D^{Tr}$ denotes a mini-batch randomly sampled from the training dataset, $|B_t|$ denotes the number of samples in the batch, $\nabla L_i^{Tr}(w_t)$ denotes the gradient of sample i computed at $w_t$, and $\alpha_t$ is the LR at the t-th iteration.
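The vanilla SGD update of Eq. (3) amounts to subtracting the scheduled LR times the mini-batch mean gradient. A minimal sketch, with parameters as flat lists of floats and per-sample gradients assumed to be computed elsewhere:

```python
def sgd_step(w, grads_batch, lr):
    """Vanilla SGD update of Eq. (3): w <- w - lr * mean of the per-sample
    gradients over the mini-batch. `w` is a flat list of parameters and
    `grads_batch` is a list of per-sample gradients, each the same length
    as `w`; `lr` is the step-t learning rate alpha_t from the schedule."""
    B = len(grads_batch)
    n = len(w)
    avg = [sum(g[j] for g in grads_batch) / B for j in range(n)]
    return [w[j] - lr * avg[j] for j in range(n)]
```

In the paper's setting, `lr` would be produced per step by the MLR-SNet from the current training loss rather than read from a fixed table.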
The paper proposes to parameterize the learning rate (LR) schedule with an explicit mapping formulation. This learnable structure allows the proposed meta-trained MLR-SNet to achieve good LR schedules. For validation, the proposed method is evaluated on image and text classification benchmarks with various network architectures and datasets, as well as by transferring the learned network to new tasks or architectures.
MLR-SNet: Transferable LR Schedules for Heterogeneous Tasks
The learning rate ( LR ) is one of the most important hyper-parameters in stochastic gradient descent ( SGD ) for deep neural networks ( DNN ) training and generalization . However , current hand-designed LR schedules need to manually pre-specify a fixed form , which limits their ability to adapt to non-convex optimization problems due to the significant variation of training dynamics . Meanwhile , it always needs to search a proper LR schedule from scratch for new tasks . To address these issues , we propose to parameterize LR schedules with an explicit mapping formulation , called MLR-SNet . The learnable structure brings more flexibility for MLR-SNet to learn a proper LR schedule to comply with the training dynamics of DNN . Image and text classification benchmark experiments substantiate the capability of our method for achieving proper LR schedules . Moreover , the meta-learned MLR-SNet is plugand-play to generalize to new heterogeneous tasks . We transfer our meta-trained MLR-SNet to tasks like different training epochs , network architectures , datasets , especially large scale ImageNet dataset , and achieve comparable performance with hand-designed LR schedules . Finally , MLR-SNet can achieve better robustness when training data are biased with corrupted noise . 1 INTRODUCTION . Stochastic gradient descent ( SGD ) and its many variants ( Robbins & Monro , 1951 ; Duchi et al. , 2011 ; Zeiler , 2012 ; Tieleman & Hinton , 2012 ; Kingma & Ba , 2015 ) , have been served as the cornerstone of modern machine learning with big data . It has been empirically shown that DNN achieves stateof-the-art generalization performance on a wide variety of tasks when trained with SGD ( Zhang et al. , 2017 ) . Several recent researches observe that SGD tends to select the so-called flat minima ( Hochreiter & Schmidhuber , 1997a ; Keskar et al. , 2017 ) , which seems to generalize better in practice . 
Scheduling learning rate ( LR ) for SGD is one of the most widely studied aspects to help improve the SGD training for DNN . Specifically , it has been experimentally studied how the LR ( Jastrzebski et al. , 2017 ) influences mimima solutions found by SGD . Theoretically , Wu et al . ( 2018a ) analyze that LR plays an important role in minima selection from a dynamical stability perspective . He et al . ( 2019 ) provide a PAC-Bayes generalization bound for DNN trained by SGD , which is correlated with LR . In a word , finding a proper LR schedule highly influences the generalization performance of DNN , which has been widely studied recently ( Bengio , 2012 ; Schaul et al. , 2013 ; Nar & Sastry , 2018 ) . There mainly exist three kinds of hand-designed LR schedules : ( 1 ) Pre-defined LR policy is mostly used in current DNN training , like decaying or cyclic LR ( Gower et al. , 2019 ; Loshchilov & Hutter , 2017 ) , and brings large improvements in training efficiency . Some theoretical works suggested that the decaying schedule can yield faster convergence ( Ge et al. , 2019 ; Davis et al. , 2019 ) or avoid strict saddles ( Lee et al. , 2019 ; Panageas et al. , 2019 ) under some mild conditions . ( 2 ) LR search methods in tranditional convex optimization ( Nocedal & Wright , 2006 ) can be extended to DNN training by searching LR adaptively in each step , such as Polyak ’ s update rule ( Rolinek & Martius , 2018 ) , Frank-Wolfe algorithm ( Berrada et al. , 2019 ) , and Armijo line-search ( Vaswani et al. , 2019 ) , etc . ( 3 ) Adaptive gradient methods like Adam ( Duchi et al. , 2011 ; Tieleman & Hinton , 2012 ; Kingma & Ba , 2015 ) , adapt LR for each parameters separately according to some gradient information . Although above LR schedules ( as depicted in Fig . 1 ( a ) and 1 ( b ) ) can achieve competitive results on their learning tasks , they still have evident deficiencies in practice . 
On the one hand, these policies require manually pre-specifying the form of the LR schedule, and thus have limited flexibility to adapt to non-convex optimization problems with significantly varying training dynamics. On the other hand, when solving new heterogeneous tasks, a proper LR schedule always needs to be searched from scratch, together with tuning the associated hyper-parameters. This process is expensive in time and computation, which further raises the difficulty of applying these policies to real problems. To alleviate the aforementioned issues, this paper presents a model to learn a plug-and-play LR schedule. The main idea is to parameterize the LR schedule as an LSTM network (Hochreiter & Schmidhuber, 1997b), which is capable of dealing with such long-term information-dependent problems. As shown in Fig. 1(c), the proposed Meta-LR-Schedule-Net (MLR-SNet) learns an explicit loss-LR dependency. In a nutshell, this paper makes the following three contributions. (1) We propose MLR-SNet to learn an adaptive LR schedule, which can adjust the LR based on the current training loss as well as information from past training history stored in the MLR-SNet. Due to its parameterized form, MLR-SNet can be more flexible than hand-designed policies in finding a proper LR schedule for a specific learning task. Fig. 1(d) and 1(e) show our learned LR schedules, which have a similar tendency to pre-defined policies but more variation at their locality. This validates the efficacy of our method for adaptively adjusting the LR according to training dynamics. (2) With an explicit parameterized structure, the meta-trained MLR-SNet can be transferred to new heterogeneous tasks (the meta-test stage), including different training epochs, network architectures, and datasets.
Experimental results verify that our plug-and-play LR schedules achieve comparable performance while introducing no hyper-parameters, in contrast to traditional LR schedules. This potentially saves substantial labor and computation cost in real-world applications. (3) The MLR-SNet is meta-learned to improve generalization performance on unseen data. We validate that, with the guidance of clean data, our MLR-SNet achieves better robustness than hand-designed LR schedules when training data are biased with corrupted noise. 2 RELATED WORK. Meta learning for optimization. Meta learning has a long history in psychology (Ward, 1937; Lake et al., 2017). Meta learning for optimization dates back to the 1980s-1990s (Schmidhuber, 1992; Bengio et al., 1991), aiming to meta-learn the optimization process of learning itself. Recently, Andrychowicz et al. (2016); Ravi & Larochelle (2017); Chen et al. (2017); Wichrowska et al. (2017); Li & Malik (2017); Lv et al. (2017) have attempted to scale this idea to larger DNN optimization problems. The main idea is to construct a meta-learner as the optimizer, which takes gradients as input and outputs the whole update rule. These approaches aim to select appropriate training algorithms, schedule the LR, and tune other hyper-parameters in an automatic way. Beyond continuous optimization problems, some works apply these ideas to other settings, such as black-box functions (Chen et al., 2017), few-shot learning (Li et al., 2017), model curvature (Park & Oliva, 2019), evolution strategies (Houthooft et al., 2018), combinatorial functions (Rosenfeld et al., 2018), etc. Though faster at decreasing the training loss than traditional optimizers in some cases, the learned optimizers may not always generalize well to diverse problems, especially longer horizons (Lv et al.
, 2017) and large-scale optimization problems (Wichrowska et al., 2017). Moreover, they cannot be guaranteed to output a proper descent direction at each iteration of DNN training, since they assume all parameters share one small network and ignore the relationships between parameters. Our proposed method instead learns an adaptive LR schedule rather than the whole update rule. This makes it easier to learn, and the meta-learned LR schedule can be transferred to new heterogeneous tasks. HPO and LR schedule adaptation. Hyper-parameter optimization (HPO) has historically been investigated as selecting proper values for algorithm hyper-parameters to obtain better performance on a validation set (see (Hutter et al., 2019) for an overview). Typical methods include grid search, random search (Bergstra & Bengio, 2012), Bayesian optimization (Snoek et al., 2012), gradient-based methods (Franceschi et al., 2017; Shu et al., 2020a;b), etc. Recently, some works attempt to find a proper LR schedule under the framework of gradient-based HPO, which can be solved by bilevel optimization (Franceschi et al., 2017; Baydin et al., 2018). However, most HPO techniques tend to suffer from short-horizon bias and easily find a bad minimum (Wu et al., 2018b). Our MLR-SNet has an explicit functional form, which makes the optimization of LR schedules more robust and effective. Transfer to heterogeneous tasks. Transfer learning (Pan & Yang, 2009) aims to transfer knowledge obtained from a source task to help learning on a target task. Most transfer learning methods assume the source and target tasks share the same instance, feature, or model spaces (Yang et al., 2020), which greatly limits their applications. Recently, meta learning (Finn et al., 2017) aims to learn common knowledge shared over a distribution of tasks, such that the learned knowledge can transfer to unseen heterogeneous tasks.
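As a small illustration of the simplest HPO baseline mentioned above, random search for the LR is usually done log-uniformly over several orders of magnitude. The sketch below uses a hypothetical validation-loss surface (the objective, trial count, and search range are all illustrative, not from the paper):

```python
import math
import random

def random_search_lr(objective, n_trials=50, low=1e-5, high=1.0, seed=0):
    """Random search for the LR: sample log-uniformly in [low, high] and keep
    the trial with the lowest validation loss returned by `objective`."""
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(n_trials):
        lr = 10.0 ** rng.uniform(math.log10(low), math.log10(high))
        loss = objective(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

# Hypothetical validation-loss surface whose minimum sits at lr = 1e-2.
best_lr, best_loss = random_search_lr(lambda lr: (math.log10(lr) + 2.0) ** 2)
```

Note that such a search returns one constant LR (or one schedule's hyper-parameters) per task and must be rerun from scratch for every new task — the cost the paper's transferable MLR-SNet is designed to avoid.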
Most meta learning approaches focus on the few-shot learning framework, while we attempt to extend it to a standard learning framework. Hand-designed LR schedules and HPO methods only try to find a proper LR schedule for a given task, and need to be learned from scratch for new tasks. In contrast, our meta-learned MLR-SNet is plug-and-play: it can directly transfer how to schedule the LR for SGD to heterogeneous tasks without additional learning. 3 THE PROPOSED META-LR-SCHEDULE-NET (MLR-SNET) METHOD. The problem of training a DNN can be formulated as the following non-convex optimization problem,

$$\min_{w \in \mathbb{R}^n} \mathcal{L}^{Tr}(D^{Tr}; w) := \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}_i^{Tr}(w), \qquad (1)$$

where $\mathcal{L}_i^{Tr}$ is the training loss for sample $i \in D^{Tr} = \{1, 2, \cdots, N\}$, which characterizes the deviation of the model prediction from the data, and $w \in \mathbb{R}^n$ represents the parameters of the model (e.g., the weight matrices of a DNN) to be optimized. SGD (Robbins & Monro, 1951; Polyak, 1964) and its variants, including Momentum (Tseng, 1998), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2015), are often used for training DNNs. In general, these algorithms can be summarized by the formulation

$$w_{t+1} = w_t + \Delta w_t, \quad \Delta w_t = \mathcal{O}_t(\nabla \mathcal{L}^{Tr}(w_t), \mathcal{H}_t; \Theta_t), \qquad (2)$$

where $w_t$ denotes the model parameters at step $t$, $\nabla \mathcal{L}^{Tr}(w_t)$ denotes the gradient of $\mathcal{L}^{Tr}$ at $w_t$, $\mathcal{H}_t$ represents the historical gradient information, and $\Theta_t$ is the hyper-parameter of the optimizer $\mathcal{O}$, e.g., the LR. To present our method, we focus on the following vanilla SGD formulation,

$$w_{t+1} = w_t - \alpha_t \Big( \frac{1}{|B_t|} \sum_{i \in B_t} \nabla \mathcal{L}_i^{Tr}(w_t) \Big), \qquad (3)$$

where $B_t \subset D^{Tr}$ denotes a mini-batch randomly sampled from the training dataset, $|B_t|$ denotes its size, $\nabla \mathcal{L}_i^{Tr}(w_t)$ denotes the gradient of sample $i$ computed at $w_t$, and $\alpha_t$ is the LR at iteration $t$.
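Eq. (3) with a pluggable schedule $\alpha_t$ can be sketched on a hypothetical scalar least-squares problem (the data, schedule, and step counts below are illustrative, not from the paper):

```python
import random

def sgd(grad_fn, w0, data, lr_schedule, batch_size=2, steps=200, seed=0):
    """Vanilla SGD, Eq. (3): w_{t+1} = w_t - alpha_t * (1/|B_t|) sum_{i in B_t} grad_i(w_t)."""
    rng = random.Random(seed)
    w = w0
    for t in range(steps):
        batch = rng.sample(data, batch_size)            # mini-batch B_t
        g = sum(grad_fn(w, x) for x in batch) / batch_size
        w = w - lr_schedule(t) * g                      # alpha_t from the LR schedule
    return w

# Toy problem: minimise the average of (w - x_i)^2, whose minimiser is the data
# mean (here 2.5); alpha_t follows a simple decaying schedule.
data = [1.0, 2.0, 3.0, 4.0]
grad = lambda w, x: 2.0 * (w - x)                       # d/dw (w - x)^2
w_star = sgd(grad, 0.0, data, lambda t: 0.1 / (1.0 + 0.01 * t))
```

In the paper's method, the hand-written `lr_schedule` above would be replaced by the output of the MLR-SNet, which takes the current training loss (and its stored history) as input.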
In this work, the authors use an LSTM to meta-learn learning rate schedules. The LSTM depends only on the validation loss at time t. They train this LSTM on some tasks and show that it gives good performance compared with baselines. They then transfer one of these trained LSTMs to different tasks and show that it also gives good results there.
Greedy-GQ with Variance Reduction: Finite-time Analysis and Improved Complexity
1 INTRODUCTION. In reinforcement learning (RL), an agent interacts with a stochastic environment following a certain policy and receives rewards, and it aims to learn an optimal policy that yields the maximum accumulated reward (Sutton & Barto, 2018). In particular, many RL algorithms have been developed to learn the optimal control policy, and they have been widely applied in practical domains such as finance, robotics, computer games, and recommendation systems (Mnih et al., 2015; 2016; Silver et al., 2016; Kober et al., 2013). Conventional RL algorithms such as Q-learning (Watkins & Dayan, 1992) and SARSA (Rummery & Niranjan, 1994) have been well studied and their convergence is guaranteed in the tabular setting. However, it is known that these algorithms may diverge in the popular off-policy setting under linear function approximation (Baird, 1995; Gordon, 1996). To address this issue, the two time-scale Greedy-GQ algorithm was developed in Maei et al. (2010) for learning the optimal policy. This algorithm extends the efficient gradient temporal difference (GTD) algorithms for policy evaluation (Sutton et al., 2009b) to policy optimization. In particular, the asymptotic convergence of Greedy-GQ to a stationary point has been established in Maei et al. (2010). More recently, Wang & Zou (2020) studied the finite-time convergence of Greedy-GQ under linear function approximation and Markovian sampling, and showed that the algorithm achieves an $\epsilon$-stationary point of the objective function with a sample complexity of order $O(\epsilon^{-3})$. Such an undesirably high sample complexity is caused by the large variance induced by the Markovian samples queried from the dynamic environment. Therefore, we ask the following question. • Q1: Can we develop a variance reduction scheme for the two time-scale Greedy-GQ algorithm?
In fact, many recent works in the existing literature have proposed to apply variance reduction techniques developed in the stochastic optimization literature to reduce the variance of various TD learning algorithms for policy evaluation, e.g., Du et al. (2017); Peng et al. (2019); Korda & La (2015); Xu et al. (2020). Other works applied variance reduction techniques to Q-learning algorithms, e.g., Wainwright (2019); Jia et al. (2020). Hence, it is highly desirable to develop a variance-reduced Greedy-GQ algorithm for optimal control. In particular, as many of the existing variance-reduced RL algorithms have been shown to achieve an improved sample complexity under variance reduction, it is natural to ask the following fundamental question. • Q2: Can variance-reduced Greedy-GQ achieve an improved sample complexity under Markovian sampling? In this paper, we provide affirmative answers to these fundamental questions. Specifically, we develop a two time-scale variance reduction scheme for the Greedy-GQ algorithm by leveraging the SVRG scheme (Johnson & Zhang, 2013). Moreover, under linear function approximation and Markovian sampling, we prove that the proposed variance-reduced Greedy-GQ algorithm achieves an $\epsilon$-stationary point with an improved sample complexity $O(\epsilon^{-2})$. We summarize our technical contributions as follows. 1.1 OUR CONTRIBUTIONS. We develop a variance-reduced Greedy-GQ (VR-Greedy-GQ) algorithm for optimal control in reinforcement learning. Specifically, the algorithm leverages the SVRG variance reduction scheme (Johnson & Zhang, 2013) to construct variance-reduced stochastic updates for the parameters in both time-scales. We study the finite-time convergence of VR-Greedy-GQ under linear function approximation and Markovian sampling in the off-policy setting. Specifically, we show that VR-Greedy-GQ achieves an $\epsilon$-stationary point of the objective function $J$ (i.e.
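For reference, the SVRG scheme of Johnson & Zhang (2013) that the proposed algorithm builds on replaces a stochastic gradient with the corrected estimate $g_t = \nabla f_i(w_t) - \nabla f_i(\tilde{w}) + \nabla F(\tilde{w})$, where $\tilde{w}$ is a periodically refreshed reference point. Below is a minimal single-variable sketch on a toy quadratic (this is the generic SVRG recipe, not the two time-scale Greedy-GQ update itself):

```python
import random

def svrg(grad_fn, w0, data, lr=0.1, epochs=5, inner=50, seed=0):
    """SVRG (Johnson & Zhang, 2013): at each reference point w_ref, compute the
    full gradient mu once, then take cheap corrected steps
    g = grad_i(w) - grad_i(w_ref) + mu, which stay unbiased with reduced variance."""
    rng = random.Random(seed)
    w = w0
    for _ in range(epochs):
        w_ref = w
        mu = sum(grad_fn(w_ref, x) for x in data) / len(data)  # full gradient at w_ref
        for _ in range(inner):
            x = rng.choice(data)
            g = grad_fn(w, x) - grad_fn(w_ref, x) + mu
            w = w - lr * g
    return w

# Toy quadratic: per-sample gradients 2(w - x) differ only by a constant in x,
# so here the SVRG correction removes the sampling noise entirely.
data = [1.0, 2.0, 3.0, 4.0]
w_star = svrg(lambda w, x: 2.0 * (w - x), 0.0, data)
```

The paper's contribution is extending this recipe to correlated two time-scale updates under Markovian (rather than i.i.d.) sampling, where bounding the variance of the corrected updates is substantially harder.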
, $\|\nabla J(\theta)\|^2 \le \epsilon$) with a sample complexity of order $O(\epsilon^{-2})$. This improves the complexity of the original Greedy-GQ by a significant factor of $O(\epsilon^{-1})$ (Wang & Zou, 2020). In particular, our analysis shows that the bias error caused by the Markovian sampling and the variance error of the stochastic updates are of order $O(M^{-1})$ and $O(\eta_\theta M^{-1})$, respectively, where $\eta_\theta$ is the learning rate and $M$ is the batch size of the SVRG reference batch update. This shows that the proposed variance reduction scheme can significantly reduce the bias and variance errors of the original Greedy-GQ update (by a factor of $M$) and lead to an improved overall sample complexity. The analysis of VR-Greedy-GQ partly follows that of the conventional SVRG, but requires substantial new technical developments. Specifically, we must address the following challenges. First, VR-Greedy-GQ involves two time-scale variance-reduced updates that are correlated with each other. Such an extension of the SVRG scheme to two time-scale updates is novel and requires new technical developments; in particular, we need to develop tight variance bounds for the two time-scale updates under Markovian sampling. Second, unlike the convex objective functions of the conventional GTD-type algorithms, the objective function of VR-Greedy-GQ is generally non-convex due to the non-stationary target policy. Hence, we need to develop new techniques to characterize the per-iteration optimization progress towards a stationary point under non-convexity. In particular, to analyze the two time-scale variance reduction updates of the algorithm, we introduce a 'fine-tuned' Lyapunov function of the form $R_t^{(m)} = J(\theta_t^{(m)}) + c_t \|\theta_t^{(m)} - \tilde{\theta}^{(m)}\|^2$, where the parameter $c_t$ is fine-tuned to cancel additional quadratic terms $\|\theta_t^{(m)} - \tilde{\theta}^{(m)}\|^2$ that are implicitly involved in the tracking error terms.
The design of this special Lyapunov function is critical to establishing the formal convergence of the algorithm. With these technical developments, we are able to establish an improved finite-time convergence rate and sample complexity for VR-Greedy-GQ. 1.2 RELATED WORK. Q-learning and SARSA with function approximation. The asymptotic convergence of Q-learning and SARSA under linear function approximation was established in Melo et al. (2008); Perkins & Precup (2003), and their finite-time analyses were developed in Zou et al. (2019); Chen et al. (2019). However, these algorithms may diverge in off-policy training (Baird, 1995). Recent works have also focused on the Markovian setting, and various analysis techniques have been developed to study the finite-time convergence of TD/Q-learning under Markovian samples. Specifically, Wang et al. (2020) developed a multi-step Lyapunov analysis to address the bias of the stochastic approximation in Q-learning, and Srikant & Ying (2019) developed a drift analysis for the linear stochastic approximation problem. Beyond linear function approximation, a finite-time analysis of Q-learning under neural network function approximation is developed in Xu & Gu (2019). GTD algorithms. The GTD2 and TDC algorithms were developed for off-policy TD learning. Their asymptotic convergence was proved in Sutton et al. (2009a;b); Yu (2017), and their finite-time analyses were developed recently in Dalal et al. (2018); Wang et al. (2017); Liu et al. (2015); Gupta et al. (2019); Xu et al. (2019). The Greedy-GQ algorithm extends these algorithms to optimal control and involves nonlinear updates. RL with variance reduction: Variance reduction techniques have been applied to various RL algorithms. In TD learning, Du et al.
(2017) reformulated the MSPBE problem as a convex-concave saddle-point optimization problem and applied SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014) to the primal-dual batch gradient algorithm. In Korda & La (2015), a variance-reduced TD algorithm was introduced for solving the MSPBE problem, and later Xu et al. (2020) provided a correct non-asymptotic analysis for this algorithm over Markovian samples. Recently, some other works applied the SVRG, SARAH (Nguyen et al., 2017), and SPIDER (Fang et al., 2018) variance reduction techniques to develop variance-reduced Q-learning algorithms, e.g., Wainwright (2019); Jia et al. (2020). In these works, the TD or TDC algorithms take the form of linear stochastic approximation, and Q-learning has only a single time-scale update. As a comparison, our VR-Greedy-GQ takes nonlinear two time-scale updates to optimize a nonconvex MSPBE objective. 2 PRELIMINARIES: POLICY OPTIMIZATION AND GREEDY-GQ. In this section, we review some preliminaries of reinforcement learning and recap the Greedy-GQ algorithm under linear function approximation. 2.1 POLICY OPTIMIZATION IN REINFORCEMENT LEARNING. In reinforcement learning, an agent takes actions to interact with the environment via a Markov decision process (MDP). Specifically, an MDP is specified by the tuple $(S, A, P, r, \gamma)$, where $S$ and $A$ respectively denote the finite state and action spaces, $r : S \times A \times S \to [0, +\infty)$ denotes a reward function, and $\gamma \in (0, 1)$ is the associated reward discount factor. At any time $t$, assume the agent is in state $s_t \in S$ and takes an action $a_t \in A$ following a stationary policy $\pi$, i.e., $a_t \sim \pi(\cdot|s_t)$. Then, at the subsequent time $t+1$, the agent transitions to a new state $s_{t+1}$ according to the transition kernel $P(\cdot|s_t, a_t)$.
At the same time, the agent receives a reward $r_t = r(s_t, a_t, s_{t+1})$ from the environment for this state-action transition. To evaluate the quality of a given policy $\pi$, we use the action-value function $Q^\pi : S \times A \to \mathbb{R}$ that accumulates the discounted rewards as follows: $Q^\pi(s, a) = \mathbb{E}_{s' \sim P(\cdot|s,a)}[r(s, a, s') + \gamma V^\pi(s')]$, where $V^\pi(s)$ is the state value function defined as $V^\pi(s) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s]$. In particular, define the Bellman operator $T^\pi$ such that $T^\pi Q(s, a) = \mathbb{E}_{s', a'}[r(s, a, s') + \gamma Q(s', a')]$ for any $Q(s, a)$, where $a' \sim \pi(\cdot|s')$. Then, $Q^\pi(s, a)$ is a fixed point of $T^\pi$, i.e., $T^\pi Q^\pi(s, a) = Q^\pi(s, a), \forall s, a$. (1) The goal of policy optimization is to learn the optimal policy $\pi^*$ that maximizes the expected total reward $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s]$ for any initial state $s \in S$, and this is equivalent to learning the optimal value function $Q^*(s, a) = \sup_\pi Q^\pi(s, a), \forall s, a$. In particular, $Q^*$ is a fixed point of the Bellman optimality operator $T$ defined as $TQ(s, a) = \mathbb{E}_{s' \sim P(\cdot|s,a)}[r(s, a, s') + \gamma \max_{b \in A} Q(s', b)]$.
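The Bellman optimality operator $T$ above is a $\gamma$-contraction, so iterating it from any initialization converges to $Q^*$. A minimal tabular sketch on a hypothetical 2-state, 2-action MDP (all transition and reward numbers are made up for illustration):

```python
def bellman_optimality(Q, P, R, gamma):
    """One application of T: (TQ)(s,a) = sum_{s'} P[s][a][s'] * (R[s][a][s'] + gamma * max_b Q[s'][b])."""
    S, A = len(Q), len(Q[0])
    return [[sum(P[s][a][s2] * (R[s][a][s2] + gamma * max(Q[s2])) for s2 in range(S))
             for a in range(A)]
            for s in range(S)]

# Hypothetical MDP: action 0 deterministically moves to state 0, action 1 to
# state 1; reaching state 1 pays reward 1 from state 0 and reward 2 from state 1.
P = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]]  # P[s][a][s']
R = [[[0.0, 0.0], [0.0, 1.0]], [[0.0, 0.0], [0.0, 2.0]]]  # R[s][a][s']
Q = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(200):          # T is a gamma-contraction, so this approaches Q*
    Q = bellman_optimality(Q, P, R, 0.9)
```

With $\gamma = 0.9$, staying in state 1 forever yields $Q^*(1,1) = 2/(1-\gamma) = 20$, and the remaining entries follow from one-step lookahead. Greedy-GQ targets exactly this fixed point, but through stochastic two time-scale updates on a linear parameterization of $Q$ rather than exact sweeps.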
This paper combines a widely used variance reduction technique, SVRG, with Greedy-GQ. It provides a finite-time analysis of the proposed algorithm in the off-policy, Markovian-sampling setting (convergence to a stationary point) and improves the sample complexity from order $\epsilon^{-3}$ to $\epsilon^{-2}$ compared with vanilla Greedy-GQ. Interestingly, the analysis shows that the bias error caused by the Markovian sampling and the variance error of the stochastic gradient are reduced by a factor of $M$, where $M$ is the batch size of the batch gradient in SVRG. Finally, it verifies the theoretical claims on two toy examples.
Greedy-GQ is an RL algorithm for the control problem that extends GTD, a prediction algorithm. While Greedy-GQ asymptotically converges to a stationary point, it does so with high sample complexity. The authors reduce the variance of Greedy-GQ by incorporating the SVRG variance reduction scheme into both time-scale updates of the algorithm. The main contribution of the paper is showing that the variance-reduced Greedy-GQ algorithm achieves a sample complexity that is an order of magnitude lower than that of vanilla Greedy-GQ.
Federated Mixture of Experts
1 INTRODUCTION An ever-increasing number of devices are being connected to the internet, sensing their environment and generating vast amounts of data. The term federated learning (FL) has been established to describe the scenario where we aim to learn from the data generated by this "federation" of devices (McMahan et al., 2016). Not only does the number of sensing devices increase, but their processing power also grows continuously, to the point that it becomes viable to perform inference and training of machine learning models on device. In federated learning, the goal is to learn from these client devices' data without collecting the data centrally, which naturally allows for a more private exchange of information. Several challenges arise in the federated scenario. Federated devices are generally resource-constrained, both in their computational capacity and in communication bandwidth and latency. As a practical example, a smartphone has limited heat dissipation capacity and must communicate via Wi-Fi. From a global perspective, devices' processing power and network connections can be highly heterogeneous across geographical regions and the socio-economic status of device owners, causing practical issues (Bonawitz et al., 2019) and raising questions of fairness in FL (Li et al., 2019; Mohri et al., 2019). One of the key challenges in FL that we aim to address in this work is the non-i.i.d. nature of the shards of data that are distributed across devices. In non-federated machine learning, assuming independent and identically distributed data is generally justifiable and not detrimental to model performance. In FL, however, each client performs a series of parameter updates on its own data shard to amortize the costs of communication.
Over time, the directions of progress across shards with non-i.i.d. data start diverging (as shown in Figure 1), which can set back training progress, significantly slow down convergence, and decrease model performance (Hsu et al., 2019). To this end, we propose Federated Mixture of Experts (FedMix), an algorithm for FL that allows for training an ensemble of specialized models instead of a single global model. In FedMix, expert models learn to specialize in regions of the input space such that, for a given expert, each client’s progress on that expert is aligned. FedMix allows each client to learn which experts are relevant for its shard, and we show how it can be extended for inference on a previously unseen client. FedMix shows competitive performance against the established standard in FL, FedAvg (McMahan et al., 2016; Deng et al., 2020), across a range of visual classification tasks. Code will be released upon publication. 2 FEDERATED MIXTURE OF EXPERTS. Federated learning (McMahan et al., 2016) deals with the problem of learning a server model with parameters w, e.g., a neural network, from a dataset D = {(x_1, y_1), ..., (x_N, y_N)} of N datapoints that is distributed across S shards, i.e., D = D_1 ∪ ... ∪ D_S, without accessing the shard-specific datasets directly. By defining a loss function L_s(D_s; w) per shard, the total risk can be written as

argmin_w Σ_{s=1}^{S} (N_s/N) L_s(D_s; w), where L_s(D_s; w) := (1/N_s) Σ_{i=1}^{N_s} L(D_{s,i}; w). (1)

It is easy to see that this objective corresponds to empirical risk minimization over the joint dataset D with a loss L(·) for each datapoint. In federated learning one is interested in reducing the communication costs; for this reason, McMahan et al. (2016) propose to do multiple gradient updates for w in the inner optimization objective for each shard s, thus obtaining “local” models with parameters w_s.
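The shard-weighted objective in Eq. 1 can be made concrete with a minimal NumPy sketch; the function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def total_risk(shard_losses, shard_sizes):
    """Shard-weighted empirical risk of Eq. 1: sum_s (N_s / N) * L_s.

    shard_losses[s] is the mean per-datapoint loss L_s on shard s,
    shard_sizes[s] is N_s; the shard weight is N_s / N.
    """
    sizes = np.asarray(shard_sizes, dtype=float)
    losses = np.asarray(shard_losses, dtype=float)
    return float(np.sum(sizes / sizes.sum() * losses))
```

Because each L_s is itself a mean over shard s, this weighted sum equals the mean loss over the joint dataset D, i.e., plain empirical risk minimization.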
These multiple gradient updates are referred to as “local epochs”, i.e., the number of passes through the entire local dataset, abbreviated as E. Each of the shards then communicates its local model w_s to the server, and the server updates the global model at “round” t by averaging the parameters of the local models, w^t = Σ_s (N_s/N) w_s^t. This constitutes federated averaging (FedAvg) (McMahan et al., 2016), the standard in federated learning. One of the main challenges in federated learning is the fact that the data are usually non-i.i.d. across the shards S, that is, p(D|s_i) ≠ p(D|s_j) for i ≠ j. On the one hand, this can make learning a single global model from all of the data with classical FedAvg problematic. On the other hand, there is one extreme that does not suffer from this issue: learning S individual models, i.e., only optimizing w_s on D_s. Although these individual models by definition do not suffer from non-i.i.d. data, clearly we should aim to do better and exchange meaningful information between clients to learn more robust and expressive models. 2.1 THE FEDMIX ALGORITHM With FedMix, we propose to strike a balance between the two aforementioned extremes: learning a single global model and learning S individual models. For this reason, we revisit an old model formulation, the Mixture of Experts (MoE). The classical formulation of a MoE model (Jacobs et al., 1991; Jordan & Jacobs, 1994) contains a set of K experts and a gating mechanism that is responsible for choosing an expert for a given data point. A MoE model for a data point (x, y) can generally be described by

p_{w_{1:K}, θ}(y|x) = Σ_{z=1}^{K} p_{w_z}(y|x, z) p_θ(z|x), (2)

where z is a categorical variable that denotes the expert, w_k are the parameters of expert k, and θ are the parameters of the selection mechanism.
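As a concrete illustration of Eq. 2, the MoE predictive distribution is simply a gate-weighted average of the experts' predictive distributions. A hedged sketch (names are illustrative):

```python
import numpy as np

def moe_predict(expert_probs, gate_probs):
    """Eq. 2: p(y|x) = sum_z p_{w_z}(y|x, z) * p_theta(z|x).

    expert_probs: (K, C) array; row k is expert k's distribution over C labels.
    gate_probs:   (K,) array; the gate's distribution over experts, sums to 1.
    Returns the length-C mixture distribution p(y|x).
    """
    return gate_probs @ expert_probs
```

When the gate concentrates on one expert (a one-hot `gate_probs`), the mixture reduces to that single expert's prediction, which is the specialized regime FedMix aims for.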
The MoE was proposed as a model for datasets where different subsets of the data exhibit different relationships between input x and output y. Instead of training a single global model to fit this relationship everywhere, each expert performs well on a different subset of the input space. The gating function models the decision boundary between input regions, assigning data points from subsets of the input region to their respective experts. In this work, we show that, in the federated scenario, sub-dividing the input region through a MoE can alleviate the consequences of non-i.i.d. data by aligning gradient updates across experts (Figure 1). In Federated Mixture of Experts (FedMix), we enrich this model by conditioning the gating mechanism on the shard assignment s. Whatever characteristics make shard s different from other shards can manifest in learning a different, localized gating mechanism that does not need to be communicated to the server. Choosing K = 1 recovers the standard setting of federated averaging, while K = S in combination with fixing p(z = s|x, s) = 1 recovers S independent models. From a global perspective, we are interested in maximizing the following single objective:

Σ_{s=1}^{S} Σ_{i=1}^{N_s} log p_{w_{1:K}, θ_s}(y_{s,i}|x_{s,i}, s) = Σ_{s=1}^{S} Σ_{i=1}^{N_s} log [ Σ_{z=1}^{K} p_{θ_s}(z|x_{s,i}, s) p_{w_z}(y_{s,i}|x_{s,i}, z) ]. (3)

Given the graphical model decomposition depicted in Figure 2, the objective in Eq. 3 corresponds to a federated MoE, where we have omitted the generative models p(x|s). We briefly touch upon the role of learning generative models in Appendix F but focus on the discriminative part of the model, i.e., the MoE, in this paper. While it is possible to optimize Eq. 3 directly, we have found empirically that it is hard to simultaneously avoid collapse to a single expert (thus obtaining FedAvg) and achieve specialization of the experts. Instead, we propose to form a variational lower bound on Eq.
3 with a global variational approximation q_φ(z|...) to the true posterior p(z|x, y, s), with parameters φ. At test time, p(y|x*, s) = Σ_{z=1}^{K} p(y|x*, z) p(z|x*, s) can be readily evaluated without requiring q. This allows us to condition q_φ(z|...) on any available side information at training time that might result in better specialization in the non-i.i.d. federated scenario. In this paper we mainly consider classification tasks whose non-i.i.d. nature predominantly stems from the non-i.i.d. distribution of labels y. Other or additional known sources of misalignment could be included to further improve this approximation, such as a manufacturer ID for a medical device in a medical scenario, a geographic identifier, or general domain-specific information. We show one such additional example in Section 4.2. The lower bound to be maximized in FedMix therefore is:

Σ_{s=1}^{S} Σ_{i=1}^{N_s} log p_{w_{1:K}, θ_s}(y_{s,i}|x_{s,i}, s) (4)
≥ Σ_{s=1}^{S} Σ_{i=1}^{N_s} Σ_{z=1}^{K} q_φ(z|y_{s,i}) [ log p_{w_z}(y_{s,i}|x_{s,i}, z) p_{θ_s}(z|x_{s,i}, s) − log q_φ(z|y_{s,i}) ] (5)
= Σ_{s=1}^{S} Σ_{i=1}^{N_s} ( Σ_{z=1}^{K} q_φ(z|y_{s,i}) [ log p_{w_z}(y_{s,i}|x_{s,i}, z) p_{θ_s}(z|x_{s,i}, s) ] ) + H(q_φ(z|y_{s,i})). (6)

[Figure 3: The effect of specialisation (without H(q)) compared to an ensemble (with H(q)) and FedAvg on Cifar10. The experimental setup is identical to what is described in Section 4.]

Conditioning only on y allows us to efficiently parameterize the variational approximation, incurring only a small communication overhead. While it would be possible to condition q_φ(z|y) on s, thus having localized approximations with parameters φ_s that do not need to be communicated, we found a global approximation to help align the gating mechanisms across shards. A global q_φ(z|y) encourages shards that contain data with the same label to assign them to the same expert.
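For a single datapoint, the summand of the bound in Eqs. 5 and 6 can be computed as below, assuming all per-expert quantities are available as arrays (a sketch; names are not from the paper):

```python
import numpy as np

def elbo_term(q_z, log_p_y, log_gate):
    """Per-datapoint lower-bound term of Eq. 5/6.

    q_z:      (K,) q_phi(z|y), the variational posterior, sums to 1.
    log_p_y:  (K,) log p_{w_z}(y|x, z) for each expert z.
    log_gate: (K,) log p_{theta_s}(z|x, s) for each expert z.
    """
    entropy = -np.sum(q_z * np.log(q_z + 1e-12))  # H(q_phi(z|y))
    return float(np.sum(q_z * (log_p_y + log_gate)) + entropy)
```

Dropping the `entropy` term from the return value gives the MAP-style objective discussed next, which pushes q_φ(z|y) toward selecting a single expert.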
Specialization of the experts is a key ingredient for FedMix to be successful; with specialization, the gradients for each expert become aligned across shards (see Figure 1), the hold-out accuracy improves (see Figure 3), and the communication costs decrease, as each shard may only need to access a subset of the experts. We find that performing maximum a-posteriori (MAP) inference for z generally leads to better and more personalized models. By removing the entropy term from Eq. 4, q_φ(z|y) and therefore p_{θ_s}(z|x, s) are encouraged to concentrate and select only one expert for a given data point. In the extreme case where a client’s shard contains only data that is assigned to the same expert, we can reduce communication by receiving and sending updates for that single expert only. We show in Section 4 that communicating and evaluating experts based on thresholding the aggregate q_φ(z|s) = E_{y∼D_s}[q_φ(z|y)] can reduce communication and computation overhead. Figure 3 compares FedMix with and without the entropy term to standard FedAvg as a function of communication steps. With the entropy term, FedMix develops no expert specialization and collapses to an ensemble of K = 4 models. One drawback of the heavy specialization with MAP inference is that FedMix sometimes prunes experts prematurely and completely, i.e., p_{θ_s}(z = k|x, s) ≈ 0 ∀x, s. This can be undesirable, as we lose model capacity that could be used for better modeling the data. As q_φ(z|y) is one of the main training signals of p_{θ_s}(z|x, s), we introduce the marginal entropy term H(E_{p(y)}[q_φ(z|y)]) in the server as a regularizer. Notice that this leads to different training dynamics than locally optimizing the lower bound with the entropy included, and we empirically found that it alleviates premature pruning while still leading to specialized models.
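The server-side regularizer H(E_{p(y)}[q_φ(z|y)]) described above is cheap to compute from the q table; a minimal sketch with illustrative names:

```python
import numpy as np

def marginal_entropy(q_z_given_y, p_y):
    """Entropy of the label-marginalized expert distribution.

    q_z_given_y: (C, K) table; row c is q_phi(z | y=c).
    p_y:         (C,) empirical label marginal p(y).
    """
    marginal = p_y @ q_z_given_y  # E_{p(y)}[q_phi(z|y)], shape (K,)
    return float(-np.sum(marginal * np.log(marginal + 1e-12)))
```

Maximizing this quantity keeps the aggregate usage of experts spread out, while each individual q_φ(z|y=c) can still concentrate on a single expert; hence specialization survives but premature pruning is discouraged.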
Figure 14a in Appendix H visualizes the development of q_φ(z|y) over time, from initially uniform to highly specialized, for the experiment depicted in Figure 3. Server Side Updates In a general federated learning algorithm, a central server selects a subset S′ ⊂ {1, ..., S} of clients at time t and transmits the current estimate of the global parameters w^t to them. These clients perform a series of mini-batch gradient updates with data from their shard D_s on a local loss function, which can come at the price of each client moving in possibly different directions in parameter space. In generalized FedAvg (Reddi et al., 2020), the server interprets Δ_s^t = w^t − w_s^{t+1} as a single-step gradient update from client s, averages those gradients, and applies an optimizer such as Adam (Kingma & Ba, 2014) to obtain w^{t+1}. In light of non-i.i.d. data across clients, this averaging strategy can result in slow progress, since averaging updates in a highly non-convex parameter space can be sub-optimal. In FedMix, this effect is mitigated since, for a given expert, the data used to update its parameters is better aligned across shards. FedMix offers a second way to improve convergence speed by modifying the server-side updates. In generalized FedAvg, the individual gradients returned by the subset S′ of clients are averaged according to

Δ^t = Σ_{s=1}^{S′} p(s) · Δ_s^t, where p(s) = N_s / N_{S′}. (7)

In FedMix, we can speed up convergence by considering expert-specific updates Δ_{k,s}^t = w_k^t − w_{k,s}^{t+1}. If a client s pruned away expert k from its local gating mechanism, Δ_{k,s}^t will be zero. We propose to normalize the effective magnitude of the resulting update Δ_k by up-weighing the updates of all other clients that do consider expert k for their local mixture:

Δ_k^t = Σ_{s=1}^{S′} p(s|z = k) · Δ_{k,s}^t, where p(s|z = k) ∝ p(z = k|s) p(s), p(s) = N_s / N_{S′}.
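The expert-specific re-weighting of Eqs. 7 and 8 might look as follows, with p(s|z=k) normalized over the participating clients (a sketch under these assumptions; array names are my own):

```python
import numpy as np

def aggregate_expert_updates(deltas, p_z_given_s, shard_sizes):
    """Eq. 8: Delta_k = sum_s p(s|z=k) * Delta_{k,s}.

    deltas:      (S, K, D) per-client, per-expert parameter deltas.
    p_z_given_s: (S, K) expert usage per client; rows sum to 1.
    shard_sizes: (S,) N_s of each participating client.
    Returns a (K, D) array of aggregated per-expert updates.
    """
    p_s = np.asarray(shard_sizes, dtype=float)
    p_s = p_s / p_s.sum()                               # p(s) = N_s / N_{S'}
    w = p_z_given_s * p_s[:, None]                      # proportional to p(z=k|s) p(s)
    w = w / np.clip(w.sum(axis=0, keepdims=True), 1e-12, None)  # p(s|z=k)
    return np.einsum('sk,skd->kd', w, deltas)
```

A client that pruned expert k contributes weight ≈ 0 for that expert, so the surviving clients' updates are up-weighed rather than diluted toward zero.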
(8) Computing p(z|s) = E_{x∼D_s}[p_{θ_s}(z|s, x)] prior to sending updates to the server involves evaluating potentially large neural network models. Therefore, we choose to approximate p(z|s) ≈ q_φ(z|s) = E_{y∼D_s}[q_φ(z|y)], which involves just a single matrix multiplication. We discuss the implications of sending q_φ(z|s) for privacy, and how these fare relative to FedAvg, in Appendix C. Pruning experts Maintaining the cheap-to-compute marginal posterior per shard offers an additional opportunity to increase computation speed during local shard iterations and reduce overall communication costs. We propose to “prune away” experts locally from the MoE if q_φ(z|s) does not surpass a threshold η/K. In order to still optimize a valid bound, we need to re-normalize q_φ(z|y) before evaluating the loss function (Pal et al., 2005). We evaluate the same threshold prior to sending updates to the server in order to avoid communicating parameters that have not changed during the client’s iterations. Once the server selects a client for another round, it provides only those experts to the client that were updated by the client in the previous round. We empirically find that the entropy of q(z|s) decreases steadily, and we prune away experts k with probability q(z = k|s) < η/K without a significant drop in performance. We explore the consequences of pruning experts in the experiment section and in Appendix E. Algorithm 2 shows how FedMix can be enriched by pruning.

Algorithm 1 The FedMix algorithm. α, β are the client and server learning rates, respectively.
function SERVER SIDE
  Initialize φ and K vectors W = [w_1, ..., w_K]
  for round t in 1, ..., T do
    S′ ← random subset of the clients
    Initialize Δ_W^t = 0, Δ_φ^t = 0
    for s in S′ do
      W_s^t, φ_s^t, p(z|s) ← CLIENT SIDE(s, φ, W)
    end for
    p(s|z) ← p(z|s) p(s) / Σ_{s∈S′} p(z|s) p(s)
    for s in S′ do
      Δ_{w_k}^t += p(s|z = k)(w_k^{t−1} − w_{s,k}^t) ∀k
      Δ_φ^t += (N_s / N_{S′})(φ^{t−1} − φ_s^t)
    end for
    Δ_φ^t −= ∇_φ H(Σ_c q_φ(z|y = c) p(y = c))
    w_{1:K}^{t+1} ← ADAM(Δ_{w_{1:K}}^t, β)
    φ^{t+1} ← ADAM(Δ_φ^t, β)
  end for
end function
function CLIENT SIDE(s, φ, W)
  Get local parameters θ_s
  for epoch e in 1, ..., E do
    for batch b ∈ B do
      L_s ← E_{q_φ(z|y_b)}[log p_{w_z}(y_b|x_b, z) p_{θ_s}(z|x_b, s)]
      φ += α ∇_φ L_s
      W += α ∇_W L_s
      θ_s += α ∇_{θ_s} L_s
    end for
  end for
  q(z|s) ← E_{y∼D_s}[q_φ(z|y)]
  return w_{1:K}, φ, q(z|s)
end function

Designing robust gates In the federated scenario, N_s is often much smaller than N, and especially small in relation to the complexity of the data we try to model. Any localized parameters are therefore prone to overfitting. On the other hand, the global parameters of an expert are trained using all data points assigned to that expert across all shards, allowing more robust features to be learned. We can make use of the robustness of these experts’ features for the gating mechanism by conditioning on them instead of training an entirely separate model for p_{θ_s}(z|x, s). Let us define h_k(x) as intermediary features of expert k. Since not all experts might be used for a given shard, and in order to scale with K, we average over the marginal posterior of the training set at that shard before applying a linear transformation to compute the input to the softmax gates:

h_s(x) = Σ_{k=1}^{K} q_φ(z = k|s) h_k(x), p_{θ_s}(z|x, s) = SM(A_s^T h_s(x) + b_s), (9)

where θ_s = (A_s, b_s) are local learnable parameters and SM represents the softmax function. Inference at test time We consider three variants for test-time evaluation of FedMix. In the first case, a client s that participated in training is presented with a new data point (x*, s).
Predictions can then be made straightforwardly by selecting the y that maximizes Σ_{z=1}^{K} p(y|x*, z) p(z|x*, s). In the second, more challenging, scenario, a new client s* is introduced together with a new labelled local dataset D_{s*}. Here we propose to instantiate and train the local gating mechanism by optimizing the parameters θ_s of p_{θ_s}(z|x, s*) via MAP inference on the local objective. Afterwards, predictions can be made in a manner similar to the first case. Finally, we consider the case in which a new client s* has no labelled dataset available. Without a local gating function, simply ensembling experts exhibits almost random behaviour, since experts can be overly confident on out-of-distribution data (Snoek et al., 2019). We therefore propose to ensemble across local gating mechanisms to compute p(z|x*) = Σ_{s=1}^{S} p_{θ_s}(z|x*, s) p(s|x*); a method which works well in practice. In Appendix F we discuss results for new-shard inference as well as a more principled approach which makes use of the graphical model formulation in Figure 2.
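The third test-time variant above, ensembling the local gating mechanisms of the training shards for a client without labelled data, can be sketched as follows (a hedged illustration; names are not from the paper):

```python
import numpy as np

def predict_unseen_client(expert_probs, local_gate_probs, p_s):
    """Prediction for a new client s* with no labelled data.

    expert_probs:     (K, C) p(y|x*, z) for each expert.
    local_gate_probs: (S, K) p_{theta_s}(z|x*, s) for each training shard.
    p_s:              (S,)  weights p(s|x*), summing to 1.
    Returns the length-C predictive distribution p(y|x*).
    """
    p_z = p_s @ local_gate_probs  # ensembled gate p(z|x*)
    return p_z @ expert_probs     # p(y|x*) = sum_z p(y|x*, z) p(z|x*)
```

Averaging over the shard-specific gates, rather than over the experts directly, avoids relying on a single gate that may be overconfident off its own shard's distribution.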
The paper proposes a novel algorithm, a federated form of mixture of experts, called Federated Mixture of Experts (FedMix). In FedMix, an ensemble of specialized models is trained instead of a single global model, striking a compromise between training a single global model and one model per client. A gating mechanism is employed to choose the expert model responsible for a given data point, thus aligning the gradient updates across experts and alleviating the consequences of non-i.i.d. data.
SP:4113fc49197e2f7904f2ede5bda27d5d9dd6bc0e
Federated Mixture of Experts
1 INTRODUCTION An ever-increasing amount of devices are being connected to the internet , sensing their environment , and generating vast amounts of data . The term federated learning ( FL ) has been established to describe the scenario where we aim to learn from the data generated by this “ federation ” of devices ( McMahan et al. , 2016 ) . Not only does the number of sensing devices increase , but also their processing power is increasing continuously to the point that it becomes viable to perform inference and training of machine learning models on device . In federated learning , the goal is to learn from these client devices ’ data without collecting the data centrally , which naturally allows for more private exchange of information . Several challenges arise in the federated scenario . Federated devices are generally resource-constrained , both in their computational capacity as well as in communication bandwidth and latency . In a practical example , a smartphone has limited heat dissipation capacity and must communicate via Wi-Fi . From a global perspective , devices ’ processing power and network connection can be highly heterogeneous across geographical regions and socio-economical status of device owners , causing practical issues ( Bonawitz et al. , 2019 ) and raising questions of fairness in FL ( Li et al. , 2019 ; Mohri et al. , 2019 ) . One of the key challenges in FL that we aim to address in this work is the non-i.i.d nature of the shards of data that are distributed across devices . In non-federated machine learning , assuming independent and identically distributed data is generally justifiable and not detrimental to model performance . In FL however , each client performs a series of parameter updates on its own data shard to amortize the costs of communication . 
Over time , the direction of progress across shards with non-i.i.d data starts diverging ( as shown in Figure 1 ) , which can set back training progress , significantly slow down convergence and decrease model performance ( Hsu et al. , 2019 ) . To this end , we propose Federated Mixture of Experts ( FedMix ) , an algorithm for FL that allows for training an ensemble of specialized models instead of a single global model . In FedMix , expert models are learning to specialize in regions of the input space such that , for a given expert , each client ’ s progress on that expert is aligned . FedMix allows each client to learn which experts are relevant for its shard and we show how it can be extended for inference on a previously unseen client . FedMix shows competitive performance against the established standard in FL , FedAvg ( McMahan et al. , 2016 ; Deng et al. , 2020 ) across a range of visual classification tasks . Code will be released upon publication . 2 FEDERATED MIXTURE OF EXPERTS . Federated learning ( McMahan et al. , 2016 ) deals with the problem of learning a server model with parameters w , e.g. , a neural network , from a datasetD = { ( x1 , y1 ) , . . . , ( xN , yN ) } ofN datapoints that is distributed across S shards , i.e. , D = D1 ∪ · · · ∪ DS , without accessing the shard specific datasets directly . By defining a loss function Ls ( Ds ; w ) per shard , the total risk can be written as arg min w S∑ s=1 Ns N Ls ( Ds ; w ) , Ls ( Ds ; w ) : = 1 Ns Ns∑ i=1 L ( Dsi ; w ) . ( 1 ) It is easy to see that this objective corresponds to empirical risk minimization over the joint datasetD with a loss L ( · ) for each datapoint . In federated learning one is interested in reducing the communication costs ; for this reason McMahan et al . ( 2016 ) propose to do multiple gradient updates for w in the inner optimization objective for each shard s , thus obtaining “ local ” models with parameters ws . 
These multiple gradient updates are denoted as “ local epochs ” , i.e. , amount of passes through the entire local dataset , with an abbreviation of E. Each of the shards then communicates the local model ws to the server and the server updates the global model at “ round ” t by averaging the parameters of the local models wt = ∑ s Ns N w t s. This constitutes federated averaging ( FedAvg ) ( McMahan et al. , 2016 ) , the standard in federated learning . One of the main challenges in federated learning is the fact that usually the data are non-i.i.d . distributed across the shards S , that is p ( D|si ) 6= p ( D|sj ) for i 6= j . On the one hand , this can make learning a single global model from all of the data with the classical FedAvg problematic . On the other hand , there is one extreme that does not suffer from this issue ; learning S individual models , i.e. , only optimizing ws on Ds . Although these individual models by definition do not suffer from non-i.i.d data , clearly we should aim to do better and exchange meaningful information between clients to learn more robust and expressive models . 2.1 THE FEDMIX ALGORITHM With FedMix , we propose to strike a balance between the two aforementioned extremes ; learning a single global model and learning S individual models . For this reason , we revisit an old model formulation , the Mixture of Experts ( MoE ) . The classical formulation of a MoE model ( Jacobs et al. , 1991 ; Jordan & Jacobs , 1994 ) contains a set ofK experts and a gating mechanism that is responsible for choosing an expert for a given data-point . A MoE model for a data point ( x , y ) can generally be described by pw1 : K , θ ( y|x ) = K∑ z=1 pwz ( y|x , z ) pθ ( z|x ) , ( 2 ) where z is a categorical variable that denotes the expert , wk are the parameters of expert k and θ are the parameters of the selection mechanism . 
The MoE was proposed as a model for datasets where different subsets of the data exhibit different relationships between input x and output y . Instead of training a single global model to fit this relationship everywhere , each expert performs well on a different subset of the input space . The gating function models the decision boundary between input regions , assigning data-points from subsets of the input region to their respective experts . In this work , we show that , in the federated scenario , sub-dividing the input region through a MoE can alleviate the consequences of non-i.i.d data by aligning gradient updates across experts ( Figure 1 ) . In Federated Mixture of Experts ( FedMix ) we enrich this model by conditioning the gating mechanism on the shard assignment s. Whatever characteristics make shard s different from other shards can manifest in learning a different , localized gating mechanism that does not need to be communicated to the server . In choosingK = 1 , FedMix recovers the standard setting of federated averaging . K = S in combination with fixing p ( z = s|x , s ) = 1 recovers S independent models . From a global perspective , we are interested in maximizing the following single objective : S∑ s=1 Ns∑ i=1 log pw1 : K , θs ( ys , i|xs , i , s ) = S∑ s=1 Ns∑ i=1 log [ K∑ z=1 pθs ( z|xs , i , s ) pwz ( ys , i|xs , i , z ) ] ( 3 ) Given the graphical model decomposition depicted in Figure 2 , the objective in Eq . 3 corresponds to a federated MoE , where we have omitted the generative models p ( x|s ) . We will briefly touch upon the role of learning generative models in Appendix F but focus on the discriminative part of the model , i.e. , the MoE , in this paper . While it is possible to optimize Eq . 3 directly , we have found empirically that it is hard to achieve both : avoiding collapse to a single expert , thus obtaining FedAvg , and specialization of the experts . Instead , we propose to form a variational lower-bound on Eq . 
3 with a global variational approximation qφ ( z| . . . ) to the true posterior p ( z|x , y , s ) with parameters φ . At test time , p ( y|x∗ , s ) = ∑K k=1 p ( y|x∗ , z ) p ( z|x∗ , s ) can be readily evaluated without requiring q . This allows us to condition qφ ( z| . . . ) on any available side-information at training time that might result in better specialization in the non-i.i.d federated scenario . In this paper we mainly consider classification tasks whose non-i.i.d nature predominantly stems from the non-i.i.d distribution of labels y . Other or additional known sources of misalignment could be included to further improve this approximation , such as a manufacturer-id for a medical device in a medical scenario , a geographic identifier , or general domain-specific information . We show one such additional example in 4.2 . The lower bound to be maximized in FedMix therefore is as follows : S∑ s=1 Ns∑ i=1 logpw1 : K , θs ( ys , i|xs , i , s ) ( 4 ) ≥ S∑ s=1 Ns∑ i=1 K∑ z=1 qφ ( z|ys , i ) [ log pwz ( ys , i|xs , i , z ) pθs ( z|xs , i , s ) − log qφ ( z|ys , i ) ] ( 5 ) = S∑ s=1 Ns∑ i=1 ( K∑ z=1 qφ ( z|ys , i ) [ log pwz ( ys , i|xs , i , z ) pθs ( z|xs , i , s ) ] ) +H ( qφ ( z|ys , i ) ) . ( 6 ) Figure 3 : The effect of specialisation ( without H ( q ) ) compared to an ensemble ( with H ( q ) ) and FedAvg on Cifar10 . The experimental setup is identical to what is described in Section 4 . Conditioning only on y allows us to efficiently parameterize the variational approximation , incurring only a small communication overhead . While it would be possible to condition qφ ( z|y ) on s , thus having localized approximations with parameters φs that do not need to be communicated , we found a global approximation to help align the gating mechanisms across shards . A global qφ ( z|y ) encourages shards that contain data with the same label to assign them to the same expert . 
Specialization of the experts is a key ingredient for FedMix to be successful ; with specialization , the gradients for each expert become aligned across shards ( see Figure 1 ) , the hold-out accuracy improves ( see Figure 3 ) , and the communication costs decrease as each shard may only need to access a subset of the experts . We find that performing maximum a-posteriori ( MAP ) inference for z generally leads to better and more personalized models . By removing the entropy term from equation 4 , qφ ( z|y ) and therefore pθs ( z|x , s ) are encouraged to concentrate and select only one expert for a given data point . In the extreme case where a client ’ s shard contains only data that is assigned to the same expert , we can reduce communication by receiving and sending updates for that single expert only . We show in Section 4 that communicating and evaluating experts based on thresholding the aggregate qφ ( z|s ) = Ey∼Ds [ qφ ( z|y ) ] can reduce communication and computation overhead . Figure 3 compares FedMix with and without the entropy term to standard FedAvg as a function of communication steps . With the entropy term , FedMix develops no expert specialization and collapses to an ensemble of K = 4 models . One drawback of the heavy specialization with MAP inference is that sometimes FedMix prematurely completely prunes experts , i.e. , pθs ( z = k|x , s ) ≈ 0 ∀x , s. This can be undesirable as we lose model capacity that can be used for better modeling the data . As qφ ( z|y ) is one of the main training signals of pθs ( z|x , s ) , we introduce the marginal entropy term in the server , H ( Ep ( y ) [ qφ ( z|y ) ] ) , as a regularizer . Notice that this leads to different training dynamics than locally optimizing the lower bound with the entropy included and we , empirically , found that it alleviates premature pruning , while still leading to specialized models . 
Figure 14a in Appendix H visualizes the development of qφ ( z|y ) over time from initially uniform to high specialisation for the experiment depicted in Figure 3 . Server Side Updates In a general federated learning algorithm , a central server selects a subset S′ ⊂ { 1 , . . . , S } of clients at time t and transmits the current estimate of the global parameters wt to them . These clients perform a series of mini-batch gradient updates with data from their shard Ds on a local loss function , which can come at the price of each client moving in possibly different directions in parameter space . In generalized FedAvg ( Reddi et al. , 2020 ) , the server interprets ∆ts = w t − wt+1s as a single-step gradient update from client s , averages those gradients and applies an optimizer such as Adam ( Kingma & Ba , 2014 ) to receive wt+1 . In light of non-i.i.d data across clients , this averaging strategy can result in slow progress since averaging updates in a highly non-convex parameter space can be sub-optimal . In FedMix , this effect is mitigated since for a given expert , the data that is used to update its parameters are aligned better across shards . FedMix offers a second way to improve convergence speed by modifying the server-side updates . In generalized FedAvg , the individual gradients returned by the subset S′ of clients are averaged according to ∆t = S′∑ s=1 p ( s ) ·∆ts , p ( s ) = Ns NS′ . ( 7 ) In FedMix , we can speed up convergence by considering expert-specific updates ∆tk , s = w t k − wt+1k , s . If a client s pruned away expert k from its local gating mechanism , ∆ t k , s will be zero . We propose to normalize the effective magnitude of the resulting update ∆k by up-weighing the updates of all other clients that do consider expert k for their local mixture : ∆tk = S′∑ s=1 p ( s|z = k ) ·∆tk , s , p ( s|z = k ) ∝ p ( z = k|s ) p ( s ) , p ( s ) = Ns NS′ . 
( 8 ) Computing p ( z|s ) = Ex∼Ds [ pθs ( z|s , x ) ] prior to sending updates to the server involves evaluating potentially large neural network models . Therefore we choose to approximate p ( z|s ) ≈ qφ ( z|s ) = Ey∼Ds [ qφ ( z|y ) ] , which involves just a single matrix multiplication . We discuss the implications of sending qφ ( z|s ) to privacy and how these fare relative to FedAvg in Appendix C. Pruning experts Maintaining the cheap-to-compute marginal posterior per shard offers an additional opportunity to increase computation speed during local shard iterations and reduce overall communication costs . We propose to “ prune away ” experts locally from the MoE if qφ ( z|s ) does not surpass a threshold η/K . In order to still optimize a valid bound , we need to re-normalize qφ ( z|y ) before evaluation of the loss function ( Pal et al. , 2005 ) . We evaluate the same threshold prior to sending updates to the server in order to avoid communicating parameters that have not changed during the client ’ s iterations . Once the server selects a client for another round , it provides only those experts to the client that were updated by the client in the previous round . We empirically find that the entropy of q ( z|s ) decreases steadily and we prune away experts k with probability q ( z = k|s ) < η/K without significant drop in performance . We explore the consequences of pruning experts in the experiment section and in Appendix E. Algorithm 2 shows how FedMix can be enriched by pruning . Algorithm 1 The FedMix algorithm . α , β are the client and server learning rates respectively function SERVER SIDE Initialize φ and K vectors W = [ w1 , . . . , wK ] for round t in 1 , . . . 
T do S′ ← random subset of the clients Initialize ∆tW = 0 , ∆ t φ = 0 for s in S′ do Wts , φ t s , p ( z|s ) ← CLIENT SIDE ( s , φ , W ) end for p ( s|z ) ← p ( z|s ) p ( s ) / ∑ s∈S′ p ( z|s ) p ( s ) for s in S′ do ∆twk+ = p ( s|z = k ) ( w t−1 k −wts , k ) ∀k ∆tφ+ = Ns NS′ ( φt−1 − φts ) end for ∆tφ− = ∇φH ( ∑ cqφ ( z|y=c ) p ( y=c ) ) wt+11 : K ← ADAM ( ∆tw1 : K , β ) φt+1 ← ADAM ( ∆tφ , β ) end for end function function CLIENT SIDE ( s , φ , W ) Get local parameters θs for epoch e in 1 , . . . , E do for batch b ∈ B do Ls ← Eqφ ( z|yb ) [ log pwz ( yb|xb , z ) pθs ( z|xb , s ) ] φ+ = α∇φLs W+ = α∇WLs θs+ = α∇θsLs end for end for q ( z|s ) ← Ey∼Ds [ qφ ( z|y ) ] return w1 : K , φ , q ( z|s ) end function Designing robust gates In the federated scenario , Ns is often much smaller than N and especially small in relation to the complexity of the data we try to model . Any localized parameters therefore are prone to overfitting . On the other hand , the global parameters of an expert are trained using all data-points assigned to that expert across all shards , allowing to learn more robust features . We can make use of the robustness of these experts ’ features for the gating mechanism by conditioning on them instead of training an entirely separate model for pθs ( z|x , s ) . Let us define hk ( x ) as intermediary features of expert k. Since not all experts might be used for a given shard and in order to scale with K , we average over the marginal posterior of the training set at that shard before applying a linear transformation to compute the input to the softmax gates : hs ( x ) = K∑ k=1 qφ ( z = k|s ) hk ( x ) pθs ( z|x , s ) = SM ( ATs hs ( x ) +bs ) ( 9 ) where θs = ( As , bs ) are local learnable parameters and SM represents the softmax function . Inference at test time We consider three variants for test-time evaluation of FedMix . In the first case , a client s that participated in training is presented with a new data point ( x∗ , s ) . 
Predictions can then be straightforwardly made by selecting the y that maximizes Σ(z=1..K) p(y|x∗, z) p(z|x∗, s). In the second, more challenging, scenario a new client s∗ is introduced together with a new labelled local dataset Ds∗. Here we propose to instantiate and train the local gating mechanism by optimizing the parameters θs of pθs(z|x, s∗) via MAP inference on the local objective. Afterwards, predictions can be made in a manner similar to the first case. Finally, we consider the case in which a new client s∗ has no labelled dataset available. Without a local gating function, simply ensembling experts exhibits almost random behaviour, since experts can be overly confident on out-of-distribution data (Snoek et al., 2019). We therefore propose to ensemble across the local gating mechanisms to compute p(z|x∗) = Σ(s=1..S) pθs(z|x∗, s) p(s|x∗); a method which works well in practice. In Appendix F we discuss results for new-shard inference as well as a more principled approach which makes use of the graphical model formulation in Figure 2.
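The gate of Equation 9 and the label-free new-client ensemble above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the names and shapes (`h_experts`, `A_s`, `b_s`) are assumptions.

```python
import numpy as np

def softmax(v):
    # numerically stable softmax
    e = np.exp(v - v.max())
    return e / e.sum()

def gate(h_experts, q_zs, A_s, b_s):
    """Eq. 9: average the experts' features h_k(x) by the shard's marginal
    posterior q(z=k|s), then apply a local linear layer and softmax.
    h_experts: (K, D) features of each expert for one input x
    q_zs: (K,) marginal posterior q(z|s); A_s: (D, K); b_s: (K,)."""
    h_s = q_zs @ h_experts             # (K,) @ (K, D) -> (D,)
    return softmax(A_s.T @ h_s + b_s)  # p(z|x, s), shape (K,)

def ensemble_gates(p_z_given_xs, p_s_given_x):
    """New client without labels: p(z|x*) = sum_s p_theta_s(z|x*, s) p(s|x*).
    p_z_given_xs: (S, K) gate outputs of existing shards; p_s_given_x: (S,)."""
    return p_s_given_x @ p_z_given_xs  # (S,) @ (S, K) -> (K,)

rng = np.random.default_rng(0)
K, D = 3, 4
p_local = gate(rng.normal(size=(K, D)), np.array([0.6, 0.3, 0.1]),
               rng.normal(size=(D, K)), np.zeros(K))
# two identical shard gates weighted equally reproduce the single gate
p_new = ensemble_gates(np.stack([p_local, p_local]), np.array([0.5, 0.5]))
```

Because each shard gate is a softmax, the ensemble is a convex combination of valid distributions and therefore remains a valid distribution over experts.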
The paper proposes a method for federated learning of a mixture-of-experts model (FedMix). The approach allows training an ensemble of models, each of which specializes to a subset of clients with similar data characteristics. The authors argue that this way of training an ensemble reduces gradient divergence/interference, improves overall performance, and sometimes reduces the communication overhead. The new method is evaluated on a few federated image classification datasets.
Learning Accurate Entropy Model with Global Reference for Image Compression
1 INTRODUCTION. Image compression is a fundamental research topic in computer vision. The goal of image compression is to preserve the critical visual information of the image while reducing the bit-rate for storage or transmission. The state-of-the-art image compression standards, such as JPEG (Wallace, 1992), JPEG2000 (Rabbani & Joshi, 2002), HEVC/H.265 (Sullivan et al., 2012) and Versatile Video Coding (VVC) (Ohm & Sullivan, 2018), are carefully engineered and highly tuned to achieve better performance. Albeit widely deployed, these conventional human-designed codecs took decades of development to reach today's impressive compression rates. Any further improvement is expected to be even more difficult. Inspired by the success stories of deep learning in many vision tasks, several pioneering works (Toderici et al., 2016; Agustsson et al., 2017; Theis et al., 2017; Ballé et al., 2017; Ballé et al., 2018; Mentzer et al., 2018; Lee et al., 2019; Minnen et al., 2018a) demonstrate that the image compression task can be effectively solved by deep learning too. This breakthrough allows us to use data-driven learning systems to design novel compression algorithms automatically. As a result, a majority of deep image compression (DIC) models are based on the autoencoder framework. In this framework, an encoder transforms pixels into a quantized latent representation suitable for compression, while a decoder is jointly optimized to transform the latent representation back into pixels. The latent representation can be losslessly compressed into a bitstream using an entropy coding method (Rissanen & Langdon, 1981). In entropy coding, the compression quality is controlled by the entropy estimate of the latent features generated by the encoder. It is therefore important to learn an accurate entropy model. To this end, several solutions have been considered.
With additional bits, some methods propose an entropy model conditioned on a hyperprior, using side information such as local histograms over the latent representation (Minnen et al., 2018b) or a hierarchical learned prior (Ballé et al., 2018). Context-adaptive models (Minnen et al., 2018a; Lee et al., 2019) incorporate predictions from neighboring symbols to avoid storing the additional bits. While these methods improve the accuracy of the entropy models, they are unable to use global context information during compression, leading to suboptimal performance. In this work, we observe that global spatial redundancy remains in the latents, as shown in Figure 1. Motivated by this, we propose to build up a global relevance throughout the latents. Inspired by recent reference-based Super-Resolution (SR) methods (Zheng et al., 2018; Yang et al., 2020), we empower the entropy model with global vision by incorporating a reference component. Unlike the super-resolution scenario, incorporating global reference information is non-trivial in deep image compression. During decoding the image is often incomplete, which means that much of the information is missing. Besides, our target is to reduce the bit-rate and recover the image from the bitstream faithfully, rather than to inpaint a low-resolution image with vividly generated details. To address the above challenges, in our proposed method a global reference module searches over the decoded latents to find those relevant to the target latent. The feature map of the relevant latent is then combined with the local context and the hyperprior to generate a more accurate entropy estimate. A key ingredient in the global reference ensemble step is that we consider not only the similarity between the reference and the target but also a confidence score that measures the high-order statistics of the latent feature distribution.
The introduction of the confidence score enhances the robustness of the entropy model, especially for images with noisy backgrounds. Also, we found that the widely used Generalized Divisive Normalization (GDN) in image compression suffers from a mean-shifting problem. Since the GDN densities are zero-mean by definition, mean removal is necessary to fit the density (Ballé et al., 2016b). We therefore propose an improved version of GDN, named GSDN (Generalized Subtractive and Divisive Normalization), to overcome this difficulty. We summarize our main contributions as follows:

• To the best of our knowledge, we are the first to introduce a global reference into the entropy model for deep image compression. We develop a robust reference algorithm to ensemble local context, global reference and hyperprior in a novel architecture. When estimating the latent feature entropy, both the similarity score and the confidence score of the reference area are considered to combat noisy background signals.

• We propose a novel GSDN module that corrects the mean-shifting problem.

• Experiments show that our method outperforms the most advanced codecs available today on both the PSNR and MS-SSIM quality metrics. Our method saves 6.1% compared to context-adaptive deep models (Minnen et al., 2018b; Lee et al., 2019) and as much as 21.0% relative to BPG (Bellard, 2014).

The remainder of this work is organized as follows. In Section 2, we introduce the backbone of the end-to-end deep image compression network as well as the reference-based component of the entropy model. Section 3 demonstrates the structure of our combined entropy model. The GSDN with mean-shifting correction is given in Section 4. We present experimental comparisons and visualizations in Section 5. Finally, we conclude this work with an open discussion in Section 6. 2 LEARNED IMAGE COMPRESSION.
Learned image compression using deep neural networks has attracted considerable attention recently. The work of Toderici et al. (2016) first explored a recurrent architecture using an LSTM-based entropy model. A wide range of models (Ballé et al., 2017; Ballé et al., 2018; Mentzer et al., 2018; Minnen et al., 2018a; Lee et al., 2019; Hu et al., 2020; Cheng et al., 2020) used a CNN-based autoencoder with constrained entropy. General learned image compression consists of an encoder, a quantizer, a decoder, and an entropy model. An image x is transformed into a latent representation y via the encoder ga(x), which is discretized by the quantizer Q(y) to form ŷ. Given the entropy model pŷ, the discretized value ŷ can be compressed into a bitstream using entropy coding techniques such as arithmetic coding (Rissanen & Langdon, 1981). The decoder gs(ŷ) then forms the reconstructed image x̂ from the quantized latent representation ŷ, which is decompressed from the bitstream. The training goal for learned image compression is to optimize the trade-off between the estimated coding length of the bitstream and the quality of the reconstruction, which is a rate-distortion optimization problem:

L = R + λD = Ex∼px[−log2 pŷ(Q(ga(x)))] + λ Ex∼px[d(x, gs(ŷ))],    (1)

where λ is the coefficient which controls the rate-distortion trade-off and px is the unknown distribution of natural images. The first term represents the estimated compression rate of the latent representation. The second term d(x, x̂) represents the distortion value under a given metric, such as mean squared error (MSE) or MS-SSIM (Wang et al., 2003). Entropy coding relies on an entropy model to estimate the prior probability of the latent representation. Ballé et al.
(2017) propose a fully factorized prior for entropy estimation, as shown in Figure 2(a), although the prior probability of the discrete latent representation is not adaptive to different images. As shown in Figure 2(b), Ballé et al. (2018) model the latent representation as a zero-mean Gaussian distribution based on a spatial dependency encoded with additional bits. Lee et al. (2019) and Minnen et al. (2018a) introduce an autoregressive component into the entropy model. Taking advantage of the high correlation between neighboring latents, context-adaptive models contribute to more accurate entropy estimation. However, since their context-adaptive entropy models only capture the spatial information of neighboring latents, redundant spatial information remains across the whole image. To further remove such redundancy, our method incorporates a reference-based model to capture global spatial dependency. Specifically for learned image compression, a generalized divisive normalization (GDN) (Ballé et al., 2016a) transform with optimized parameters has proven effective in Gaussianizing the local joint statistics of natural images. Unlike many other normalization methods, whose parameters are typically fixed after training, GDN is spatially adaptive and therefore highly nonlinear. As the reference-based model calculates relevance over the latents, it is crucial to align the distribution of the latents. To better align the latents, the proposed GSDN incorporates a subtractive factor into GDN. We also present an effective method of inverting it when decompressing the latent representation back to an image. 3 COMBINED LOCAL, GLOBAL AND HYPERPRIOR ENTROPY MODEL. The models we analyze in this paper build on the architecture introduced in Minnen et al. (2018a), which combined an autoregressive model with the hyperprior. Figure 3 provides a high-level overview of our approach. The compression model contains two main sub-networks.
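The rate–distortion objective of Equation 1 amounts to averaging the estimated per-image code length and distortion over a batch. A minimal illustrative sketch (the names and shapes are assumptions, not the paper's code):

```python
import numpy as np

def rd_loss(neg_log2_p, distortion, lam):
    """Eq. 1: L = R + lambda * D.
    neg_log2_p: (B, M) estimated -log2 p_yhat for each quantized latent
    distortion: (B,) per-image distortion d(x, x_hat), e.g. MSE
    lam: trade-off coefficient lambda."""
    rate = neg_log2_p.sum(axis=1).mean()  # expected bits per image
    return rate + lam * distortion.mean()

# toy batch of 2 images with 4 latents each, 1 bit per latent
loss = rd_loss(np.ones((2, 4)), np.array([2.0, 4.0]), lam=0.5)
```

A larger λ shifts the optimum toward lower distortion at the cost of a higher bit-rate, which is how a family of models covering different bit-rates is usually trained.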
The first is the core autoencoder, which learns the transform and the inverse transform between image and latent representation. Q represents the quantization function. The gradient-based optimization in learned methods is hindered by quantization; here, we make use of a mixed approach that has proven efficient in Minnen & Singh (2020). The second sub-network is the combined entropy model, which is responsible for estimating a probabilistic model over the latents for entropy coding. The combined entropy model consists of a context model, a reference model, and a hyper-network (hyper encoder and hyper decoder). The three components are combined progressively. Three parameter networks then generate the mean and scale parameters for a conditional Gaussian entropy model, respectively. Following the work of Minnen et al. (2018a), we model each latent ŷi as a Gaussian with mean µi and standard deviation σi, convolved with a unit uniform distribution:

pŷ(ŷ|ẑ, θ) = ∏i (N(µi, σi²) ∗ U(−0.5, 0.5))(ŷi)    (2)

where µ and σ are the predicted parameters of the entropy model, ẑ is the quantized hyper-latents, and θ is the entropy model parameters. The entropy model for the hyperprior is the same as in Ballé et al. (2018), a non-parametric, fully factorized density model. As the hyperprior is part of the compressed bitstream, we extend the rate of Equation 1 as follows:

R = Ex∼px[−log2 pŷ(ŷ)] + Ex∼px[−log2 pẑ(ẑ)]    (3)

The compressed latents and the compressed hyper-latents are both part of the bitstream. The reference-based SR methods (Zheng et al., 2018; Yang et al., 2020) adopt "patch match" to search for proper reference information. However, in the serial processing of image compression, the latent representation during decoding is often incomplete. We extend this search method by using a masked patch.
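Because each latent's density in Equation 2 is a Gaussian convolved with a unit uniform, the probability mass of a quantized value is the Gaussian CDF integrated over its quantization bin. A self-contained sketch of this standard computation:

```python
import math

def latent_prob(y_hat, mu, sigma):
    """Mass of N(mu, sigma^2) convolved with U(-0.5, 0.5) at the quantized
    value y_hat: the Gaussian CDF integrated over [y_hat - 0.5, y_hat + 0.5]."""
    def cdf(v):
        # standard Gaussian CDF via the error function
        return 0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))
    return cdf(y_hat + 0.5) - cdf(y_hat - 0.5)

# the estimated code length of this symbol, in bits, is -log2 of its mass
bits = -math.log2(latent_prob(0.0, 0.3, 0.7))
```

Since the integer bins tile the real line, the masses sum to one, so the per-symbol bit estimates form a consistent rate term for Equation 3.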
Figure 4 illustrates how the relevance embedding module estimates similarity and fetches the relevant latents. When decoding the target latent, we use neighboring latents (left and top) as a basis to compute the similarities between the target latent and its previous latents. In particular, the latents are unfolded into patches and then masked, denoted q ∈ [H×W, k×k×C] (where H, W, k, C correspond to height, width, unfold kernel size and channels, respectively). We calculate the similarity matrix r ∈ [H×W, H×W] over the masked patches using cosine similarity:

ri,j = ⟨qi/‖qi‖, qj/‖qj‖⟩    (4)

Figure 4: A masked sliding patch searches over all the decoded latents (tan area). The relevant latents are fetched and learned with a masked convolution.

Note that we can only see the decoded latents, so the lower triangle of the similarity matrix is set to zero. We obtain the most relevant position for each latent as well as its similarity score. According to this position, we fetch the neighboring latents (left and top) as well as the center latent, which we name the "relevant latents". We use a masked convolution as in Van den Oord et al. (2016) to transfer the relevant latents. To measure how likely a reference patch is to perfectly match the target patch, Yang et al. (2020) propose a soft-attention module that transfers features using the similarity map S. However, we found that a similarity score alone is not sufficient to reflect the quality of a reference latent in image compression. For this reason, a confidence score is introduced to measure the texture complexity of the relevant latent. We use the context model alone to predict the Gaussian parameters (i.e., µ1, σ1) of the latents. The latents ŷ are now modeled as Gaussian with mean µ1 and standard deviation σ1. The probabilities of the latents are then calculated from (µ1, σ1) as in Equation 2.
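The relevance search of Equation 4 reduces to a cosine-similarity matrix over the unfolded patches, with positions not yet decoded (in raster order) masked out before taking the argmax. An illustrative sketch; the flat indexing and the patch unfolding are simplifying assumptions:

```python
import numpy as np

def most_relevant(q):
    """q: (HW, P) masked patches, rows in raster (decoding) order.
    For each position i >= 1, return the most similar already-decoded
    position j < i and its cosine-similarity score (row 0 has no
    valid reference and is left at -inf)."""
    qn = q / np.linalg.norm(q, axis=1, keepdims=True)
    r = qn @ qn.T                          # Eq. 4: cosine similarities
    r[np.triu_indices_from(r)] = -np.inf   # mask self and undecoded positions
    return r.argmax(axis=1), r.max(axis=1)

# position 3 is (up to scale) identical to position 1, so it should match it
patches = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 2.0]])
best, score = most_relevant(patches)
```

Cosine similarity is scale-invariant, which is one reason the paper additionally aligns latent distributions (via GSDN) and weights the match by a confidence score rather than relying on similarity alone.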
As the reference model operates in the spatial dimension, the confidence map U is obtained by averaging the probabilities across channels. With these two parameters, the more relevant latent combination is enhanced while the less relevant one is suppressed. The similarity S and the confidence U are both 2D feature maps. Figure 5 shows the structure of our combined entropy model. For the context model, we transfer the latents (i.e., ŷ) with a masked convolution. For the reference model, we transfer the unfolded relevant latents with a masked convolution. We use 1×1 convolutions in the parameter networks. Local, global and hyperprior features are ensembled stage by stage, as are the predicted Gaussian parameters. The mean parameters are first estimated by the context model and then updated by the global model and the hyperprior model. We use the Log-Sum-Exp trick to resolve under- or overflow issues in the deviation parameters. The output of the global reference is further multiplied by the similarity S and the confidence U. The context model relies on the neighboring latents of the target latent to reduce local redundancy. From the perspective of the global context, the reference model makes further efforts to capture spatial dependency. Since the first two models predict from the decoded latents alone, some uncertainty remains that they cannot eliminate by themselves. The hyperprior model learns to store the information needed to reduce this uncertainty. This progressive mechanism incrementally improves the accuracy of the distribution estimation.
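The confidence map and the weighting of the global-reference features described above can be sketched as follows. The shapes and names are illustrative assumptions; `probs` stands for the per-latent probabilities computed from the context model's (µ1, σ1):

```python
import numpy as np

def confidence_map(probs):
    """probs: (C, H, W) per-latent probabilities under (mu1, sigma1).
    Averaging across the channel axis yields the 2D confidence map U."""
    return probs.mean(axis=0)

def weight_reference(ref_feat, S, U):
    """Scale the global-reference features (C, H, W) by the 2D similarity
    map S and confidence map U before ensembling with the local context
    and hyperprior branches."""
    return ref_feat * S[None, :, :] * U[None, :, :]

probs = np.full((8, 2, 2), 0.5)          # toy: uniform 0.5 probability everywhere
U = confidence_map(probs)                # -> 0.5 at every spatial position
out = weight_reference(np.ones((8, 2, 2)), np.full((2, 2), 0.8), U)
```

Low-probability (hard-to-predict, e.g. noisy) regions thus down-weight their reference contribution, which matches the stated goal of robustness to noisy backgrounds.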
This paper proposes two methods to improve deep image compression performance: (i) a Global Reference Module and (ii) a mean-shifting-corrected GDN module (GSDN). (i) The Global Reference Module searches over the decoded latents to find the latents relevant to the target latent, improving the accuracy of the entropy estimate; the authors extend the method of Yang et al. (2020) to use a masked patch. (ii) GSDN extends GDN with a subtractive operation.
The paper presents a learning-based approach for image compression. To reduce the compression rate, it describes two novel extensions: one that takes the global context into account, and an improved version of the commonly used GDN layer. Their advantage is shown in a thorough ablation study. Overall, the method achieves superior performance compared to standard codecs as well as other state-of-the-art learning-based methods on the evaluated dataset (Kodak).
Language Models are Open Knowledge Graphs
1 INTRODUCTION. Knowledge graphs (KGs) are an important resource for both humans and machines. Factual knowledge in KGs is injected into AI applications to imitate important skills possessed by humans, e.g., reasoning and understanding. KG construction is mainly supervised, requiring humans to handwrite every fact, as in Freebase (Bollacker et al., 2008) and Wikidata. KGs can also be constructed in a semi-supervised way, in which a semi-automatic extractor obtains facts from web corpora (e.g., NELL (Carlson et al., 2010) and Knowledge Vault (Dong et al., 2014)); humans, however, still need to interact with the extractor to improve the quality of the discovered facts. Human supervision, which is often expensive, is therefore required to construct KGs. Recent progress in language models (LMs), such as BERT (Devlin et al., 2018) and GPT-2/3 (Radford et al., 2019; Brown et al., 2020), has led to superior results, even outperforming humans on a wide range of tasks, e.g., sentence classification (Wang et al., 2018) and question answering (Brown et al., 2020). Pre-trained LMs are also capable of writing poetry, music, and code, tasks that often require humans to spend a significant amount of time acquiring the relevant knowledge. In fact, these pre-trained LMs automatically acquire factual knowledge from large-scale corpora (e.g., BookCorpus (Zhu et al., 2015), Common Crawl (Brown et al., 2020)) via pre-training. The knowledge learned by pre-trained LMs is key to their current success. We therefore consider the following question: instead of using manually created knowledge, can we use the knowledge stored in pre-trained LMs to construct KGs? In this paper, we design an unsupervised approach called MAMA that successfully recovers the factual knowledge stored in LMs to build KGs from scratch.
MAMA constructs a KG with a single forward pass of a pre-trained LM (without fine-tuning) over a textual corpus. As illustrated in Figure 1, MAMA has two stages: Match and Map. The Match stage generates a set of candidate facts by matching facts in the textual corpus with the knowledge in the pre-trained LM. General or world knowledge from large-scale corpora is embedded in the LM, so candidate facts in the target corpus are often covered by the knowledge in the LM. The candidate facts are matched through an efficient beam search in the attention weight matrices of the pre-trained LM, without fine-tuning. The Map stage produces an open KG by mapping the matched candidate facts from the Match stage to both a fixed KG schema and an open schema. If the schema of a candidate fact exists in the KG schema, we map the candidate fact directly to the fixed KG schema; otherwise, we reserve the unmapped candidate facts in the open schema.

Figure 1: MAMA constructs the open KG with a single forward pass of the pre-trained language model (LM) (without fine-tuning) over the corpus. Given as input a textual corpus containing passages and sentences, e.g., English Wikipedia, and a pre-trained LM, e.g., BERT or GPT-2/3, MAMA (1) generates a set of candidate facts by matching the knowledge in the pre-trained LM with facts in the textual corpus, e.g., a candidate fact (Dylan, is, songwriter) from the sentence "Dylan is a songwriter.", and (2) produces an open KG by mapping the matched candidate facts to both an existing KG schema, e.g., (Bob Dylan.Q392, occupation.P106, Songwriter.Q753110) in the Wikidata schema, and an open schema, e.g., (Bob Dylan.Q392, sign, Albert Grossman.Q708584).

This results in a new type of KG, an open KG, with a mixture of mapped facts in a fixed KG schema and unmapped facts in an open schema. Our contributions are as follows: 1. We show how to construct KGs from pre-trained LMs.
The KGs are constructed with a single forward pass of the pre-trained LMs, without fine-tuning, over the textual corpora. This helps researchers explicitly understand what the language models learn, bridging the deep LM and KG communities through enhanced model transparency. 2. We propose an unsupervised two-stage approach, MAMA, that first matches candidate facts in the corpora with the knowledge stored in LMs, then maps the matched candidate facts to both fixed and open schemas to produce a KG. 3. We generate a new type of KG, namely the open KG, which consists of mapped facts in the fixed KG schema of existing KGs (Wikidata and TAC KBP) annotated by humans, and unmapped facts in the open schema that are new relative to the reference KG schema. The reach of this result is broad, with downstream utility for knowledge graph construction, deep neural network interpretation, and information extraction. 2 MAMA. We introduce Match and Map (MAMA), an unsupervised end-to-end approach illustrated in Figure 1 for constructing open knowledge graphs (KGs) from language models (LMs). MAMA constructs the KGs with a single forward pass of the pre-trained LMs (without fine-tuning) over the corpora. The two stages of MAMA are: Match generates a set of candidate facts from a textual corpus. LMs contain global or world knowledge learned from large-scale corpora, which often does not perfectly match the knowledge in the target corpus; the goal of this stage is to match the knowledge stored in pre-trained LMs with facts in the corpus. Each fact is represented as a triplet (head, relation, tail)1, in short (h, r, t), and passed to the Map stage. The Match procedure is detailed in Sec. 2.1. Map produces an open KG using the matched candidate facts from the Match stage. The constructed open KG has two portions: (a) mapped candidate facts that are in a fixed KG schema, e.g.
, (Dylan, is, songwriter) is mapped to (Bob Dylan.Q392, occupation.P106, Songwriter.Q753110) according to the Wikidata schema; and (b) unmapped candidate facts that are in an open schema, e.g., the candidate fact (Dylan, signed, Albert Grossman) is partially mapped to (Bob Dylan.Q392, sign, Albert Grossman.Q708584) in the open schema. This stage is described in Sec. 2.2.

Figure 2: The best matched candidate fact (Dylan, is, songwriter) from the sentence "Dylan is a songwriter." The lower portion shows the corresponding step-by-step process. Given a head-tail pair (Dylan, songwriter), at each step the search chooses one of the actions START, YIELD, STOP to produce an intermediate candidate fact. The search starts by adding the head "Dylan" as an initial candidate (step 0), with the candidate's matching degree initialized to 0. Next, a new candidate is yielded if the candidate has not reached the tail "songwriter" (steps 1 and 2), by appending the next most-attended token (the one with the largest score in the attention matrix (b) of the sentence) to the end of the current candidate; the corresponding matching degrees are increased by the associated attention scores (0.3 and 0.4) to 0.3 (0+0.3) and 0.7 (0.3+0.4), respectively. Otherwise the search stops, and the candidate fact with the best matching degree is returned for the head-tail pair (step 3). The attention matrix (b) comes from the forward pass of the LM, without fine-tuning, over the sentence; "x" marks the tokens excluded to prevent searching backward.

2.1 MATCH. We frame the matching procedure as a search problem: to obtain the best matched candidate facts for an input sentence, the candidates with the top matching degrees are returned by a search process.
The matching degree is derived from the search in the attention weight matrices of the pre-trained LM, since the attention weight matrices are one of the main containers of the knowledge in the pre-trained LM. The attention weight matrices come simply from the forward pass of the LM, without fine-tuning, over the sentence. 2.1.1 BEAM SEARCH. We design a simple yet effective beam search to find the best matched candidate facts. For every head-tail pair (h, t) in a sentence, the search maintains the k best matched candidate facts of the pair. Let's first consider the search from left to right with beam size equal to 1. An example search process is shown in Figure 2. Given a head-tail pair (Dylan, songwriter), at each step the search performs one of the following actions: START the search from the head. The head h is added as an initial candidate into the beam. For simplicity, we use START(h) to denote this action, which returns a candidate (h,. In Figure 2(a), at step 0, the head "Dylan" is added as (Dylan, into the beam; the matching degree is initialized to 0. YIELD a new intermediate candidate in the beam if the current candidate has not reached the tail. The next most-attended token (the one with the largest score in the attention matrix) is appended to the end of the current candidate to yield the new candidate, and the corresponding matching degree is increased by the associated attention score. At step 1 (orange arrow in Figure 2(a)), "is" is appended to the current candidate to yield (Dylan, is,, since "is" has the largest attention score with "Dylan" in the attention matrix. The attention score is 0.3, as highlighted in orange in Figure 2(b), so the matching degree becomes 0.3 (i.e., 0+0.3). The multi-head attention is reduced to a single head so that every pair of tokens in the sentence is associated with one attention weight.
We experiment with different reduction setups in Sec. A.3.

¹We use the terms "head" and "tail" to denote the head and tail "entities" or "entity mentions" for simplicity.

Algorithm 1 Beam search for matching candidate facts.
Input: head-tail pair (h, t), sentence s, attention matrix As, action manager O = {START, YIELD, STOP}, beam size k
Output: candidate facts T(h, t)
1: T(h, t) ← {START(h)}  ▷ Start by adding the head as a candidate in the beam
2: while ∃c ∈ T(h, t) such that O(c) = YIELD do
3:   T̃(h, t) ← ∅  ▷ Initialize a new beam
4:   for each c ∈ T(h, t) do
5:     if O(c) = YIELD then
6:       T̃(h, t) ← T̃(h, t) ∪ {YIELD(c, s, As)}  ▷ Yield a new candidate if the tail has not been reached
7:     else
8:       T̃(h, t) ← T̃(h, t) ∪ {STOP(c, t)}  ▷ Stop and produce a valid fact once the tail is reached
9:     end if
10:   end for
11:   T(h, t) ← TOP(k, T̃(h, t))  ▷ Maintain the k best candidates in the beam
12: end while
13: return T(h, t)

"x" marks the tokens (prior to the current token) that are not considered in the search, to prevent searching backward. Step 2 similarly takes the YIELD action to produce (Dylan, is songwriter,; the matching degree is now 0.7 (i.e., 0.3+0.4). We use YIELD(c, s, As) to denote this action, where c is the current candidate, s is the sentence, and As is the attention matrix from the forward pass of the pre-trained LM over s; the action yields a new candidate. STOP the search step if the candidate has reached the tail, then add the candidate as a valid candidate fact into the beam. Since the beam size equals 1, (Dylan, is, songwriter) is the only candidate fact returned for the given pair; its final matching degree is 0.7. We denote this step STOP(c, t), which returns a valid fact. The details of the proposed beam search are in Algorithm 1.
The inputs of the search algorithm are a head-tail pair (h, t), a sentence s, and the attention matrix As of s. Both h and t are identified by noun chunks in s; As is the attention matrix associated with s from the forward pass of the LM without fine-tuning. The search starts by adding the head h as the initial candidate in the beam (line 1). While there are still new candidates waiting to be yielded (line 2), the search continues, and the top k candidates sorted by matching degree are maintained in the beam (lines 3-11). In practice, we implement an action manager O to decide which action to take at each step: given a candidate c in the beam, O(c) = START always happens at the beginning of the search; if c has not yet reached the tail t, O(c) = YIELD; otherwise, O(c) = STOP. We convert subwords to the corresponding full words. We also notice that some facts appear in reverse order in the sentence, e.g., "· · · said Jason Forcier, a vice president at battery maker A123 Systems Inc." for facts of the relation "org:top members employees"; we therefore enable bidirectionality by running the algorithm in both directions (left to right and right to left). The beam search is implemented via breadth-first search, which is efficient: the time complexity is O(k · d), where d is the maximum depth of the search tree.
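The Match search above can be sketched in plain Python over a toy attention matrix. This is a simplification of Algorithm 1, not the authors' code: the function name `match`, the position-based token indexing, and the toy scores are ours, and subword merging and bidirectional search are omitted.

```python
def match(head, tail, tokens, attn, k=1):
    """Toy beam search over an attention matrix (Algorithm 1, simplified).

    attn[p][q] is the attention score between the tokens at positions p and q.
    Returns up to k (path, matching_degree) pairs; the tokens strictly between
    head and tail along a path form the relation of the candidate fact.
    """
    h, t = tokens.index(head), tokens.index(tail)
    beam, done = [((h,), 0.0)], []      # START: the head is the initial candidate
    while beam:
        grown = []
        for path, deg in beam:
            p = path[-1]
            if p == t:                   # STOP: tail reached, fact is valid
                done.append((path, deg))
                continue
            for q in range(p + 1, t + 1):   # YIELD: extend forward only
                grown.append((path + (q,), deg + attn[p][q]))
        beam = sorted(grown, key=lambda c: -c[1])[:k]   # keep k best candidates
    return sorted(done, key=lambda c: -c[1])[:k]
```

On the running example, with toy scores mirroring Figure 2 (0.3 for Dylan→is, 0.4 for is→songwriter), the search returns the path Dylan → is → songwriter with matching degree 0.7.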
This paper presents an unsupervised approach for extracting OpenIE style triples from a corpus. The approach leverages the internal attention maps of pretrained transformers to identify paths which correspond to relations between a head entity and a tail entity. The extracted open triples are then mapped, wherever possible, to an existing KG to create what is referred to as an Open KG.
Paper summary: The paper introduces an unsupervised method that utilizes an off-the-shelf BERT (without any fine-tuning) to create an information extraction system without any training data. It first creates ungrounded triples (a.k.a. OpenIE) from raw text by looking at the attention weights between words and finding, via beam search, a sequence of words with high attention weights between every pair of consecutive words. When such a sequence is found, the first and last words become the head and tail entities, and the words between them become the relation. The next step grounds each triple to a known knowledge graph using off-the-shelf entity and relation grounding mechanisms. The proposed method shows a 1-2% F1 score advantage over Stanford OpenIE on TAC KBP and Wikidata.
The Risks of Invariant Risk Minimization
1 INTRODUCTION . Prediction algorithms are evaluated by their performance on unseen test data . In classical machine learning , it is common to assume that such data are drawn i.i.d . from the same distribution as the data set on which the learning algorithm was trained—in the real world , however , this is often not the case . When this discrepancy occurs , algorithms with strong in-distribution generalization guarantees , such as Empirical Risk Minimization ( ERM ) , can fail catastrophically . In particular , while deep neural networks achieve superhuman performance on many tasks , there is evidence that they rely on statistically informative but non-causal features in the data ( Beery et al. , 2018 ; Geirhos et al. , 2018 ; Ilyas et al. , 2019 ) . As a result , such models are prone to errors under surprisingly minor distribution shift ( Su et al. , 2019 ; Recht et al. , 2019 ) . To address this , researchers have investigated alternative objectives for training predictors which are robust to possibly egregious shifts in the test distribution . The task of generalizing under such shifts , known as Out-of-Distribution ( OOD ) Generalization , has led to many separate threads of research . One approach is Bayesian deep learning , accounting for a classifier ’ s uncertainty at test time ( Neal , 2012 ) . Another technique that has shown promise is data augmentation—this includes both automated data modifications which help prevent overfitting ( Shorten & Khoshgoftaar , 2019 ) and specific counterfactual augmentations to ensure invariance in the resulting features ( Volpi et al. , 2018 ; Kaushik et al. , 2020 ) . A strategy which has recently gained particular traction is Invariant Causal Prediction ( ICP ; Peters et al . 2016 ) , which views the task of OOD generalization through the lens of causality . 
This framework assumes that the data are generated according to a Structural Equation Model ( SEM ; Bollen 2005 ) , which consists of a set of so-called mechanisms or structural equations that specify variables given their parents . ICP assumes moreover that the data can be partitioned into environments , where each environment corresponds to interventions on the SEM ( Pearl , 2009 ) , but where the mechanism by which the target variable is generated via its direct parents is unaffected . Thus the causal mechanism of the target variable is unchanging but other aspects of the distribution can vary broadly . As a result , learning mechanisms that are the same across environments ensures recovery of the invariant features which generalize under arbitrary interventions . In this work , we consider objectives that attempt to learn what we refer to as the “ optimal invariant predictor ” —this is the classifier which uses and is optimal with respect to only the invariant features in the SEM . By definition , such a classifier does not overfit to environment-specific properties of the data distribution , so it will generalize even under major distribution shift at test time . In particular , we focus our analysis on one of the more popular objectives , Invariant Risk Minimization ( IRM ; Arjovsky et al . ( 2019 ) ) , but our results can easily be extended to similar recently proposed alternatives . Various works on invariant prediction ( Muandet et al. , 2013 ; Ghassami et al. , 2017 ; Heinze-Deml et al. , 2018 ; Rojas-Carulla et al. , 2018 ; Subbaswamy et al. , 2019 ; Christiansen et al. , 2020 ) consider regression in both the linear and non-linear setting , but they exclusively focus on learning with fully or partially observed covariates or some other source of information . Under such a condition , results from causal inference ( Maathuis et al. , 2009 ; Peters et al. 
, 2017 ) allow for formal guarantees of the identification of the invariant features , or at least a strict subset of them . With the rise of deep learning , more recent literature has developed objectives for learning invariant representations when the data are a non-linear function of unobserved latent factors , a common assumption when working with complex , high-dimensional data such as images . Causal discovery and inference with unobserved confounders or latents is a much harder problem ( Peters et al. , 2017 ) , so while empirical results seem encouraging , these objectives are presented with few formal guarantees . IRM is one such objective for invariant representation learning . The goal of IRM is to learn a feature embedder such that the optimal linear predictor on top of these features is the same for every environment—the idea being that only the invariant features will have an optimal predictor that is invariant . Recent works have pointed to shortcomings of IRM and have suggested modifications which they claim prevent these failures . However , these alternatives are compared in broad strokes , with little in the way of theory . In this work , we present the first formal analysis of classification under the IRM objective under a fairly natural and general model which carefully formalizes the intuition behind the original work . Our results show that despite being inspired by invariant prediction , this objective can frequently be expected to perform no better than ERM . In the linear setting , we present simple , exact conditions under which solving to optimality succeeds or , more often , breaks down in recovering the optimal invariant predictor . 
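To make the IRM objective concrete, the following is a hedged NumPy sketch of an IRMv1-style relaxation with squared loss and a scalar "dummy" classifier w on top of a one-dimensional feature φ(x). The names `irm_penalty` and `irm_objective` and the squared-loss choice are our simplifications of Arjovsky et al. (2019), not this paper's exact formulation.

```python
import numpy as np

def irm_penalty(phi, y):
    """Squared gradient, w.r.t. a scalar classifier w at w = 1, of one
    environment's squared-error risk R_e(w) = E[(w * phi - y)^2]."""
    grad_w = np.mean(2.0 * (phi - y) * phi)   # dR_e/dw evaluated at w = 1
    return grad_w ** 2

def irm_objective(envs, lam=1.0):
    """Sum over environments of empirical risk plus lambda times the penalty.

    envs: list of (phi, y) array pairs, one per environment.
    """
    risk = sum(np.mean((phi - y) ** 2) for phi, y in envs)
    return risk + lam * sum(irm_penalty(phi, y) for phi, y in envs)
```

A feature for which w = 1 is simultaneously optimal in every environment incurs zero penalty; a non-invariant feature generally does not, which is the mechanism the failure cases in this paper exploit.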
We also demonstrate another major failure case—under mild conditions, there exists a feasible point that uses only non-invariant features and achieves lower empirical risk than the optimal invariant predictor; thus it will appear as a more attractive solution, yet its reliance on non-invariant features means it will fail to generalize. As corollaries, we present similar settings where all recently suggested alternatives to IRM likewise fail. Furthermore, we present the first results in the non-linear regime: we demonstrate the existence of a classifier with exponentially small suboptimality which nevertheless relies heavily on non-invariant features on most test inputs, resulting in worse-than-chance performance on distributions that are sufficiently dissimilar from the training environments. These findings strongly suggest that existing approaches to ICP for high-dimensional latent variable models do not cleanly achieve their stated objective, and that future work would benefit from a more formal treatment.

2 RELATED WORK.

Works on learning deep invariant representations vary considerably: some search for a domain-invariant representation (Muandet et al., 2013; Ganin et al., 2016), i.e. invariance of the distribution p(Φ(x)), typically used for domain adaptation (Ben-David et al., 2010; Ganin & Lempitsky, 2015; Zhang et al., 2015; Long et al., 2018), with assumed access to labeled or unlabeled data from the target distribution. Other works instead hope to find representations that are conditionally domain-invariant, with invariance of p(Φ(x) | y) (Gong et al., 2016; Li et al., 2018). However, there is evidence that invariance may not be sufficient for domain adaptation (Zhao et al., 2019; Johansson et al., 2019). In contrast, this paper focuses on domain generalization (Blanchard et al., 2011; Rosenfeld et al., 2021), where access to the test distribution is not assumed.
Recent works on domain generalization, including the objectives discussed in this paper, suggest invariance of the feature-conditioned label distribution. In particular, Arjovsky et al. (2019) only assume invariance of E[y | Φ(x)]; follow-up works rely on a stronger assumption of invariance of higher conditional moments (Krueger et al., 2020; Xie et al., 2020; Jin et al., 2020; Mahajan et al., 2020; Bellot & van der Schaar, 2020). Though this approach has become popular in the last year, it is somewhat similar to the existing concept of covariate shift (Shimodaira, 2000; Bickel et al., 2009), which considers the same setting. The main difference is that these more recent works assume that the shifts in p(Φ(x)) occur between discrete, labeled environments, as opposed to more generally from train to test distributions. Some concurrent lines of work study different settings yet give results which are remarkably similar to ours. Xu et al. (2021) show that an infinitely wide two-layer network extrapolates linear functions when the training data is sufficiently diverse. In the context of domain generalization specifically, Rosenfeld et al. (2021) prove that ERM remains optimal for both interpolation and extrapolation in the linear setting and that the latter is exponentially harder than the former. These results mirror our findings that none of the studied objectives outperform ERM.

3 MODEL AND INFORMAL RESULTS.

We consider an SEM with explicit separation of invariant features zc, whose joint distribution with the label is fixed for all environments, and environmental features ze (“non-invariant”), whose distribution can vary.
This choice ensures that our model properly formalizes the intuition behind invariant prediction techniques such as IRM, whose objective is to ensure generalizing predictors by recovering only the invariant features—we put off a detailed description of these objectives until after we have introduced the necessary terminology. We assume that data are drawn from a set of E training environments E = {e1, e2, ..., eE} and that we know from which environment each sample is drawn. For a given environment e, the data are defined by the following process: first, a label y ∈ {±1} is drawn according to a fixed probability:

$$y = \begin{cases} 1 & \text{w.p. } \eta, \\ -1 & \text{otherwise.} \end{cases} \qquad (1)$$

Next, both invariant features and environmental features are drawn according to a Gaussian:¹

$$z_c \sim \mathcal{N}(y \cdot \mu_c, \sigma_c^2 I), \qquad z_e \sim \mathcal{N}(y \cdot \mu_e, \sigma_e^2 I), \qquad (2)$$

with µc ∈ R^{dc}, µe ∈ R^{de}—typically, for complex, high-dimensional data we would expect E < dc ≤ de. Finally, the observation x is generated as a function of the latent features:

$$x = f(z_c, z_e). \qquad (3)$$

The complete data generating process is displayed in Figure 3.1. We assume f is injective, so that it is in principle possible to recover the latent features from the observations, i.e. there exists a function Φ such that Φ(f(zc, ze)) = [zc, ze]ᵀ. We remark that this is our only assumption on f, even when it is non-linear. Further, note that we model class-conditional means as direct opposites merely for clarity, as it greatly simplifies the calculations. None of our proofs require this condition: it is straightforward to extend our results to arbitrary means, and the non-linear setting also allows for arbitrary covariances. In fact, our proof technique for non-linear f could be applied to any distribution that sufficiently concentrates about its mean (e.g., sub-Gaussian). We write the joint and marginal distributions as pe(x, y, zc, ze). When clear from context, we omit the specific arguments.
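The generative process in Equations (1)-(3) is simple to simulate. The following is a minimal NumPy sketch, not from the paper: the particular means, variances, and the choice of f (a plain concatenation, which is injective) are illustrative assumptions.

```python
import numpy as np

def sample_environment(n, mu_c, mu_e, sigma_c, sigma_e, eta, f, rng):
    """Draw n samples from one environment of the model in Eqs. (1)-(3)."""
    y = rng.choice([1, -1], size=n, p=[eta, 1 - eta])                        # Eq. (1)
    z_c = y[:, None] * mu_c + sigma_c * rng.standard_normal((n, len(mu_c)))  # Eq. (2)
    z_e = y[:, None] * mu_e + sigma_e * rng.standard_normal((n, len(mu_e)))
    x = f(z_c, z_e)                                                          # Eq. (3)
    return x, y, z_c, z_e

rng = np.random.default_rng(0)
mu_c = np.array([1.0, -0.5])   # invariant mean: shared by every environment
mu_e = np.array([2.0])         # environmental mean: varies per environment
# an injective (here: trivial) mixing function f
f = lambda z_c, z_e: np.concatenate([z_c, z_e], axis=1)
x, y, z_c, z_e = sample_environment(1000, mu_c, mu_e, 1.0, 1.0, 0.5, f, rng)
```

A new environment is obtained simply by swapping in a different mu_e and sigma_e while keeping mu_c, sigma_c, eta, and f fixed, which is exactly the invariance the model encodes.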
Remarks on the model. This model is natural and flexible; it generalizes several existing models used to analyze learning under adversarial distribution shift or non-invariant correlations (Schmidt et al., 2018; Sagawa et al., 2020). The fundamental facet of this model is the constancy of the invariant parameters η, µc, σc², f across environments—the dependence of µe, σe on the environment allows for varying distributions, while the true causal process remains unchanged. Here we make a few clarifying remarks:

• We do not impose any constraints on the model parameters. In particular, we do not assume a prior over the environmental parameters. Observe that µc, σc² are the same for all environments, hence the subscript indicates the invariant relationship. In contrast, with some abuse of notation, the environmental subscript is used to indicate both dependence on the environment and the index of the environment itself (e.g., µi represents the mean specific to environment i).

• While we have framed the model as y causing zc, the causation can just as easily be viewed in the other direction. The log-odds of y are a linear function of zc—this matches logistic regression with an invariant regression vector βc = 2µc/σc² and bias β0 = log(η/(1−η)). We present the model as above to emphasize that the causal relationships between y and the zc, ze are a priori indistinguishable, and because we believe this direction is more intuitive.

¹ Note the deliberate choice to have ze depend on y. Much work on this problem models spurious features which correlate with the label but are not causal. However, the term “spurious” is often applied incongruously; in recent work, the term has been co-opted to refer to any feature that correlates with the label but does not cause it. Thus there is a subtle distinction: if we allow for anti-causality, i.e. the label causing the features, the resulting correlation is not spurious. We therefore avoid using the term “spurious” in this work.

We consider the setting where we are given infinite samples from each environment; this allows us to isolate the behavior of the objectives themselves, rather than finite-sample effects. Upon observing samples from this model, our objective is thus to learn a feature embedder Φ and classifier² β̂ to minimize the risk on an unseen environment e:

$$R^e(\Phi, \hat\beta) := \mathbb{E}_{(x,y) \sim p^e}\left[\ell\big(\sigma(\hat\beta^\top \Phi(x)), y\big)\right].$$

The function ℓ can be any loss appropriate to classification: in this work we consider the logistic and the 0-1 loss. Note that we are not hoping to minimize risk in expectation over the environments; this is already accomplished via ERM or distributionally robust optimization (DRO; Bagnell 2005; Ben-Tal et al. 2009). Rather, we hope to extract and regress on invariant features while ignoring environmental features, such that our predictor generalizes to all unseen environments regardless of their parameters. In other words, the focus is on minimizing worst-case risk. We refer to the predictor which minimizes worst-case risk under arbitrary distribution shift as the optimal invariant predictor. To discuss this formally, we define precisely what we mean by this term.

Definition 1. Under the model described by Equations 1-3, the optimal invariant predictor is the predictor defined by the composition of a) the featurizer which recovers the invariant features and b) the classifier which is optimal with respect to those features:

$$\Phi^*(x) := \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} \circ f^{-1}(x) = \begin{bmatrix} z_c \\ 0 \end{bmatrix}, \qquad \hat\beta^* := \begin{bmatrix} \beta_c \\ \beta_0 \end{bmatrix} := \begin{bmatrix} 2\mu_c/\sigma_c^2 \\ \log\frac{\eta}{1-\eta} \end{bmatrix}.$$

Observe that this definition closely resembles Definition 3 of Arjovsky et al. (2019); the only difference is that here the optimal invariant predictor must recover all invariant features. As Arjovsky et al.
(2019) do not posit a data model, the concept of recovering “all invariant features” is not well-defined for their setting; technically, a featurizer which outputs the empty set would elicit an invariant predictor, but this would not satisfy the above definition. The classifier β̂* is optimal with respect to the invariant features, and so it achieves the minimum possible risk without using environmental features. Observe that the optimal invariant predictor is distinct from the Bayes classifier: the Bayes classifier uses environmental features which are informative of the label but non-invariant; the optimal invariant predictor explicitly ignores these features. With the model defined, we can informally present our results; we defer the formal statements until after giving a background on the IRM objective in the next section. With a slight abuse of notation, we identify a predictor by the tuple Φ, β̂ which parametrizes it. First, we show that the usefulness of IRM exhibits a “thresholding” behavior depending on E and de:

Theorem 3.1 (Informal, Linear). For linear f, consider solving the IRM objective to learn a linear Φ with invariant optimal classifier β̂. If E > de, then Φ, β̂ is precisely the optimal invariant predictor; it uses only invariant features and generalizes to all environments with minimax-optimal risk. If E ≤ de, then Φ, β̂ relies upon non-invariant features.

In fact, when E ≤ de it is even possible to learn a classifier relying solely on environmental features that achieves lower risk on the training environments than the optimal invariant predictor:

Theorem 3.2 (Informal, Linear). For linear f and E ≤ de, there exists a linear predictor Φ, β̂ which uses only environmental features, yet achieves lower risk than the optimal invariant predictor.

² Following the terminology of Arjovsky et al. (2019), we refer to the regression vector β̂ as a “classifier” and the composition of Φ, β̂ as a “predictor”.
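The phenomenon in Theorem 3.2 is easy to reproduce numerically in a single environment: when the environmental signal-to-noise ratio exceeds the invariant one, a predictor using only environmental features attains lower logistic risk on the training data than the optimal invariant predictor. The sketch below is illustrative only; all parameter values are assumptions chosen to make the gap visible, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
mu_c, sigma_c = np.array([0.5]), 2.0   # weak invariant signal
mu_e, sigma_e = np.array([3.0]), 0.5   # strong environmental signal in this environment

y = rng.choice([1, -1], size=n)
z_c = y[:, None] * mu_c + sigma_c * rng.standard_normal((n, 1))
z_e = y[:, None] * mu_e + sigma_e * rng.standard_normal((n, 1))

def logistic_risk(scores, y):
    return np.mean(np.log1p(np.exp(-y * scores)))

# optimal invariant predictor: uses only z_c, with beta_c = 2*mu_c/sigma_c^2 (bias 0 since eta = 1/2)
risk_inv = logistic_risk(z_c @ (2 * mu_c / sigma_c**2), y)
# environment-only predictor: uses only z_e, with the analogous coefficients
risk_env = logistic_risk(z_e @ (2 * mu_e / sigma_e**2), y)
assert risk_env < risk_inv   # lower training risk while using no invariant features
```

Of course, the environment-only predictor wins here precisely because it exploits a correlation that another environment is free to reverse, which is why low training risk is no certificate of invariance.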
Finally, in the non-linear case, we show that IRM fails unless the training environments approximately “cover” the space of possible environments, and it therefore behaves similarly to ERM:

Theorem 3.3 (Informal, Non-linear). For arbitrary f, there exists a non-linear predictor Φ, β̂ which is nearly optimal under the penalized objective and furthermore is nearly identical to the optimal invariant predictor on the training distribution. However, for any test environment with a mean sufficiently different from the training means, this predictor will be equivalent to the ERM solution on nearly all test points. For test distributions where the environmental feature correlations with the label are reversed, this predictor has almost 0 accuracy.

Extensions to other objectives. Many follow-up works have suggested alternatives to IRM—some are described in the next section. Though these objectives perform better on various baselines, there are few formal guarantees and no results beyond the linear case. Due to their collective similarities, we can easily derive corollaries which extend every theorem in this paper to these objectives, demonstrating that they all suffer from the same shortcomings. Appendix E contains example corollaries for each of the results presented in this work.
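The failure mode behind Theorem 3.3, worse-than-chance accuracy once the environmental correlations reverse, can be illustrated with a toy one-dimensional instance of the model. Everything below is an illustrative assumption: the parameter values, and the "ERM-like" classifier, which we take to be the Bayes-optimal linear rule on the training environment.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
mu_c = 1.0
mu_e_train, mu_e_test = 3.0, -3.0   # the test environment reverses the env. correlation

def sample(mu_e):
    y = rng.choice([1, -1], size=n)
    z_c = y * mu_c + rng.standard_normal(n)
    z_e = y * mu_e + rng.standard_normal(n)
    return y, z_c, z_e

# Bayes-optimal linear rule on the training environment ("ERM-like"): uses both features
beta_c, beta_e = 2 * mu_c, 2 * mu_e_train   # 2*mu/sigma^2 with sigma = 1, eta = 1/2

y, z_c, z_e = sample(mu_e_test)             # evaluate on the reversed test environment
acc_erm = np.mean(np.sign(beta_c * z_c + beta_e * z_e) == y)
acc_inv = np.mean(np.sign(beta_c * z_c) == y)   # optimal invariant predictor
assert acc_erm < 0.5 < acc_inv              # worse than chance vs. unaffected
```

Because the ERM-like rule leans on z_e, flipping the sign of mu_e flips most of its predictions, while the invariant predictor's accuracy is identical in every environment by construction.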