Pruning Compact ConvNets For Efficient Inference
1 INTRODUCTION. Neural networks frequently suffer from over-parameterization: the model can be compressed by a large factor to drastically reduce memory footprint, computation, and energy consumption while maintaining similar performance. This is especially pronounced for models in computer vision (Simonyan & Zisserman, 2014), speech recognition (Pratap et al., 2020), and large text-understanding models such as BERT (Devlin et al., 2018). The improvements obtained from intelligently reducing the number of model parameters have several benefits, such as reduced datacenter power consumption, faster inference, and a smaller memory footprint on edge devices such as mobile phones, which also enables decentralized techniques, e.g., federated learning (Kairouz et al., 2019). There are several techniques to reduce model size while maintaining similar generalization performance, such as model quantization (Polino et al., 2018), NAS (Neural Architecture Search) (Elsken et al., 2019), and model distillation through teacher-student networks (Gou et al., 2021). For the scope of this paper, we consider pruning as a technique to remove trainable weights in the network and save on computation costs for the FBNet family of models. The motivations for this are two-fold. Firstly, state-of-the-art models such as FBNet (Wu et al., 2019) already adopt best practices in the area of efficient hardware-aware design of convolutional neural networks and are widely used across different vision tasks. This makes them suitable baselines for understanding whether pruning can offer any performance gain over their already-optimized behavior. While there has been limited work on pruning for efficient convolutional network models, it investigates older architectures such as EfficientNet and MobileNet (Aflalo et al., 2020) or integrates pruning into expensive techniques such as joint prune-and-architecture search (Wang et al., 2020). For each of the constituent models of the FBNetV3 family (FBNetV3A, FBNetV3B, ..., FBNetV3G), we reduce the number of parameters using two pruning-based approaches: (1) Global magnitude-based pruning: starting with the pre-trained model, we prune all weights whose magnitude is below a threshold chosen to achieve a target number of FLOPs for the pruned model; (2) Uniform magnitude-based pruning: starting with the pre-trained model, we prune weights in each layer whose magnitude is below a layer-specific threshold, yielding a pruned model that achieves a target number of FLOPs with the same sparsity in each layer. After either pruning method is applied, we fine-tune the pruned model for a number of epochs until convergence is reached. Within the scope of our study, we are mostly interested in the following research questions: • RQ1: Pruning to improve the computation vs. performance tradeoff. Can a model obtained by pruning a larger FBNetV3 model M1 (optimized using NAS) achieve higher generalization performance than a smaller FBNetV3 model M2 when the pruned model has the same number of FLOPs as M2? • RQ2: Pruning as an efficient paradigm. When a larger FBNetV3 model M1 is available and computational resources are limited, is pruning a faster and less computationally expensive approach to obtaining a model with higher accuracy at a desired computation level (FLOPs) than running a full-fledged architecture search? Pruning to improve the computation vs. performance tradeoff (RQ1). There have been recent research advances in the area of building hardware-aware efficient models (Deng et al., 2020).
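The two magnitude-based schemes just described can be sketched in a few lines of NumPy. This is an illustrative sketch only: the paper chooses thresholds to hit a target FLOPs count rather than a raw sparsity level, and fine-tunes afterwards; the function names are ours.

```python
import numpy as np

def global_magnitude_prune(weights, sparsity):
    """Zero the smallest-magnitude fraction `sparsity` (0 <= sparsity < 1)
    of all weights, using one threshold shared across layers."""
    all_mags = np.sort(np.concatenate([np.abs(w).ravel() for w in weights]))
    threshold = all_mags[int(sparsity * all_mags.size)]
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

def uniform_magnitude_prune(weights, sparsity):
    """Zero the smallest-magnitude fraction `sparsity` within each layer
    separately, so every layer ends up equally sparse."""
    pruned = []
    for w in weights:
        mags = np.sort(np.abs(w).ravel())
        threshold = mags[int(sparsity * w.size)]
        pruned.append(np.where(np.abs(w) < threshold, 0.0, w))
    return pruned
```

Note the difference: the global variant may concentrate pruning in a few tolerant layers, while the uniform variant enforces the same sparsity everywhere, which can matter for layers that are already small.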
These can provide good generalization performance while adhering to constraints on memory, inference latency, and battery power, which are often dictated by the hardware environment where inference happens. Experiments described in existing work on efficient vision models such as ChamNet (Dai et al., 2019), MobileNet (Howard et al., 2017), EfficientNet (Tan & Le, 2019), and FBNetV2 (Wan et al., 2020) have shown that it is possible to achieve even higher performance on standard image-recognition tasks such as ImageNet (Deng et al., 2009) at a certain level of FLOPs. However, the efficient design of these models does not solve the over-parameterization problem completely, and none of these approaches study how model pruning can be performed to obtain even better trade-offs between computation and model accuracy. This paper is the first of its kind to investigate how we can improve on the state of the art in this problem space. Pruning as an efficient paradigm (RQ2). In addition to achieving state-of-the-art performance with reduced FLOPs, we are also interested in understanding how such pruned models can be obtained inexpensively with the limited resources generally available to a machine learning practitioner who has access to existing optimized models but limited computing resources. For example, the FBNetV3 models are freely available through Facebook's Mobile Model Zoo¹, while EfficientNet models can be obtained from GitHub². While the techniques needed to obtain computation- and latency-friendly models have been democratized through open-sourcing of the source code as well as the models themselves, fully applying these techniques necessitates costly operations such as finding an optimal network topology through meta-learning approaches (You et al., 2020) and search algorithms such as Genetic Algorithms (GAs) (Goldberg & Deb, 1991).
Given the high degree of intractability of this problem, expensive computational resources are often needed, easily exceeding the budget available to a university research laboratory or an angel-stage startup (Zoph & Le, 2016). When a starting model is already available, for example through open-sourcing, the best option is to perform a cheap modification of the model to fit a certain target FLOPs/latency requirement. In this paper we compare the NAS approach used to train FBNetV3 models with our pruning techniques on a computational complexity metric (GPU-hours) to effectively answer RQ2. Benchmark results. In addition to experimental outcomes answering RQ1 and RQ2, we also benchmark pruned FBNetV3 models using available open-sourced quantized sparse kernels and conduct ablation studies to obtain additional insights into pruning performance. These results augment our main observations and demonstrate that, with existing hardware support, it is possible to deploy pruned cutting-edge computer vision models with practical latency reductions and improve further on the performance vs. FLOPs trade-off. We conduct our experiments on ImageNet, an object-recognition task with a large training dataset of 1.2 million images. We show that computationally less intensive techniques such as uniform and global magnitude-based pruning of larger FBNetV3 models can yield higher test accuracies than smaller models with the same number of FLOPs. Given a target computation budget for an efficient model, we show that it is more practically advantageous (both in terms of performance and running time) to simply prune the larger model than to run a neural architecture search to find the target model from scratch.
¹FBNetV3 models available at https://github.com/facebookresearch/mobile_cv/model_zoo/models/model_info/fbnet_v2/model_info_fbnet_v3.json ²EfficientNet models available at https://github.com/mingxingtan/efficientnet The technique we employ for pruning (unstructured sparsity) is already tried and tested; our novelty lies in studying whether efficient image-recognition models such as FBNetV3 can be optimized further to improve on the FLOPs-accuracy curve. The contributions are two-fold: (1) FBNets are themselves state-of-the-art efficient vision models, and we achieve a better accuracy-FLOPs tradeoff over these models; and (2) from the standpoint of computational overhead, we significantly reduce the number of GPU-hours required to obtain such models. Pruning a publicly available NAS-optimized model incurs roughly 4x fewer GPU-hours to achieve a target FLOPs level than training a full-fledged NAS to obtain a model with lower accuracy at the same FLOPs level. Paper organization. The remainder of this paper is organized as follows. In Section 2, we describe related work in the area of efficient vision-model design and also provide an introduction to different pruning techniques. In Section 3, we discuss our experimental setup, including a description of the baseline models and the global and uniform pruning approaches we employ. Section 4 describes our main findings, and we conclude the paper in Section 5. 2 RELATED WORK. We discuss related literature in the areas of computationally efficient vision models and model pruning. Within the scope of our work, we mainly focus on the inference efficiency of models rather than training efficiency. Computationally efficient vision models: Neural networks for computer vision are generally characterized by convolutional layers and fully-connected layers, along with blocks such as residual or skip connections.
This makes such networks resource-intensive in terms of FLOPs, which affects memory storage and power consumption, and also leads to increased latency. It is of paramount importance to design more efficient networks that provide higher performance at the same FLOPs or latency level, or to optimize them appropriately to provide the same performance at reduced FLOPs/latency. This can be done either through the design of new simplified layers, for example in deep residual learning (He et al., 2016), or through explicit model compression, as in weight quantization (Polino et al., 2018). Extremely deep networks for image recognition often suffer not only from high complexity and inference latency, but also from the issue of vanishing gradients (Pascanu et al., 2013). This was addressed through deep residual networks, which effectively simplified network design through skip-connections. MobileNets (Howard et al., 2017) are one of the earlier approaches to building small low-latency networks, using depthwise separable convolutions with two parameters, the width and resolution multipliers. The authors demonstrate the effectiveness of MobileNets across different vision tasks, such as face embeddings and object detection. MobileNetV2 (Sandler et al., 2018) extends MobileNets by utilizing inverted residual filter structures and linear bottlenecks, obtaining improvements over state-of-the-art models both in accuracy and in computational complexity. ShuffleNets (Zhang et al., 2018) propose dedicated residual units where 1×1 convolutions are replaced with pointwise group convolutions and channel shuffling, reducing FLOPs. More recently, the focus in building efficient neural network models has shifted to techniques that treat the design of efficient networks as a search problem, falling under the umbrella of Neural Architecture Search (NAS).
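The FLOPs advantage of depthwise separable convolutions can be made concrete by counting multiplies (stride 1 and "same" padding assumed; this mirrors the standard MobileNet cost analysis rather than code from any of the papers cited):

```python
def conv_mults(h, w, c_in, c_out, k):
    # Standard convolution: every output position mixes all input
    # channels over a k x k window, for each of c_out filters.
    return h * w * c_in * c_out * k * k

def depthwise_separable_mults(h, w, c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise
```

For a typical inner layer (a 14×14 feature map, 256 input and output channels, 3×3 kernels), the factorized form costs a fraction 1/c_out + 1/k² of the standard convolution, roughly 11-12% of the multiplies.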
EfficientNets (Tan & Le, 2019) propose a novel scaling method which adjusts the network's depth, width, and resolution to optimize performance subject to target memory and FLOPs constraints. They also define a novel baseline that is optimized by a multi-objective neural architecture search. The FBNet collection of models (FBNet (Wu et al., 2019), FBNetV2 (Wan et al., 2020), and FBNetV3 (Dai et al., 2021)) employs neural architecture search to obtain highly-optimized models that improve on the state of the art for different visual understanding tasks. FBNet frames the architecture search as a differentiable meta-learning problem with gradient-based techniques, namely DNAS (Differentiable Neural Architecture Search; Wu et al., 2019), and avoids selecting the optimized model over a discrete set. The subsequent entry in this collection, FBNetV2, expands the search space over conventional DNAS and employs a masking scheme to maintain the same level of computational complexity while searching over this expanded space. FBNetV3 further improves on the state of the art by employing Neural Architecture Recipe Search (NARS), searching over the space of not only architectures but also the corresponding recipes (which are generally hyper-parameters). In this paper, we consider FBNetV3 models as our baselines, as they are state-of-the-art. We are interested in understanding whether they are over-parameterized, and in evaluating how much model pruning can improve performance at a certain FLOPs level over the state of the art in this family of models. Model pruning: Modern neural networks, particularly those processing complex sensory inputs (such as speech, vision, and language) for perception applications, are often over-parameterized.
It is only to be expected that such networks can be compressed significantly while maintaining the same level of performance at a decreased level of computation (fewer weights and reduced FLOPs), memory footprint, and power consumption. Foundational efforts in this space include the Optimal Brain Surgeon (Hassibi & Stork, 1993) and Optimal Brain Damage (LeCun et al., 1990). Recently, the idea of network pruning has been formalized through the lottery ticket hypothesis (Frankle & Carbin, 2018), which claims that randomly initialized feed-forward networks contain winning sub-networks that perform just as well as the original network on an unseen test dataset. Model pruning is generally of two types: unstructured and structured. Unstructured pruning, as the name suggests, does not adhere to any structure and prunes weights based on a chosen criterion (such as magnitude). This has the advantage of providing higher performance, but it is difficult to exploit in hardware, as it needs dedicated support for efficient sparse matrix multiplications. Structured pruning, meanwhile, removes entire groups of neurons (e.g., blocks within the weight matrix, or channels in convolutional neural networks). This is easy to implement without dedicated hardware support, but yields lower generalization performance than unstructured pruning (Yao et al., 2019). The literature also contains several related studies, for example investigating whether rewinding (training from scratch with a fixed mask) can perform just as well as fine-tuning on top of the original unpruned network (Renda et al., 2020). Blalock et al. (2020) provide a survey of recent advances and open problems in neural network pruning. In the research area of designing efficient networks for computer vision, there has not been much focus on understanding how pruning can be applied to the current generation of models.
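To make the structured/unstructured contrast concrete, here is a minimal sketch of structured channel pruning, which removes whole output channels by L2 norm. This is illustrative only: the paper itself uses unstructured magnitude pruning, and the function name and norm criterion are our choices.

```python
import numpy as np

def prune_channels(conv_w, keep_ratio):
    """Structured pruning: drop the output channels of a conv weight
    tensor (out_channels, in_channels, kH, kW) with the smallest L2
    norm. Returns the smaller dense tensor and the kept indices."""
    norms = np.linalg.norm(conv_w.reshape(conv_w.shape[0], -1), axis=1)
    n_keep = max(1, int(round(keep_ratio * conv_w.shape[0])))
    keep = np.sort(np.argsort(norms)[-n_keep:])  # keep largest-norm channels
    return conv_w[keep], keep
```

Because the result is a smaller dense tensor rather than a sparse one, it runs faster on ordinary hardware, which is exactly the trade-off described above: easy deployment, but typically lower accuracy than unstructured sparsity at the same compression.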
Most literature on pruning is based on older networks such as VGGNet, ResNet (He et al., 2016), and MobileNet (Sandler et al., 2018). Our work improves upon these existing studies by examining how pruning can improve the FLOPs-accuracy tradeoff over existing state-of-the-art networks.
The paper presents an experimental evaluation of simple pruning techniques applied to modern architectures that are designed to be inherently resource-efficient. It is shown that pruning large models in the FBNetV3 family achieves better accuracy-FLOPs trade-offs than smaller models in the FBNetV3 family. It is also shown that pruning large models is faster than performing NAS to find the smaller models directly.
Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set
1 INTRODUCTION. Generalization is what makes a machine learning model useful; the uncertainty of its behaviour with unseen data is what makes it potentially dangerous. Thus, understanding the generalization error of a model can be considered one of the holy grails of the entire machine learning field. Machine learning practitioners typically monitor some metrics of the model to estimate its generalization error and stop training even before numerical convergence to prevent overfitting. Usually, the error measure or the metric relevant to the task is computed on a holdout set, the validation set. Since these data have not been directly used for updating the parameters, it is assumed that the performance of the model on the validation set can be used as a proxy of the generalization error, provided it is representative of the data that will be used at inference time. One can, though, potentially overfit to this holdout set if it is repeatedly used to guide a hyperparameter search. Instead of relying on an external set, the question of whether it could be possible to estimate the generalization error from some intrinsic property of the model is highly relevant, and it has barely been explored in the literature. Meanwhile, Algebraic Topology has recently been gaining momentum as a mathematical tool for studying graphs, machine learning algorithms, and data. In this work, our goal is, once neural networks have been characterized as weighted, acyclic graphs and represented as Algebraic Topology objects (following previous works), to compute distances between consecutive neural network states. More specifically, we calculate the Persistent Homology (PH) diagram distances between a given state (i.e., the specific weights at a point in the training process) and the next one (i.e., after the weights have been updated in a training step), as depicted in Figure 1.
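The per-step distance just described can be illustrated with the standard weighted-silhouette vectorization of a persistence diagram followed by an L2 comparison. This is a simplified NumPy sketch with an evaluation grid of our choosing, not the paper's implementation, which computes diagrams from the network's weighted graph and also supports the Heat vectorization:

```python
import numpy as np

def silhouette(diagram, ts, p=1.0):
    """Power-weighted silhouette of a persistence diagram, evaluated on
    the grid `ts`. `diagram` is an array of (birth, death) pairs; each
    point contributes a triangle function weighted by its persistence."""
    b, d = diagram[:, 0:1], diagram[:, 1:2]            # column vectors
    tri = np.maximum(0.0, np.minimum(ts - b, d - ts))  # triangle functions
    w = (d - b) ** p                                   # persistence weights
    return (w * tri).sum(axis=0) / w.sum()

def silhouette_distance(diag_a, diag_b, ts, p=1.0):
    """L2 distance between the silhouettes of two diagrams on grid `ts`,
    used as the step-to-step (pseudo)distance between network states."""
    return float(np.linalg.norm(silhouette(diag_a, ts, p)
                                - silhouette(diag_b, ts, p)))
```

During training one would compute a diagram per weight update and track `silhouette_distance` between consecutive diagrams; the claim of the paper is that the evolution of this sequence tracks validation accuracy.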
We observe that during the training procedure of neural networks we can measure this distance at each learning step, and we show that it correlates highly with the corresponding validation accuracy of the model. We do so on a diverse set of deep learning benchmarks and model hyperparameters. This sheds light on the question of whether the generalization error could be estimated from intrinsic properties of the model, and opens the path towards a better theoretical understanding of the training dynamics of neural networks. In summary, our contributions are as follows: • Based on principles of Algebraic Topology, we propose measuring the distances (Silhouette and Heat) between the PH persistence diagrams obtained from a given state of a neural network during the training procedure and the one from the immediately previous weight update. • We empirically show that the evolution of these measures during training correlates with the accuracy on the validation set. We do so on diverse benchmarks (MNIST, CIFAR10, CIFAR100, Reuters text classification) and models (MLPs on MNIST and Reuters, MLPs and CNNs on CIFAR10 and CIFAR100). • We thus provide empirical evidence that valuable information related to the learning process of neural networks can be obtained from PH distances between persistence diagrams (we call this process homological convergence). In particular, we show that homological convergence is related to the learning process and the generalization properties of neural networks. • In practice, we provide a new tool for monitoring the training of neural networks, and open the path to estimating their generalization error without a validation set. The remainder of this article is as follows. In Section 2 we describe the theoretical background of our proposal in terms of Algebraic Topology, while in Section 3 we go through the related work. Then, in Section 4 we formalize our method.
Finally, in Sections 6 and 7 we present and discuss our empirical results, respectively. 2 BACKGROUND. In this section we introduce the mathematical foundations of this paper. A detailed mathematical description is included in the Supplementary Material. A simplicial complex K is a set composed of points, line segments, triangles, and their n-dimensional counterparts, called simplices. In particular, a simplicial complex must satisfy two properties: 1. Every face of a simplex in the complex is also in the simplicial complex (and of lower dimension). 2. The non-empty intersection of any two simplices in a simplicial complex is a face of both. Examples of 0-, 1-, 2-, and 3-simplices, and of non-simplices, are shown in Figure 2. We can associate to an undirected graph G = (V, E) a simplicial complex in which the vertices of G are the 0-simplices and each complete subgraph of G on i vertices corresponds to an (i−1)-simplex. This construction is usually called the clique complex of the graph G, denoted Cl(G). Figure 3 shows an example of a graph clique complex Cl(G). The boundary function is defined as a map from an i-simplex to an (i−1)-chain: the sum of its (i−1)-dimensional faces. An example boundary computation is shown in Figure 4. In algebraic topology, a k-chain is a combination of k-simplices (sometimes symbolized as a linear combination of the simplices that compose the chain). The boundary of a k-chain is a (k−1)-chain: the linear, signed combination of the boundaries of the simplices in the chain. The space of i-chains is denoted Ci(K). There are two special kinds of chains that are useful for defining the homology group: • Closed chains, or i-cycles: i-chains with empty boundary. An i-chain c is an i-cycle if and only if ∂ic = 0, i.e., c ∈ ker(∂i). This subspace of Ci(K) is denoted Zi(K).
• Exact chains, or i-boundaries: an i-chain c is an i-boundary if there exists an (i+1)-chain d such that c = ∂i+1(d), i.e., c ∈ im(∂i+1). This subspace of Ci(K), the set of all such i-boundaries, is denoted Bi(K). Now, if we consider i-cycles that do not bound an (i+1)-chain, we arrive at the definition of the i-th homology group of the simplicial complex K. Precisely, it is the quotient group of Zi(K) modulo Bi(K) (i.e., Hi(K) = Zi(K)/Bi(K); see the Supplementary Material). The number of non-equivalent i-cycles (Figure 5) is the dimension of the homology group Hi(K), also called the i-th Betti number. Two persistence diagrams (PDs) can be compared using specific distances (Wasserstein and Bottleneck). To perform this operation efficiently, given the size of these diagrams, it is sometimes necessary to simplify them by means of a discretization process (such as the Weighted Silhouette and Heat vectorizations). 3 RELATED WORK. Algebraic Topology and Machine Learning. The use of Algebraic Topology in the fields of data science and machine learning has been gaining momentum in recent years (see Carlsson (2009)). Specifically in the case of neural networks, some works have applied topology to improving the training procedure of the models (Hofer et al., 2020; Clough et al., 2020) or to pruning the model afterwards (Watanabe & Yamana, 2020b). Other works have focused on analyzing the capacity of neural networks (Guss & Salakhutdinov, 2018a; Rieck et al., 2019b; Konuk & Smith, 2019) or the complexity of the input data (Konuk & Smith, 2019). Furthermore, recent works have provided topological analyses of the decision boundaries of classifiers based on PH and Betti numbers (Ramamurthy et al., 2019; Naitzat et al., 2020). Graph and topological representations of neural networks. Gebhart et al.
(2019) suggest a method for computing PH over the activation graph of neural networks, while Watanabe & Yamana (2020a) propose representing neural networks via simplicial complexes based on a Taylor decomposition, from which one can compute the PH. Chowdhury et al. (2019) show that directed homology can be used to represent MLPs. Anonymous (2021) concurrently show that neural networks, when represented as directed, acyclic graphs, can be associated to an Algebraic Topology object, specifically a directed flag complex. By computing the PH diagram, one can effectively characterize neural networks and even compute distances between two given neural networks, which can be used to measure their similarity. This is unlike other works (Corneanu et al., 2019; Guss & Salakhutdinov, 2018b) that approximate neural network representations with respect to the input space. Relevant to our work, Rieck et al. (2019b) propose a complexity measure of neural networks based on persistent homology; however, we will see in Section 4 that their representation does not fulfill our requirements. Estimating generalization and studying the learning process. We are specifically interested in the use of PH for analyzing the learning process, especially with the goal of estimating generalization. In this regard, the literature is perhaps more limited. Jiang et al. (2019) work on understanding what drives generalization in deep networks from a Bayesian point of view. Neyshabur et al. (2017) study generalization-gap prediction from the training data and network parameters using a margin distribution, i.e., the distances of training points to the decision boundary. Li et al. (2020) propose an alternative to cross-validation for model selection based on training once on the whole training set, without any data split, deriving a validation set with data augmentation. Corneanu et al.
(2020) try to estimate the performance gap between training and testing using PH measures. However, one can observe some caveats in their approach. The first is that the regression fitted to predict the test error has a considerably high error, making it unusable in practice. The second is that fitting the regression requires at least part of the sequestered testing set. In this work, motivated by the interest in better understanding whether it is possible to estimate the generalization of neural networks without a holdout set, we suggest using the topological characterization and distances concurrently proposed in Anonymous (2021) but, crucially, measured between consecutive weight updates. We will show that the evolution of this distance is similar to that of the validation accuracy. Unlike Li et al. (2020), we do not use any data at all. Unlike Corneanu et al. (2020), we do not build a statistical or machine learning model (linear regression) for predicting the testing error. Instead, we propose a new measure and empirically show that it highly correlates with the validation accuracy. Note that in this work we do not work with any input data or activations, but with the parameters of the neural network themselves. The code and outputs are fully available in the Supplementary Material under an MIT License.
This paper analyses the training of neural networks from a topological perspective, presenting a pipeline that can measure (pseudo) distances between the network's weights during training. Such information is then employed to study the generalisation error of a neural network. In contrast to existing methods for estimating this error, this paper does *not* require a specific hold-out data set, as topological features of the neural network are monitored during training. This frees up additional data for fitting, which can be highly relevant in the sparse data regime.
Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set
1 INTRODUCTION . Generalization is what makes a machine learning model useful ; the uncertainty of its behaviour with unseen data is what makes it potentially dangerous . Thus , understanding the generalization error of a model can be considered one of the holy grails of the entire machine learning field . Machine learning practitioners typically monitor some metrics of the model to estimate its generalization error and stop the training even before the numerical convergence to prevent the overfitting of the model . Usually , the error measure or the metric relevant to the task is computed for a holdout set , the validation set . Since these data have not been directly used for updating the parameters , it is assumed that the performance of the model on the validation set can be used as a proxy of the generalization error , provided it is representative of the data that will be used in inference . One can , though , potentially overfit to this holdout set if is repeatedly used for guiding a hyperparameter search . Instead of relying on an external set , though , the question of whether it could be possible to estimate the generalization error with some intrinsic property of the model is highly relevant , and it has been barely explored in the literature . On the other hand , Algebraic Topology has recently been gaining momentum as a mathematical tool for studying graphs , machine learning algorithms , and data . In this work , we have the goal of , once having characterized neural networks as weighted , acyclic graphs , represented as Algebraic Topology objects ( following previous works ) , computing distances between consecutive neural network states . More specifically , we can calculate the Persistent Homology ( PH ) diagram distances between a give state ( i.e. , when having a specific weights during the training process ) and the next one ( i.e. , after having updated the weights in a training step ) , as depicted in Figure 1 . 
We observe that during the training procedure of neural networks we can measure this distance in each learning step , and show that there exists a high correlation with the corresponding validation accuracy of the model . We do so in a diverse set of deep learning benchmarks and model hyperparameters . This shines light on the question of whether the generalization error could be estimated from intrinsic properties of the model , and opens the path towards a better theoretical understanding of the dynamics of the training of neural networks . In summary , our contributions are as follows : • Based on principles of Algebraic Topology , we propose measuring the distances ( Silhouette and Heat ) between the PH persistence diagrams obtained from a given state of a neural network during the training procedure and the one in the immediately previous weights update . • We empirically show that the evolution of these measures during training correlate with the accuracy in the validation set . We do so in diverse benchmarks ( MNIST , CIFAR10 , CIFAR100 , Reuters text classification ) , and models ( MLPs in MNIST and Reuters , MLPs and CNNs in CIFAR100 and CIFAR100 ) . • We thus provide empirical proof of the fact that valuable information related to the learning process of neural networks can be obtained from PH distances between persistence diagrams ( we will call this process homological convergence ) . In particular , we show that homological convergence is related to learning process and the generalization properties of neural networks . • In practice , we provide a new tool for monitoring the training of neural networks , and open the path to estimating their generalization error without a validation set . The remainder of this article is as follows . In Section 2 we describe the theoretical background of our proposal in terms of Algebraic Topology , while in Section 3 we go through the related work . Then , in Section 4 we formalize our method . 
Finally , in sections 6 and 7 we present and discuss our empirical results , respectively . 2 BACKGROUND . In this section we introduce the mathematical foundations of this paper . A detailed mathematical description is included in the Supplementary Material . A simplicial complex is a set composed of points , line segments , triangles , and their n-dimensional counterparts , named simplex ( K ) . In particular , a simplicial complex must comply with two properties : 1 . Every face of a simplex is also in the simplicial complex ( of lower dimension ) . 2 . The non-empty intersection of any two simplices contained on a simplicial complex is a face of both . 0,1,2,3-simplex and non simplex examples are shown in Figure 2 . We can associate to an undirected graph , G = ( V , E ) , a simplicial complex where all the vertices of G are the 0-simplex of the simplicial complex and the complete subgraphs with i vertices , in G corresponds to a ( i−1 ) -simplex . This type of construction is usually called a complex clique on the graph G , and is denoted by Cl ( G ) . Figure 3 shows a graph clique complex Cl ( G ) example . The boundary function is defined as a map , from an i-simplex to an ( i− 1 ) -simplex , as the sum of its ( i− 1 ) -dimensional faces . A boundary function sample is shown in Figure 4 . In algebraic topology , a k-chain is a combination of k-simplices ( sometimes symbolized as a linear combination of simplices that compose the chain ) . The boundary of a k-chain is a ( k−1 ) chain . It is the linear and signed combination of chain element boundary simplices . The space of i-chains is denoted by Ci ( K ) . There are two special cases of chains that will be useful to define homology group : • Closed chain or i-cycle : i-chain with empty boundary . An i-chain c is an i-cycle if and only if ∂ic = 0 , i.e . c ∈ ker ( ∂i ) . This subspace of Ci ( K ) is denoted as Zi ( K ) . 
• Exact chain or i-boundary: an i-chain c is an i-boundary if there exists an (i+1)-chain d such that c = ∂i+1(d), i.e. c ∈ im(∂i+1). This subspace of Ci(K), the set of all such i-boundaries, is denoted Bi(K). The i-th homology group of the simplicial complex K captures the i-cycles that do not bound an (i+1)-chain. Precisely, it is the quotient group of Zi(K) modulo Bi(K), i.e. Hi(K) = Zi(K)/Bi(K), cycles modulo boundaries (see Supplementary Material). The number of non-equivalent i-cycles (Figure 5) is the dimension of the homology group Hi(K); these dimensions are known as the Betti numbers. It is possible to compare two persistence diagrams (PDs) using specific distances (Wasserstein and Bottleneck). To perform this operation efficiently, given the size of these diagrams, it is sometimes necessary to simplify them by means of a discretization process (such as Weighted Silhouette and Heat vectorizations). 3 RELATED WORK. Algebraic Topology and Machine Learning The use of Algebraic Topology in the fields of data science and machine learning has been gaining momentum in recent years (see Carlsson (2009)). Specifically in the case of neural networks, some works have applied topology to improve the training procedure of the models (Hofer et al., 2020; Clough et al., 2020) or to prune the model afterwards (Watanabe & Yamana, 2020b). Other works have focused on analyzing the capacity of neural networks (Guss & Salakhutdinov, 2018a; Rieck et al., 2019b; Konuk & Smith, 2019) or the complexity of input data (Konuk & Smith, 2019). Furthermore, recent works have provided topological analyses of the decision boundaries of classifiers based on PH and Betti numbers (Ramamurthy et al., 2019; Naitzat et al., 2020). Graph and topological representations of neural networks Gebhart et al.
(2019) suggest a method for computing the PH over the graph of activations of neural networks, while Watanabe & Yamana (2020a) propose representing neural networks via simplicial complexes based on Taylor decomposition, from which one can compute the PH. Chowdhury et al. (2019) show that directed homology can be used to represent MLPs. Anonymous (2021) concurrently show that neural networks, when represented as directed acyclic graphs, can be associated with an Algebraic Topology object, specifically a directed flag complex. By computing the PH diagram, one can effectively characterize neural networks, and even compute distances between two given neural networks, which can be used to measure their similarity. This is unlike other works (Corneanu et al., 2019; Guss & Salakhutdinov, 2018b) that approximate neural network representations with respect to the input space. Relevant to our work, Rieck et al. (2019b) propose a complexity measure of neural networks based on persistent homology. However, we will see in Section 4 that their representation does not fulfill our requirements. Estimating the generalization and studying the learning process We are, though, specifically interested in the use of PH for analyzing the learning process, especially with the goal of estimating generalization. In this regard, the literature is more limited. Jiang et al. (2019) work on understanding what drives generalization in deep networks from a Bayesian point of view. Neyshabur et al. (2017) study the prediction of the generalization gap from the training data and network parameters using a margin distribution, i.e. the distances of training points to the decision boundary. Li et al. (2020) propose an alternative to cross-validation for model selection based on training once on the whole training set, without any data split, deriving a validation set with data augmentation. Corneanu et al.
(2020) try to estimate the performance gap between training and testing using PH measures. However, one can observe some caveats. The first is that the regression fitted to predict the test error has a considerably high error, making it not usable in practice. The second is that fitting the regression requires at least part of the sequestered testing set. In this work, motivated by the interest in a better understanding of whether it is possible to estimate the generalization of neural networks without a holdout set, we suggest using the topological characterization and distances concurrently proposed in Anonymous (2021) but, crucially, measured between consecutive weight updates. We will show that the evolution of this distance is similar to that of the validation accuracy. Unlike Li et al. (2020), we do not use any data at all for our measure. Unlike Corneanu et al. (2020), we do not build a statistical or machine learning model (linear regression) for predicting the testing error. Instead, we propose a new measure, and we empirically show that it highly correlates with the validation accuracy. Note that in this work we do not work with any input data or activations, but with the parameters of the neural network themselves. The code and outputs are fully available in the Supplementary Material under an MIT License.
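The background machinery this paper builds on (clique complexes, boundary maps, Betti numbers from Section 2) can be made concrete on a small graph. Below is a minimal pure-Python sketch that builds Cl(G) and computes Betti numbers by ranking boundary matrices over GF(2); all helper names are our own illustrative choices, not the paper's code.

```python
from itertools import combinations

def clique_complex(vertices, edges, max_dim=2):
    """All simplices of the clique complex Cl(G), up to dimension max_dim."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    simplices = {0: [frozenset([v]) for v in vertices]}
    for d in range(1, max_dim + 1):
        simplices[d] = [frozenset(c) for c in combinations(vertices, d + 1)
                        if all(b in adj[a] for a, b in combinations(c, 2))]
    return simplices

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def betti_numbers(simplices, max_dim=1):
    """Betti_d = dim C_d - rank(boundary_d) - rank(boundary_{d+1}), over GF(2)."""
    def boundary_rank(d):
        if d < 1 or not simplices.get(d):
            return 0
        index = {s: j for j, s in enumerate(simplices[d - 1])}
        rows = []
        for s in simplices[d]:
            row = [0] * len(index)
            for face in combinations(sorted(s), d):  # the (d-1)-faces of s
                row[index[frozenset(face)]] = 1
            rows.append(row)
        return rank_gf2(rows)
    return [len(simplices[d]) - boundary_rank(d) - boundary_rank(d + 1)
            for d in range(max_dim + 1)]

# A 4-cycle has one connected component and one 1-dimensional hole.
square = clique_complex([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
betti = betti_numbers(square)  # [1, 1]
```

Adding the diagonal edge (0, 2) fills the square with two triangles, so the 1-cycle becomes a boundary and Betti_1 drops to 0, which is exactly the "cycles modulo boundaries" quotient described in the background.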
This paper presents some empirical observations about the relationship between a persistent homology-based measure of learning dynamics and validation set error during the training of deep neural nets. The paper opens with an introduction to persistent homology, then introduces an approach to study the structure of deep nets using topological data analysis (TDA) tools. Three case studies are then presented, in which a measure of change in the topological structure of the network during training is compared with the validation set error. The argument of the paper is that the two measures are correlated, and therefore it may be possible to use the topological measure (which depends only on the structure of the network, and not on data) in place of the validation set error to assess the generalization performance of a deep net.
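The claimed correlation between the per-update topological distance and the validation accuracy ultimately reduces to a correlation between two training curves. A minimal sketch of that check is below; the curves are invented placeholders, not the paper's measured Heat or Silhouette distances.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder curves: per-epoch PH distance between consecutive weight
# updates (hypothetical values) and the corresponding validation accuracy.
ph_distance = [0.90, 0.55, 0.30, 0.18, 0.12, 0.10]
val_accuracy = [0.40, 0.62, 0.75, 0.82, 0.86, 0.88]

# A strongly negative value here would mirror the observation that the
# homological distance shrinks as validation accuracy rises.
r = pearson(ph_distance, val_accuracy)
```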
Neural tangent kernel eigenvalues accurately predict generalization
1 INTRODUCTION . Understanding and predicting a machine learning model ’ s generalization to unseen data is a central goal of machine learning theory . For a given class of model , one would ideally want a simple picture of a model ’ s inductive bias , identifying the set of functions on which a given model will generalize well and those on which it will generalize poorly and making quantitative predictions of key measures of the quality of generalization . In this paper , we derive such a theory for ridgeless kernel regression with any kernel and , using the neural tangent kernel ( NTK ) equivalence between kernel regression and infinitely wide deep neural networks , shed light on the inductive bias and generalization of wide deep neural networks . Our main contributions are as follows : • We prove a basic conservation law describing the inductive bias of kernel regression ( Theorem 1 ) . This law states that any kernel has a fixed budget of a quantity we call “ learnability ” that it must allocate to an orthogonal basis of functions , and this budget is equal to the size of the training set . As a consequence of this conservation law , we prove that for every kernel , there exists a target function on which kernel regression generalizes worse than chance ( Corollary 1 ) , and we provide a recipe to construct such functions . • We derive a new theory of generalization for kernel regression , culminating in analytical expressions for all first- and second-order statistics of the learned function ( Equations 11 and 13 ) . This theory extends the spectral picture of kernel regression revealed by Bordelon et al . ( 2020 ) and Canatar et al . ( 2021 ) . We conclude from our theory that , in realistic settings , most kernel eigenfunctions have the counterintuitive property that MSE increases as examples are added to a small training set ( Section 2.7 ) . 
• We empirically verify all our results on synthetic datasets using both exact kernel regression and deep networks of width 500 trained with gradient descent . We find that our conservation law and analytical expressions for generalization performance hold to an excellent approximation even for wide finite networks , and we observe worse-than-chance generalization and increasing MSE as predicted . We find that our theory ’ s core predictions remain fairly accurate even down to width 20 for depth-four networks , suggesting it is a promising starting point for understanding generalization in practical architectures . 1.1 RELATED WORK . The study of the generalization of kernel regression via spectral analysis began in the literature on Gaussian process inference , for which kernel regression gives the mean of the posterior function . In a limited teacher-student setting in which teacher and student were described by the same Gaussian process , Sollich ( 1999 ) and Vivarelli & Opper ( 1999 ) studied expected MSE as a function of training set size , and Sollich ( 2001 ) extended their results to the setting in which the Gaussian processes ’ eigenvalues can differ , but no similar results from this era described the fully general setting . The discovery of the equivalence between wide neural networks and kernel regression via the NTK ( Jacot et al. , 2018 ; Lee et al. , 2019 ) sparked a resurgence of interest in the generalization of kernel regression ( Belkin et al. , 2018a ; 2019b ; a ; Liang & Rakhlin , 2020 ; Bietti & Mairal , 2019 ) . Further noting that both kernel regression and deep learning can generalize well despite perfectly interpolating their training data , Belkin et al . ( 2018b ) argued that “ to understand deep learning we need to understand kernel learning. ” Arora et al . ( 2019a ) derived a data-dependent generalization bound for a wide two-layer architecture involving its infinite-width NTK . 
Though this bound is a significant advance over VC-dimension-based bounds inapplicable to overparameterized models , it only applies to a specific architecture and can be many times greater than the true loss , while our results predict true loss within small error bars and apply to any infinite-width architecture ( and in fact to any incarnation of kernel regression ) . In pioneering work , Bordelon et al . ( 2020 ) and Canatar et al . ( 2021 ) extended classic results on the generalization of kernel regression to the fully general case and , using the NTK , confirmed that their approximate expressions for expected MSE agree even with wide finite deep neural networks . Their results reveal a simple picture of neural network generalization : as samples are added to the training set , the network generalizes well on a larger and larger subspace of input functions . The natural basis for this subspace of learnable functions is the eigenbasis of the NTK , and its eigenfunctions are learned in descending order of their eigenvalues . The results of the present paper corroborate and extend this picture of the generalization of kernel regression . Our work chiefly differs from theirs in that ( a ) our conservation law is new , ( b ) we derive and test expressions for generic first- and second-order statistics of the learned function , not just MSE , and ( c ) our derivation is quite different ( and , we believe , easier to understand ) even when it arrives at the same results . Our work is related to the well-known observation that neural networks have a “ spectral bias ” towards representing slowly-varying functions ( Valle-Perez et al. , 2018 ; Yang & Salman , 2019 ) ; in particular , the kernel spectrum picture clarifies that this bias is a consequence of the fact that high NTK eigenmodes are typically slowly-varying . We also note a body of work studying the related phenomenon that slowly-varying functions are learned first during training ( Rahaman et al. 
, 2019; Xu et al., 2019b;a; Xu, 2018; Cao et al., 2019; Su & Yang, 2019). 2 THEORY. 2.1 A REVIEW OF KERNEL REGRESSION. Consider the task of learning an m-element function f : X → R^m given a set of n unique training points D = {x_i}_{i=1}^n ⊆ X and their corresponding function values f(D) ∈ R^{n×m}. To simplify our analysis, we let the domain X be discrete with size M ≡ |X| and assume the n training points are uniformly sampled from X. This choice of discrete domain permits us to use matrices and vectors instead of operators and functions in our derivations. By taking M → ∞, our results easily extend to problems with a continuous domain: for example, if M → ∞ and we allow the points in X to approach a density equal to some desired measure over R^d, we recover the standard setting in which the data are sampled nonuniformly from R^d. We use f̂ to denote the function learned by a neural network trained on this dataset. Remarkably, for an infinite-width neural network optimized via gradient descent to zero training MSE loss, this learned function is given by f̂(x) = K(x, D) K(D, D)^{-1} f(D), (1) where K : X × X → R is the network's "neural tangent kernel" (NTK) (Jacot et al., 2018; Lee et al., 2019), K(D, D) is the "kernel matrix" defined by K(D, D)_{ij} = K(x_i, x_j), and K(x, D) is a row vector with components K(x, D)_i = K(x, x_i).¹ We give a brief introduction to the NTK in Appendix C. Due to its similarity to the normal equation of linear regression, Equation 1 is often called "kernel regression."² Equation 1 holds exactly in the infinite-width limit of fully-connected networks (Lee et al., 2019), convolutional networks (Arora et al., 2019b), transformers (Hron et al., 2020), and more (Yang, 2019). Moreover, several empirical studies have shown it to be a good approximation for networks of even modest width (Lee et al., 2019; 2020).
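Equation 1 can be exercised directly in a few lines. The sketch below implements ridgeless kernel regression in pure Python, with an RBF kernel standing in for the NTK (computing a true NTK requires fixing a network architecture); the helper names are ours, not the paper's.

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel, used here as a stand-in for the NTK."""
    return math.exp(-gamma * (x - y) ** 2)

def solve(A, b):
    """Solve A a = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                factor = M[r][c] / M[c][c]
                M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def kernel_regression(X, y, kernel):
    """Equation 1: f_hat(x) = K(x, D) K(D, D)^{-1} f(D)."""
    K = [[kernel(a, b) for b in X] for a in X]
    alpha = solve(K, y)  # alpha = K(D, D)^{-1} f(D)
    return lambda x: sum(a * kernel(x, xi) for a, xi in zip(alpha, X))

# Ridgeless kernel regression interpolates the training data exactly.
X, y = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
f_hat = kernel_regression(X, y, rbf)
```

Because K(x_i, D) K(D, D)^{-1} selects the i-th unit vector when x = x_i, the learned function matches the training labels exactly on D, which is the interpolating (ridgeless) regime the paper studies.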
Our approach will be to study the generalization behavior of Equation 1, conjecture that our results also apply to finite networks, and finally provide strong support for our conjecture with experiments. Examining Equation 1, one finds that the m indices of f can each be treated separately: the learned f̂ is equivalent to simply performing kernel regression with each of the m indices as a scalar target function and then vectorizing the results. For simplicity, then, we hereafter assume m = 1; the extension to m > 1 is straightforward. 2.2 FIGURES OF MERIT OF f̂ We will study three measures of the quality of the learned function f̂. All three will be defined in terms of the inner product over X: for two functions g, h : X → R, their inner product is ⟨g, h⟩ ≡ (1/M) Σ_{x∈X} g(x) h(x). The first measure of quality is mean-squared error (MSE). For a particular dataset D, MSE is given by E^{(D)}(f) ≡ ⟨f − f̂, f − f̂⟩. Of more interest will be the expected MSE over all datasets of size n, given by E(f) ≡ E_D[E^{(D)}(f)]. We note that the inner product is taken over all of X, including D, even though f̂(x) = f(x) for x ∈ D in kernel regression. In maximizing the similarity of f̂ to f, we typically wish to minimize its similarity to all functions orthogonal to f. The second measure examines the coefficient in f̂ of one such function orthogonal to f. Letting g : X → R be a function such that ⟨f, g⟩ = 0, we consider the mean and variance of the quantity ⟨f̂, g⟩. We will derive accurate predictions for this metric of generalization. Lastly, we introduce a figure of merit quantifying the alignment of f and f̂, which we call "learnability." It is given by L^{(D)}(f) ≡ ⟨f, f̂⟩ / ⟨f, f⟩, L(f) ≡ E_D[L^{(D)}(f)], (2) where L^{(D)}(f) is the dataset-dependent learnability of the function f ("D-learnability") and L(f) is its expectation over random datasets ("learnability").
Though at first glance these two seem like odd figures of merit, we will soon show that they have many desirable properties when f̂ is given by Equation 1: unlike MSE, both are bounded in [0, 1] (Lemma 1d), always change monotonically as new data points are added (Lemma 1e), are invariant to rescalings of f, and obey a simple conservation law (Theorem 1). Furthermore, expanding the inner product in the definition of E and noting that E(f) ≥ ⟨f, f⟩ (1 − L(f))², one can see that low MSE is impossible without high learnability. We will ultimately derive an accurate approximation for learnability that is substantially simpler than any known approximation for MSE. ¹ Naively, Equation 1 gives only the expected learned function, and the true learned function will include a fluctuation term reflecting the random initialization. However, by storing a copy of the parameters at t = 0 and redefining f̂_t(x) := f̂_t(x) − f̂_0(x) throughout optimization and at test time, this term becomes zero, and so we neglect it in our theory and use this trick in our experiments. ² Interestingly, exact Bayesian inference for infinite-width neural networks yields predictions of the same form as Equation 1, with K being the "neural network Gaussian process" (NNGP) kernel instead of the NTK (Lee et al., 2018). We proceed treating K as a network's NTK, but our theory and exact results (including our "no-free-lunch" theorem) apply equally well to any incarnation of kernel regression, including RBF kernel regression and linear regression.
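On the discrete domain used above, the inner product and the learnability of Equation 2 are direct to compute once f and f̂ are represented as vectors over X. A toy sketch with hand-picked vectors (not the output of an actual NTK regression):

```python
def inner(f, g):
    """Uniform inner product over the discrete domain X: (1/M) * sum f(x) g(x)."""
    return sum(a * b for a, b in zip(f, g)) / len(f)

def learnability(f, f_hat):
    """L^{(D)}(f) = <f, f_hat> / <f, f>: 1 if f_hat = f, 0 if they are orthogonal."""
    return inner(f, f_hat) / inner(f, f)

# Toy example on a domain of size M = 4: the learned function recovers
# half of the target's component along f.
f = [1.0, -1.0, 1.0, -1.0]
f_hat = [0.5, -0.5, 0.5, -0.5]
L = learnability(f, f_hat)  # 0.5
```

Note the invariance to rescalings of f mentioned in the text: multiplying f by any nonzero constant scales numerator and denominator alike.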
This paper provides a novel theoretical account of generalization for kernel regression. To do so, the authors study a matrix built out of the kernel eigensystem evaluated on the training set that they call the "learning transfer matrix," and which relates the decomposition of the true function in the eigenbasis to the decomposition of the learned function in the eigenbasis. In other words, this matrix characterizes the kernel regression solution in the eigenbasis. By making a number of approximations, they are able to find a closed-form expression for this matrix and study its first and second order statistics. A main conclusion from this analysis is that functions are more learnable by kernel regression if they have more weight in higher eigenvalue modes. Moreover, the authors also introduce a new metric, which they call "learnability," that they use to prove a new no-free-lunch theorem whose content is that when averaging over a complete basis of functions the learnability is independent of the kernel. This means that the choice of kernel should be tailored to the details of the function being learned in order for kernel regression to succeed. A related result is that there are some functions for which the kernel regression solution generalizes worse than simply outputting "0" on data outside the training set, i.e. for which the solution fails to generalize at all.
The paper examines the eigenvalues of a neural network’s “Neural Tangent Kernel” to analyze its generalization performance in the infinite-width regime. It conjectures that the same results will also apply in the finite width regime as well. By analyzing kernel regression and by defining a measure as the “learnability” of a given target function, the paper proves a “no-free-lunch” theorem which implies that improving a network’s or kernel's generalization for a given target function must worsen its generalization performance for its "orthogonal functions". The paper then analytically predicts two phenomena: worse than chance generalization for hard functions and non-monotonic error curves in a small data regime. It also provides some simulations to corroborate the analytic results.
MaiT: integrating spatial locality into image transformers with attention masks
1 INTRODUCTION. Convolutional neural networks (CNNs) (Krizhevsky et al., 2012; He et al., 2016; Tan & Le, 2019) have been the de facto models for computer vision (CV) tasks, as they are inherently equipped with inductive biases such as translation equivariance and locality. Recently, vision transformers (Dosovitskiy et al., 2021; Touvron et al., 2021a) have been gaining momentum, translating the success of transformer-based models in natural language processing (NLP) tasks (Vaswani et al., 2017; Devlin et al., 2019). Self-attention heads excel at capturing long-range dependencies in sequences but struggle to focus on local information. Unlike CNNs, which have gone through many iterations of optimization, vision transformers are not very efficient and still require large amounts of computing power and FLOPs. Naturally, combining the benefits of both CNNs and vision transformers is a promising way to further boost the performance of CV models. The question remains how to effectively integrate inductive biases such as spatial locality into transformers. One direction is to utilize convolutional blocks to extract spatial information by adapting the patch-token embedding layer, the self-attention module, or the feed-forward layers, forming CNN-transformer hybrid structures as in Li et al. (2021b); Srinivas et al. (2021); Graham et al. (2021); Wu et al. (2021a); Yuan et al. (2021a); Guo et al. (2021). However, forcefully inserting convolutional operations into transformers may constrain the learning capacity of transformers. To capture spatial information without significantly changing the transformer architecture, Chu et al. (2021b) introduce extra positional encoding. Han et al. (2021); Chen et al.
(2021) fuse local and global representations using multiple transformer blocks or branches to simultaneously process images at different scales, such as pixel-level, small-patch, or large-patch. Yuan et al. (2021b) apply a layerwise tokens-to-tokens transformation to capture local structure. These approaches usually come at the cost of extra parameters and model complexity, thus potentially lowering inference speed. d’Ascoli et al. (2021); Zhou et al. (2021) improve self-attention for better representations with gated positional self-attention and a learnable transformation matrix, respectively. Hu et al. (2019); Ramachandran et al. (2019) adapt the self-attention module and improve the performance of CNN models. Liu et al. (2021) capture locality within shifted windows in a hierarchical structure; though this can save computation in some cases, the added model complexity may limit its potential. Different from prior works, we incorporate spatial locality without changing the transformer architecture or adding extra parameters/FLOPs. We propose attention masks to guide attention heads to focus on local information. Masked attention heads extract local dependencies more efficiently by allowing information aggregation only from the closest neighbors. This frees the remaining unmasked heads to learn global information more effectively. We name the modified model Masked attention image Transformer (MaiT), which is built on top of DeiT (Touvron et al., 2021a). MaiT gathers both local and global information at the same time from different heads. Moreover, the regularization effect of attention masks facilitates the training of deep transformers by guiding attention-map learning and promoting diversity across transformer layers. We show that less is more in this specific case with attention masks. MaiT achieves up to 2.5% higher top-1 accuracy on ImageNet (Deng et al.
, 2009) with the same model architecture as DeiT (Touvron et al., 2021a). Additionally, deep MaiT outperforms CaiT (Touvron et al., 2021b) by up to 1.7% in top-1 accuracy with fewer parameters and a simpler structure. In summary, we make three major contributions in this work: 1) We propose attention masks to encode spatial locality into self-attention heads without structural changes or extra parameters/computation, while improving model efficiency. 2) We present a quick empirical search strategy for masking-scheme exploration, along with an automatic end-to-end search alternative. 3) We reveal the importance of locality across layers and the performance impact of attention masks on deep vision transformers. Note that although MaiT is demonstrated with DeiT/CaiT, the attention mask is applicable to other vision transformers as well. 2 RELATED WORK . Spatial locality is an integral part of the convolutional operation, with weight filters attending to local regions of input feature maps. Vision transformer (ViT) (Dosovitskiy et al., 2021) is the first pure transformer-based model for vision tasks, but it requires a large private labeled dataset, JFT-300M (Sun et al., 2017), to achieve competitive performance. Data-efficient image transformer (DeiT) (Touvron et al., 2021a) improves upon ViT by introducing stronger data augmentation, regularization, and knowledge distillation. Class-attention in image transformer (CaiT) (Touvron et al., 2021b) extends DeiT by increasing the number of transformer layers. To overcome the difficulties of training deeper transformers, CaiT introduces LayerScale and class-attention layers, which increase the parameters and model complexity. Tokens-to-Token vision transformer (T2T) (Yuan et al., 2021b) proposes an image transformation by recursive token aggregation to capture local structure. Stand-alone self-attention (Ramachandran et al.
, 2019) applies local self-attention layers to replace spatial convolutions and outperforms the original ResNet models. Even though sharing value and key spatially is parameter-efficient in this approach, content-based information is lost. Transformer-iN-Transformer (TNT) (Han et al., 2021) models both patch-level and pixel-level representations, applying outer and inner transformer blocks to extract global and local information, respectively. ConViT (d’Ascoli et al., 2021) proposes gated positional self-attention to incorporate soft convolutional biases. CrossViT (Chen et al., 2021) proposes a dual-branch transformer architecture for multi-scale feature extraction. For pixel-level prediction tasks such as semantic segmentation and object detection, Pyramid Vision Transformer (PVT) (Wang et al., 2021a) introduces a progressive shrinking pyramid and spatial-reduction attention with fine-grained image patches. DETR (Carion et al., 2020) adapts transformers for object detection tasks. Swin Transformer (Liu et al., 2021) applies a hierarchical transformer with shifted windows of varying sizes. Twins (Chu et al., 2021a) deploys interleaved locally-grouped self-attention and global sub-sampled attention layers to improve performance. To optimize transformers and save computation, Wu et al. (2021b) use centroid attention to extract and compress input information, Jaegle et al. (2021) iteratively distill inputs into a latent space with attention bottlenecks, Wang et al. (2021b) dynamically adjust the number of tokens with multiple cascading transformers, and Wu et al. (2020) introduce semantic tokens to replace pixel-based transformers and save computation. There are also hybrid architectures fusing convolutional and transformer blocks, such as LocalViT (Li et al., 2021b), BoTNet (Srinivas et al., 2021), LeViT (Graham et al., 2021), BossNet (Li et al., 2021a), CvT (Wu et al., 2021a), CoaT (Xu et al.
, 2021), and CMT (Guo et al., 2021), which aim for higher accuracy and faster inference. Unlike prior literature, our work explores the intrinsic capability of the pure transformer block to incorporate spatial locality, and the impact of locality along the depth direction. This work is also inspired by the emerging graph attention network (Veličković et al., 2018), borrowing the concept of message passing and information aggregation from nearest neighbors. Note that attention masks have been used in NLP tasks as a sparsification method to reduce computational complexity, as well as to capture local information, as in Guo et al. (2019); Child et al. (2019); Beltagy et al. (2020); Ainslie et al. (2020). 3 VISION TRANSFORMER PRELIMINARIES . The transformer architecture introduced by Vaswani et al. (2017) inspired many model variants with remarkable success in NLP tasks. ViT (Dosovitskiy et al., 2021) extends the pure transformer-based architecture to CV applications. Instead of pixel-level processing, ViT splits the original image into a sequence of patches as inputs and transforms them into patch tokens, for better computational efficiency. In general, ViT consists of three fundamental modules: the embedding layer, multi-head self-attention, and the feed-forward network. To process images in a transformer, the original 224×224 RGB image is split into a sequence of N = 14×14 = 196 patches, each of a fixed size (typically 16×16 pixels). Patches are then transformed into patch embeddings with hidden dimension D of 192, 384, and 768 for the tiny, small, and base models, respectively, in ViT/DeiT. In addition to patch tokens, the embedding layer also integrates positional information, classification, and knowledge distillation through the positional embedding, class token, and distillation token, respectively. The positional embedding is a trainable vector added to the patch embeddings.
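Stepping back to the patch-tokenization step described above, here is a minimal NumPy sketch (ours, not the authors' code; the projection matrix is a random stand-in for the learned embedding, using the tiny configuration D = 192):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into a sequence of flattened patches."""
    H, W, C = image.shape
    n = H // patch  # 224 // 16 = 14 patches per side
    blocks = image.reshape(n, patch, n, patch, C).transpose(0, 2, 1, 3, 4)
    return blocks.reshape(n * n, patch * patch * C)  # (196, 768)

rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224, 3))
tokens = patchify(img)                      # (196, 768) raw patch vectors
W_embed = rng.standard_normal((768, 192))   # stand-in for the learned projection to D = 192
embeddings = tokens @ W_embed               # (196, 192) patch embeddings
print(tokens.shape, embeddings.shape)       # (196, 768) (196, 192)
```

In the real model, a learned positional embedding of the same shape is then added to `embeddings`, and the class token is concatenated in front.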
However, this positional embedding is added only in the embedding layer. The spatial information is largely lost in the transformer layers, since all-to-all attention is invariant to the order of the patches. The class token is another trainable vector (1×D), concatenated to the patch tokens (total N+1). It is used to collect information from the patch tokens to make output predictions, while also spreading information among patch tokens during training. A distillation token is sometimes added for knowledge transfer from teacher models, such as a CNN model. When training the distilled version of the model, a distillation token is further concatenated to the patch tokens along with the class token (total N+2). The multi-head self-attention (MHA) module has multiple parallel attention heads, each comprising three main components: Query (Q), Key (K), and Value (V). Key and Query are trained and multiplied to estimate how much weight to place on each corresponding token in Value for the output: Attention(Q, K, V) = Softmax(QK^T / √d) V (1), where the softmax is applied to each row of the product matrix QK^T and √d provides appropriate normalization. Multiple attention heads in MHA attend to different parts of the input simultaneously. With H heads in an MHA layer, the hidden dimension D is split equally across all heads (D = H × d). The feed-forward network (FFN) follows the MHA module, containing two linear transformation layers separated by a GELU activation. The hidden dimension expands by 4× after the first linear layer, from D to 4D, and is reduced back to D in the second linear layer. Both MHA and FFN use skip-connections with layer normalization as the residual operation. 4 MASKED ATTENTION HEAD . Spatial locality plays a crucial role in computer vision tasks. CNN models capture it using sliding filters of shared weights, typically with a receptive field of 3×3, 5×5, or 7×7.
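Before moving to the masked variant, the plain scaled dot-product attention of Eq. (1) can be sketched in a few lines of NumPy (single head, illustrative shapes):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Eq. (1): Softmax(Q K^T / sqrt(d)) V, softmax applied to each row."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # (N, N) all-to-all attention products
    return softmax(scores) @ V      # (N, d)

rng = np.random.default_rng(0)
N, d = 197, 64                      # 196 patch tokens + 1 class token
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)                    # (197, 64)
```

Each row of the softmax output sums to one, so each output token is a convex combination of all value vectors, which is exactly the all-to-all behavior the attention mask will later restrict.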
In contrast to CNN models, locality is not explicitly introduced in the transformer structure. With attention masks, we can explicitly insert locality into self-attention modules without introducing any extra parameters or computation. The key idea is to apply a mask to the all-to-all attention products (i.e., QK^T) to reinforce the weights from the closest neighbors, aggregating information only from tokens selected by the mask. Figure 1 illustrates an example of our proposed masking scheme. The orange box shows a 3×3 mask, where only the directly neighboring patches are selected. Specifically, Patch 16 only gathers information from its closest neighbors, Patches 1, 2, 3, 15, 17, 29, 30, and 31, and ignores the rest of the patches. This differs from the typical all-to-all attention module, where Patch 16 attends to all patches 0-195. We can easily expand the extent of the attention mask beyond the closest neighbors, to second-level neighbors (green box in Figure 1) or more. Note that the class token (and distillation token) still attends to all the patches to collect and spread information during the forward and backward passes. Since each attention product selected by the mask is calculated from Q and K, the masked attention head also retains content-based locality information. The attention mask is applied before the softmax, regulating the distribution of attention maps to focus more on the closest neighbors: Masked Attention(Q, K, V) = Softmax(M ⊙ QK^T / √d) V (2), where M ∈ {0, 1}^((N+1)×(N+1)) is a binary attention mask applied elementwise, encoding spatial locality into the attention head by passing through only the weights from close neighbors and setting the rest to zero. Note that it is important to apply the mask before the softmax, because this allows the model to learn the importance of locality flexibly. More precisely, unselected patches appear as e^0 = 1 in the numerator of the softmax operation.
Thus, if the attention products of the closest neighbors are meaningfully larger than zero (i.e., M ⊙ QK^T ≫ 0), local information dominates. However, if those products are negative or close to zero, local information is insignificant and global information is more important. Therefore, inserting masks before the softmax operation allows the model to either enforce locality or bypass it.
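To make the scheme concrete, the following sketch (ours, not the authors' code) builds the binary 3×3-neighborhood mask for the 14×14 patch grid plus the class token, and applies it before the softmax as in Eq. (2). The class-token row and column are all ones, so it keeps attending globally, and masked-out products become 0, contributing e^0 = 1 inside the softmax:

```python
import numpy as np

def local_mask(grid=14, radius=1):
    """Binary (N+1)x(N+1) mask: patch i attends to patches within `radius`
    in the 2-D grid; index 0 is the class token (kept all-to-all)."""
    N = grid * grid
    M = np.zeros((N + 1, N + 1))
    M[0, :] = M[:, 0] = 1.0                      # class token stays global
    for i in range(N):
        ri, ci = divmod(i, grid)
        for j in range(N):
            rj, cj = divmod(j, grid)
            if abs(ri - rj) <= radius and abs(ci - cj) <= radius:
                M[i + 1, j + 1] = 1.0
    return M

def masked_attention(Q, K, V, M):
    """Eq. (2): Softmax(M ⊙ QK^T / sqrt(d)) V; zeroed products still
    contribute e^0 = 1 to the softmax numerator."""
    d = Q.shape[-1]
    scores = M * (Q @ K.T) / np.sqrt(d)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (e / e.sum(axis=-1, keepdims=True)) @ V

M = local_mask()
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((197, 16)) for _ in range(3))
out = masked_attention(Q, K, V, M)
print(int(M[17].sum()), out.shape)   # interior patch 16: 9 neighbors + class token = 10
```

Note the mask multiplies the scores rather than adding -inf as in typical NLP attention masking; this is what lets a head learn to bypass locality when the local products are near zero.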
This paper proposes to bring locality into the attention module of vision transformers through the introduced attention masks. The attention masks are binary and restrict attention to the local field of a token. The local attention mask yields a block matrix before the application of the softmax function. Tokens with zero mask values contribute a constant value of e^0 = 1 in the softmax numerator, so those tokens are still involved in the computation. It is claimed that the binary masks are thereby able to keep the global connection when necessary.
SP:7ba7db3bba0bb539fe00165024a483e9f59d5d35
This paper attempts to improve classic vision transformers, specifically the DeiT model, by introducing the Masked Attention Head. Instead of only aggregating global information as in the original self-attention heads, this paper introduces local information into self-attention via the proposed Masked Attention Head. Experiments on ImageNet show that the proposed approach improves the performance of the original DeiT model.
SP:7ba7db3bba0bb539fe00165024a483e9f59d5d35
Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis
1 INTRODUCTION . Multi-agent reinforcement learning (MARL) has achieved great success in various application domains, including control (66; 10; 51), robotics (64), wireless sensor networks (24; 67), intelligent systems (71), etc. In MARL, a set of fully decentralized agents interact with a dynamic environment following their own policies and collect local rewards, and their goal is to collaboratively learn the optimal joint policy that achieves the maximum expected accumulated reward. Classical policy optimization algorithms have been well developed and studied, e.g., policy gradient (PG) (49), actor-critic (AC) (23) and natural actor-critic (NAC) (37; 7). In particular, AC-type algorithms are computationally tractable and efficient, as they take advantage of both policy-gradient and value-based updates. However, in the multi-agent setting, decentralized AC is more challenging to design than centralized AC, as the algorithm updates involve sensitive agent information, e.g., local actions, rewards and policies, which must be kept local in the decentralized learning process. In existing designs of decentralized AC, the agents need to share either their local actions (70; 69; 8; 36; 72; 27; 19; 26; 11) or local rewards (15; 33; 32) with their neighbors, which is undesirable. This issue is addressed by Algorithm 2 of (70) at the cost of learning a parameterized model to estimate the averaged reward, yet this approach requires extra learning effort and the reward estimation can be inaccurate. Moreover, existing decentralized AC algorithms are not sample- and communication-efficient, and do not have finite-time convergence guarantees, especially under the practical Markovian sampling setting. Therefore, we aim to address the following important questions.
• Q1: Can we develop a decentralized AC algorithm that is convergent, sample- and communication-efficient, and does not require sharing agents’ local actions and policies? On the other hand, as an important variant of decentralized AC, the decentralized NAC algorithm has not been formally developed and rigorously analyzed in the existing literature. In particular, a major challenge is that we need a fully decentralized and computationally tractable scheme to compute the inverse of the high-dimensional Fisher information matrix, and this scheme must be both sample- and communication-efficient. Hence, we ask: • Q2: Can we develop a computationally tractable and communication-efficient decentralized NAC algorithm that has a low finite-time sample and communication complexity? In this study, we provide affirmative answers to the above two questions by developing fully decentralized AC and NAC algorithms that are sample- and communication-efficient and do not reveal agents’ local actions and policies. We also develop a rigorous finite-time analysis of these algorithms under Markovian sampling. Our contributions are summarized as follows. 1.1 OUR CONTRIBUTIONS . We develop fully decentralized AC and NAC algorithms and analyze their finite-time sample and communication complexities under Markovian sampling. Our results and comparisons to existing works are summarized in Table 1.¹ Our decentralized AC and NAC algorithms adopt the following novel designs to estimate the policy gradient accurately and efficiently. • Noisy Rewards: In a decentralized setting, local policy gradients (estimated locally by the agents) involve the average of all agents’ local rewards. To help agents estimate this averaged reward without revealing the raw local rewards, we let them share Gaussian-corrupted local rewards with their neighbors, where the variance of the Gaussian noise can be adjusted by each agent to reach its desired level.
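A minimal sketch of the noisy-reward idea, assuming a ring communication graph and a standard doubly stochastic gossip matrix (our illustrative choices, not necessarily the paper's): each agent shares only a Gaussian-corrupted copy of its reward, and repeated local averaging drives all agents to consensus on the average of the shared values, which is an unbiased estimate of the true average reward:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, sigma = 5, 0.1
rewards = rng.uniform(0, 1, n_agents)             # raw local rewards (kept private)
noisy = rewards + rng.normal(0, sigma, n_agents)  # each agent shares a corrupted copy

# Gossip averaging on a ring: x <- W x with a doubly stochastic mixing matrix.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 0.25

x = noisy.copy()
for _ in range(50):                               # local averaging steps
    x = W @ x
print(np.abs(x - noisy.mean()).max())             # consensus on the noisy average
```

Since the injected noise has zero mean, the consensus value is close to the true average reward, while no agent ever reveals its raw reward.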
• Mini-batch Updates: We apply mini-batch Markovian sampling to both the decentralized actor and critic updates. This approach 1) helps the agents obtain accurate estimates of the corrupted averaged reward; 2) significantly reduces the variance of the policy gradient caused by Markovian sampling; and 3) significantly reduces the communication frequency and complexity. Moreover, for our decentralized NAC algorithm, we additionally adopt the following design to compute the inverse of the Fisher information matrix in an efficient and decentralized way. • Decentralized Natural Policy Gradient: By reformulating the natural policy gradient as the minimizer of a quadratic program, we develop a decentralized SGD with Markovian sampling that allows the agents to estimate the corresponding local natural gradients by communicating only scalar variables with their neighbors. In particular, to minimize the sample complexity of the decentralized SGD, we set the batch size to be exponentially increasing. Theoretically, for the first time, we provide a finite-time convergence analysis of decentralized AC and NAC algorithms under Markovian sampling. Specifically, we prove that our decentralized AC and NAC algorithms achieve overall sample complexities of O(ε⁻² ln ε⁻¹) and O(ε⁻³ ln ε⁻¹), respectively, both matching the state-of-the-art complexities of their centralized versions (60). Moreover, both decentralized algorithms achieve a significantly reduced overall communication complexity of O(ε⁻¹ ln ε⁻¹). In particular, our analysis involves new technical developments. ¹ In this table, Õ(·) hides all logarithm factors. In (25), the sample complexity has been established for various AC-type algorithms, and we compare with the best one. In (70), Algorithm 1 needs to share local actions while Algorithm 2 does not.
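The quadratic-program reformulation mentioned above can be written out explicitly. In standard notation (ours, not necessarily the paper's), the natural policy gradient is the minimizer of a quadratic objective, so it can be approached by SGD without ever forming the inverse Fisher matrix:

```latex
w^\star \;=\; F(\theta)^{-1}\nabla J(\theta)
\;=\; \arg\min_{w}\; \tfrac{1}{2}\, w^{\top} F(\theta)\, w \;-\; w^{\top}\nabla J(\theta),
\qquad
F(\theta) \;=\; \mathbb{E}\big[\nabla \log \pi_\theta(a\mid s)\,\nabla \log \pi_\theta(a\mid s)^{\top}\big].
```

Since both F(θ) and ∇J(θ) are expectations over the state-action distribution, unbiased noisy estimates of the quadratic's gradient F(θ)w − ∇J(θ) can be formed from (Markovian) samples, which is what makes a fully decentralized SGD solution possible.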
First , we need to characterize the bias and variance of ( natural ) policy gradient and stochastic gradient caused by the noisy rewards and the inexact local averaging steps , and control them with proper choices of batch sizes and number of local averaging steps . Second , when using decentralized Markovian SGD to compute the inverse Fisher information matrix , we need to use an exponentially increasing batch size to achieve an optimized sample complexity bound . Such a Markovian SGD with adaptive batch size has not been studied before and can be of independent interest . 1.2 RELATED WORK . Convergence analysis of AC and NAC . In the centralized setting , the AC algorithm was firstly proposed by ( 23 ) and later developed into the natural actor-critic ( NAC ) algorithm ( 37 ; 7 ) . Specifically , ( 37 ) does not provide any convergence result , while ( 22 ; 5 ) and ( 20 ; 6 ; 7 ) establish the asymptotic convergence rate of centralized AC and NAC , respectively , which is weaker than our finite-time convergence results . Furthermore , ( 54 ; 25 ; 38 ; 61 ; 56 ) and ( 54 ) establish the finite-time convergence rate of centralized AC and NAC , respectively . Please refer to Table 1 for their sample complexities . Moreover , ( 60 ) improve the finite-time sample complexities of the above works to the state-of-theart result for both centralized AC and NAC by leveraging mini batch sampling , and our sample complexities match these state-of-the-art results . In the decentralized setting , a few works have established the almost sure convergence result of AC ( 15 ; 27 ; 47 ; 33 ) , but they do not characterize the finite-time convergence rate and the sample complexity . To the best of our knowledge , there is no formally developed decentralized NAC algorithm . Decentralized TD-type algorithms . 
The finite-time convergence of decentralized TD ( 0 ) has been obtained using i.i.d samples ( 52 ; 14 ; 53 ; 28 ) and Markovian samples ( 46 ; 53 ) , respectively , without revealing the agents ’ local actions , policies and rewards . Decentralized off-policy TD-type algorithms have been studied in ( 34 ; 45 ; 9 ; 12 ) . Decentralized AC in other MARL settings . Some works apply decentralized AC to other MARL settings that are very different from ours . For example , ( 44 ; 36 ; 17 ; 11 ; 57 ) studied adversarial game . ( 30 ) studied a mixed cooperative-competitive environment where each agent maximizes its own Q function ( 30 ) . ( 11 ) proposed Delay-Aware Markov Game which considers delay in Markov game . ( 68 ; 31 ) studied linear control system and linear quadratic regulators instead of an MDP . ( 55 ) studied sequential prisoner ’ s dilemmas . Policy gradient algorithms . Policy gradient ( PG ) and natural policy gradient ( NPG ) are popular policy optimization algorithms . ( 1 ) characterizes the iteration complexity ( i.e. , number of episodes ) of centralized PG and NPG algorithms by assuming access to exact policy gradient . They also established a sample complexity result O ( −6 ) in the i.i.d . setting for NPG , which is worse than the state-of-the-art result O ( −3 ln −1 ) of both centralized NAC ( 60 ) and our decentralized NAC with Markovian samples . ( 3 ) proposes decentralized PG in a simple cooperative MARL setting , where all the agents share one action and the same policy , and they establish a iteration complexity in the order of O ( −4 ) . ( 13 ; 73 ) apply decentralized PG to Markov games . ( 2 ) applies decentralized NPG to a different cooperative MARL setting where each agent observes its own state , takes its own action and has access to these information of its neighbors . Value-based algorithms . Value-based algorithms have also been develop for MARL . 
Specifically , ( 21 ; 18 ) develop distributed Q-learning in a simplified cooperative MARL setting , where the agents share a joint action . In particular , ( 18 ) characterizes the convergence rate of a value function-based convergence error , which is a different optimality measure from that of AC-type algorithms . ( 35 ) applies distributed Q-learning to another cooperative MARL setting , where each agent observes its own state and takes its own action . It establishes an asymptotic convergence guarantee , and no convergence rate is given . ( 40 ) develops a value propagation algorithm that uses primal-dual method to minimize a soft Bellman error in the MARL setting . Under an assumption that the variance of the stochastic gradient is uniformly bounded , it establishes a non-asymptotic convergence rate to an approximate stationary point . 2 REVIEW OF MULTI-AGENT REINFORCEMENT LEARNING . In this section , we first introduce some standard settings of RL . Consider an agent that starts from an initial state s0 ∼ ξ and collects a trajectory of Markovian samples { st , at , Rt } t ⊂ S × A × R by interacting with an underlying environment ( with transition kernel P ) following a parameterized policy πω with induced stationary state distribution µω . The agent aims to learn an optimal policy that maximizes the expected accumulated reward J ( ω ) = ( 1− γ ) E [ ∑∞ t=0 γ tRt ] , where γ ∈ ( 0 , 1 ) is a discount factor . The marginal state distribution is denoted as Pω ( st ) and the visitation measure is defined as νω ( s ) : = ( 1− γ ) ∑∞ t=0 γ tPω ( st = s ) , both of which depend on the policy parameter ω ∈ Ω and the transition kernel P . We also define the mixed transition kernel Pξ ( ·|s , a ) : = γP ( ·|s , a ) + ( 1− γ ) ξ ( · ) , whose stationary state distribution is known to be νω . In the multi-agent RL ( MARL ) setting , M agents are connected via a fully decentralized network and interact with a shared environment . 
The network topology is specified by a doubly stochastic communication matrix W ∈ RM×M . At any time t , all the agents share a common state st. Then , every agent m takes an action a ( m ) t following its own current policy π ( m ) t ( ·|st ) parameterized by ω ( m ) t . After all the actions at : = { a ( m ) t } Mm=1 are taken , the global state st transfers to a new state st+1 and every agent m receives a local reward R ( m ) t . In this MARL setting , each agent m can only access the global state { st } t , its own actions { a ( m ) t } t and rewards { R ( m ) t } t and policy π ( m ) t . Next , define the joint policy πt ( at|st ) : = ∏M m=1 π ( m ) t ( a ( m ) t |st ) parameterized by ωt = [ ω ( 1 ) t ; . . . ; ω ( M ) t ] , and define the average reward Rt : = 1M ∑M m=1R ( m ) t . The goal of the agents is to collaboratively learn the optimal joint policy that maximizes the expected accumulated average reward J ( ω ) : = ( 1− γ ) E [ ∑∞ t=0 γ tRt ∣∣∣s0 ∼ ξ ] . Throughout , we consider the setting that the agents keep interacting with the environment and observing a trajectory of MDP transition samples , which are further used to learn the optimal joint policy .
This paper developed two sample- and communication-efficient decentralized actor-critic algorithms for multi-agent reinforcement learning. Specifically, the authors proposed decentralized AC and natural AC algorithms that are both private and efficient for the agents to learn with. They added noise to the local rewards of agents before these rewards were shared among the agents. Moreover, the authors adopted mini-batch updates for the actors and critics in both algorithms. Theoretically, the authors showed that both algorithms achieve state-of-the-art sample complexities. To validate the proposed algorithms, they used a simple environment to show superiority over an existing decentralized AC algorithm. Overall, the topic investigated in this paper is certainly interesting. The sample and communication complexities of decentralized MARL algorithms remain active research areas. While the mathematical analysis in this work looks good, the numerical results don't look very convincing to me. Additionally, I have quite a few points in mind regarding the proposed algorithm and the relevant analysis.
SP:2e125a1c23bb6dca29df2b8cc9455ce23322c994
Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis
1 INTRODUCTION. Multi-agent reinforcement learning (MARL) has achieved great success in various application domains, including control (66; 10; 51), robotics (64), wireless sensor networks (24; 67), intelligent systems (71), etc. In MARL, a set of fully decentralized agents interact with a dynamic environment following their own policies and collect local rewards, and their goal is to collaboratively learn the optimal joint policy that achieves the maximum expected accumulated reward. Classical policy optimization algorithms have been well developed and studied, e.g., policy gradient (PG) (49), actor-critic (AC) (23) and natural actor-critic (NAC) (37; 7). In particular, AC-type algorithms are more computationally tractable and efficient as they take advantage of both policy gradient and value-based updates. However, in the multi-agent setting, decentralized AC is more challenging to design than centralized AC, as the algorithm updates involve sensitive agent information, e.g., local actions, rewards and policies, which must be kept local in the decentralized learning process. In the existing designs of decentralized AC, the agents need to share either their local actions (70; 69; 8; 36; 72; 27; 19; 26; 11) or local rewards (15; 33; 32) with their neighbors, which is undesirable. This issue is addressed by Algorithm 2 of (70) at the cost of learning a parameterized model to estimate the averaged reward, yet this approach requires extra learning effort and the reward estimation can be inaccurate. Moreover, existing decentralized AC algorithms are not sample- and communication-efficient, and do not have finite-time convergence guarantees, especially under the practical Markovian sampling setting. Therefore, we aim to address the following important question.
• Q1: Can we develop a decentralized AC algorithm that is convergent, sample- and communication-efficient, and does not require sharing agents' local actions and policies? On the other hand, as an important variant of decentralized AC, the decentralized NAC algorithm has not been formally developed and rigorously analyzed in the existing literature. In particular, a major challenge is that we need to develop a fully decentralized and computationally tractable scheme to compute the inverse of the high-dimensional Fisher information matrix, and this scheme must be both sample- and communication-efficient. Hence, we ask: • Q2: Can we develop a computationally tractable and communication-efficient decentralized NAC algorithm that has low finite-time sample and communication complexities? In this study, we provide affirmative answers to the above two questions by developing fully decentralized AC and NAC algorithms that are sample- and communication-efficient, and do not reveal agents' local actions and policies. We also develop rigorous finite-time analyses of these algorithms under Markovian sampling. Our contributions are summarized as follows. 1.1 OUR CONTRIBUTIONS. We develop fully decentralized AC and NAC algorithms and analyze their finite-time sample and communication complexities under Markovian sampling. Our results and comparisons to existing works are summarized in Table 1.¹ Our decentralized AC and NAC algorithms adopt the following novel designs to accurately estimate the policy gradient in an efficient way. • Noisy Rewards: In a decentralized setting, local policy gradients (estimated locally by the agents) involve the average of all agents' local rewards. To help the agents estimate this averaged reward without revealing the raw local rewards, we let them share Gaussian-corrupted local rewards with their neighbors, and the variance of the Gaussian noise can be adjusted by each agent to reach its desired level.
• Mini-batch Updates: We apply mini-batch Markovian sampling to both the decentralized actor and critic updates. This approach 1) helps the agents obtain accurate estimations of the corrupted averaged reward; 2) significantly reduces the variance of the policy gradient caused by Markovian sampling; and 3) significantly reduces the communication frequency and complexity. Moreover, for our decentralized NAC algorithm, we additionally adopt the following design to compute the inverse of the Fisher information matrix in an efficient and decentralized way. • Decentralized Natural Policy Gradient: By reformulating the natural policy gradient as the minimizer of a quadratic program, we develop a decentralized SGD with Markovian sampling that allows the agents to estimate the corresponding local natural gradients by communicating only scalar variables with their neighbors.¹ In particular, in order to minimize the sample complexity of the decentralized SGD, we set the batch size to be exponentially increasing. Theoretically, for the first time, we provide finite-time convergence analysis of decentralized AC and NAC algorithms under Markovian sampling. Specifically, we prove that our decentralized AC and NAC algorithms achieve the overall sample complexities O(ε^{-2} ln ε^{-1}) and O(ε^{-3} ln ε^{-1}), respectively, and both match the state-of-the-art complexities of their centralized versions (60). Moreover, both decentralized algorithms achieve a significantly reduced overall communication complexity O(ε^{-1} ln ε^{-1}). In particular, our analysis involves new technical developments. (¹In this table, Õ(·) hides all logarithm factors. In (25), the sample complexity has been established for various AC-type algorithms, and we compare with the best one. In (70), Algorithm 1 needs to share local actions while Algorithm 2 does not.)
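The quadratic-program view of the natural policy gradient in the last bullet can be made concrete with a small sketch. Since the natural gradient u* = F⁻¹g minimizes q(u) = ½uᵀFu − uᵀg, it can be approximated by SGD using stochastic rank-one estimates φφᵀ of the Fisher matrix F, with no explicit matrix inversion. The code below is our toy illustration, not the paper's decentralized algorithm: it is single-agent, uses i.i.d. rather than Markovian samples, and all names and hyperparameters are hypothetical.

```python
import numpy as np

def natural_gradient_sgd(phis, g, batch=50, steps=4000, seed=0):
    """Estimate u* = F^{-1} g, with F = E[phi phi^T], by mini-batch SGD on
    q(u) = 0.5 * u^T F u - u^T g.  Hedged sketch; the paper's decentralized,
    Markovian-sampling scheme is more involved."""
    rng = np.random.default_rng(seed)
    u = np.zeros_like(g)
    for k in range(steps):
        idx = rng.integers(0, len(phis), size=batch)
        B = phis[idx]                                    # mini-batch of score vectors
        grad = (B * (B @ u)[:, None]).mean(axis=0) - g   # (1/b) sum phi (phi^T u) - g
        u -= 0.1 / (1.0 + 0.01 * k) * grad               # diminishing step size
    return u

rng = np.random.default_rng(1)
phis = rng.normal(size=(500, 3))             # stand-in for score-function samples
F = phis.T @ phis / len(phis)                # empirical Fisher matrix
g = np.array([1.0, -2.0, 0.5])               # stand-in policy gradient
u_sgd = natural_gradient_sgd(phis, g)
u_direct = np.linalg.solve(F, g)             # explicit inversion, for comparison
```

The decaying step size mirrors standard SGD analysis for strongly convex quadratics; the paper additionally distributes this computation across agents with scalar-only communication and exponentially increasing batch sizes.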
First, we need to characterize the bias and variance of the (natural) policy gradient and stochastic gradient caused by the noisy rewards and the inexact local averaging steps, and control them with proper choices of batch sizes and numbers of local averaging steps. Second, when using decentralized Markovian SGD to compute the inverse Fisher information matrix, we need to use an exponentially increasing batch size to achieve an optimized sample complexity bound. Such a Markovian SGD with adaptive batch size has not been studied before and can be of independent interest. 1.2 RELATED WORK. Convergence analysis of AC and NAC. In the centralized setting, the AC algorithm was first proposed by (23) and later developed into the natural actor-critic (NAC) algorithm (37; 7). Specifically, (37) does not provide any convergence result, while (22; 5) and (20; 6; 7) establish the asymptotic convergence of centralized AC and NAC, respectively, which is weaker than our finite-time convergence results. Furthermore, (54; 25; 38; 61; 56) and (54) establish the finite-time convergence rate of centralized AC and NAC, respectively. Please refer to Table 1 for their sample complexities. Moreover, (60) improves the finite-time sample complexities of the above works to the state-of-the-art results for both centralized AC and NAC by leveraging mini-batch sampling, and our sample complexities match these state-of-the-art results. In the decentralized setting, a few works have established the almost sure convergence of AC (15; 27; 47; 33), but they do not characterize the finite-time convergence rate and the sample complexity. To the best of our knowledge, there is no formally developed decentralized NAC algorithm. Decentralized TD-type algorithms.
The finite-time convergence of decentralized TD(0) has been obtained using i.i.d. samples (52; 14; 53; 28) and Markovian samples (46; 53), respectively, without revealing the agents' local actions, policies and rewards. Decentralized off-policy TD-type algorithms have been studied in (34; 45; 9; 12). Decentralized AC in other MARL settings. Some works apply decentralized AC to other MARL settings that are very different from ours. For example, (44; 36; 17; 11; 57) studied adversarial games. (30) studied a mixed cooperative-competitive environment where each agent maximizes its own Q function. (11) proposed the Delay-Aware Markov Game, which incorporates delays into Markov games. (68; 31) studied linear control systems and linear quadratic regulators instead of MDPs. (55) studied sequential prisoner's dilemmas. Policy gradient algorithms. Policy gradient (PG) and natural policy gradient (NPG) are popular policy optimization algorithms. (1) characterizes the iteration complexity (i.e., number of episodes) of centralized PG and NPG algorithms by assuming access to the exact policy gradient. They also established a sample complexity result O(ε^{-6}) in the i.i.d. setting for NPG, which is worse than the state-of-the-art result O(ε^{-3} ln ε^{-1}) of both centralized NAC (60) and our decentralized NAC with Markovian samples. (3) proposes decentralized PG in a simple cooperative MARL setting, where all the agents share one action and the same policy, and they establish an iteration complexity of order O(ε^{-4}). (13; 73) apply decentralized PG to Markov games. (2) applies decentralized NPG to a different cooperative MARL setting where each agent observes its own state, takes its own action and has access to this information of its neighbors. Value-based algorithms. Value-based algorithms have also been developed for MARL.
Specifically, (21; 18) develop distributed Q-learning in a simplified cooperative MARL setting, where the agents share a joint action. In particular, (18) characterizes the convergence rate of a value function-based convergence error, which is a different optimality measure from that of AC-type algorithms. (35) applies distributed Q-learning to another cooperative MARL setting, where each agent observes its own state and takes its own action. It establishes an asymptotic convergence guarantee, and no convergence rate is given. (40) develops a value propagation algorithm that uses a primal-dual method to minimize a soft Bellman error in the MARL setting. Under the assumption that the variance of the stochastic gradient is uniformly bounded, it establishes a non-asymptotic convergence rate to an approximate stationary point. 2 REVIEW OF MULTI-AGENT REINFORCEMENT LEARNING. In this section, we first introduce some standard settings of RL. Consider an agent that starts from an initial state s_0 ∼ ξ and collects a trajectory of Markovian samples {s_t, a_t, R_t}_t ⊂ S × A × R by interacting with an underlying environment (with transition kernel P) following a parameterized policy π_ω with induced stationary state distribution µ_ω. The agent aims to learn an optimal policy that maximizes the expected accumulated reward J(ω) = (1 − γ) E[∑_{t=0}^∞ γ^t R_t], where γ ∈ (0, 1) is a discount factor. The marginal state distribution is denoted as P_ω(s_t) and the visitation measure is defined as ν_ω(s) := (1 − γ) ∑_{t=0}^∞ γ^t P_ω(s_t = s), both of which depend on the policy parameter ω ∈ Ω and the transition kernel P. We also define the mixed transition kernel P_ξ(·|s, a) := γ P(·|s, a) + (1 − γ) ξ(·), whose stationary state distribution is known to be ν_ω. In the multi-agent RL (MARL) setting, M agents are connected via a fully decentralized network and interact with a shared environment.
The network topology is specified by a doubly stochastic communication matrix W ∈ R^{M×M}. At any time t, all the agents share a common state s_t. Then, every agent m takes an action a_t^{(m)} following its own current policy π_t^{(m)}(·|s_t) parameterized by ω_t^{(m)}. After all the actions a_t := {a_t^{(m)}}_{m=1}^M are taken, the global state s_t transfers to a new state s_{t+1} and every agent m receives a local reward R_t^{(m)}. In this MARL setting, each agent m can only access the global state {s_t}_t, its own actions {a_t^{(m)}}_t, rewards {R_t^{(m)}}_t and policy π_t^{(m)}. Next, define the joint policy π_t(a_t|s_t) := ∏_{m=1}^M π_t^{(m)}(a_t^{(m)}|s_t) parameterized by ω_t = [ω_t^{(1)}; ...; ω_t^{(M)}], and define the average reward R̄_t := (1/M) ∑_{m=1}^M R_t^{(m)}. The goal of the agents is to collaboratively learn the optimal joint policy that maximizes the expected accumulated average reward J(ω) := (1 − γ) E[∑_{t=0}^∞ γ^t R̄_t | s_0 ∼ ξ]. Throughout, we consider the setting where the agents keep interacting with the environment and observing a trajectory of MDP transition samples, which are further used to learn the optimal joint policy.
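The reward-averaging mechanism above (Gaussian-corrupted local rewards mixed through the doubly stochastic matrix W) can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the 4-agent ring network, noise level, and number of averaging rounds are hypothetical choices of ours.

```python
import numpy as np

# Hypothetical 4-agent ring; W is doubly stochastic (rows and columns sum to 1).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def estimate_average_reward(local_rewards, noise_std=0.1, rounds=30, seed=0):
    """Each agent shares only a Gaussian-corrupted reward; repeated
    multiplication by W drives every entry toward the (noisy) global average
    without revealing any raw local reward to a neighbor."""
    rng = np.random.default_rng(seed)
    x = local_rewards + rng.normal(0.0, noise_std, size=local_rewards.shape)
    for _ in range(rounds):
        x = W @ x          # one round of neighbor-only communication
    return x               # each agent's estimate of the averaged reward

R = np.array([1.0, 3.0, -2.0, 4.0])    # raw local rewards (true mean 1.5)
est = estimate_average_reward(R)
```

Because W is doubly stochastic, repeated mixing preserves the mean while driving every agent to consensus, so each agent ends up with an estimate of R̄_t whose only distortion is the injected privacy noise.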
In this work, the authors propose two decentralized policy gradient-type algorithms for multi-agent reinforcement learning, namely a decentralized actor-critic (AC) algorithm and a decentralized natural actor-critic (NAC) algorithm. The stochastic updates of both algorithms preserve the agents' private information, including their local actions and local rewards. The authors analyze the finite-time convergence rates of both algorithms under Markovian sampling and linear function approximation, and prove that both algorithms achieve state-of-the-art sample complexities and improved communication complexities.
Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework
1 INTRODUCTION. Lately, point cloud analysis has emerged as a popular topic in 3D understanding, attracting attention from academia and industry Qi et al. (2017a); Shi et al. (2019); Xu et al. (2020). Different from 2D images represented by regular dense pixels, point clouds are composed of unordered and irregular sets of points P ∈ R^{N×3}, making it infeasible to directly apply image processing methods to point cloud analysis. Meanwhile, their inherent sparseness and the presence of noise further restrict performance. In the past few years, empowered by neural networks, point cloud analysis has seen great improvement in various applications, including 3D shape classification Qi et al. (2017a), semantic segmentation Hu et al. (2020) and object detection Shi & Rajkumar (2020), etc. Recent efforts have shown promising results for point cloud analysis by exploring local geometric information, using convolution Li et al. (2021a), graph Li et al. (2021a), or attention mechanisms Guo et al. (2021) (see Section 2 for details). These methods, despite their gratifying results, have mainly relied on the premise that an elaborate local extractor is essential for point cloud analysis, leading to a competition for careful designs that explore fine local geometric properties. Nevertheless, sophisticated extractors are not without drawbacks. On one hand, due to prohibitive computations and the overhead of memory access, these sophisticated extractors hamper the efficiency of applications in real-world scenes. As an example, until now, most 3D point cloud applications are still based on the simple PointNet (and PointNet++) or on voxel-based methods Liu et al. (2021); Li et al. (2021b); Zhang et al. (2021). Applications that employ the aforementioned advanced methods, however, are rare in the literature.
On the other hand, the booming sophisticated extractors saturate the performance: they already describe the local geometric properties well, and more complicated designs can no longer improve the performance much. These phenomena suggest that we may need to stop the race of local feature extraction design, rethink the necessity of elaborate local feature extractors, and further revisit the succinct design philosophy in point cloud analysis. In this paper, we aim at the ambitious goal of building a deep network for point cloud analysis using only residual feed-forward MLPs, without any delicate local feature exploration. By doing so, we eschew the prohibitive computations and ceaseless memory access caused by sophisticated local geometric extractors, and enjoy the efficiency of highly optimized MLPs. To further improve the performance and generalization ability, we introduce a lightweight local geometric affine module that adaptively transforms the point features in a local region. We term our new network architecture PointMLP. In the sense of MLP-based design philosophy, our PointMLP is similar to PointNet and PointNet++ Qi et al. (2017a; b). However, our model is more generic and exhibits promising performance. Different from the models with sophisticated local geometric extractors (e.g., DeepGCNs Li et al. (2019), RSCNN Liu et al. (2019b), etc.), our PointMLP is conceptually simpler and achieves results on par with or even better than these state-of-the-art methods (see Figure 1). Keep in mind that we do not challenge the advantages of these local geometric extractors and we acknowledge their contributions; however, a more succinct framework deserves study considering both efficiency and accuracy. In Table 1, we compare our PointMLP with some representative methods.
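As a rough illustration of what such a local geometric affine module might look like, the sketch below normalizes each grouped neighborhood relative to its center point and then applies a learnable channel-wise scale and offset. This is our hedged reading of the idea, with hypothetical names and shapes; the paper's exact formulation may differ.

```python
import numpy as np

def geometric_affine(grouped, centers, alpha, beta, eps=1e-5):
    """Normalize each local neighborhood relative to its center point, then
    apply a learnable channel-wise affine map.  A hedged sketch of the
    'lightweight geometric affine module' idea, not the paper's exact module.
    grouped: (N, K, C) neighbor features; centers: (N, C); alpha/beta: (C,)."""
    diff = grouped - centers[:, None, :]          # center each neighborhood
    sigma = diff.std()                            # one scale shared across groups
    return alpha * diff / (sigma + eps) + beta    # learnable rescale + offset

pts = np.random.default_rng(0).normal(size=(8, 16, 3))   # 8 groups, 16 neighbors
ctr = pts.mean(axis=1)                                   # stand-in group centers
out = geometric_affine(pts, ctr, alpha=np.ones(3), beta=np.zeros(3))
```

With alpha initialized to ones and beta to zeros, the module starts as a plain normalization and can learn per-channel rescaling during training.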
Even though the design philosophy is simple, PointMLP exhibits prominent performance on 3D point cloud analysis. Specifically, we achieve state-of-the-art classification performance, 94.5%, on the ModelNet40 benchmark, and we outperform related works by 3.3% accuracy on the real-world ScanObjectNN dataset, with a significantly higher inference speed. 2 RELATED WORK. Point cloud analysis. There are mainly two streams of point cloud processing. Since the point cloud data structure is irregular and unordered, some works consider projecting the original point clouds to intermediate voxels Maturana & Scherer (2015); Shi et al. (2020) or images You et al. (2018); Li et al. (2020), translating the challenging 3D task into a well-explored 2D image problem. In this regime, point cloud understanding is largely boosted and enjoys the fast processing speed of 2D images or voxels. Albeit efficient, the information loss caused by projection degrades the representational quality of details for point clouds Yang et al. (2019). To this end, some methods are proposed to directly process the original point cloud sets. PointNet Qi et al. (2017a) is a pioneering work that directly consumes unordered point sets as inputs using shared MLPs. Based on PointNet, PointNet++ Qi et al. (2017b) further introduced a hierarchical feature learning paradigm to recursively capture local geometric structures. Owing to the local point representation (and multi-scale information), PointNet++ exhibits promising results and has been the cornerstone of modern point cloud methods Wang et al. (2019); Fan et al. (2021). Our PointMLP also follows the design philosophy of PointNet++ but explores a simpler yet much deeper network architecture. Local geometry exploration. As PointNet++ built the generic point cloud analysis network framework, the recent research focus has shifted to how to generate better regional point representations.
Predominantly, the explorations of local point representations can be divided into three categories: convolution-, graph-, and attention-based methods. One of the most distinguished convolution-based methods is PointConv Wu et al. (2019). By approximating continuous weight and density functions in convolutional filters using an MLP, PointConv is able to extend the dynamic filter to a new convolution operation. Also, PAConv Xu et al. (2021a) constructs the convolution kernel by dynamically assembling basic weight matrices stored in a weight bank. Without modifying network configurations, PAConv can be seamlessly integrated into classical MLP-based pipelines. Unlike convolution-based methods, graph-based methods investigate mutually correlated relationships among points with a graph. In Wang et al. (2019), an EdgeConv is proposed to generate edge features that describe the relationships between a point and its neighbors. By doing so, a local graph is built and the point relationships are well preserved. In 3D-GCN Lin et al. (2021), the authors aim at deriving deformable 3D kernels using a 3D graph convolution network. Closely related to graph-based methods, attention-based methods exhibit an excellent ability for relationship exploration as well, like PCT Guo et al. (2021) and Point Transformer Zhao et al. (2021); Engel et al. (2020). With the development of local geometry exploration, the performance on various tasks appears to saturate, and continuing on this track would bring minimal improvement. In this paper, we showcase that even without carefully designed operations for local geometry exploration, a pure deep hierarchical MLP is able to exhibit gratifying, and even better, performance. Deep network architecture for point cloud. Interestingly, the development of point cloud analysis is closely related to the evolution of image processing networks.
In the early era, works in the image processing field simply stacked several learning layers to probe the performance limitations Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Dong et al. (2014). Then, the great success of deep learning was significantly promoted by deep neural architectures like ResNet He et al. (2016), which brought a profound impact to various research fields. Recently, attention-based models, including attention blocks Wang et al. (2018) and Transformer architectures Dosovitskiy et al. (2021); Touvron et al. (2021b), have further enriched the community. Most recently, succinct deep MLP architectures have attracted a lot of attention due to their efficiency and generality. Point cloud analysis follows the same development history, from the MLP-based PointNet Qi et al. (2017a) and deep hierarchical PointNet++ Qi et al. (2017b), to convolution- (or graph-) based methods Wu et al. (2019); Wang et al. (2019), and to state-of-the-art Transformer-based models Guo et al. (2021); Zhao et al. (2021). In this paper, we abandon sophisticated details and present a simple yet effective deep residual MLP network for point cloud analysis. Instead of deliberately following the trends of the vision community, we pursue an inherently simple yet powerful architecture for point cloud analysis. 3 DEEP RESIDUAL MLP FOR POINT CLOUD. We propose to learn a point cloud representation with a simple feed-forward residual MLP network, which hierarchically aggregates local features extracted by MLPs and abandons the use of delicate local geometric extractors. To further improve robustness and stability, we introduce a lightweight geometric affine module to transform the local points to a normal distribution. The detailed framework of our method is illustrated in Figure 2. 3.1 REVISITING POINT-BASED METHODS.
The design of point-based methods for point cloud analysis dates back to the PointNet and PointNet++ papers Qi et al. (2017a; b), if not earlier. The motivation behind this direction is to directly consume point clouds and avoid unnecessary rendering processes. Given a set of points P = {p_i | i = 1, ..., N} ∈ R^{N×3}, where N indicates the number of points in an (x, y, z) Cartesian space, point-based methods aim to directly learn the underlying representation f of P using neural networks. One of the most pioneering works is PointNet++, which learns hierarchical features by stacking multiple learning stages. In each stage, N_s points are re-sampled by the farthest point sampling (FPS) algorithm, where s indexes the stage, and K neighbors are employed for each sampled point and aggregated by max-pooling to capture local structures. Conceptually, the kernel operation of PointNet++ can be formulated as: g_i = A(Φ(f_{i,j}) | j = 1, ..., K), (1) where A(·) denotes the aggregation function (max-pooling in PointNet++), Φ(·) denotes the local feature extraction function (MLP in PointNet++), and f_{i,j} is the j-th neighbor point feature of the i-th sampled point. By doing so, PointNet++ is able to effectively capture local geometric information and progressively enlarge the receptive fields by repeating the operation. In the sense of network architecture design, PointNet++ exhibits a universal pipeline for point cloud analysis. Following this pipeline, some plug-and-play methods have been proposed, mainly focusing on the local feature extractor Φ(·) Xu et al. (2021a); Liu et al. (2019b); Thomas et al. (2019); Zhao et al. (2021). Generally, these local feature extractors thoroughly explore the local geometric information using convolution, graph, or self-attention mechanisms. In RSCNN Liu et al.
( 2019b ) , the extractor is mainly achieved by exploring point relations as follow : Φ ( fi , j ) = MLP ( [ ‖xi , j − xi‖2 , xi , j − xi , xi , j , xi ] ) ∗ fi , j , ∀j ∈ { 1 , · · · , K } , ( 2 ) where [ · ] is the concatenation operation and MLP is a small network composed of a Fully-connected ( FC ) layer , Batch Normalization layer , and activation function . Unlike RSCNN , Point Transformer introduces the self-attention mechanism into point cloud analysis and considers the similarities between pair-wise points in a local region . To this end , it re-formulates the extractor as : Φ ( fi ) = k∑ j=1 ρ ( γ ( ϕ ( fi ) − ψ ( fi , j ) + δ ) ) ( α ( fi , j + δ ) ) , ( 3 ) where γ , ϕ , ψ and α are linear mapping function , “ ” is a Hadamard product , and ρ is a softmax normalization . In particular , Point Transformer introduces a relative position encoding , δ = θ ( xi − xi , j ) , where the relative position is encoded by two FC layers with a ReLU nonlinearity layer , into both attention weights and features . The lightweight positional encoder largely improves the performance of Point Transformer . While these methods can easily take the advantage of detailed local geometric information and usually exhibit promising results , two issues limit their development . First , with the introduction of delicate extractors , the computational complexity is largely increased , leading to prohibitive inference latency 1 . For example , the FLOPs of Equation 3 in Point Transformer would be 14Kd2 , ignoring the summation and subtraction operations . Compare with the conventional FC layers that 1We emphasize that the model complexity could not be simply revealed by FLOPs or parameters , other metrics like memory access cost ( MAC ) and the degree of parallelism also significantly affect the speed Ma et al . ( 2018 ) ; Zhang et al . ( 2020 ) . However , these important metrics are always ignored in point clouds analysis . 
enjoy 2Kd2 FLOPs , it increases the computations by times . Notice that the memory access cost is not considered yet . Second , with the development of local feature extractors , the performance gain has started to saturate on popular benchmarks . Moreover , empirical analysis in Liu et al . ( 2020 ) reveals that most sophisticated local extractors make surprisingly similar contributions to the network performance under the same network input . Both limitations encourage us to develop a new method for point cloud analysis that circumvents the employment of sophisticated local extractors , as well as providing gratifying results .
This paper proposed an alternative point cloud feature extractor, named PointMLP. PointMLP is composed of residual MLPs and geometric affine modules. Classification results on the ModelNet40 dataset show that the proposed method achieves slightly better accuracy while using a much smaller number of parameters and running faster. The proposed method also achieves better accuracy in classification on the ScanObjectNN dataset and is on par with state-of-the-art methods for the 3D shape part segmentation task on the ShapeNetPart benchmark.
SP:2dea3a92d8827e212ea00095f4f7e5f011538497
Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework
1 INTRODUCTION. Lately, point cloud analysis has emerged as a popular topic in 3D understanding, attracting attention from both academia and industry Qi et al. (2017a); Shi et al. (2019); Xu et al. (2020). Different from 2D images represented by regular dense pixels, point clouds are composed of unordered and irregular sets of points $P \in \mathbb{R}^{N \times 3}$, making it infeasible to directly apply image processing methods to point cloud analysis. Meanwhile, the sparse nature of point clouds and the presence of noise further restrict performance. In the past few years, empowered by neural networks, point cloud analysis has seen great improvement in various applications, including 3D shape classification Qi et al. (2017a), semantic segmentation Hu et al. (2020), and object detection Shi & Rajkumar (2020), etc. Recent efforts have shown promising results for point cloud analysis by exploring local geometric information using convolution Li et al. (2021a), graph Li et al. (2021a), or attention mechanisms Guo et al. (2021) (see Section 2 for details). These methods, despite their gratifying results, have mainly relied on the premise that an elaborate local extractor is essential for point cloud analysis, leading to a competition for careful designs that explore fine local geometric properties. Nevertheless, sophisticated extractors are not without drawbacks. On one hand, due to prohibitive computation and the overhead of memory access, these sophisticated extractors hamper the efficiency of applications in real-world scenes. As an example, until now, most 3D point cloud applications are still based on the simple PointNet (and PointNet++) or on voxel-based methods Liu et al. (2021); Li et al. (2021b); Zhang et al. (2021). Applications that employ the aforementioned advanced methods, however, are rare in the literature.
On the other hand, the booming sophisticated extractors saturate the performance, since they already describe the local geometric properties well and more complicated designs can hardly improve the performance further. These phenomena suggest that we may need to step out of the race of local feature extractor design, rethink the necessity of elaborate local feature extractors, and revisit the succinct design philosophy in point cloud analysis. In this paper, we aim at the ambitious goal of building a deep network for point cloud analysis using only residual feed-forward MLPs, without any delicate local feature exploration. By doing so, we eschew the prohibitive computation and ceaseless memory access caused by sophisticated local geometric extractors, and enjoy the efficiency of highly optimized MLPs. To further improve performance and generalization ability, we introduce a lightweight local geometric affine module that adaptively transforms the point features in a local region. We term our new network architecture PointMLP. In the sense of MLP-based design philosophy, our PointMLP is similar to PointNet and PointNet++ Qi et al. (2017a;b). However, our model is more generic and exhibits promising performance. Different from models with sophisticated local geometric extractors (e.g., DeepGCNs Li et al. (2019), RSCNN Liu et al. (2019b), etc.), our PointMLP is conceptually simpler and achieves results on par with or even better than these state-of-the-art methods (see Figure 1). Keep in mind that we do not challenge the advantages of these local geometric extractors and we acknowledge their contributions; however, a more succinct framework should be studied considering both efficiency and accuracy. In Table 1, we compare our PointMLP with some representative methods.
Even though the design philosophy is simple, PointMLP exhibits prominent performance in 3D point cloud analysis. Specifically, we achieve state-of-the-art classification performance, 94.5%, on the ModelNet40 benchmark, and we outperform related works by 3.3% accuracy on the real-world ScanObjectNN dataset, with a significantly higher inference speed. 2 RELATED WORK. Point cloud analysis. There are mainly two streams of methods for processing point clouds. Since the point cloud data structure is irregular and unordered, some works consider projecting the original point clouds to intermediate voxels Maturana & Scherer (2015); Shi et al. (2020) or images You et al. (2018); Li et al. (2020), translating the challenging 3D task into a well-explored 2D image problem. In this regime, point cloud understanding is largely boosted and enjoys the fast processing speed of 2D images or voxels. Albeit efficient, the information loss caused by projection degrades the representational quality of details for point clouds Yang et al. (2019). To this end, some methods are proposed to directly process the original point cloud sets. PointNet Qi et al. (2017a) is a pioneering work that directly consumes unordered point sets as inputs using shared MLPs. Based on PointNet, PointNet++ Qi et al. (2017b) further introduced a hierarchical feature learning paradigm to recursively capture local geometric structures. Owing to the local point representation (and multi-scale information), PointNet++ exhibits promising results and has become the cornerstone of modern point cloud methods Wang et al. (2019); Fan et al. (2021). Our PointMLP also follows the design philosophy of PointNet++ but explores a simpler yet much deeper network architecture. Local geometry exploration. As PointNet++ built the generic point cloud analysis network framework, the recent research focus has shifted to how to generate better regional point representations.
Predominantly, the explorations of local point representations can be divided into three categories: convolution-, graph-, and attention-based methods. One of the most distinguished convolution-based methods is PointConv Wu et al. (2019). By approximating continuous weight and density functions in convolutional filters using an MLP, PointConv is able to extend the dynamic filter to a new convolution operation. Also, PAConv Xu et al. (2021a) constructs the convolution kernel by dynamically assembling basic weight matrices stored in a weight bank. Without modifying network configurations, PAConv can be seamlessly integrated into classical MLP-based pipelines. Unlike convolution-based methods, graph-based methods investigate mutually correlated relationships among points with a graph. In Wang et al. (2019), an EdgeConv is proposed to generate edge features that describe the relationships between a point and its neighbors. By doing so, a local graph is built and the point relationships are well preserved. In 3D-GCN Lin et al. (2021), the authors aim at deriving deformable 3D kernels using a 3D Graph Convolution Network. Closely related to graph-based methods, attention-based methods exhibit an excellent ability for relationship exploration as well, like PCT Guo et al. (2021) and Point Transformer Zhao et al. (2021); Engel et al. (2020). With the development of local geometry exploration, the performance on various tasks appears to saturate, and continuing on this track would bring only minimal improvements. In this paper, we showcase that even without carefully designed operations for local geometry exploration, a pure deep hierarchical MLP is able to exhibit gratifying, and even better, results. Deep network architecture for point cloud. Interestingly, the development of point cloud analysis is closely related to the evolution of image processing networks.
In the early era, works in the image processing field simply stacked several learning layers to probe the performance limits Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Dong et al. (2014). Then, the great success of deep learning was significantly promoted by deep neural architectures like ResNet He et al. (2016), which brought a profound impact to various research fields. Recently, attention-based models, including attention blocks Wang et al. (2018) and Transformer architectures Dosovitskiy et al. (2021); Touvron et al. (2021b), have further fleshed out the community. Most recently, succinct deep MLP architectures have attracted a lot of attention due to their efficiency and generality. Point cloud analysis follows the same development history as well, from the MLP-based PointNet Qi et al. (2017a) and the deep hierarchical PointNet++ Qi et al. (2017b), through convolution- (or graph-) based methods Wu et al. (2019); Wang et al. (2019), to state-of-the-art Transformer-based models Guo et al. (2021); Zhao et al. (2021). In this paper, we abandon sophisticated details and present a simple yet effective deep residual MLP network for point cloud analysis. Instead of deliberately following the tendency in the vision community, we are in pursuit of an inherently simple yet powerful architecture for point cloud analysis. 3 DEEP RESIDUAL MLP FOR POINT CLOUD. We propose to learn a point cloud representation with a simple feed-forward residual MLP network, which hierarchically aggregates the local features extracted by MLPs and abandons the use of delicate local geometric extractors. To further improve robustness and stability, we introduce a lightweight geometric affine module to transform the local points to a normal distribution. The detailed framework of our method is illustrated in Figure 2. 3.1 REVISITING POINT-BASED METHODS.
The design of point-based methods for point cloud analysis dates back to the PointNet and PointNet++ papers Qi et al. (2017a;b), if not earlier. The motivation behind this direction is to directly consume point clouds and avoid unnecessary rendering processes. Given a set of points $\mathcal{P} = \{p_i \mid i = 1, \cdots, N\} \in \mathbb{R}^{N \times 3}$, where $N$ indicates the number of points in an $(x, y, z)$ Cartesian space, point-based methods aim to directly learn the underlying representation $f$ of $\mathcal{P}$ using neural networks. One of the most pioneering works is PointNet++, which learns hierarchical features by stacking multiple learning stages. In each stage, $N_s$ points are re-sampled by the farthest point sampling (FPS) algorithm, where $s$ indexes the stage, and $K$ neighbors are employed for each sampled point and aggregated by max-pooling to capture local structures. Conceptually, the kernel operation of PointNet++ can be formulated as:

$$g_i = \mathcal{A}\left( \Phi(f_{i,j}) \mid j = 1, \cdots, K \right), \qquad (1)$$

where $\mathcal{A}(\cdot)$ denotes the aggregation function (max-pooling in PointNet++), $\Phi(\cdot)$ denotes the local feature extraction function (MLP in PointNet++), and $f_{i,j}$ is the feature of the $j$-th neighbor of the $i$-th sampled point. By doing so, PointNet++ is able to effectively capture local geometric information and progressively enlarge the receptive field by repeating the operation. In the sense of network architecture design, PointNet++ exhibits a universal pipeline for point cloud analysis. Following this pipeline, some plug-and-play methods have been proposed, mainly focusing on the local feature extractor $\Phi(\cdot)$ Xu et al. (2021a); Liu et al. (2019b); Thomas et al. (2019); Zhao et al. (2021). Generally, these local feature extractors thoroughly explore the local geometric information using convolution, graph, or self-attention mechanisms. In RSCNN Liu et al.
(2019b), the extractor is mainly achieved by exploring point relations as follows:

$$\Phi(f_{i,j}) = \mathrm{MLP}\left(\left[\, \|x_{i,j} - x_i\|_2, \; x_{i,j} - x_i, \; x_{i,j}, \; x_i \,\right]\right) * f_{i,j}, \quad \forall j \in \{1, \cdots, K\}, \qquad (2)$$

where $[\cdot]$ is the concatenation operation and MLP is a small network composed of a fully-connected (FC) layer, a Batch Normalization layer, and an activation function. Unlike RSCNN, Point Transformer introduces the self-attention mechanism into point cloud analysis and considers the similarities between pair-wise points in a local region. To this end, it re-formulates the extractor as:

$$\Phi(f_i) = \sum_{j=1}^{K} \rho\left(\gamma\left(\varphi(f_i) - \psi(f_{i,j}) + \delta\right)\right) \odot \left(\alpha(f_{i,j}) + \delta\right), \qquad (3)$$

where $\gamma$, $\varphi$, $\psi$, and $\alpha$ are linear mapping functions, "$\odot$" is the Hadamard product, and $\rho$ is a softmax normalization. In particular, Point Transformer introduces a relative position encoding, $\delta = \theta(x_i - x_{i,j})$, where the relative position is encoded by two FC layers with a ReLU nonlinearity, into both the attention weights and the features. The lightweight positional encoder largely improves the performance of Point Transformer. While these methods can easily take advantage of detailed local geometric information and usually exhibit promising results, two issues limit their development. First, with the introduction of delicate extractors, the computational complexity is largely increased, leading to prohibitive inference latency¹. For example, the FLOPs of Equation 3 in Point Transformer would be $14Kd^2$, ignoring the summation and subtraction operations. Compared with conventional FC layers that enjoy $2Kd^2$ FLOPs, it increases the computations by 7 times, and the memory access cost is not yet considered. Second, with the development of local feature extractors, the performance gain has started to saturate on popular benchmarks. Moreover, the empirical analysis in Liu et al. (2020) reveals that most sophisticated local extractors make surprisingly similar contributions to the network performance under the same network input. Both limitations encourage us to develop a new method for point cloud analysis that circumvents the employment of sophisticated local extractors while still providing gratifying results.

¹We emphasize that model complexity cannot be simply revealed by FLOPs or parameters; other metrics, like memory access cost (MAC) and the degree of parallelism, also significantly affect speed Ma et al. (2018); Zhang et al. (2020). However, these important metrics are largely ignored in point cloud analysis.
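As a concrete illustration of the PointNet++ kernel in Eq. (1) and the FLOPs gap discussed above, here is a minimal NumPy sketch; the single-layer `phi`, the function names, and all shapes are illustrative stand-ins rather than any paper's actual implementation.

```python
import numpy as np

def phi(f, w, b):
    # Local feature extractor Phi: a single shared FC layer + ReLU,
    # an illustrative stand-in for the MLP used in PointNet++.
    return np.maximum(f @ w + b, 0.0)

def kernel(neighbor_feats, w, b):
    """Eq. (1): g_i = A(Phi(f_{i,j}) | j = 1..K), with A = channel-wise max-pooling."""
    return phi(neighbor_feats, w, b).max(axis=0)

rng = np.random.default_rng(0)
K, d = 16, 64
w, b = 0.1 * rng.standard_normal((d, d)), np.zeros(d)
feats = rng.standard_normal((K, d))   # features of the K neighbors of one point
g = kernel(feats, w, b)

# Max-pooling is a symmetric function, so g_i is invariant to neighbor ordering.
assert np.allclose(g, kernel(feats[::-1], w, b))

# FLOPs estimate from the text: Eq. (3) costs ~14*K*d^2 vs ~2*K*d^2 for plain
# FC layers over the K neighbors -- a 7x increase, before counting memory access.
assert (14 * K * d**2) / (2 * K * d**2) == 7.0
```

The symmetric aggregation is what makes the per-point pipeline independent of the (arbitrary) ordering of neighbors, which is the property the text relies on.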
This paper introduces a lightweight and fast neural network architecture for processing 3D point clouds. While the idea is quite simple, the proposed architecture outperforms or matches previous architectures in terms of both accuracy and inference time across various tasks, including ModelNet40 classification, ScanObjectNN classification, and ShapeNetPart segmentation, as shown in the experimental results. The main idea is to make two modifications to the PointNet++ architecture: 1) adding MLP layers after each of the point feature aggregation (max-pooling) steps, and 2) applying a geometric affine transformation to the features processed with an MLP in each small PointNet module. It is interesting to see that these small modifications (especially the second) make dramatic changes in the results.
SP:2dea3a92d8827e212ea00095f4f7e5f011538497
Equivariant Vector Field Network for Many-body System Modeling
1 INTRODUCTION. Modeling many-body systems has been a long-standing challenge in scientific fields from classical and quantum physics (Carleo & Troyer, 2017; Zhang et al., 2018; Satorras et al., 2021c) to structural biology (Senior et al., 2020; Shi et al., 2021), due to their high numerical complexity and complicated evolving mechanisms. Graph neural networks (GNNs), which excel at modeling high-dimensional structured data with permutation equivariance, bring a new opportunity to model many-body systems in an end-to-end manner. Since many-body physical systems obey many physical constraints such as SE(3) symmetry, pure black-box GNN models show limited generalization in this scenario, and symmetry-preserving GNN models have become a hot research direction. The core question to be solved for developing general equivariant GNN models is how to conduct nonlinear operations on tensors in a reference-free way. To represent and manipulate equivariant tensors of arbitrary order, some approaches resort to equivariant function spaces such as spherical harmonics (Thomas et al., 2018; Fuchs et al., 2020; Bogatskiy et al., 2020; Fuchs et al., 2021) or lift the spatial space to high-dimensional spaces such as Lie group spaces (Cohen & Welling, 2016; Cohen et al., 2018; 2019b; Finzi et al., 2020; Hutchinson et al., 2021). Since no restriction on the order of tensors is imposed by these methods, sufficient expressive power is guaranteed. Unfortunately, transforming a many-body system into those high-dimensional spaces or calculating equivariant functions usually brings excessive computational cost and great optimization difficulty, which is unacceptable in some real-world scenarios. To remedy this issue, Satorras et al.
(2021c) proposed EGNN to directly implement equivariant operations in the original space, providing an efficient way to preserve equivariance without performing complex space transformations. Detailed experiments in (Satorras et al., 2021c) have shown that preserving equivariance without transforming the space is theoretically possible and computationally efficient in practice. However, one trade-off of EGNN is abandoning a certain amount of tensor information¹, which restricts the class of equivariant functions EGNN can approximate. This drawback may become serious when modeling complex dynamical scenarios such as molecular simulation, where geometric information (e.g., angular potentials and rotatable bonds) plays an important role in inducing conformation changes (Klicpera et al. (2020); Xu et al. (2021)). To mitigate this issue, we propose a new model called Equivariant Vector Field Network (EVFN) to fit the gradient vector fields (Song & Ermon, 2019; Shi et al., 2021) of many-body systems. With a scalarization block and a vectorization block, EVFN is able to represent tensor information losslessly in the original space and outputs equivariant vector fields with no restriction on direction. Inspired by the scalarization technique from differential geometry (Kobayashi, 1963; Hsu, 2002), EVFN first introduces a tuple of complete bases associated with each particle pair that preserves permutation and SE(3) symmetry under global rotation and translation transformations of the many-body system. Based on this basis, the scalarization block losslessly transforms the geometric information into SO(3)-invariant scalar representations. In principle, the scalar representations can be fed into any permutation-equivariant network to implement complex nonlinear transformations.
Moreover, the vectorization block can reverse the scalars back to the vector field without sacrificing geometric information, thanks to the complete basis. Once the estimated gradient field is obtained, we can predict a certain state or the whole dynamical trajectory of a 3D many-body system via an integration procedure. We evaluate the proposed framework on two many-body scenarios that require equivariance: (1) simulated Newtonian many-body dynamics trajectory prediction and (2) real-world molecular conformation generation. Our model achieves the best or competitive results on various types of datasets. 2 BACKGROUND. In this section, we first introduce some basic concepts on the notion of equivariance and tensor fields and then describe the scalarization technique from differential geometry. Finally, we define the "vector field" as the differential of a many-body system. Let $X = (x_1, \ldots, x_N) \in \mathbb{R}^{N \times 3}$ be a many-body system living in $\mathbb{R}^3$, where $N$ is the number of particles. We use $x_i(t)$ to denote the position of particle $i$ at time $t$. SE(3) group and equivariance. In the Euclidean space $\mathbb{R}^3$ we can consider affine transformations that preserve the distance between any two points, i.e., the isometry group SE(3). We call it the symmetry group w.r.t. the Euclidean metric, and it turns out that SE(3) can be generated by the translation group and the rotation group SO(3). Once we have a symmetry group, it is valid to define quantities that are "equivariant" under the symmetry group. Given a function $f : \mathbb{R}^m \to \mathbb{R}^n$, assuming the symmetry group $G$ acts on $\mathbb{R}^m$ and $\mathbb{R}^n$, $f$ is $G$-equivariant if $f(gx) = gf(x)$ for all $x \in \mathbb{R}^m$ and $g \in G$. For the SO(3) group, if $n = 1$, i.e., the output of $f$ is a scalar, then the group action on $\mathbb{R}^1$ is the identity map; in this case $f$ should be SO(3)-invariant (Thomas et al., 2018): $f(gx) = f(x)$.
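These definitions can be checked numerically with a toy sketch (the `random_rotation` helper is ours, not from the paper): a pairwise distance is SO(3)-invariant, while a difference vector is SO(3)-equivariant.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    # Sample an SO(3) element via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))   # fix column signs for a unique factorization
    if np.linalg.det(q) < 0:      # ensure det = +1: a rotation, not a reflection
        q[:, 0] = -q[:, 0]
    return q

X = rng.standard_normal((5, 3))   # a toy 5-particle system
g = random_rotation()
Xg = X @ g.T                      # apply g to every particle position

# Scalar output (n = 1): a pairwise distance is SO(3)-invariant, f(gx) = f(x).
assert np.isclose(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Xg[0] - Xg[1]))

# Vector output: the difference vector is SO(3)-equivariant, f(gx) = g f(x).
assert np.allclose(Xg[0] - Xg[1], g @ (X[0] - X[1]))
```

The same check pattern (compare `f(gx)` against `g f(x)` or `f(x)`) is a handy unit test for any equivariant layer.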
The notion of a tensor field can be defined for a general Riemannian manifold (see Definition 2.1.10 of (Jost & Jost, 2008)). Let $\{\frac{\partial}{\partial x^i}\}_{i=1}^{3}$ and $\{dx^i\}_{i=1}^{3}$ be the tangent vectors and dual vectors in $\mathbb{R}^3$, respectively, and let $\otimes$ denote the tensor product. Then recall the definition of a tensor field on $\mathbb{R}^3$ w.r.t. the SO(3) group: Definition 2.1. An $(r, s)$-tensor field $\theta$ is a multi-linear map from a collection of $r$ vectors and $s$ dual vectors in $\mathbb{R}^3$ to $\mathbb{R}$:

$$\theta(x) = \theta^{i_1 \cdots i_r}_{j_1 \cdots j_s} \, \frac{\partial}{\partial x^{i_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{i_r}} \otimes dx^{j_1} \otimes \cdots \otimes dx^{j_s}.$$

It implies that under an SO(3) coordinate transformation $g := \{g_{ij}\}_{1 \le i, j \le n}$, the tensor field $\theta$ transforms equivariantly:

$$\theta^{i'_1 \cdots i'_r}_{j'_1 \cdots j'_s} = g_{i'_1 i_1} \cdots g_{i'_r i_r} \, g^{T}_{j_1 j'_1} \cdots g^{T}_{j_s j'_s} \, \theta^{i_1 \cdots i_r}_{j_1 \cdots j_s},$$

where $g^T$ is the inverse of $g$. Equivariant vector field. To model $X(t)$, a natural way is to estimate its differential $\frac{dX(t)}{dt}$ and apply an ODE solver to integrate the differential to obtain the dynamic trajectory or a state at a given time. Due to the SO(3) symmetry, we define such a differential as an equivariant vector field. Most 3D real-world scenarios adopt first-order and second-order equivariant vector fields to depict their dynamic evolving mechanisms, which are also the modeling targets in this paper. A typical second-order vector field is the acceleration field of Newtonian systems. Gradient field is a widely used terminology meaning the first-order derivative w.r.t. a scalar function (Jost & Jost, 2008; Song & Ermon, 2019). To generate molecular conformations (i.e., equilibrium states) in a single stage, (Shi et al., 2021) define a "gradient field" to serve as pseudo-forces acting on each particle.

¹For example, the dihedral angle is a function of position vectors rather than the positions' norms, which are the input of EGNN. A detailed definition is given in Appendix A.2.4.
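The integration step described here, estimating $\frac{dX(t)}{dt}$ and rolling it out with an ODE solver, can be sketched with a forward-Euler stand-in; the toy linear field, step size, and helper names below are illustrative assumptions, not the paper's actual solver.

```python
import numpy as np

def centralize(X):
    # Translation symmetry: move the system's centroid to the origin.
    return X - X.mean(axis=0, keepdims=True)

def evolve(X0, vector_field, dt=0.01, steps=100):
    """Roll out dX/dt = F(X) with forward Euler, a stand-in for an ODE solver."""
    X = centralize(X0)
    traj = [X]
    for _ in range(steps):
        X = X + dt * vector_field(X)
        traj.append(X)
    return np.stack(traj)          # (steps + 1, N, 3) trajectory

# Toy first-order field: particles relax toward the origin, so the rollout
# converges to an "equilibrium" at X = 0.
rng = np.random.default_rng(2)
traj = evolve(rng.standard_normal((8, 3)), vector_field=lambda X: -X)
assert traj.shape == (101, 8, 3)
assert np.linalg.norm(traj[-1]) < np.linalg.norm(traj[0])
```

A second-order field (e.g., an acceleration field) would be handled the same way, with the state extended to positions plus velocities.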
By evolving the particles following the direction of the gradient field, the nonequilibrium system will finally converge to an equilibrium state. The gradient field is a special case of a first-order vector field. 3 METHODOLOGY. Given a many-body system $X$, we aim at modeling its vector field to predict the long-term dynamic trajectory or the equilibrium state within a single stage. To preserve the physical symmetries of the system, the estimated vector field should be equivariant with respect to the permutation and SE(3) groups. To achieve this goal, we represent the system as a spatial graph and construct EVFN on it with three key components: (1) a Scalarization block to encode the geometric tensors into SO(3)-invariant scalar representations attached to each node; (2) a Graph Transformer block to learn SO(3)-invariant edgewise embeddings by propagating and aggregating information on the graph; and (3) a Vectorization block to reverse the scalar representations back to geometric tensors to estimate the vector field. A brief overview of EVFN is illustrated in Figure 1. Once the vector field network (EVFN) is optimized, an Evolving block is incorporated to integrate the vector field for predicting the dynamics. The translation symmetry can be easily preserved by moving the particle system's centroid at $t = 0$ to the origin (the Centralization operation in Figure 1). Permutation equivariance is automatically guaranteed for the message-passing-based Graph Transformer. We provide detailed proofs of these symmetries in Appendix A.2.1. We now concentrate on SO(3) symmetry in the following sections. 3.1 SCALARIZATION BLOCK. The scalarization block is designed to transform geometric tensors into edgewise SO(3)-invariant scalar features by introducing a novel tuple of complete bases. Given a particle $x_i(t)$, define the neighborhood $\mathcal{N}(x_i(t))$ as the particles that interact with $x_i(t)$.
Then we can consider a particle pair $(x_i(t), x_j(t))$, where $x_j(t) \in \mathcal{N}(x_i(t))$. Suppose we take the positions of the two particles as the relevant geometric information of edge $\langle i, j \rangle$; then the edgewise SO(3)-invariant scalars can be defined as $t_{ij} := \mathrm{Scalarize}(x_i(t), x_j(t), \mathcal{F}_{ij})$, where Scalarize is the scalarization operation under an edgewise dynamical basis $\mathcal{F}_{ij}$ defined below. Equivariant basis construction. For the particle pair $(x_i(t), x_j(t))$, let $a(t) = \frac{x_i(t) - x_j(t)}{\|x_i(t) - x_j(t)\|}$, let $\times$ denote the cross product of two vectors, and define

$$b(t) = \frac{x_i(t) \times x_j(t)}{\|x_i(t) \times x_j(t)\|} \quad \text{and} \quad c(t) = a(t) \times b(t). \qquad (3.1)$$

Then we build an SO(3)-equivariant basis $\mathcal{F}_{ij} := (a(t), b(t), c(t))$. In practice we add a small constant $\epsilon$ to the normalization factor in case $x_i$ and $x_j$ collapse. Under the condition that the matrix $(a(t), b(t), c(t))$ is non-degenerate, $\mathcal{F}_{ij}$ formulates a complete orthonormal basis (frame) of the tangent space at $x_i(t)$. Note that this is a dynamical basis w.r.t. $t$, and the construction process of such a basis is permutation-equivariant. Since the Euclidean metric is flat, the dual basis (living in the cotangent space of $x_i$ (Hsu, 2002)) of $\mathcal{F}_{ij}$ is just its transpose: $(a^T(t), b^T(t), c^T(t))$. The "bad" event that $\mathcal{F}_{ij}$ is degenerate for all neighbors happens only when all particles are restricted to a straight line, which is a measure-zero set in $\mathbb{R}^3$. Therefore, we assume $\mathcal{F}_{ij}$ is non-degenerate from now on. A proof of the SO(3)-equivariance of $\mathcal{F}_{ij}$ is provided in Proposition A.1. Invariant scalarization of geometric tensors. With the complete equivariant basis, we can realize the scalarization operation (Kobayashi, 1963) in an elementary way.
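The basis construction in Eq. (3.1), including the $\epsilon$-stabilized normalization mentioned above, can be sketched in a few lines of NumPy (function names are our own):

```python
import numpy as np

EPS = 1e-8  # small constant added to the norms in case x_i and x_j collapse

def build_basis(xi, xj):
    """Edgewise SO(3)-equivariant frame F_ij = (a, b, c) from Eq. (3.1)."""
    a = (xi - xj) / (np.linalg.norm(xi - xj) + EPS)
    b = np.cross(xi, xj)
    b = b / (np.linalg.norm(b) + EPS)
    c = np.cross(a, b)
    return np.stack([a, b, c])     # rows are the basis vectors

rng = np.random.default_rng(3)
xi, xj = rng.standard_normal(3), rng.standard_normal(3)
F = build_basis(xi, xj)

# Non-degenerate case: F is orthonormal, since (x_i - x_j) is orthogonal
# to the cross product x_i x x_j.
assert np.allclose(F @ F.T, np.eye(3), atol=1e-6)

# Equivariance: rotating both points rotates every basis vector, F' = F g^T.
t = 0.7
g = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
assert np.allclose(build_basis(g @ xi, g @ xj), F @ g.T, atol=1e-6)
```

The last assertion is a direct numerical counterpart of the SO(3)-equivariance claim proved in Proposition A.1.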
First of all, notice that under the basis $\mathcal{F}_{ij} = (a(t), b(t), c(t))$, the position vector of $x_k$ naturally owns a "coefficient" or "scalar" representation:

$$(x_k \cdot a(t), \; x_k \cdot b(t), \; x_k \cdot c(t)). \qquad (3.2)$$

We define the process of obtaining such scalars as the Scalarize operation. Here we demonstrate that the set of obtained coefficients (3.2) is actually an SO(3)-invariant scalar tuple. Let $g \in \mathrm{SO}(3)$ be an arbitrary orthogonal transformation; then $x_k \to g x_k$ and $(a(t), b(t), c(t)) \to (g \cdot a(t), g \cdot b(t), g \cdot c(t))$. Therefore (3.2) undergoes

$$(x_k \cdot a(t), \; x_k \cdot b(t), \; x_k \cdot c(t)) \to (g x_k \cdot g a(t), \; g x_k \cdot g b(t), \; g x_k \cdot g c(t)) = (x_k \cdot a(t), \; x_k \cdot b(t), \; x_k \cdot c(t)), \qquad (3.3)$$

where we use the fact that $g^T g = I$ to get the last equality. It is easy to prove that the Scalarize operation can transform arbitrary geometric tensors into SO(3)-invariant scalars. Taking $(2,0)$-type tensors as an example, by extending the complete basis $(a, b, c)$ through the tensor product, it is easy to check that $\{a \otimes a, b \otimes b, c \otimes c, a \otimes b, b \otimes a, a \otimes c, c \otimes a, b \otimes c, c \otimes b\}$ forms an equivariant basis of the $(2,0)$-type tensor space. Then the scalarization of a tensor is just its linear combination coefficients under this basis. In the same way as (3.3), we can prove that the coefficients are also SO(3)-invariant scalars. Given a $(2,0)$-symmetric tensor $\theta$ (e.g., an energy-momentum tensor), under the complete basis $\mathcal{F}_{ij} = (a, b, c)$, $\theta$ can be expressed as:

$$\theta = \theta_{aa} \, a \otimes a + \theta_{bb} \, b \otimes b + \theta_{cc} \, c \otimes c + \theta_{ab} (a \otimes b + b \otimes a) + \theta_{ac} (a \otimes c + c \otimes a) + \theta_{bc} (b \otimes c + c \otimes b). \qquad (3.4)$$

The scalar tuple $t_{ij} := \{\theta_{aa}, \theta_{ab}, \ldots\}$ is the scalarization of $\theta$ under the equivariant basis $\mathcal{F}_{ij}$, and its entries are SO(3)-invariant.
Since any nonlinear transformation acting on SO(3)-invariant scalars is still SO(3)-invariant, the scalar tuple can be fed into any neural network architecture without concerns about breaking the equivariance symmetry. Our scalarization is inspired by the scalarization technique on the frame bundle in differential geometry (Hsu, 2002); the equivalence of the scalarization technique and (3.4) is given in Proposition A.2. In practice, we focus on the scalarization of $(1,0)$-type tensors (i.e., vectors), since the most common geometric information in real-world inputs is vectors. We define the Scalarize operation as:

$$\mathrm{Scalarize}(x_i, x_j, \mathcal{F}_{ij}) = (x_i \cdot a(t), \; x_i \cdot b(t), \; x_i \cdot c(t), \; x_j \cdot a(t), \; x_j \cdot b(t), \; x_j \cdot c(t)) \qquad (3.5)$$
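Putting Eq. (3.1) and Eq. (3.5) together, a small sketch (helper names are ours) confirms that the six edgewise scalars are unchanged under a rotation, because the frame co-rotates with the points:

```python
import numpy as np

def basis(xi, xj, eps=1e-8):
    # Edgewise SO(3)-equivariant frame F_ij = (a, b, c) from Eq. (3.1).
    a = (xi - xj) / (np.linalg.norm(xi - xj) + eps)
    b = np.cross(xi, xj)
    b = b / (np.linalg.norm(b) + eps)
    return np.stack([a, b, np.cross(a, b)])

def scalarize(xi, xj):
    """Eq. (3.5): project both endpoint positions onto F_ij,
    yielding six SO(3)-invariant scalars per edge."""
    F = basis(xi, xj)
    return np.concatenate([F @ xi, F @ xj])

rng = np.random.default_rng(4)
xi, xj = rng.standard_normal(3), rng.standard_normal(3)

# Rotate both points: F' = F g^T, x' = g x, so F' x' = F g^T g x = F x.
t = 1.2
g = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
assert scalarize(xi, xj).shape == (6,)
assert np.allclose(scalarize(xi, xj), scalarize(g @ xi, g @ xj), atol=1e-6)
```

This is the cancellation $F' x' = F g^T g x = F x$ from (3.3) in executable form; the resulting six scalars are what the Graph Transformer block consumes.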
The authors introduce a model to predict the time evolution of Newtonian mechanical systems and small molecules. The model takes a graph representation as input and converts it to an SE(3)- and permutation-equivariant representation using physical principles (i.e., a white-box model). This representation is passed through a learned graph transformer module to produce a vector field, which is used to predict the time evolution of the system/molecule.
SP:e02b51aa077852aeddc65bb2217968aca35e9105
Equivariant Vector Field Network for Many-body System Modeling
1 INTRODUCTION . Modeling many-body systems has been a long-standing challenge in scientific fields from classical and quantum physics (Carleo & Troyer, 2017; Zhang et al., 2018; Satorras et al., 2021c) to structural biology (Senior et al., 2020; Shi et al., 2021), due to its high numerical complexity and complicated evolving mechanisms. Graph neural networks (GNNs), which are well suited to modeling high-dimensional structured data with permutation equivariance, bring a new opportunity to model many-body systems in an end-to-end manner. Since many-body physical systems obey physical constraints such as SE(3) symmetry, pure black-box GNN models show limited generalization in this scenario, and symmetry-preserving GNN models have become a hot research direction. The core question to be solved in developing general equivariant GNN models is how to conduct nonlinear operations on tensors in a reference-free way. To represent and manipulate equivariant tensors of arbitrary order, some approaches resort to equivariant function spaces such as spherical harmonics (Thomas et al., 2018; Fuchs et al., 2020; Bogatskiy et al., 2020; Fuchs et al., 2021) or lift the spatial space to high-dimensional spaces such as Lie group spaces (Cohen & Welling, 2016; Cohen et al., 2018; 2019b; Finzi et al., 2020; Hutchinson et al., 2021). Since these methods impose no restriction on the order of the tensors, sufficient expressive power is guaranteed. Unfortunately, transforming a many-body system into those high-dimensional spaces or calculating equivariant functions usually incurs excessive computational cost and great optimization difficulty, which is unacceptable in some real-world scenarios. To remedy this issue, Satorras et al.
(2021c) proposed EGNN to directly implement equivariant operations in the original space, providing an efficient way to preserve equivariance without performing complex space transformations. Detailed experiments in (Satorras et al., 2021c) have shown that preserving equivariance without transforming the space is theoretically possible and computationally efficient in practice. However, one trade-off of EGNN is abandoning a certain amount of tensor information¹, which restricts the class of equivariant functions EGNN can approximate. This drawback may become serious when modeling complex dynamical scenarios such as molecular simulation, where geometric information (e.g., angular potentials and rotatable bonds) plays an important role in inducing conformation changes (Klicpera et al. (2020); Xu et al. (2021)). To mitigate this issue, we propose a new model called Equivariant Vector Field Network (EVFN) to fit the gradient vector fields (Song & Ermon, 2019; Shi et al., 2021) of many-body systems. With a scalarization block and a vectorization block, EVFN is able to represent tensor information losslessly in the original space and output equivariant vector fields with no restriction on their direction. Inspired by the scalarization technique from differential geometry (Kobayashi, 1963; Hsu, 2002), EVFN first introduces a tuple forming a complete basis associated with each particle pair that preserves permutation and SE(3) symmetry under global rotation and translation transformations of the many-body system. Based on this basis, the scalarization block losslessly transforms the geometric information into SO(3)-invariant scalar representations. In principle, the scalar representations can be fed into any permutation-equivariant network to implement complex nonlinear transformations.
Moreover, the vectorization block can reverse the scalars back to a vector field without sacrificing geometric information, using the complete basis. Once the estimated gradient field is obtained, we can predict a certain state or the whole dynamical trajectory of a 3D many-body system via an integration procedure. We evaluate the proposed framework on two many-body scenarios that require equivariance: (1) simulated Newtonian many-body dynamics trajectory prediction and (2) real-world molecular conformation generation. Our model achieves the best or competitive results on various types of datasets. 2 BACKGROUND . In this section, we first introduce some basic concepts on the notions of equivariance and tensor fields, then describe the scalarization technique from differential geometry, and finally define the 'vector field' as the differential of a many-body system. Let X = (x_1, ..., x_N) ∈ R^{N×3} be a many-body system living in R^3, where N is the number of particles. We use x_i(t) to denote the position of particle i at time t. SE(3) group and equivariance. In the Euclidean space R^3 we can consider affine transformations that preserve the distance between any two points, i.e., the isometric group SE(3). We call it the symmetry group w.r.t. the Euclidean metric, and it turns out that SE(3) is generated by the translation group and the rotation group SO(3). Once we have a symmetry group, it is valid to define quantities that are "equivariant" under the symmetry group. Given a function f : R^m → R^n, and assuming the symmetry group G acts on R^m and R^n, f is G-equivariant if f(gx) = gf(x) for all x ∈ R^m and g ∈ G. For the SO(3) group, if n = 1, i.e., the output of f is a scalar, the group action on R^1 is the identity map; in this case f is SO(3)-invariant (Thomas et al., 2018): f(gx) = f(x).
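The invariance definition above can be checked numerically. A minimal sketch assuming NumPy; the helper names (random_rotation, f) are ours, and the Euclidean norm stands in as a simple SO(3)-invariant scalar function.

```python
import numpy as np

def random_rotation(rng):
    # QR decomposition of a random matrix yields an orthogonal Q;
    # flip one column if needed so det(Q) = +1 (a proper rotation).
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0
    return q

def f(x):
    # The norm is SO(3)-invariant: ||g x|| = ||x|| for any rotation g.
    return np.linalg.norm(x)

rng = np.random.default_rng(0)
g, x = random_rotation(rng), rng.standard_normal(3)
assert np.isclose(f(g @ x), f(x))
```

For n > 1 (vector output), the analogous check would be f(g @ x) == g @ f(x), i.e., genuine equivariance rather than invariance.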
The notion of a tensor field can be defined for a general Riemannian manifold (see Definition 2.1.10 of (Jost & Jost, 2008)). Let {∂/∂x^i}_{i=1}^3 and {dx^i}_{i=1}^3 be the tangent vectors and dual vectors in R^3 respectively, and let ⊗ denote the tensor product. We recall the definition of a tensor field on R^3 w.r.t. the SO(3) group: Definition 2.1. An (r, s)-tensor field θ is a multi-linear map from a collection of r vectors and s dual vectors in R^3 to R:

θ(x) = θ^{i_1···i_r}_{j_1···j_s} ∂/∂x^{i_1} ⊗ ··· ⊗ ∂/∂x^{i_r} ⊗ dx^{j_1} ⊗ ··· ⊗ dx^{j_s}.

It implies that under an SO(3) coordinate transformation g := {g_{ij}}_{1≤i,j≤n}, the tensor field θ transforms equivariantly:

θ^{i′_1···i′_r}_{j′_1···j′_s} = g_{i′_1 i_1} ··· g_{i′_r i_r} g^T_{j_1 j′_1} ··· g^T_{j_s j′_s} θ^{i_1···i_r}_{j_1···j_s},

where g^T is the inverse of g.

¹ For example, a dihedral angle is a function of position vectors rather than of the positions' norms, which are what EGNN takes as input. A detailed definition is given in Appendix A.2.4.

Equivariant vector field. To model X(t), a natural way is to estimate its differential dX(t)/dt and apply an ODE solver to integrate the differential, obtaining the dynamic trajectory or a state at a given time. Due to the SO(3) symmetry, we define such a differential as an equivariant vector field. Most 3D real-world scenarios adopt first-order and second-order equivariant vector fields to describe their dynamic evolution, and these are also the modeling targets of this paper. A typical second-order vector field is the acceleration field of a Newtonian system. Gradient field is a widely used term for the first-order derivative of a scalar function (Jost & Jost, 2008; Song & Ermon, 2019). To generate molecular conformations (i.e., equilibrium states) in a single stage, (Shi et al., 2021) define a "gradient field" to serve as pseudo-forces acting on each particle.
By evolving the particles following the direction of the gradient field, the nonequilibrium system finally converges to an equilibrium state. The gradient field is a special case of a first-order vector field. 3 METHODOLOGY . Given a many-body system X, we aim at modeling its vector field to predict the long-term dynamic trajectory or the equilibrium state within a single stage. To preserve the physical symmetries of the system, the estimated vector field should be equivariant with respect to the permutation and SE(3) groups. To achieve this goal, we represent the system as a spatial graph and construct EVFN on top of it with three key components: (1) a Scalarization block to encode the geometric tensors into SO(3)-invariant scalar representations attached to each node; (2) a Graph Transformer block to learn SO(3)-invariant edgewise embeddings by propagating and aggregating information on the graph; and (3) a Vectorization block to reverse the scalar representations back to geometric tensors to estimate the vector field. A brief overview of EVFN is illustrated in Figure 1. Once the vector field network (EVFN) is optimized, an Evolving block is incorporated to integrate the vector field for predicting the dynamics. Translation symmetry can easily be preserved by moving the particle system's centroid at t = 0 to the origin (the Centralization operation in Figure 1). Permutation equivariance is automatically guaranteed for the message-passing-based Graph Transformer. We provide detailed proofs of these symmetries in Appendix A.2.1, and concentrate on SO(3) symmetry in the following sections. 3.1 SCALARIZATION BLOCK . The scalarization block is designed to transform geometric tensors into edgewise SO(3)-invariant scalar features by introducing a novel tuple forming a complete basis. Given a particle x_i(t), define the neighborhood N(x_i(t)) as the set of particles that interact with x_i(t).
Then we can consider a particle pair (x_i(t), x_j(t)), where x_j(t) ∈ N(x_i(t)). Suppose we take the positions of the two particles as the relevant geometric information of edge ⟨i, j⟩; then the edgewise SO(3)-invariant scalars can be defined as t_ij := Scalarize(x_i(t), x_j(t), F_ij), where Scalarize is the scalarization operation under an edgewise dynamical basis F_ij defined below.

Equivariant basis construction. For the particle pair (x_i(t), x_j(t)), let

a(t) = (x_i(t) − x_j(t)) / ‖x_i(t) − x_j(t)‖,

and, with × denoting the cross product of two vectors, define

b(t) = (x_i(t) × x_j(t)) / ‖x_i(t) × x_j(t)‖  and  c(t) = a(t) × b(t).  (3.1)

Then we build an SO(3)-equivariant basis F_ij := (a(t), b(t), c(t)). In practice we add a small constant ε to the normalization factors in case x_i and x_j collapse. Under the condition that the matrix (a(t), b(t), c(t)) is non-degenerate, F_ij forms a complete orthonormal basis (frame) of the tangent space at x_i(t). Note that this is a dynamical basis w.r.t. t, and the construction of this basis is permutation-equivariant. Since the Euclidean metric is flat, the dual basis of F_ij (living in the cotangent space at x_i (Hsu, 2002)) is just its transpose: (a^T(t), b^T(t), c^T(t)). The 'bad' event that F_ij is degenerate for all neighbors happens only when all particles are restricted to a straight line, which is a measure-zero set in R^3; we therefore assume F_ij is non-degenerate from now on. A proof of the SO(3)-equivariance of F_ij is provided in Proposition A.1.

Invariant scalarization of geometric tensors. With the complete equivariant basis, we can realize the scalarization operation (Kobayashi, 1963) in an elementary way.
First of all, notice that under the basis F_ij = (a(t), b(t), c(t)), the position vector of x_k naturally owns a 'coefficient' or 'scalar' representation:

(x_k · a(t), x_k · b(t), x_k · c(t)).  (3.2)

We define the process of obtaining such scalars as the Scalarize operation. Here we demonstrate that the tuple of coefficients (3.2) is in fact an SO(3)-invariant scalar tuple. Let g ∈ SO(3) be an arbitrary orthogonal transformation; then x_k → g x_k and (a(t), b(t), c(t)) → (g a(t), g b(t), g c(t)). Therefore (3.2) transforms as

(x_k · a(t), x_k · b(t), x_k · c(t)) → (g x_k · g a(t), g x_k · g b(t), g x_k · g c(t)) = (x_k · a(t), x_k · b(t), x_k · c(t)),  (3.3)

where the last equality uses the fact that g^T g = I. It is easy to prove that the Scalarize operation can transform arbitrary geometric tensors into SO(3)-invariant scalars. Taking (2,0)-type tensors as an example, by extending the complete basis (a, b, c) through tensor products it is easy to check that {a⊗a, b⊗b, c⊗c, a⊗b, b⊗a, a⊗c, c⊗a, b⊗c, c⊗b} forms an equivariant basis of the (2,0)-type tensor space. The scalarization of a tensor is then just its tuple of linear-combination coefficients under this basis. In the same way as (3.3), we can prove that these coefficients are also SO(3)-invariant scalars. Given a (2,0)-symmetric tensor θ (e.g., an energy-momentum tensor), under the complete basis F_ij = (a, b, c), θ can be expressed as

θ = θ_aa a⊗a + θ_bb b⊗b + θ_cc c⊗c + θ_ab (a⊗b + b⊗a) + θ_ac (a⊗c + c⊗a) + θ_bc (b⊗c + c⊗b).  (3.4)

The scalar tuple t_ij := {θ_aa, θ_ab, ...} is the scalarization of θ under the equivariant basis F_ij, and is SO(3)-invariant.
Since any nonlinear transformation acting on SO(3)-invariant scalars is still SO(3)-invariant, the scalar tuple can be fed into any neural network architecture without concern about breaking the equivariance symmetry. Our scalarization is inspired by the scalarization technique on the frame bundle in differential geometry (Hsu, 2002); the equivalence of that technique and (3.4) is given in Proposition A.2. In practice, we focus on the scalarization of (1,0)-type tensors (i.e., vectors), since the most common geometric information in real-world inputs is vector-valued. We define the Scalarize operation as:

Scalarize(x_i, x_j, F_ij) = (x_i · a(t), x_i · b(t), x_i · c(t), x_j · a(t), x_j · b(t), x_j · c(t)).  (3.5)
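The basis construction (3.1) and the Scalarize operation (3.5) can be sketched in a few lines of NumPy, together with a numerical check of the invariance argument (3.3). This is a minimal illustration under our own naming (build_basis, scalarize); the small constant eps guards against the degenerate case mentioned above.

```python
import numpy as np

def build_basis(xi, xj, eps=1e-8):
    # Edgewise frame F_ij = (a, b, c) from eq. (3.1).
    a = (xi - xj) / (np.linalg.norm(xi - xj) + eps)
    b = np.cross(xi, xj)
    b = b / (np.linalg.norm(b) + eps)
    c = np.cross(a, b)
    return a, b, c

def scalarize(xi, xj):
    # Eq. (3.5): project both positions onto the edgewise frame.
    a, b, c = build_basis(xi, xj)
    return np.array([xi @ a, xi @ b, xi @ c, xj @ a, xj @ b, xj @ c])

rng = np.random.default_rng(1)
xi, xj = rng.standard_normal(3), rng.standard_normal(3)

# Random proper rotation g in SO(3).
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1.0

# Invariance (3.3): rotating the whole system leaves the scalars unchanged.
assert np.allclose(scalarize(q @ xi, q @ xj), scalarize(xi, xj), atol=1e-6)
```

The key fact the check exercises is that a, b, and c all rotate with g (b because the cross product of rotated vectors is the rotated cross product for a proper rotation), so every dot product is preserved.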
This work (EVFN) aims to improve predictions of n-body system dynamics by combining continuous Lie symmetries with permutation symmetry. The authors propose to do this by encoding SO(3)-invariant representations of each node, followed by the use of a graph transformer and a vectorization block to estimate the vector field. An evolving block is subsequently used to predict the dynamics.
SP:e02b51aa077852aeddc65bb2217968aca35e9105
Learning Universal User Representations via Self-Supervised Lifelong Behaviors Modeling
1 INTRODUCTION . Inferring user intents from user behavior data has been extensively studied in industrial applications such as recommendation systems, search engines, and online advertising Dupret & Piwowarski (2008); He et al. (2014); Elkahky et al. (2015); Yu et al. (2016). One key aspect of these systems is user modeling, which describes the process of building up and modifying a conceptual understanding of the user Fischer (2001). Essentially, user modeling aims to learn a user representation that captures the user's interests and preferences to improve performance on downstream tasks. In the literature, many studies have focused on task-specific user modeling, such as user response prediction and personalized recommendation Ren et al. (2019); Yu et al. (2019); Ji et al. (2020). However, the user representation learned by a specific task can hardly be generalized to other tasks. As a result, a specific user representation model needs to be trained for each downstream task, which requires massive labeled data, training time, and computing and storage resources. Given these limitations, universal user representations that can serve a variety of downstream tasks are preferred. Due to the sequential form of user behavior data, recurrent neural networks (RNNs) are usually used to encode the temporal dynamics of user behavior sequences Wu et al. (2017); Devooght & Bersini (2017); Zhu et al. (2017); An et al. (2019). Unfortunately, these approaches can only process user behavior sequences with lengths of tens or hundreds, while the length can reach hundreds of thousands in many social networks and e-commerce services. Moreover, we have verified by experiments that performance improves in most downstream tasks as user behavior data become richer (Fig. 3). To solve this problem, many methods drawn from natural language processing (NLP) Kumar et al.
(2016); Dai et al. (2019); Yang et al. (2019) have been proposed. They model long sequential data based on hierarchical architectures and memory networks Ying et al. (2018); Pi et al. (2019); Ren et al. (2019). However, it is still hard for them to encode lifelong user behavior sequences once the length scales beyond 1,000. Moreover, the representations they learn through a specific task generalize poorly. In this work, we propose a novel framework called Lifelong User Representation Model (LURM) to model user behaviors since registration. To meet the needs of extremely long sequence modeling, we first introduce a model named Bag of Interests (BoI) to summarize the items in behavior sequences, similar to Bag of Visual Words. In this way, we can use a high-dimensional sparse vector to represent user behavior in any time period. Then, a Self-supervised Multi-anchor Encoder Network (SMEN) is proposed that maps sequences of BoI features to multiple low-dimensional user representations. SMEN consists of three modules: a multi-anchor module that can learn different aspects of user preferences, a time aggregation module that can model the evolution of user behaviors, and a multi-scale aggregation module that can learn and aggregate BoI features at different scales. Considering the consistency between user behaviors in different time periods, we introduce a contrastive loss function for the self-supervised training of SMEN. With the designs above, SMEN achieves almost lossless dimensionality reduction. The main contributions of our work can be summarized as follows: • In this work, a novel framework named LURM is proposed to model lifelong user behaviors of any length. To the best of our knowledge, it is the first method able to model lifelong behaviors in the field of universal user representation learning.
• We introduce a sub-model named BoI that can encode behaviors in any time period, so that lifelong behavior data can be represented by a sequence of sparse vectors. • We can obtain compressed user representations with little information loss with the help of a designed sub-model named SMEN. • Extensive experiments are performed on several real-world datasets. The results demonstrate the effectiveness and generalization ability of the learned user representations. 2 RELATED WORKS . 2.1 UNIVERSAL USER MODELING . Compared with task-specific user modeling, which requires more resources, universal user representations are preferred for serving different downstream tasks. In recent years, several works dedicated to learning universal user representations have been proposed. Ni et al. (2018) proposed a representation learning method based on multi-task learning, which enabled the network to generate universal user representations. Extensive experiments showed the generality and transferability of the user representation. However, the effectiveness of this method may still suffer from the selection of tasks and the need for labels. To relieve the burden of labeling, Andrews & Bishop (2019) proposed a novel procedure to learn user embeddings using metric learning. They learned a mapping from short episodes of user behaviors to a vector space in which the distance between points captures the similarity of the corresponding users' invariant features. Gu et al. (2020) proposed a self-supervised user modeling network (SUMN) to encode user behavior data into universal representations. They introduced a behavior consistency loss, which guides the model to fully identify and preserve valuable user information under a self-supervised learning framework. Wu et al. (2020) proposed pre-trained user models (PTUM), which learn universal user models based on two self-supervision tasks for pre-training.
The first task is masked behavior prediction, which models the relatedness between historical behaviors. The second is next-K behavior prediction, which models the relatedness between past and future behaviors. Unfortunately, these methods can only process user behavior sequences with lengths in the hundreds and cannot leverage the rich information carried by lifelong user behaviors. 2.2 LIFELONG USER MODELING . Previous works have shown that considering long-term historical behavior sequences in user modeling can significantly improve performance on different tasks Ren et al. (2019); Pi et al. (2019; 2020). Ren et al. proposed a hierarchical periodic memory network for lifelong sequential modeling. They built a personalized memorization for each user, which remembers both intrinsic user tastes and multi-faceted user interests with a learned yet compressed memory. Pi et al. decoupled user modeling from the whole CTR prediction system to tackle the challenges of storage cost and system latency. Specifically, they proposed a user interest center module for real-time inference and a memory-based network that can be implemented incrementally. Pi et al. also designed a search-based interest model (SIM) with a cascaded two-stage search paradigm to capture diverse long-term interests with respect to the target item. Unfortunately, the length of the user behavior sequences these models can handle is still limited. Moreover, these models are all trained on specific tasks, which limits their generalization ability. 3 METHODOLOGY . In this work, we are committed to learning universal user representations from truly lifelong user behavior sequences of arbitrary length. For this purpose, we propose a framework named Lifelong User Representation Model (LURM), which consists of two cascaded sub-models: Bag of Interests (BoI) and Self-supervised Multi-anchor Encoder Network (SMEN).
The overall architecture of LURM is shown in Fig. 1. 3.1 BAG OF INTERESTS . In order to model extremely long lifelong user behavior sequences, we propose to aggregate the content of items under user purchases, clicks, or other behaviors at a certain granularity. Inspired by Bag of Visual Words (BoVW) Fei-Fei & Perona (2005), the item is the natural granularity for aggregation. But there are billions of items on the entire e-commerce platform, which means that the item vocabulary is extremely large, making this infeasible in practice. Therefore, we propose a model called Bag of Interests (BoI) to aggregate user behavior data at the 'interest' granularity. Every 'interest' is a cluster of similar items and represents a certain kind of preference. The size of the 'interest' vocabulary is typically chosen on the order of 10^5 to retain enough detail. As shown in Fig. 1(a), BoI consists of an item embedding module and a large-scale clustering module. For convenience, we focus on the text modality only in this work; it should be noted that our method can easily be extended to multi-modal data. 3.1.1 ITEM EMBEDDING MODULE . As in the BoVW model, an 'interest' vocabulary is supposed to be built in our BoI model. Therefore, the embedding of each item is required, so that similar items close in the embedding space can be clustered together. Recently, in natural language and image processing, discriminative approaches based on contrastive learning in the latent space have shown great success in representation learning, achieving state-of-the-art results. Inspired by these works, we design a contrastive learning task based on the relation between items drawn from the same user to learn item embeddings, similar to item2vec Barkan & Koenigstein (2016). Given a set of users U = {u_1, u_2, ..., u_{|U|}}, each user u ∈ U corresponds to a behavior sequence S = {x_1, x_2, ..., x_{|S|}}, where x_i ∈ S denotes the i-th item.
|U| and |S| denote the number of users and the length of u's behavior sequence respectively. Generally, the content of an item x can be expressed as {w_1, w_2, ..., w_{|x|}}, where w_i denotes a word from a vocabulary V and |x| denotes the number of words in the content of x. First, an encoder with an averaging operation is used to generate the item embedding e:

e_x = encoder(w_1, w_2, ..., w_{|x|}) = proj( (1/|x|) Σ_{i=1}^{|x|} W_i ),  (1)

where W_i ∈ R^d is the embedding of word w_i and is learned during training, and proj(·) consists of two residual blocks and an L2 normalization layer. To construct the contrastive learning task, we randomly sample positive pairs from users' behavior sequences. Specifically, two items (x_i, y_i) are similar, i.e., a positive pair, if they are drawn from the same user behavior sequence and the time interval between the occurrences of the two items is less than β, where β is the window size controlling the interval between the two user behaviors. Without loss of generality, the sampled mini-batch with batch size n can be denoted as ∆ = {x_1, y_1, x_2, y_2, ..., x_n, y_n}, where (x_i, y_i) is a positive pair drawn from the behavior sequence S_i of the i-th user in the batch. Then the contrastive prediction task is to identify y_i in ∆ \ {x_i} for a given x_i, with all other items in ∆ \ {x_i, y_i} as negatives. The loss for the positive pair (x_i, y_i) is written as

l(x_i, y_i) = − log [ exp(g(x_i, y_i)/τ) / Σ_{ν ∈ ∆, ν ≠ x_i} exp(g(x_i, ν)/τ) ],  (2)

where g(x, y) = e_x^T e_y / (‖e_x‖‖e_y‖) = e_x^T e_y denotes the cosine similarity between the embeddings e_x and e_y (the second equality holds because the embeddings are L2-normalized), and τ is the temperature parameter. The final objective is the average loss over all positive pairs in the mini-batch:

Loss = (1/2n) Σ_i ( l(x_i, y_i) + l(y_i, x_i) ).  (3)

Note that other methods that can generate item embeddings are also feasible.
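The loss in equations (2)-(3) can be sketched directly in NumPy. A minimal illustration assuming L2-normalized embeddings (so cosine similarity reduces to a dot product) and pairs stored interleaved as in ∆; the function names pair_loss and batch_loss are ours.

```python
import numpy as np

def pair_loss(emb, i, j, tau=0.1):
    # emb: (2n, d) L2-normalized embeddings; i is the anchor, j its positive.
    sims = emb @ emb[i] / tau            # g(x_i, .) / tau for every item in the batch
    mask = np.ones(len(emb), dtype=bool)
    mask[i] = False                      # the denominator excludes x_i itself (eq. 2)
    log_denom = np.log(np.exp(sims[mask]).sum())
    return -(sims[j] - log_denom)        # -log softmax probability of the positive

def batch_loss(emb, tau=0.1):
    # Eq. 3: average l(x_i, y_i) + l(y_i, x_i) over pairs stored as (2k, 2k+1).
    n = len(emb) // 2
    total = 0.0
    for k in range(n):
        total += pair_loss(emb, 2 * k, 2 * k + 1, tau)
        total += pair_loss(emb, 2 * k + 1, 2 * k, tau)
    return total / (2 * n)

rng = np.random.default_rng(0)
e = rng.standard_normal((8, 16))                       # toy batch: n = 4 pairs
e /= np.linalg.norm(e, axis=1, keepdims=True)          # L2-normalize as proj(.) does
loss = batch_loss(e)
assert loss > 0
```

Since the positive's exponential also appears in the denominator, each pair term is strictly positive; minimizing the loss pulls positive pairs together and pushes the other 2n-2 in-batch items away.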
This paper deals with the problem of obtaining user representations from behavior sequences and proposes a method introducing the ideas of a "bag of features" and multi-head attention. More specifically, a behavior sequence is expressed as a sequence of multi-dimensional vectors, and a fixed-length segment of the sequence is converted to a histogram, where each bin corresponds to a cluster of vectors. Each histogram is then fed to a multi-head attention module to obtain a segment representation, and all the segment representations are aggregated into a single vector through a time aggregation module. Experimental evaluations on several downstream tasks demonstrate the effectiveness of the proposed method against several simple baselines.
SP:283b7ebcc54991836055c604e3fae24aac9286b9
Learning Universal User Representations via Self-Supervised Lifelong Behaviors Modeling
1 INTRODUCTION . Inferring user intents from user behavior data has been extensively studied in industrial applications such as recommendation systems, search engines, and online advertising Dupret & Piwowarski (2008); He et al. (2014); Elkahky et al. (2015); Yu et al. (2016). One key aspect of these systems is user modeling, which describes the process of building up and modifying a conceptual understanding of the user Fischer (2001). Essentially, user modeling aims to learn a user representation that captures the user's interests and preferences to improve performance on downstream tasks. In the literature, many studies have focused on task-specific user modeling, such as user response prediction and personalized recommendation Ren et al. (2019); Yu et al. (2019); Ji et al. (2020). However, the user representation learned by a specific task can hardly be generalized to other tasks. As a result, a specific user representation model needs to be trained for each downstream task, which requires massive labeled data, training time, and computing and storage resources. Given these limitations, universal user representations that can serve a variety of downstream tasks are preferred. Due to the sequential form of user behavior data, recurrent neural networks (RNNs) are usually used to encode the temporal dynamics of user behavior sequences Wu et al. (2017); Devooght & Bersini (2017); Zhu et al. (2017); An et al. (2019). Unfortunately, these approaches can only process user behavior sequences with lengths of tens or hundreds, while the length can reach hundreds of thousands in many social networks and e-commerce services. Moreover, we have verified by experiments that performance improves in most downstream tasks as user behavior data become richer (Fig. 3). To solve this problem, many methods drawn from natural language processing (NLP) Kumar et al.
(2016); Dai et al. (2019); Yang et al. (2019) have been proposed. They model long sequential data based on hierarchical architectures and memory networks Ying et al. (2018); Pi et al. (2019); Ren et al. (2019). However, it is still hard for them to encode lifelong user behavior sequences once the length scales beyond 1,000. Moreover, the representations they learn through a specific task generalize poorly. In this work, we propose a novel framework called Lifelong User Representation Model (LURM) to model user behaviors since registration. To meet the needs of extremely long sequence modeling, we first introduce a model named Bag of Interests (BoI) to summarize the items in behavior sequences, similar to Bag of Visual Words. In this way, we can use a high-dimensional sparse vector to represent user behavior in any time period. Then, a Self-supervised Multi-anchor Encoder Network (SMEN) is proposed that maps sequences of BoI features to multiple low-dimensional user representations. SMEN consists of three modules: a multi-anchor module that can learn different aspects of user preferences, a time aggregation module that can model the evolution of user behaviors, and a multi-scale aggregation module that can learn and aggregate BoI features at different scales. Considering the consistency between user behaviors in different time periods, we introduce a contrastive loss function for the self-supervised training of SMEN. With the designs above, SMEN achieves almost lossless dimensionality reduction. The main contributions of our work can be summarized as follows: • In this work, a novel framework named LURM is proposed to model lifelong user behaviors of any length. To the best of our knowledge, it is the first method able to model lifelong behaviors in the field of universal user representation learning.
• We introduce a sub-model named BoI that can encode behaviors in any time period, so that lifelong behavior data can be represented by a sequence of sparse vectors. • We can obtain compressed user representations with little information loss with the help of a designed sub-model named SMEN. • Extensive experiments are performed on several real-world datasets. The results demonstrate the effectiveness and generalization ability of the learned user representations. 2 RELATED WORKS . 2.1 UNIVERSAL USER MODELING . Compared with task-specific user modeling, which requires more resources, universal user representations are preferred for serving different downstream tasks. In recent years, several works dedicated to learning universal user representations have been proposed. Ni et al. (2018) proposed a representation learning method based on multi-task learning, which enabled the network to generate universal user representations. Extensive experiments showed the generality and transferability of the user representation. However, the effectiveness of this method may still suffer from the selection of tasks and the need for labels. To relieve the burden of labeling, Andrews & Bishop (2019) proposed a novel procedure to learn user embeddings using metric learning. They learned a mapping from short episodes of user behaviors to a vector space in which the distance between points captures the similarity of the corresponding users' invariant features. Gu et al. (2020) proposed a self-supervised user modeling network (SUMN) to encode user behavior data into universal representations. They introduced a behavior consistency loss, which guides the model to fully identify and preserve valuable user information under a self-supervised learning framework. Wu et al. (2020) proposed pre-trained user models (PTUM), which learn universal user models based on two self-supervision tasks for pre-training.
The first one was masked behavior prediction, which can model the relatedness between historical behaviors. The second one was next-K behavior prediction, which can model the relatedness between past and future behaviors. Unfortunately, these methods can only process user behavior sequences with a length of hundreds, and cannot leverage the rich information brought by lifelong user behaviors. 2.2 LIFELONG USER MODELING . Previous works have shown that considering long-term historical behavior sequences for user modeling can significantly improve the performance of different tasks (Ren et al. (2019); Pi et al. (2019; 2020)). Ren et al. proposed a hierarchical periodic memory network for lifelong sequential modeling. They built a personalized memorization for each user, which remembers both intrinsic user tastes and multi-faceted user interests with a learned yet compressed memory. Pi et al. decoupled user modeling from the whole CTR prediction system to tackle the challenges of storage cost and system latency. Specifically, they proposed a user interest center module for real-time inference and a memory-based network that can be implemented incrementally. Pi et al. also designed a search-based interest model (SIM) with a cascaded two-stage search paradigm to capture diverse long-term interests with respect to the target item. Unfortunately, the length of the user behavior sequence that these models can handle is still limited. Moreover, these models are all trained on specific tasks, which limits their generalization ability. 3 METHODOLOGY . In this work, we are committed to learning universal user representations from truly lifelong user behavior sequences of arbitrary length. For this purpose, we propose a framework named Lifelong User Representation Model (LURM) which consists of two cascaded sub-models: Bag of Interests (BoI) and Self-supervised Multi-anchor Encoder Network (SMEN).
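As a preview of the first stage, the BoI summarization (detailed in Section 3.1) reduces each time period of a behavior sequence to a sparse count vector over a learned 'interest' vocabulary. A minimal sketch, assuming items have already been clustered so that each item maps to an interest id (the function name and inputs are illustrative, not the authors' implementation):

```python
import numpy as np

def bag_of_interests(item_interest_ids, vocab_size):
    """Summarize one time period of a behavior sequence as a sparse
    count vector over the 'interest' vocabulary (~1e5 clusters in the
    paper; a tiny vocab_size is used here for illustration)."""
    boi = np.zeros(vocab_size, dtype=np.float32)
    for iid in item_interest_ids:
        boi[iid] += 1.0
    return boi

# A period in which the user interacted with items from interests 3, 7 and 3:
vec = bag_of_interests([3, 7, 3], vocab_size=10)
```

A lifelong behavior sequence then becomes a short sequence of such sparse vectors, one per period, which SMEN subsequently compresses into dense representations.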
The overall architecture of LURM is shown in Fig. 1. 3.1 BAG OF INTERESTS . In order to model extremely long lifelong user behavior sequences, we propose to aggregate the content of the items involved in user purchases, clicks, or other behaviors at a certain granularity. Inspired by Bag of Visual Words (BoVW) (Fei-Fei & Perona (2005)), the item is the natural granularity for aggregation. But there are billions of items on the entire e-commerce platform, which means that the item vocabulary is extremely large, making this infeasible in practice. Therefore, we propose a model called Bag of Interests (BoI) to aggregate user behavior data at the 'interest' granularity. Every 'interest' is a cluster of similar items and represents a certain kind of preference. The size of the 'interest' vocabulary is typically chosen at a level of about 10^5 to retain enough detail. As shown in Fig. 1(a), BoI consists of an item embedding module and a large-scale clustering module. For convenience, we focus on the text modality only in this work. It should be noted that our method can be extended to multi-modal data easily. 3.1.1 ITEM EMBEDDING MODULE . Like the BoVW model, an 'interest' vocabulary is supposed to be built in our BoI model. Therefore, the embedding of each item is required, so that similar items close in the embedding space can be clustered together. Recently, in natural language and image processing, discriminative approaches based on contrastive learning in the latent space have shown great success in the field of representation learning, achieving state-of-the-art results. Inspired by these works, we design a contrastive learning task based on the relations between items drawn from a user to learn item embeddings, similar to item2vec (Barkan & Koenigstein (2016)). Given a set of users U = {u_1, u_2, ..., u_{|U|}}, each user u ∈ U corresponds to a behavior sequence S = {x_1, x_2, ..., x_{|S|}}, where x_i ∈ S denotes the i-th item.
|U| and |S| denote the number of users and the length of u's behavior sequence, respectively. Generally, the content of an item x can be expressed as {w_1, w_2, ..., w_{|x|}}, where w_i denotes a word from a vocabulary V, and |x| denotes the number of words in the content of x. Firstly, an encoder with an average operation is used to generate the item embedding e_x:

e_x = encoder(w_1, w_2, ..., w_{|x|}) = proj( (1/|x|) Σ_{i=1}^{|x|} W_i ),   (1)

where W_i ∈ R^d is the embedding of word w_i and will be learned during training, and proj(·) consists of two residual blocks and an L2 normalization layer. To construct the contrastive learning task, we sample positive pairs from the behavior sequences of users randomly. Specifically, two items (x_i, y_i) are similar, i.e., a positive pair, if they are drawn from the same user behavior sequence and the time interval between the occurrences of these two items is less than β, where β is the window size controlling the interval of the two user behaviors. Without loss of generality, the sampled mini-batch with batch size n can be denoted as Δ = {x_1, y_1, x_2, y_2, ..., x_n, y_n}, where (x_i, y_i) is a positive pair drawn from the behavior sequence of the i-th user in the batch. Then, the contrastive prediction task is defined to identify y_i in Δ \ {x_i} for a given x_i, and all other items in Δ \ {x_i, y_i} are negatives. The loss for the positive pair (x_i, y_i) is written as

l(x_i, y_i) = −log [ exp(g(x_i, y_i)/τ) / Σ_{ν ∈ Δ, ν ≠ x_i} exp(g(x_i, ν)/τ) ],   (2)

where g(x, y) = e_x^T e_y / (‖e_x‖ ‖e_y‖) = e_x^T e_y denotes the cosine similarity between the embeddings e_x and e_y (the equality holds since the embeddings are L2-normalized), and τ is the temperature parameter. The final objective is the average loss over all positive pairs in the mini-batch:

Loss = (1/2n) Σ_i ( l(x_i, y_i) + l(y_i, x_i) ).   (3)

Note that other methods for generating item embeddings are also feasible.
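The loss of Eqs. (2)–(3) is an NT-Xent-style contrastive objective over the mini-batch Δ. A minimal NumPy sketch of the computation (assuming the embeddings are already L2-normalized, so the dot product equals the cosine similarity g):

```python
import numpy as np

def contrastive_loss(emb_x, emb_y, tau=0.1):
    """Eqs. (2)-(3): for each positive pair (x_i, y_i), identify y_i
    among all other items in the batch Δ = {x_1, y_1, ..., x_n, y_n}.

    emb_x, emb_y: (n, d) L2-normalized item embeddings, where
    (emb_x[i], emb_y[i]) were drawn from the same user's sequence
    within the window β."""
    n = emb_x.shape[0]
    z = np.concatenate([emb_x, emb_y], axis=0)      # the batch Δ, shape (2n, d)
    sim = (z @ z.T) / tau                           # g(., .) / τ for every pair
    np.fill_diagonal(sim, -np.inf)                  # exclude ν = anchor itself
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # x_i <-> y_i
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()                         # averages l(x_i,y_i) and l(y_i,x_i)
```

With perfectly aligned positives and orthogonal negatives the loss approaches 0, while random embeddings give a loss on the order of log(2n − 1).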
This paper introduces a universal user representation learning approach based on self-supervised learning on long-term user behaviors. The authors first represent user behaviors into sparse vectors, and then encode them into multiple vectors to represent users in different aspects. The authors further study using different time aggregation strategies to tradeoff accuracy and computational cost. Experiments on two datasets show the effectiveness of the proposed approach.
SP:283b7ebcc54991836055c604e3fae24aac9286b9
Decomposing Texture and Semantics for Out-of-distribution Detection
1 INTRODUCTION . Out-of-distribution (OOD) detection is the task of recognizing whether given data comes from the distribution of the training samples (also known as the in-distribution) or not. Any machine-learning-based system could receive input samples whose distribution is completely disparate from the training environment (e.g., dataset). Since distribution shift can severely degrade model performance (Amodei et al., 2016), it is a potential threat to a reliable real-world AI system. However, the ambiguous definition of the "in-distribution" limits the feasibility of OOD detection methods in real-world applications, considering the various OOD scenarios. For example, subtle corruption is a clear signal of OOD in the machine vision field, while a change in semantic information might not be. On the other hand, an autonomous driving system may define the in-distribution from a semantic-oriented perspective; e.g., an unseen traffic sign is OOD. Interestingly, the interpretations of the in-distribution described in the above scenarios are not merely uncorrelated; rather, they contradict each other. Unfortunately, most conventional OOD detection methods (Zhang et al., 2021; Tack et al., 2020; Ren et al., 2019) assume the in-distribution is single-mode and thus are not able to handle other aspects of OOD (Figure 1a). To tackle this issue, we revisit the definition of the in-distribution by decomposing it into two different factors: texture and semantics (Figure 1b). For the texture OOD case, we define OOD as the textural difference between the in- and out-of-distribution datasets. On the contrary, semantic OOD focuses on class labels that do not exist in the in-distribution environment. Note that the two aspects have a trade-off relationship; thus, detecting both OOD problems with a single model is challenging from the (conventional) entangled OOD point of view.
Similar to ours, Geirhos et al. (2018) investigated the texture-shape cue conflict in networks, and a series of follow-up studies (Hermann et al., 2019; Li et al., 2020; Ahmed & Courville, 2020) explored ways to find a balance between these perspectives. However, the aforementioned works utilize texture-shape to analyze the bias inherited in deep networks. In this work, instead, we focus on analyzing the texture and semantic nature underlying the in-distribution to build a more practically applicable OOD detection method. To the best of our knowledge, none of the studies on OOD detection benchmarks have thoroughly investigated the definition of the in-distribution. This can be problematic when an OOD detection method judges an image corrupted by minor distortion as OOD, even when the environment is tolerant to small changes in texture. Because of such complicated scenarios, it is important to evaluate OOD detection methods in a comprehensive way that goes beyond the simple benchmarks. In this work, we propose a new approach to measuring the performance of a method according to the decomposed definition of the in-distribution. One notable observation in our benchmark is that most previous OOD detection methods are highly biased toward texture information and ignore semantic clues in many cases. To mitigate this issue, our method tackles the texture and semantic information separately and aggregates them in the final module (Figure 2). To effectively extract the texture information, we use the 2D Fourier transform, motivated by a recent frequency-domain-driven deep method (Xu et al., 2020). For the semantic feature, we design an extraction module upon Deep-SVDD (Ruff et al., 2018) with our novel angular distance-based initialization strategy. We then combine the two features using a normalizing flow-based method (Dinh et al., 2016), followed by our factor control mechanism.
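As a toy illustration of Fourier-based texture extraction (not the paper's exact module), one can use the radially averaged log-magnitude spectrum as a texture descriptor; textural shifts such as added noise change this profile even when the semantics are intact:

```python
import numpy as np

def fourier_texture_profile(img):
    """Radially averaged log-magnitude spectrum of a grayscale image.

    img: (H, W) array. Returns a 1-D profile of spectral energy versus
    frequency radius; high-frequency corruption inflates the tail."""
    f = np.fft.fftshift(np.fft.fft2(img))      # center the zero frequency
    mag = np.log1p(np.abs(f))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int).ravel()
    # average the log-magnitude over each integer frequency radius
    return np.bincount(r, weights=mag.ravel()) / np.bincount(r)
```

Comparing the high-frequency tail of this profile between a clean and a noise-corrupted image already separates the two, which is the intuition behind using the frequency domain for the texture factor.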
This control module provides the flexibility to handle various OOD scenarios by choosing which decomposed feature is more important in the given OOD circumstance. The main contributions of this work are as follows: • We decompose the "unclear" definition of the in-distribution into texture & semantics. To the best of our knowledge, this is the first attempt to clarify the OOD itself in this field. • Motivated by real-world problems, we create new OOD detection benchmark scenarios to evaluate models based on the decomposed in-distribution factors. • We propose a novel OOD detection method that is effective on both texture & semantics as well as on the conventional benchmark setups. In addition, our method does not require any auxiliary datasets or class labels, unlike previous models. 2 RELATED WORK . In this section, we briefly overview notable studies in the OOD detection field. We categorize deep learning-based OOD detection methods into three groups based on the characteristics of the information they use. Class labels of the in-distribution . Early studies on deep OOD methods rely on class supervision. ODIN and Generalized ODIN (Liang et al., 2017; Hsu et al., 2020) use an uncertainty measure derived from the softmax output: a given sample is determined to be OOD when the output probability of every class is less than a predefined threshold. Sastry & Oore (2020); Lee et al. (2018) utilize feature maps (e.g., Gram matrices) extracted from pre-trained networks to calculate the OOD score. Auxiliary distribution . Outlier exposure (OE) (Hendrycks et al., 2018) exploits additional datasets that are disjoint from the test dataset to guide the network toward better representations for OOD detection. Papadopoulos et al. (2021) further improve the performance of OE by regularizing the network with the total variation distance of the softmax output. Data augmentation .
Recently, contrastive learning-based methods have shown remarkable success on tasks related to visual representation (He et al., 2020; Chen et al., 2020). Motivated by this, several studies employ data augmentation methods such as image transformations or additive noise for the OOD detection task (Hendrycks et al., 2019; Tack et al., 2020). Unlike the prior studies that exploit additional information beyond the in-distribution, we only utilize the given (in-distribution) training dataset. In addition, we separate and clarify the assumption of the OOD as texture and semantics to improve practicability in the real world. 3 METHOD . In this section, we present an overview of our proposed method (Section 3.1) and the feature extraction modules of the model (Sections 3.2 and 3.3). Finally, we introduce the normalizing flow-based conditional probabilistic modeling component (Section 3.4). Conventional OOD detection assumes that in-distribution data are sampled from the distribution of the training dataset, x ∼ p_data. We decompose the image into two factors and calculate the anomaly score based on each factor's likelihood. The texture information T(x) is extracted from the input x through a Fourier analysis process. The semantic information S(x) captures content features such as shape. Our framework calculates the likelihood of each of these two factors and then combines the likelihoods. Since we use a normalizing flow model trained on the exact likelihood, the contribution of each extracted factor can be adjusted via a user-controllable parameter λ. 3.1 MODEL OVERVIEW . We aim to train our method with the decomposed in-distribution likelihoods, p(T(x)|x) and p(S(x)|x) (Figure 2). Given an input image x, we extract the features for each variable with different approaches. In detail, we distil the texture and semantic information with T(x) and S(x), respectively.
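The λ-controlled combination described in this overview can be sketched as a simple score function; a minimal illustration, where the two log-likelihood values are assumed to come from the texture and semantic likelihood heads:

```python
def combined_ood_score(log_p_texture, log_p_semantic, lam=0.5):
    """Weighted combination of the two factor log-likelihoods with the
    control parameter λ ∈ [0, 1]. The anomaly score is the negative
    combined log-likelihood, so lower likelihood means more OOD;
    λ = 1 trusts only the texture factor, λ = 0 only the semantic one."""
    assert 0.0 <= lam <= 1.0
    return -(lam * log_p_texture + (1.0 - lam) * log_p_semantic)

# A sample whose texture fits the in-distribution but whose semantics do not
# scores as far more anomalous when λ is lowered:
score_tex_mode = combined_ood_score(-1.0, -20.0, lam=1.0)
score_sem_mode = combined_ood_score(-1.0, -20.0, lam=0.0)
```

This is exactly the control mechanism that lets one model serve both the texture-sensitive and the semantics-sensitive OOD scenarios without retraining.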
The extracted features are combined by the controllable normalizing flow method. Since our normalizing flow-based model explicitly calculates the negative log-likelihood, we model each extracted factor as log p_θ(T(x)|x) and log p_φ(S(x)|x), where θ and φ are the trainable parameters of the networks. In addition, we introduce the control parameter λ ∈ [0, 1] to model the final log-likelihood as

λ · log p(T(x)|x) + (1 − λ) · log p(S(x)|x).

With this control mechanism, a user can determine the appropriate model "mode" by referring to prior knowledge. For example, in the case where the texture information overwhelms the semantic one for detecting OOD, we can overweight λ for better performance. By default, we use a λ value of 0.5 (no prior knowledge). 3.2 EXTRACTING THE SEMANTIC INFORMATION . Multi-SVDD . Beyond one-class anomaly detection, which considers the normal data as a single class (e.g., DeepSVDD (Ruff et al., 2018)), recent studies have viewed the normal data as the union of multiple hidden semantic modes (Ghafoori & Leckie, 2020; Park et al., 2021). Inspired by this idea, we use the multi-SVDD method to extract the semantic information in an unsupervised manner for the OOD detection task. Multi-SVDD embeds the samples as close as possible to multiple center vectors. Suppose the set of center vectors C = {c_1, ..., c_K} is initialized via K-means and the radius of each center is r = [r_1, ..., r_K]. In multi-SVDD, the objective function is defined as follows:

min_{W, r} Σ_{k=1}^{K} r_k^2 + (1/(νn)) Σ_{i=1}^{n} max{0, ‖φ(x_i; W) − c_j‖^2 − r_j^2} + (η/2) Σ ‖W‖^2.   (1)

Here, φ(x_i; W) is the deep network with a set of weight parameters W, and c_j is the center assigned to φ(x_i; W). As the radii r decrease, the samples are condensed toward the center vectors. By using the distance between the center vectors and the samples, we get an anomaly score. Angular distance initialization .
The SVDD method was originally introduced for the anomaly detection task. Because of the disparity between the OOD and anomaly detection scenarios, directly applying an SVDD-based model to OOD detection causes unexpected performance degradation. In anomaly detection, even though abnormal samples may lie on the in-distribution manifold, it is possible to detect them as abnormal as long as they are not close to the center vectors c_j. For example, as shown in Figure 3a, the OOD samples (red) that are located inside the in-distribution manifold (light blue shade) can be detected as abnormal since they are outside the tight cluster boundary (dark blue shade). Because of this characteristic, a mixture-of-Gaussians probability density is a reasonable density space for an anomaly detection model. Unlike the anomaly detection task, the goal of the OOD detection task is to find the samples that are not from the "in-distribution". In Figure 3a, all the OOD samples placed on the in-distribution manifold (light blue shade) would be recognized as in-distribution. To tackle this issue, we propose an angular distance-based center vector initialization strategy:

c_k = γ v / ‖v‖,  v ∈ R^h, v ∼ N(0, 1),   (2)

where h is the dimension of the embedding space and γ is the hyper-parameter for the radius of the sphere. After φ(x_i; W) is trained based on the angular initialization, semantic features are extracted through this model: S(x_i) = φ(x_i; W). By setting the γ value large enough, we ensure that all sample data are within the radius of the sphere, as illustrated in Figure 3b. While Equation 1 drives the training samples to be embedded around the center vectors on the sphere, the OOD samples remain near the origin. This embedding space may be weak at recognizing the semantic label of a given sample, but it is sufficient to identify whether the sample is OOD or not.
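The initialization of Eq. (2) and the data terms of Eq. (1) can be sketched together; a minimal NumPy illustration operating on precomputed embeddings (the network φ, its training loop, and the weight-decay term (η/2)Σ‖W‖² are omitted):

```python
import numpy as np

def angular_center_init(K, h, gamma=10.0, seed=0):
    """Eq. (2): c_k = γ v / ||v|| with v ~ N(0, I), i.e. centers drawn
    uniformly on a sphere of radius γ in the embedding space R^h."""
    v = np.random.default_rng(seed).standard_normal((K, h))
    return gamma * v / np.linalg.norm(v, axis=1, keepdims=True)

def multi_svdd_loss(emb, centers, radii, nu=0.1):
    """Data terms of Eq. (1): sum of squared radii plus the hinge
    penalty for embeddings falling outside their assigned center's
    radius; each sample is assigned to its nearest center c_j."""
    d2 = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, K)
    j = d2.argmin(axis=1)                                          # assigned center index
    hinge = np.maximum(0.0, d2[np.arange(len(emb)), j] - radii[j] ** 2)
    return (radii ** 2).sum() + hinge.mean() / nu                  # (1/(νn))Σ = mean/ν

centers = angular_center_init(K=4, h=8, gamma=10.0)
radii = np.ones(4)
loss_at_centers = multi_svdd_loss(centers.copy(), centers, radii)  # hinge term is zero
```

Embeddings that collapse toward the origin, far from every spherical center, incur a large hinge penalty, which is what makes the OOD samples in Figure 3b separable.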
The submission proposes to evaluate OOD detection problems with regards to two aspects — detect distributional shift in texture, or that in object-identity. A Fourier transform is used to identify changes in texture, and a modification of SVDD is used to identify changes in object identity, by building density models on top of a PCA-reduction of the extracted features in both cases. Experiments are conducted which showcase that the two components can detect textural vs. object-identity shift while not mis-identifying one type of shift for the other.
SP:3887435d443946bde2ef1f6939bffab745ab69b4
Decomposing Texture and Semantics for Out-of-distribution Detection
The paper proposes an OOD setting emphasizing on texture and semantics. The authors propose an OOD detection method which disentangles texture and semantics. The method achieves SoA performances.
SP:3887435d443946bde2ef1f6939bffab745ab69b4
Beyond Quantization: Power aware neural networks
1 INTRODUCTION . With the ever-increasing popularity of deep neural networks (DNNs) for tasks like face detection, voice recognition, and image enhancement, power consumption has become one of the major considerations in the design of DNNs for resource-limited end-devices. Over the last several years, a plethora of approaches has been introduced for achieving power efficiency in DNNs. These range from specialized architectures (Sandler et al., 2018; Huang et al., 2019; Tan et al., 2019; Radosavovic et al., 2020) to hardware-oriented methods like multiplier-free designs and low-precision arithmetic. Multiplier-aware methods attempt to reduce power consumption by avoiding the costly multiplication operations, which dominate the computations in a DNN. Several works replaced multiplications by additions (Courbariaux et al., 2015; Li et al., 2016; Chen et al., 2020), by bit-shift operations (Elhoushi et al., 2019), or by both (You et al., 2020). Others employed efficient matrix multiplication operators (Tschannen et al., 2018; Lavin & Gray, 2016). However, most methods in this category introduce dedicated architectures, which require training the network from scratch. This poses a severe limitation, as different variants of the network need to be trained for different power constraints. Low-precision DNNs reduce power consumption by using low-precision arithmetic. This is done either via quantization-aware training (QAT) or with post-training quantization techniques. The latter avoid the need for retraining the network but often still require access to a small number of calibration samples in order to adapt the network's weights. Such techniques include approaches like re-training, fine-tuning, calibration, and optimization (Banner et al., 2019; Jacob et al., 2018; Nahshan et al., 2019; Li et al., 2021).
All existing methods in this category suffer from a large drop in accuracy with respect to the full-precision version of the network, especially when working at very low bit widths. Moreover, similarly to the multiplier-free approaches, they do not provide a mechanism for traversing the power-accuracy trade-off without actually changing the hardware (e.g., replacing an 8-bit multiplier by a 4-bit one). In this work, we introduce a power-aware neural network (PANN) approach that dramatically cuts down the power consumption of any model. Our method can be applied at post-training to improve the power efficiency of a pre-trained model, or in a QAT setting to obtain even better results. Our approach is based on a careful analysis of the power consumed by additions and multiplications as functions of several factors. We rely on bit-toggling activity, which is known to be the main factor affecting dynamic power consumption, and support our theoretical analysis with accurate gate-level simulations on a 5nm process. Our first important observation is that a major portion of the power consumed by a DNN is due to the use of signed integers. We therefore present a simple method for converting any pre-trained model to use unsigned arithmetic. This conversion does not change the functionality of the model and, as can be seen in Fig. 1, dramatically reduces power consumption on common hardware configurations. Our second observation is that the multiplier's power consumption is dominated by the larger bit width among its two inputs. Therefore, although high accuracy can often be achieved with quite drastic quantization of the weights alone, this common practice turns out to be ineffective in terms of power consumption. To be able to take advantage of drastic weight quantization, here we introduce a method that removes the multiplier altogether. Our approach can work in combination with any activation quantization method.
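The unsigned-conversion idea rests on a simple algebraic identity: shifting b-bit signed weights by an offset o = 2^(b-1) makes them non-negative, and the constant correction term o·Σx can be folded into the layer's bias. A minimal sketch of that identity follows; the offset-and-correct scheme here is an illustration, not necessarily the authors' exact conversion procedure:

```python
import numpy as np

def signed_to_unsigned_dot(w, x, b):
    """Compute sum(w * x) using only non-negative (unsigned) weights.

    Shifting b-bit signed weights by o = 2**(b-1) makes them non-negative;
    subtracting the constant o * sum(x) restores the exact result, so the
    model's functionality is unchanged. (Illustrative sketch only.)
    """
    o = 2 ** (b - 1)
    w_u = w + o                       # now in [0, 2**b): unsigned
    assert (w_u >= 0).all()
    return int(w_u @ x) - o * int(x.sum())

rng = np.random.default_rng(0)
b = 4
w = rng.integers(-2 ** (b - 1), 2 ** (b - 1), size=64)  # signed b-bit weights
x = rng.integers(0, 2 ** b, size=64)                    # unsigned activations
assert signed_to_unsigned_dot(w, x, b) == int(w @ x)
```

Since the correction term depends only on the activation sum, it can be accumulated once per output neuron rather than per MAC.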
We show theoretically and experimentally that this method is far more advantageous than existing quantization methods at low power budgets, both at post-training and in QAT settings (see Fig. 1 and Sec. 6). Our method allows working under any power constraint by tuning the number of additions used to approximate each multiply-accumulate (MAC) operation. This is in contrast to regular quantization methods, which are limited to particular values. We can thus traverse the power-accuracy trade-off without changing the architecture (e.g., the bit width of the multiplier), as required by existing methods.

2 RELATED WORK.

Avoiding multiplications. In fixed-point (integer) representation, additions are typically much more power-efficient than multiplications (Horowitz, 2014b;a). Some works suggested binarizing or ternarizing the weights to enable working with additions only (Courbariaux et al., 2015; Lin et al., 2015; Li et al., 2016). However, this often severely impairs the network's accuracy. Recent works suggested replacing multiplications by bit shifts (Elhoushi et al., 2019), by additions (Chen et al., 2020), or by both (You et al., 2020). Other methods reduce the number of multiplications by inducing sparsity (Venkatesh et al., 2016; Mahmoud et al., 2020), decomposition into smaller intermediate products (Kim et al., 2016), Winograd-based convolutions (Lavin & Gray, 2016), or Strassen's matrix multiplication algorithm (Tschannen et al., 2018).

Table 1: Average number of bit flips per signed MAC. The b-bit multiplier inputs are drawn uniformly from [−2^(b−1), 2^(b−1)) and its b_acc = 2b bit output is summed with the B-bit number in the FF.

Element                      | Toggles
Multiplier inputs            | 0.5b + 0.5b
Multiplier's internal units  | 0.5b^2
Accumulator input            | 0.5B
Accumulator sum & FF         | 0.5b_acc + 0.5b_acc
Some of these methods require internal changes in the model , a dedicated backpropagation scheme , or other modifications to the training process . Quantization DNN quantization approaches include post-training quantization ( PTQ ) , which is applied to a pre-trained model , and quantization-aware training ( QAT ) , where the network ’ s weights are adapted to the quantization during training ( Gupta et al. , 2015 ; Louizos et al. , 2018 ; Achterhold et al. , 2018 ; Esser et al. , 2019 ) . PTQ methods are more flexible in that they do not require access to the training set . These methods show optimal results for 8-bit quantization , but tend to incur a large drop in accuracy at low bit widths . To battle this effect , some PTQ methods minimize the quantization errors of each layer individually by optimizing the parameters over a calibration set ( Nahshan et al. , 2019 ; Nagel et al. , 2020 ; Hubara et al. , 2020 ) . Others use nonuniform quantization ( Liu et al. , 2021 ; Fang et al. , 2020 ) . Effort is also invested in avoiding the need of any data sample for calibration ( Cai et al. , 2020 ; Shoukai et al. , 2020 ; Nagel et al. , 2019 ; Haroush et al. , 2020 ) . These methods , however , still show a significant drop in accuracy at the lower bit widths , while frequently requiring additional computational resources . Common to all quantization works is that they lack analysis of the power consumed by each arithmetic operation as a function of bit-width , and thus can not strive for optimal power-accuracy trade-offs . 3 POWER CONSUMPTION OF A CONVENTIONAL DNN . The total amount of power consumed by a logic circuit can be attributed to two main sources : a static power component and a dynamic one . The static power is due to a constant leakage current . It does not depend on the circuit ’ s activity and is typically the smaller component among the two . 
The dynamic power consumed by each node in the circuit is given by P = C·V^2·f·α, where C is the node capacitance, V is the supply voltage, f is the operating frequency, and α is the switching activity factor (the average number of bit flips per clock) (Nasser et al., 2017). Here we focus on dynamic power, which is a major contributor to the overall power consumption (see Appendix A.1 and (Karimi et al., 2019; Kim et al., 2020)). Also, this is the only factor affected by the DNN architecture. Most of the computation in a forward pass of a DNN can be attributed to MAC operations. As shown in Fig. 2, MACs involve a multiplier that accepts two b-bit numbers and outputs a b_acc-bit result (b_acc = 2b to account for the largest possible product), and an accumulator with a large bit width B to which the multiplier's output is added repeatedly. To understand how much power each of these components consumes, we simulated them in Python. For the multiplier, we used the Booth-encoding architecture, which is considered efficient in terms of bit toggling (Asif & Kong, 2015). For the accumulator, we simulated a serial adder. Our Python simulation measures the total number of bit flips in each MAC operation, including at the inputs, at the outputs, in the flip-flop (FF) register holding the previous sum, and within each of the internal components (e.g., the full adders) of the multiplier. We also verified our analysis with an accurate physical gate-level simulation on a 5nm process and found good agreement in terms of the dependence of power consumption on the bit widths (see details in Appendix A.1). Table 1 shows the average number of bit flips per MAC when both inputs to the multiplier are drawn uniformly at random from [−2^(b−1), 2^(b−1)) (Gaussian inputs lead to similar results; see Appendix Figs. 7-8).
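The toggle-counting methodology can be sketched in a few lines: represent each value on a bus in two's complement and count the bit flips (XOR popcount) between consecutive values. The sketch below also illustrates why signed arithmetic is costly at the accumulator input: a signed product's sign extension toggles across the full B-bit bus, whereas an unsigned product only toggles the low bits. The bus widths here are example values, and this simple stream model is our illustration rather than the paper's full Booth-multiplier simulation:

```python
import random

def twos_complement(v, width):
    """Non-negative integer whose bits are the two's-complement encoding of v."""
    return (v + (1 << width)) % (1 << width)

def avg_input_flips(values, width):
    """Average Hamming distance between consecutive values on a width-bit bus."""
    flips = 0
    prev = twos_complement(values[0], width)
    for v in values[1:]:
        cur = twos_complement(v, width)
        flips += bin(prev ^ cur).count("1")
        prev = cur
    return flips / (len(values) - 1)

random.seed(0)
b, B, n = 4, 32, 50_000
signed = [random.randrange(-2 ** (b - 1), 2 ** (b - 1)) *
          random.randrange(-2 ** (b - 1), 2 ** (b - 1)) for _ in range(n)]
unsigned = [random.randrange(0, 2 ** b) * random.randrange(0, 2 ** b)
            for _ in range(n)]

print(avg_input_flips(signed, B))    # close to 0.5 * B = 16: sign extension toggles
print(avg_input_flips(unsigned, B))  # much smaller: only the low 2b bits toggle
```

Roughly half of consecutive signed products differ in sign, and each sign change flips all B − 2b sign-extension bits, which is where the ~0.5B figure in Table 1 comes from.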
As can be seen, the power consumed by the multiplier is given by

P_mult = 0.5b^2 + b,   (1)

where 0.5b^2 is due to the bit toggling in the internal units, and 0.5b is contributed by the bit flips in each input. The power consumed by the accumulator is given by

P_acc = 0.5B + 2b,   (2)

where 0.5B is due to the bit toggling in its input coming from the multiplier, 0.5b_acc = b (recall b_acc = 2b) to the bit flips at the output, and an additional 0.5b_acc = b to the bit flips in the FF. These results lead us to our first important observation.

Observation 1. A dominant source of power consumption is the bit toggling at the input of the accumulator (0.5B).

Suppose, for example, we use b = 4 bits for representing the weights and activations and employ a B = 32 bit accumulator, as is common in modern architectures (Kalamkar et al., 2019; Rodriguez et al., 2018). Then the toggling at the input of the accumulator (0.5B = 16) is responsible for 44.4% of the total power consumption (P_mult + P_acc = 36). At lower bit widths, this percentage is even larger. Unfortunately, existing quantization methods and multiplier-free designs do not battle this source of power consumption. Ni et al. (2021) have recently shown that the bit width B of the accumulator can be somewhat reduced by explicitly accounting for overflows. However, this approach requires dedicated training and degrades the network's classification accuracy at low values of B. As we now show, it is possible to drastically reduce the bit toggles at the input of the accumulator at post-training without changing the model's functionality (thus retaining the same classification accuracy).
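Plugging the example numbers (b = 4, B = 32) into Eqs. (1) and (2) reproduces the 44.4% figure from Observation 1:

```python
def mac_power(b, B):
    """Average bit flips per MAC under the paper's model (Eqs. 1 and 2)."""
    p_mult = 0.5 * b ** 2 + b   # internal units + both b-bit inputs
    p_acc = 0.5 * B + 2 * b     # accumulator input + sum output + FF
    return p_mult, p_acc

p_mult, p_acc = mac_power(b=4, B=32)
total = p_mult + p_acc                # 12 + 24 = 36 bit flips per MAC
acc_input_share = (0.5 * 32) / total  # 16/36, i.e. about 44.4%
print(total, acc_input_share)
```

At b = 2 the same model gives a total of 24 flips, of which the accumulator input contributes 16, i.e. two thirds, confirming that the share grows as the bit width shrinks.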
The authors observed that power consumption is dominated by the bit toggling at the input of the accumulator, and that decreasing the bit width of only the weights or only the activations has limited benefit in reducing the power consumed by the multiplier. The paper proposes PANN, which uses techniques such as unsigned arithmetic in CNNs and implementing multiplications via additions to achieve a multiplier-free, power-aware neural network. Experiments on both post-training quantization and quantization-aware training showcase better performance compared to other works under the same number of bit flips. However, the reviewer believes there are fundamental flaws that need to be thoroughly addressed before this paper is ready for ICLR.
The paper argues that power consumption is a major obstacle in deploying DNNs to end devices and that current quantization approaches do not take power consumption directly into account and are therefore not optimal in reducing it. Using an approximate power model based on the average number of bit flips, the authors make two observations that are frequently overlooked by existing quantization approaches: 1) a significant portion of the power consumption of the MAC operation is due to the use of signed integers, and using unsigned integers instead can significantly reduce power consumption; 2) the multiplier's power consumption is dominated by the larger of the two bit widths (weight or activation), so using fewer bits for only one of the two (e.g., the weights) is not power efficient. Based on this, the authors introduce a new weight quantization approach (PANN) which removes the multiply operation and replaces it with additions. This allows PANN to efficiently reduce power consumption. For the same power budget, PANN achieves significantly higher accuracy (or effective bit width), but at the cost of higher latency and memory usage.
Poisoning and Backdooring Contrastive Learning
1 INTRODUCTION . Contrastive learning ( Chopra et al. , 2005 ; Hadsell et al. , 2006 ) trains a model that projects a data distribution onto a lower-dimensional embedding space such that similar objects in the origin space are closer together in the embedding space than dissimilar objects ( Chechik et al. , 2010 ; Sohn , 2016 ; Oord et al. , 2018 ; Wu et al. , 2018 ) . Significant advances over the last years have enabled self-supervised classifiers to achieve state of the art accuracy by training on noisy and uncurated datasets ( Radford et al. , 2021 ; Tian et al. , 2021 ) , which brings two significant benefits . First , training on uncurated data is cheaper ( Joulin et al. , 2016 ) . Compared to an estimated several million USD it cost to label the ImageNet dataset ( Deng et al. , 2009 ) , contrastively trained models can train without expensive labeling efforts ( Chen et al. , 2020a ) . Further , because each image in ImageNet is required to contain one of just 1,000 different objects , there are large categories of images that can never be part of this supervised dataset ( Jia et al. , 2021 ) . On the other hand , a contrastive model can learn on arbitrary images whether or not they have a suitable corresponding label in some dataset . Second , training on noisy data improves robustness ( Radford et al. , 2021 ) . Classifiers trained exclusively on ImageNet overfit the particular details of this training set ( Recht et al. , 2019 ; Hendrycks & Dietterich , 2019 ) , and do not generalize to other test sets ( Taori et al. , 2020 ) . Contrastive models trained on data scraped from the Internet exhibit impressive robustness properties ; The contrastively trained CLIP ( Radford et al. , 2021 ) model is the first technique to show significant effective robustness on ImageNet-V2 ( Recht et al. , 2019 ; Taori et al. , 2020 ) . Contributions . 
We make the case that training on unfiltered data may be undesirable if even a tiny fraction of the data could be maliciously poisoned by an adversary. And this is likely the case: the data is scraped from the Internet (Jia et al., 2021) without any human review before it is passed to the learning algorithm (Radford et al., 2021; Jia et al., 2021; Tian et al., 2021). Thus, because these datasets are explicitly "noisy" (Jia et al., 2021) and "uncurated" (Tian et al., 2019), we argue the likelihood of at least one adversary is high. We show that this adversary can mount powerful targeted poisoning (Biggio et al., 2012) and backdoor attacks (Gu et al., 2017; Chen et al., 2017) against multimodal contrastive models. A poisoning adversary introduces malicious examples into the training dataset so that the model will misclassify a particular input at test time as an adversarially desired label. We then consider patch-based backdoors, where the adversary poisons a dataset so that the learned model will classify any input that contains a particular trigger pattern as a desired target label. Existing attacks are sufficient to poison contrastively trained models (Biggio et al., 2012; Gu et al., 2017; Chen et al., 2017), although we must adapt them to this new domain. The primary contribution of this paper is an empirical evaluation, spanning 20,000 GPU-hours, showing that these attacks are immediately practical. Compared to prior backdooring attacks, which require poisoning on average 1% of the training data for successful clean-label attacks (Shafahi et al., 2018; Saha et al., 2021), we find that attacking contrastive models requires orders of magnitude fewer injections: just 0.01% suffices for many of our backdoor attacks, or 0.0001% for poisoning attacks.

2 BACKGROUND, NOTATION, AND RELATED WORK.

2.1 POISONING AND BACKDOOR ATTACKS.

In a poisoning attack (Biggio et al.
, 2012), an adversary modifies a benign training dataset X by injecting poisoned examples P to form a poisoned dataset X′ = X ∪ P. When the victim runs the training algorithm T on the modified training dataset X′, they obtain a poisoned model f_θ ← T(X′). This model f_θ will now perform well in most standard settings, but because of the poisoned examples P, the adversary will control how it behaves in other settings. We first consider targeted poisoning (Barreno et al., 2006; Biggio et al., 2012), where an adversary injects poisoned examples so that some input x′ will be misclassified as a desired target y′. Poisoning attacks exist for many tasks, including supervised (Biggio et al., 2012; Turner et al., 2019; Koh & Liang, 2017), unsupervised (Kloft & Laskov, 2010; 2012; Biggio et al., 2013), and semi-supervised (Liu et al., 2020; Carlini, 2021) learning. However, the main limitation of these attacks is that they typically require injecting poisoned samples into curated datasets, which in practice may be difficult to achieve. Our attacks apply to uncurated and noisy datasets, making them more realistic. We then turn to backdoor attacks on image classifiers. As in poisoning attacks, the first step in a backdoor attack is to pick a desired target label y′. Instead of causing one particular image to be classified as y′, a backdoor attack makes any image with a backdoor patch applied classified as y′ (Gu et al., 2017; Chen et al., 2017). We write x′ = x ⊕ bd to denote a backdoored image, and consider the standard checkerboard backdoor that is overlaid on top of the image (Gu et al., 2017); see Figure 1 for an example. We consider two approaches to placing the backdoor on the image. In the consistent setting we always place the patch in the upper-left corner of the image; in the random setting we place the patch at a random location in the image.

2.2 CONTRASTIVE LEARNING.
In its most general definition, contrastive learning (Chopra et al., 2005; Hadsell et al., 2006; Sohn, 2016; Oord et al., 2018) constructs an embedding function f : X → E that maps objects of one type (e.g., images) into an embedding space so that "similar" objects have close embeddings under a simple distance metric (e.g., Euclidean distance or cosine similarity). Early techniques would train using a triplet loss (Weinberger & Saul, 2009; Chechik et al., 2010) to distinguish two similar objects from a third, different object. However, more recent techniques now compute the contrastive loss across the entire mini-batch (Sohn, 2016; Oord et al., 2018). While this direction traditionally focused on a single domain (e.g., classifiers trained only on images (Sohn, 2016; Wu et al., 2018; Bachman et al., 2019; Chen et al., 2020a;b)), within this past year multimodal (Weston et al., 2010; Socher & Fei-Fei, 2010) contrastive learning techniques have begun to emerge that demonstrate significant and surprising benefits (Radford et al., 2021; Jia et al., 2021). Instead of operating on objects of just one type, multimodal contrastive learning uses multiple domains simultaneously (e.g., images and text) (Zhang et al., 2020). We focus on multimodal classifiers. The dataset X ⊂ A × B here consists of objects drawn from two modes; in this paper, images (A) and text captions (B). Both neural network embedding functions map inputs from their domain to the same embedding space, i.e., f : A → E and g : B → E. For a given training example (a, b) ∈ X, the training objective then maximizes an inner product (e.g., cosine similarity) between the embeddings ⟨f(a), g(b)⟩ while minimizing the inner product between this example and other examples (a′, b′) ∈ X. Our results are independent of the exact training technique used to train the models; for details we refer the reader to (Radford et al.
, 2021). Use of contrastive models. Contrastively trained models are typically used in one of two ways. 1. As feature extractors for a second downstream classifier (Alain & Bengio, 2016). We use f to map some new training dataset X̂ into the embedding space E, and then train a linear classifier z : E → Y to map the embeddings to predictions of the downstream task. 2. As zero-shot classifiers. Given object descriptions (e.g., t1 = "A photo of a cat" and t2 = "A photo of a dog"), a contrastive classifier evaluates the embeddings e_i = g(t_i). At test time, the classification of x is given by the scores z(x) = {⟨e_i, f(x)⟩ : i ∈ [0, N]}.

2.3 THREAT MODEL.

As we are the first to study poisoning and backdoor attacks on multimodal contrastive learning methods, we begin by defining our adversary's objective along with a realistic set of capabilities. Adversary Objective. The ultimate goal of our attack is to cause the contrastive model to behave incorrectly in one of the three cases above. Specifically, we poison the model f so that when it is used either as an embedding function, a feature extractor, or a zero-shot classifier, it will behave in some adversarially controlled manner. We focus our paper on attacking the image embedding function f. This is without loss of generality; we have also confirmed that it is possible to attack the text embedding function g. However, most prior work studies poisoning images, and so we do too. Adversary Capabilities. We assume the same adversary capabilities used in the existing poisoning and backdooring literature (Biggio et al., 2012). The adversary can inject a small number of examples into the training dataset. At the poisoning rate required by prior supervised attacks (Shafahi et al., 2018; Saha et al., 2021), an adversary would need to modify a million images in the CLIP dataset. This is not realistic.
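The zero-shot scoring step described above can be sketched as follows. The encoders f and g are stand-ins here: random vectors replace real embeddings, since only the scoring rule z(x) = {⟨e_i, f(x)⟩} is being illustrated:

```python
import numpy as np

def normalize(v):
    """L2-normalize along the last axis so inner products are cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def zero_shot_classify(image_emb, text_embs):
    """Score an image embedding f(x) against N class-description embeddings e_i.

    Returns the argmax class index and all cosine-similarity scores.
    """
    scores = normalize(text_embs) @ normalize(image_emb)
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
text_embs = rng.normal(size=(2, 512))   # stand-ins for g("A photo of a cat"), g("...dog")
image_emb = text_embs[1] + 0.1 * rng.normal(size=512)  # an image embedding near class 1
pred, scores = zero_shot_classify(image_emb, text_embs)
assert pred == 1
```

The attacks that follow succeed precisely by shifting f(x′) toward the text embedding of the adversarial class, so that this argmax lands on y′.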
In our paper we consider adversaries who can poison 100–10,000× fewer images. When we use the poisoned model as a feature extractor, we assume the adversary does not have access to the fine-tuning task's training dataset or algorithm: once the contrastive model has been poisoned or backdoored, the adversary no longer has any control over the downstream use case. 3 POISONING AND BACKDOORING ATTACK ALGORITHM. Both our poisoning and backdoor attacks follow the same general procedure. We begin with the simpler case of targeted poisoning: given an example x′ and an incorrect target label y′, the adversary supplies the contrastive learning algorithm with a poison set P designed so that y′ = z(fθ(x′)), that is, the learned model fθ ← T(X ∪ P) will compute an embedding such that the classifier z misclassifies the input. Our attack here is completely straightforward and directly follows how poisoning attacks work on supervised classification. Because models overfit their training dataset (Zhang et al., 2017), and because contrastively trained models have higher train-test gaps than supervised classifiers (Radford et al., 2021), we need only inject image-text pairs that cause the model to map x′ into the concept class of y′.
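Both attacks described in this chunk reduce to constructing malicious image-caption pairs. A hedged sketch of the two constructions follows (the caption templates and patch size are invented for illustration, and the checkerboard trigger follows the description in Section 2.1; this is not the authors' implementation):

```python
import numpy as np

def make_targeted_poison(target_image, target_label, templates=None):
    """Targeted poisoning: pair the target image x' with captions that
    describe the adversarially desired label y'."""
    if templates is None:  # hypothetical caption templates
        templates = ["a photo of a {}", "a picture of a {}", "an image of a {}"]
    return [(target_image, t.format(target_label)) for t in templates]

def apply_backdoor(image, patch_size=4, consistent=True, rng=None):
    """Overlay a checkerboard trigger bd on a copy of `image` (H, W, C),
    i.e., compute x' = x (+) bd. `consistent` pins the patch to the
    upper-left corner; otherwise the location is drawn at random."""
    x = image.copy()
    h, w = x.shape[:2]
    patch = np.indices((patch_size, patch_size)).sum(axis=0) % 2  # 0/1 board
    if consistent:
        r = c = 0
    else:
        rng = rng or np.random.default_rng()
        r = int(rng.integers(0, h - patch_size + 1))
        c = int(rng.integers(0, w - patch_size + 1))
    x[r:r + patch_size, c:c + patch_size] = patch[..., None]
    return x
```

The poison pairs are simply appended to the scraped training set; the patch function is used both to build backdoor training examples and to trigger the backdoor at test time.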
This paper explores data security in multimodal contrastive learning. In particular, it designs an image-text pair generation method to poison the dataset, driving the model to misclassify a particular test input or any image carrying a small patch. While the generation method is quite simple, the attack success rate is impressive (as few as 3 poisoned examples out of 3 million images suffice for targeted poisoning attacks). This paper reminds us that it is potentially dangerous to train models on noisy and uncurated Internet scrapes, as some SOTA algorithms do.
SP:e4e55a9fb7e1f55fd0a6d13c315f524774b74b5f
Poisoning and Backdooring Contrastive Learning
1 INTRODUCTION. Contrastive learning (Chopra et al., 2005; Hadsell et al., 2006) trains a model that projects a data distribution onto a lower-dimensional embedding space such that similar objects in the original space are closer together in the embedding space than dissimilar objects (Chechik et al., 2010; Sohn, 2016; Oord et al., 2018; Wu et al., 2018). Significant advances over the last few years have enabled self-supervised classifiers to achieve state-of-the-art accuracy by training on noisy and uncurated datasets (Radford et al., 2021; Tian et al., 2021), which brings two significant benefits. First, training on uncurated data is cheaper (Joulin et al., 2016). Compared to the estimated several million USD it cost to label the ImageNet dataset (Deng et al., 2009), contrastively trained models can train without expensive labeling efforts (Chen et al., 2020a). Further, because each image in ImageNet is required to contain one of just 1,000 different objects, there are large categories of images that can never be part of this supervised dataset (Jia et al., 2021). A contrastive model, on the other hand, can learn from arbitrary images whether or not they have a suitable corresponding label in some dataset. Second, training on noisy data improves robustness (Radford et al., 2021). Classifiers trained exclusively on ImageNet overfit the particular details of this training set (Recht et al., 2019; Hendrycks & Dietterich, 2019) and do not generalize to other test sets (Taori et al., 2020). Contrastive models trained on data scraped from the Internet exhibit impressive robustness properties; the contrastively trained CLIP (Radford et al., 2021) model is the first technique to show significant effective robustness on ImageNet-V2 (Recht et al., 2019; Taori et al., 2020). Contributions.
We make the case that training on unfiltered data may be undesirable if even a tiny fraction of the data could be maliciously poisoned by an adversary. And this is likely the case: the data is scraped from the Internet (Jia et al., 2021) without any human review before it is passed to the learning algorithm (Radford et al., 2021; Jia et al., 2021; Tian et al., 2021). Thus, because these datasets are explicitly “noisy” (Jia et al., 2021) and “uncurated” (Tian et al., 2019), we argue the likelihood of at least one adversary is high. We show that this adversary can mount powerful targeted poisoning (Biggio et al., 2012) and backdoor attacks (Gu et al., 2017; Chen et al., 2017) against multimodal contrastive models. A poisoning adversary introduces malicious examples into the training dataset so that the model will misclassify a particular input at test time with an adversarially desired label. We then consider patch-based backdoors, where the adversary poisons a dataset so that the learned model will classify any input that contains a particular trigger pattern with a desired target label. Existing attacks are sufficient to poison contrastively trained models (Biggio et al., 2012; Gu et al., 2017; Chen et al., 2017), although we must adapt them to this new domain. The primary contribution of this paper is an empirical evaluation, totaling 20,000 GPU-hours, showing that these attacks are immediately practical. Compared to prior backdooring attacks, which require poisoning on average 1% of the training data for successful clean-label attacks (Shafahi et al., 2018; Saha et al., 2021), we find that attacking contrastive models requires orders of magnitude fewer injections: just 0.01% suffices for many of our backdoor attacks, or 0.0001% for poisoning attacks. 2 BACKGROUND, NOTATION, AND RELATED WORK. 2.1 POISONING AND BACKDOOR ATTACKS. In a poisoning attack (Biggio et al.
, 2012), an adversary modifies a benign training dataset X by injecting poisoned examples P to form a poisoned dataset X′ = X ∪ P. When the victim runs the training algorithm T on the modified training dataset X′, they obtain a poisoned model fθ ← T(X′). This model fθ will perform well in most standard settings, but because of the poisoned examples P, the adversary controls how it behaves in other settings. We first consider targeted poisoning (Barreno et al., 2006; Biggio et al., 2012), where an adversary injects poisoned examples so that some input x′ will be misclassified with a desired target label y′. Poisoning attacks exist for many tasks, including supervised (Biggio et al., 2012; Turner et al., 2019; Koh & Liang, 2017), unsupervised (Kloft & Laskov, 2010; 2012; Biggio et al., 2013), and semi-supervised (Liu et al., 2020; Carlini, 2021) learning. However, the main limitation of these attacks is that they typically require injecting poisoned samples into curated datasets, which in practice may be difficult to achieve. Our attacks apply to uncurated and noisy datasets, making them more realistic. We then turn to backdoor attacks on image classifiers. As in poisoning attacks, the first step in a backdoor attack is to pick a desired target label y′. Instead of causing one particular image to be classified as y′, a backdoor attack makes any image with a backdoor patch applied be classified as y′ (Gu et al., 2017; Chen et al., 2017). We write x′ = x ⊕ bd to denote a backdoored image, and consider the standard checkerboard backdoor that is overlaid on top of the image (Gu et al., 2017); see Figure 1 for an example. We consider two approaches to placing the backdoor on the image. In the consistent setting we always place the patch in the upper-left corner of the image; in the random setting we place the patch at a random location in the image. 2.2 CONTRASTIVE LEARNING.
The paper describes poisoning and backdooring attacks on CLIP, a recent method to (pre-)train multimodal networks with a contrastive objective. The setting here differs from a classical supervised BadNet-like setup in that learned embeddings are not necessarily mapped directly to a class of choice. The authors evaluate both the zero-shot learning setup and linear probes, and find that both can be successfully backdoored and poisoned with far fewer poisoned examples than in the supervised setting. The authors extensively evaluate their attack and pinpoint which hyperparameters make it more effective.
SP:e4e55a9fb7e1f55fd0a6d13c315f524774b74b5f
Discovering Classification Rules for Interpretable Learning with Linear Programming
1 INTRODUCTION. Medical diagnosis and educational and juridical decisions often have important consequences for society. Therefore, both the accuracy and interpretability of these decisions are of crucial importance. In particular, these decisions should be understandable by the decision makers. Rule sets consisting of a few intuitively coherent rules have been shown to accomplish this purpose in such domains (Lakkaraju et al., 2016). Here, we first aim at rule extraction from powerful ensemble methods and then focus solely on rule generation. In both cases, our objective is to obtain a set of classification rules that balances the trade-off between accuracy and interpretability. Our main tools in this effort are linear programming and column generation. A rule is an independent if-then statement containing one or more conditions that assign a class to a set of samples. For example, in binary classification, “if (Clump Thickness is greater than six) and (Single Epithelial Cell Size is less than four) then the tumor is malignant” is such a rule that can be used for breast cancer diagnosis. When a sample satisfies this rule, it receives a label corresponding to one of the two classes. In case a sample is covered by more than one rule, majority voting among the assigned labels is used to determine the class of the sample. Growing decision trees is closely related to rule-based learning. A Decision Tree (DT) naturally results in a set of leaves, where each leaf corresponds to a different rule. On one hand, rule learning is considered more flexible for interpretability than tree-based learning approaches. Fürnkranz (1999) lists the superiorities of rule-based learning over tree-based learning. Leaves (rules) of a DT are not independent from each other. For example, in a binary tree, when a split is performed at a node, the left child grows a rule, while its right sibling grows its negation.
As a result, every sample obeys exactly one rule and is classified according to the corresponding leaf. Thus, inaccurate rules, which are negations of their siblings after splitting, can be created. This may lead to misclassification of samples and reduce both accuracy and interpretability. Independently constructed rules, however, do not need such a negation rule, and hence rule-based learning can be considered more flexible for interpretability. On the other hand, unlike DTs, an independent set of rules does not necessarily cover the entire sample space. This may result in a state where a test sample cannot be classified with the proposed rule set. This drawback is often handled by assigning a default label to the uncovered samples (Fürnkranz et al., 2012). To minimize the number of uncovered samples during testing, the training data is required to be fully covered. The separate-and-conquer algorithm by Fürnkranz (1999) achieves this heuristically by fitting rules on uncovered training samples; new rules are generated until each training sample is covered by at least one rule. Instead of such sequential covering, we take advantage of our linear programming (LP) approach and explicitly impose a covering constraint on the training set. In this paper, we propose two algorithms for interpretability and learning that are based on mathematical programming. We give a generic LP model that is first used by our Rule Extraction (RUX) algorithm, which selects rules from trained tree or rule ensembles for interpretation. Then, we develop a Rule Generation (RUG) algorithm, which generates classification rules using column generation (CG) to solve the LP model. Rule extraction methods attempt to select a set of rules from accurate complex or black-box models to interpret the predictions of these models (Hayashi & Oishi, 2018).
For instance, several works in the literature aim at interpreting Random Forest (RF) models by extracting rules from the trees in the forest (Liu et al., 2012; Lu Thi et al., 2015; Adnan & Islam, 2017; Wang et al., 2020; Birbil et al., 2020). Most of these studies use heuristic approaches to select the desired set of rules. Birbil et al. (2020) suggest a set covering formulation to extract interpretable rules from RFs. Other applications include the extraction of rules from artificial neural networks (Andrews et al., 1995) and support vector machines (Barakat & Bradley, 2010). As a remark, there are other post-hoc interpretability approaches in the literature, such as SHAP (Lundberg & Lee, 2017) and LIME (Ribeiro et al., 2016), that are considered model-agnostic. Our RUX algorithm, however, is a model-specific approach. There are several studies closely related to ours, since they also employ mathematical programming for rule learning. Malioutov & Varshney (2013) propose a rule-based binary classifier that solves a binary program minimizing the number of rules for a boolean compressed sensing problem. Wang & Rudin (2015) present a mixed-integer linear programming (MILP) formulation to learn decision rules (e.g., patterns) for binary classification. They discuss how their classifier is equivalent to DTs and RFs. The MILP formulation is solved using generated rules with the objective of minimizing the number of misclassified samples, the number of rules generated, and the total length of each rule. Dash et al. (2020) offer a CG-based framework to find an optimal rule set for binary classification, where the objective is to find a trade-off between classification rule simplicity and accuracy. One-hot encoding is used to binarize categorical data, and numerical data is discretized with sample deciles as thresholds.
For large instances, the pricing subproblem is either solved with time limits, or the model columns are generated by a greedy heuristic. Wei et al. (2019) propose generalized linear rule models using a CG framework similar to that of Dash et al. (2020). Malioutov & Meel (2018) solve a MaxSAT formulation by constraint programming to construct interpretable classification rules. Ghosh & Meel (2019) also propose a MaxSAT-based framework that can be applied to binary classification problems with binary features to minimize the number of generated rules and the number of misclassified samples. Their approach is incremental and takes advantage of partitioning the dataset into several clusters. Contributions. RUX and RUG are based on an LP model, and thus both algorithms are scalable to large datasets. This is an important advantage over the existing studies that use MILP formulations. The proposed algorithms directly address multi-class problems, while the existing optimization-based approaches are restricted to binary classification. To that end, the objective function in our LP formulation minimizes classification error using a loss function instead of explicitly counting misclassified samples. Both RUX and RUG can work with continuous or categorical features, and hence they do not require encoding of the data. Along with the set of rules, our algorithms also return the optimal weights for the rules. These weights make it possible to attach importance to each rule when interpreting the classification. Our algorithms admit assigning cost coefficients to the rules. These coefficients could relate to different attributes of the rules, such as rule lengths, estimator weights, or the number of false negatives. The objective function also allows penalizing rules that may have undesired outcomes (like long rule lengths).
Thus, the decision makers can use these coefficients to steer the training process toward a set of rules that is more appealing for their needs. The novelty in our column generation approach is the use of a regular decision tree with sample weights as the pricing subproblem. Training trees with sample weights is very fast, and it is also standard in all machine learning packages, since boosting methods rely on sample weights as well. Lastly, we present our algorithms for multi-class classification problems. However, the proposed ideas can also be extended to discovering regression rules for interpretable learning with linear programming. 2 RULE EXTRACTION. We consider a classification problem with K classes and denote the set of class labels by K. The training dataset consists of samples with features xi ∈ R^p for i ∈ I and labels yi for i ∈ I. To work with multiple classes, we define a vector-valued mapping y(xi) ∈ K ⊂ R^K as in Zhu et al. (2009). That is, if yi = k, then

y(xi) = (−1/(K−1), −1/(K−1), …, 1, …, −1/(K−1))ᵀ,  (1)

where the value one appears only at the kth component of the vector. Suppose that we have a collection of rules indexed by J. A rule j ∈ J assigns the vector Rj(xi) ∈ K to input xi, but only if rule j covers sample i. This vector is formed in the same manner as in equation 1. To predict the class of a given sample xi with this collection of rules, we use a set of nonnegative weights wj, j ∈ J associated with the rules and evaluate

ŷ(xi) = ∑_{j∈J} aij Rj(xi) wj,  (2)

where aij ∈ {0, 1} indicates whether rule j covers sample i or not. Then, the index of the largest component of the resulting vector ŷ(xi) is assigned as the predicted label ŷi of sample i ∈ I. Note that equation 2 is similar to the weighting of the classifiers in standard boosting methods. Here, instead of classifiers, we use rules for classifying only the covered samples.
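Equations (1) and (2) can be made concrete with a short sketch (an illustrative encoding; representing a rule as a (covers, label) pair is our assumption for the example, not the paper's notation):

```python
import numpy as np

def class_vector(k, K):
    """Vector encoding of equation (1): 1 in slot k, -1/(K-1) elsewhere."""
    v = np.full(K, -1.0 / (K - 1))
    v[k] = 1.0
    return v

def predict(sample, rules, weights, K):
    """Weighted rule voting of equation (2), followed by the arg max.

    `rules` is a list of (covers, label) pairs, where covers(x) -> bool
    says whether the rule covers x (i.e., a_ij = 1) and label is the
    class index the rule assigns."""
    score = np.zeros(K)
    for (covers, label), w in zip(rules, weights):
        if covers(sample):                        # a_ij = 1
            score += w * class_vector(label, K)   # w_j * R_j(x_i)
    return int(np.argmax(score))
```

For instance, with two binary-class rules of weights 2 and 1, a sample covered by both receives the weighted sum of their class vectors, and the larger component decides the label.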
In order to evaluate the classification error, we use the hinge loss and define the total classification loss by ∑_{i∈I} max{1 − ∑_{j∈J} âij wj, 0}, where âij = κ aij Rj(xi)ᵀ y(xi) with κ = (K−1)/K. This loss function allows us to write a linear programming model in which the objective is to find the set of rules that minimizes the total loss. To this end, we introduce auxiliary variables vi, i ∈ I satisfying vi ≥ max{1 − ∑_{j∈J} âij wj, 0}, and obtain our master problem

minimize   ∑_{i∈I} vi + ∑_{j∈J} cj wj
subject to ∑_{j∈J} âij wj + vi ≥ 1,  i ∈ I;
           ∑_{j∈J} aij wj ≥ ε,  i ∈ I;
           vi ≥ 0, i ∈ I;  wj ≥ 0, j ∈ J,  (3)

where cj ≥ 0, j ∈ J are the cost coefficients. These coefficients and the second set of constraints require further explanation. The cost coefficients serve two important roles. First, solutions involving many rules with nonzero weights are avoided: the fewer the rules in the resulting set, the easier it is to interpret a solution. In other words, we prefer sparse solutions, and cj serves that preference. Second, in many application domains, rules have actual costs that need to be taken into consideration. As we also highlight in our title, when interpretability is of concern, one could try to obtain rules with few features, because shorter rules are considered easier to interpret (Lakkaraju et al., 2016). In this case, the number of conditions in a rule can be set as the rule cost. As another example, consider a classification problem in medical diagnosis, where false negatives are likely to be perceived as more costly than false positives. Such an evaluation could also be easily incorporated with a rule cost in the proposed model. We should point out that there exists a trade-off between model accuracy and the rule set size. This can also be handled with our formulation by introducing a fixed cost coefficient, i.e., using the same value for all cj, j ∈ J.
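To make the LP concrete, here is a sketch that solves the master problem (3) with SciPy's HiGHS-based `linprog` (an illustrative formulation under our own variable ordering, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import linprog

def solve_master(A_hat, A, costs, eps=1e-2):
    """Solve the LP master problem (3).

    A_hat[i, j] = kappa * a_ij * R_j(x_i)^T y(x_i) and A[i, j] = a_ij.
    The decision vector stacks the rule weights w_j followed by the
    slack variables v_i."""
    n, m = A_hat.shape
    c = np.concatenate([costs, np.ones(n)])   # sum_j c_j w_j + sum_i v_i
    # Hinge-loss constraints:  -A_hat w - v <= -1
    A1 = np.hstack([-A_hat, -np.eye(n)])
    # Covering constraints:    -A w <= -eps
    A2 = np.hstack([-A, np.zeros((n, n))])
    res = linprog(c,
                  A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([-np.ones(n), -eps * np.ones(n)]),
                  bounds=[(0, None)] * (m + n),
                  method="highs")
    return res.x[:m], res.x[m:]               # weights w, slacks v
```

On a toy instance where each of two rules correctly covers exactly one of two samples (A_hat = A = I) and the rule costs are small, the solver puts full weight on both rules and drives the slacks to zero, as expected from (3).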
The master problem in equation 3 also involves a set of covering constraints with a fixed right-hand side ε > 0. These constraints make sure that each sample is covered by at least one rule. The need for these covering constraints is exemplified as follows. Consider a binary classification problem where we select three samples {xi, xk, xl} along with their labels yi = 1, yk = −1 and yl = −1. Suppose that we have two rules j and j′ such that the former covers all three samples, whereas the latter covers only the last two. Here, the rules use majority voting, as is applied to the leaves in trained DTs. The labels assigned by the rules j and j′ are as follows: Rj(xi) = Rj(xk) = Rj(xl) = −1 and Rj′(xk) = Rj′(xl) = −1. The first set of constraints in equation 3 then becomes

−aij wj + vi ≥ 1,  akj wj + akj′ wj′ + vk ≥ 1,  alj wj + alj′ wj′ + vl ≥ 1.

For simplicity, if we also assume that cj = cj′, then the optimal solution becomes wj = vk = vl = 0 and wj′ = vi = 1. With this solution, sample xi is not covered. We remark that covering each sample in a dataset is crucial from the perspective of giving a literal interpretation with the obtained rules. This is particularly important when the resulting set of rules is used to interpret the classification of individual samples in the dataset. Clearly, varying ε may lead to a change in the set of optimal rules. For instance, a large ε value may force larger rule weights for those samples covered by only a few rules. However, we point out that the role of this constraint is just coverage, and hence setting ε to a small, strictly positive value is sufficient. Up until this point, we have not specified any details about the rule set J. On one hand, this rule set can be static, in the sense that it can be obtained by extracting the rules available through a trained tree ensemble algorithm or by using the rules resulting from a rule-based method (Cohen & Singer, 1999).
For instance, consider an RF model trained on a given dataset. The rule set J can then be obtained from the leaves of the trees in the forest, since each leaf corresponds to a rule. Solving equation 3 with such a rule set allows us to extract the rules that were most critical for classification. As we have mentioned before, we can assign the length of a rule as its cost and try to obtain rules with desirable lengths for interpretation. In a similar vein, consider another example with a trained AdaBoost (Freund & Schapire, 1997) model for which the base estimators are set as DTs. Again, the leaves of the trees from AdaBoost can be used to construct the rule set J in equation 3, which is then solved to extract an interpretable set of rules. The costs of the rules in this case could be the inverse of the estimator weights assigned by the AdaBoost algorithm to the trees. In this way, the obtained set of rules is more likely to inherit the importance of the DTs from the AdaBoost model. Using these trained models to construct our master problem leads to our first algorithm, RUX, which is based on tree ensembles. We show in Section 4 that RUX can indeed extract only a selection of rules from the trained models without significantly sacrificing accuracy. The computational complexity of RUX is determined by the underlying LP solver. In practice, most LP solvers use interior point methods, which run in polynomial time.
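Collecting the leaf rules of a fitted scikit-learn tree, one building block for forming the rule pool J from an ensemble, can be sketched by walking the tree structure (an illustrative helper, not the authors' code):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_rules(tree):
    """Enumerate one (conditions, label) rule per leaf of a fitted tree.

    Each rule is the list of (feature, op, threshold) tests on the
    root-to-leaf path, with the leaf's majority class as its label.
    Repeating this over every tree of an RF or AdaBoost ensemble yields
    the static rule pool J."""
    t = tree.tree_
    rules = []

    def walk(node, conds):
        if t.children_left[node] == -1:            # sklearn leaf sentinel
            label = int(np.argmax(t.value[node]))  # majority vote at leaf
            rules.append((conds, label))
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node],  conds + [(f, "<=", thr)])
        walk(t.children_right[node], conds + [(f, ">",  thr)])

    walk(0, [])
    return rules
```

The number of conditions in each extracted rule directly gives the rule-length cost cj discussed above.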
The paper discusses how, from an existing set of rules or a procedure generating rules "on the fly", one can use linear programming to select a subset of rules that increases "interpretability", where an interpretable system of rules is here understood as a system with few, short rules. The idea is to ensure a good accuracy level by minimizing a continuous hinge loss (which keeps the problem linear), while penalizing retained rules (rules with a positive weight) through a cost proportional to their complexity. The experiments show that the method does, indeed, improve upon the metrics retained by the authors (rule length and rule quantity). Whether this produces something that is actually readable is not checked further (it is assumed that few, short rules suffice for user readability).
SP:3a3e8b97dfde90fa74251f8ff90fc4010c09642b
Discovering Classification Rules for Interpretable Learning with Linear Programming
1 INTRODUCTION . Medical diagnosis , educational and juridical decisions often have important consequences for the society . Therefore , both the accuracy and interpretability of these decisions are of crucial importance . In particular , these decisions should be understandable by the decision makers . Rule sets consisting of a few intuitively coherent rules have shown to accomplish this purpose in such domains ( Lakkaraju et al. , 2016 ) . Here , we first aim at rule extraction from powerful ensemble methods and then solely focus on rule generation . In both cases , our objective is to obtain a set of classification rules that balances the trade-off between accuracy and interpretability . Our main tools in this effort are linear programming and column generation . A rule is an independent if-then statement , which contains one or more conditions that assign a class to a set of samples . For example in binary classification , “ if ( Clump Thickness is greater than six ) and ( Single Epithelial Cell Size is less than four ) then the tumor is malignant ” is such a rule that can be used for breast cancer diagnosis . When a sample satisfies this rule , then it receives a label corresponding to one of the two classes . In case a sample is covered by more than one rule , then majority voting among the assigned labels is used to determine the class of the sample . Growing decision trees is closely related to rule-based learning . A Decision Tree ( DT ) naturally results with a set of leaves , where each leaf corresponds to a different rule . On one hand , rule learning is considered to be more flexible for interpretability than tree-based learning approaches . Fürnkranz ( 1999 ) lists superiorities of rule-based learning over tree-based learning . Leaves ( rules ) of a DT are not independent from each other . For example , in a binary tree when a splitting is performed at a node , the left child grows a rule , while its right sibling grows its negation . 
As a result , every sample obeys to exactly one rule and it is classified according to the corresponding leaf . Thus , inaccurate rules , which are negations of their siblings after splitting , can be created . This may render false classification of samples and reduce both accuracy and interpretability . However , independently constructed rules do not need such a negation rule , and hence , rule-based learning can be considered more flexible for interpretability . On the other hand , unlike DTs , an independent set of rules does not necessarily cover the entire sample space . This may result in a state where a test sample can not be classified with the proposed rule set . This drawback is often handled by assigning a default label to those uncovered samples ( Fürnkranz et al. , 2012 ) . To minimize the number of uncovered samples during testing , the training data is required to be fully covered . Separate-andconquer algorithm by Fürnkranz ( 1999 ) achieves this heuristically by fitting rules on uncovered training samples , and new rules are generated until each training sample is covered by at least one rule . Instead of such a sequential covering , we take advantage of our linear programming ( LP ) approach and explicitly impose a covering constraint on the training set . In this paper , we propose two algorithms for interpretability and learning that are based on mathematical programming . We give a generic LP model that is first used by our Rule Extraction ( RUX ) algorithm , which selects rules from trained tree or rule ensembles for interpretation . Then , we develop a Rule Generation ( RUG ) algorithm , which generates classification rules using column generation ( CG ) to solve the LP model . Rule extraction methods attempt to select a set of rules from accurate complex or black-box models to interpret the predictions of these models ( Hayashi & Oishi , 2018 ) . 
For instance, several works in the literature aim at interpreting Random Forest (RF) models by extracting rules from the trees in the forest (Liu et al., 2012; Lu Thi et al., 2015; Adnan & Islam, 2017; Wang et al., 2020; Birbil et al., 2020). Most of these studies use heuristic approaches to select the desired set of rules. Birbil et al. (2020) suggest a set covering formulation to extract interpretable rules from RFs. Other applications include the extraction of rules from artificial neural networks (Andrews et al., 1995) and support vector machines (Barakat & Bradley, 2010). As a remark, there are other post-hoc interpretability approaches in the literature, such as SHAP (Lundberg & Lee, 2017) and LIME (Ribeiro et al., 2016), that are considered model-agnostic. However, our RUX algorithm is a model-specific approach. There are several studies that are closely related to ours, since they also employ mathematical programming for rule learning. Malioutov & Varshney (2013) propose a rule-based binary classifier that solves a binary program minimizing the number of rules for a boolean compressed sensing problem. Wang & Rudin (2015) present a mixed integer linear programming (MILP) formulation to learn decision rules (e.g., patterns) for binary classification. They discuss how their classifier is equivalent to DTs and RFs. The MILP formulation is solved using generated rules with the objective of minimizing the number of misclassified samples, the number of rules generated, and the total length of each rule. Dash et al. (2020) offer a CG-based framework to find an optimal rule set for binary classification, where the objective is to find a trade-off between classification rule simplicity and accuracy. One-hot encoding is used to binarize categorical data, and numerical data is also discretized with sample deciles as thresholds.
For large instances, the pricing subproblem is either solved with time limits, or the model columns are generated by a greedy heuristic. Wei et al. (2019) propose generalized linear rule models using a CG framework similar to that of Dash et al. (2020). Malioutov & Meel (2018) solve a MaxSAT formulation by constraint programming to construct interpretable classification rules. Ghosh & Meel (2019) also propose a MaxSAT-based framework that can be applied to binary classification problems with binary features to minimize the number of generated rules and the number of misclassified samples. Their approach is incremental and takes advantage of partitioning the dataset into several clusters. Contributions. RUX and RUG are based on an LP model, and thus both algorithms are scalable to large datasets. This is an important advantage compared to the existing studies that use MILP formulations. The proposed algorithms directly address multi-class problems, while the existing optimization-based approaches are for binary classification. To that end, the objective function in our LP formulation minimizes the classification error using a loss function instead of explicitly counting misclassified samples. Both RUX and RUG can work with continuous or categorical features, and hence they do not require encoding of the data. Along with the set of rules, our algorithms also return the optimal weights for the rules. These weights allow us to attach an importance to each rule when interpreting the classification. Our algorithms admit assigning cost coefficients to the rules. These coefficients could relate to different attributes of the rules, such as rule lengths, estimator weights, or the number of false negatives. The objective function also allows penalizing rules that may have undesired outcomes (like long rule lengths).
Thus, the decision makers can use these coefficients to steer the training process toward a set of rules that is more appealing for their needs. The novelty in our column generation approach is the use of a regular decision tree with sample weights as the pricing subproblem. Training trees with sample weights is very fast and is standard in machine learning packages, since boosting methods also rely on sample weights. Lastly, we present our algorithms for multi-class classification problems. However, the proposed ideas can also be extended to discovering regression rules for interpretable learning with linear programming. 2 RULE EXTRACTION . We consider a classification problem with $K$ classes and denote the set of class labels by $\mathcal{K}$. The training dataset consists of samples with features $x_i \in \mathbb{R}^p$ for $i \in I$ and labels $y_i$ for $i \in I$. To work with multiple classes, we define a vector-valued mapping $y(x_i) \in \mathcal{K} \subset \mathbb{R}^K$ as in Zhu et al. (2009). That is, if $y_i = k$, then
$$y(x_i) = \Big(-\tfrac{1}{K-1},\ -\tfrac{1}{K-1},\ \dots,\ 1,\ \dots,\ -\tfrac{1}{K-1}\Big)^{\top}, \qquad (1)$$
where the value one appears only at the $k$th component of the vector. Suppose that we have a collection of rules indexed by $J$. A rule $j \in J$ assigns the vector $R_j(x_i) \in \mathcal{K}$ to input $x_i$, only if rule $j$ covers sample $i$. This vector is formed in the same manner as in equation 1. To predict the class of a given sample $x_i$ with this collection of rules, we use a set of nonnegative weights $w_j$, $j \in J$, associated with the rules and evaluate
$$\hat{y}(x_i) = \sum_{j \in J} a_{ij} R_j(x_i) w_j, \qquad (2)$$
where $a_{ij} \in \{0, 1\}$ indicates whether rule $j$ covers sample $i$ or not. Then, the index of the largest component of the resulting vector $\hat{y}(x_i)$ is assigned as the predicted label $\hat{y}_i$ of sample $i \in I$. Note that equation 2 is similar to the weighting of the classifiers in standard boosting methods. Here, instead of classifiers, we use rules for classifying only the covered samples.
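A small NumPy sketch of equations 1 and 2; the coverage matrix, rule label vectors, and weights below are made-up toy values for illustration.

```python
import numpy as np

def encode_label(k, K):
    """Equation 1: value 1 at component k, -1/(K-1) elsewhere."""
    y = np.full(K, -1.0 / (K - 1))
    y[k] = 1.0
    return y

def predict(A, R, w):
    """Equation 2: yhat(x_i) = sum_j a_ij R_j(x_i) w_j; predict the argmax component.
    A: (n, m) 0/1 coverage, R: (n, m, K) rule label vectors, w: (m,) rule weights."""
    yhat = np.einsum("ij,ijk,j->ik", A, R, w)
    return yhat.argmax(axis=1)

K = 3
# Two rules: rule 0 votes class 0, rule 1 votes class 2 (same for both samples).
R = np.stack([np.stack([encode_label(0, K), encode_label(2, K)])] * 2)
A = np.array([[1, 0],            # sample 0 is covered only by rule 0
              [1, 1]])           # sample 1 is covered by both rules
w = np.array([1.0, 2.0])
print(predict(A, R, w))          # [0 2]: rule 1's larger weight wins for sample 1
```

The `einsum` zeroes out the contribution of rules that do not cover a sample, matching the role of $a_{ij}$ in equation 2.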
In order to evaluate the classification error, we use the hinge loss and define the total classification loss by
$$\sum_{i \in I} \max\Big\{1 - \sum_{j \in J} \hat{a}_{ij} w_j,\ 0\Big\},$$
where $\hat{a}_{ij} = \kappa\, a_{ij} R_j(x_i)^{\top} y(x_i)$ with $\kappa = (K-1)/K$. This loss function allows us to write a linear programming model, where the objective is to find the set of rules that minimizes the total loss. To this end, we introduce the auxiliary variables $v_i$, $i \in I$, standing for $v_i \ge \max\{1 - \sum_{j \in J} \hat{a}_{ij} w_j,\ 0\}$, and obtain our master problem
$$
\begin{array}{lll}
\text{minimize} & \sum_{i \in I} v_i + \sum_{j \in J} c_j w_j & \\
\text{subject to} & \sum_{j \in J} \hat{a}_{ij} w_j + v_i \ge 1, & i \in I;\\
& \sum_{j \in J} a_{ij} w_j \ge \varepsilon, & i \in I; \qquad\qquad (3)\\
& v_i \ge 0,\ i \in I; \quad w_j \ge 0,\ j \in J, &
\end{array}
$$
where $c_j \ge 0$, $j \in J$, are the cost coefficients. These coefficients and the second set of constraints require further explanation. The cost coefficients serve two important roles. First, solutions involving many rules with nonzero weights are avoided: the fewer the rules in the resulting set, the easier it is to interpret a solution. In other words, we prefer sparse solutions, and $c_j$ serves that preference. Second, in many application domains, rules have actual costs that need to be taken into consideration. As we also highlight in our title, when interpretability is of concern, one could try to obtain rules with few features, because shorter rules are considered easier to interpret (Lakkaraju et al., 2016). In this case, the number of conditions in a rule can be set as the rule cost. As another example, consider a classification problem in medical diagnosis, where false negatives are likely to be perceived as more costly than false positives. Such an evaluation could also be easily incorporated through a rule cost in the proposed model. We should point out that there exists a trade-off between model accuracy and rule set size. This can also be handled with our formulation by introducing a fixed cost coefficient, i.e., using the same value for all $c_j$, $j \in J$.
The master problem (3) also involves a set of covering constraints with a fixed right-hand side $\varepsilon > 0$. These constraints make sure that each sample is covered by at least one rule. The need for these covering constraints is exemplified as follows. Consider a binary classification problem, where we select three samples $\{x_i, x_k, x_l\}$ along with their labels $y_i = 1$, $y_k = -1$ and $y_l = -1$. Suppose that we have two rules $j$ and $j'$ such that the former covers all three, whereas the latter covers only the last two samples. Here, the rules use majority voting, as it is applied to the leaves of trained DTs. The labels assigned by the rules $j$ and $j'$ are as follows: $R_j(x_i) = R_j(x_k) = R_j(x_l) = -1$ and $R_{j'}(x_k) = R_{j'}(x_l) = -1$. The first set of constraints in (3) then becomes
$$-a_{ij} w_j + v_i \ge 1, \qquad a_{kj} w_j + a_{kj'} w_{j'} + v_k \ge 1, \qquad a_{lj} w_j + a_{lj'} w_{j'} + v_l \ge 1.$$
For simplicity, if we also assume that $c_j = c_{j'}$, then the optimal solution becomes $w_j = v_k = v_l = 0$ and $w_{j'} = v_i = 1$. With this solution, sample $x_i$ is not covered. We remark that covering each sample in a dataset is crucial from the perspective of giving a literal interpretation with the obtained rules. This is particularly important when the resulting set of rules is used to interpret the classification of individual samples in the dataset. Clearly, varying $\varepsilon$ may lead to a change in the set of optimal rules. For instance, a large $\varepsilon$ value may impose larger rule weights for those samples covered by only a few rules. However, we point out that the role of this constraint is just coverage, and hence setting $\varepsilon$ to a small, strictly positive value is sufficient. Up until this point, we have not specified any details about the rule set $J$. On one hand, this rule set can be static, in the sense that it can be obtained by extracting the rules available through a trained tree ensemble algorithm or by using the rules resulting from a rule-based method (Cohen & Singer, 1999).
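The three-sample example above can be checked numerically. The sketch below builds the master problem (3) with `scipy.optimize.linprog` (an assumption for illustration; the paper does not name a solver) and shows that without the covering constraint the optimum leaves $x_i$ uncovered ($w_j = 0$), while a small $\varepsilon$ forces $w_j > 0$. As in the worked example, $\kappa$ is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def solve_master(A_hat, A, c, eps=0.0):
    """Solve LP (3). Variables are (w_1..w_m, v_1..v_n); eps=0 drops covering."""
    n, m = A_hat.shape
    obj = np.concatenate([c, np.ones(n)])            # sum_j c_j w_j + sum_i v_i
    A_ub = np.hstack([-A_hat, -np.eye(n)])           # -(a_hat w + v) <= -1
    b_ub = -np.ones(n)
    if eps > 0:                                      # -(a w) <= -eps (covering)
        A_ub = np.vstack([A_ub, np.hstack([-A, np.zeros((n, n))])])
        b_ub = np.concatenate([b_ub, -eps * np.ones(n)])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (m + n), method="highs")
    return res.x[:m], res.x[m:]

# Toy instance from the text: rule j covers {x_i, x_k, x_l} with majority label -1,
# rule j' covers {x_k, x_l}; y_i = 1, y_k = y_l = -1, and c_j = c_j' = 1.
A = np.array([[1, 0], [1, 1], [1, 1]], dtype=float)       # coverage a_ij
A_hat = np.array([[-1, 0], [1, 1], [1, 1]], dtype=float)  # a_hat_ij = a_ij R_j(x_i) y_i
w, v = solve_master(A_hat, A, c=np.ones(2))
print(np.round(w, 4))   # [0. 1.]: only j' is used, so x_i stays uncovered (v_i = 1)
w, v = solve_master(A_hat, A, c=np.ones(2), eps=0.01)
print(w[0] > 0)         # True: the covering constraint forces w_j > 0
```

Note the sign flips: `linprog` expects `A_ub x <= b_ub`, so each $\ge$ constraint of (3) is negated.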
For instance , consider a RF model trained on a given dataset . Then , the rule set J can be obtained from the leaves of the trees in the forest , since each leaf corresponds to a rule . Solving equation 3 with such a rule set allows us to extract the rules that were most critical for classification . As we have mentioned before , we can assign the length of a rule as its cost and try to obtain rules with desirable lengths for interpretation . In a similar vein , consider another example with a trained AdaBoost ( Freund & Schapire , 1997 ) model for which the base estimators are set as DTs . Again , the leaves of the trees from AdaBoost can be used to construct the rule set J in equation 3 , which is then solved to extract an interpretable set of rules . The costs of the rules in this case could be the inverse of the estimator weights assigned by the AdaBoost algorithm to the trees . In this way , the obtained set of rules is more likely to inherit the importance of the DTs from the AdaBoost model . Using these trained models to construct our master problem leads to our first algorithm RUX , which is based on tree ensembles . We show in Section 4 that RUX can indeed extract only a selection of rules from the trained models without significantly sacrificing accuracy . The computational complexity for RUX is determined by the underlying LP solver used . In practice , most of the LP solvers use interior point methods that work in polynomial-time .
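To illustrate how each leaf corresponds to a rule, here is a hedged sketch that walks a small decision tree, represented as nested dicts (a hypothetical format, not any specific library's API), and emits one rule per leaf: the conjunction of conditions on the root-to-leaf path.

```python
# Hypothetical sketch: each leaf of a decision tree yields one rule, namely the
# conjunction of split conditions on the path from the root to that leaf.

def extract_rules(node, path=()):
    """Return a list of (conditions, label) pairs, one per leaf."""
    if "label" in node:                       # leaf: the path conditions form the rule
        return [(list(path), node["label"])]
    feat, thr = node["feature"], node["threshold"]
    rules = extract_rules(node["left"], path + ((feat, "<=", thr),))
    rules += extract_rules(node["right"], path + ((feat, ">", thr),))
    return rules

tree = {
    "feature": "clump_thickness", "threshold": 6,
    "left": {"label": "benign"},
    "right": {
        "feature": "cell_size", "threshold": 4,
        "left": {"label": "malignant"},
        "right": {"label": "benign"},
    },
}
for conds, label in extract_rules(tree):
    print(" and ".join(f"{f} {op} {t}" for f, op, t in conds), "->", label)
```

Applying this traversal to every tree of a trained forest would yield the candidate rule set $J$ that the master problem then selects from.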
The paper presents an approach to learning ensembles of classification rules using linear programming. Two variants of the algorithm are considered: one uses a collection of rules extracted from ensembles of decision trees as the starting point, and another one that learns rules from scratch by applying decision tree learning within the rule learning algorithm. UCI datasets (mostly two-class problems) are used to compare classifier complexity and accuracy with that of random forests, AdaBoost, and learning a single decision tree.
Revisiting Virtual Nodes in Graph Neural Networks for Link Prediction
1 INTRODUCTION . Link prediction is an important task to complete graphs that are missing edges in various domains : citation networks ( Kipf & Welling , 2016 ) , social networks ( Adamic & Adar , 2003 ) , medical drug interaction graphs ( Abbas et al. , 2021 ) , or knowledge graphs ( KGs ) ( Ji et al. , 2021 ) . Numerous kinds of models have been proposed to solve the link prediction problem , ranging from KG-specific predictors ( Ji et al. , 2021 ) to graph neural networks ( GNNs ) ( Kipf & Welling , 2016 ; Zhang & Chen , 2018 ) . Over dense biomedical networks , GNNs turned out to work especially well ( Hu et al. , 2020 ) . In this work , we focus on graph neural networks for link prediction . Many of the popular GNNs are based on the message-passing scheme , which computes node embeddings based on iteratively aggregating the features of ( usually direct/one-hop ) neighbor nodes along the graph edges ( Gilmer et al. , 2017 ) . Interestingly , best performance is usually obtained by only considering two to three hops of neighbors ( i.e. , 2-3 layers in the GNN ) . One main reason identified for this is over-smoothing , the problem that node representations become indistinguishable when the number of layers increases ( Li et al. , 2018 ) . The exponentially-growing amount of information has also been suggested as one issue connected to capturing long-range dependencies ( Alon & Yahav , 2021 ) . While it is likely that link prediction most often depends on the local node neighborhood , it is not beyond imagination that there are critical long-range dependencies ( e.g. , complex chains of drug-drug or drug-protein interactions ) . Hence , using a small number of layers to overcome the above problems results in under-reaching . There have been several recent proposals to overcome under-reaching . On the one hand , several works propose techniques that allow for larger numbers of GNN layers ( Xu et al. , 2018 ; Wu et al. , 2019 ; Liu et al. , 2020 ; Chen et al. 
, 2020; Sun et al., 2021; Zhou et al., 2020; Li et al., 2020a). However, although Chen et al. (2020) show that over-smoothing happens particularly in dense graphs, the link prediction experiments in these works consider citation or recommendation networks, but not the especially dense biomedical ones. And our experiments over the latter suggest that the reported results do not generalize to the more challenging biomedical data. On the other hand, there are approaches that adapt the message-passing scheme to consider neighbors beyond the one-hop neighborhood, based on graph diffusion (Atwood & Towsley, 2016; Klicpera et al., 2019a; Abu-El-Haija et al., 2019; Xu et al., 2019a; Ma et al., 2020; Klicpera et al., 2019b) and other theories (Morris et al., 2019; You et al., 2019). However, most of these models are relatively complex and, in fact, in our experiments over the challenging graphs from the Open Graph Benchmark (OGB) (Hu et al., 2020), several ran out of memory. Moreover, the majority have not considered link prediction, although this problem was recently shown to be more difficult than node classification (Zhang et al., 2020). In this paper, we propose a simple but elegant solution to under-reaching based on the concept of virtual nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Virtual nodes are well known to often improve the graph classification performance of graph neural networks; there, an artificial virtual node is added to every graph and connected to all nodes in the graph. While virtual nodes were originally thought of as representations of the entire graph, they also provide shortcuts for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes on the link prediction problem has not been investigated yet.
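The virtual-node augmentation is easy to sketch on a plain edge list (an illustrative representation, not tied to any GNN library). The function below adds one virtual node connected bidirectionally to all graph nodes or, given a node-to-cluster assignment produced by any graph clustering algorithm, one virtual node per cluster.

```python
# Sketch of virtual-node augmentation on a directed edge list. `clusters` maps
# each node to a cluster id; None reproduces the classical single-virtual-node trick.

def add_virtual_nodes(num_nodes, edges, clusters=None):
    if clusters is None:
        clusters = [0] * num_nodes           # one virtual node for the whole graph
    vn_of = {c: num_nodes + i for i, c in enumerate(sorted(set(clusters)))}
    extra = []
    for v, c in enumerate(clusters):         # connect each node bidirectionally
        extra += [(v, vn_of[c]), (vn_of[c], v)]
    return num_nodes + len(vn_of), edges + extra

n, e = add_virtual_nodes(4, [(0, 1), (1, 2), (2, 3)])
print(n, len(e))                             # 5 11: one virtual node, 8 new edges
n, e = add_virtual_nodes(4, [(0, 1), (1, 2), (2, 3)], clusters=[0, 0, 1, 1])
print(n, len(e))                             # 6 11: one virtual node per cluster
```

With a single virtual node, any two graph nodes are at most two hops apart; the cluster-based variant keeps that shortcut within each cluster while sparing nodes the messages of unrelated clusters.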
The reason for this might be that the often very large and heterogeneous “ network ” graphs in link prediction are of very different nature and require novel/adapted solutions ( e.g. , a protein interaction network may easily contain millions of nodes , whereas a molecule to be classified contains usually less than fifty ) . We explore application and effects of virtual nodes in link prediction theoretically and empirically : • We propose to use multiple virtual nodes in the link prediction scenario and describe a graph-based technique to connect them to the graph nodes . Consider Figure 1 . In a nutshell , we use a graph clustering algorithm to determine groups of nodes in the graph that belong together and then connect these nodes to a common virtual node . In this way , under-reaching is decreased because clustered nodes can share information easily ; at the same time , the nodes are spared of unnecessary information from unrelated nodes ( i.e. , in contrast to the single virtual node model ) . • We also investigate alternative methods to determine the virtual node connections ( e.g. , randomization in clustering ) and compare to the original model with a single virtual node . • We theoretically investigate the benefit of using ( multiple ) virtual nodes in terms of two aspects : influence score and the expressiveness in learning a structural link representation . • We conducted extensive experiments over challenging datasets of different type , provide ablation studies that confirm the superiority of our proposed techniques , analyze the results in detail , and provide first guidelines about how to use virtual nodes with different types of data and GNNs . • Most importantly , we show that our virtual node extensions most often yield rather stable performance increases and allow standard GNNs to compete with complex state-of-the-art models that also try to improve message passing , as well as with the models leading the OGB leaderboards . 2 RELATED WORK . 
We give an overview on approaches that are similar from a technical perspective ; for a more detailed summary , see Appendix A . For a more general overview of the large and diverse field of link prediction , we refer to good summaries in recent works ( Martínez et al. , 2016 ; Zhang et al. , 2020 ) . Deeper GNNs . Several techniques address over-smoothing and hence allow for constructing deeper GNNs to solve under-reaching . These models range from the simple but efficient message propagation in SGC ( Wu et al. , 2019 ; Liu et al. , 2020 ) and APPNP ( Klicpera et al. , 2019a ) and connections in JKNet ( Xu et al. , 2018 ) , to more advanced proposals ( Chen et al. , 2020 ; Sun et al. , 2021 ; Zhou et al. , 2020 ; Li et al. , 2020a ) such as the differentiable aggregation functions in DeeperGCN ( Li et al. , 2020a ) . However , although ( Chen et al. , 2020 ) show that over-smoothing happens particularly in dense graphs , the experiments in most of these works consider citation or recommendation networks , but not the especially dense and important biomedical ones . And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data . Beyond One-Hop Neighbors . Recently , graph diffusion methods are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood . Atwood & Towsley ( 2016 ) use k-hop random walks to extend the node features . APPNP ( Klicpera et al. , 2019a ) applies personalized PageRank to propagate the node predictions generated by a neural network . Other models concatenate ( Abu-El-Haija et al. , 2019 ) or aggregate ( Xu et al. , 2019a ; Ma et al. , 2020 ) node embeddings in every layer using a diffusion-based transition matrix . The diffusion-based graph neural network ( GDC ) ( Klicpera et al. 
, 2019b) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion. Subsequent works use diffusion methods on multiple scales (Liao et al., 2019; Luan et al., 2019; Xhonneux et al., 2020) and attention (Wang et al., 2020). Morris et al. (2019) take higher-order graph structures at multiple scales into account during message passing, based on the k-dimensional Weisfeiler-Leman graph algorithm. All the above approaches are relatively complex, many terminated with memory errors in our experiments, and few have been evaluated for link prediction. Virtual Nodes. To the best of our knowledge, virtual nodes have only been considered in the context of graph classification so far, where a single virtual node (also called a supernode) is added to the graph to be classified and connected to all graph nodes (Gilmer et al., 2017; Li et al., 2017; Pham et al., 2017; Ishiguro et al., 2019). Note that the original idea was to compute a graph embedding in parallel with the node embeddings, and Li et al. (2017) even connected the virtual node in only one direction (i.e., via edges from the graph nodes) instead of bidirectionally. There are some GNNs that single out special nodes that we could consider “virtual”. Fey et al. (2020) propose a GNN for molecule graph classification that clusters certain nodes within a molecule using a structure-based, molecule-specific algorithm and then applies message passing within and between these clusters. The graph-partition-based message passing from Liao et al. (2018) also uses clustering, but it only divides the original messages into inter- and intra-cluster messages. Our approach creates new “paths” in the graph, and we theoretically demonstrate its expressiveness. P-GNN (You et al.
, 2019) assigns nodes to random clusters (“anchor-sets”) and then creates a message for each node for every anchor-set, while ignoring the message passing from the original direct neighbors. Our virtual nodes represent an alternative means to aggregate messages from multiple graph nodes that are not necessarily direct neighbors. We also explore the idea of similar random assignments in our context, but show that more elaborate techniques generally work better. Most importantly, we do not propose a specific new GNN but a new technique for augmenting existing graph neural networks. Although it is a well-known trick, the advantage of using virtual nodes has never been theoretically investigated nor fully understood. We focus on link prediction and considerably extend the virtual node technique. There are commonalities in the advantages of using virtual nodes for graph classification and link prediction, but their role in link prediction is to improve the representation of the link instead of the graph (nodes). We analyze theoretically and empirically how they improve GNN performance. 3 PRELIMINARIES . Link Prediction. We consider an undirected graph $G = (V, E)$ with nodes $V$ and edges $E \subseteq V \times V$. Note that this basic choice is only for ease of presentation; all our techniques work for directed graphs and, with simple adaptation, also for graphs with labelled edges. We assume $V$ to be ordered and may refer to a node by its index in $V$. For a node $v \in V$, $N_v$ denotes the set of its neighbors. Given two nodes, the link prediction task is to predict whether there is a link between them. Message-Passing Graph Neural Networks. In this paper, we use the term graph neural networks (GNNs) to denote GNNs that use message passing as described by Gilmer et al. (2017). These networks compute for every $v \in V$ a node representation $h_v^{\ell}$ at layer $\ell \in [1, 2, \dots, L]$, by aggregating its neighbor nodes with a generic aggregation function and then combining the obtained vector with $h_v^{\ell-1}$ as below; $h_v^0$ are the initial node features.
$$h_v^{\ell} = \mathrm{COMBINE}^{\ell}\Big(h_v^{\ell-1},\ \mathrm{AGGREGATE}^{\ell}\big(\{h_u^{\ell-1} \mid u \in N_v\}\big)\Big) \qquad (1)$$
Link prediction with GNNs is usually done by combining (e.g., concatenating) the final representations $h_u^L$, $h_v^L$ of the nodes $u$, $v$ under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring. We follow this approach. We further use $[1, n]$ to denote the interval $[1, 2, \dots, n]$.
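A minimal NumPy sketch of equation 1 and the scoring step, assuming mean aggregation and a linear-plus-tanh COMBINE (illustrative instances of the generic scheme), with random matrices standing in for trained parameters.

```python
import numpy as np

def mp_layer(H, neighbors, W_self, W_agg):
    """One message-passing layer (equation 1): mean AGGREGATE over N_v,
    then a linear COMBINE with a tanh nonlinearity (illustrative choices)."""
    agg = np.stack([H[nv].mean(axis=0) if nv else np.zeros(H.shape[1])
                    for nv in neighbors])
    return np.tanh(H @ W_self + agg @ W_agg)

def score_link(H, u, v, w_out):
    """Concatenate the final representations of u and v, then a sigmoid score."""
    z = np.concatenate([H[u], H[v]]) @ w_out
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(3, d))            # initial features h^0 for a 3-node graph
neighbors = [[1], [0, 2], [1]]         # path graph 0 - 1 - 2
for _ in range(2):                     # L = 2 layers
    H = mp_layer(H, neighbors, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
score = score_link(H, 0, 2, rng.normal(size=2 * d))
print(0.0 < score < 1.0)               # True: a probability-like link score
```

In practice the single output vector `w_out` would be replaced by the several feed-forward layers mentioned above; the shape of the computation is the same.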
This paper investigates using virtual nodes in graph neural networks for link prediction. Specifically, the authors use a graph clustering algorithm to determine groups of nodes in the graph and adopt multiple virtual nodes for the link prediction scenario. They also theoretically investigate the effect of using virtual nodes for link prediction. Experiments conducted on six datasets provide insights and guidelines about using virtual nodes for link prediction.
SP:c4f85e58f75ccd367c8907900be68c1ed4b05d4c
Revisiting Virtual Nodes in Graph Neural Networks for Link Prediction
1 INTRODUCTION . Link prediction is an important task to complete graphs that are missing edges in various domains : citation networks ( Kipf & Welling , 2016 ) , social networks ( Adamic & Adar , 2003 ) , medical drug interaction graphs ( Abbas et al. , 2021 ) , or knowledge graphs ( KGs ) ( Ji et al. , 2021 ) . Numerous kinds of models have been proposed to solve the link prediction problem , ranging from KG-specific predictors ( Ji et al. , 2021 ) to graph neural networks ( GNNs ) ( Kipf & Welling , 2016 ; Zhang & Chen , 2018 ) . Over dense biomedical networks , GNNs turned out to work especially well ( Hu et al. , 2020 ) . In this work , we focus on graph neural networks for link prediction . Many of the popular GNNs are based on the message-passing scheme , which computes node embeddings based on iteratively aggregating the features of ( usually direct/one-hop ) neighbor nodes along the graph edges ( Gilmer et al. , 2017 ) . Interestingly , best performance is usually obtained by only considering two to three hops of neighbors ( i.e. , 2-3 layers in the GNN ) . One main reason identified for this is over-smoothing , the problem that node representations become indistinguishable when the number of layers increases ( Li et al. , 2018 ) . The exponentially-growing amount of information has also been suggested as one issue connected to capturing long-range dependencies ( Alon & Yahav , 2021 ) . While it is likely that link prediction most often depends on the local node neighborhood , it is not beyond imagination that there are critical long-range dependencies ( e.g. , complex chains of drug-drug or drug-protein interactions ) . Hence , using a small number of layers to overcome the above problems results in under-reaching . There have been several recent proposals to overcome under-reaching . On the one hand , several works propose techniques that allow for larger numbers of GNN layers ( Xu et al. , 2018 ; Wu et al. , 2019 ; Liu et al. , 2020 ; Chen et al. 
, 2020 ; Sun et al. , 2021 ; Zhou et al. , 2020 ; Li et al. , 2020a ) . However , although ( Chen et al. , 2020 ) show that over-smoothing happens particularly in dense graphs , the link prediction experiments in these works consider citation or recommendation networks , but not the especially dense biomedical ones . And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data . On the other hand , there are approaches that adapt the message-passing scheme to consider neighbors beyond the one-hop neighborhood : based on graph diffusion ( Atwood & Towsley , 2016 ; Klicpera et al. , 2019a ; Abu-El-Haija et al. , 2019 ; Xu et al. , 2019a ; Ma et al. , 2020 ; Klicpera et al. , 2019b ) and other theories ( Morris et al. , 2019 ; You et al. , 2019 ) . However , most of these models are relatively complex and , in fact , in our experiments over the challenging graphs from the Open Graph Benchmark ( OGB ) ( Hu et al. , 2020 ) , several ran out of memory . Moreover , the majority has not considered link prediction , while this problem was recently shown to be more difficult than node classification ( Zhang et al. , 2020 ) . In this paper , we propose a simple but elegant solution to under-reaching based on the concept of virtual nodes ( Gilmer et al. , 2017 ; Li et al. , 2017 ; Pham et al. , 2017 ; Ishiguro et al. , 2019 ) . Virtual nodes are well known to often improve the graph classification performance of graph neural networks , where an artificial virtual node is added to every graph and connected to all nodes in the graph . While the virtual nodes were originally thought as representations of the entire graph , they also provide shortcuts for message passing between nodes along the graph edges . Surprisingly , the impact of virtual nodes for the link prediction problem has not been investigated yet . 
The reason for this might be that the often very large and heterogeneous “ network ” graphs in link prediction are of very different nature and require novel/adapted solutions ( e.g. , a protein interaction network may easily contain millions of nodes , whereas a molecule to be classified contains usually less than fifty ) . We explore application and effects of virtual nodes in link prediction theoretically and empirically : • We propose to use multiple virtual nodes in the link prediction scenario and describe a graph-based technique to connect them to the graph nodes . Consider Figure 1 . In a nutshell , we use a graph clustering algorithm to determine groups of nodes in the graph that belong together and then connect these nodes to a common virtual node . In this way , under-reaching is decreased because clustered nodes can share information easily ; at the same time , the nodes are spared of unnecessary information from unrelated nodes ( i.e. , in contrast to the single virtual node model ) . • We also investigate alternative methods to determine the virtual node connections ( e.g. , randomization in clustering ) and compare to the original model with a single virtual node . • We theoretically investigate the benefit of using ( multiple ) virtual nodes in terms of two aspects : influence score and the expressiveness in learning a structural link representation . • We conducted extensive experiments over challenging datasets of different type , provide ablation studies that confirm the superiority of our proposed techniques , analyze the results in detail , and provide first guidelines about how to use virtual nodes with different types of data and GNNs . • Most importantly , we show that our virtual node extensions most often yield rather stable performance increases and allow standard GNNs to compete with complex state-of-the-art models that also try to improve message passing , as well as with the models leading the OGB leaderboards . 2 RELATED WORK . 
We give an overview on approaches that are similar from a technical perspective ; for a more detailed summary , see Appendix A . For a more general overview of the large and diverse field of link prediction , we refer to good summaries in recent works ( Martínez et al. , 2016 ; Zhang et al. , 2020 ) . Deeper GNNs . Several techniques address over-smoothing and hence allow for constructing deeper GNNs to solve under-reaching . These models range from the simple but efficient message propagation in SGC ( Wu et al. , 2019 ; Liu et al. , 2020 ) and APPNP ( Klicpera et al. , 2019a ) and connections in JKNet ( Xu et al. , 2018 ) , to more advanced proposals ( Chen et al. , 2020 ; Sun et al. , 2021 ; Zhou et al. , 2020 ; Li et al. , 2020a ) such as the differentiable aggregation functions in DeeperGCN ( Li et al. , 2020a ) . However , although ( Chen et al. , 2020 ) show that over-smoothing happens particularly in dense graphs , the experiments in most of these works consider citation or recommendation networks , but not the especially dense and important biomedical ones . And our experiments over the latter suggest that the reported results are not generalizable to the more challenging biomedical data . Beyond One-Hop Neighbors . Recently , graph diffusion methods are used in various ways to determine the message targets and thus extend standard message passing beyond the one-hop neighborhood . Atwood & Towsley ( 2016 ) use k-hop random walks to extend the node features . APPNP ( Klicpera et al. , 2019a ) applies personalized PageRank to propagate the node predictions generated by a neural network . Other models concatenate ( Abu-El-Haija et al. , 2019 ) or aggregate ( Xu et al. , 2019a ; Ma et al. , 2020 ) node embeddings in every layer using a diffusion-based transition matrix . The diffusion-based graph neural network ( GDC ) ( Klicpera et al. 
, 2019b ) aggregates information from multiple neighborhood hops at each layer by sparsifying a generalized form of graph diffusion . Subsequent works use diffusion methods on multiple scales ( Liao et al. , 2019 ; Luan et al. , 2019 ; Xhonneux et al. , 2020 ) and attention ( Wang et al. , 2020 ) . Morris et al . ( 2019 ) take higher-order graph structures at multiple scales into account during message passing , based on the k-dimensional Weisfeiler-Leman graph algorithm . All of the above approaches are relatively complex , many terminated with memory errors in our experiments , and few have been evaluated for link prediction .
Virtual Nodes . To the best of our knowledge , virtual nodes have so far only been considered in the context of graph classification , where a single virtual node ( also called a supernode ) is added to the graph to be classified and connected to all graph nodes ( Gilmer et al. , 2017 ; Li et al. , 2017 ; Pham et al. , 2017 ; Ishiguro et al. , 2019 ) . Note that the original idea was to compute a graph embedding in parallel with the node embeddings , and the virtual node was even connected only in one direction ( i.e. , via edges from the graph nodes ) instead of bidirectionally ( Li et al. , 2017 ) . Some GNNs single out special nodes that could be considered “ virtual ” . Fey et al . ( 2020 ) propose a GNN for molecule graph classification which clusters certain nodes within a molecule using a structure-based , molecule-specific algorithm and then applies message passing within and between these clusters . The graph-partition-based message passing of Liao et al . ( 2018 ) also uses clustering , but it merely divides the original messages into inter- and intra-cluster messages . Our approach creates new “ paths ” in the graph , and we theoretically demonstrate its expressiveness . P-GNN ( You et al .
, 2019 ) assigns nodes to random clusters ( “ anchor-sets ” ) and then creates a message for each node for every anchor-set , while ignoring message passing from the original direct neighbors . Our virtual nodes represent an alternative means to aggregate messages from multiple graph nodes which are not necessarily direct neighbors . We also explore the idea of similar random assignments in our context , but show that more elaborate techniques generally work better . Most importantly , we do not propose a specific , new GNN but a new technique for augmenting existing graph neural networks . Although it is a well-known trick , the advantage of using virtual nodes has never been theoretically investigated nor fully understood . We focus on link prediction and considerably extend the virtual node technique . There are commonalities in the advantages of using virtual nodes for graph classification and link prediction , but their role in link prediction is to improve the representation of the link instead of the graph ( nodes ) . We analyze theoretically and empirically how they improve GNN performance .
3 PRELIMINARIES .
Link Prediction . We consider an undirected graph G = ( V , E ) with nodes V and edges E ⊆ V × V . Note that this basic choice is only for ease of presentation . All our techniques work for directed graphs and , with simple adaptation , also for graphs with labelled edges . We assume V to be ordered and may refer to a node by its index in V . For a node v ∈ V , N_v denotes the set of its neighbors . Given two nodes , the link prediction task is to predict whether there is a link between them .
Message-Passing Graph Neural Networks . In this paper , we usually use the term graph neural networks ( GNNs ) to denote GNNs that use message passing as described by Gilmer et al . ( 2017 ) . These networks compute for every v ∈ V a node representation h_v^ℓ at layer ℓ ∈ [ 1 , 2 , ... , k ] , by aggregating its neighbor nodes based on a generic aggregation function and then combining the obtained vector with h_v^{ℓ−1} as below ; h_v^0 are the initial node features .
h_v^ℓ = COMBINE^ℓ ( h_v^{ℓ−1} , AGGREGATE^ℓ ( { h_u^{ℓ−1} | u ∈ N_v } ) )   (1)
Link prediction with GNNs is usually done by combining ( e.g. , concatenating ) the final representations h_u^L , h_v^L of the nodes u , v under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring . We follow this approach . We further use [ 1 , n ] to denote the interval [ 1 , 2 , ... , n ] .
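A minimal numpy sketch may help make equation ( 1 ) and the scoring step concrete. The concrete choices below — mean aggregation for AGGREGATE, a ReLU of two learned linear maps for COMBINE, and all weight names — are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def gnn_layer(h, neighbors, W_self, W_agg):
    """One message-passing layer, instantiating (1) as
    h'_v = ReLU(W_self h_v + W_agg * mean_{u in N_v} h_u)."""
    h_new = np.zeros((h.shape[0], W_self.shape[0]))
    for v, nbrs in enumerate(neighbors):
        agg = h[nbrs].mean(axis=0) if nbrs else np.zeros(h.shape[1])
        h_new[v] = np.maximum(0.0, W_self @ h[v] + W_agg @ agg)
    return h_new

def link_score(h, u, v, w_out):
    """Score a candidate link (u, v): concatenate the final node
    representations and apply a linear map plus sigmoid."""
    z = np.concatenate([h[u], h[v]])
    return 1.0 / (1.0 + np.exp(-(w_out @ z)))
```

In practice the layer is applied k times before scoring; a single feed-forward layer stands in here for the "several feed-forward layers" of the paper.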
The authors revisit the commonly used trick of virtual nodes in graph learning. They propose using multiple virtual nodes in the link prediction scenario and provide both theoretical and empirical support for this idea. For the theoretical analysis, the authors consider the influence score for m-regular graphs and the expressiveness of the learned link representation (formed by concatenating the representations of the two nodes) in a special case with non-attributed graphs. For the empirical analysis, they compare the performance of the multiple-virtual-node setting with the single-virtual-node setting across different GNN strategies and different datasets. They conclude that virtual nodes can stably improve base GNN performance on some challenging link prediction tasks.
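The multiple-virtual-node construction described above can be sketched in a few lines. In this illustration the cluster assignment is assumed to be given (the paper computes it with a graph clustering algorithm); the function and variable names are hypothetical:

```python
def add_virtual_nodes(num_nodes, edges, clusters):
    """Augment a graph with one virtual node per cluster: every graph node
    is connected bidirectionally to the virtual node of its cluster.
    `clusters` maps node index -> cluster id; how the clustering is computed
    (e.g., METIS, spectral clustering) is left open."""
    new_edges = list(edges)
    cluster_ids = sorted(set(clusters))
    virtual_of = {c: num_nodes + i for i, c in enumerate(cluster_ids)}
    for v in range(num_nodes):
        u = virtual_of[clusters[v]]
        new_edges.append((v, u))
        new_edges.append((u, v))
    return num_nodes + len(cluster_ids), new_edges
```

The augmented edge list can then be fed to any standard message-passing GNN unchanged, which is the sense in which this is an augmentation technique rather than a new model.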
SP:c4f85e58f75ccd367c8907900be68c1ed4b05d4c
Certified Robustness for Deep Equilibrium Models via Interval Bound Propagation
1 INTRODUCTION .
A recent development in neural network design has been the introduction of implicit layers ( Amos & Kolter , 2017 ; Chen et al. , 2018 ; Agrawal et al. , 2019 ; Bai et al. , 2019 ; 2020 ; El Ghaoui et al. , 2021 ) , where the output is defined implicitly as the solution to certain sets of conditions , rather than explicitly via closed-form functions . These layers are promising alternatives to standard explicit deep learning layers and have demonstrated improved expressivity and inductive biases in a variety of settings , for example , processing time series ( Rubanova et al. , 2019 ) , generative modeling ( Grathwohl et al. , 2018 ) , solving logical reasoning tasks ( Wang et al. , 2019 ) , solving two-player games ( Ling et al. , 2018 ) , and many others . One particularly promising class of implicit layers is deep equilibrium layers ( DEQs ) ( Bai et al. , 2019 ) , which define the output as the solution to an input-dependent fixed point equation . DEQ-based models have matched or outperformed traditional explicit models even in commonly benchmarked settings ( Bai et al. , 2019 ; 2020 ) . Though recent empirical successes of DEQs have been promising , their implicit nature and inherent mathematical complexity also give rise to basic concerns . In order for DEQs to realize their promise , these concerns should ideally be mitigated or resolved . For example , one major issue with DEQs is well-posedness – a solution to the fixed point equation defining the layer might not exist . On the other hand , explicit layers always have well-defined outputs . A recent line of work has focused on addressing this important concern ( Winston & Kolter , 2020 ; Revay et al. , 2020 ; Xie et al. , 2021 ) . This paper tackles a less-studied , but also important , question for DEQs : certified adversarial robustness . Because robustness is a basic concern for safe deployment of deep models ( Szegedy et al. , 2013 ; Goodfellow et al .
, 2014 ) , for explicit models there is a large literature dedicated to certifying robustness , or guaranteeing correctness of the predictions even when the input is subject to imperceptible adversarial perturbations ( see e.g. , Raghunathan et al. , 2018a ; Wong & Kolter , 2018 ; Gowal et al. , 2018 ; Dvijotham et al. ; Xiao et al. , 2018 ; Cohen et al. , 2019 ) . Many certified robustness methods require opening up the black box of the model and therefore only work for explicit models . It is unclear how to certify robustness of DEQs , which are only defined implicitly . The certified robustness method motivating this work is interval bound propagation ( IBP ) ( Mirman et al. , 2018 ; Gowal et al. , 2018 ) , a simple and cheap way to certify robustness to ℓ∞ perturbations . IBP computes layerwise upper and lower bounds for each coordinate of the adversarially perturbed hidden layers . These bounds follow basic rules of interval arithmetic and are simple to obtain in closed form as explicit functions of a layer ’ s weights . However , it is unclear how to apply this idea when layers are only defined implicitly . In this paper , we propose the IBP-MonDEQ , a deep equilibrium layer which is certifiably robust to ℓ∞ perturbations . Motivated by principles from IBP , we define the IBP-MonDEQ output as the solution of an augmented fixed-point equation involving three quantities : the unperturbed output of the layer and upper and lower interval bounds on this output . As with IBP for explicit models , interval bounds on the IBP-MonDEQ output are computed during the forward pass of the network and can be composed with other layers to certify robustness of the entire model .
More concretely , we build upon monotone operator deep equilibrium ( MonDEQ ) layers proposed by Winston & Kolter ( 2020 ) , which take the preceding layer v(x) as input and output the solution z* , whose existence and uniqueness are guaranteed , to the following fixed-point equation :
z* = σ( W z* + v(x) )   (1.1)
We propose an IBP-inspired fixed-point equation
[ z̄* ; z̲* ] = σ( [ W⁺ , W⁻ ; W⁻ , W⁺ ] [ z̄* ; z̲* ] + [ v̄(x) ; v̲(x) ] )   (1.2)
( where [ a ; b ] denotes vertical stacking , the 2×2 block matrix is written row-wise , and W⁺ , W⁻ are the positive and negative parts of W defined in Section 2 ) , which maps upper and lower interval bounds v̄ , v̲ on v to z̄* and z̲* , which provide upper and lower interval bounds on z* . Figure 1 depicts this process . The augmented fixed-point equation is derived by unrolling the computation of z* into an infinitely deep , explicit network and applying IBP to the forward pass of this network . One immediate challenge is that it is not clear that a fixed point solution to ( 1.2 ) should always exist , especially given its interpretation as the result of applying IBP to an infinitely deep network . Indeed , a major drawback of IBP is that its performance degrades with deeper networks , as observed by Shi et al . ( 2021 ) and also shown in Figure 2 ( left ) . One potential explanation for this failure is that IBP bounds tend to be unstable with depth and can diverge for deeper models ( Figure 2 , right ) . On the other hand , we show that a unique fixed-point solution to ( 1.2 ) is guaranteed to exist when W admits a simple unconstrained parameterization which is easy to enforce throughout training . Thus , our results pinpoint a class of infinitely deep networks for which IBP bounds are provably stable , which may be of independent interest . We experimentally compare the proposed IBP-MonDEQ layer against IBP for standard explicit models . We consider common certified robustness benchmark settings and evaluate architectures of various sizes .
Our results show that models with IBP-MonDEQ layers can achieve comparable or better ℓ∞ certified robustness relative to fully explicit models with similar parameter counts . In summary , our contributions are as follows : 1 ) We study the certified robustness of DEQs , proposing the IBP-MonDEQ , a class of DEQs with a guaranteed unique fixed point and provable interval bounds on the fixed point value . 2 ) The proposed IBP-MonDEQs form an expressive class of infinitely-deep models for which IBP is provably stable , which may be of independent interest . 3 ) Our experiments demonstrate that IBP-MonDEQ layers are competitive with standard explicit layers for ℓ∞-certified robustness .
2 BACKGROUND .
ℓ∞ certified robustness . Consider a K-way classification task with neural network classifier F : R^d → R^K . For a given input x with true label y , the classifier F is adversarially robust to ℓ∞ perturbations with radius ε if
min_{δ ∈ R^d : ‖δ‖∞ ≤ ε} F( x + δ )_y − F( x + δ )_{y′} > 0   ∀ y′ ≠ y   (2.1)
Safety-critical applications require certifying the robustness of F , i.e. , verifying whether ( 2.1 ) holds . Directly optimizing over δ to verify ( 2.1 ) is challenging because the objective is non-convex ( Madry et al. , 2017 ) . Thus , recent work on certified robustness has focused on verifying ( 2.1 ) via computationally tractable relaxations ( Raghunathan et al. , 2018a ; Wong & Kolter , 2018 ; Dvijotham et al. ; Raghunathan et al. , 2018b ; Weng et al. , 2018 ; Gowal et al. , 2018 ; Salman et al. , 2019b ) .
Interval bound propagation . IBP is a computationally efficient method for certifying ℓ∞ robustness of neural networks ( Mirman et al. , 2018 ; Gowal et al. , 2018 ) . It proposes to verify ( 2.1 ) via a ( potentially loose ) lower bound on F( x + δ )_y − F( x + δ )_{y′} , which is obtained by propagating upper and lower bounds on each layer through the forward pass of the network . More precisely , let z(x) denote some hidden layer of the network on input x .
We say that z̄( x , ε ) , z̲( x , ε ) are interval bounds on z at x for perturbation radius ε if the following holds for all coordinates i :
z̲( x , ε )_i ≤ min_{δ ∈ R^d : ‖δ‖∞ ≤ ε} z( x + δ )_i ≤ max_{δ ∈ R^d : ‖δ‖∞ ≤ ε} z( x + δ )_i ≤ z̄( x , ε )_i   (2.2)
We omit the dependencies on x and ε when clear from context . Letting k denote the layer index , the bounds z̄_k and z̲_k are obtained inductively via simple interval arithmetic . For an affine layer z_k = W z_{k−1} + b and pre-computed bounds z̲_{k−1} , z̄_{k−1} , IBP computes z̄_k , z̲_k as follows :
[ z̄_k ; z̲_k ] = [ W⁺ , W⁻ ; W⁻ , W⁺ ] [ z̄_{k−1} ; z̲_{k−1} ] + [ b ; b ]   (2.3)
Here W⁺ := max( W , 0 ) and W⁻ := min( W , 0 ) denote the matrix W with negative or positive values truncated to 0 . For simplicity we focus on ReLU networks , with σ denoting the ReLU activation . For layers z_k = σ( z_{k−1} ) which apply σ coordinate-wise , IBP computes z̄_k = σ( z̄_{k−1} ) and z̲_k = σ( z̲_{k−1} ) . Initial bounds are obtained via ( z̄_0( x , ε ) , z̲_0( x , ε ) ) = ( x + ε·1 , x − ε·1 ) , where 1 denotes the all-ones vector . The interval bounds are propagated through all layers of the network by following the simple rules above . To certify ( 2.1 ) for the whole network , one straightforward method is to confirm that the margins of the interval bounds on the logits are positive : F̲( x , ε )_y − F̄( x , ε )_{y′} > 0 ∀ y′ ≠ y . One important note about IBP is that the bounds should be optimized during training in order for the method to provide nontrivial robustness guarantees .
Monotone operator equilibrium networks . Proposed by Winston & Kolter ( 2020 ) , MonDEQs are a class of DEQs inspired by monotone operator theory ( Ryu & Boyd , 2016 ) which have guaranteed unique fixed point solutions to the following equilibrium equation :
z* = σ( W z* + v(x) )   (2.4)
Let I_{h×h} denote the identity matrix on h dimensions , with the subscript omitted when clear . A unique fixed-point solution to ( 2.4 ) is guaranteed for the following class of W : Proposition 2.1 ( Winston & Kolter , 2020 ) .
Suppose W ∈ R^{h×h} satisfies that I − W is positive definite ( PD ) , i.e. , I − W ≻ 0 . Then for all v(x) ∈ R^h , a solution z* to ( 2.4 ) exists and is unique . Here 0 denotes the all-zeros matrix and A ≻ 0 indicates that uᵀ A u > 0 for all nonzero u ( note that A does not need to be symmetric ) . Winston & Kolter ( 2020 ) guarantee that I − W is PD using the following unconstrained parameterization , which is enforced throughout training : W = ( 1 − m ) I − A Aᵀ + B − Bᵀ , for a positive hyperparameter m . Section 3.1 builds on these results to derive certified upper and lower bounds on z* .
3 CERTIFYING ROBUSTNESS OF MONDEQS USING IBP .
In this section , we describe our core methodology for developing certifiably robust MonDEQ layers . We will first demonstrate how to obtain interval bounds for MonDEQ layers by computing the solution to a certain IBP-inspired fixed point equation ( 3.3 ) . In Section 3.1 , we characterize a new parameterization of W for which a unique fixed point exists . In Section 3.3 , we provide theoretical justification that the resulting IBP-MonDEQ layers remain expressive . Our aim is to derive upper and lower interval bounds z̄* and z̲* for the fixed point solution to ( 2.4 ) . A common interpretation of DEQs is that they compute an infinitely deep , unrolled explicit network ( Bai et al. , 2019 ; Winston & Kolter , 2020 ) :
z_k = σ( W z_{k−1} + v )   (3.1)
with lim_{k→∞} z_k = z* . This equivalence is informal and mainly serves to motivate our derivation of the IBP-MonDEQ .
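The basic IBP update ( 2.3 ) for an affine layer followed by a ReLU can be stated in a few lines of code. The numpy sketch below is a generic illustration (weights and shapes are arbitrary placeholders, not the paper's architecture): the positive/negative split of W picks, for every output coordinate, whichever input bound maximizes or minimizes that coordinate:

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    """Propagate interval bounds through z = W x + b, as in (2.3):
    positive entries of W pair with the matching bound, negative
    entries with the opposite one, so the result is coordinate-wise sound."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_hi = Wp @ hi + Wn @ lo + b
    new_lo = Wp @ lo + Wn @ hi + b
    return new_lo, new_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so bounds simply pass through elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```

Chaining these two functions layer by layer reproduces the full IBP forward pass for an explicit ReLU network.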
Given interval bounds v̄ and v̲ on v satisfying v̲ ≤ v ≤ v̄ , where the inequalities hold elementwise , we can also follow IBP and ( 2.3 ) to iteratively obtain interval bounds on z_k :
[ z̄_k ; z̲_k ] = σ( [ W⁺ , W⁻ ; W⁻ , W⁺ ] [ z̄_{k−1} ; z̲_{k−1} ] + [ v̄ ; v̲ ] )   (3.2)
Just as we took lim_{k→∞} z_k , we consider lim_{k→∞} z̄_k , z̲_k , motivating another fixed-point problem :
[ z̄* ; z̲* ] = σ( [ W⁺ , W⁻ ; W⁻ , W⁺ ] [ z̄* ; z̲* ] + [ v̄ ; v̲ ] )   (3.3)
As shown in Figure 1 , this IBP-inspired fixed point equation essentially maps the region { v′ : v̲ ≤ v′ ≤ v̄ } to the region { z*′ : z̲* ≤ z*′ ≤ z̄* } , where the inequalities are coordinate-wise . If they exist , the fixed points z̄* and z̲* will indeed lead to valid interval bounds ( as defined in ( 2.2 ) ) on z* . Proposition 3.2 states this observation formally .
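A minimal way to approximate the fixed points of ( 3.3 ) is plain forward iteration, which is exactly the limit of ( 3.2 ). The sketch below uses the MonDEQ parameterization of W from Proposition 2.1 for illustration; note that naive iteration is only guaranteed to converge when the iteration map is a contraction (e.g., when the absolute row sums of W are below 1), whereas the paper's existence result relies on a dedicated parameterization and solver, so this is a toy illustration rather than the paper's method:

```python
import numpy as np

def mondeq_W(A, B, m=0.5):
    """Unconstrained parameterization making I - W positive definite
    (Winston & Kolter, 2020): W = (1 - m) I - A A^T + B - B^T."""
    h = A.shape[0]
    return (1.0 - m) * np.eye(h) - A @ A.T + B - B.T

def ibp_mondeq_bounds(W, v_lo, v_hi, n_iter=500):
    """Approximate the fixed point of (3.3) by forward iteration:
    [z_hi; z_lo] = ReLU([[W+, W-], [W-, W+]] [z_hi; z_lo] + [v_hi; v_lo])."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    M = np.block([[Wp, Wn], [Wn, Wp]])
    h = len(v_lo)
    u = np.zeros(2 * h)
    rhs = np.concatenate([v_hi, v_lo])
    for _ in range(n_iter):
        u = np.maximum(0.0, M @ u + rhs)
    return u[h:], u[:h]  # (lower bound z_lo, upper bound z_hi)
```

By induction on the iteration, any solution z of z = ReLU(W z + v') with v_lo ≤ v' ≤ v_hi stays inside the returned bounds, which is the soundness property ( 2.2 ) asks for.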
The paper presents a new class of neural networks called IBP-MonDEQs, an extension of the recently introduced implicit networks MonDEQs. The authors identify a class of weight matrices that ensures the fixed point of the implicit layer, with respect to the interval analysis, exists and is unique. The construction of IBP-MonDEQ is motivated by the goal of obtaining networks that can be certified to be robust. The authors then train such networks and compare against explicit networks of the same architecture trained with certified IBP training. The results show that IBP-MonDEQs can obtain better certified robustness than explicit networks on the MNIST and CIFAR10 datasets.
SP:868493ffb117e15730ab87159ba934b93b0bf8e4
Certified Robustness for Deep Equilibrium Models via Interval Bound Propagation
The authors propose a deep equilibrium (DEQ) layer that provides certifiable robustness via the interval bound propagation technique. This involves augmenting the original fixed point condition considered in DEQs with two additional fixed point conditions, one for each bound. The main contribution is a theoretical result stating that, when parameterised in a certain way, this IBP-DEQ admits a unique fixed point (Theorems 3.1 and 3.3), and that the model provides valid IBP bounds (Proposition 3.2). Motivated by a further theoretical result (Proposition 3.7), the authors show empirically that, despite the restrictions imposed by the specific DEQ and parameterisation, the model achieves comparable or improved performance compared with explicit models on MNIST and CIFAR10.
SP:868493ffb117e15730ab87159ba934b93b0bf8e4
Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness
1 INTRODUCTION .
With convolutional neural networks ( CNNs ) becoming ubiquitous , the security and robustness of neural networks are attracting increasing interest . Recent studies find that CNN models are inherently vulnerable to adversarial attacks ( Goodfellow et al. , 2014 ) , which craft imperceptible perturbations of the images , referred to as adversarial examples , to mislead the neural network models . Even without accessing the target model , an adversary can still generate adversarial examples from other surrogate models to attack the target model by exploiting the adversarial transferability among them . Such vulnerability of CNN models has spurred extensive research on improving the robustness against adversarial attacks . One stream of approaches targets learning robust features for an individual model ( Madry et al. , 2017 ; Brendel et al. , 2020 ) . Informally , robust features are defined as features that are less sensitive to the adversarial perturbations added to the inputs . A representative approach , referred to as adversarial training ( Madry et al. , 2017 ) , generates adversarial examples online and minimizes the training loss on them . As a result , adversarial training encourages the model to learn features that are less sensitive to adversarial input perturbations , thereby alleviating the model ’ s vulnerability . However , such adversarial training methods often have to sacrifice clean accuracy for enhanced robustness ( Zhang et al. , 2019 ) , since excluding the non-robust features makes the model less able to distinguish examples with high similarity in the feature space . Besides empowering improved robustness for an individual model , another stream of research focuses on forming strong ensembles to improve robustness ( Yang et al. , 2020 ; Bagnall et al. , 2017 ; Pang et al. , 2019 ; Kariyappa & Qureshi , 2019 ) .
Generally speaking, an ensemble is constructed by aggregating multiple sub-models. Intuitively, an ensemble promises better robustness than an individual model because a successful attack needs to mislead the majority of the sub-models rather than just one. While the robustness of an ensemble relies heavily on the diversity of its sub-models, a recent study finds that CNN models trained independently on the same dataset have highly overlapping adversarial subspaces (Tramèr et al., 2017). Therefore, many studies propose ensemble training methods to diversify the sub-models. For example, DVERGE (Yang et al., 2020) distills the non-robust features corresponding to each sub-model's vulnerability, then isolates the vulnerabilities of the sub-models by mutual learning, thereby impeding the adversarial transferability among them. Another insight is that ensembles composed of more sub-models tend to obtain greater robustness improvements. Table 1 shows the robustness trend of ensembles trained with various ensemble training methods: robustness improves as more sub-models are included in the ensemble. This drives us to explore whether the trend continues as the ensemble keeps growing. However, existing ensemble construction methods scale poorly because of rapidly increasing overhead; in particular, with mutual learning, which trains the sub-models in a round-robin manner, the complexity rises as O(n^2). We propose Ensemble-in-One, a novel approach that improves the scalability of ensemble training and introduces a randomness mechanism for enhanced generalization, simultaneously obtaining better robustness and higher efficiency. For a given CNN model, we construct a Random Gated Network (RGN) by substituting each parameterized layer with a Random Gated Block (RGB) on top of the neural architecture.
Through this, the network can instantiate numerous sub-models by controlling the gates in each block. Ensemble-in-One substantially reduces the complexity of scaling up the ensemble. In summary, the contributions of this work are as follows: • Ensemble-in-One is a simple but effective method that learns adversarially robust ensembles within one over-parametrized random gated network. The EIO construction enables us to employ ensemble learning techniques to learn more robust individual models with minimal computational overhead and no extra inference overhead. • Extensive experiments demonstrate the effectiveness of Ensemble-in-One. It outperforms previous ensemble training methods with even less computational overhead. Moreover, EIO also achieves better accuracy-robustness trade-offs than adversarial training methods. 2 RELATED WORK . 2.1 ADVERSARIAL ATTACKS AND COUNTERMEASURES . The inherent vulnerability of CNN models poses challenges for the security of deep learning systems. An adversary can apply an additive perturbation to an original input to generate an adversarial example that induces a wrong prediction in CNN models (Goodfellow et al., 2014). Denoting an input as x, the goal of an adversarial attack is to find a perturbation δ such that x_adv = x + δ misleads the model, where δ satisfies the intensity constraint ||δ||_p ≤ ε. Formally, the adversarial attack aims at maximizing the loss L of the model with parameters θ on the input-label pair (x, y), i.e., δ = argmax_δ L_θ(x + δ, y), under the constraint that the ℓ_p norm of the perturbation does not exceed the bound ε. Usually, the ℓ_∞ norm (Goodfellow et al., 2014; Madry et al., 2017) of the perturbations is used to measure an attack's effectiveness or a model's robustness. An attack that requires a smaller perturbation to successfully deceive the model is regarded as stronger.
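To make the attack objective concrete, here is a minimal NumPy sketch of a signed-gradient (FGSM-style) step and an ℓ∞-constrained multi-step PGD loop on a toy binary logistic-regression model. The toy model, loss, and all function names are illustrative assumptions of this sketch, not the original deep-network setting of these attacks.

```python
import numpy as np

def fgsm_step(x, y, w, b, step):
    """One signed-gradient ascent step on the logistic loss
    L = -log sigmoid(y * (w.x + b)), with label y in {-1, +1} (toy model)."""
    margin = y * (w @ x + b)
    grad_x = -y / (1.0 + np.exp(margin)) * w  # dL/dx for this loss
    return x + step * np.sign(grad_x)

def pgd_attack(x, y, w, b, eps, step, iters, rng):
    """Multi-step PGD: random start, repeated signed-gradient steps,
    projection back onto the l_inf ball ||delta||_inf <= eps."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(iters):
        x_adv = fgsm_step(x_adv, y, w, b, step)
        x_adv = np.clip(x_adv, x - eps, x + eps)      # l_inf projection
    return x_adv
```

On this convex toy loss, every step moves each coordinate against y·w, so the attacked margin y(w·x_adv + b) strictly decreases while the perturbation stays inside the ε-ball after projection.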
Correspondingly, a defense that forces attacks to enlarge the perturbation intensity is regarded as more robust. Various adversarial attack methods have been investigated to strengthen attack effectiveness. The fast gradient sign method (FGSM) (Goodfellow et al., 2014) takes a single step along the sign of the input gradient to generate adversarial examples. As improvements, many studies show that attacks can be strengthened through multi-step projected gradient descent (PGD) (Madry et al., 2017), random-start strategies, and momentum mechanisms (Dong et al., 2017). SGM (Wu et al., 2020) further finds that up-weighting the gradients flowing through skip connections makes attacks more effective. Other prevalent attack approaches include the C&W attack (Carlini & Wagner, 2017b), M-DI2-FGSM (Xie et al., 2019), etc. These attacks provide strong and effective ways to generate adversarial examples, posing a serious threat to real-world deep learning systems. To improve the robustness of CNN systems, there are also extensive countermeasures against adversarial attacks. One active research direction targets improving the robustness of individual models. Adversarial training (Madry et al., 2017) optimizes the model on adversarial examples generated at every step of training. The optimized model therefore tends to drop non-robust features in order to converge better on the adversarial data. However, adversarial training encourages the model to fit the adversarial examples, thereby reducing generalization on clean data and causing significant degradation of clean accuracy. 2.2 TEST-TIME RANDOMNESS FOR ADVERSARIAL DEFENSE . Besides the aforementioned training techniques, some studies introduce test-time randomness to improve robustness. Feinman et al. (2017) utilize an uncertainty measure in dropout networks to detect adversarial examples.
Dhillon et al. (2018) and Xie et al. (2017) incorporate layer-wise weighted dropout and random input transformations, respectively, at test time to improve robustness. Test-time randomness is found to be effective in increasing the distortion required to fool a model, since it makes generating white-box adversarial examples almost as difficult as generating transferable black-box ones (Carlini & Wagner, 2017a). Nevertheless, test-time randomness increases the inference cost and can be circumvented to some extent with the expectation-over-transformation technique (Athalye et al., 2018). 2.3 ENSEMBLE TRAINING FOR ADVERSARIAL DEFENSE . Besides improving the robustness of individual models, another recent research direction investigates the robustness of model ensembles in which multiple sub-models work together. The basic idea is that multiple sub-models can provide diverse decisions; ensemble methods combine multiple weak models to make joint decisions, thereby assembling a stronger whole. However, it has been demonstrated that independently trained models tend to capture similar features, which provides little diversity among them (Kariyappa & Qureshi, 2019). Therefore, several studies propose ensemble training methods that explicitly diversify the sub-models to improve ensemble robustness. For example, Pang et al. (2019) treat the distribution of output predictions as a diversity measure and propose an adaptive diversity promoting (ADP) regularizer to diversify the non-max predictions of the sub-models. Kariyappa & Qureshi (2019) regard the gradients w.r.t. the inputs as a way to discriminate different models, and propose a gradient alignment loss (GAL) that takes the cosine similarity of the gradients as a criterion to train the sub-models. The very recent work DVERGE (Yang et al.
, 2020) claims that the similar non-robust features captured by the sub-models cause high adversarial transferability among them. The authors therefore exploit non-robust feature distillation and adopt mutual learning to diversify and isolate the vulnerabilities among the sub-models, so that within-ensemble transferability is strongly impeded. However, as mentioned before, such ensemble methods are overwhelmed by fast-increasing overhead when scaling up the ensemble. For example, DVERGE takes 11 hours to train an ensemble with three sub-models, but approximately 50 hours when the sub-model count increases to eight. Therefore, a more efficient ensemble construction method is highly desirable to tackle the scaling problem. 3 ENSEMBLE-IN-ONE . 3.1 BASIC MOTIVATION . The conventional way to construct ensembles is to simply aggregate multiple sub-models by averaging their predictions, which is inefficient and hard to scale up. An intuitive way to enhance the scalability of ensemble construction is to introduce an ensemble within each layer of the network. As shown in Fig. 1, we can build a dynamic network by augmenting each parameterized layer with an n-path gated block. Then, by selecting paths through the augmented layers, the dynamic network can ideally instantiate n^L distinct sub-models, where L is the number of augmented layers. Taking ResNet-20 as an example, by replacing each convolution layer (ignoring the skip-connection branch) with a two-path gated module, the overall path count reaches 2^19 = 524288. Such augmentation approximates training a very large ensemble of sub-models. Then, through vulnerability-diversifying mutual learning, each path tends to acquire better robustness. Following this idea, we propose Ensemble-in-One to further improve the robustness of both individual models and ensembles. 3.2 CONSTRUCTION OF THE RANDOM GATED NETWORK . Denote a candidate neural network as N(o_1, o_2, ...
, o_m), where o_i represents an operator in the network. To transform the original network into a random gated network (RGN), we first extract the neural architecture to obtain the connection topology and layer types. On top of that, we replace each parameterized layer (mainly convolutional layers, optionally followed by a batch normalization layer) with a random gated block (RGB). As shown in Fig. 2, each RGB simply repeats the original layer n times and leverages binary gates with uniform probabilities to control the opening or muting of the corresponding sub-layers. These repeated sub-layers have different weight parameters. We denote the RGN as N(d_1, d_2, ..., d_m), where d_i = (o_i1, ..., o_in). Let g_i be the gate vector of the i-th RGB; then a specific path derived from the RGN can be expressed as P = (g_1 · d_1, g_2 · d_2, ..., g_m · d_m). For each RGB, when performing the computation, only one of the n gates is open at a time, and the others are temporarily muted. Thereby, only one path of activations is alive in memory during training, which reduces the memory occupation of training an RGN to the same level as training an individual model. Moreover, to ensure that all paths are sampled and trained equally, each gate in an RGB is chosen with identical probability, i.e., 1/n if each RGB consists of n sub-operators. Therefore, the binary gate function can be expressed as:

g_i = [1, 0, ..., 0] with probability 1/n,
      [0, 1, ..., 0] with probability 1/n,
      ...
      [0, 0, ..., 1] with probability 1/n.     (1)

An RGN is analogous to the super network in parameter-sharing neural architecture search, and the forward process of an RGN is similar to evaluating a sub-architecture (Pham et al., 2018; Cai et al., 2018). Compared to conventional ensemble training methods, our method makes it much easier to scale up the ensemble.
It only incurs n× memory occupation for weight storage, while keeping the same memory requirement for activations as an individual model.
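The gate mechanism of Eq. (1) can be sketched in a few lines of NumPy; the class, layer shapes, and names below are hypothetical toy choices, not the paper's implementation. Each forward pass samples one one-hot gate per block uniformly, so a stack of m blocks with n sub-layers each instantiates one of n^m paths (2^19 = 524288 for the ResNet-20 example above).

```python
import numpy as np

class RandomGatedBlock:
    """Toy RGB: n independently parameterized copies of a linear layer;
    exactly one copy is opened per forward pass, the rest are muted."""
    def __init__(self, n, d, rng):
        self.subs = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n)]
        self.n = n

    def forward(self, x, gate):
        # gate is the index of the single open sub-layer (one-hot g_i)
        return self.subs[gate] @ x

def run_path(blocks, x, rng):
    """Sample a path P = (g_1 . d_1, ..., g_m . d_m) uniformly and run it."""
    gates = tuple(rng.integers(b.n) for b in blocks)  # each gate w.p. 1/n
    for b, g in zip(blocks, gates):
        x = b.forward(x, g)
    return x, gates
```

Because only one sub-layer per block participates in a forward pass, the activation memory matches a single model, while the weight memory grows n-fold, mirroring the trade-off stated in the text.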
This paper proposes a new way to build an ensemble of networks against adversarial attacks. Unlike other methods, which train separate sub-models, the proposed method repeats each convolution layer multiple times and controls the copies with random gates. The experiments demonstrate that it outperforms other ensemble training methods with a smaller computational overhead.
SP:1947cb313bad9ec8dfb9e00106e098872c2d07e9
This paper proposes a robust training and defending method RGN by applying control gates with binary status. During the training, the proposed method generates adversarial examples in a clean-label attack manner and mitigates the adversarial perturbation through training on another path. During the inference, RGN finds a subnetwork to defend against adversarial attacks.
Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
1 INTRODUCTION . Deep neural networks play a central role both in practice (such as computer vision (LeCun et al., 1998; Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2016), natural language processing (Collobert et al., 2011; Devlin et al., 2018), autonomous driving, and game playing (Silver et al., 2016; 2017)) and in the theoretical machine learning community (Li & Liang (2018); Jacot et al. (2018); Du et al. (2019b); Allen-Zhu et al. (2019a; b); Du et al. (2019a); Song & Yang (2019); Brand et al. (2021); Zou et al. (2018); Cao & Gu (2019); Lee et al. (2019a); Liu et al. (2020; 2021); Chen et al. (2021)). To analyze the dynamics of neural networks and obtain provable guarantees, over-parametrization has become a growing trend. In understanding the convergence behavior of over-parametrized networks, most attention has been directed to first-order methods such as gradient descent or stochastic gradient descent. The widespread use of first-order methods is explained, to a large degree, by their computational efficiency, since computing the gradient of the loss function at each iteration is usually cheap and simple, let alone their compatibility with random sampling-based methods such as minibatching. One major drawback of first-order methods is that their convergence rate is typically slow in many non-convex settings (poly(n, L, log(1/ε)) iterations, where n is the number of training samples, L is the number of layers, and ε is the training precision), e.g., deep neural networks with ReLU activation, as shown in Allen-Zhu et al. (2019a), which is often the case for a deep over-parameterized neural network. Second-order methods (which employ information from the Hessian matrix), on the other hand, enjoy a much faster convergence rate (only log(1/ε) iterations (Zhang et al.
, 2019), rather than poly(n) · log(1/ε) iterations) and exploit the local geometry of the loss function to overcome the pathological curvature issues that hamper first-order methods. Another clear advantage of second-order methods over first-order methods is that they do not require tuning of the learning rate. The expense of using second-order methods is their prohibitive cost per iteration, as it is imperative to invert a dynamically-changing Hessian matrix or, equivalently, to solve a regression task involving the Hessian. Given any weight matrix of size m × m (m is the width of the network), its Hessian matrix has size m^2 × m^2, which makes any naive implementation of a second-order algorithm take at least O(m^4) time, since one needs to write down the Hessian. This explains the scarcity of large-scale second-order methods in non-convex settings, such as training deep neural networks, in contrast to their popular presence in convex optimization (Vaidya (1989); Daitch & Spielman (2008); Lee et al. (2015); Cohen et al. (2019); Lee et al. (2019b); Jiang et al. (2020b; a); Song & Yu (2021)). Recent works (Cai et al. (2019); Zhang et al. (2019)) improved the practicality of second-order methods for training deep networks and presented algorithms to train one-hidden-layer over-parametrized networks with smooth activation functions. Specifically, they achieve a running time of O(mn^2) per iteration. Their methods are essentially variants of the Gauss-Newton method, combined with neural tangent kernels (Jacot et al. (2018)) to prove convergence. By using ideas from randomized linear algebra (Clarkson & Woodruff (2013); Woodruff (2014)) and a clever sketching matrix as a preconditioner, Brand et al. (2021) further improve the running time to Õ(mn) per iteration.
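As a hedged illustration of the Gram-Gauss-Newton idea underlying these works (a toy sketch, not the papers' actual algorithms), the update solves an n × n Gram system G = J Jᵀ instead of inverting a p × p matrix, which is cheap in the over-parametrized regime p ≫ n. For a linear toy model f(w) = Xw the Jacobian is simply X, and one step interpolates the data exactly:

```python
import numpy as np

def ggn_step(w, J, f, y):
    """One Gram-Gauss-Newton step: w <- w + J^T (J J^T)^{-1} (y - f).
    J is the n x p Jacobian of the predictions w.r.t. the parameters."""
    G = J @ J.T                      # n x n Gram matrix, small when n << p
    return w + J.T @ np.linalg.solve(G, y - f)

rng = np.random.default_rng(1)
n, p = 10, 200                       # over-parametrized: p >> n
X = rng.normal(size=(n, p))          # toy linear model f(w) = X w, so J = X
y = rng.normal(size=n)
w = np.zeros(p)
w = ggn_step(w, X, X @ w, y)         # residual drops to ~0 in one step
```

For a deep network the Jacobian changes at every iteration and forming it is itself expensive, which is exactly the cost these papers attack with sketching and low-rank structure.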
However, all of these algorithms train a shallow network with one hidden layer and fall short on deep networks. First, it is not clear that their algorithms generalize to the multi-layer setting, due to the presence of gradient vanishing or exploding. In the seminal work of Allen-Zhu et al. (2019a), it was shown that as long as the networks are over-parametrized, first-order methods such as gradient descent and stochastic gradient descent do not encounter such problems. But does this still hold for second-order methods? Can we provably show that second-order methods perform well when training deep over-parametrized networks? Second, even the fastest of them (Brand et al. (2021)) would incur a running time of Õ(m^2 nL) per iteration, which seems unavoidable since the intermediate weight matrices have size m × m. In this work, we take the first step to tame the beast: we propose a second-order method that achieves subquadratic cost per iteration with respect to m, and show that it has a linear convergence rate in training deep over-parametrized neural networks. We emphasize the importance of obtaining a subquadratic algorithm, since in the multi-layer setting the network width is typically much larger than in the one-hidden-layer setting (m ≥ n^8 L^12, (Zou & Gu, 2019)). Our work consists of two parts: algorithmic and analytical. From the algorithmic perspective, our method builds upon a variant of the Gauss-Newton method (Björck (1996)) called the Gram-Gauss-Newton method (Cai et al. (2019); Brand et al. (2021)). To achieve a feasible running time, we exploit two features of the gradient, which is the key ingredient in forming the Jacobian matrix: 1) the gradient is low rank (rank n); 2) the gradient can be formulated as the outer product of two vectors. From the analytical perspective, our work is inspired by Allen-Zhu et al. (2019a).
In contrast to their proof, which is a straightforward analysis of the gradient, we make use of multi-layer neural tangent kernels (Du et al. (2019a)) and establish a connection between a Gram matrix we compute at each iteration and the NTK matrix. Our Contributions. We summarize our technical contributions below. • We develop an analytical framework for the convergence behavior of second-order methods on training multi-layer over-parametrized neural networks. To facilitate the analysis, we exploit the equivalence between neural tangent kernels and our over-parametrized network. • We design a second-order algorithm to train such networks, and achieve a cost per iteration of õ(m²). Our algorithm makes use of the Gram-Gauss-Newton method, tensor-based sketching techniques, and data structures that maintain a low-rank representation efficiently. • By combining fast tensor algebra techniques and sketching-based preconditioning, we devise an algorithm to efficiently solve a regression problem in which the rows of the involved matrix are tensor products of vectors. 1.1 OUR RESULT. Our main result can be summarized in the following three theorems: one analyzes the convergence behavior of a general Gram-based optimization framework, one designs an efficient algorithm to realize this second-order optimization scheme, and the other gives a novel algorithm that solves tensor-based regression quickly and to high precision, a key step in our second-order method. Throughout this paper, we use n to denote the number of training data points, d the dimension of the input data points, m the width of the network, and L the number of layers of the network. We use ft ∈ ℝⁿ to denote the prediction of the neural network at time t. Our first theorem demonstrates the fast convergence rate of our algorithm. Theorem 1.1 (Convergence, informal version of Theorem F.19).
Suppose the width of the neural network satisfies m ≥ poly(n, L). Then there exists an algorithm (Algorithm 1) such that, over the randomness of the initialization of the network and of the algorithm, with probability at least 1 − e^{−Ω(log² m)}, we have ‖f_{t+1} − y‖₂ ≤ (1/2)‖f_t − y‖₂, where f_t ∈ ℝⁿ is the prediction produced by the neural network at time t. The above theorem establishes the linear convergence of our second-order method, which is a standard convergence result for second-order methods and matches the behavior in one-hidden-layer over-parametrized networks (Brand et al. (2021)). However, compared to the one-hidden-layer case, our analysis is much more sophisticated, since we have to carefully control the failure probability so that it does not blow up exponentially with respect to the number of layers. The next theorem concerns the cost per iteration of our second-order algorithm. Theorem 1.2 (Runtime, informal version of Theorem B.1). There exists a randomized algorithm (Algorithm 1) that trains a multi-layer neural network of width m with cost per training iteration O(m^{2−Ω(1)}). We improve the overall training time of multi-layer over-parametrized networks with a second-order method from T_init + T · O(m²) to T_init + T · o(m²), where T_init is the initialization time of training, which typically takes O(m²). As we have argued before, multi-layer over-parametrized networks require m on the order of n⁸, hence improving the cost per iteration from quadratic to subquadratic is an important gain in speeding up training. Its advantage is even more evident when one seeks a high-precision solution, in which case the number of iterations T is large. We highlight that it is non-trivial to obtain a subquadratic running time per iteration: if not handled properly, computing a matrix-vector product with the weight matrices already takes O(m²) time!
This means that even for first-order methods such as gradient descent, it is not clear how to achieve a subquadratic running time, since one has to multiply the weight matrix with a vector in both the forward evaluation and the backpropagation. In our case, we also have a Jacobian matrix of size n × m², so forming it naively costs O(nm²) time, which is prohibitively large. Finally, note that the update matrix is also an m × m matrix. To circumvent these problems, we exploit the fact that the gradient is of low rank (rank n), hence one can compute a rank-n factorization and use it to support fast matrix-vector products. We also observe that each row of the Jacobian matrix can be formulated as a tensor product of two vectors, so we can make use of fast randomized linear algebra to approximate the tensor product efficiently. As a byproduct, we have the following technical theorem: Theorem 1.3 (Fast Tensor Regression, informal version of Theorem D.14). Given two n × m matrices U and V with m ≫ n and a target vector c ∈ ℝⁿ, let J = [vec(u₁v₁ᵀ)ᵀ, …, vec(uₙvₙᵀ)ᵀ] ∈ ℝ^{n×m²}, where uᵢ is the i-th row of U and vᵢ the i-th row of V for all i ∈ [n]. There is a randomized algorithm that takes Õ(nm + n²(log(κ/ε) + log(m/δ)) + n^ω) time and outputs a vector x̂ ∈ ℝⁿ such that ‖JJᵀx̂ − c‖₂ ≤ ε‖c‖₂ holds with probability at least 1 − δ, where κ is the condition number of J. From a high level, the algorithm proceeds as follows: given the matrices U and V, it forms an approximation J̃ ∈ ℝ^{n × n log(m/δ)}, where each row is generated by applying a fast tensor sketching technique to uᵢ and vᵢ (Ahle et al. (2020)). Then, it uses another sketching matrix for J̃ to obtain a good preconditioner R for J̃. Subsequently, it runs gradient descent to solve the regression. To understand this runtime better, note that the nm term is the size of the matrices U and V, so reading their entries takes at least O(nm) time.
The algorithm then uses tensor-based sketching techniques (Ahle et al. (2020)) to squash the length-m² tensors down to length O(n log(m/δ)). All subsequent operations are performed on these much smaller vectors. Finally, computing the preconditioner takes Õ(n^ω) time, and running the gradient descent takes Õ(n² log(κ/ε)) time.
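A useful exact identity behind such Gram computations: ⟨vec(uᵢvᵢᵀ), vec(uⱼvⱼᵀ)⟩ = ⟨uᵢ, uⱼ⟩⟨vᵢ, vⱼ⟩, so JJᵀ equals the entrywise (Hadamard) product (UUᵀ) ∘ (VVᵀ) and can be formed in O(n²m) time without ever materializing the n × m² matrix J. This is a minimal numpy check of that identity, complementary to (and not a substitute for) the sketching-based algorithm above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 50
U = rng.standard_normal((n, m))
V = rng.standard_normal((n, m))

# Explicit J: row i is vec(u_i v_i^T), of length m^2.
J = np.stack([np.outer(U[i], V[i]).ravel() for i in range(n)])

gram_explicit = J @ J.T                       # touches length-m^2 rows: O(n^2 m^2)
gram_fast     = (U @ U.T) * (V @ V.T)         # Hadamard product: O(n^2 m), J never formed

print(np.allclose(gram_explicit, gram_fast))  # True
```

The identity follows from tr(vᵢuᵢᵀuⱼvⱼᵀ) = (uᵢᵀuⱼ)(vⱼᵀvᵢ), i.e., the trace of a product of two rank-one matrices factors into the two inner products.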
1. This paper proves that a second-order method can minimize the training loss at a linear rate on multi-layer over-parameterized neural networks. The analysis relies on the connection between the neural tangent kernel and over-parameterized neural networks. 2. The paper also reduces the per-iteration cost of second-order methods to $\widetilde{o}(m^2)$, where $m$ is the hidden-layer width, by combining the Gram-Gauss-Newton method, tensor-based sketching, and efficient data structures that maintain low-rank representations. 3. The paper also designs an algorithm to efficiently solve a regression problem in which the rows of the matrix are tensor products of two vectors; this algorithm may be of independent interest.
SP:1653be2aeb4a22e0771305d1b18024e3b88c275d
Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
1 INTRODUCTION. Deep neural networks have been playing a central role in both the practical (computer vision (LeCun et al., 1998; Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2016), natural language processing (Collobert et al., 2011; Devlin et al., 2018), automatic driving systems, game playing (Silver et al., 2016; 2017)) and the theoretical machine learning community (Li & Liang (2018); Jacot et al. (2018); Du et al. (2019b); Allen-Zhu et al. (2019a;b); Du et al. (2019a); Song & Yang (2019); Brand et al. (2021); Zou et al. (2018); Cao & Gu (2019); Lee et al. (2019a); Liu et al. (2020; 2021); Chen et al. (2021)). In order to analyze the dynamics of neural networks and obtain provable guarantees, using over-parametrization has become a growing trend. In terms of understanding the convergence behavior of over-parametrized networks, most of the attention has been directed to the study of first-order methods such as gradient descent or stochastic gradient descent. The widespread use of first-order methods is explained, to a large degree, by their computational efficiency, since computing the gradient of the loss function at each iteration is usually cheap and simple, not to mention their compatibility with random-sampling-based methods such as minibatching. One of the major drawbacks of first-order methods is that their convergence rate is typically slow in many non-convex settings (poly(n, L, log(1/ε)) iterations, where n is the number of training samples, L is the number of layers, and ε is the precision of training), e.g., for deep neural networks with ReLU activation, as shown in Allen-Zhu et al. (2019a), which is often the case for a deep over-parameterized neural network. Second-order methods (which employ the information of the Hessian matrix), on the other hand, enjoy a much faster convergence rate (only log(1/ε) iterations (Zhang et al., 2019)).
This paper proposes a second-order algorithm for training neural networks, in the L2 regression setting. It provides a theoretical analysis of its complexity in the over-parametrized regime. It does not provide empirical validation of the method, or an implementation.
This paper studies training algorithms for multi-layer over-parameterized neural networks. In particular, it starts from Gauss-Newton methods and incorporates tensor-based sketching techniques and preconditioning to improve the per-iteration computational complexity. As a result, the proposed algorithm can find the global solution in time subquadratic in the network width.
Meta Attention For Off-Policy Actor-Critic
1 INTRODUCTION. Reinforcement Learning (RL) algorithms based on the Actor-Critic framework have achieved considerable success in many areas such as games, robot control, and planning. Compared with on-policy methods, off-policy methods offer more efficient sampling, since they do not require new samples to be collected for each gradient step and make better use of experience (Haarnoja et al., 2018a). However, even for off-policy methods, traditional reinforcement learning algorithms still have extremely low sample efficiency (Yu, 2018). Recently, meta-learning (Hospedales et al., 2020) has become topical as a paradigm to accelerate RL by learning an inductive bias from past experience. By learning aspects of the learning strategy, such as fast adaptation strategies (Finn et al., 2017; Rakelly et al., 2019), losses (Zhou et al., 2020; Bechtle et al., 2020), optimization strategies (Duan et al., 2016), exploration strategies (Gupta et al., 2018), hyperparameters (Bechtle et al., 2020), and intrinsic rewards (Zheng et al., 2018), meta-learning has significantly improved sample efficiency over standard RL algorithms. Improving sample efficiency through attention mechanisms has also proved to be very effective in image-based reinforcement learning (Barati & Chen, 2019; Chen et al., 2019). The application of attention mechanisms in multi-agent systems (Parnika et al., 2021; Iqbal & Sha, 2019) and multi-object tasks (Team et al., 2021) also shows their powerful information-processing capabilities. However, in existing works, attention mechanisms often have specific application scenarios, such as image-based control or multiple sources of information (multi-agent or multi-target).
How to effectively combine attention mechanisms with current algorithms in a state-based single-agent environment remains an open problem. In this paper, we propose a meta attention method based on the attention mechanism. In the human decision-making process, people often revise their judgments based on feedback and results to reach a better decision. Inspired by this, we use meta attention to adapt the features generated by the policy network based on the evaluation of the value network. Our work differs from prior attention-based work in that our meta attention approach operates purely within the Actor-Critic framework and does not depend on a specific scenario. We formalize the meta-learning process as a bi-level optimization problem. Our approach can be flexibly combined with various algorithms by using meta attention as a meta-learner and optimizing it at the outer level. Unlike existing meta-learning methods, our meta attention approach can both improve the agent's performance through gradients in the training stage and obtain better actions by adjusting the features in the execution stage. We evaluated the proposed meta attention method on a series of continuous control tasks in Gym and Roboschool, including three 3D robot control tasks, two 2D control tasks, and one classic control task, based on DDPG (Lillicrap et al., 2016), TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018b). We also discuss the changes and impact caused by modifying the actor features through meta attention. Experimental results show that our meta attention approach not only accelerates the agent's learning in the training phase but also improves the actions in the execution phase, further enhancing the agent's performance. 2 RELATED WORK .
Attention Mechanism Attention is a behavioral and cognitive process of selectively attending to a discrete aspect of information, whether subjective or objective, while ignoring other perceptible information (de Santana Correia & Colombini, 2021). Attention mechanisms are mainly applied in computer vision and natural language processing. Accordingly, although implementations differ, applications of attention in reinforcement learning have mostly focused on video games (Wu et al., 2021; Chen et al., 2019; Barati & Chen, 2019; Mott et al., 2019; Manchin et al., 2019). Other works such as Peng et al. (2020) proposed a dynamic attention model with a dynamic encoder-decoder architecture, which dynamically explores node features to efficiently exploit hidden structural information at different construction steps. Li et al. (2021) applied the attention mechanism to generate feature vectors and fed them into the value and policy heads during the feature extraction phase of PPO and PPG. Jiang & Lu (2018); Iqbal & Sha (2019); Mao et al. (2019) employed a multi-head attention mechanism to let one agent selectively attend to information from other agents. Team et al. (2021) used attention to match multiple goals with the hidden state of the current state, obtaining goal-attention hidden states under different goals; this allows the agent to predict the expected return obtained by attending to a goal at the end of an episode. Meta Reinforcement Learning Meta-learning is most often understood as learning to learn: learning from historical information or multiple learning episodes to improve the learning algorithm itself. Since early works that fed historical trajectories into a Recurrent Neural Network to flesh out task-level information (Wang et al., 2016; Duan et al.
, 2016), various meta-learning methods have been proposed to strengthen agent performance. Houthooft et al. (2018); Kirsch et al. (2020); Zhou et al. (2020) used meta-learning to learn a loss function, rather than designing one by hand, to improve the agent's performance on single or multiple tasks. Gupta et al. (2018); Stadie et al. (2018); Xu et al. (2018a) employed meta-learning to learn exploration instead of relying on traditional exploration methods. Finn et al. (2017) meta-learned a model initialization that can be quickly adapted to different tasks. Xu et al. (2018b) improved the agent's performance by meta-learning discount factors. Rakelly et al. (2019) and Fakoor et al. (2020) treat meta information as an unobservable state of a Partially Observable Markov Decision Process, further improving the agent's performance in multi-task learning. Although the attention mechanism may not have an explicit meta-learning objective, it can also be considered a kind of meta-learning method. Bi-level Optimization Generally, meta-learning can be formalized as a bi-level optimization (BLO) problem. However, solving BLO problems is often challenging. Franceschi et al. (2018) proposed a framework for approximating the solution of BLO problems using gradient methods. Since then, many works that optimize a meta-learner by gradient methods have proved the feasibility of this approach, such as Li et al. (2019); Finn & Levine (2018); Lian et al. (2020); Flennerhag et al. (2020). For the RL problem, Zhou et al. (2020) optimize a meta critic, the upper level of a BLO, as an intrinsic motivation by a gradient method. Kirsch et al. (2020) enable a population of agents to use and improve a single parameterized objective function through gradient learning on different tasks. Rajeswaran et al.
(2019) proposed an implicit MAML algorithm by drawing upon implicit differentiation, which effectively decouples the meta-gradient computation from the choice of inner-loop optimizer. Liu et al. (2019) proposed a surrogate objective function, TMAML, which incorporates control variates into gradient estimation through automatic differentiation and improves the quality of the gradient estimate by reducing its variance without introducing bias. 3 METHODOLOGY . 3.1 OFF-POLICY ACTOR-CRITIC . In general, the reinforcement learning task can be cast as finding an optimal policy in a Markov Decision Process (MDP). The MDP is defined by a tuple $(S, A, P, R)$, where $S$ is a set of states, $A$ is a set of actions, $P$ gives the probability of transitioning from a state $s$ to $s'$ under an action $a$, and $R : S \times A \to \mathbb{R}$ is a scalar reward function. In the Actor-Critic framework, the policy network (Actor) $\pi_\phi(s)$ and the value network (Critic) $Q_\theta(s, a)$ are each parameterized by a neural network. At each time $t$, the agent receives an observation $s_t$ and takes an action $a_t$ based on its policy $\pi : S \to A$, then receives a reward $r_t$ and a new state $s_{t+1}$. The tuple $(s_t, a_t, r_t, s_{t+1})$ describes a state transition and is stored in a replay buffer $D$ for off-policy learning. The objective of RL is to find the optimal policy $\pi_\phi$ that maximizes the expected cumulative return $J$:

$$J(\phi) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\Big|\, a_t \sim \pi_\phi(\cdot \mid s_t)\Big] \qquad (1)$$

where $\gamma$ is the discount factor. $J(\phi)$ can also be written as the expected value of the Q-function:

$$J(\phi) = \mathbb{E}_{s \sim p_\pi}\big[Q_\theta(s, a)\big]\big|_{a \sim \pi_\phi(\cdot \mid s)}, \qquad (2)$$

where the Q-function is the expected discounted sum of rewards after visiting state $s$ and executing action $a$, and $p_\pi$ is the state distribution induced by policy $\pi$.
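The discounted return in Eq. (1) can be sketched for a single sampled trajectory. The rewards and discount factor below are made-up illustration values, not from the paper:

```python
# Sketch of the return in Eq. (1) evaluated on one sampled trajectory.
# The reward list and gamma are hypothetical illustration values.
def discounted_return(rewards, gamma=0.99):
    """Sum of gamma^t * r_t along a single trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 0.0, 2.0], gamma=0.5))  # 1.0 + 0.0 + 0.5 = 1.5
```

In practice $J(\phi)$ is the expectation of this quantity over trajectories induced by $\pi_\phi$, which is why Eq. (2) replaces the explicit sum with the learned Q-function.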
For off-policy Actor-Critic architectures such as DDPG, TD3 and SAC, the actor loss provided by the critic may differ, but the Q-function is learned in the same way, by minimizing the loss

$$L^{Critic}_\theta = \mathbb{E}_{s_t \sim p_\pi,\, a_t \sim \pi_\phi}\big[(Q_\theta(s_t, a_t) - y_t)^2\big], \quad y_t = r_t + \gamma Q_{\theta'}(s_{t+1}, \pi_{\phi'}(s_{t+1})), \qquad (3)$$

where $\phi'$ and $\theta'$ denote the target networks of the actor and the critic, respectively. The actor loss usually differs in its details between algorithms; however, it always takes the following form:

$$L^{Actor}_\phi = -J(\phi) = -\mathbb{E}_{s \sim p_\pi}\big[Q_\theta(s, a)\big]\big|_{a = \pi_\phi(s)} \qquad (4)$$

3.2 META ATTENTION METHOD . The attention mechanism has been widely used in image recognition and natural language processing since it models the human pattern recognition process: it allocates attention to the important parts of the information while automatically ignoring low-value features. Following Vaswani et al. (2017), an attention function can be described as mapping a query and a set of key-value pairs to an output, where the output is a weighted sum of the values, with the weight assigned to each value computed by a compatibility function of the query with the corresponding key:

$$\mathrm{Attention}(Query, Key, Value) = \sum_i \mathrm{Similarity}(Query_i, Key_i) \cdot Value_i$$

To introduce the attention mechanism into the Actor-Critic framework, we first split the actor and critic networks into two parts. We take the last layer of the actor network $\hat\pi$ as an action net that produces actions, and the rest as an actor feature net $\bar\pi(s)$ that extracts features, so the entire policy network is denoted $\pi_\phi(s) = \hat\pi(\bar\pi(s))$. Similarly, the value network is divided into a critic net $\hat{Q}$ and a critic feature net $\bar{Q}(s, a)$, and the whole value network can be expressed as $Q_\theta(s, a) = \hat{Q}(\bar{Q}(s, a))$.
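The shared critic objective of Eq. (3) can be sketched with scalar stand-ins for the networks; every value below is hypothetical, and the real target $y_t$ would come from the target networks $Q_{\theta'}$ and $\pi_{\phi'}$:

```python
import numpy as np

# Numpy sketch of the critic target and loss in Eq. (3).
# Scalars stand in for the networks; all values are hypothetical.
def td_target(r, q_target_next, gamma=0.99):
    # y_t = r_t + gamma * Q_theta'(s_{t+1}, pi_phi'(s_{t+1}))
    return r + gamma * q_target_next

def critic_loss(q_pred, y):
    # squared Bellman errors averaged over a batch of transitions
    return float(np.mean((np.asarray(q_pred) - y) ** 2))

y = td_target(r=1.0, q_target_next=2.0, gamma=0.9)  # = 1.0 + 0.9 * 2.0
print(critic_loss([y, y + 0.2], y))                 # small positive MSE
```

The same `critic_loss` shape applies to DDPG, TD3 and SAC; only how the target action and any entropy terms enter $y_t$ differs between the algorithms.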
We use the feature nets as encoders to obtain the Query, Key and Value:

$$Query = \bar\pi(s), \quad Key = \bar{Q}(s, \pi_\phi(s)), \quad Value = \bar\pi(s)$$

We feed the Query (actor features) and Key (critic features) into the meta attention network $f_\psi(Query, Key)$ (a three-layer MLP parameterized by $\psi$) to compute a similarity for each feature dimension. The final activation of $f_\psi$ is a sigmoid, and we multiply its output by 2 so that the resulting feature scale lies in $(0, 2)$ and can either suppress or amplify a given dimension. Using $\circ$ to denote the Hadamard product, we multiply the Value (actor features) elementwise by this scale to obtain the attention features $\bar\pi'(s)$ with some dimensions modified:

$$\mathrm{Attention\ Features} = \bar\pi'_\psi(s) = 2\, f_\psi\big(\bar\pi(s), \bar{Q}(s, \pi_\phi(s))\big) \circ \bar\pi(s) \qquad (5)$$

This calculation corresponds to the green dotted box in Figure 1 (b). Two critical problems remain in the optimization process: 1) how to affect the agent's decision-making and training process through the attention features; and 2) how to optimize the meta attention network so that it correctly matches the Query (actor features) with the Key (critic features) and generates proper feature scales. To address these issues, we formalize the entire optimization process as a bi-level optimization problem, with the meta attention method as the outer level and the task performed by the agent as the inner level:

$$\psi^* = \arg\min_\psi L^{Attention}_\psi(D; \phi^*; \psi) \qquad (6)$$

$$\text{s.t.} \quad \phi^* = \arg\min_\phi \big[ L^{Actor}_\phi(D; \phi \mid a \sim \pi_\phi(s)) + L^{Actor}_\phi(D; \phi; \psi \mid a \sim \hat\pi(\bar\pi'_\psi(s))) \big], \qquad (7)$$

where $L^{Attention}_\psi$ is the meta optimization objective and $L^{Actor}_\phi$ is the actor loss of the form of Eq. (4).
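The feature-gating step of Eq. (5) can be sketched in numpy. Here `mlp` is a toy linear stand-in for the paper's three-layer meta attention network $f_\psi$, and all shapes and values are hypothetical:

```python
import numpy as np

# Sketch of Eq. (5): a sigmoid output scaled into (0, 2) gates each actor
# feature dimension. mlp() is a toy linear stand-in for f_psi.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_features(actor_feat, critic_feat, mlp):
    logits = mlp(np.concatenate([actor_feat, critic_feat]))
    scale = 2.0 * sigmoid(logits)   # in (0, 2): suppress (<1) or amplify (>1)
    return scale * actor_feat       # Hadamard product with the Value

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))         # hypothetical single linear layer as "MLP"
actor_feat = rng.normal(size=4)     # Query and Value
critic_feat = rng.normal(size=4)    # Key
out = attention_features(actor_feat, critic_feat, lambda x: W @ x)
print(out.shape)                    # (4,)
```

The scale is strictly inside $(0, 2)$, so no feature dimension is zeroed out or amplified without bound; a scale of exactly 1 would leave the actor features unchanged.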
For the first problem, we first perform a traditional back-propagation step driven by the actor loss $L^{Actor}_\phi|_{a = \pi_\phi(s)}$ on training data $d_{trn}$:

$$\phi_{old} = \phi - \eta \frac{\partial L^{Actor}_\phi(d_{trn} \mid a \sim \pi_\phi(s))}{\partial \phi}, \qquad (8)$$

where $\eta$ is the learning rate. This process corresponds to the upper part of Figure 1 (b) and the first step of meta-training in Algorithm 1. Then, we generate a new attention action $a = \hat\pi(\bar\pi'_\psi(s))$ from the attention features through the action net and feed it into the critic to obtain the loss $L^{Actor}_\phi(D; \phi; \psi)|_{a = \hat\pi(\bar\pi'_\psi(s))}$ for back-propagation:

$$\phi_{new} = \phi_{old} - \eta \frac{\partial L^{Actor}_{\phi_{old}}(d_{trn}; \psi \mid a \sim \hat\pi(\bar\pi'_\psi(s)))}{\partial \phi} \qquad (9)$$

This allows the agent to obtain better actions, relative to the original actions, simply by modifying the features and without increasing the batch size, strengthening the agent's learning of good actions and reducing the probability of producing poor actions after the gradient step. This process corresponds to the lower part of Figure 1 (b) and the second step of meta-training in Algorithm 1. For the second problem, we make two basic assumptions: 1) a good meta attention network will generate feature scales that yield higher-value actions, since it better associates the actor features with the critic features; and 2) during back-propagation, high-value actions strengthen the actor more than low-value actions, because good actions reinforce the actor's tendency to make such choices, while bad actions cause the actor to try other actions.
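The two sequential updates of Eqs. (8)-(9) can be illustrated on a scalar "actor parameter" with a hand-written quadratic stand-in loss; the losses, gradients and learning rate below are hypothetical, not the actual RL objectives:

```python
# Toy illustration of the two sequential gradient steps in Eqs. (8)-(9).
# A scalar phi with gradient 2*phi (from a stand-in loss phi^2) replaces
# the actor network; eta is a made-up learning rate.
def sgd_step(phi, grad, eta=0.25):
    return phi - eta * grad

phi = 2.0
grad_plain = 2.0 * phi                  # gradient at a = pi_phi(s)
phi_old = sgd_step(phi, grad_plain)     # Eq. (8): ordinary actor step
grad_attn = 2.0 * phi_old               # gradient under the attention action
phi_new = sgd_step(phi_old, grad_attn)  # Eq. (9): second step from attention features
print(phi_old, phi_new)                 # 1.0 0.5
```

The point of the second step is that it reuses the same batch: the attention action $\hat\pi(\bar\pi'_\psi(s))$ provides an extra gradient signal without drawing additional samples.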
Furthermore, since the meta attention network participates in this back-propagation process, we use its utility in that process as the attention loss $L^{Attention}_\psi$, defined on validation data $d_{val}$ as:

$$L^{Attention}_\psi = \tanh\big( L^{Actor}_{\phi_{new}}(d_{val} \mid a \sim \pi_{\phi_{new}}(s)) - L^{Actor}_{\phi_{old}}(d_{val} \mid a \sim \pi_{\phi_{old}}(s)) \big) \qquad (10)$$

Under this definition, gradient descent updates ensure that the meta attention is always updated in the direction that improves the agent's performance. This process corresponds to the blue line in Figure 1 (b) and the meta-test in Algorithm 1.
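The role of the tanh in Eq. (10) is to bound the meta objective regardless of the raw loss scale, which can be sketched directly; the loss values below are hypothetical:

```python
import math

# Sketch of Eq. (10): the meta objective is the tanh of the change in
# validation actor loss, keeping the signal bounded in (-1, 1).
def attention_loss(loss_new, loss_old):
    return math.tanh(loss_new - loss_old)

print(attention_loss(0.5, 1.5))   # negative: phi_new improved over phi_old
print(attention_loss(10.0, 1.0))  # still below 1 even for a large regression
```

Minimizing this quantity therefore pushes $\psi$ toward attention networks whose second update (Eq. 9) lowers the validation actor loss, without letting any single large loss difference dominate the meta gradient.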
This paper proposes an attention-based actor-critic agent. The authors parameterise the actor and the critic with two separate neural networks. Their algorithm operates in a two-stage fashion. In the first stage, the algorithm is a standard actor-critic that produces an action probability distribution and a prediction of the action-value function. In the second stage, the algorithm attends over the policy's and value function's hidden features. The output of the attention mechanism is a new feature representation that is fed into the policy's and value function's final layer, to produce an alternative action probability distribution and action-value function. The agent then takes the action with the highest predicted return. The authors evaluate their proposed algorithm under DDPG and SAC on standard continuous control tasks and demonstrate that it can improve performance.
SP:34b501a8bcf5fca41b166b7ed5dac94c108dcb0d
This paper proposes to modify the off-policy actor-critic framework by introducing an attention mechanism between the actor and critic. The attention mechanism is used to adjust the actor features (i.e. intermediate features generated by the actor neural network) for better action selection in continuous control tasks. The query is the actor feature, the key is the critic feature, and the output of the attention network is a similarity for each feature dimension. The new actor feature is the attention-weighted actor feature, and it is used to generate a new action in the actor network. Experiments demonstrate that the proposed attention mechanism between the actor and critic networks can improve the actor-critic algorithms DDPG, TD3, and SAC.
SP:34b501a8bcf5fca41b166b7ed5dac94c108dcb0d
Meta Attention For Off-Policy Actor-Critic
1 INTRODUCTION . Reinforcement Learning ( RL ) algorithms based on the Actor-Critic framework have achieved considerable success in many areas such as games , robot control , and planning . Compared with onpolicy methods , off-policy methods possess more efficient sampling since they do not require new samples to be collected for each gradient step and make better use of experience ( Haarnoja et al. , 2018a ) . However , even for off-policy methods , traditional reinforcement learning algorithms still have extremely low sample efficiency ( Yu , 2018 ) . Recently , meta-learning ( Hospedales et al. , 2020 ) has become topical as a paradigm to accelerate RL by learning an inductive bias from past experience . By learning aspects of the learning strategy , such as fast adaptation strategies ( Finn et al. , 2017 ) ( Rakelly et al. , 2019 ) , losses ( Zhou et al. , 2020 ) ( Bechtle et al. , 2020 ) , optimization strategies ( Duan et al. , 2016 ) , exploration strategies ( Gupta et al. , 2018 ) , hyperparameters ( Bechtle et al. , 2020 ) , and intrinsic rewards ( Zheng et al. , 2018 ) , meta-learning has significantly improved sample efficiency over standard RL algorithms . Improving sample efficiency through attention mechanism has also been proved to be very effective in image-based reinforcement learning ( Barati & Chen , 2019 ; Chen et al. , 2019 ) . The application of attention mechanism in multi-agent system ( Parnika et al. , 2021 ; Iqbal & Sha , 2019 ) and multi-object task ( Team et al. , 2021 ) also shows its powerful capabilities in information processing . However , in the existing works , attention mechanisms often have clear application scenarios , such as imagebased control or multiple sources of information ( multi-agent or multi-target ) . 
The effective combination of attention mechanisms with current algorithms in a state-based single-agent environment remains an open problem . In this paper , we propose a meta attention method based on the attention mechanism . In the human decision-making process , people often revise their judgments based on feedback and results in order to reach better decisions . Inspired by this process , we use meta attention to adapt the features generated by the policy network based on the evaluation of the value network . Our work differs from existing attention-based work in that our meta attention approach works entirely within the Actor-Critic framework and does not depend on a specific scenario . We formalize the meta-learning process as a bi-level optimization problem . Our approach can be flexibly combined with various algorithms by using meta attention as a meta-learner and optimizing it in the outer level . Unlike existing meta-learning methods , our meta attention approach can improve the performance of the agent through gradients in the training stage and obtain better actions by adjusting the features in the execution stage . We evaluated the proposed meta attention method on a series of continuous control tasks in Gym and Roboschool , including three 3D robot control tasks , two 2D control tasks , and one classic control task , based on DDPG ( Lillicrap et al. , 2016 ) , TD3 ( Fujimoto et al. , 2018 ) and SAC ( Haarnoja et al. , 2018b ) . We also discuss the changes and impact caused by modifying the actor features through meta attention . Experimental results show that our meta attention approach is not only effective in accelerating the learning progress of the agent in the training phase , but also improves the actions in the execution phase , further enhancing the performance of the agent . 2 RELATED WORK . 
Attention Mechanism Attention is a behavioral and cognitive process of selectively attending to a discrete aspect of information , whether subjective or objective , while ignoring other perceptible information ( de Santana Correia & Colombini , 2021 ) . Attention mechanisms are mainly applied in the fields of computer vision and natural language processing . Accordingly , although the implementation methods differ , the application of attention mechanisms in reinforcement learning has mainly focused on video games ( Wu et al. , 2021 ; Chen et al. , 2019 ; Barati & Chen , 2019 ; Mott et al. , 2019 ; Manchin et al. , 2019 ) . Other works such as Peng et al . ( 2020 ) proposed a dynamic attention model with a dynamic encoder-decoder architecture , which dynamically explores node features to efficiently exploit hidden structural information at different construction steps . Li et al . ( 2021 ) applied the attention mechanism to generate feature vectors and input them into the value and policy heads during the feature extraction phase of PPO and PPG . Jiang & Lu ( 2018 ) ; Iqbal & Sha ( 2019 ) ; Mao et al . ( 2019 ) employed a multi-head attention mechanism to make one agent selectively pay attention to information from other agents . Team et al . ( 2021 ) used the attention mechanism to match multiple goals with the hidden state of the current state to obtain goal-attention hidden states under different goals , which allows the agent to predict the expected return obtained by attending to a goal at the end of an episode . Meta Reinforcement Learning Meta-learning is most often understood as learning to learn : learning from historical information or multiple learning episodes to improve the learning algorithm itself . Since early works that fed historical trajectories into recurrent neural networks to flesh out task-level information ( Wang et al. , 2016 ; Duan et al. 
, 2016 ) , various meta-learning methods have been proposed to strengthen agent performance . Houthooft et al . ( 2018 ) ; Kirsch et al . ( 2020 ) ; Zhou et al . ( 2020 ) used meta-learning to learn a loss function , rather than designing one by hand , to improve the performance of the agent in single or multiple tasks . Gupta et al . ( 2018 ) ; Stadie et al . ( 2018 ) ; Xu et al . ( 2018a ) employed meta-learning to learn exploration instead of using traditional exploration methods . Finn et al . ( 2017 ) meta-learned a model initialization that can be quickly adapted to different tasks . Xu et al . ( 2018b ) improved the performance of the agent by meta-learning discount factors . Rakelly et al . ( 2019 ) ; Fakoor et al . ( 2020 ) treat meta information as an unobservable state of a Partially Observable Markov Decision Process , further improving the agent 's performance in multi-task learning . Although the attention mechanism may not have an explicit meta-learning objective , it can also be considered a kind of meta-learning method . Bi-level Optimization Generally , meta-learning can be formalized as a bi-level optimization ( BLO ) problem . However , solving BLO problems is often challenging . Franceschi et al . ( 2018 ) proposed a framework for approximating the solution of BLO problems using the gradient method . Since then , many works that optimize a meta-learner with gradient methods have demonstrated the feasibility of this approach , such as Li et al . ( 2019 ) ; Finn & Levine ( 2018 ) ; Lian et al . ( 2020 ) ; Flennerhag et al . ( 2020 ) . For the RL problem , Zhou et al . ( 2020 ) optimize the meta critic , the upper level of a BLO , as an intrinsic motivation via the gradient method . Kirsch et al . ( 2020 ) enable a population of agents to use and improve a single parameterized objective function through gradient learning on different tasks . Rajeswaran et al . 
( 2019 ) proposed an implicit MAML algorithm by drawing upon implicit differentiation , which effectively decouples the meta-gradient computation from the choice of inner-loop optimizer . Liu et al . ( 2019 ) proposed a surrogate objective function , TMAML , which incorporates control variates into gradient estimation through automatic differentiation and improves the quality of the gradient estimate by reducing its variance without introducing bias . 3 METHODOLOGY . 3.1 OFF-POLICY ACTOR-CRITIC . In general , the reinforcement learning task can be considered as finding the optimal policy in a Markov Decision Process ( MDP ) . The MDP is defined by a tuple ( S , A , P , R ) , where S is a set of states , A is a set of actions , P ( s′ | s , a ) is the probability of transitioning from state s to s′ under action a , and R : S × A → R is a scalar reward function . In the Actor-Critic framework , the policy network ( Actor ) πφ ( s ) and the value network ( Critic ) Qθ ( s , a ) are each parameterized by a neural network . At each time t , the agent receives an observation st and takes an action at based on its policy π : S → A , then receives a reward rt and a new state st+1 . The tuple ( st , at , rt , st+1 ) describes a state transition and is stored in a replay buffer D for off-policy learning . The objective of RL is to find the optimal policy πφ that maximizes the expected cumulative return J : J ( φ ) = E [ ∑_{t=0}^∞ γ^t R ( st , at ) | at ∼ πφ ( · | st ) ] ( 1 ) where γ is the discount factor . J ( φ ) can also be written as the expected value of the Q-function : J ( φ ) = E_{s∼pπ} Qθ ( s , a ) |_{a∼πφ ( ·|s ) } , ( 2 ) where the Q-function is the expected discounted sum of rewards following visitation of state s and execution of action a , and pπ is the state distribution induced by policy π . 
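As a concrete check on the objective in Eq. (1), here is a minimal pure-Python sketch of the discounted return of one sampled trajectory; the rewards, discount factor, and state/action labels are illustrative, not from the paper.

```python
# Toy sketch of the objective in Eq. (1): the discounted return of one
# sampled trajectory. The rewards and discount below are illustrative.

def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * r_t for a single trajectory."""
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total

# A transition tuple (s_t, a_t, r_t, s_{t+1}) as it would be stored in the
# replay buffer D for off-policy learning (state labels are placeholders).
transition = ("s0", "a0", 1.0, "s1")

ret = discounted_return([1.0, 0.0, 1.0], gamma=0.5)  # 1 + 0 + 0.25 = 1.25
```

In practice the expectation in Eq. (1) is estimated by averaging such returns (or bootstrapped Q-values) over minibatches sampled from the replay buffer.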
For off-policy Actor-Critic architectures such as DDPG , TD3 and SAC , the actor loss provided by the critic may differ , but the Q-function is learned in the same way , by minimizing the loss : L^Critic_θ = E_{st∼pπ , at∼πφ} [ ( Qθ ( st , at ) − yt )^2 ] , yt = rt + γ Qθ′ ( st+1 , πφ′ ( st+1 ) ) , ( 3 ) where φ′ and θ′ denote the target networks for the actor and the critic , respectively . The actor loss usually differs in details across algorithms ; however , it always follows the form : L^Actor_φ = −J ( φ ) = − E_{s∼pπ} Qθ ( s , a ) |_{a=πφ ( s ) } ( 4 ) 3.2 META ATTENTION METHOD . The attention mechanism has been widely used in image recognition and natural language processing , since it models the human pattern recognition process : it allocates attention to the important parts of the information while automatically ignoring low-value features . Following ( Vaswani et al. , 2017 ) , an attention function can be described as mapping a query and a set of key-value pairs to an output , where the output is computed as a weighted sum of the values , with the weight assigned to each value computed by a compatibility function of the query with the corresponding key : Attention ( Query , Key , Value ) = ∑_i Similarity ( Query_i , Key_i ) · Value_i . To introduce the attention mechanism into the Actor-Critic framework , we first split the actor and critic networks into two parts . We take the last layer of the actor network π̂ as an action net that produces actions and the rest as an actor feature net π̄ ( s ) that extracts features , with the entire policy network denoted as πφ ( s ) = π̂ ( π̄ ( s ) ) . Similarly , the value network can be divided into a critic net Q̂ and a critic feature net Q̄ ( s , a ) , and the whole value network can be expressed as Qθ ( s , a ) = Q̂ ( Q̄ ( s , a ) ) . 
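The actor/critic split described above (all layers but the last as a feature net, the last layer as a head) can be sketched as follows; the "layers" here are toy callables standing in for real network layers, so the numbers are purely illustrative.

```python
# Sketch of splitting a policy network into an actor feature net (all
# layers but the last) and an action net (the last layer), mirroring
# pi_phi(s) = pi_hat(pi_bar(s)). Layers are toy callables, not a real
# deep-learning framework.

def make_net(layers):
    """Compose a list of single-argument layer functions."""
    def net(x):
        for layer in layers:
            x = layer(x)
        return x
    return net

layers = [lambda x: 2.0 * x, lambda x: x + 1.0, lambda x: -x]  # toy layers
actor_feature_net = make_net(layers[:-1])   # pi_bar: extracts features
action_net = layers[-1]                     # pi_hat: last layer -> action

def policy(s):
    # pi_phi(s) = pi_hat(pi_bar(s))
    return action_net(actor_feature_net(s))

a = policy(1.5)   # -(2*1.5 + 1) = -4.0
```

The critic is split the same way, which is what lets the method tap intermediate features (π̄(s) and Q̄(s, a)) rather than only final outputs.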
We use the feature nets as encoders to obtain the Query , Key and Value as follows : Query = π̄ ( s ) , Key = Q̄ ( s , πφ ( s ) ) , Value = π̄ ( s ) . We input the Query ( actor features ) and Key ( critic features ) into the meta attention network fψ ( Query , Key ) ( a three-layer MLP parameterized by ψ ) to calculate the similarity of each feature dimension . To enhance or suppress the features in specific dimensions , we multiply the output of the sigmoid function by 2 to obtain the feature scale . In this paper , we use ◦ to denote the Hadamard product ; by taking the Hadamard product of the Value ( actor features ) and the scale , we obtain the attention features π̄′ ( s ) with some dimensions modified : Attention Features = π̄′_ψ ( s ) = 2 fψ ( π̄ ( s ) , Q̄ ( s , πφ ( s ) ) ) ◦ π̄ ( s ) ( 5 ) This calculation corresponds to the part in the green dotted box in Figure 1 ( b ) . Two critical problems remain in the optimization process : 1 ) how to affect the agent 's decision-making and training process through the attention features ; and 2 ) how to optimize the meta attention network so that it correctly matches the Query ( actor features ) and Key ( critic features ) and generates proper feature scales . To address these issues , we formalize the entire optimization process as a bi-level optimization problem , with the meta attention method as the outer level and the task performed by the agent as the inner level : ψ∗ = arg min_ψ L^Attention_ψ ( D ; φ∗ ; ψ ) ( 6 ) s.t . φ∗ = arg min_φ [ L^Actor_φ ( D ; φ | a ∼ πφ ( s ) ) + L^Actor_φ ( D ; φ ; ψ | a ∼ π̂ ( π̄′_ψ ( s ) ) ) ] , ( 7 ) where L^Attention_ψ is the meta optimization objective , and L^Actor_φ is the actor loss in the form of Eq. ( 4 ) . 
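A minimal sketch of the feature-scaling step in Eq. (5). The per-dimension similarity function here is a hypothetical stand-in (a sum of query and key) for the paper's learned three-layer MLP f_ψ; only the 2·sigmoid(·) scaling and the Hadamard product follow the equation.

```python
import math

# Sketch of Eq. (5): attention features = 2*sigmoid(f_psi(Query, Key)) ∘ Value,
# where ∘ is the elementwise (Hadamard) product. f_psi is replaced by a toy
# per-dimension score (q + k); the real method uses a learned 3-layer MLP.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_features(actor_feat, critic_feat):
    # Scale lies in (0, 2): values > 1 enhance a dimension, < 1 suppress it.
    scales = [2.0 * sigmoid(q + k) for q, k in zip(actor_feat, critic_feat)]
    return [s * v for s, v in zip(scales, actor_feat)]

# With a zero score the scale is exactly 2*sigmoid(0) = 1: features unchanged.
unchanged = attention_features([0.7, -0.3], [-0.7, 0.3])
```

The factor of 2 is what lets the mechanism amplify as well as attenuate: a plain sigmoid scale would be confined to (0, 1) and could only shrink features.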
For the first problem , we first perform a traditional back-propagation step dominated by the actor loss L^Actor_φ |_{a=πφ ( s ) } on training data dtrn : φold = φ − η ∂L^Actor_φ ( dtrn | a ∼ πφ ( s ) ) / ∂φ , ( 8 ) where η is the learning rate . This process corresponds to the upper part of Figure 1 ( b ) and the first step of meta-training in Algorithm 1 . Then , we generate a new attention action a = π̂ ( π̄′_ψ ( s ) ) from the attention features through the action net and feed it into the critic to obtain the loss L^Actor_φ ( D ; φ ; ψ ) |_{a=π̂ ( π̄′_ψ ( s ) ) } for back-propagation : φnew = φold − η ∂L^Actor_φold ( dtrn ; ψ | a ∼ π̂ ( π̄′_ψ ( s ) ) ) / ∂φ ( 9 ) Compared with the original action , this allows the agent to obtain better actions simply by modifying the features , without increasing the batch size ; it strengthens the agent 's learning of good actions and reduces the probability of producing poor actions after the gradient step . This process corresponds to the lower part of Figure 1 ( b ) and the second step of meta-training in Algorithm 1 . For the second problem , we make two basic assumptions : 1 ) a good meta attention network will generate feature scales that produce actions with higher value , since it better captures the relationship between actor features and critic features ; 2 ) during back-propagation , high-value actions enhance the actor more than low-value actions , because good actions strengthen the actor 's tendency to make such choices , while bad actions cause the actor to try other actions . 
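The two sequential updates in Eq. (8)-(9) can be sketched with a scalar parameter and hand-supplied gradients in place of autograd; the gradient values below are made-up toy numbers, not outputs of any real actor loss.

```python
# Sketch of the two-step meta-training update: a standard actor-loss step
# (Eq. 8) followed by a step driven by the attention action (Eq. 9).
# "phi" is a scalar stand-in for the actor parameters; the gradients are
# hand-supplied toy numbers rather than true autograd derivatives.

ETA = 0.1  # learning rate eta

def gradient_step(phi, grad, lr=ETA):
    return phi - lr * grad

phi = 1.0
# Step 1 (Eq. 8): gradient of L^Actor on actions a ~ pi_phi(s)
phi_old = gradient_step(phi, grad=0.5)
# Step 2 (Eq. 9): gradient of L^Actor on attention actions a ~ pi_hat(pi_bar'(s))
phi_new = gradient_step(phi_old, grad=-0.2)
```

Note that both steps reuse the same minibatch d_trn, which is why the method can extract an extra update signal without increasing the batch size.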
Furthermore , since the meta attention network participates in the back-propagation process , we use its utility in this process as the attention loss L^Attention_ψ , defined on validation data dval as : L^Attention_ψ = tanh ( L^Actor_φnew ( dval | a ∼ πφnew ( s ) ) − L^Actor_φold ( dval | a ∼ πφold ( s ) ) ) ( 10 ) Under this definition , gradient descent updates ensure that the meta attention is always updated along the direction that improves the performance of the agent . This process corresponds to the blue-line part of Figure 1 ( b ) and the meta-test in Algorithm 1 .
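A minimal sketch of Eq. (10) with toy loss values: the attention loss is the tanh-squashed change in validation actor loss after the attention-assisted update.

```python
import math

# Sketch of Eq. (10): the attention loss is tanh of the change in actor
# loss on validation data. If the attention-assisted update lowered the
# actor loss (loss_new < loss_old), the attention loss is negative, so
# gradient descent on it reinforces the current attention parameters.

def attention_loss(loss_new, loss_old):
    return math.tanh(loss_new - loss_old)

helped = attention_loss(loss_new=0.4, loss_old=0.9)  # negative: attention helped
hurt = attention_loss(loss_new=1.2, loss_old=0.9)    # positive: attention hurt
```

The tanh keeps the meta objective bounded in (-1, 1), so a single very large change in actor loss cannot dominate the meta-gradient.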
The paper introduces an attention mechanism into actor-critic methods, and formulates RL as a bi-level optimization problem to learn the (meta) attention parameters. The attention mechanism appears model-agnostic and acts on the feature representations from the actor and critic models. Empirically, the proposed model shows improved performance over baseline methods.
Should I Run Offline Reinforcement Learning or Behavioral Cloning?
1 INTRODUCTION . Offline reinforcement learning ( RL ) algorithms aim to leverage large , existing datasets of previously collected data to produce effective policies that generalize across a wide range of scenarios , without the need for costly active data collection . Many recent offline RL algorithms ( Fujimoto et al. , 2018 ; Kumar et al. , 2019 ; Wu et al. , 2019 ; Kumar et al. , 2020 ; Yu et al. , 2020 ; Sinha et al. , 2021 ; Kostrikov et al. , 2021 ) can work well even with highly suboptimal data . With recent advances , the performance of offline RL algorithms has improved significantly , and a number of these approaches have been studied theoretically ( Wang et al. , 2021 ; Zanette , 2020 ; Rashidinejad et al. , 2021 ) . While it is clear that offline RL algorithms are a good choice when the available data is either random or highly suboptimal , it is less clear whether these methods should even be tried when the datasets consist of expert or near-expert demonstrations . In these cases , imitation learning algorithms , such as behavioral cloning ( BC ) , can be used to train policies via supervised learning . It then seems natural to ask : When should we prefer offline RL over imitation learning ? To our knowledge , there has not been a rigorous characterization of when offline RL performs better than imitation learning . Existing empirical studies comparing offline RL to imitation learning have mixed conclusions . Some works show that offline RL methods appear to greatly outperform imitation learning , specifically in environments that require “ stitching ” parts of suboptimal trajectories ( Fu et al. , 2020 ) . In contrast , a number of recent works have argued that BC performs better than offline RL on both expert and suboptimal demonstration data over a variety of tasks ( Mandlekar et al. , 2021 ; Florence et al. , 2021 ; Hahn et al. , 2021 ) . 
This makes it confusing for practitioners to understand whether to use offline RL or simply run BC on collected demonstrations . Thus , in this work we aim to understand whether there are conditions on the environment or the dataset under which an offline RL algorithm might outperform BC for a given task , even when BC is provided with expert data or is allowed to use reward as side information . Our insights can inform a practitioner with sufficient domain knowledge on whether offline RL is a good idea , even when expert or near-expert data is available and BC is considered to be a natural choice . Our contribution in this paper is a theoretical and empirical characterization of certain conditions under which offline RL can outperform BC . Theoretically , we are the first to derive conditions on the environment or offline dataset where offline RL achieves better worst-case guarantees than even the best-case lower bound for BC using expert demonstrations . These conditions are grounded in practical problems , and provide guidance to the practitioner as to whether they should use RL or BC . Concretely , we show that in the case of expert data , the error incurred by offline RL algorithms can scale significantly more favorably when the MDP enjoys some structure , including horizon-independent returns ( i.e. , sparse rewards ) or a low volume of states where it is “ critical ” to take the same action as the expert ( Section 4.2 ) . Meanwhile , in the case of sufficiently noisy data , we show that offline RL again enjoys better guarantees on long-horizon tasks ( Section 4.3 ) . Finally , since traditional BC methods ignore rewards , we consider generalized BC methods that use the observed rewards to inform learning , and show that it is still preferable to perform offline RL ( Section 4.4 ) . Empirically , we validate our theoretical conclusions on diagnostic gridworld domains ( Fu et al. 
, 2019 ) and large-scale benchmark problems in robotic manipulation , navigation , and Atari games , using human data ( Fu et al. , 2020 ) , scripted data ( Singh et al. , 2020 ) and data generated from RL policies ( Agarwal et al. , 2020b ) . We verify that in multiple long-horizon problems where the conditions we propose are likely to be satisfied , practical offline RL methods can outperform BC and generalized BC methods . Using careful offline tuning practices , we show that it is possible for offline RL to outperform cloning an expert dataset for the same task , given equal amounts of data . 2 RELATED WORK . Offline RL ( Lange et al. , 2012 ; Levine et al. , 2020 ) has shown promise in domains such as robotic manipulation ( Kalashnikov et al. , 2018b ; Mandlekar et al. , 2020 ; Singh et al. , 2020 ; Kalashnikov et al. , 2021 ) , NLP ( Jaques et al. , 2020 ) and healthcare ( Shortreed et al. , 2011 ; Wang et al. , 2018 ) . The major challenge in offline RL is distribution shift ( Fujimoto et al. , 2018 ; Kumar et al. , 2019 ) , where the learned policy might execute out-of-distribution actions . Prior offline RL methods can broadly be characterized into two categories : ( 1 ) policy-constraint methods that regularize the learned policy to be “ close ” to the behavior policy either explicitly ( Fujimoto et al. , 2018 ; Kumar et al. , 2019 ; Liu et al. , 2020 ; Wu et al. , 2019 ; Fujimoto & Gu , 2021 ) or implicitly ( Siegel et al. , 2020 ; Peng et al. , 2019 ; Nair et al. , 2020 ) , or via importance sampling ( Liu et al. , 2019 ; Swaminathan & Joachims , 2015 ; Nachum et al. , 2019 ) , and ( 2 ) conservative methods that learn a lower-bound , or conservative , estimate of return and optimize the policy against it ( Kumar et al. , 2020 ; Kostrikov et al. , 2021 ; Kidambi et al. , 2020 ; Yu et al. , 2020 ; 2021 ) . Our goal is not to devise a new offline RL algorithm , but rather to understand when existing offline RL methods can outperform BC . 
When do offline RL methods outperform BC ? Rashidinejad et al . ( 2021 ) derive a conservative offline RL algorithm based on lower-confidence bounds ( LCB ) that provably outperforms BC in the simpler contextual bandits ( CB ) setting , but do not extend it to MDPs . While this CB result signals the possibility that offline RL can outperform BC in theory , this generalization is not trivial , as RL suffers from compounding errors ( Munos , 2003 ; 2005 ; Wang et al. , 2021 ) . Laroche et al . ( 2019 ) ; Nadjahi et al . ( 2019 ) ; Kumar et al . ( 2020 ) ; Liu et al . ( 2020 ) ; Xie et al . ( 2021a ) present safe policy improvement bounds expressed as improvements over the behavior policy , which imitation aims to recover , but these bounds do not clearly indicate when offline RL is better or worse . Empirically , Fu et al . ( 2020 ) show that offline RL considerably outperforms BC for tasks that require “ stitching ” trajectory segments to devise an optimal policy . In contrast , Mandlekar et al . ( 2021 ) ; Brandfonbrener et al . ( 2021 ) ; Chen et al . ( 2021 ) ; Hahn et al . ( 2021 ) suggest that BC or filtered BC using the top fraction of the data performs better on other tasks . While results on Adroit domains in D4RL ( Fu et al. , 2020 ) show that offline RL outperforms BC even on expert data , Florence et al . ( 2021 ) reported superior BC results , leaving the comparison unclear . Kurenkov & Kolesnikov ( 2022 ) emphasize the importance of the online evaluation budget for offline RL methods and show that BC is more favorable under a limited budget . We provide a characterization of certain scenarios where we would expect offline RL to be better than BC , and empirical results verifying that offline RL performs well on such problems , spanning robotics , navigation , and games ( Fu et al. , 2020 ; Singh et al. , 2020 ; Bellemare et al. , 2013 ) . Our theoretical analysis combines tools from a number of prior works . 
We analyze the total error incurred by RL via an error propagation analysis ( Munos , 2003 ; 2005 ; Farahmand et al. , 2010 ; Chen & Jiang , 2019 ; Xie & Jiang , 2020 ; Liu et al. , 2020 ) , which gives rise to bounds with concentrability coefficients that bound the total distributional shift between the learned policy and the data distribution ( Xie & Jiang , 2020 ; Liu et al. , 2020 ) . We use tools from Ren et al . ( 2021 ) , which provide horizon-free bounds for standard ( non-conservative ) offline Q-learning , but relax their strict coverage assumptions . While our analysis studies an LCB-style algorithm similar to Rashidinejad et al . ( 2021 ) , we modify it to use tighter Bernstein bonuses ( Zhang et al. , 2021 ; Agarwal et al. , 2020a ) , which is key to improving its suboptimality guarantee . Xie et al . ( 2021b ) consider a similar algorithm with Bernstein bonuses , but take a different approach to analyzing it and use it for policy finetuning . 3 PROBLEM SETUP AND PRELIMINARIES . The goal in reinforcement learning is to learn a policy π ( ·|s ) that maximizes the expected cumulative discounted reward in a Markov decision process ( MDP ) , which is defined by a tuple ( S , A , P , r , γ ) . S , A represent state and action spaces , P ( s′|s , a ) and r ( s , a ) represent the dynamics and mean reward function , and γ ∈ ( 0 , 1 ) represents the discount factor . The effective horizon of the MDP is given by H = 1/ ( 1 − γ ) . The Q-function Qπ ( s , a ) for a given policy π is equal to the discounted long-term reward attained by executing a at the state s and then following policy π thereafter . Qπ satisfies the recursion : ∀ s , a ∈ S × A , Qπ ( s , a ) = r ( s , a ) + γ E_{s′∼P ( ·|s , a ) , a′∼π ( ·|s′ ) } [ Qπ ( s′ , a′ ) ] . The value function V π takes the expectation of the Q-function over the policy : V π ( s ) = E_{a∼π ( ·|s ) } [ Qπ ( s , a ) ] . 
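The Q-function recursion above has a simple closed form in a one-state MDP, which makes the effective horizon H = 1/(1−γ) concrete; a tabular sketch with an illustrative γ:

```python
# In an MDP with a single state and constant reward r, the recursion
# Q = r + gamma * Q has the fixed point Q = r / (1 - gamma) = r * H,
# where H = 1/(1 - gamma) is the effective horizon. Iterating the
# Bellman backup converges to this value.

def evaluate_constant_mdp(r, gamma, iters=5000):
    q = 0.0
    for _ in range(iters):
        q = r + gamma * q  # Bellman backup with one state and one action
    return q

gamma = 0.99
H = 1.0 / (1.0 - gamma)                 # effective horizon, here 100
q = evaluate_constant_mdp(1.0, gamma)   # approaches r * H = 100
```

This is why bounds that scale polynomially in H degrade on long-horizon (high-γ) tasks, the regime the paper's horizon-free analysis targets.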
Meanwhile , the Q-function of the optimal policy , Q∗ , satisfies the recursion : Q∗ ( s , a ) = r ( s , a ) + E_{s′∼P ( ·|s , a ) } [ max_{a′} Q∗ ( s′ , a′ ) ] , and the optimal value function is given by V ∗ ( s ) = max_a Q∗ ( s , a ) . Finally , the expected cumulative discounted reward is given by J ( π ) = E_{s0∼ρ} [ V π ( s0 ) ] . In offline RL , we are provided with a dataset D of transitions , D = { ( si , ai , ri , s′i ) }_{i=1}^N of size |D| = N . We assume that the dataset D is generated i.i.d . from a distribution µ ( s , a ) that specifies the effective behavior policy πβ ( a|s ) : = µ ( s , a ) / ∑_a µ ( s , a ) . Note that this holds even if the data itself is generated by running a non-Markovian policy πβ ( Puterman , 1994 ) . Let n ( s , a ) be the number of times ( s , a ) appears in D , and let P̂ ( ·|s , a ) and r̂ ( s , a ) denote the empirical dynamics and reward distributions in D , which may differ from P and r due to stochasticity . Following Rashidinejad et al . ( 2021 ) , the goal is to minimize the suboptimality of the learned policy π̂ : SubOpt ( π̂ ) = E_{D∼µ} [ J ( π∗ ) − J ( π̂ ) ] = E_D [ E_{s0∼ρ} [ V ∗ ( s0 ) − V π̂ ( s0 ) ] ] . ( 1 ) Dataset and MDP conditions . Here we introduce some conditions on the offline dataset and MDP structure that we make for our analysis . The first characterizes the distribution shift between the data distribution µ ( s , a ) and the normalized state-action marginal of π∗ , given by d∗ ( s , a ) = ( 1 − γ ) ∑_{t=0}^∞ γ^t P ( st = s , at = a ; π∗ ) , via a concentrability coefficient C∗ . Condition 3.1 ( Rashidinejad et al . ( 2021 ) , Concentrability of the data distribution ) . Define C∗ to be the smallest , finite constant that satisfies : d∗ ( s , a ) /µ ( s , a ) ≤ C∗ ∀ s ∈ S , a ∈ A . Intuitively , the coefficient C∗ scales with how suboptimal the data µ ( s , a ) is relative to the optimal π∗ , where C∗ = 1 corresponds to data from π∗ . 
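Condition 3.1 amounts to taking the maximum density ratio d*(s,a)/µ(s,a) over state-action pairs; a toy sketch with made-up distributions over two pairs:

```python
# Sketch of Condition 3.1: C* is the smallest constant with
# d*(s,a) / mu(s,a) <= C* for all (s,a), i.e. the maximum ratio between
# the optimal policy's state-action marginal and the data distribution.
# Both distributions below are made up for illustration.

def concentrability(d_star, mu):
    return max(d_star[sa] / mu[sa] for sa in d_star)

d_star = {("s0", "a0"): 0.8, ("s0", "a1"): 0.2}   # optimal marginal d*
mu_opt = dict(d_star)                              # data drawn from pi* itself
mu_bad = {("s0", "a0"): 0.5, ("s0", "a1"): 0.5}   # more uniform data

C_opt = concentrability(d_star, mu_opt)   # 1.0: data matches pi*
C_bad = concentrability(d_star, mu_bad)   # 1.6: mild distribution shift
```

As the sketch illustrates, C* = 1 exactly when the data distribution coincides with d*, and it grows as the data covers the optimal policy's states and actions more thinly.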
The next condition we consider is that the discounted return for any trajectory in the MDP is bounded by a constant , which w.l.o.g . we assume to be 1 . Condition 3.2 ( Ren et al . ( 2021 ) , the value of any trajectory is bounded by 1 ) . The infinite-horizon discounted return for any trajectory τ = ( s0 , a0 , r0 , s1 , · · · ) is bounded as ∑_{t=0}^∞ γ^t rt ≤ 1 . This condition holds in sparse-reward tasks , particularly those where an agent succeeds or fails at its task once per episode . This is common in domains such as robotics ( Singh et al. , 2020 ; Kalashnikov et al. , 2018b ) and games ( Bellemare et al. , 2013 ) , where the agent receives a signal upon succeeding at a task or winning . This condition also appears in prior work deriving suboptimality bounds for RL algorithms ( Ren et al. , 2021 ; Zhang et al. , 2021 ) . Notation . Let n ∧ 1 = max { n , 1 } . Denote ι = polylog ( |S| , H , N ) . We let ι be a polylogarithmic quantity , changing with context . For d-dimensional vectors x , y , x ( i ) denotes the i-th entry , and define V ( x , y ) = ∑_i x ( i ) y ( i )^2 − ( ∑_i x ( i ) y ( i ) )^2 .
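The quantity V(x, y) in the notation paragraph is a variance: for a probability vector x, it is the second moment of y under x minus the squared mean. A small sketch with illustrative vectors:

```python
# Sketch of V(x, y) = sum_i x(i) y(i)^2 - (sum_i x(i) y(i))^2 from the
# notation paragraph. When x is a probability vector, this is exactly
# Var[y] under the distribution x, which is what Bernstein-style bonuses
# exploit. The vectors below are illustrative.

def V(x, y):
    mean = sum(xi * yi for xi, yi in zip(x, y))
    second_moment = sum(xi * yi * yi for xi, yi in zip(x, y))
    return second_moment - mean ** 2

v = V([0.5, 0.5], [0.0, 2.0])   # E[y^2] = 2, (E[y])^2 = 1, so variance = 1
```

A constant y gives V = 0, matching the intuition that deterministic next-state values need no variance correction.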
The paper considers a setting where we are given access to a dataset of expert or noisy-expert data collected from some MDP and need to decide whether to use either behavior cloning (BC) or offline RL. It conducts a theoretical analysis in a tabular setting showing that offline RL will recover a better policy than BC when the data is sufficiently suboptimal and has sufficient coverage. Finally it conducts an empirical analysis that confirms the results in some diagnostic gridworld tasks and shows that a tuned variant of the CQL offline RL algorithm outperforms BC in several larger-scale tasks.
Should I Run Offline Reinforcement Learning or Behavioral Cloning?
1 INTRODUCTION . Offline reinforcement learning ( RL ) algorithms aim to leverage large , existing datasets of previously collected data to produce effective policies that generalize across a wide range of scenarios , without the need for costly active data collection . Many recent offline RL algorithms ( Fujimoto et al. , 2018 ; Kumar et al. , 2019 ; Wu et al. , 2019 ; Kumar et al. , 2020 ; Yu et al. , 2020 ; Sinha et al. , 2021 ; Kostrikov et al. , 2021 ) can work well even highly suboptimal data . With recent advances , the performance of offline RL algorithms has improved significantly , and a number of these approaches have been studied theoretically ( Wang et al. , 2021 ; Zanette , 2020 ; Rashidinejad et al. , 2021 ) . While it is clear that offline RL algorithms are a good choice when the available data is either random or highly suboptimal , it is less clear if these methods should even be tried when the datasets consist of demonstration that come from expert or near-expert demonstrations . In these cases , imitation learning algorithms , such as behavorial cloning ( BC ) , can be used to train policies via supervised learning . It then seems natural to ask : When should we prefer to use offline RL over imitation learning ? To our knowledge , there has not been a rigorous characterization of when offline RL perform better than imitation learning . Existing empirical studies comparing offline RL to imitation learning have mixed conclusions . Some works show that offline RL methods appear to greatly outperform imitation learning , specifically in environments that require “ stitching ” parts of suboptimal trajectories ( Fu et al. , 2020 ) . In contrast , a number of recent works have argued that BC performs better than offline RL on both expert and suboptimal demonstration data over a variety of tasks ( Mandlekar et al. , 2021 ; Florence et al. , 2021 ; Hahn et al. , 2021 ) . 
This makes it confusing for practitioners to understand whether to use offline RL or simply run BC on collected demonstrations . Thus , in this work we aim to understand if there are conditions on the environment or the dataset under which an offline RL algorithm might outperform BC for a given task , even when BC is provided with expert data or is allowed to use reward as side information . Our insights can inform a practitioner with sufficient domain knowledge on whether offline RL is a good idea , even when expert or near-expert data is available and BC is considered to be a natural choice . Our contribution in this paper is a theoretical and empirical characterization of certain conditions when offline RL can outperform BC . Theoretically , we are the first to derive conditions on the environment or offline dataset where offline RL achieves better worst-case guarantees than even the best-case lower-bound for BC using expert demonstrations . These conditions are grounded in practical problems , and provide guidance to the practitioner as to whether they should use RL or BC . Concretely , we show that in the case of expert data , the error incurred by offline RL algorithms can scale significantly more favorably when the MDP enjoys some structure including horizonindependent returns ( i.e. , sparse rewards ) , or a low volume of states where it is “ critical ” to take the same action as the expert ( Section 4.2 ) Meanwhile , in the case of sufficiently noisy data , we show that offline RL again enjoys better guarantees on long-horizon tasks ( Section 4.3 ) . Finally , since traditional BC methods ignore rewards , we consider generalized BC methods that use the observed rewards to inform learning , and show that it is still preferable to perform offline RL ( Section 4.4 ) . Empirically , we validate our theoretical conclusions on diagnostic gridworld domains ( Fu et al. 
, 2019 ) and large-scale benchmark problems in robotic manipulation and navigation and Atari games , using human data ( Fu et al. , 2020 ) , scripted data ( Singh et al. , 2020 ) and data generated from RL policies ( Agarwal et al. , 2020b ) . We verify that in multiple long-horizon problems where the conditions we propose are likely to be satisfied , practical offline RL methods can outperform BC and generalized BC methods . Using careful offline tuning practices , we show that it is possible for offline RL to outperform cloning an expert dataset for the same task , given equal amounts of data . 2 RELATED WORK . Offline RL ( Lange et al. , 2012 ; Levine et al. , 2020 ) has shown promise in domains such as robotic manipulation ( Kalashnikov et al. , 2018b ; Mandlekar et al. , 2020 ; Singh et al. , 2020 ; Kalashnikov et al. , 2021 ) , NLP ( Jaques et al. , 2020 ) and healthcare ( Shortreed et al. , 2011 ; Wang et al. , 2018 ) . The major challenge in offline RL is distribution shift ( Fujimoto et al. , 2018 ; Kumar et al. , 2019 ) , where the learned policy might execute out-of-distribution actions . Prior offline RL methods can broadly be characterized into two categories : ( 1 ) policy-constraint methods that regularize the learned policy to be “ close ” to the behavior policy either explicitly ( Fujimoto et al. , 2018 ; Kumar et al. , 2019 ; Liu et al. , 2020 ; Wu et al. , 2019 ; Fujimoto & Gu , 2021 ) or implicitly ( Siegel et al. , 2020 ; Peng et al. , 2019 ; Nair et al. , 2020 ) , or via importance sampling ( Liu et al. , 2019 ; Swaminathan & Joachims , 2015 ; Nachum et al. , 2019 ) , and ( 2 ) conservative methods that learn a lower-bound , or conservative , estimate of return and optimize the policy against it ( Kumar et al. , 2020 ; Kostrikov et al. , 2021 ; Kidambi et al. , 2020 ; Yu et al. , 2020 ; 2021 ) . Our goal is not to devise a new offline RL algorithm , but rather to understand when existing offline RL methods can outperform BC . 
When do offline RL methods outperform BC ? Rashidinejad et al . ( 2021 ) derive a conservative offline RL algorithm based on lower-confidence bounds ( LCB ) that provably outperforms BC in the simpler contextual bandits ( CB ) setting , but do not extend it to MDPs . While this CB result signals the possibility that offline RL can outperform BC in theory , this generalization is not trivial , as RL suffers from compounding errors ( Munos , 2003 ; 2005 ; Wang et al. , 2021 ) . Laroche et al . ( 2019 ) ; Nadjahi et al . ( 2019 ) ; Kumar et al . ( 2020 ) ; Liu et al . ( 2020 ) ; Xie et al . ( 2021a ) present safe policy improvement bounds expressed as improvements over the behavior policy , which imitation aims to recover , but these bounds do not clearly indicate when offline RL is better or worse . Empirically , Fu et al . ( 2020 ) show that offline RL considerably outperforms BC for tasks that require “ stitching ” trajectory segments to devise an optimal policy . In contrast , Mandlekar et al . ( 2021 ) ; Brandfonbrener et al . ( 2021 ) ; Chen et al . ( 2021 ) ; Hahn et al . ( 2021 ) suggest that BC or filtered BC using the top fraction of the data performs better on other tasks . While results on Adroit domains in D4RL ( Fu et al. , 2020 ) show that offline RL outperforms BC even on expert data , Florence et al . ( 2021 ) reported superior BC results , leaving the overall picture unclear . Kurenkov & Kolesnikov ( 2022 ) emphasize the importance of the online evaluation budget for offline RL methods and show that BC is more favorable under a limited budget . We provide a characterization of certain scenarios where we would expect offline RL to be better than BC , and empirical results verifying that offline RL performs well on such problems , spanning robotics , navigation , and games ( Fu et al. , 2020 ; Singh et al. , 2020 ; Bellemare et al. , 2013 ) . Our theoretical analysis combines tools from a number of prior works .
We analyze the total error incurred by RL via an error propagation analysis ( Munos , 2003 ; 2005 ; Farahmand et al. , 2010 ; Chen & Jiang , 2019 ; Xie & Jiang , 2020 ; Liu et al. , 2020 ) , which gives rise to bounds with concentrability coefficients that bound the total distributional shift between the learned policy and the data distribution ( Xie & Jiang , 2020 ; Liu et al. , 2020 ) . We use tools from Ren et al . ( 2021 ) , which provides horizon-free bounds for standard ( non-conservative ) offline Q-learning , while relaxing its strict coverage assumptions . While our analysis studies an LCB-style algorithm similar to Rashidinejad et al . ( 2021 ) , we modify it to use tighter Bernstein bonuses ( Zhang et al. , 2021 ; Agarwal et al. , 2020a ) , which is key to improving its suboptimality guarantee . Xie et al . ( 2021b ) consider a similar algorithm with Bernstein bonuses , but take a different approach to analyzing it and use it for policy finetuning . 3 PROBLEM SETUP AND PRELIMINARIES . The goal in reinforcement learning is to learn a policy π ( ·|s ) that maximizes the expected cumulative discounted reward in a Markov decision process ( MDP ) , which is defined by a tuple ( S , A , P , r , γ ) . S , A represent state and action spaces , P ( s′|s , a ) and r ( s , a ) represent the dynamics and mean reward function , and γ ∈ ( 0 , 1 ) represents the discount factor . The effective horizon of the MDP is given by H = 1/ ( 1− γ ) . The Q-function , Qπ ( s , a ) , for a given policy π is equal to the discounted long-term reward attained by executing a at the state s and then following policy π thereafter . Qπ satisfies the recursion : ∀s , a ∈ S×A , Qπ ( s , a ) = r ( s , a ) +γEs′∼P ( ·|s , a ) , a′∼π ( ·|s′ ) [ Qπ ( s′ , a′ ) ] . The value function V π considers the expectation of the Q-function over the policy : V π ( s ) = Ea∼π ( ·|s ) [ Qπ ( s , a ) ] .
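As a concrete ( toy ) illustration of the recursion above , the following numpy sketch performs tabular policy evaluation on a hypothetical 2-state MDP ; the MDP , policy , and discount factor are all invented for the example and are not from the paper .

```python
# Iterative policy evaluation for the tabular Bellman recursion
# Q^pi(s,a) = r(s,a) + gamma * E_{s'~P(.|s,a), a'~pi(.|s')}[Q^pi(s',a')].
import numpy as np

def policy_evaluation(P, r, pi, gamma=0.9, iters=500):
    """P: (S,A,S) transition tensor, r: (S,A) rewards, pi: (S,A) policy."""
    S, A = r.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = (pi * Q).sum(axis=1)   # V^pi(s) = E_{a~pi}[Q^pi(s,a)]
        Q = r + gamma * P @ V      # Bellman backup for Q^pi
    return Q, (pi * Q).sum(axis=1)

# Toy MDP: action 1 in state 0 yields reward 1 and moves to an
# absorbing zero-reward state 1.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = 1.0; P[0, 1, 1] = 1.0; P[1, :, 1] = 1.0
r = np.array([[0.0, 1.0], [0.0, 0.0]])
pi = np.array([[0.0, 1.0], [0.5, 0.5]])  # always take action 1 in state 0
Q, V = policy_evaluation(P, r, pi)
print(V[0])  # prints 1.0: one unit of reward, then zero forever
```

The same fixed-point computation underlies the error-propagation analyses cited above : approximation error in each backup compounds through the recursion .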
Meanwhile , the Q-function of the optimal policy , Q∗ , satisfies the recursion : Q∗ ( s , a ) = r ( s , a ) + Es′∼P ( ·|s , a ) [ maxa′ Q∗ ( s′ , a′ ) ] , and the optimal value function is given by V ∗ ( s ) = maxaQ∗ ( s , a ) . Finally , the expected cumulative discounted reward is given by J ( π ) = Es0∼ρ [ V π ( s0 ) ] . In offline RL , we are provided with a dataset D of transitions , D = { ( si , ai , ri , s′i ) } Ni=1 of size |D| = N . We assume that the dataset D is generated i.i.d . from a distribution µ ( s , a ) that specifies the effective behavior policy πβ ( a|s ) : = µ ( s , a ) / ∑ a µ ( s , a ) . Note that this holds even if the data itself is generated by running a non-Markovian policy πβ ( Puterman , 1994 ) . Let n ( s , a ) be the number of times ( s , a ) appears in D , and P̂ ( ·|s , a ) and r̂ ( s , a ) denote the empirical dynamics and reward distributions in D , which may be different from P and r due to stochasticity . Following Rashidinejad et al . ( 2021 ) , the goal is to minimize the suboptimality of the learned policy π̂ : SubOpt ( π̂ ) = ED∼µ [ J ( π∗ ) − J ( π̂ ) ] = ED [ Es0∼ρ [ V ∗ ( s0 ) − V π̂ ( s0 ) ] ] . ( 1 ) Dataset and MDP conditions . Here we introduce some conditions on the offline dataset and MDP structure that we make for our analysis . The first characterizes the distribution shift between the data distribution µ ( s , a ) and the normalized state-action marginal of π∗ , given by d∗ ( s , a ) = ( 1− γ ) ∑∞ t=0 γ tP ( st = s , at = a ; π∗ ) , via a concentrability coefficient C∗ . Condition 3.1 ( Rashidinejad et al . ( 2021 ) , Concentrability of the data distribution ) . Define C∗ to be the smallest , finite constant that satisfies : d∗ ( s , a ) /µ ( s , a ) ≤ C∗ ∀s ∈ S , a ∈ A . Intuitively , the coefficient C∗ scales with how suboptimal the data µ ( s , a ) is relative to the optimal π∗ , where C∗ = 1 corresponds to data from π∗ .
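In the tabular case , the coefficient of Condition 3.1 can be computed directly . Below is a small numpy sketch ; the distributions d∗ and µ are invented for illustration .

```python
# C* is the smallest constant with d*(s,a)/mu(s,a) <= C* for all (s,a).
import numpy as np

def concentrability(d_star, mu):
    # Only (s,a) pairs visited by pi* matter; mu must cover that support.
    mask = d_star > 0
    assert np.all(mu[mask] > 0), "mu must cover the support of d*"
    return float(np.max(d_star[mask] / mu[mask]))

d_star = np.array([[0.5, 0.0], [0.5, 0.0]])    # optimal policy's marginal
mu_expert = d_star.copy()                      # data from pi* itself
mu_noisy = np.array([[0.3, 0.2], [0.3, 0.2]])  # noisier behavior data

print(concentrability(d_star, mu_expert))  # prints 1.0 (expert data)
print(concentrability(d_star, mu_noisy))   # ~1.67 (> 1: distribution shift)
```

As the second call shows , spreading probability mass onto actions the optimal policy never takes inflates C∗ above 1 , matching the intuition stated after Condition 3.1 .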
The next condition we consider is that the discounted return for any trajectory in the MDP is bounded by a constant , which w.l.o.g. , we assume to be 1 . Condition 3.2 ( Ren et al . ( 2021 ) , the value of any trajectory is bounded by 1 ) . The infinite-horizon discounted return for any trajectory τ = ( s0 , a0 , r0 , s1 , · · · ) is bounded as ∑∞ t=0 γ trt ≤ 1 . This condition holds in sparse-reward tasks , particularly those where an agent succeeds or fails at its task once per episode . This is common in domains such as robotics ( Singh et al. , 2020 ; Kalashnikov et al. , 2018b ) and games ( Bellemare et al. , 2013 ) , where the agent receives a signal upon succeeding at a task or winning a game . This condition also appears in prior work deriving suboptimality bounds for RL algorithms ( Ren et al. , 2021 ; Zhang et al. , 2021 ) . Notation . Let n ∧ 1 = max { n , 1 } . Denote by ι a polylogarithmic quantity polylog ( |S| , H , N ) , which may change with context . For d-dimensional vectors x , y , x ( i ) denotes the i-th entry of x , and define V ( x , y ) = ∑ i x ( i ) y ( i ) 2 − ( ∑ i x ( i ) y ( i ) ) 2 .
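The quantity V ( x , y ) defined above is simply the variance of y under the distribution x . A quick numpy sanity check on made-up vectors :

```python
# V(x, y) = sum_i x(i) y(i)^2 - (sum_i x(i) y(i))^2, the variance of y
# when x is a probability vector over the entries of y.
import numpy as np

def V(x, y):
    return float(np.dot(x, y**2) - np.dot(x, y)**2)

x = np.array([0.25, 0.25, 0.5])   # a probability vector
y = np.array([1.0, 2.0, 4.0])
mean = np.dot(x, y)               # E_x[y] = 2.75

print(V(x, y))                            # prints 1.6875: E[y^2] - E[y]^2
print(float(np.dot(x, (y - mean)**2)))    # prints 1.6875: same central moment
```

Variance terms of this form are what the Bernstein bonuses mentioned in Section 2 exploit to tighten the confidence bounds .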
Offline RL approaches are of great interest because they potentially ease real-world application. But as the paper points out in its extensive related work, there are a lot of conflicting results in the literature when it comes to comparison with plain old behavior cloning. To this end, the paper functions as a fairly clear exposition of the current state of the offline RL literature and proposes a formalization that allows comparison between these approaches. This formalization leads to a better characterization of the conditions under which offline RL methods outperform behavior cloning (and of the importance of planning horizon and critical states). The paper performs experiments on a wide range of domains, which allows readers to make their own inferences about the conditions for offline RL success.
SP:e0f3760b57534bf1ea5bdd3135661779d6842510
Should I Run Offline Reinforcement Learning or Behavioral Cloning?
1 INTRODUCTION . Offline reinforcement learning ( RL ) algorithms aim to leverage large , existing datasets of previously collected data to produce effective policies that generalize across a wide range of scenarios , without the need for costly active data collection . Many recent offline RL algorithms ( Fujimoto et al. , 2018 ; Kumar et al. , 2019 ; Wu et al. , 2019 ; Kumar et al. , 2020 ; Yu et al. , 2020 ; Sinha et al. , 2021 ; Kostrikov et al. , 2021 ) can work well even with highly suboptimal data . With recent advances , the performance of offline RL algorithms has improved significantly , and a number of these approaches have been studied theoretically ( Wang et al. , 2021 ; Zanette , 2020 ; Rashidinejad et al. , 2021 ) . While it is clear that offline RL algorithms are a good choice when the available data is either random or highly suboptimal , it is less clear if these methods should even be tried when the dataset consists of demonstrations that come from expert or near-expert demonstrators . In these cases , imitation learning algorithms , such as behavioral cloning ( BC ) , can be used to train policies via supervised learning . It then seems natural to ask : when should we prefer to use offline RL over imitation learning ? To our knowledge , there has not been a rigorous characterization of when offline RL performs better than imitation learning . Existing empirical studies comparing offline RL to imitation learning have reached mixed conclusions . Some works show that offline RL methods appear to greatly outperform imitation learning , specifically in environments that require “ stitching ” parts of suboptimal trajectories ( Fu et al. , 2020 ) . In contrast , a number of recent works have argued that BC performs better than offline RL on both expert and suboptimal demonstration data over a variety of tasks ( Mandlekar et al. , 2021 ; Florence et al. , 2021 ; Hahn et al. , 2021 ) .
The paper provides an attempt to help practitioners answer the question "For what type of environments and datasets should we prefer offline RL over behavioural cloning?". To do so, the authors extend the previous work of [1], which studied this problem for contextual bandits, to MDPs. These theoretical results enable them to draw some conclusions on when we can expect offline RL to have an edge over behavioural cloning. To name a few: - When the dataset provided is not optimal, offline RL can scale more favourably with the horizon, especially for environments with horizon-independent returns or with a low volume of critical states, i.e. states where it is crucial to select the same action as the expert. - For long-horizon tasks again, offline RL trained on noisy data displaying higher coverage can outperform BC trained on the same amount of expert demonstrations. These theoretically grounded insights are first empirically validated on a tabular gridworld domain, after which the authors validate their findings on high-dimensional offline RL problems (continuous control and navigation, and some Atari games). On the tabular gridworld, the experiments strongly agree with the theoretical results. When turning to deep RL, the experiments mostly align, although the various tuning mechanisms introduced in previous work to train offline RL algorithms efficiently introduce a few discrepancies. 1. Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, and Stuart Russell. Bridging offline reinforcement learning and imitation learning: A tale of pessimism. *arXiv preprint arXiv:2103.12021*, 2021.
Backdoor Defense via Decoupling the Training Process
1 INTRODUCTION . Deep learning , especially deep neural networks ( DNNs ) , has been widely adopted in many realms ( Wang et al. , 2020b ; Li et al. , 2020a ; Wen et al. , 2020 ) for its high effectiveness . In general , the training of DNNs requires a large amount of training samples and computational resources . Accordingly , third-party resources ( e.g. , third-party data or servers ) are usually involved . While the opacity of the training process brings certain convenience , it also introduces new security threats . Backdoor attack poses a new security threat to the training process of DNNs ( Li et al. , 2020c ) . It maliciously manipulates the prediction of the attacked DNNs by poisoning a few training samples . Specifically , backdoor attackers inject the backdoor trigger ( i.e. , a particular pattern ) into some benign training images and change their labels to the attacker-specified target label . The connection between the backdoor trigger and the target label will be learned by DNNs during the training process . In the inference process , the prediction of attacked DNNs will be changed to the target label when the trigger is present , whereas the attacked DNNs will behave normally on benign samples . As such , it is difficult for users to realize the existence of hidden backdoors , and therefore this attack poses a serious threat to the practical applications of DNNs . In this paper , we first investigate backdoor attacks from the hidden feature space . Our preliminary experiments reveal that the backdoor is embedded in the feature space , i.e. , samples with the backdoor trigger ( dubbed poisoned samples ) tend to cluster together in the feature space . ∗The first two authors contributed equally to this work . This work was mostly done when Kunzhe Huang and Yiming Li were research interns at The Chinese University of Hong Kong , Shenzhen . † indicates corresponding authors : Baoyuan Wu ( wubaoyuan @ cuhk.edu.cn ) and Zhan Qin ( qinzhan @ zju.edu.cn ) .
We reveal that this phenomenon is mostly due to the end-to-end supervised training paradigm . Specifically , the excessive learning capability allows DNNs to learn features about the backdoor trigger , while the DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label by the end-to-end supervised training . Based on this understanding , we propose to decouple the end-to-end training process for the backdoor defense . Specifically , we treat the DNNs as two disjoint parts , including a feature extractor ( i.e. , backbone ) and a simple classifier ( i.e. , the remaining fully connected layers ) . We first learn the purified feature extractor via self-supervised learning ( Kolesnikov et al. , 2019 ; Chen et al. , 2020a ; Jing & Tian , 2020 ) with unlabeled training samples ( obtained by removing their labels ) , and then learn the simple classifier via the standard supervised training process based on the learned feature extractor and all training samples . The strong data augmentations involved in the self-supervised learning damage trigger patterns , making them unlearnable during representation learning ; and the decoupling process further disconnects trigger patterns and the target label . Accordingly , hidden backdoors can not be successfully created even when the model is trained on the poisoned dataset under our defense . Moreover , we further reveal that the representations of poisoned samples generated by the purified extractor are significantly different from those generated by an extractor learned with the standard training process . Specifically , a poisoned sample lies close to samples with its ground-truth label instead of the target label . This phenomenon makes the training of the simple classifier similar to label-noise learning ( Wang et al. , 2019b ; Ma et al. , 2020 ; Berthon et al. , 2021 ) . As such , we first filter high-credible training samples ( i.e.
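The decoupling idea above can be loosely sketched in numpy . This is a toy analogue , not the paper's DBD pipeline : PCA stands in for self-supervised representation learning ( stage 1 , no labels used ) , and a nearest-centroid classifier stands in for the simple supervised classifier ( stage 2 ) . All data below is synthetic .

```python
# Stage 1 learns a feature extractor without labels; stage 2 fits a
# simple classifier on the frozen features.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)

# Stage 1: label-free representation learning (PCA via SVD).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:2].T            # frozen 2-d "backbone" output

# Stage 2: simple supervised classifier (nearest class centroid) on top.
centroids = np.stack([features[y == c].mean(axis=0) for c in (0, 1)])
dists = ((features[:, None, :] - centroids) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
print((pred == y).mean())           # well-separated clusters -> high accuracy
```

The point of the split is that stage 1 never sees the ( possibly poisoned ) labels , so no label-dependent shortcut can be wired into the representation .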
, training samples that are most probably benign ) and then use those samples as labeled samples , treating the remaining part as unlabeled samples , to fine-tune the whole model via semi-supervised learning ( Rasmus et al. , 2015 ; Berthelot et al. , 2019 ; Sohn et al. , 2020 ) . This approach further reduces the adverse effects of poisoned samples . The main contributions of this paper are three-fold . ( 1 ) We reveal that the backdoor is embedded in the feature space , which is mostly due to the end-to-end supervised training paradigm . ( 2 ) Based on our understanding , we propose a decoupling-based backdoor defense ( DBD ) to alleviate the threat of poisoning-based backdoor attacks . ( 3 ) Experiments on classical benchmark datasets are conducted , which verify the effectiveness of our defense . 2 RELATED WORK . 2.1 BACKDOOR ATTACK . Backdoor attack is an emerging research area , which raises security concerns about training with third-party resources . In this paper , we focus on the poisoning-based backdoor attack on image classification , where attackers can only modify the dataset instead of other training components ( e.g. , training loss ) . This threat could also happen in other tasks ( Xiang et al. , 2021 ; Zhai et al. , 2021 ; Li et al. , 2022 ) and with different attacker capacities ( Nguyen & Tran , 2020 ; Tang et al. , 2020 ; Zeng et al. , 2021a ) , which are out of the scope of this paper . In general , existing attacks can be divided into two main categories based on the property of target labels , as follows : Poison-Label Backdoor Attack . It is currently the most common attack paradigm , where the target label is different from the ground-truth label of poisoned samples . BadNets ( Gu et al. , 2019 ) is the first and most representative poison-label attack .
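The filtering step described above ( selecting training samples that are most probably benign ) can be sketched with a simple small-loss heuristic ; the per-sample losses below are invented for illustration , not taken from the paper .

```python
# Rank training examples by their loss under the purified model and treat
# the low-loss fraction as labeled data for semi-supervised fine-tuning.
import numpy as np

def split_by_credibility(losses, keep_frac=0.5):
    k = int(len(losses) * keep_frac)
    order = np.argsort(losses)        # low loss = label likely consistent
    return order[:k], order[k:]       # (labeled indices, unlabeled indices)

# Hypothetical losses: poisoned samples tend to have high loss under the
# purified extractor, since their features match the ground-truth class.
losses = np.array([0.1, 2.3, 0.2, 1.9, 0.05, 0.15])
labeled, unlabeled = split_by_credibility(losses, keep_frac=0.5)
print(sorted(labeled.tolist()))   # prints [0, 4, 5]: the low-loss samples
```

The unlabeled split is then consumed , labels discarded , by the semi-supervised learner , so suspect labels never re-enter training directly .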
Specifically , it randomly selected a few samples from the original benign dataset to generate poisoned samples by stamping the backdoor trigger onto each ( benign ) image and changing its label to an attacker-specified target label . The generated poisoned samples , together with the remaining benign ones , were combined to form the poisoned training dataset , which will be delivered to users . After that , ( Chen et al. , 2017 ) suggested that the poisoned image should be similar to its benign version for stealthiness , based on which they proposed the blended attack . Recently , ( Xue et al. , 2020 ; Li et al. , 2020b ; 2021c ) further explored how to conduct poison-label backdoor attacks more stealthily . Most recently , a more stealthy and effective attack , WaNet ( Nguyen & Tran , 2021 ) , was proposed . WaNet adopted image warping as the backdoor trigger , which deforms but preserves the image content . Clean-Label Backdoor Attack . Although the poisoned image generated by poison-label attacks could be similar to its benign version , users may still notice the attack by examining the image-label relationship . To address this problem , Turner et al . ( 2019 ) proposed the clean-label attack paradigm , where the target label is consistent with the ground-truth label of poisoned samples . Specifically , they first leveraged adversarial perturbations or generative models to modify some benign images from the target class and then conducted the standard trigger injection process . This idea was generalized to attack video classification in ( Zhao et al. , 2020b ) , where they adopted the targeted universal adversarial perturbation ( Moosavi-Dezfooli et al. , 2017 ) as the trigger pattern . Although clean-label backdoor attacks are more stealthy compared with poison-label ones , they usually suffer from relatively poor performance and may even fail in creating backdoors ( Li et al. , 2020c ) . 2.2 BACKDOOR DEFENSE .
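The BadNets-style poisoning procedure described above can be sketched on toy arrays ( this is an illustration of the idea , not the original implementation ; image sizes , indices , and the patch value are invented ) :

```python
# Stamp a small trigger patch onto selected images and relabel them with
# the attacker's target class.
import numpy as np

def poison(images, labels, idx, target_label, patch_value=1.0, patch=3):
    images, labels = images.copy(), labels.copy()
    images[idx, -patch:, -patch:] = patch_value  # trigger in bottom-right
    labels[idx] = target_label                   # flip to the target label
    return images, labels

rng = np.random.default_rng(0)
imgs = rng.uniform(0, 1, (10, 8, 8))       # 10 toy 8x8 grayscale images
labels = rng.integers(0, 5, 10)
poisoned_imgs, poisoned_labels = poison(imgs, labels, idx=[1, 4],
                                        target_label=0)
print(poisoned_labels[1], poisoned_labels[4])  # prints 0 0: both relabeled
print(poisoned_imgs[1, -1, -1])                # prints 1.0: trigger stamped
```

A clean-label variant would keep `labels[idx]` unchanged and instead perturb images already belonging to the target class before stamping the trigger .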
Currently , there are also some approaches to alleviate the backdoor threat . Existing defenses are mostly empirical , and can be divided into five main categories , including ( 1 ) detection-based defenses ( Xu et al. , 2021 ; Zeng et al. , 2021a ; Xiang et al. , 2022 ) , ( 2 ) preprocessing based defenses ( Doan et al. , 2020 ; Li et al. , 2021b ; Zeng et al. , 2021b ) , ( 3 ) model reconstruction based defenses ( Zhao et al. , 2020a ; Li et al. , 2021a ; Zeng et al. , 2022 ) , ( 4 ) trigger synthesis based defenses ( Guo et al. , 2020 ; Dong et al. , 2021 ; Shen et al. , 2021 ) , and ( 5 ) poison suppression based defenses ( Du et al. , 2020 ; Borgnia et al. , 2021 ) . Specifically , detection-based defenses examine whether a suspicious DNN or sample is attacked and deny the use of malicious objects ; preprocessing based methods intend to damage trigger patterns contained in attack samples to prevent backdoor activation by introducing a preprocessing module before feeding images into DNNs ; model reconstruction based ones aim at removing the hidden backdoor in DNNs by modifying models directly ; the fourth type of defenses synthesize potential trigger patterns at first , followed by a second stage in which the hidden backdoor is eliminated by suppressing the effects of those triggers ; the last type of methods suppress the effectiveness of poisoned samples during the training process to prevent the creation of hidden backdoors . In general , our method is most relevant to this last type of defenses . In this paper , we only focus on the last four types of defenses since they directly improve the robustness of DNNs . Besides , there were also a few works focusing on certified backdoor defenses ( Wang et al. , 2020a ; Weber et al. , 2020 ) . Their robustness is theoretically guaranteed under certain assumptions , which causes these methods to be generally weaker than empirical ones in practice . 2.3 SEMI-SUPERVISED AND SELF-SUPERVISED LEARNING . Semi-supervised Learning .
In many real-world applications , the acquisition of labeled data often relies on manual labeling , which is very expensive . In contrast , obtaining unlabeled samples is much easier . To utilize the power of unlabeled samples together with labeled ones , a great number of semi-supervised learning methods have been proposed ( Gao et al. , 2017 ; Berthelot et al. , 2019 ; Van Engelen & Hoos , 2020 ) . Recently , semi-supervised learning was also introduced for improving the security of DNNs ( Stanforth et al. , 2019 ; Carmon et al. , 2019 ) , where unlabeled samples were utilized in adversarial training . Most recently , ( Yan et al. , 2021 ) discussed how to backdoor semi-supervised learning . However , this approach needs to control other training components ( e.g. , training loss ) in addition to modifying training samples and is therefore out of the scope of this paper . How to adopt semi-supervised learning for backdoor defense remains unexplored . Self-supervised Learning . This learning paradigm is a subset of unsupervised learning , where DNNs are trained with supervised signals generated from the data itself ( Chen et al. , 2020a ; Grill et al. , 2020 ; Liu et al. , 2021 ) . It has been adopted for increasing adversarial robustness ( Hendrycks et al. , 2019 ; Wu et al. , 2021 ; Shi et al. , 2021 ) . Most recently , there were also a few works ( Saha et al. , 2021 ; Carlini & Terzis , 2021 ; Jia et al. , 2021 ) exploring how to backdoor self-supervised learning . However , these attacks are out of the scope of this paper since they need to control other training components ( e.g. , training loss ) in addition to modifying training samples .
This paper shows that self-supervised contrastive learning can produce a feature extractor that scatters trigger-carrying training points in the feature space. Based on this observation, the authors propose a novel defense that combines contrastive learning with a decoupled (rather than end-to-end) training procedure to defend against backdoor attacks. They first train a feature extractor using self-supervised contrastive learning, which turns the poisoned data points into outliers in the feature space. Then they train a cascade classifier that ignores the poisoned data points by leveraging the fact that a neural network tends to capture frequent patterns. Experiments are conducted and the results verify the effectiveness of the defense.
SP:f6e3b8902793199afc205f8d6df15993eea5b992
Backdoor Defense via Decoupling the Training Process
1 INTRODUCTION. Deep learning, especially deep neural networks (DNNs), has been widely adopted in many realms (Wang et al., 2020b; Li et al., 2020a; Wen et al., 2020) for its high effectiveness. In general, training DNNs requires a large number of training samples and substantial computational resources, so third-party resources (e.g., third-party data or servers) are usually involved. While the opacity of the training process brings certain convenience, it also introduces new security threats. Backdoor attack poses a new security threat to the training process of DNNs (Li et al., 2020c). It maliciously manipulates the predictions of the attacked DNNs by poisoning a few training samples. Specifically, backdoor attackers inject a backdoor trigger (i.e., a particular pattern) into some benign training images and change their labels to an attacker-specified target label. The connection between the backdoor trigger and the target label is learned by the DNN during training. At inference time, the prediction of an attacked DNN changes to the target label whenever the trigger is present, whereas the attacked DNN behaves normally on benign samples. As such, it is difficult for users to notice the existence of hidden backdoors, which makes this attack a serious threat to practical applications of DNNs. In this paper, we first investigate backdoor attacks from the perspective of the hidden feature space. Our preliminary experiments reveal that the backdoor is embedded in the feature space, i.e., samples with the backdoor trigger (dubbed poisoned samples) tend to cluster together in the feature space. (∗The first two authors contributed equally to this work. This work was mostly done when Kunzhe Huang and Yiming Li were research interns at The Chinese University of Hong Kong, Shenzhen. † indicates corresponding authors: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Zhan Qin (qinzhan@zju.edu.cn).)
We reveal that this phenomenon is mostly due to the end-to-end supervised training paradigm. Specifically, the excessive learning capability of DNNs allows them to learn features of the backdoor trigger; end-to-end supervised training then shrinks the distance between poisoned samples in the feature space and connects the learned trigger-related features with the target label. Based on this understanding, we propose to decouple the end-to-end training process for backdoor defense. Specifically, we treat the DNN as two disjoint parts: a feature extractor (i.e., the backbone) and a simple classifier (i.e., the remaining fully connected layers). We first learn a purified feature extractor via self-supervised learning (Kolesnikov et al., 2019; Chen et al., 2020a; Jing & Tian, 2020) on unlabeled training samples (obtained by removing their labels), and then learn the simple classifier via a standard supervised training process on top of the learned feature extractor, using all training samples. The strong data augmentations involved in self-supervised learning damage the trigger patterns, making them unlearnable during representation learning, and the decoupling process further disconnects trigger patterns from the target label. Accordingly, hidden backdoors cannot be successfully created under our defense, even when the model is trained on the poisoned dataset. Moreover, we further reveal that the representations of poisoned samples generated by the purified extractor are significantly different from those generated by an extractor learned with the standard training process. Specifically, a poisoned sample lies close to samples with its ground-truth label instead of the target label. This phenomenon makes the training of the simple classifier similar to label-noise learning (Wang et al., 2019b; Ma et al., 2020; Berthon et al., 2021). As such, we first filter out high-credibility training samples (i.e.
, training samples that are most probably benign) and then use those samples as labeled data, with the remaining part forming the unlabeled data, to fine-tune the whole model via semi-supervised learning (Rasmus et al., 2015; Berthelot et al., 2019; Sohn et al., 2020). This further reduces the adverse effects of poisoned samples. The main contributions of this paper are three-fold. (1) We reveal that the backdoor is embedded in the feature space, which is mostly due to the end-to-end supervised training paradigm. (2) Based on this understanding, we propose a decoupling-based backdoor defense (DBD) to alleviate the threat of poisoning-based backdoor attacks. (3) We conduct experiments on classical benchmark datasets, which verify the effectiveness of our defense. 2 RELATED WORK. 2.1 BACKDOOR ATTACK. Backdoor attack is an emerging research area that raises security concerns about training with third-party resources. In this paper, we focus on poisoning-based backdoor attacks on image classification, where attackers can only modify the dataset rather than other training components (e.g., the training loss). This threat can also arise in other tasks (Xiang et al., 2021; Zhai et al., 2021; Li et al., 2022) and under different attacker capacities (Nguyen & Tran, 2020; Tang et al., 2020; Zeng et al., 2021a), which are out of the scope of this paper. In general, existing attacks can be divided into two main categories based on the property of their target labels, as follows: Poison-Label Backdoor Attack. This is currently the most common attack paradigm, in which the target label differs from the ground-truth label of the poisoned samples. BadNets (Gu et al., 2019) is the first and most representative poison-label attack.
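The decoupled DBD pipeline described above can be sketched end-to-end on toy data. This is a minimal sketch under stated assumptions, not the paper's implementation: PCA on unlabeled data stands in for the SimCLR-style self-supervised extractor, logistic regression plays the simple classifier, and only the loss-based filtering of the final stage is shown (the semi-supervised fine-tuning itself is omitted). All shapes, rates, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for an image dataset: two Gaussian classes in 10-D.
# Five class-0 samples are "poisoned": their labels are flipped to the
# attacker's target class 1 (the trigger pattern itself is not modeled).
X = np.vstack([rng.normal(-1.0, 1.0, (100, 10)), rng.normal(1.0, 1.0, (100, 10))])
y = np.array([0] * 100 + [1] * 100)
poison_idx = rng.choice(100, 5, replace=False)
y[poison_idx] = 1                          # attacker-specified target label

# Stage 1: learn the feature extractor WITHOUT labels. The paper uses
# self-supervised contrastive learning; PCA over the unlabeled samples
# stands in for that step in this sketch.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
extract = lambda a: (a - mu) @ Vt[:2].T    # frozen backbone, 2-D features

# Stage 2: standard supervised training of the simple classifier
# (logistic regression) on top of the frozen features, using all labels.
F = extract(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.1 * F.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

# Stage 3: filter high-credibility (low-loss) samples to form the labeled
# set; the rest would become the unlabeled set for semi-supervised
# fine-tuning of the whole model.
p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1.0 - p + 1e-12))
labeled = np.argsort(losses)[: len(y) // 2]
poison_frac = np.isin(labeled, poison_idx).mean()
print(f"poisoned fraction among high-credibility samples: {poison_frac:.3f}")
```

Even with poisoned labels present during stage 2, the mislabeled points incur high classifier loss, because the label-free extractor places them near their ground-truth class; the low-loss filter therefore excludes them from the labeled set.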
Specifically, it randomly selects a few samples from the original benign dataset and generates poisoned samples by stamping the backdoor trigger onto the (benign) images and changing their labels to an attacker-specified target label. The generated poisoned samples are then combined with the remaining benign ones to form the poisoned training dataset, which is delivered to users. After that, Chen et al. (2017) suggested that a poisoned image should be similar to its benign version for stealthiness, based on which they proposed the blended attack. Recently, (Xue et al., 2020; Li et al., 2020b; 2021c) further explored how to conduct poison-label backdoor attacks more stealthily. Most recently, a more stealthy and effective attack, WaNet (Nguyen & Tran, 2021), was proposed; WaNet adopts image warping as the backdoor trigger, which deforms but preserves the image content. Clean-Label Backdoor Attack. Although the poisoned images generated by poison-label attacks can be similar to their benign versions, users may still notice the attack by examining the image-label relationship. To address this problem, Turner et al. (2019) proposed the clean-label attack paradigm, in which the target label is consistent with the ground-truth label of the poisoned samples. Specifically, they first leveraged adversarial perturbations or generative models to modify some benign images from the target class and then conducted the standard trigger-injection process. This idea was generalized to attack video classification in (Zhao et al., 2020b), where the targeted universal adversarial perturbation (Moosavi-Dezfooli et al., 2017) was adopted as the trigger pattern. Although clean-label backdoor attacks are more stealthy than poison-label ones, they usually suffer from relatively poor performance and may even fail to create backdoors (Li et al., 2020c). 2.2 BACKDOOR DEFENSE.
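The BadNets-style poisoning procedure described in Section 2.1 (stamp a trigger onto a random subset of images, relabel them to the target class) can be sketched as follows. The 28×28 arrays, 10% poisoning rate, and bottom-right corner patch are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def badnets_poison(images, labels, target_label, rate=0.1, patch=3):
    """Stamp a small white square (the trigger) onto a random subset of
    images and set their labels to the attacker-specified target label."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:] = 1.0    # bottom-right trigger patch
    labels[idx] = target_label
    return images, labels, idx

clean = rng.random((100, 28, 28))          # stand-in "benign" images in [0, 1)
y = rng.integers(0, 10, 100)
px, py, idx = badnets_poison(clean, y, target_label=7)
print(f"{len(idx)} of {len(px)} samples poisoned")
```

The remaining benign samples are left untouched, so the resulting dataset is exactly the poisoned training set that would be delivered to users.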
Summary: the authors propose a modification to the training procedure to prevent backdoor attacks. Instead of performing end-to-end supervised training, they suggest first training the model in a self-supervised way, then training only the fully connected layers in a supervised way. They then propose to remove low-credibility samples and fine-tune the whole model on the remaining labeled samples. They claim that this procedure eliminates the backdoored inputs that have incorrect labels.
This paper proposes a decoupling-based backdoor defense (DBD) on poisoning-based backdoor attacks where an adversary can modify the dataset only. Specifically, DBD combines a self-supervised feature extractor and a supervised noise-free classifier, with an additional semi-supervised learning fine-tuning step. The core idea is to decouple the feature extractor and the final prediction. The authors evaluate the effectiveness of DBD on three datasets, three backdoor attack models, and four defense baselines.
Learning to Give Checkable Answers with Prover-Verifier Games
1 INTRODUCTION. The astonishing performance of today's dominant learning paradigm - optimizing powerful differentiable function approximators to minimize suitable loss functions - often comes at the cost of poor robustness and reliability. It is common for powerful deep learning systems to be vulnerable to adversarial attacks (Goodfellow et al., 2015), to display erratic behaviour on out-of-distribution data, and to be very confident of wrong predictions (Che et al., 2021). How can we train our learning algorithms to produce outputs that can be checked by humans, or by learning algorithms that are cheaper or better understood? Over the past decades, the field of computational complexity has greatly expanded our notion of proof to include formalisms such as interactive, zero-knowledge, and probabilistically checkable proofs (Goldreich, 2008). Most of these notions can be thought of in terms of a game between a powerful but untrusted prover and a computationally limited but trusted verifier. Taking inspiration from this, we propose the Prover-Verifier Game (PVG), in which two learning agents play the roles of prover and verifier and are allowed to converse. The verifier aims to determine the correct answer to a decision problem, and the prover aims to convince the verifier of a particular answer (regardless of its correctness). Since the prover is untrustworthy, the verifier will only find its messages useful to the extent that it can independently verify the information. If all goes well, the game dynamics will lead to a proof protocol which, if not mathematically sound, is at least sufficient for the whole system to achieve more reliable predictions than the verifier can achieve unaided. We analyze the desirability of the game equilibria implied by several variants of the Prover-Verifier Game concept and narrow the space down to a subset of games that theoretically have the desired equilibria.
Picking the right game equilibrium concept turns out to be essential: we prove that formulating the PVG as a sequential game in which the prover agent plays first leads to dysfunctional solutions. On the other hand, the simultaneous formulation (connected to Nash equilibria) and the verifier-first sequential formulation (connected to verifier-leading Stackelberg equilibria) have desirable equilibria. We formally show, on an illustrative problem, that gradient-based differentiable game optimizers can find desirable proof-verification protocols. Do reliable justification protocols emerge in practice if we let artificial agents play the Prover-Verifier Game? To complement our novel game formulation, we develop a rigorous evaluation methodology whereby the verifier is frozen and the prover (or the message) is optimized to convince the verifier. We then run simulations on two algorithmic tasks using a practical instantiation of the PVG concept. As predicted by our theory, PVG-trained verifiers learn to receive useful and reliable information from untrusted provers by following sensible verification protocols, whereas verifiers trained alongside fully collaborative provers are easily deceived. 2 BACKGROUND. 2.1 INTERACTIVE PROOF SYSTEMS (IPS). Interactive proof systems generalize the notion of a mathematical proof to a dialogue between two agents - a prover and a verifier - to solve a decision problem (Arora & Barak, 2009; Thaler, 2019). The verifier agent, which is trustworthy but computationally constrained, is tasked with producing a correct answer to the decision problem. It can exchange messages with a computationally unbounded yet potentially adversarial prover agent. The communication protocol between the prover and verifier constitutes an interactive proof system if and only if it is sound and complete: Definition 1 (Completeness and Soundness).
The verifier of an interactive proof system is complete iff there exists a prover that can always convince the verifier that the answer is "yes" when the correct answer is "yes". It is sound iff there does not exist a prover that can trick the verifier into answering "yes" when the correct answer is "no". 2.2 DIFFERENTIABLE GAME OPTIMIZATION. A two-player differentiable game consists of two agents whose strategies are parametrized by $w = (w_1, w_2) \in \mathbb{R}^d$ and who take turns minimizing their differentiable loss functions $(L_1, L_2): \mathbb{R}^d \to \mathbb{R}$. An equilibrium concept determines which strategies will be adopted by the players. A Nash equilibrium is achieved when no player can unilaterally improve its objective function. Definition 2 (Nash Equilibrium (Von Neumann & Morgenstern, 2007)). The strategies parametrized by $(w_1^*, w_2^*)$ constitute a Nash equilibrium¹ of the two-player differentiable game with loss functions $(L_1, L_2): \mathbb{R}^d \to \mathbb{R}$ if each minimizes its own loss function while keeping the other player's parameters fixed:

$$w_1^* = \arg\min_{w_1} L_1(w_1, w_2^*), \qquad w_2^* = \arg\min_{w_2} L_2(w_1^*, w_2) \tag{1}$$

The notion of equilibrium considered by Generative Adversarial Networks (Goodfellow et al., 2014) is an example of a Nash equilibrium. A Stackelberg equilibrium differs from a Nash equilibrium in that one of the players is deemed the "leader" and the other the "follower" (Wang et al., 2020). It is assumed that the follower always picks the optimal strategy for a given leader strategy. In response, the leader modifies its strategy by factoring in how the follower agent will respond to the modification. Definition 3 (Stackelberg Equilibrium (Fiez et al., 2020)). Let $w_1$ parametrize the strategy of the "leader" agent and $w_2$ that of the "follower" agent. The loss functions of the agents are $L_1$ and $L_2$, respectively.
The strategies parametrized by $(w_1^*, w_2^*)$ constitute a Stackelberg equilibrium¹ if and only if (1) the follower's strategy is optimal given the leader's strategy, and (2) the leader's strategy is optimal taking into consideration how the follower will respond to modifications:

$$w_1^* = \arg\min_{w_1} L_1(w_1, w_2^*(w_1)), \qquad w_2^*(w_1) = \arg\min_{w_2} L_2(w_1, w_2) \tag{2}$$

(¹We assume uniqueness of equilibria for simplicity.) 3 PROVER-VERIFIER GAME. The Prover-Verifier Game aims to learn decision rules with a reliable internal verification step. We first describe what we mean for a verification protocol to be reliable, then outline the game formulation. 3.1 DESIDERATA AND PROVER-VERIFIER INCENTIVES. Desiderata: Designing the right prover-verifier game requires precisely defining what a desirable outcome/equilibrium of the game looks like. With the "completeness" and "soundness" definitions in mind (Section 2.1), we list the properties we seek in a desirable verifier protocol:

• Possibility of Perfect Recall: There should exist a prover that can help the verifier achieve perfect recall — the ratio of true-positive predictions to all positive examples.
• Guarantee of Perfect Precision: There should not exist any prover that can trick the verifier into achieving non-perfect precision — the ratio of true-positive predictions to all positive predictions.

"Possibility of perfect recall" is connected to completeness and implies that, with the right proofs, the verifier can achieve a zero false-negative rate. "Guarantee of perfect precision" is related to soundness and implies that the verifier always has a zero false-positive rate regardless of which proof is used.
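The equilibrium concepts of Definitions 2 and 3 (Section 2.2) can be made concrete in a finite two-player game, where best responses are computed by enumeration rather than gradient descent. The loss matrices below are illustrative, not from the paper.

```python
import numpy as np

# Illustrative loss matrices: entry [i, j] is a player's loss when
# player 1 picks strategy i and player 2 picks strategy j.
L1 = np.array([[3., 1.], [2., 4.]])
L2 = np.array([[2., 0.], [1., 3.]])

def is_pure_nash(L1, L2, i, j):
    """Definition 2: (i, j) is a pure Nash equilibrium iff neither player
    can lower its own loss by unilaterally deviating."""
    return L1[i, j] <= L1[:, j].min() and L2[i, j] <= L2[i, :].min()

def stackelberg(L1, L2):
    """Definition 3 with player 1 as leader: the follower best-responds to
    each leader strategy (w2*(w1)), and the leader picks the strategy that
    is optimal given that anticipated response (w1*)."""
    best_response = L2.argmin(axis=1)                    # w2*(w1)
    leader_loss = L1[np.arange(len(L1)), best_response]  # L1(w1, w2*(w1))
    i_star = int(leader_loss.argmin())                   # w1*
    return i_star, int(best_response[i_star])

nash = [(i, j) for i in range(2) for j in range(2) if is_pure_nash(L1, L2, i, j)]
print("pure Nash equilibria:", nash)
print("leader-first Stackelberg outcome:", stackelberg(L1, L2))
```

In this particular game there are two pure Nash equilibria, (0, 1) and (1, 0), while the leader-first Stackelberg formulation selects (0, 1): the order of play matters, which is exactly the distinction the PVG formulations exploit.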
Picking the right prover-verifier incentives: We propose to set up the prover-verifier game with the following incentive structure: the verifier is incentivized to always give the correct answer to the given decision problem, and the prover is incentivized to get the verifier to answer "yes" to the decision problem, regardless of the correct answer. This structure encourages collaborative dynamics when the correct answer is "yes" (which is linked to recall) and adversarial dynamics when the correct answer is "no" (which is linked to precision). As we prove formally in Section 3.3, this incentive structure, when embedded in the right game setup with the right loss functions, can lead to a proof-verification protocol that meets the desiderata.

3.2 DIFFERENT PROVER-VERIFIER GAME FORMULATIONS. Notation: Let $x \sim p_X(x)$, $y \sim p_Y(y)$ and $y' \sim p_{Y'}(y')$ be the input, verifier-label and prover-label random variables, where $x \in \mathbb{R}^{n_x}$, $y \in \{0, 1, \ldots, K\}$ and $y' \in \{0, 1, \ldots, K\}$. Let their joint distribution be $p_D$. The prover agent $P_{w_p}$, parametrized by $w_p \in \mathbb{R}^{n_p}$, outputs the conditional probability distribution $p_p(z|x)$ over real-valued "message" vectors $z \in \mathbb{R}^{n_z}$, conditioned on the input. The verifier agent $V_{w_v}$, parametrized by $w_v \in \mathbb{R}^{n_v}$, represents the conditional probability distribution $p_v(\hat{y}|x, z)$ over the predicted labels $\hat{y} \in \{0, 1, \ldots, K\}$, conditioned on the input $x$ and the message $z$.

Loss Functions: We pick the following loss functions for the verifier and prover:

$$L_v = \mathbb{E}_{(x, y, y') \sim p_D,\; z \sim p_p(z|x)}\left[-\log p_v(y \mid x, z)\right] \qquad (3)$$

$$L_p = \mathbb{E}_{(x, y, y') \sim p_D,\; z \sim p_p(z|x)}\left[-\log p_v(y' \mid x, z)\right] \qquad (4)$$

Since we wish for the verifier agent to try and solve the problem to the best of its ability, we set $p_Y$ to be the correct label distribution.
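A minimal sketch of the two losses, assuming the standard negative-log-likelihood reading of Equations (3)-(4) and a toy linear-softmax verifier on random data (all shapes, weights, and the batch itself are hypothetical); it simply evaluates the two expectations on a batch:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
B, K = 8, 1                               # batch size; labels in {0, ..., K}
x = rng.normal(size=(B, 3))               # inputs
z = rng.normal(size=(B, 2))               # prover messages z ~ p_p(z|x)
y = rng.integers(0, K + 1, size=B)        # correct labels (verifier targets)
y_prime = np.ones(B, dtype=int)           # prover defends "yes" (label 1)

W = rng.normal(size=(5, K + 1))           # toy verifier weights over [x; z]
p_v = softmax(np.concatenate([x, z], axis=1) @ W)    # p_v(. | x, z)

L_v = -np.log(p_v[np.arange(B), y]).mean()        # Eq. (3): verifier loss
L_p = -np.log(p_v[np.arange(B), y_prime]).mean()  # Eq. (4): prover loss
print(L_v, L_p)
```

Note the two losses differ only in the label they plug into the verifier's likelihood: the verifier is scored against the correct label $y$, the prover against the label $y'$ it is defending.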
Since we wish to incentivize the prover to try and defend a particular answer regardless of its correctness, we set $p_{Y'}(y') = 1$, where $y' \in \{0, 1, \ldots, K\}$ is the output that the prover is defending. In a decision problem, $y'$ will be either 0 or 1. One can consider variations on these loss functions by replacing the negative log with other functions that are convex and monotonically increasing. We prefer the aforementioned loss functions when the verifier agent represents a softmax policy over its outputs, since this guarantees that the gradients vanish iff the agents fulfill their incentives (see Appendix A).

Suitable Equilibrium Concepts: There are three obvious ways to set up the order in which the agents play the Prover-Verifier Game (i.e., pick their strategies parametrized by $w_p$ and $w_v$): 1) the prover and verifier play simultaneously, 2) the prover plays first, 3) the verifier plays first. The simultaneous setup leads to picking the Nash equilibrium as the equilibrium concept. The latter two lead to the Stackelberg equilibrium, where the prover or the verifier is the leader and can therefore reason about how the opponent will respond to a given strategy. Interestingly, as we show in Section 3.3, not all of these formulations lead to equilibria that satisfy our desiderata.

When Instances Are Revealed: We can also arrive at different game formulations depending on whether the problem instances (i.e., the input and the prover/verifier labels) are revealed to the agents before or after they pick their strategies (i.e., their weights). This leads to eight different game formulations: two from the simultaneous setup (instances revealed before or after the simultaneous moves) and six from the sequential setups (before, in between, and after the moves, for each of the two sequential formulations).

How the Prover and Verifier Interact: The design space of how the prover and verifier interact with each other is large.
We only consider the single-step, no-feedback formulation throughout the paper and leave the analysis of other formulations as future work. Note that the single-step, no-feedback formulation is related to NP proof systems and includes sophisticated proof-verification protocols. We can also derive numerous PVG formulations by modifying the communication channel between the prover and verifier. Decisions regarding the channel include: 1) whether the channel is stochastic or deterministic (and, if stochastic, what kind of noise it adds); 2) whether the prover's messages are real-valued vectors or discrete tokens; 3) whether the prover and verifier use an already-constructed communication protocol, such as natural language.
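The channel design choices listed above can be sketched as a single function (a hypothetical interface, not the authors' implementation): a deterministic channel passes the message through, a stochastic one corrupts it with Gaussian noise, and a discrete one quantizes each coordinate into integer tokens:

```python
import numpy as np

def channel(z, kind="deterministic", sigma=0.1, n_tokens=None, rng=None):
    """Transmit the prover's message z to the verifier.

    kind='deterministic' : z arrives unchanged
    kind='stochastic'    : z is corrupted with Gaussian noise of scale sigma
    kind='discrete'      : each coordinate is squashed to (0, 1) and bucketed
                           into one of n_tokens integer symbols (a crude
                           stand-in for a discrete-token channel)
    """
    rng = rng or np.random.default_rng()
    if kind == "stochastic":
        return z + sigma * rng.normal(size=z.shape)
    if kind == "discrete":
        return np.floor(n_tokens / (1.0 + np.exp(-z))).astype(int)
    return z

z = np.array([[0.0, 5.0, -5.0]])
print(channel(z, "deterministic"))         # unchanged real vector
print(channel(z, "discrete", n_tokens=4))  # integer tokens in {0, 1, 2, 3}
```

Each choice changes what the verifier can rely on: a noisy channel forces robust proofs, while a discrete channel bounds how much information a single message can carry.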
The authors propose a framework for training networks such that justification of the answers emerges automatically from it. Specifically, they propose a training framework in which a prover's objective is to persuade the verifier network, while the verifier network's objective is to answer correctly based on both the original input and the prover's message. The authors analyze when the training of the two agents converges and identify when the derived solution becomes trivial. They also apply their framework in two experimental settings, the binary erasure task and the cross-detection task, and show that the method results in robust and interpretable decision rules.
SP:07478ff01da03c456b44a04f251e59daa364677a
Learning to Give Checkable Answers with Prover-Verifier Games
1 INTRODUCTION. The astonishing performance of today's dominant learning paradigm, optimizing powerful differentiable function approximators to minimize suitable loss functions, often comes at the cost of poor robustness and reliability. It is common for powerful deep learning systems to be vulnerable to adversarial attacks (Goodfellow et al., 2015), to display erratic behaviour on out-of-distribution data, and to be very confident of wrong predictions (Che et al., 2021). How can we train our learning algorithms to produce outputs that can be checked by humans, or by learning algorithms that are cheaper or better understood? Over the past decades, the field of computational complexity has greatly expanded our notion of proof to include formalisms such as interactive, zero-knowledge, and probabilistically checkable proofs (Goldreich, 2008). Most of these notions can be thought of in terms of a game between a powerful but untrusted prover and a computationally limited but trusted verifier. Taking inspiration from this, we propose the Prover-Verifier Game (PVG), where two learning agents play the roles of prover and verifier and are allowed to converse. The verifier aims to determine the correct answer to a decision problem, and the prover aims to convince the verifier of a particular answer (regardless of its correctness). Since the prover is untrustworthy, the verifier will only find its messages useful to the extent that it can independently verify the information. If all goes well, then the game dynamics will lead to a proof protocol which, if not mathematically sound, is at least sufficient for the whole system to achieve more reliable predictions than the verifier can achieve unaided. We analyze the desirability of the game equilibria implied by several variants of the Prover-Verifier Game concept and narrow the space down to a subset of games that theoretically have the desired equilibria.
Picking the right game equilibrium concept turns out to be essential: we prove that formulating the PVG as a sequential game in which the prover agent plays first leads to dysfunctional solutions. On the other hand, the simultaneous formulation (connected to Nash equilibria) and the verifier-first sequential formulation (connected to verifier-leading Stackelberg equilibria) have desirable equilibria. We formally show, on an illustrative problem, that gradient-based differentiable game optimizers can find desirable proof-verification protocols. Do reliable justification protocols emerge in practice if we let artificial agents play the Prover-Verifier Game? To complement our novel game formulation, we develop a rigorous evaluation methodology whereby the verifier is frozen and the prover (or the message) is optimized to convince the verifier. We then run simulations on two algorithmic tasks using a practical instantiation of the PVG concept. As predicted by our theory, PVG-trained verifiers learn to receive useful and reliable information from untrusted provers by following sensible verification protocols, whereas those trained alongside fully collaborative provers are easily deceived.

2 BACKGROUND. 2.1 INTERACTIVE PROOF SYSTEMS (IPS). Interactive proof systems generalize the notion of a mathematical proof to a dialogue between two agents, a prover and a verifier, to solve a decision problem (Arora & Barak, 2009; Thaler, 2019). The verifier agent, who is trustworthy but computationally constrained, is tasked with producing a correct answer to the decision problem. It can exchange messages with a computationally unbounded, yet potentially adversarial, prover agent. The communication protocol between the prover and verifier constitutes an interactive proof system if and only if it is sound and complete: Definition 1 (Completeness and Soundness).
This paper proposes a new learning methodology for training neural networks based on Prover-Verifier Games (PVGs), which are inspired by interactive proof systems (IPS). A PVG consists of two learners that interact in both collaborative and adversarial manners, and the hope is that the two learners together can achieve more reliable predictions. The paper studies eight possible game instantiations, depending on player order and on when the problem instance is revealed; the theoretical analysis shows that two instantiations are superior to the others, which is also confirmed by an empirical evaluation. The paper also finds that stress-testing the verifier's robustness is a more meaningful measure of learning success than prediction accuracy during training.
The paper explores the idea of learning a game-theory-inspired prover-verifier system to augment neural networks with verifiable predictions. The authors set up a differentiable prover-verifier game and establish conditions on the formulation of the game that ensure the learned verifier satisfies the soundness and completeness constraints. The paper has the following results:
- All verifier-leading Stackelberg equilibria of the sequential PVG formulation in which the problem instance is revealed after the verifier picks its strategy yield a desirable proof-verification protocol.
- For the BEC-based prover-verifier system, the paper also guarantees that alternating gradient descent-ascent converges to the equilibrium under suitable learning rates in verifier-leading PVG formulations.
- Empirical evidence from the BEC and FindThePlus experiments shows the effectiveness of the learning scheme in a practical setting.
Explore and Control with Adversarial Surprise
1 INTRODUCTION. Reinforcement learning methods have attained impressive results across a number of domains (e.g., Berner et al. (2019); Kober et al. (2013); Levine et al. (2016); Vinyals et al. (2019)). However, current RL methods typically require a large number of samples for each new task (Dann et al., 2018). In other areas of machine learning, an effective way to mitigate high data requirements has been the use of unsupervised or self-supervised learning (Sutskever et al., 2014; Radford et al., 2019). Similarly, humans and animals seem to be able to learn rich priors from their own experience without being told what to do, and children engage in structured but unsupervised play in part as a way to acquire a functional understanding of the world (Smith & Gasser, 2005). Based on this intuition, unsupervised RL methods rely on intrinsic motivation (IM): task-agnostic objectives that incentivize the agent to autonomously explore the world and learn behaviors that can be used to solve a range of downstream tasks with little supervision. A general strategy is to exactly or approximately express this task-agnostic objective in the form of a reward function that uses only environment statistics, and then optimize it using standard RL algorithms. A reasonable goal for a good unsupervised learning algorithm is to fully explore the state space of the environment, since this ensures the agent will have the experience with which to learn the optimal policy for a downstream task. Therefore, past work on IM has frequently focused on novelty-seeking agents that maximize surprise or prediction error (Achiam & Sastry, 2017; Schmidhuber, 1991; Yamamoto & Ishikawa, 2010; Pathak et al., 2017; Burda et al., 2018). However, these methods are vulnerable to becoming distracted by inherently stochastic elements of the environment, such as a "noisy TV" (Schmidhuber, 2010).
In contrast , active inference researchers inspired by biological agents have focused on developing agents that seek to control their environment and minimize surprise ( Friston , 2009 ; Friston et al. , 2009 ; 2016 ; Berseth et al. , 2021 ) . These methods suffer from the opposite issue , the “ dark room problem ” , in which a surprise-minimizing agent in a low-entropy environment does not need to learn any behaviors at all in order to satisfy its objective ( Friston et al. , 2012 ) . Yet humans seem to maintain a balance between optimizing for both novelty and familiarity . For example , a child in a play room does not just try to toss their toys on the floor in every possible pattern , or immediately put them away in the toy box , but instead tries to stack them together , find new uses for parts , or combine them in various structured ways . We argue that an effective unsupervised RL method should find the right balance between exploration and control . With this goal in mind , we introduce a new algorithm based on an adversarial game between two policies , which take turns sequentially acting for the same RL agent . The goal of the Control policy is to minimize surprise , by learning to manipulate its environment in order to return to safe and predictable states . In turn , the Explore policy is novelty-seeking , and attempts to maximize surprise for the Control policy , putting it into a diverse range of novel states . When combined , the two adversaries engage in an arms race , repeatedly putting the agent into challenging new situations , then attempting to gain control of those situations . Figure 8 shows an illustration of the method , including a sample interaction . Rather than simply adding noise to the environment , the Explore policy learns to adapt to the Control policy , and to search for increasingly challenging situations from which the Control policy must recover . 
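The turn-taking game described above can be sketched as a single episode in which a surprise-maximizing Explore policy acts first and a surprise-minimizing Control policy acts afterward, with opposite rewards. The categorical observation model, the policies, and the one-line environment below are hypothetical stand-ins, not the paper's implementation.

```python
import math

def surprise(obs, model):
    """Surprise of an observation under a simple categorical model (hypothetical)."""
    return -math.log(model.get(obs, 1e-6))

def episode(env_step, explore_policy, control_policy, model, k=4, total=8):
    """One episode of the turn-taking game: Explore acts for the first k steps
    and is rewarded +surprise; Control acts for the remaining steps and is
    rewarded -surprise."""
    obs = "start"
    explore_return = control_return = 0.0
    for t in range(total):
        acting = explore_policy if t < k else control_policy
        obs = env_step(obs, acting(obs))
        s = surprise(obs, model)
        if t < k:
            explore_return += s    # Explore maximizes surprise
        else:
            control_return -= s    # Control minimizes surprise
    return explore_return, control_return

# Toy rollout: Explore steers toward the rare observation, Control toward the common one.
model = {"safe": 0.9, "chaotic": 0.1}
explore_return, control_return = episode(
    lambda obs, a: a,        # toy environment: the action becomes the next observation
    lambda obs: "chaotic",
    lambda obs: "safe",
    model,
)
```

Training both policies on these opposed returns is what produces the arms race: Explore is rewarded for finding situations Control cannot yet stabilize.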
Thus our method , Adversarial Surprise ( AS ) , leverages the power of multi-agent training to generate a curriculum of increasingly challenging exploration and control problems , leading to the emergence of complex , meaningful behaviours . The contributions of this paper are : i ) The Adversarial Surprise algorithm ; ii ) Theoretical results which prove that when the environment is formulated as a stochastic Block MDP ( Du et al. , 2019 ) , traditional surprise-maximizing methods will fail to fully explore the underlying state space , but Adversarial Surprise will succeed ; iii ) Empirical evidence which supports the theoretical results , and shows that AS fully explores the state space of both Block MDPs and traditional benchmarking environments like VizDoom ; iv ) Experiments that compare AS to state-of-the-art unsupervised RL baselines Random Network Distillation ( RND ) ( Burda et al. , 2018 ) , Asymmetric Self-Play ( ASP ) ( Sukhbaatar et al. , 2017 ) , Adversarially Guided Actor-Critic ( AGAC ) ( Flet-Berliac et al. , 2021 ) , and Surprise Minimizing RL ( SMiRL ) ( Berseth et al. , 2021 ) , and show that AS is able to explore more effectively , learn more meaningful behaviors , and achieve higher task reward when transferred zero-shot to common benchmarking environments Atari and VizDoom . Videos of our agents are available on the project website : https://sites.google.com/corp/view/adversarialsurprise/home , and show that AS is able to learn interesting , emergent behaviors in Atari , VizDoom , and MiniGrid , even without ever having trained with the game reward . 2 RELATED WORK . Novelty-seeking and exploration methods lead the agent to increase coverage of the environment . A simple way to implement novelty-seeking is to maximize the prediction error of a world model ( Achiam & Sastry , 2017 ; Schmidhuber , 1991 ; Yamamoto & Ishikawa , 2010 ; Pathak et al. , 2017 ; Burda et al. , 2018 ; Raileanu & Rocktäschel , 2020 ; Zhang et al. , 2020b ) . 
Random Network Distillation ( RND ) ( Burda et al. , 2018 ) is a highly effective example of one such method . However , a major problem of prediction-error-based methods is that the intrinsic reward is not useful when the environment contains aleatoric uncertainty , i.e . inherently stochastic elements . This problem is often referred to as the noisy TV problem , after Schmidhuber ( 2010 ) used the example of an agent becoming stuck staring at static on a TV screen . We show in this work that RND is indeed vulnerable to this problem , performing poorly in stochastic environments . Several works have proposed solutions to deal with aleatoric uncertainty . For example , some approximate information gain on the agent ’ s dynamics model of the environment ( Houthooft et al. , 2016 ; Still & Precup , 2012 ; Bellemare et al. , 2016 ; Pathak et al. , 2017 ; Schmidhuber , 1991 ) , using variational Bayes or ensemble disagreement ( Shyam et al. , 2019 ; Pathak et al. , 2019 ; Houthooft et al. , 2016 ) . However , implementing such Bayesian procedures is difficult , because it requires scalable and effective modeling of epistemic uncertainty , which itself is a major open problem with high-dimensional models such as neural networks ( Bhattacharya & Maiti , 2020 ) . Another method based on maximizing information gain between the agent ’ s actions and future state is known as empowerment ( Klyubin et al. , 2005 ; Salge et al. , 2014 ; Eysenbach et al. , 2018 ; Sharma et al. , 2019 ) , but can also be difficult to approximate for high-dimensional states ( Karl et al. , 2015 ; Zhao et al. , 2020 ; de Abril & Kanai , 2018 ; Zhang et al. , 2020a ; Mohamed & Rezende , 2015 ; Gregor et al. , 2016 ; Hansen et al. , 2019 ) . Instead of attempting to directly approximate information gain , our approach maximizes state coverage via an adversarial competition over observation entropy . 
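The RND mechanism just described can be sketched with linear "networks": a frozen, randomly initialized target and a trainable predictor, with intrinsic reward equal to the squared prediction error. This is a minimal sketch of the idea only; RND in practice uses deep networks, and the shapes and learning rate below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen random target f(o) = W_target @ o and trainable predictor
# f_hat(o) = W_pred @ o. The intrinsic reward is the squared prediction
# error, which decays for observations the predictor has been trained on.
W_target = rng.normal(size=(8, 4))   # fixed at initialization
W_pred = np.zeros((8, 4))            # trained online on visited observations

def intrinsic_reward(obs):
    return float(np.sum((W_pred @ obs - W_target @ obs) ** 2))

def train_predictor(obs, lr=0.05, steps=300):
    global W_pred
    for _ in range(steps):
        err = W_pred @ obs - W_target @ obs
        W_pred -= lr * np.outer(err, obs)   # gradient step on 0.5 * ||err||^2

obs = rng.normal(size=4)
before = intrinsic_reward(obs)   # novel observation: large reward
train_predictor(obs)
after = intrinsic_reward(obs)    # same observation, now familiar: reward shrinks
```

The noisy-TV failure mode is visible in this structure: a stream of ever-novel random observations never becomes familiar, so the prediction error, and hence the reward, stays high.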
As we will show , this recovers an effective coverage strategy even in the presence of rich , stochastic observations , and performs well in practice . The asymptotic policy learned by standard novelty-seeking methods is not exploratory . Recent work tries to learn a policy that approximately maximizes the state marginal entropy at convergence ( Hazan et al. , 2019 ; Lee et al. , 2019 ) . The state marginal entropy is hard to compute in general , and recent work has proposed various approximations ( Seo et al. , 2021 ; Liu & Abbeel , 2021 ; Mutti et al. , 2021 ) . We prove that even with only noisy observations of the underlying state , our method asymptotically maximizes the state marginal entropy at convergence under some assumptions . Surprise minimization and active inference : The design of the Control agent in our method draws on ideas from surprise minimization and active inference ( Friston , 2009 ; Friston et al. , 2009 ; 2016 ) . The free energy principle , originating in the neuroscience community , argues that complex niche-seeking behaviors of biological systems are the result of minimizing long-term average surprise on system sensors , leading agents to stay in safe and stable states ( Friston , 2009 ) . SMiRL is an unsupervised RL method that leverages surprise minimization as a means to discover skills that stabilize an otherwise chaotic environment ( Berseth et al. , 2021 ) . However , such approaches require strong assumptions on the stochasticity of the environment . In low-entropy environments , surprise minimization will not lead to learning interesting behavior due to the dark room problem , in which the agent is not incentivized to explore the environment to find a better niche ( Friston et al. , 2012 ) . Our method does not require that the environment is stochastic , since the Explore agent itself drives the Control agent into situations from which surprise minimization is challenging . 
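The surprise signal that a minimizing agent acts on can be illustrated by fitting a running Gaussian to a scalar observation feature and scoring new observations by negative log-likelihood. This is a sketch in the spirit of SMiRL's density-model surprise, not its actual model; the variance floor and scalar feature are assumptions for the example.

```python
import math

class GaussianSurprise:
    """Maintain a running Gaussian over observed scalars; surprise of a new
    observation is its negative log-likelihood under that Gaussian."""
    def __init__(self):
        self.history = []

    def update(self, x):
        self.history.append(x)

    def surprise(self, x):
        n = len(self.history)
        mu = sum(self.history) / n
        var = sum((h - mu) ** 2 for h in self.history) / n + 1e-3  # variance floor
        return 0.5 * math.log(2 * math.pi * var) + (x - mu) ** 2 / (2 * var)

m = GaussianSurprise()
for x in [0.0, 0.1, -0.1, 0.05]:   # a stable, low-entropy history
    m.update(x)
familiar = m.surprise(0.0)   # near the running mean: low surprise
novel = m.surprise(5.0)      # far from the running mean: high surprise
```

A surprise-minimizing agent that already sits at low values of this signal has no incentive to move, which is exactly the dark room problem; the Explore policy in AS exists to push it out of that regime.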
Multi-Agent competition has been shown to drive the emergence of complex behavior ( Baker et al. , 2019 ; Dennis et al. , 2020 ; Xu et al. , 2020 ; Leibo et al. , 2019 ; Schmidhuber , 1997 ; Campero et al. , 2020 ) . Asymmetric Self-Play ( ASP ) aims to learn increasingly complex skills via a competition between two policies , where one policy ( Bob ) tries to imitate or reverse the trajectory of the other policy ( Alice ) ( Sukhbaatar et al. , 2017 ; OpenAI et al. , 2021 ) . Our empirical results compare to ASP , and demonstrate that it can fail in stochastic environments , because Alice can easily produce random trajectories which are very difficult for Bob to imitate . Similar to ASP , Adversarially-Guided Actor Critic ( AGAC ) ( Flet-Berliac et al. , 2021 ) introduces an adversary agent which attempts to mimic the action distribution of the actor , while the actor tries to differentiate itself from the adversary . Like these methods , Adversarial Surprise uses a two-player game to induce exploration and emergent complexity . However , our objective is general and information-theoretic , focusing on minimization or maximization of surprise rather than reaching specific states . Unlike these methods , we provide theoretical results showing AS provides a principled approach to maximizing state coverage in stochastic environments . Finally , our method is reminiscent of recent work that learns separate exploration and exploitation policies but without competition between them ( Badia et al. , 2020b ; a ; Campos et al. , 2021 ) .
This paper introduces a method, called Adversarial Surprise (AS), for unsupervised reinforcement learning. AS employs a two-player, adversarial, sequential procedure in which an Explore player tries to maximize the approximate entropy of the observations, whereas a Control player tries to minimize this same entropy. The method is then compared against other unsupervised RL baselines in visual domains, on both state coverage and zero-shot adaptation.
SP:5cfd3e8b1aeca40406af37d08743ce8dc2f9c8fe
This paper introduces Adversarial Surprise, a new approach for unsupervised reinforcement learning in stochastic BMDPs, where the goal is to explore an environment without rewards. The algorithm uses a single agent with two policies, an Explorer and a Controller, which switch during an episode and receive opposite rewards: to maximize and minimize "surprise", respectively. The method is supported by a theoretical argument under the assumptions of the stochastic BMDP. When these assumptions hold in an environment, the empirical results are strong. The method is also tested in Atari and VizDoom.
This paper proposes Adversarial Surprise (AS), a method for unsupervised training of RL agents based on a competition between two policies dubbed Explore and Control. The two policies compete to maximize and minimize surprise, respectively, by taking turns controlling a shared body. The authors show that under some conditions this objective leads to maximizing state entropy within an episode, and formalize the settings where other approaches may fail. Evaluation is performed on three different domains: procedurally generated MiniGrid tasks, VizDoom, and four Atari games. AS explores more rooms within an episode in MiniGrid, visits more x positions within an episode in VizDoom, and obtains higher zero-shot scores in 3/4 Atari games.
SP:5cfd3e8b1aeca40406af37d08743ce8dc2f9c8fe
DeepDebug: Fixing Python Bugs Using Stack Traces, Backtranslation, and Code Skeletons
The joint task of bug localization and program repair is an integral part of the software development process . In this work we present DeepDebug , an approach to automated debugging using large , pretrained transformers . We begin by training a bug-creation model on reversed commit data for the purpose of generating synthetic bugs . We apply these synthetic bugs toward two ends . First , we directly train a backtranslation model on all functions from 200K repositories . Next , we focus on 10K repositories for which we can execute tests , and create buggy versions of all functions in those repositories that are covered by passing tests . This provides us with rich debugging information such as stack traces and print statements , which we use to finetune our model which was pretrained on raw source code . Finally , we strengthen all our models by expanding the context window beyond the buggy function itself , and adding a skeleton consisting of that function ’ s parent class , imports , signatures , docstrings , and method bodies , in order of priority . On the QuixBugs benchmark , we increase the total number of fixes found by over 50 % , while also decreasing the false positive rate from 35 % to 5 % and decreasing the timeout from six hours to one minute . On our own benchmark of executable tests , our model fixes 68 % of all bugs on its first attempt without using traces , and after adding traces it fixes 75 % on first attempt . 1 INTRODUCTION . The dominant paradigm in automated program repair is the generate-and-validate approach , which our work follows . In this setting we assume the existence of a suite of test functions that identify the existence of a bug . We must then localize the bug and consider candidate fixes until finding a patch that satisfies the test suite . Throughout our experiments , we work with synthetic bugs in which the error has already been localized to a single buggy method . 
We take as input the buggy function , along with additional context depending on the experiment , such as surrounding context from the function ’ s file and a stack trace that exposes the buggy function . We feed that input to our sequence-to-sequence transformer , which attempts to generate the fixed function in its entirety . In our deployment scenario , we also attempt to localize the bug using the stack trace . At present we apply a simple heuristic based on which lines of the stack trace come from the developer ’ s own code , considering the most recently called lines to be the most suspect . In future we are interested in improving our heuristic using an encoder transformer that reranks methods given the stack trace . 2 RELATED WORK . 2.1 FAULT LOCALIZATION . Curiously , the standard approach to bug localization eschews stack traces and instead uses spectrum-based fault localization ( SBFL ) . In this approach , lines of code are ranked in order of suspiciousness based on how many failing tests execute them . One example is the DStar formula ( Wong et al. , 2014 ) . Given a statement s that is executed by failed ( s ) failing tests and passed ( s ) passing tests , it computes the suspiciousness score S ( s ) = failed ( s ) ^e / ( passed ( s ) + ( totalfailed − failed ( s ) ) ) , where e is an exponent such as 2 . A notable exception to the SBFL approach to fault localization is the task of build repair , in which the localization information comes from the compiler message rather than a test suite . Gao et al . use crash traces to query Stack Overflow and produce repairing edits ( Gao et al. , 2015 ) . The DeepDelta model was trained on build data from 300 million lines of Java code from Google projects ( Mesbah et al. , 2019 ) . Using a Neural Machine Translation approach , DeepDelta fixes half of all errors resulting from mismatched method signatures or a missing symbol on its first attempt . 2.2 FALSE POSITIVES .
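The DStar score is straightforward to compute from per-statement test counts; a small sketch (the function name and the choice to return infinity when the denominator is zero are our own assumptions):

```python
def dstar_suspiciousness(failed_s, passed_s, total_failed, e=2):
    """DStar (D*) suspiciousness for a statement s (Wong et al., 2014).

    failed_s:     number of failing tests that execute s
    passed_s:     number of passing tests that execute s
    total_failed: total number of failing tests in the suite
    e:            exponent, commonly 2
    """
    denom = passed_s + (total_failed - failed_s)
    if denom == 0:
        # s is executed by every failing test and no passing test:
        # maximally suspicious.
        return float("inf")
    return failed_s ** e / denom

# A statement executed by 3 of 4 failing tests and 1 passing test:
print(dstar_suspiciousness(3, 1, 4))  # -> 9 / (1 + 1) = 4.5
```

Ranking statements by this score in descending order yields the suspiciousness ordering used by SBFL tools.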
A pervasive problem in the field is the fact that tests are incomplete oracles . Many tools produce patches that are merely test-suite adequate more often than they are genuine fixes . For instance , when evaluated on the QuixBugs challenge ( Lin et al. , 2017 ) , a benchmark consisting of one-line bugs in classic algorithms , a set of ten bug-patching tools found solely false positives for nine functions , while only actually finding a genuine fix for seven ( Ye et al. , 2019 ) . Only three models found more genuine fixes than false positives , in all cases exactly one more genuine fix . The neural approach CoCoNuT does somewhat better , finding thirteen genuine fixes along with seven false positives ( Lutellier et al. , 2020 ) . A survey found similarly disappointing results on the larger benchmark Defects4J ( Just et al. , 2014 ) , a collection of real-world Java bugs ( Durieux et al. , 2019 ) . In stark contrast , we observe an excellent false positive rate with our approach . One can also evaluate bug-patching models without executing tests . The Patches in the Wild benchmark frames bug-patching as a sequence-to-sequence problem as in our approach , which enables the use of NLP metrics ( Tufano et al. , 2019 ) . The authors consider an edit to be a bona fide fix only if it is exactly the same as the edit the original developer made when committing it . SequenceR ( Chen et al. , 2018 ) both narrows and expands on the Patches in the Wild dataset by focusing on one-line changes only , while also providing additional context beyond the buggy method by including other methods ’ signatures , similarly to our use of code skeletons . This extended context gives a 15 % relative boost . 2.3 SYNTHETIC DATA . For certain bug-types it is possible to generate millions of synthetic bugs to train on . Devlin et al . ( Devlin et al. , 2017 ) train an RNN on Python to fix incorrect comparison operators , the mistaken use of “ is ” vs.
“ is not ” , variable misuse , and forgotten “ self ” accessors . Overall , they achieve 86 % accuracy on synthetic bugs and 41 % on real-life bugs . Kanade et al . ( Kanade et al. , 2019 ) pretrain a BERT-style model “ CuBERT ” on a larger Python dataset and then finetune on a related suite of synthetic bugs , achieving over 90 % accuracy . Backtranslation is a more generic form of data augmentation , in which a model is trained to translate target data to source data ( Edunov et al. , 2018 ) . In our case , we have an abundance of bug-free data mined from GitHub . We train a backtranslation model to create synthetic bugs and augment the training data for our goal task of fixing bugs . Backtranslation is also put to use for the related task of grammatical error correction ( Kiyono et al. , 2019 ) . 2.4 PRETRAINING . Various task-agnostic pretraining approaches like BERT , BART , T5 , and GPT-3 ( Devlin et al. , 2017 ; Lewis et al. , 2020 ; Raffel et al. , 2019 ; Brown et al. , 2020 ) have seen large performance gains on a diverse array of benchmarks . These models are typically pretrained using a denoising objective on a large corpus of text that has been synthetically corrupted using bidirectional or causal masking , as well as more arbitrary noising such as randomly deleting , replacing , or permuting source tokens . Pretraining has also shown large improvements for the task of program repair ( Drain et al. , 2021 ; Lu et al. , 2021 ) . 3 MODEL . We reuse the 406M-parameter sequence-to-sequence transformer with twelve encoder layers and twelve decoder layers which was pretrained as described in Section 4.1 . When experimenting with stack traces , we allot 1024 tokens for the code skeleton , and up to 896 tokens for the trace . In order to accommodate this larger context we thus need to expand the transformer ’ s positional embedding matrix . To this end we use axial embeddings , as inspired by Reformer ( Kitaev et al. , 2020 ) .
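One way to expand a positional-embedding table with an axial offset, in the spirit of the duplication scheme the paper describes, is sketched below; the dimensions, offset scale, and function name are illustrative assumptions, not the paper's code:

```python
import random

def expand_positional_embeddings(embeddings, extra=896, scale=0.02):
    """Grow a positional-embedding table from len(embeddings) rows to
    len(embeddings) + extra rows by copying the first `extra` rows and
    shifting every copy by one shared random 'axial' vector, so that
    the new positions stay close to learned ones but remain distinct."""
    dim = len(embeddings[0])
    axial = [random.gauss(0.0, scale) for _ in range(dim)]
    duplicates = [[x + a for x, a in zip(embeddings[i], axial)]
                  for i in range(extra)]
    return embeddings + duplicates

# 1024 existing positions of dimension 8 -> 1920 total positions
table = [[float(i)] * 8 for i in range(1024)]
expanded = expand_positional_embeddings(table)
print(len(expanded))  # -> 1920
```

In a real model the table would be a trainable tensor and the axial vector would be learned rather than fixed at initialization.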
In our implementation , we duplicate the first 896 of the pre-existing 1024 positional embeddings , generate a random axial vector , and add that vector to each of the 896 duplicate embeddings . This approach outperformed randomly initialized embeddings in preliminary experiments . 4 DATA . We work with four different training datasets : raw Python code used for pretraining ; commit data used for training a neural bug-creator and bug-patcher ; methods extracted from the raw code into which we insert neural bugs so as to train an even stronger bug-patcher ; and , finally , methods that pass executable tests . For this last dataset , we also obtain the list of lines executed by each test , and we obtain another bug-patching dataset by again inserting synthetic bugs and rerunning the passing tests , which allows us to finetune a bug-patcher on stack traces , error messages , and print statements . We also experiment with giving our bug-patcher models either only the focal buggy method or also a ‘ skeleton ’ of the entire file that prioritizes data such as function signatures . 4.1 PRETRAINING . We build on the DeepDev transformer platform . We reuse the 406M-parameter DeepDev Python transformer that was warmstarted from Facebook ’ s BART model and then further pretrained using a span-masking objective ( Lewis et al. , 2020 ; Clement et al. , 2020 ) . The pretraining data consists of 200,000 public Python repos filtered to have at least five stars . Pretraining took place on a DGX-2 box for three weeks . Note that the DeepDev tokenizer has appended whitespace tokens , such as the four-space and eight-space tokens , which boosts throughput and the effective context length . To minimize any risk of leakage , we consistently restrict to the same validation and test repositories , in particular to those repositories used in CodeSearchNet ( Husain et al. , 2019 ) . 4.2 COMMIT DATA .
We traverse the commit history of 100,000 Python repositories that were filtered to have at least ten stars . We further filter to all commits whose message contains the word “ fix ” , roughly one fifth of all commits . Based upon inspecting many examples , it seems that this simple filter is approximately as precise as more restrictive filters that insist on phrases like “ patch bug ” or “ fix error ” . Nevertheless , the data is still extremely noisy . This commit data serves two purposes for us . First , it allows us to train an edit model which is biased towards constructive , bug-fixing edits . We can evaluate such a model directly on bug-fixing , or finetune it on more filtered bug data . Second , we can reverse the input and output and train an edit model biased towards destructive , bug-inducing edits . We can use this model to create neural bugs to greatly augment our training data . This backtranslation approach has already proven useful elsewhere in NLP . Since we are interested in fixing buggy methods , we consider each method edited by each commit . As we are only interested in nontrivial edits , we normalize each method before and after the commit and discard the edits that do not affect the normalized code . To normalize , we strip comments , replace string and numeric literals with placeholders like ‘ STR_LIT ’ , and standardize whitespace . Finally we are left with 1.1 million nontrivially edited methods . We experiment with giving solely the focal method and its edit as input and output , as well as putting more context into the input , as described in the section on Code Skeletons .
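The normalization step just described can be approximated with a few regular expressions. This is a simplified sketch and not the paper's implementation; a robust version would use a tokenizer so that comment markers inside string literals are not mangled:

```python
import re

def normalize_method(src):
    """Rough method normalization: strip comments, replace string and
    numeric literals with placeholders, and collapse whitespace, so
    that edits touching only these elements compare as equal."""
    # docstrings / triple-quoted strings first, then single-line strings
    src = re.sub(r'"""(?:.|\n)*?"""|\'\'\'(?:.|\n)*?\'\'\'', "STR_LIT", src)
    src = re.sub(r'"[^"\n]*"|\'[^\'\n]*\'', "STR_LIT", src)
    src = re.sub(r"#[^\n]*", "", src)               # line comments
    src = re.sub(r"\b\d+(?:\.\d+)?\b", "NUM_LIT", src)  # numeric literals
    return " ".join(src.split())                    # standardize whitespace

# Two versions differing only in comments/formatting normalize identically:
before = 'def f(x):  # doc\n    return x + 10'
after = 'def f(x):\n    return x + 10  # tweaked comment'
print(normalize_method(before) == normalize_method(after))  # -> True
```

Edits whose normalized before/after forms are equal would be discarded as trivial under this scheme.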
This paper proposes an approach to fixing Python bugs using backtranslation. The idea is to employ four different types of datasets and to finetune a BART-style pretrained transformer on them. Backtranslation is used to generate synthetic bugs that augment the training data, increasing the total number of fixes found on QuixBugs by over 50%. They further evaluate their model on a benchmark of executable tests, where it fixes 68% of bugs on the first attempt without traces and 75% with traces.
SP:bfc9fdbd6152659e6d7975261a2adf9f918be84a
The authors propose several approaches to training models that automatically debug Python programs. They create a bug-generation model that produces a training set with automatically inserted bugs. These bugs are inserted into code repositories with tests to obtain stack traces of the resulting errors, which are then used to train debugging models that condition on stack traces. The authors also propose including context such as import statements, class declarations, and docstrings to improve the performance of the debugging model, and show significant improvement on the QuixBugs benchmark.
DeepDebug: Fixing Python Bugs Using Stack Traces, Backtranslation, and Code Skeletons
The joint task of bug localization and program repair is an integral part of the software development process . In this work we present DeepDebug , an approach to automated debugging using large , pretrained transformers . We begin by training a bug-creation model on reversed commit data for the purpose of generating synthetic bugs . We apply these synthetic bugs toward two ends . First , we directly train a backtranslation model on all functions from 200K repositories . Next , we focus on 10K repositories for which we can execute tests , and create buggy versions of all functions in those repositories that are covered by passing tests . This provides us with rich debugging information such as stack traces and print statements , which we use to finetune our model which was pretrained on raw source code . Finally , we strengthen all our models by expanding the context window beyond the buggy function itself , and adding a skeleton consisting of that function ’ s parent class , imports , signatures , docstrings , and method bodies , in order of priority . On the QuixBugs benchmark , we increase the total number of fixes found by over 50 % , while also decreasing the false positive rate from 35 % to 5 % and decreasing the timeout from six hours to one minute . On our own benchmark of executable tests , our model fixes 68 % of all bugs on its first attempt without using traces , and after adding traces it fixes 75 % on first attempt . 1 INTRODUCTION . The dominant paradigm in automated program repair is the generate-and-validate approach , which our work follows . In this setting we assume the existence of a suite of test functions that identify the existence of a bug . We must then localize the bug and consider candidate fixes until finding a patch that satisfies the test suite . Throughout our experiments , we work with synthetic bugs in which the error has already been localized to a single buggy method . 
We take as input the buggy function , along with additional context depending on the experiment , such as surrounding context from the function ’ s file and a stack trace that exposes the buggy function . We feed that input to our sequence-to-sequence transformer , which attempts to generate the fixed function in its entirety . In our deployment scenario , we also attempt to localize the bug using the stack trace . At present we apply a simple heuristic based on which lines of the stack trace come from the developer ’ s own code , considering the most recently called lines to be the most suspect . In future we are interested in improving our heuristic using an encoder transformer that reranks methods given the stack trace . 2 RELATED WORK . 2.1 FAULT LOCALIZATION . Curiously , the standard approach to bug localization eschews stack traces and instead uses spectrumbased fault-localization ( SBFL ) . In this approach , lines of code are ranked in order of suspiciousness based on how many failing tests execute them . One example is the DStar formula ( Wong et al. , 2014 ) . Given a statement s that is executed by failed ( s ) failing tests and passed ( s ) passing tests , it computes the suspiciousness score S ( s ) = failed ( s ) e passed ( s ) + ( totalfailed− failed ( s ) ) where e is an exponent like 2 . A notable exception to the SBFL approach to fault localization is the task of build repair , in which the localization information comes from the compiler message rather than a test suite . Gao et al . use crash traces to query Stack Overflow and produce repairing edits ( Gao et al. , 2015 ) . The DeepDelta model was trained on build data from 300 million lines of Java code from Google projects ( Mesbah et al. , 2019 ) . Using a Neural Machine Translation approach , DeepDelta fixes half of all errors resulting from mismatched method signatures or a missing symbol on its first attempt . 2.2 FALSE POSITIVES . 
A pervasive problem in the field is the fact that tests are incomplete oracles . Many tools produce patches that are merely test-suite adequate more often than they are genuine fixes . For instance , when evaluated on the QuixBugs challenge ( Lin et al. , 2017 ) , a benchmark consisting of one-line bugs in classic algorithms , a set of ten bugpatching tools found solely false positives for nine functions , while only actually finding any genuine fix for seven ( Ye et al. , 2019 ) . Only three models found more genuine fixes than false positives , in all cases exactly one more genuine fix . The neural approach CoCoNuT does somewhat better , finding thirteen genuine fixes along with seven false positives ( Lutellier et al. , 2020 ) . A survey found similarly disappointing results on the larger benchmark Defects4j ( Just et al. , 2014 ) , a collection of real-world Java bugs ( Durieux et al. , 2019 ) . In stark contrast , we observe an excellent false positive rate with our approach . One can also evaluate bug-patching models without executing tests . The Patches in the Wild benchmark frames bug-patching as a sequence-to-sequence problem as in our approach , which enables the use of NLP metrics ( Tufano et al. , 2019 ) . The authors consider an edit to be a bonafide fix only if it is exactly the same as the edit the original developer made when committing it . SequenceR ( Chen et al. , 2018 ) both narrows and expands on the Patches in the Wild dataset by focusing on one-line changes only , while also providing additional context beyond the buggy method by including other methods ’ signatures , similarly to our use of code skeletons . This extended context gives a 15 % relative boost . 2.3 SYNTHETIC DATA . For certain bug-types it is possible to generate millions of synthetic bugs to train on . Devlin et al . ( Devlin et al. , 2017 ) train an RNN on Python to fix incorrect comparison operators , the mistaken use of “ is ” vs. 
“ is not ” , variable misuse , and forgotten “ self ” accessors . Overall , they achieve 86 % accuracy on synthetic bugs and 41 % on real-life bugs . Kanade et al . ( Kanade et al. , 2019 ) pretrain a BERT-style model “ CuBERT ” on a larger Python dataset and then finetune on a related suite of synthetic bugs , achieving over 90 % accuracy . Backtranslation is a more generic form of data augmentation , in which a model is trained to translate target data to source data ( Edunov et al. , 2018 ) . In our case , we have an abundance of bug-free data mined from GitHub . We train a backtranslation model to create synthetic bugs and augment the training data for our goal task of fixing bugs . Backtranslation is also put to use for the related task of grammatical error correction ( Kiyono et al. , 2019 ) . 2.4 PRETRAINING . Various task-agnostic pretraining approaches like BERT , BART , T5 , and GPT-3 ( Devlin et al. , 2017 ; Lewis et al. , 2020 ; Raffel et al. , 2019 ; Brown et al. , 2020 ) have seen large performance gains on a diverse array of benchmarks . These models are typically pretrained using a denoising objective on a large corpus of text that has been synthetically corrupted using bidirectional or causal masking , as well as more arbitrary noising such as randomly deleting , replacing , or permuting source tokens . Pretraining has also shown large improvements for the task of program repair ( Drain et al. , 2021 ; Lu et al. , 2021 ) . 3 MODEL . We reuse the 406M-parameter sequence-to-sequence transformer with twelve encoder layers and twelve decoder layers which was pretrained as described under section 4.1 . When experimenting with stack traces , we allot 1024 tokens for the code skeleton , and up to 896 tokens for the trace . In order to accommodate this larger context we thus need to expand the transformer ’ s positional embedding matrix . To this end we use axial embeddings , as inspired by reformer ( Kitaev et al. , 2020 ) . 
In our implementation , we duplicate the first 896 of the pre-existing 1024 positional embeddings , generate a random axial vector , and add that vector to each of the 896 duplicate embeddings . This approach outperformed randomly initialized embeddings in preliminary experiments . 4 DATA . We work with four different training datasets : raw Python code used for pretraining , commit data used for training a neural bug-creator and bug-patcher , methods extracted from the raw code into which we insert neural bugs so as to train an even stronger bug-patcher , and , finally , methods that pass executable tests . For this last dataset , we also obtain the list of lines executed by each test , and we obtain another bug-patching dataset by again inserting synthetic bugs and rerunning the passing tests , which allows us to finetune a bug-patcher on stack traces , error messages , and print statements . We also experiment with giving our bug-patcher models either only the focal buggy method or also a ‘ skeleton ’ of the entire file that prioritizes data such as function signatures . 4.1 PRETRAINING . We build on the DeepDev transformer platform . We reuse the 406M-parameter DeepDev Python transformer that was warm-started from Facebook ’ s BART model and then further pretrained using a span-masking objective ( Lewis et al. , 2020 ; Clement et al. , 2020 ) . The pretraining data consists of 200,000 public Python repos filtered to have at least five stars . Pretraining took place on a DGX-2 box for three weeks . Note that the DeepDev tokenizer has appended whitespace tokens , such as the four-space and eight-space tokens , which boosts throughput and the effective context length . To minimize any risk of leakage , we consistently restrict to the same validation and test repositories , in particular to those repositories used in CodeSearchNet ( Husain et al. , 2019 ) . 4.2 COMMIT DATA .
We traverse the commit history of 100,000 Python repositories that were filtered to have at least ten stars . We further filter to all commits whose message contains the word “ fix ” , roughly one fifth of all commits . Based upon inspecting many examples , this simple filter appears to be approximately as precise as more restrictive filters that insist on phrases like “ patch bug ” or “ fix error ” . Nevertheless , the data is still extremely noisy . This commit data serves two purposes for us . First , it allows us to train an edit model which is biased towards constructive , bug-fixing edits . We can evaluate such a model directly on bug-fixing , or finetune it on more filtered bug data . Second , we can reverse the input and output and train an edit model biased towards destructive , bug-inducing edits . We can use this model to create neural bugs to greatly augment our training data . This backtranslation approach has already proven useful elsewhere in NLP . Since we are interested in fixing buggy methods , we consider each method edited by each commit . As we are only interested in nontrivial edits , we normalize each method before and after the commit and discard the edits that do not affect the normalized code . To normalize , we strip comments , replace string and numeric literals with placeholders like ‘ STR_LIT ’ , and standardize whitespace . Finally we are left with 1.1 million nontrivially edited methods . We experiment with giving solely the focal method and its edit as input and output , as well as putting more context into the input , as described in the section on Code Skeletons .
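The normalization step just described can be sketched with Python's standard `tokenize` module — a simplified illustration, not the authors' code; the numeric placeholder name is an assumption mirroring ‘ STR_LIT ’ :

```python
import io
import tokenize

def normalize_method(src):
    """Normalize Python source before comparing edits: drop comments,
    replace string and numeric literals with placeholders, and
    standardize whitespace by re-joining tokens with single spaces."""
    skip = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER}
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type in skip:
            continue
        elif tok.type == tokenize.STRING:
            out.append("STR_LIT")
        elif tok.type == tokenize.NUMBER:
            out.append("NUM_LIT")  # placeholder name assumed, by analogy to STR_LIT
        else:
            out.append(tok.string)
    return " ".join(out)

# A comment-only edit normalizes to the same string and would be
# discarded as trivial:
before = "def f(x):  # add one\n    return x + 1\n"
after = "def f(x):  # increment\n    return x + 1\n"
```

A production version would keep structural tokens such as NEWLINE and INDENT so that distinct block structures never collide; they are dropped here only to keep the sketch short.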
This manuscript describes DeepDebug, a transformer-based model that performs code repair on Python methods. Specifically, the model is pretrained on 200k Python code repositories and fine-tuned on a number of datasets consisting of bug-fix commits, augmented with synthetic bugs. The model takes Python methods with bugs as input and outputs potentially bug-free versions of the method. The main contributions of the paper, if they hold up through the questions I ask below, include:
- An innovative application of the backtranslation approach to the code repair domain.
- Setting a new state-of-the-art (SOTA) performance for code repair on the Python problems from the QuixBugs benchmark, beating the previous SOTA (e.g., CoCoNuT) not only in the percentage of bugs fixed, but also in using fewer attempts and a significantly smaller latency budget.
While these contributions are exciting and potentially push the boundary of deep learning-based code repair, there are some major issues with how the methodology is described, how the results are presented, and how the implications of the results are discussed. These critique points from me are listed in the "Main Review" section below.
SP:bfc9fdbd6152659e6d7975261a2adf9f918be84a
Noise-Contrastive Variational Information Bottleneck Networks
1 INTRODUCTION Deep neural networks ( DNNs ) have become the standard tool for challenging classification tasks , e.g . image classification or semantic segmentation , due to their excellent predictive accuracy . However , predictions of DNNs often tend to be overconfident , leading to miscalibration ( Guo et al. , 2017 ) . This problem is amplified in the presence of distributional shift in the test data , such as from image corruptions ( Ovadia et al. , 2019 ) . Multiple methods to regularize the output distribution of DNNs during training have been proposed ( Joo & Chung , 2020 ; Joo et al. , 2020 ; Pereyra et al. , 2017 ; Szegedy et al. , 2016 ) to obtain well-calibrated models . We note , however , that evaluating in-domain uncertainty quantification in terms of model calibration is not sufficient , as it does not indicate how well correct and incorrect predictions can be discerned based on the predictive uncertainty ( Ding et al. , 2020 ) . As Fig . 1 shows , methods that indiscriminately regularize the confidence to improve calibration perform significantly worse at separating correct from incorrect predictions . To address this , we turn to deep variational information bottleneck networks ( VIBN ; Alemi et al. , 2017 ) . Similarly to output regularization methods , they benefit generalization and calibration ( Alemi et al. , 2018 ) but , as we show empirically , suffer from the same separability problem . However , unlike these methods , VIBNs allow us to utilize the distribution matching of the variational approximation to define a noise-contrastive loss term , which can overcome the problem of insufficient separation of correct and incorrect predictions based on the uncertainty while retaining the benefits of the VIBN in terms of generalization and calibration . To this end , we propose a novel model , the noise-contrastive variational information bottleneck network ( NC-VIBN ) , which builds upon the VIBN to improve uncertainty estimation . 
Instead of using distribution matching as the primary source of regularization , our model utilizes it to define a loss term that explicitly encourages high predictive entropy only for uninformative samples from the latent prior . Additionally , we account for weight uncertainties in the decoder and use L2-normalization before computing the latent embeddings to further alleviate the described problems while improving the calibration and generalization capabilities of the model . We make the following contributions : ( i ) We empirically show that models that explicitly regularize the prediction confidence make it harder to distinguish between correct and incorrect predictions based on the estimated uncertainty . ( ii ) We link the VIBN to these methods and find that it suffers from the same behavior due to the implicit L2-regularization through the latent KL-divergence term . ( iii ) We circumvent these ill effects regarding separability of correct and incorrect predictions by proposing a noise-contrastive loss term that utilizes distribution matching in the latent space and , combined with architectural refinements , leads to improved separability , calibration , and accuracy . ( iv ) Our results show that our proposed model also leads to improved accuracy and calibration in the presence of distributional shift introduced by image corruptions . 2 RELATED WORK . Information Bottleneck . The information bottleneck principle was proposed by Tishby et al . ( 1999 ) as a means to analyze generalization for deep neural networks ( Tishby & Zaslavsky , 2015 ) and relies on computing the mutual information of input , output , and intermediate variables . Since these quantities are in general intractable , variational approximations of the mutual information terms have been introduced ( Alemi et al. , 2017 ; Achille & Soatto , 2018 ) .
These variational approximations are also able to overcome problems of the information bottleneck objective for deterministic representations ( Amjad & Geiger , 2019 ) and share some of the benefits of Bayesian models ( Alemi et al. , 2020 ) . Further , as shown by Alemi et al . ( 2018 ) , variational information bottleneck networks can improve calibration , which we want to further improve with our method . Noise-contrastive estimation . Noise-contrastive estimation ( Gutmann & Hyvärinen , 2010 ) is an estimation method for parameterized densities , which is based on discriminating between real and artificial data by logistic regression using the log-density functions . Inspired by noise-contrastive estimation , Hafner et al . ( 2020 ) propose noise-contrastive priors as data space priors that encourage uncertain predictions at the boundary of the training data for regression tasks by minimizing the KL-divergence between a high-variance Gaussian and the predicted output distribution for perturbed training data points . We focus on classification instead . Uncertainty estimation . Bayesian neural networks ( BNNs ) are a popular and theoretically well-founded tool for uncertainty estimation . A multitude of methods have been proposed to approximate the intractable weight posterior , including variational inference ( Blundell et al. , 2015 ; Graves , 2011 ) , Markov chain Monte Carlo methods ( Welling & Teh , 2011 ) , Laplace approximation ( MacKay , 1992 ; Ritter et al. , 2018 ) , and assumed density filtering ( Hernández-Lobato & Adams , 2015 ) , as well as approximate variational inference methods based on the dropout ( Srivastava et al. , 2014 ) regularization scheme ( Gal & Ghahramani , 2016 ) . In recent years , these methods have been successfully scaled to larger models ( Dusenberry et al. , 2020 ; Heek & Kalchbrenner , 2019 ; Maddox et al. , 2019 ; Osawa et al. , 2019 ; Zhang et al. , 2020 ) . Deep ensembles ( Lakshminarayanan et al.
, 2017 ) have also been used for uncertainty estimation and can be interpreted as Bayesian model averaging . Since model averaging with respect to the approximate posterior requires multiple forward passes , BNNs incur a substantial computational overhead . To reduce this overhead , there has been interest in Bayesian last-layer approaches ( van Amersfoort et al. , 2021 ; Kristiadi et al. , 2020 ; Liu et al. , 2020 ; Riquelme et al. , 2018 ; Snoek et al. , 2015 ; Wilson et al. , 2016 ) . Our approach similarly employs a Bayesian treatment only for the last ( few ) layers , while combining it with the information bottleneck principle and a noise-contrastive loss . An alternative approach for estimating uncertainty in classification networks is parameterizing the more expressive Dirichlet distribution ( Gast & Roth , 2018 ; Joo et al. , 2020 ; Malinin & Gales , 2018 ; Sensoy et al. , 2018 ) instead of the categorical distribution at the output layer . We instead gain additional expressiveness by modelling the latent space distributions . Related methods in out-of-distribution detection . Lee et al . ( 2018 ) proposed to train confidence-calibrated classifiers by using a generative adversarial network ( GAN ) that learns to generate samples at the data boundary . The generator is trained to generate data points that are hard for the discriminator to separate from in-distribution data while being given an almost uniform labeling by the classifier . In contrast , Sricharan & Srivastava ( 2018 ) train the generator to produce low-entropy in-distribution samples while requiring the classifier to maximize the entropy of those samples . 3 VARIATIONAL APPROXIMATION OF THE INFORMATION BOTTLENECK . We begin by recapitulating two variational approximations of the information bottleneck , the deep variational information bottleneck ( Alemi et al.
, 2017 ) and information dropout ( Achille & Soatto , 2018 ) , which we will use to explain the behavior of such models and to build our own model upon . Deep variational information bottleneck . The information bottleneck was first introduced ( Tishby et al. , 1999 ; Tishby & Zaslavsky , 2015 ) to find a low-complexity representation $Z$ depending on a feature vector $X$ that maximizes the mutual information with a target variable $Y$ . To constrain the complexity of $Z$ , the mutual information between $X$ and $Z$ is bounded , resulting in a maximization problem with inequality constraints . Alemi et al . ( 2017 ) proposed to use a variational approximation of the mutual information terms of the Lagrangian with Lagrange multiplier $\beta$ , resulting in the objective
$$\min_{\phi,\psi}\ \frac{1}{N}\sum_{n=1}^{N} \mathbb{E}_{p_\phi(z \mid x_n)}\big[ -\log q_\psi(y_n \mid z) \big] + \beta\, D_{\mathrm{KL}}\big[ p_\phi(z \mid x_n) \,\big\|\, r(z) \big] , \qquad (1)$$
where the stochastic encoder $p_\phi(z \mid x)$ and decoder $q_\psi(y \mid z)$ are modeled by neural networks , parameterized by $\phi$ and $\psi$ respectively , and $r(z)$ is a variational approximation of the marginal distribution $p(z) = \int p_\phi(z \mid x)\, p(x)\, dx$ of $z$ . To draw a parallel to the variational inference literature ( Kingma & Welling , 2014 ) , $r(z)$ is also referred to as the latent prior . Alemi et al . ( 2017 ) assume $r(z)$ to be a standard Gaussian and model the distribution of the latent encodings as Gaussians with diagonal covariance , resulting in
$$D_{\mathrm{KL}}\big[ p_\phi(z \mid x_n) \,\big\|\, r(z) \big] = \frac{1}{2} \sum_i \big[ -\log \sigma^2_{z_i \mid x_n} + \sigma^2_{z_i \mid x_n} + \mu^2_{z_i \mid x_n} - 1 \big] , \qquad (2)$$
where $\mu_{z_i \mid x_n}$ and $\sigma^2_{z_i \mid x_n}$ are the component-wise mean and variance of the latent embedding $p_\phi(z \mid x_n)$ of $x_n$ , estimated by the encoder network . The decoder is a softmax classification network $h_\psi$ , predicting class probability vectors from $z$ , i.e . $q_\psi(y \mid z) = \mathrm{Cat}(y \mid h_\psi(z))$ . Information dropout ( IDO ) .
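As a brief aside before the information dropout details : the closed-form Gaussian KL term ( 2 ) is easy to sanity-check numerically . The following is an illustrative sketch, not code from the paper :

```python
import numpy as np

def vib_kl_term(mu, log_var):
    """KL[ N(mu, diag(exp(log_var))) || N(0, I) ]: the latent KL
    term (2) of the variational information bottleneck objective."""
    var = np.exp(log_var)
    return 0.5 * np.sum(-log_var + var + mu ** 2 - 1.0)

# The term vanishes exactly when the encoding matches the standard
# Gaussian prior, and grows with the squared latent mean -- the implicit
# L2-regularization of the latent means discussed in the text.
kl_at_prior = vib_kl_term(np.zeros(8), np.zeros(8))
kl_shifted = vib_kl_term(np.full(8, 2.0), np.zeros(8))
```

Since each summand $-\log v + v - 1 \ge 0$ for $v > 0$ and $\mu^2 \ge 0$, the term is nonnegative, as a KL-divergence must be.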
Achille & Soatto ( 2018 ) proposed an alternative approach , where the encoder network $g_\phi$ predicts a non-negative vector $\mu_x$ as well as $\alpha_x$ , parameterizing the log-normal distribution $\log\mathcal{N}(0, \alpha_x^2)$ . The distribution of the latent encodings is modeled as $z \sim \mu_x \odot \epsilon$ with $\epsilon \sim \log\mathcal{N}(0, \alpha_x^2)$ . They show that if $\mu_x$ is the output of ReLU units and the latent prior $r(z)$ is chosen to be a mixture of the delta distribution at $0$ and a log-uniform distribution , i.e . $r(z) \propto q\,\delta_0(z) + c/z$ , the KL-divergence is given as
$$D_{\mathrm{KL}}\big[ p_\phi(z \mid x) \,\big\|\, r(z) \big] = \begin{cases} -\log q & \mu_x = 0 \\ -H\big[ p_{\alpha_x}(\log \epsilon) \big] + \log c & \mu_x > 0 , \end{cases} \qquad (3)$$
where the entropy term $H[ p_{\alpha_x}(\log \epsilon) ]$ is given by $\log \alpha_x$ for $\epsilon \sim \log\mathcal{N}(0, \alpha_x^2)$ up to an additive constant . In the original formulation of Achille & Soatto ( 2018 ) , the mean of $\epsilon$ grows with $\alpha_x$ , resulting in a higher level of saturation of the softmax outputs , hence in overconfidence . Note that if $\epsilon$ is log-normal distributed , $\log \epsilon$ is normal distributed and the entropy does not depend on its mean . Therefore , we here instead employ the mean-corrected log-normal distribution $\log\mathcal{N}(-\frac{1}{2}\alpha_x^2, \alpha_x^2)$ so that $\mathbb{E}_{p_{\alpha_x}(\epsilon)}[ \mu_x \odot \epsilon ] = \mu_x$ without changes to the KL-divergence . 4 UNCERTAINTY QUANTIFICATION UNDER OUTPUT DISTRIBUTION REGULARIZATION . A frequently used metric to assess uncertainty estimation is the expected calibration error ( Guo et al. , 2017 ) or related calibration metrics , which measure how well the prediction confidence coincides with the prediction accuracy . Methods that achieve better calibration by output distribution regularization include label smoothing ( Müller et al. , 2019 ; Szegedy et al. , 2016 ) , regularization of the predictive entropy ( Pereyra et al. , 2017 ) for the categorical distribution , or evidential deep learning ( Sensoy et al. , 2018 ) and belief matching ( Joo et al. , 2020 ) for the Dirichlet distribution .
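The expected calibration error mentioned above can be sketched in a few lines — a simplified equal-width-binning illustration, not the paper's evaluation code:

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=10):
    """Equal-width-binned ECE: weighted average gap between the mean
    confidence and the accuracy inside each confidence bin."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidence[mask].mean())
            ece += mask.mean() * gap
    return ece

# A model that is 70% confident everywhere and 70% accurate is perfectly
# calibrated, even though its confidence says nothing about WHICH
# predictions are wrong -- the separability issue raised in the text.
conf = np.full(10, 0.7)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])
```

The same predictions with uniform confidence 0.9 would be miscalibrated by 0.2, which ECE detects, while neither model separates correct from incorrect predictions.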
Alternatively , the model can be regularized in function or distribution spaces prior to the final softmax layer ( Joo et al. , 2020 ) , which in the simplest case results in norm penalties for the predicted logits . However , these calibration metrics actually do not indicate how well correct predictions can be separated from incorrect ones based on some uncertainty estimate like the predictive entropy . In fact , a model predicting with a confidence equal to the expected accuracy on in-domain data for both correct classifications and misclassifications would be perfectly calibrated according to the expected calibration error , but the confidence of a prediction would give us no information about an example being correctly or incorrectly classified ( Ding et al. , 2020 ) . A suitable method to examine how easily correct and incorrect predictions can be discriminated is to analyze how the predictive performance behaves under the rejection of uncertain predictions ( Ding et al. , 2020 ; Nadeem et al. , 2009 ) . As can be seen in Fig . 1 , models that are trained with an objective that regularizes the output distribution perform significantly worse than the deterministic model at assigning incorrect predictions a comparatively higher uncertainty and rejecting them early . We argue that this is because these methods indiscriminately regularize the confidence of correct and incorrect predictions . More experimental evidence is presented in Sec . 6.1 .
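The rejection analysis just described can be sketched as follows — an illustration of the evaluation idea, not the paper's code:

```python
import numpy as np

def accuracy_under_rejection(confidence, correct, reject_fractions):
    """Accuracy on the retained predictions after rejecting the
    least-confident fraction of examples."""
    order = np.argsort(-np.asarray(confidence, dtype=float))  # most confident first
    correct = np.asarray(correct, dtype=float)[order]
    accs = []
    for frac in reject_fractions:
        keep = max(1, int(round(len(correct) * (1.0 - frac))))
        accs.append(correct[:keep].mean())
    return accs

# With informative uncertainty the curve rises as uncertain predictions
# are rejected; a model whose confidence is unrelated to correctness
# stays flat in expectation.
conf = np.array([0.95, 0.9, 0.85, 0.8, 0.6, 0.55])
corr = np.array([1, 1, 1, 1, 0, 0])
curve = accuracy_under_rejection(conf, corr, [0.0, 1 / 3, 2 / 3])
```

Here the two incorrect predictions also carry the lowest confidence, so rejecting one third of the examples already lifts the retained accuracy to 100%.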
This work suggests a *noise-contrastive loss* for variational information bottleneck networks to address the poor performance of regularization methods at separating correct and incorrect predictions. Standard regularization methods suffer from *separability problems*: models that indiscriminately regularize confidence to improve calibration perform much worse at differentiating correct from wrong predictions. The authors address this problem in the variational information bottleneck, suggesting a remedy that utilizes the distribution matching in the latent space. The authors have applied the proposed loss term to the two variational approximations of the information bottleneck, the deep variational information bottleneck (VIBN, Alemi et al., 2017) and information dropout (IDO, Achille & Soatto, 2018).
SP:8e2015afabb59d791d51809a003407a5142aeefd
This paper attempts to improve uncertainty estimates for neural network-based classification. To do so, the authors combine various techniques: the information bottleneck, a variational distribution over neural net weights, a noise-contrastive loss term, and an L2-normalisation step for the last part of the encoder. They present experimental results justifying these choices (including, notably, significantly improved NLL compared to the baselines).
Instead of using distribution matching as the primary source of regularization , our model utilizes it to define a loss term that explicitly encourages high predictive entropy only for uninformative samples from the latent prior . Additionally , we account for weight uncertainties in the decoder and use L2-normalization before computing the latent embeddings to further alleviate the described problems while improving the calibration and generalization capabilities of the model . We make the following contributions : ( i ) We empirically show that models that explicitly regularize the prediction confidence make it harder to distinguish between correct and incorrect predictions based on the estimated uncertainty . ( ii ) We link the VIBN to these methods and find that it suffers from the same behavior due to the implicit L2-regularization through the latent KL-divergence term . ( iii ) We circumvent these ill effects regarding separability of correct and incorrect predictions by proposing a noise-contrastive loss term that utilizes distribution matching in the latent space and , combined with architectural refinements , leads to improved separability , calibration , and accuracy . ( iv ) Our results show that our proposed model also leads to improved accuracy and calibration in the presence of distributional shift introduced by image corruptions . 2 RELATED WORK . Information Bottleneck . The information bottleneck principle has been proposed by Tishby et al . ( 1999 ) as means to analyze generalization for deep neural networks ( Tishby & Zaslavsky , 2015 ) and relies on computing the mutual information of input , output , and intermediate variables . Since these quantities are in general intractable , variational approximations of the mutual information terms have been introduced ( Alemi et al. , 2017 ; Achille & Soatto , 2018 ) . 
These variational approximations are also able to overcome problems of the information bottleneck objective for deterministic representations ( Amjad & Geiger , 2019 ) and share some of the benefits of Bayesian models ( Alemi et al. , 2020 ) . Further , as shown by Alemi et al . ( 2018 ) , variational information bottleneck networks can improve calibration , which we want to further improve with our method . Noise-contrastive estimation . Noise-contrastive estimation ( Gutmann & Hyvärinen , 2010 ) is an estimation method for parameterized densities , which is based on discriminating between real and artificial data by logistic regression using the log-density functions . Inspired by noise-contrastive estimation , Hafner et al . ( 2020 ) propose noise-contrastive priors as data space priors that encourage uncertain predictions at the boundary of the training data for regression tasks by minimizing the KL-divergence between a high-variance Gaussian and the predicted output distribution for perturbed training data points . We focus on classification instead . Uncertainty estimation . Bayesian neural networks ( BNNs ) are a popular and theoretically wellfounded tool for uncertainty estimation . A multitude of methods have been proposed to approximate the intractable weight posterior , including variational inference ( Blundell et al. , 2015 ; Graves , 2011 ) , Markov chain Monte Carlo methods ( Welling & Teh , 2011 ) , Laplace approximation ( MacKay , 1992 ; Ritter et al. , 2018 ) , and assumed density filtering ( Hernández-Lobato & Adams , 2015 ) , as well as approximate variational inference methods based on the dropout ( Srivastava et al. , 2014 ) regularization scheme ( Gal & Ghahramani , 2016 ) . In the last years , these methods have been successfully scaled to larger models ( Dusenberry et al. , 2020 ; Heek & Kalchbrenner , 2019 ; Maddox et al. , 2019 ; Osawa et al. , 2019 ; Zhang et al. , 2020 ) . Deep ensembles ( Lakshminarayanan et al. 
, 2017 ) have also been used for uncertainty estimation and can be interpreted as Bayesian model averaging . Since model averaging with respect to the approximate posterior requires multiple forward passes , BNNs incur a substantial computational overhead . To lighten this , there has been an interest in Bayesian last layer approaches ( van Amersfoort et al. , 2021 ; Kristiadi et al. , 2020 ; Liu et al. , 2020 ; Riquelme et al. , 2018 ; Snoek et al. , 2015 ; Wilson et al. , 2016 ) . Our approach similarly employs a Bayesian treatment only for the last ( few ) layers , however combining it with the information bottleneck principle and a noise-contrastive loss . An alternative approach for estimating uncertainty in classification networks is parameterizing the more expressive Dirichlet distribution ( Gast & Roth , 2018 ; Joo et al. , 2020 ; Malinin & Gales , 2018 ; Sensoy et al. , 2018 ) instead of the categorical distribution at the output layer . We instead gain additional expressiveness by modelling the latent space distributions . Related methods in out-of-distribution detection . Lee et al . ( 2018 ) proposed to train confidencecalibrated classifiers by using a generative adversarial network ( GAN ) that learns to generate samples at the data boundary . The generator is trained to generate data points , which are hard to separate from in-distribution data by the discriminator while given an almost uniform labeling by the classifier . In contrast , Sricharan & Srivastava ( 2018 ) train the generator to produce low-entropy in-distribution samples while requiring the classifier to maximize the entropy of those samples . 3 VARIATIONAL APPROXIMATION OF THE INFORMATION BOTTLENECK . We begin by recapitulating two variational approximations of the information bottleneck , the deep variational information bottleneck ( Alemi et al. 
, 2017) and information dropout (Achille & Soatto, 2018), which we will use to explain the behavior of such models and to build our own model upon. Deep variational information bottleneck. The information bottleneck was first introduced (Tishby et al., 1999; Tishby & Zaslavsky, 2015) to find a low-complexity representation Z depending on a feature vector X that maximizes the mutual information with a target variable Y. To constrain the complexity of Z, the mutual information between X and Z is bounded, resulting in a maximization problem with inequality constraints. Alemi et al. (2017) proposed to use a variational approximation of the mutual information terms of the Lagrangian with Lagrange multiplier $\beta$, resulting in the objective
$$\min_{\phi,\psi} \ \frac{1}{N}\sum_{n=1}^{N} \mathbb{E}_{p_\phi(z|x_n)}\big[-\log q_\psi(y_n|z)\big] + \beta\, D_{\mathrm{KL}}\big[p_\phi(z|x_n)\,\big\|\,r(z)\big], \qquad (1)$$
where the stochastic encoder $p_\phi(z|x)$ and decoder $q_\psi(y|z)$ are modeled by neural networks, parameterized by $\phi$ and $\psi$ respectively, and $r(z)$ is a variational approximation of the marginal distribution $p(z) = \int p_\phi(z|x)\,p(x)\,dx$ of $z$. To draw a parallel to the variational inference literature (Kingma & Welling, 2014), $r(z)$ is also referred to as the latent prior. Alemi et al. (2017) assume $r(z)$ to be a standard Gaussian and model the distributions of the latent encodings as Gaussians with diagonal covariance, resulting in
$$D_{\mathrm{KL}}\big[p_\phi(z|x_n)\,\big\|\,r(z)\big] = \frac{1}{2}\sum_i \big[-\log\sigma^2_{z_i|x_n} + \sigma^2_{z_i|x_n} + \mu^2_{z_i|x_n} - 1\big], \qquad (2)$$
where $\mu_{z_i|x_n}$ and $\sigma^2_{z_i|x_n}$ are the component-wise mean and variance of the latent embedding $p_\phi(z|x_n)$ of $x_n$, estimated by the encoder network. The decoder is a softmax classification network $h_\psi$, predicting class probability vectors from $z$, i.e. $q_\psi(y|z) = \mathrm{Cat}(y\,|\,h_\psi(z))$. Information dropout (IDO).
Achille & Soatto (2018) proposed an alternative approach, where the encoder network $g_\phi$ predicts a non-negative vector $\mu_x$ as well as $\alpha_x$, parameterizing the log-normal distribution $\log\mathcal{N}(0, \alpha_x^2)$. The distribution of the latent encodings is modeled as $z \sim \mu_x \odot \epsilon$ with $\epsilon \sim \log\mathcal{N}(0, \alpha_x^2)$. They show that if $\mu_x$ is the output of ReLU units and the latent prior $r(z)$ is chosen to be a mixture of the delta distribution at $0$ and a log-uniform distribution, i.e. $r(z) \propto q\,\delta_0(z) + c/z$, the KL-divergence is given as
$$D_{\mathrm{KL}}\big[p_\phi(z|x)\,\big\|\,r(z)\big] = \begin{cases} -\log q, & \mu_x = 0 \\ -H[p_{\alpha_x}(\log\epsilon)] + \log c, & \mu_x > 0, \end{cases} \qquad (3)$$
where the entropy term $H[p_{\alpha_x}(\log\epsilon)]$ is given by $\log\alpha_x$ for $\epsilon \sim \log\mathcal{N}(0, \alpha_x^2)$, up to an additive constant. In the original formulation of Achille & Soatto (2018), the mean of $\epsilon$ grows with $\alpha_x$, resulting in a higher level of saturation of the softmax outputs and hence in overconfidence. Note that if $\epsilon$ is log-normally distributed, $\log\epsilon$ is normally distributed and its entropy does not depend on the mean. Therefore, we instead employ the mean-corrected log-normal distribution $\log\mathcal{N}(-\frac{1}{2}\alpha_x^2, \alpha_x^2)$, so that $\mathbb{E}_{p_{\alpha_x}(\epsilon)}[\mu_x \odot \epsilon] = \mu_x$ without changes to the KL-divergence. 4 UNCERTAINTY QUANTIFICATION UNDER OUTPUT DISTRIBUTION REGULARIZATION. A frequently used metric to assess uncertainty estimation is the expected calibration error (Guo et al., 2017) or related calibration metrics, which measure how well the prediction confidence coincides with the prediction accuracy. Methods that achieve better calibration by output distribution regularization include label smoothing (Müller et al., 2019; Szegedy et al., 2016) and regularization of the predictive entropy (Pereyra et al., 2017) for the categorical distribution, as well as evidential deep learning (Sensoy et al., 2018) and belief matching (Joo et al., 2020) for the Dirichlet distribution.
Alternatively, the model can be regularized in function or distribution space prior to the final softmax layer (Joo et al., 2020), which in the simplest case results in norm penalties on the predicted logits. However, these calibration metrics do not indicate how well correct predictions can be separated from incorrect ones based on some uncertainty estimate such as the predictive entropy. In fact, a model predicting with a confidence equal to the expected accuracy on in-domain data for both correct classifications and misclassifications would be perfectly calibrated according to the expected calibration error, yet the confidence of a prediction would give us no information about whether an example is correctly or incorrectly classified (Ding et al., 2020). A suitable method to examine how easily correct and incorrect predictions can be discriminated is to analyze how the predictive performance behaves under the rejection of uncertain predictions (Ding et al., 2020; Nadeem et al., 2009). As can be seen in Fig. 1, models that are trained with an objective that regularizes the output distribution perform significantly worse than the deterministic model at assigning incorrect predictions a comparatively higher uncertainty and rejecting them early. We argue that this is because these methods indiscriminately regularize the confidence of correct and incorrect predictions. More experimental evidence is presented in Sec. 6.1.
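The rejection analysis used in Fig. 1 can be sketched in a few lines. This is our own minimal version, with the predictive entropy as the uncertainty score and hypothetical toy predictions; it illustrates why a model whose errors carry comparatively higher uncertainty is rewarded even if its calibration is mediocre:

```python
import numpy as np

def accuracy_vs_rejection(probs, labels, reject_fracs):
    # Uncertainty = predictive entropy; reject the most uncertain fraction
    # of examples and report accuracy on the retained ones.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    order = np.argsort(entropy)                      # most certain first
    correct = (probs.argmax(axis=1) == labels)[order]
    accs = []
    for frac in reject_fracs:
        keep = max(1, int(round((1.0 - frac) * len(correct))))
        accs.append(correct[:keep].mean())
    return np.array(accs)

# Toy predictions: the two misclassified examples are also the most uncertain,
# so accuracy rises quickly as uncertain predictions are rejected.
probs = np.array([[0.99, 0.01], [0.95, 0.05], [0.55, 0.45], [0.52, 0.48]])
labels = np.array([0, 0, 1, 1])                      # last two are misclassified
curve = accuracy_vs_rejection(probs, labels, [0.0, 0.5])
```

A perfectly calibrated model whose correct and incorrect predictions share the same confidence would instead produce a flat curve under this analysis.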
This paper first empirically observes that output regularization methods for reducing model overconfidence improve calibration but make correct and incorrect predictions indistinguishable based on predictive uncertainty. To tackle this issue, it proposes a noise-contrastive loss and some architectural refinements based on the variational information bottleneck. Empirical results demonstrate both improved accuracy and calibration over output regularization methods.
SP:8e2015afabb59d791d51809a003407a5142aeefd
Understanding approximate and unrolled dictionary learning for pattern recovery
Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals. Alternating minimization (AM) is the standard approach to the underlying optimization, where gradient descent steps alternate with sparse coding procedures. The major drawback of this method is its prohibitive computational cost, making it impractical on large real-world data sets. This work studies an approximate formulation of dictionary learning based on unrolling and compares it to alternating minimization to find the best trade-off between speed and precision. We analyze the asymptotic behavior and convergence rate of gradient estimates in both methods. We show that unrolling performs better on the support of the inner problem solution and during the first iterations. Finally, we apply unrolling to pattern learning in magnetoencephalography (MEG) with the help of a stochastic algorithm and compare the performance to a state-of-the-art method. 1 INTRODUCTION. Pattern learning provides insightful information on the data in various biomedical applications. Typical examples include the study of magnetoencephalography (MEG) recordings, where one aims to analyze the electrical activity in the brain from measurements of the magnetic field around the scalp of the patient (Dupré la Tour et al., 2018). One may also mention the study of neural oscillations in the local field potential (Cole & Voytek, 2017) or QRS complex detection in electrocardiograms (Xiang et al., 2018), among others. Dictionary learning (Olshausen & Field, 1997; Aharon et al., 2006; Mairal et al., 2009) is particularly efficient on pattern learning tasks, such as blood cell detection (Yellin et al., 2017) and MEG signal analysis (Dupré la Tour et al., 2018). This framework assumes that the signal can be decomposed into a sparse representation in a redundant basis of patterns – also called atoms.
In other words, the goal is to recover a sparse code $Z \in \mathbb{R}^{n\times T}$ and a dictionary $D \in \mathbb{R}^{m\times n}$ from noisy measurements $Y \in \mathbb{R}^{m\times T}$, obtained as the linear transformation $DZ$ corrupted with noise $B \in \mathbb{R}^{m\times T}$: $Y = DZ + B$. Theoretical elements on identifiability and local convergence have been proven in several studies (Gribonval et al., 2015; Haeffele & Vidal, 2015; Agarwal et al., 2016; Sun et al., 2016). Sparsity-based optimization problems related to dictionary learning generally rely on the $\ell_0$ or $\ell_1$ regularizations. In this paper, we study Lasso-based (Tibshirani, 1996) dictionary learning, where the dictionary $D$ is learned in a constraint set $\mathcal{C}$ by solving
$$\min_{Z\in\mathbb{R}^{n\times T},\,D\in\mathcal{C}} F(Z, D) \triangleq \frac{1}{2}\,\|DZ - Y\|_2^2 + \lambda\,\|Z\|_1. \qquad (1)$$
Dictionary learning can be written as a bi-level optimization problem to minimize the cost function with respect to the dictionary only, as mentioned in Mairal et al. (2009),
$$\min_{D\in\mathcal{C}} G(D) \triangleq F(Z^*(D), D) \quad \text{with} \quad Z^*(D) = \arg\min_{Z\in\mathbb{R}^{n\times T}} F(Z, D). \qquad (2)$$
Computing the data representation $Z^*(D)$ is often referred to as the inner problem, while the global minimization is the outer problem. Classical constraint sets include the unit norm, where each atom is normalized to avoid scale-invariance issues, and normalized convolutional kernels to perform Convolutional Dictionary Learning (Grosse et al., 2007). Classical dictionary learning methods solve this bi-convex optimization problem through Alternating Minimization (AM) (Mairal et al., 2009). It consists of minimizing the cost function $F$ over $Z$ with a fixed dictionary $D$ and then performing projected gradient descent to optimize the dictionary with a fixed $Z$. While AM provides a simple strategy to perform dictionary learning, it can be inefficient on large-scale data sets due to the need to solve the inner problems precisely for all samples. In recent years, many studies have focused on algorithm unrolling (Tolooshams et al.
, 2020; Scetbon et al., 2021) to overcome this issue. The core idea consists of unrolling the algorithm that solves the inner problem and then computing the gradient with respect to the dictionary by back-propagation through the iterates of this algorithm. Gregor & LeCun (2010) popularized this method and first proposed to unroll ISTA (Daubechies et al., 2004) – a proximal gradient descent algorithm designed for the Lasso – to speed up the computation of $Z^*(D)$. The $(N+1)$-th layer of this network – called LISTA – is obtained as $Z_{N+1} = \mathrm{ST}_{\frac{\lambda}{L}}(W^1 Y + W^2 Z_N)$, with $\mathrm{ST}$ being the soft-thresholding operator. This work has led to many contributions aiming at improving this method and providing theoretical justifications in a supervised (Chen et al., 2018; Liu & Chen, 2019) or unsupervised (Moreau & Bruna, 2017; Ablin et al., 2019) setting. For such unrolled algorithms, the weights $W^1$ and $W^2$ can be re-parameterized as functions of $D$ – as illustrated in Figure A in the appendix – such that the output $Z_N(D)$ matches the result of $N$ iterations of ISTA, i.e.
$$W^1_D = \frac{1}{L}\, D^\top \quad \text{and} \quad W^2_D = I - \frac{1}{L}\, D^\top D, \quad \text{where } L = \|D\|_2^2. \qquad (3)$$
Then, the dictionary can be learned by minimizing the loss $F(Z_N(D), D)$ over $D$ with back-propagation. This approach is generally referred to as Deep Dictionary Learning (DDL). DDL and variants with different kinds of regularization (Tolooshams et al., 2020; Lecouat et al., 2020; Scetbon et al., 2021), image processing based on metric learning (Tang et al., 2020), and classification tasks with scattering (Zarka et al., 2019) have been proposed in the literature, among others. While these techniques have achieved good performance levels on several signal processing tasks, the reasons why they speed up the learning process are still unclear. In this work, we study unrolling in Lasso-based dictionary learning as an approximate bi-level optimization problem.
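The re-parameterization in Eq. (3) can be made concrete: with $W^1 = D^\top/L$ and $W^2 = I - D^\top D/L$, an $N$-layer network of soft-thresholding units reproduces exactly $N$ iterations of ISTA. Below is a minimal NumPy sketch of our own of the forward pass only (in the DDL setting, an autodiff framework would then back-propagate $F(Z_N(D), D)$ through these layers to update $D$):

```python
import numpy as np

def unrolled_ista(Y, D, lam, n_layers):
    # N-layer LISTA-style network with weights tied to D as in Eq. (3):
    # W1 = D^T / L and W2 = I - D^T D / L, so the forward pass reproduces
    # exactly N iterations of ISTA, Z_{k+1} = ST_{lam/L}(W1 Y + W2 Z_k).
    L = np.linalg.norm(D, ord=2) ** 2
    W1 = D.T / L
    W2 = np.eye(D.shape[1]) - D.T @ D / L
    Z = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_layers):
        V = W1 @ Y + W2 @ Z
        Z = np.sign(V) * np.maximum(np.abs(V) - lam / L, 0.0)  # soft-thresholding
    return Z
```

Since the layers are tied to $D$, the network has no free parameters beyond the dictionary itself, which is what allows the unsupervised, Lasso-based training discussed in this work.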
What makes this work different from Bertrand et al. (2020), Ablin et al. (2020) and Tolooshams & Ba (2021) is that we study the instability of non-smooth bi-level optimization and of unrolled sparse coding outside of the support, which is of major interest in practice with a small number of layers. In Section 2, we analyze the convergence of the Jacobian computed with automatic differentiation and find that its stability is guaranteed only on the support of the sparse codes. De facto, numerical instabilities in its estimation make unrolling inefficient after a few dozen iterations. In Section 3, we empirically show that unrolling leads to better results than AM only with a small number of sparse coding iterations, making it possible to learn a good dictionary in this setting. Then we adapt a stochastic approach to make this method usable on large data sets, and we apply it to pattern learning in magnetoencephalography (MEG) in Section 4. We do so by adapting unrolling to rank-one convolutional dictionary learning on multivariate time series (Dupré la Tour et al., 2018). We show that there is no need to unroll more than a few dozen iterations to obtain satisfying results, leading to a significant gain of time compared to a state-of-the-art algorithm. 2 BI-LEVEL OPTIMIZATION FOR APPROXIMATE DICTIONARY LEARNING. As $Z^*(D)$ does not have a closed-form expression, $G$ cannot be computed directly. A solution is to replace the inner problem solution $Z^*(D)$ by an approximation $Z_N(D)$ obtained through $N$ iterations of a numerical optimization algorithm or its unrolled version. This reduces the problem to minimizing $G_N(D) \triangleq F(Z_N(D), D)$. The first question is how sub-optimal the global solutions of $G_N$ are compared to the ones of $G$. Proposition 2.1 shows that the global minima of $G_N$ converge as fast as the numerical approximation $Z_N$ in function value.
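To make the bi-level structure concrete, here is a toy NumPy sketch of our own of the classical AM baseline, where the inner problem is approximated by $N$ iterations of ISTA and the outer step is a projected gradient step onto unit-norm atoms. The step size, dimensions, and iteration counts are arbitrary illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Y, D, lam, n_iter=200):
    # Approximate the inner problem: min_Z 0.5 * ||D Z - Y||_2^2 + lam * ||Z||_1
    L = np.linalg.norm(D, ord=2) ** 2
    Z = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        Z = soft_threshold(Z - D.T @ (D @ Z - Y) / L, lam / L)
    return Z

def alternating_minimization(Y, n_atoms, lam, n_outer=20, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms (set C)
    for _ in range(n_outer):
        Z = ista(Y, D, lam)                               # sparse coding, D fixed
        grad_D = (D @ Z - Y) @ Z.T / Y.shape[1]           # gradient in D, Z fixed
        D = D - lr * grad_D
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # project back onto C
    return D, Z

# Toy data generated as Y = D0 Z0 + noise
rng = np.random.default_rng(1)
D0 = rng.standard_normal((10, 5))
D0 /= np.linalg.norm(D0, axis=0)
Z0 = rng.standard_normal((5, 50)) * (rng.random((5, 50)) < 0.3)
Y = D0 @ Z0 + 0.01 * rng.standard_normal((10, 50))
D_hat, Z_hat = alternating_minimization(Y, n_atoms=5, lam=0.1)
```

The cost highlighted in the text is visible here: every outer step pays for a full inner ISTA solve on all samples, which is what unrolling with a small $N$ tries to avoid.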
Proposition 2.1. Let $D^* = \arg\min_{D\in\mathcal{C}} G(D)$ and $D^*_N = \arg\min_{D\in\mathcal{C}} G_N(D)$, where $N$ is the number of unrolled iterations. We denote by $K(D^*)$ a constant depending on $D^*$, and by $C(N)$ the convergence speed of the algorithm which approximates the inner problem solution. We have
$$G_N(D^*_N) - G(D^*) \le K(D^*)\, C(N).$$
The proofs of all theoretical results are deferred to Appendix C. Proposition 2.1 implies that when $Z_N$ is computed with FISTA (Beck & Teboulle, 2009), the function value at the global minima of $G_N$ converges with speed $C(N) = \frac{1}{N^2}$ towards the value of the global minima of $F$. Therefore, solving the inner problem approximately leads to suitable solutions for Equation 2, given that the optimization procedure is efficient enough to find a proper minimum of $G_N$. As the computational cost of $Z_N$ increases with $N$, the choice of $N$ results in a trade-off between the precision of the solution and the computational efficiency, which is critical for processing large data sets. Moreover, learning the dictionary and computing the sparse codes are two different tasks. The loss $G_N$ takes into account the dictionary and the corresponding approximation $Z_N(D)$ to evaluate the quality of the solution. However, the dictionary evaluation should reflect its ability to generate the same signals as the ground truth data, and not consider an approximate sparse code that can be recomputed afterward. Therefore, we should distinguish the ability of the algorithm to recover a good dictionary from its ability to learn the dictionary and the sparse codes at the same time. In this work, we use the metric proposed in Moreau & Gramfort (2020) for convolutions to evaluate the quality of the dictionary. We compare the atoms using their correlation and denote by $C$ the cost matrix whose entry $(i, j)$ compares atom $i$ of the first dictionary and atom $j$ of the second.
We define a sign- and permutation-invariant metric
$$S(C) = \max_{\sigma\in\mathfrak{S}_n} \frac{1}{n}\sum_{i=1}^{n} |C_{\sigma(i), i}|,$$
where $\mathfrak{S}_n$ is the group of permutations of $[1, n]$. This metric corresponds to the best linear sum assignment on the cost matrix $C$, and it can be computed with the Hungarian algorithm. Note that doing so has several limitations and that evaluating the dictionary is still an open problem. Without loss of generality, let $T = 1$ and thus $z \in \mathbb{R}^n$ in the rest of this section. Gradient estimation in dictionary learning. Approximate dictionary learning is a non-convex problem, meaning that good or poor local minima of $G_N$ may be reached depending on the initialization, the optimization path, and the structure of the problem. Therefore, a gradient descent on $G_N$ has no guarantee to find an adequate minimizer of $G$. While a complete theoretical analysis of these problems is arduous, we propose to study the correlation between the gradient obtained with $G_N$ and the actual gradient of $G$, as a way to ensure that the optimization dynamics are similar. Once $z^*(D)$ is known, Danskin (1967, Thm. 1) states that $g^*(D) = \nabla G(D)$ is equal to $\nabla_2 F(z^*(D), D)$, where $\nabla_2$ indicates that the gradient is computed relative to the second variable in $F$. Even though the inner problem is non-smooth, this result holds as long as the solution $z^*(D)$ is unique. In the following, we will assume that $D^\top D$ is invertible on the support of $z^*(D)$, which implies the uniqueness of $z^*(D)$. This occurs with probability one if $D$ is sampled from a continuous distribution (Tibshirani, 2013). AM and DDL differ in how they estimate the gradient of $G$. AM relies on the analytical formula of $g^*$ and uses an approximation $z_N$ of $z^*$, leading to the approximate gradient $g^1_N(D) = \nabla_2 F(z_N(D), D)$. We evaluate how well $g^1_N$ approximates $g^*$ in Proposition 2.2. Proposition 2.2. Let $D \in \mathbb{R}^{m\times n}$.
Then, there exists a constant $L_1 > 0$ such that for every number of iterations $N$,
$$\|g^1_N - g^*\| \le L_1\, \|z_N(D) - z^*(D)\|.$$
Proposition 2.2 shows that $g^1_N$ converges as fast as the iterates of ISTA converge. DDL computes the gradient automatically through $z_N(D)$. As opposed to AM, this directly minimizes the loss $G_N(D)$. Automatic differentiation yields a sub-gradient $g^2_N(D)$ such that
$$g^2_N(D) \in \nabla_2 F(z_N(D), D) + J^+_N\big(\partial_1 F(z_N(D), D)\big), \qquad (4)$$
where $J_N : \mathbb{R}^{m\times n} \to \mathbb{R}^n$ is the weak Jacobian of $z_N(D)$ with respect to $D$ and $J^+_N$ denotes its adjoint. The product between $J^+_N$ and $\partial_1 F(z_N(D), D)$ is computed via automatic differentiation. Proposition 2.3. Let $D \in \mathbb{R}^{m\times n}$. Let $S^*$ be the support of $z^*(D)$, $S_N$ be the support of $z_N$, and $\tilde{S}_N = S_N \cup S^*$. Let $f(z, D) = \frac{1}{2}\|Dz - y\|_2^2$ be the data-fitting term in $F$. Let $R(J, \tilde{S}) = J^+\big(\nabla^2_{1,1} f(z^*, D)\, \mathbb{1}_{\tilde{S}}\big) + \nabla^2_{2,1} f(z^*, D)\, \mathbb{1}_{\tilde{S}}$. Then there exist a constant $L_2 > 0$ and a subsequence of (F)ISTA iterates $z_{\varphi(N)}$ such that for all $N \in \mathbb{N}$:
$$\exists\, g^2_{\varphi(N)} \in \nabla_2 f(z_{\varphi(N)}, D) + J^+_{\varphi(N)}\big(\nabla_1 f(z_{\varphi(N)}, D) + \lambda\, \partial\|\cdot\|_1(z_{\varphi(N)})\big) \ \text{s.t.}$$
$$\big\|g^2_{\varphi(N)} - g^*\big\| \le \big\|R\big(J_{\varphi(N)}, \tilde{S}_{\varphi(N)}\big)\big\|\, \big\|z_{\varphi(N)} - z^*\big\| + \frac{L_2}{2}\, \big\|z_{\varphi(N)} - z^*\big\|^2.$$
This sub-sequence $z_{\varphi(N)}$ corresponds to iterates on the support of $z^*$. Proposition 2.3 shows that $g^2_N$ may converge faster than $g^1_N$ once the support is reached. Ablin et al. (2020) and Tolooshams & Ba (2021) have studied the behavior of strongly convex functions, as is the case on the support, and found similar results. This allowed Tolooshams & Ba (2021) to focus on support identification and show that automatic differentiation leads to a better gradient estimation in dictionary learning on the support, under minor assumptions. However, we are also interested in characterizing the behavior outside of the support, where the gradient estimation is difficult because of the sub-differential.
In practice, automatic differentiation uses the sign operator as a sub-gradient of $\|\cdot\|_1$. The convergence behavior of $g^2_N$ is also driven by $R(J_N, \tilde{S}_N)$ and thus by the weak Jacobian computed via back-propagation. We first compute a closed-form expression of the weak Jacobian of $z^*(D)$ and $z_N(D)$. We then show that $R(J_N, \tilde{S}_N) \le L\,\|J_N - J^*\|$ and we analyze the convergence of $J_N$ towards $J^*$. Study of the Jacobian. The computation of the Jacobian can be done by differentiating through ISTA. In Theorem 2.4, we show that $J_{N+1}$ depends on $J_N$ and the past iterate $z_N$, and converges towards a fixed point. This formula can be used to compute the Jacobian during the forward pass, avoiding the computational cost of back-propagation and saving memory. Theorem 2.4. At iteration $N+1$ of ISTA, the weak Jacobian of $z_{N+1}$ relative to $D_l$, where $D_l$ is the $l$-th row of $D$, is given by induction:
$$\frac{\partial z_{N+1}}{\partial D_l} = \mathbb{1}_{|z_{N+1}| > 0}\left(\frac{\partial z_N}{\partial D_l} - \frac{1}{L}\Big(D_l\, z_N^\top + (D_l^\top z_N - y_l)\, I_n + D^\top D\, \frac{\partial z_N}{\partial D_l}\Big)\right).$$
$\frac{\partial z_N}{\partial D_l}$ will be denoted by $J^N_l$. It converges towards the weak Jacobian $J^*_l$ of $z^*$ relative to $D_l$, whose values are
$$J^*_{l, S^*} = -\big(D_{:,S^*}^\top D_{:,S^*}\big)^{-1}\big(D_l\, z^{*\top} + (D_l^\top z^* - y_l)\, I_n\big)_{S^*}$$
on the support $S^*$ of $z^*$, and $0$ elsewhere. Moreover, $R(J^*, S^*) = 0$. This result is similar to Bertrand et al. (2020), where the Jacobian of $z$ is computed with respect to $\lambda$ to perform hyper-parameter optimization in Lasso-type models. Using $R(J^*, S^*) = 0$, we can write
$$\big\|R(J_N, \tilde{S}_N)\big\| \le \big\|R(J_N, \tilde{S}_N) - R(J^*, S^*)\big\| \le L\,\|J_N - J^*\|, \qquad (5)$$
as $\|\nabla^2_{1,1} f(z^*, D)\|_2 = L$. If the back-propagation were to output an accurate estimate $J_N$ of the weak Jacobian $J^*$, $\|R(J_N, \tilde{S}_N)\|$ would be $0$, and the convergence rate of $g^2_N$ could be twice as fast as the one of $g^1_N$. To quantify this, we now analyze the convergence of $J_N$ towards $J^*$.
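The induction of Theorem 2.4 lends itself to a forward-mode implementation, propagating $J_N$ alongside the ISTA iterates instead of back-propagating. The NumPy sketch below is our own illustration; it treats the step size $1/L$ as fixed with respect to $D$, as in the analysis, and the resulting Jacobian can be checked against central finite differences of the same fixed-$L$ iteration. Rows of $J$ at coordinates where the iterate is zero vanish by construction, matching $J^*_l$ being zero outside the support:

```python
import numpy as np

def ista_forward_jacobian(y, D, lam, l, n_iter, L):
    # Run ISTA on min_z 0.5 * ||D z - y||^2 + lam * ||z||_1 while propagating
    # J_N = dz_N / dD_l (D_l the l-th row of D) with the induction of
    # Theorem 2.4; the step size 1/L is held fixed during differentiation.
    m, n = D.shape
    z = np.zeros(n)
    J = np.zeros((n, n))                 # J[i, j] = dz_i / dD_{l j}
    G = D.T @ D
    for _ in range(n_iter):
        v = z - D.T @ (D @ z - y) / L
        z_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        # Indicator of the new support times the three correction terms:
        # outer(D_l, z) is D_l z^T, (D_l^T z - y_l) I_n, and D^T D J.
        J = (np.abs(z_new) > 0)[:, None] * (
            J - (np.outer(D[l], z) + (D[l] @ z - y[l]) * np.eye(n) + G @ J) / L
        )
        z = z_new
    return z, J
```

Because the whole Jacobian is built during the forward pass, there is no need to store the iterates for back-propagation, which is the memory saving mentioned above.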
In Proposition 2.5, we compute an upper bound on $\|J^N_l - J^*_l\|$ with possible usage of truncated back-propagation (Shaban et al., 2019). Truncated back-propagation of depth $K$ corresponds to an initial estimate of the Jacobian $J_{N-K} = 0$ and iterating the induction in Theorem 2.4. Proposition 2.5. Let $N$ be the number of iterations and $K$ be the back-propagation depth. We assume that $\forall n \ge N-K$, $S^* \subset S_n$. Let $\bar{E}_n = S_n \setminus S^*$, let $L$ be the largest eigenvalue of $D_{:,S^*}^\top D_{:,S^*}$, and let $\mu_n$ be the smallest eigenvalue of $D_{:,S_n}^\top D_{:,S_n}$. Let $B_n = \big\|P_{\bar{E}_n} - D_{:,\bar{E}_n}^\top D^{\dagger\top}_{:,S^*} P_{S^*}\big\|$, where $P_S$ is the projection on $\mathbb{R}^S$ and $D^\dagger$ is the pseudo-inverse of $D$. We have
$$\|J^N_l - J^*_l\| \le \prod_{k=1}^{K}\Big(1 - \frac{\mu_{N-k}}{L}\Big)\,\|J^*_l\| + \frac{2}{L}\,\|D_l\|\sum_{k=0}^{K-1}\prod_{i=1}^{k}\Big(1 - \frac{\mu_{N-i}}{L}\Big)\Big(\|z_{N-k} - z^*\| + B_{N-k}\,\|z^*\|\Big).$$
Proposition 2.5 reveals multiple stages in the Jacobian estimation. First, one can see that if all iterates used for the back-propagation lie on the support $S^*$, the Jacobian estimate has a quasi-linear convergence, as shown in the following corollary. Corollary 2.6. Let $\mu > 0$ be the smallest eigenvalue of $D_{:,S^*}^\top D_{:,S^*}$. Let $K \le N$ be the back-propagation depth and let $\Delta_N = F(z_N, D) - F(z^*, D) + \frac{L}{2}\,\|z_N - z^*\|$. Suppose that $\forall n \in [N-K, N]$, $S_n \subset S^*$. Then, we have
$$\|J^*_l - J^N_l\| \le \Big(1 - \frac{\mu}{L}\Big)^K \|J^*_l\| + K\Big(1 - \frac{\mu}{L}\Big)^{K-1} \|D_l\|\, \frac{4\Delta_{N-K}}{L^2}.$$
Once the support is reached, ISTA also converges with the same linear rate $(1 - \frac{\mu}{L})$. Thus the gradient estimate $g^2_N$ converges almost twice as fast as $g^1_N$ in the best case – with an optimal sub-gradient – as $O\big(K(1 - \frac{\mu}{L})^{2K}\big)$. This is similar to Ablin et al. (2020, Proposition 5) and Tolooshams & Ba (2021). Second, Proposition 2.5 shows that $\|J^*_l - J^N_l\|$ may increase when the support is not well estimated, leading to a deterioration of the gradient estimate. This is due to an accumulation of errors materialized by the sum on the right-hand side of the inequality, as the term $B_N\,\|z^*\|$ may not vanish to $0$ as long as $S_N \not\subset S^*$.
Interestingly, once the support is reached at an iteration $S < N$, the errors converge linearly towards $0$, and we recover the fast estimation of $g^*$ with $g^2$. Therefore, Lasso-based DDL should either be used with a low number of steps or with truncated back-propagation to ensure stability. These results apply to all linear dictionaries, including convolutions. Numerical illustrations. We now illustrate these theoretical results depending on the number $N$ of unrolled iterations. The data are generated from a random Gaussian dictionary $D$ of size $30 \times 50$, with Bernoulli-Gaussian sparse codes $z$ (sparsity 0.3, $\sigma^2_z = 1$) and Gaussian noise ($\sigma^2_{\text{noise}} = 0.1$) – more details in Appendix A. Figure 1 confirms the linear convergence of $J^N_l$ once the support is reached. However, the convergence might become unstable when the number of iterations grows, leading to exploding gradients, as illustrated in the second case. When this happens, using a small number of iterations or truncated back-propagation becomes necessary to prevent accumulating errors. It is also of interest to look at the proportion of unstable Jacobians (see Figure 2). We recover the behaviors observed in the first and second cases of Figure 1. 40% of the samples suffer from numerical instabilities in this example. This has a negative impact on the gradient estimation outside of the support. We display the convergence behavior of the gradients estimated by AM and by DDL with different back-propagation depths (20, 50, full) for simulated data and images in Figure 3. We unroll FISTA instead of ISTA to make the convergence faster. We observed similar
The authors theoretically study the performance of dictionary learning using "unrolling"-based methods. As opposed to alternating minimization (AM), which switches back and forth between dictionary estimation and sparse recovery, the paper writes down the target dictionary as the solution to a bi-level optimization, where the "inner" optimization is approximated by unrolling with N steps. The main contribution is an approach (along with a careful analysis) to computing the subgradient for the outer optimization, and experiments showing that this method works on synthetic and real datasets.
Understanding approximate and unrolled dictionary learning for pattern recovery
Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals. Alternating minimization (AM) is standard for the underlying optimization, where gradient descent steps alternate with sparse coding procedures. The major drawback of this method is its prohibitive computational cost, making it impractical on large real-world data sets. This work studies an approximate formulation of dictionary learning based on unrolling and compares it to alternating minimization to find the best trade-off between speed and precision. We analyze the asymptotic behavior and convergence rate of gradient estimates in both methods. We show that unrolling performs better on the support of the inner problem solution and during the first iterations. Finally, we apply unrolling to pattern learning in magnetoencephalography (MEG) with the help of a stochastic algorithm and compare the performance to a state-of-the-art method.

1 INTRODUCTION

Pattern learning provides insightful information on the data in various biomedical applications. Typical examples include the study of magnetoencephalography (MEG) recordings, where one aims to analyze the electrical activity in the brain from measurements of the magnetic field around the scalp of the patient (Dupré la Tour et al., 2018). One may also mention the study of neural oscillations in local field potentials (Cole & Voytek, 2017) or QRS complex detection in electrocardiograms (Xiang et al., 2018), among others. Dictionary learning (Olshausen & Field, 1997; Aharon et al., 2006; Mairal et al., 2009) is particularly efficient on pattern learning tasks, such as blood cell detection (Yellin et al., 2017) and MEG signal analysis (Dupré la Tour et al., 2018). This framework assumes that the signal can be decomposed into a sparse representation in a redundant basis of patterns – also called atoms.
In other words, the goal is to recover a sparse code $Z \in \mathbb{R}^{n \times T}$ and a dictionary $D \in \mathbb{R}^{m \times n}$ from noisy measurements $Y \in \mathbb{R}^{m \times T}$, obtained as the linear transformation $DZ$ corrupted with noise $B \in \mathbb{R}^{m \times T}$: $Y = DZ + B$. Theoretical elements on identifiability and local convergence have been proven in several studies (Gribonval et al., 2015; Haeffele & Vidal, 2015; Agarwal et al., 2016; Sun et al., 2016). Sparsity-based optimization problems related to dictionary learning generally rely on the use of $\ell_0$ or $\ell_1$ regularization. In this paper, we study Lasso-based (Tibshirani, 1996) dictionary learning, where the dictionary $D$ is learned in a constraint set $\mathcal{C}$ by solving
$$\min_{Z \in \mathbb{R}^{n \times T},\, D \in \mathcal{C}} F(Z, D) \triangleq \frac{1}{2}\|DZ - Y\|_2^2 + \lambda \|Z\|_1. \tag{1}$$
Dictionary learning can be written as a bi-level optimization problem to minimize the cost function with respect to the dictionary only, as mentioned in Mairal et al. (2009):
$$\min_{D \in \mathcal{C}} G(D) \triangleq F(Z^*(D), D) \quad \text{with} \quad Z^*(D) = \arg\min_{Z \in \mathbb{R}^{n \times T}} F(Z, D). \tag{2}$$
Computing the data representation $Z^*(D)$ is often referred to as the inner problem, while the global minimization is the outer problem. Classical constraint sets include the unit norm, where each atom is normalized to avoid scale-invariance issues, and normalized convolutional kernels to perform Convolutional Dictionary Learning (Grosse et al., 2007). Classical dictionary learning methods solve this bi-convex optimization problem through Alternating Minimization (AM) (Mairal et al., 2009). It consists in minimizing the cost function $F$ over $Z$ with a fixed dictionary $D$, and then performing projected gradient descent to optimize the dictionary with a fixed $Z$. While AM provides a simple strategy to perform dictionary learning, it can be inefficient on large-scale data sets due to the need to solve the inner problems precisely for all samples. In recent years, many studies have focused on algorithm unrolling (Tolooshams et al.
, 2020; Scetbon et al., 2021) to overcome this issue. The core idea consists of unrolling the algorithm that solves the inner problem, and then computing the gradient with respect to the dictionary with the help of back-propagation through the iterates of this algorithm. Gregor & LeCun (2010) popularized this method and first proposed to unroll ISTA (Daubechies et al., 2004) – a proximal gradient descent algorithm designed for the Lasso – to speed up the computation of $Z^*(D)$. The $N+1$-th layer of this network – called LISTA – is obtained as $Z_{N+1} = \mathrm{ST}_{\lambda/L}(W^1 Y + W^2 Z_N)$, with $\mathrm{ST}$ being the soft-thresholding operator. This work has led to many contributions aiming at improving this method and providing theoretical justifications in a supervised (Chen et al., 2018; Liu & Chen, 2019) or unsupervised (Moreau & Bruna, 2017; Ablin et al., 2019) setting. For such unrolled algorithms, the weights $W^1$ and $W^2$ can be re-parameterized as functions of $D$ – as illustrated in Figure A in appendix – such that the output $Z_N(D)$ matches the result of $N$ iterations of ISTA, i.e.
$$W^1_D = \frac{1}{L} D^\top \quad \text{and} \quad W^2_D = I - \frac{1}{L} D^\top D, \quad \text{where } L = \|D\|_2^2. \tag{3}$$
Then, the dictionary can be learned by minimizing the loss $F(Z_N(D), D)$ over $D$ with back-propagation. This approach is generally referred to as Deep Dictionary Learning (DDL). DDL and variants with different kinds of regularization (Tolooshams et al., 2020; Lecouat et al., 2020; Scetbon et al., 2021), image processing based on metric learning (Tang et al., 2020), and classification tasks with scattering (Zarka et al., 2019) have been proposed in the literature, among others. While these techniques have achieved good performance levels on several signal processing tasks, the reasons they speed up the learning process are still unclear. In this work, we study unrolling in Lasso-based dictionary learning as an approximate bi-level optimization problem.
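To make the weight tying in equation (3) concrete, here is a minimal NumPy sketch of a forward pass through $N$ unrolled layers whose weights are tied to $D$; the function names and dimensions are ours, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator ST_t, the proximal operator of t * l1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(Y, D, lam, n_layers=30):
    """Forward pass of n_layers unrolled layers with weights tied to D as in
    equation (3): W1 = D^T / L and W2 = I - D^T D / L, so that layer N
    outputs exactly the N-th ISTA iterate Z_N(D)."""
    L = np.linalg.norm(D, ord=2) ** 2          # squared spectral norm of D
    W1 = D.T / L
    W2 = np.eye(D.shape[1]) - D.T @ D / L
    Z = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_layers):
        Z = soft_threshold(W1 @ Y + W2 @ Z, lam / L)
    return Z
```

Since $W^1 Y + W^2 Z = Z - \frac{1}{L} D^\top (DZ - Y)$, the network output coincides with the plain ISTA iterate; learning $D$ then amounts to back-propagating through this forward pass.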
What makes this work different from Bertrand et al. (2020), Ablin et al. (2020) and Tolooshams & Ba (2021) is that we study the instability of non-smooth bi-level optimization and unrolled sparse coding outside of the support, which is of major interest in practice with a small number of layers. In Section 2, we analyze the convergence of the Jacobian computed with automatic differentiation and find that its stability is guaranteed on the support of the sparse codes only. De facto, numerical instabilities in its estimation make unrolling inefficient after a few dozen iterations. In Section 3, we empirically show that unrolling leads to better results than AM only with a small number of iterations of sparse coding, making it possible to learn a good dictionary in this setting. Then we adapt a stochastic approach to make this method usable on large data sets, and we apply it to pattern learning in magnetoencephalography (MEG) in Section 4. We do so by adapting unrolling to rank-one convolutional dictionary learning on multivariate time series (Dupré la Tour et al., 2018). We show that there is no need to unroll more than a few dozen iterations to obtain satisfying results, leading to a significant gain of time compared to a state-of-the-art algorithm.

2 BI-LEVEL OPTIMIZATION FOR APPROXIMATE DICTIONARY LEARNING

As $Z^*(D)$ does not have a closed-form expression, $G$ cannot be computed directly. A solution is to replace the inner problem solution $Z^*(D)$ by an approximation $Z_N(D)$ obtained through $N$ iterations of a numerical optimization algorithm or its unrolled version. This reduces the problem to minimizing $G_N(D) \triangleq F(Z_N(D), D)$. The first question is how sub-optimal global solutions of $G_N$ are compared to those of $G$. Proposition 2.1 shows that the global minima of $G_N$ converge as fast as the numerical approximation $Z_N$ in function value.
Proposition 2.1 Let $D^* = \arg\min_{D \in \mathcal{C}} G(D)$ and $D^*_N = \arg\min_{D \in \mathcal{C}} G_N(D)$, where $N$ is the number of unrolled iterations. We denote by $K(D^*)$ a constant depending on $D^*$, and by $C(N)$ the convergence speed of the algorithm that approximates the inner problem solution. We have
$$G_N(D^*_N) - G(D^*) \le K(D^*)\, C(N).$$
The proofs of all theoretical results are deferred to Appendix C. Proposition 2.1 implies that when $Z_N$ is computed with FISTA (Beck & Teboulle, 2009), the function value of the global minima of $G_N$ converges with speed $C(N) = \frac{1}{N^2}$ towards the value of the global minima of $F$. Therefore, solving the inner problem approximately leads to suitable solutions for equation 2, provided that the optimization procedure is efficient enough to find a proper minimum of $G_N$. As the computational cost of $Z_N$ increases with $N$, the choice of $N$ results in a trade-off between the precision of the solution and the computational efficiency, which is critical for processing large data sets. Moreover, learning the dictionary and computing the sparse codes are two different tasks. The loss $G_N$ takes into account the dictionary and the corresponding approximation $Z_N(D)$ to evaluate the quality of the solution. However, the dictionary evaluation should reflect its ability to generate the same signals as the ground-truth data, and not consider an approximate sparse code that can be recomputed afterward. Therefore, we should distinguish the ability of the algorithm to recover a good dictionary from its ability to learn the dictionary and the sparse codes at the same time. In this work, we use the metric proposed in Moreau & Gramfort (2020) for convolutions to evaluate the quality of the dictionary. We compare the atoms using their correlation and denote by $C$ the cost matrix whose entry $i, j$ compares atom $i$ of the first dictionary with atom $j$ of the second.
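This best-assignment comparison of atoms can be computed with the Hungarian algorithm, available in SciPy as `linear_sum_assignment`. A minimal sketch (the function name is ours; unit-norm atoms are assumed so that inner products are correlations):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def dictionary_recovery_score(D_est, D_ref):
    """Sign- and permutation-invariant similarity between two dictionaries
    with unit-norm atoms: the best average absolute correlation over all
    one-to-one atom matchings, found by a linear sum assignment."""
    C = D_est.T @ D_ref                 # C[i, j] = correlation of atoms i and j
    rows, cols = linear_sum_assignment(-np.abs(C))   # maximize sum of |C|
    return np.abs(C[rows, cols]).mean()
```

A score of 1 indicates perfect recovery up to sign flips and atom permutations, which the Lasso objective cannot distinguish.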
We define a sign- and permutation-invariant metric $S(C) = \max_{\sigma \in \mathfrak{S}_n} \frac{1}{n} \sum_{i=1}^{n} |C_{\sigma(i), i}|$, where $\mathfrak{S}_n$ is the group of permutations of $[1, n]$. This metric corresponds to the best linear sum assignment on the cost matrix $C$, and it can be computed with the Hungarian algorithm. Note that doing so has several limitations and that evaluating the dictionary is still an open problem. Without loss of generality, let $T = 1$ and thus $z \in \mathbb{R}^n$ in the rest of this section.

Gradient estimation in dictionary learning. Approximate dictionary learning is a non-convex problem, meaning that good or poor local minima of $G_N$ may be reached depending on the initialization, the optimization path, and the structure of the problem. Therefore, a gradient descent on $G_N$ has no guarantee to find an adequate minimizer of $G$. While a complete theoretical analysis of these problems is arduous, we propose to study the correlation between the gradient obtained with $G_N$ and the actual gradient of $G$, as a way to ensure that the optimization dynamics are similar. Once $z^*(D)$ is known, Danskin (1967, Thm 1) states that $g^*(D) = \nabla G(D)$ is equal to $\nabla_2 F(z^*(D), D)$, where $\nabla_2$ indicates that the gradient is computed relatively to the second variable in $F$. Even though the inner problem is non-smooth, this result holds as long as the solution $z^*(D)$ is unique. In the following, we will assume that $D^\top D$ is invertible on the support of $z^*(D)$, which implies the uniqueness of $z^*(D)$. This occurs with probability one if $D$ is sampled from a continuous distribution (Tibshirani, 2013). AM and DDL differ in how they estimate the gradient of $G$. AM relies on the analytical formula of $g^*$ and uses an approximation $z_N$ of $z^*$, leading to the approximate gradient $g^1_N(D) = \nabla_2 F(z_N(D), D)$. We evaluate how well $g^1_N$ approximates $g^*$ in Proposition 2.2.

Proposition 2.2 Let $D \in \mathbb{R}^{m \times n}$.
Then, there exists a constant $L_1 > 0$ such that for every number of iterations $N$,
$$\|g^1_N - g^*\| \le L_1 \|z_N(D) - z^*(D)\|.$$
Proposition 2.2 shows that $g^1_N$ converges as fast as the iterates of ISTA converge. DDL computes the gradient automatically through $z_N(D)$. As opposed to AM, this directly minimizes the loss $G_N(D)$. Automatic differentiation yields a sub-gradient $g^2_N(D)$ such that
$$g^2_N(D) \in \nabla_2 F(z_N(D), D) + J_N^+\big(\partial_1 F(z_N(D), D)\big), \tag{4}$$
where $J_N : \mathbb{R}^{m \times n} \to \mathbb{R}^n$ is the weak Jacobian of $z_N(D)$ with respect to $D$ and $J_N^+$ denotes its adjoint. The product between $J_N^+$ and $\partial_1 F(z_N(D), D)$ is computed via automatic differentiation.

Proposition 2.3 Let $D \in \mathbb{R}^{m \times n}$. Let $S^*$ be the support of $z^*(D)$, $S_N$ be the support of $z_N$, and $\tilde{S}_N = S_N \cup S^*$. Let $f(z, D) = \frac{1}{2}\|Dz - y\|_2^2$ be the data-fitting term in $F$. Let $R(J, \tilde{S}) = J^+\big(\nabla^2_{1,1} f(z^*, D)\, \mathbf{1}_{\tilde{S}}\big) + \nabla^2_{2,1} f(z^*, D)\, \mathbf{1}_{\tilde{S}}$. Then there exist a constant $L_2 > 0$ and a subsequence of (F)ISTA iterates $z_{\phi(N)}$ such that for all $N \in \mathbb{N}$:
$$\exists\, g^2_{\phi(N)} \in \nabla_2 f(z_{\phi(N)}, D) + J^+_{\phi(N)}\big(\nabla_1 f(z_{\phi(N)}, D) + \lambda\, \partial\|\cdot\|_1(z_{\phi(N)})\big) \quad \text{s.t.}$$
$$\big\|g^2_{\phi(N)} - g^*\big\| \le \big\|R\big(J_{\phi(N)}, \tilde{S}_{\phi(N)}\big)\big\|\, \big\|z_{\phi(N)} - z^*\big\| + \frac{L_2}{2}\big\|z_{\phi(N)} - z^*\big\|^2.$$
This subsequence $z_{\phi(N)}$ corresponds to iterates on the support of $z^*$. Proposition 2.3 shows that $g^2_N$ may converge faster than $g^1_N$ once the support is reached. Ablin et al. (2020) and Tolooshams & Ba (2021) have studied the behavior of strongly convex functions, as is the case on the support, and found similar results. This allowed Tolooshams & Ba (2021) to focus on support identification and show that automatic differentiation leads to a better gradient estimation in dictionary learning on the support under minor assumptions. However, we are also interested in characterizing the behavior outside of the support, where the gradient estimation is difficult because of the sub-differential.
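For $T = 1$, the AM estimate $g^1_N$ has the closed form $\nabla_2 F(z_N, D) = (D z_N - y)\, z_N^\top$, since the $\ell_1$ term does not depend on $D$. A minimal sketch of one AM outer step using this analytic gradient (the unit-norm projection as the choice of $\mathcal{C}$, the step size, and all names are our assumptions):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def am_outer_step(y, D, lam, n_inner=50, step=0.1):
    """One outer AM step for T = 1: approximate z* with n_inner ISTA
    iterations, form the analytic gradient g1 = (D z - y) z^T (the l1
    term does not depend on D), then take a projected gradient step
    onto unit-norm atoms."""
    L = np.linalg.norm(D, ord=2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_inner):                  # inner problem: sparse coding
        z = soft_threshold(z - D.T @ (D @ z - y) / L, lam / L)
    g1 = np.outer(D @ z - y, z)               # nabla_2 F(z_N, D)
    D_new = D - step * g1
    D_new /= np.maximum(np.linalg.norm(D_new, axis=0), 1e-12)
    return D_new, z, g1
```

The whole cost of AM lies in the inner loop: it must be run to good precision for every sample at every outer step, which is what unrolling tries to avoid.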
In practice, automatic differentiation uses the sign operator as a sub-gradient of $\|\cdot\|_1$. The convergence behavior of $g^2_N$ is also driven by $R(J_N, \tilde{S}_N)$, and thus by the weak Jacobian computed via back-propagation. We first compute a closed-form expression of the weak Jacobian of $z^*(D)$ and $z_N(D)$. We then show that $R(J_N, \tilde{S}_N) \le L \|J_N - J^*\|$ and we analyze the convergence of $J_N$ towards $J^*$.

Study of the Jacobian. The computation of the Jacobian can be done by differentiating through ISTA. In Theorem 2.4, we show that $J_{N+1}$ depends on $J_N$ and the past iterate $z_N$, and converges towards a fixed point. This formula can be used to compute the Jacobian during the forward pass, avoiding the computational cost of back-propagation and saving memory.

Theorem 2.4 At iteration $N+1$ of ISTA, the weak Jacobian of $z_{N+1}$ relatively to $D_l$, where $D_l$ is the $l$-th row of $D$, is given by the induction
$$\frac{\partial z_{N+1}}{\partial D_l} = \mathbf{1}_{|z_{N+1}| > 0} \left( \frac{\partial z_N}{\partial D_l} - \frac{1}{L} \left( D_l z_N^\top + (D_l^\top z_N - y_l) I_n + D^\top D\, \frac{\partial z_N}{\partial D_l} \right) \right).$$
$\frac{\partial z_N}{\partial D_l}$ will be denoted by $J_N^l$. It converges towards the weak Jacobian $J^*_l$ of $z^*$ relatively to $D_l$, whose values are
$$J^*_{l\,S^*} = -\big(D_{:,S^*}^\top D_{:,S^*}\big)^{-1} \big(D_l z^{*\top} + (D_l^\top z^* - y_l) I_n\big)_{S^*}$$
on the support $S^*$ of $z^*$, and $0$ elsewhere. Moreover, $R(J^*, S^*) = 0$.

This result is similar to Bertrand et al. (2020), where the Jacobian of $z$ is computed over $\lambda$ to perform hyper-parameter optimization in Lasso-type models. Using $R(J^*, S^*) = 0$, we can write
$$\big\|R(J_N, \tilde{S}_N)\big\| \le \big\|R(J_N, \tilde{S}_N) - R(J^*, S^*)\big\| \le L \|J_N - J^*\|, \tag{5}$$
as $\big\|\nabla^2_{1,1} f(z^*, D)\big\|_2 = L$. If the back-propagation were to output an accurate estimate $J_N$ of the weak Jacobian $J^*$, $\|R(J_N, \tilde{S}_N)\|$ would be $0$, and the convergence rate of $g^2_N$ could be twice as fast as the one of $g^1_N$. To quantify this, we now analyze the convergence of $J_N$ towards $J^*$.
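The induction in Theorem 2.4 can be evaluated alongside the ISTA iterates, as the theorem suggests, with no back-propagation at all. A NumPy sketch (our own naming; the step constant $L$ is treated as fixed, as in the analysis):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_with_jacobian(y, D, lam, l, n_iter=200, L=None):
    """ISTA iterates together with the weak Jacobian J_N = dz_N/dD_l
    (D_l is the l-th row of D), propagated in the forward pass through
    the induction of Theorem 2.4 instead of back-propagation."""
    m, n = D.shape
    if L is None:
        L = np.linalg.norm(D, ord=2) ** 2   # step constant, held fixed
    G = D.T @ D
    d_l = D[l]                              # l-th row of D, shape (n,)
    z = np.zeros(n)
    J = np.zeros((n, n))                    # J[i, j] = dz_i / d(D_l)_j
    for _ in range(n_iter):
        z_next = soft_threshold(z - D.T @ (D @ z - y) / L, lam / L)
        mask = (np.abs(z_next) > 0).astype(float)   # 1_{|z_{N+1}| > 0}, row-wise
        J = mask[:, None] * (
            J - (np.outer(d_l, z) + (d_l @ z - y[l]) * np.eye(n) + G @ J) / L
        )
        z = z_next
    return z, J
```

The row-wise mask is where the instability discussed below can originate: rows of $J$ are zeroed and re-populated whenever the estimated support changes.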
In Proposition 2.5, we compute an upper bound on $\|J_N^l - J^*_l\|$ with possible usage of truncated back-propagation (Shaban et al., 2019). Truncated back-propagation of depth $K$ corresponds to an initial estimate of the Jacobian $J_{N-K} = 0$ and iterating the induction in Theorem 2.4.

Proposition 2.5 Let $N$ be the number of iterations and $K$ be the back-propagation depth. We assume that $\forall n \ge N - K$, $S^* \subset S_n$. Let $\bar{E}_n = S_n \setminus S^*$, let $L$ be the largest eigenvalue of $D_{:,S^*}^\top D_{:,S^*}$, and let $\mu_n$ be the smallest eigenvalue of $D_{:,S_n}^\top D_{:,S_{n-1}}$. Let $B_n = \big\| P_{\bar{E}_n} - D_{:,\bar{E}_n}^\top D_{:,S^*}^{\dagger\top} P_{S^*} \big\|$, where $P_S$ is the projection on $\mathbb{R}^S$ and $D^\dagger$ is the pseudo-inverse of $D$. We have
$$\|J_N^l - J^*_l\| \le \prod_{k=1}^{K} \Big(1 - \frac{\mu_{N-k}}{L}\Big) \|J^*_l\| + \frac{2}{L} \|D_l\| \sum_{k=0}^{K-1} \prod_{i=1}^{k} \Big(1 - \frac{\mu_{N-i}}{L}\Big) \big(\|z_{N-k} - z^*\| + B_{N-k} \|z^*\|\big).$$
Proposition 2.5 reveals multiple stages in the Jacobian estimation. First, one can see that if all iterates used for the back-propagation lie on the support $S^*$, the Jacobian estimate has a quasi-linear convergence, as shown in the following corollary.

Corollary 2.6 Let $\mu > 0$ be the smallest eigenvalue of $D_{:,S^*}^\top D_{:,S^*}$. Let $K \le N$ be the back-propagation depth and let $\Delta_N = F(z_N, D) - F(z^*, D) + \frac{L}{2}\|z_N - z^*\|$. Suppose that $\forall n \in [N-K, N]$, $S_n \subset S^*$. Then, we have
$$\|J^*_l - J_N^l\| \le \Big(1 - \frac{\mu}{L}\Big)^{K} \|J^*_l\| + K \Big(1 - \frac{\mu}{L}\Big)^{K-1} \|D_l\|\, \frac{4\Delta_{N-K}}{L^2}.$$
Once the support is reached, ISTA also converges with the same linear rate $\big(1 - \frac{\mu}{L}\big)$. Thus the gradient estimate $g^2_N$ converges almost twice as fast as $g^1_N$ in the best case – with optimal sub-gradient – as $O\big(K (1 - \frac{\mu}{L})^{2K}\big)$. This is similar to Ablin et al. (2020, Proposition 5) and Tolooshams & Ba (2021). Second, Proposition 2.5 shows that $\|J^*_l - J_N^l\|$ may increase when the support is not well estimated, leading to a deterioration of the gradient estimate. This is due to an accumulation of errors materialized by the sum on the right-hand side of the inequality, as the term $B_N \|z^*\|$ may not vanish to $0$ as long as $S_N \not\subset S^*$.
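Truncated back-propagation of depth $K$, as used in Proposition 2.5, amounts to running the induction of Theorem 2.4 only over the last $K$ iterates, starting from $J_{N-K} = 0$. A NumPy sketch (our own naming; the step constant $L$ is held fixed):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def truncated_jacobian(y, D, lam, l, n_iter=100, depth=20, L=None):
    """Estimate of dz_N/dD_l with truncated back-propagation of depth K:
    run ISTA for n_iter steps, then propagate the induction of
    Theorem 2.4 over the last `depth` iterates only, from J_{N-K} = 0."""
    m, n = D.shape
    if L is None:
        L = np.linalg.norm(D, ord=2) ** 2
    G = D.T @ D
    d_l = D[l]
    zs = [np.zeros(n)]                    # keep iterates for the window
    for _ in range(n_iter):
        zs.append(soft_threshold(zs[-1] - D.T @ (D @ zs[-1] - y) / L, lam / L))
    J = np.zeros((n, n))                  # truncation: J_{N-K} = 0
    for k in range(n_iter - depth, n_iter):
        z, z_next = zs[k], zs[k + 1]
        mask = (np.abs(z_next) > 0).astype(float)
        J = mask[:, None] * (
            J - (np.outer(d_l, z) + (d_l @ z - y[l]) * np.eye(n) + G @ J) / L
        )
    return zs[-1], J
```

With `depth == n_iter` this recovers the full Jacobian of the unrolled map; a small `depth` discards the early, off-support iterates responsible for the error accumulation in Proposition 2.5.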
Interestingly, once the support is reached at an iteration $S < N$, the errors converge linearly towards $0$, and we recover the fast estimation of $g^*$ with $g^2$. Therefore, Lasso-based DDL should either be used with a low number of steps or with truncated back-propagation to ensure stability. These results apply to all linear dictionaries, including convolutions.

Numerical illustrations. We now illustrate these theoretical results depending on the number $N$ of unrolled iterations. The data are generated from a random Gaussian dictionary $D$ of size $30 \times 50$, with Bernoulli-Gaussian sparse codes $z$ (sparsity $0.3$, $\sigma_z^2 = 1$) and Gaussian noise ($\sigma_{\mathrm{noise}}^2 = 0.1$) – more details in Appendix A. Figure 1 confirms the linear convergence of $J_N^l$ once the support is reached. However, the convergence might be unstable when the number of iterations grows, leading to exploding gradients, as illustrated in the second case. When this happens, using a small number of iterations or truncated back-propagation becomes necessary to prevent accumulating errors. It is also of interest to look at the proportion of unstable Jacobians (see Figure 2). We recover the behaviors observed in the first and second cases in Figure 1. 40% of the samples suffer from numerical instabilities in this example. This has a negative impact on the gradient estimation outside of the support. We display the convergence behavior of the gradients estimated by AM and by DDL with different back-propagation depths (20, 50, full) for simulated data and images in Figure 3. We unroll FISTA instead of ISTA to make the convergence faster. We observed similar
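For reference, the synthetic setup described in the numerical illustrations can be reproduced as follows (the unit-norm normalization of the atoms and the function name are our assumptions; the exact protocol is in the paper's Appendix A):

```python
import numpy as np

def generate_data(m=30, n=50, n_samples=1000, sparsity=0.3,
                  sigma_z=1.0, noise_var=0.1, seed=0):
    """Synthetic setup of the experiments: random Gaussian dictionary of
    size m x n (here with unit-norm atoms), Bernoulli-Gaussian sparse
    codes (Bernoulli support, Gaussian amplitudes), additive noise."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((m, n))
    D /= np.linalg.norm(D, axis=0)
    support = rng.random((n, n_samples)) < sparsity      # Bernoulli mask
    Z = support * rng.normal(0.0, sigma_z, size=(n, n_samples))
    Y = D @ Z + rng.normal(0.0, np.sqrt(noise_var), size=(m, n_samples))
    return Y, D, Z
```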
The paper studies dictionary learning, which assumes that data can be represented as a linear combination of a few atoms of a matrix called the dictionary. Traditionally, one way to approach the problem is to set up a min-min (bi-convex) optimization problem known as lasso or basis pursuit and solve it through alternating minimization (alternating between a sparse coding step and a dictionary update). The paper compares gradient-based alternating minimization, which uses an analytic gradient (given the code estimate, compute the gradient), to unrolling-based dictionary learning, which uses backpropagation (automatic differentiation) through an iterative algorithm estimating the code (inner problem) to compute the gradient for the dictionary update. This paper borrows results from (Ablin et al., 2020), which had studied min-min optimization problems whose objective functions are smooth, differentiable, and strongly convex. Specifically, when solving lasso iteratively through ISTA or FISTA, after support selection and with some assumptions on the dictionary, the problem of this paper reduces to the strongly convex case of (Ablin et al., 2020); hence, the results follow. The contribution that makes their paper different from (Ablin et al., 2020) is the study of the Jacobian and the instability of the convergence prior to support selection. This very same model (dictionary learning through unrolled algorithms) has already been studied theoretically by [1] (which is missing in the citations) and empirically, in the context of convolutional dictionary learning, by a paper (Tolooshams et al., 2020) that they cite.
SP:ccc0c1a5a3f474edc40fbc97237c2c15d156e4a4
Understanding approximate and unrolled dictionary learning for pattern recovery
Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals . Alternating minimization ( AM ) is standard for the underlying optimization , where gradient descent steps alternate with sparse coding procedures . The major drawback of this method is its prohibitive computational cost , making it unpractical on large real-world data sets . This work studies an approximate formulation of dictionary learning based on unrolling and compares it to alternating minimization to find the best trade-off between speed and precision . We analyze the asymptotic behavior and convergence rate of gradients estimates in both methods . We show that unrolling performs better on the support of the inner problem solution and during the first iterations . Finally , we apply unrolling on pattern learning in magnetoencephalography ( MEG ) with the help of a stochastic algorithm and compare the performance to a state-of-the-art method . 1 INTRODUCTION . Pattern learning provides insightful information on the data in various biomedical applications . Typical examples include the study of magnetoencephalography ( MEG ) recordings , where one aims to analyze the electrical activity in the brain from measurements of the magnetic field around the scalp of the patient ( Dupré la Tour et al. , 2018 ) . One may also mention neural oscillations study in the local field potential ( Cole & Voytek , 2017 ) or QRS complex detection in electrocardiograms ( Xiang et al. , 2018 ) among others . Dictionary learning ( Olshausen & Field , 1997 ; Aharon et al. , 2006 ; Mairal et al. , 2009 ) is particularly efficient on pattern learning tasks , such as blood cells detection ( Yellin et al. , 2017 ) and MEG signals analysis ( Dupré la Tour et al. , 2018 ) . This framework assumes that the signal can be decomposed into a sparse representation in a redundant basis of patterns – also called atoms . 
In other words , the goal is to recover a sparse code Z ∈ Rn×T and a dictionary D ∈ Rm×n from noisy measurements Y ∈ Rm×T which are obtained as the linear transformation DZ , corrupted with noise B ∈ Rm×T : Y = DZ + B . Theoretical elements on identifiability and local convergence have been proven in several studies ( Gribonval et al. , 2015 ; Haeffele & Vidal , 2015 ; Agarwal et al. , 2016 ; Sun et al. , 2016 ) . Sparsity-based optimization problems related to dictionary learning generally rely on the usage of the ` 0 or ` 1 regularizations . In this paper , we study Lasso-based ( Tibshirani , 1996 ) dictionary learning where the dictionary D is learned in a set of constraints C by solving min Z∈Rn×T , D∈C F ( Z , D ) , 1 2 ‖DZ − Y ‖22 + λ ‖Z‖1 . ( 1 ) Dictionary learning can be written as a bi-level optimization problem to minimize the cost function with respect to the dictionary only , as mentioned in Mairal et al . ( 2009 ) , min D∈C G ( D ) , F ( Z∗ ( D ) , D ) with Z∗ ( D ) = arg min Z∈Rn×T F ( Z , D ) . ( 2 ) Computing the data representation Z∗ ( D ) is often referred to as the inner problem , while the global minimization is the outer problem . Classical constraint sets include the unit norm , where each atom is normalized to avoid scale-invariant issues , and normalized convolutional kernels to perform Convolutional Dictionary Learning ( Grosse et al. , 2007 ) . Classical dictionary learning methods solve this bi-convex optimization problem through Alternating Minimization ( AM ) ( Mairal et al. , 2009 ) . It consists in minimizing the cost function F over Z with a fixed dictionary D and then performing projected gradient descent to optimize the dictionary with a fixed Z . While AM provides a simple strategy to perform dictionary learning , it can be inefficient on large-scale data sets due to the need to resolve the inner problems precisely for all samples . In recent years , many studies have focused on algorithm unrolling ( Tolooshams et al. 
, 2020 ; Scetbon et al. , 2021 ) to overcome this issue . The core idea consists of unrolling the algorithm , which solves the inner problem , and then computing the gradient with respect to the dictionary with the help of back-propagation through the iterates of this algorithm . Gregor & LeCun ( 2010 ) popularized this method and first proposed to unroll ISTA ( Daubechies et al. , 2004 ) – a proximal gradient descent algorithm designed for the Lasso – to speed up the computation of Z∗ ( D ) . The N + 1-th layer of this network – called LISTA – is obtained as ZN+1 = ST λ L ( W 1Y + W 2ZN ) , with ST be- ing the soft-thresholding operator . This work has led to many contributions aiming at improving this method and providing theoretical justifications in a supervised ( Chen et al. , 2018 ; Liu & Chen , 2019 ) or unsupervised ( Moreau & Bruna , 2017 ; Ablin et al. , 2019 ) setting . For such unrolled algorithms , the weights W 1 and W 2 can be re-parameterized as functions of D – as illustrated in Figure A in appendix – such that the output ZN ( D ) matches the result of N iterations of ISTA , i.e . W 1D = 1 L D > and W 2D = ( I − 1 L D > D ) , where L = ‖D‖2 . ( 3 ) Then , the dictionary can be learned by minimizing the loss F ( ZN ( D ) , D ) over D with backpropagation . This approach is generally referred to as Deep Dictionary Learning ( DDL ) . DDL and variants with different kinds of regularization ( Tolooshams et al. , 2020 ; Lecouat et al. , 2020 ; Scetbon et al. , 2021 ) , image processing based on metric learning ( Tang et al. , 2020 ) , and classification tasks with scattering ( Zarka et al. , 2019 ) have been proposed in the literature , among others . While these techniques have achieved good performance levels on several signal processing tasks , the reasons they speed up the learning process are still unclear . In this work , we study unrolling in Lasso-based dictionary learning as an approximate bi-level optimization problem . 
What makes this work different from Bertrand et al . ( 2020 ) , Ablin et al . ( 2020 ) and Tolooshams & Ba ( 2021 ) is that we study the instability of non-smooth bi-level optimization and unrolled sparse coding out of the support , which is of major interest in practice with a small number of layers . In Section 2 , we analyze the convergence of the Jacobian computed with automatic differentiation and find out that its stability is guaranteed on the support of the sparse codes only . De facto , numerical instabilities in its estimation make unrolling inefficient after a few dozen iterations . In Section 3 , we empirically show that unrolling leads to better results than AM only with a small number of iterations of sparse coding , making it possible to learn a good dictionary in this setting . Then we adapt a stochastic approach to make this method usable on large data sets , and we apply it to pattern learning in magnetoencephalography ( MEG ) in Section 4 . We do so by adapting unrolling to rank one convolutional dictionary learning on multivariate time series ( Dupré la Tour et al. , 2018 ) . We show that there is no need to unroll more than a few dozen iterations to obtain satisfying results , leading to a significant gain of time compared to a state-of-the-art algorithm . 2 BI-LEVEL OPTIMIZATION FOR APPROXIMATE DICTIONARY LEARNING . As Z∗ ( D ) does not have a closed-form expression , G can not be computed directly . A solution is to replace the inner problem Z∗ ( D ) by an approximation ZN ( D ) obtained through N iterations of a numerical optimization algorithm or its unrolled version . This reduces the problem to minimizing GN ( D ) , F ( ZN ( D ) , D ) . The first question is how sub-optimal global solutions of GN are compared to the ones of G. Proposition 2.1 shows that the global minima of GN converge as fast as the numerical approximation ZN in function value . 
Proposition 2.1 Let D∗ = arg minD∈C G ( D ) and D∗N = arg minD∈C GN ( D ) , where N is the number of unrolled iterations . We denote by K ( D∗ ) a constant depending on D∗ , and by C ( N ) the convergence speed of the algorithm , which approximates the inner problem solution . We have GN ( D ∗ N ) −G ( D∗ ) ≤ K ( D∗ ) C ( N ) . The proofs of all theoretical results are deferred to Appendix C. Proposition 2.1 implies that when ZN is computed with FISTA ( Beck & Teboulle , 2009 ) , the function value for global minima of GN converges with speed C ( N ) = 1N2 towards the value of the global minima of F . Therefore , solving the inner problem approximately leads to suitable solutions for equation 2 , given that the optimization procedure is efficient enough to find a proper minimum of GN . As the computational cost of zN increases with N , the choice of N results in a trade-off between the precision of the solution and the computational efficiency , which is critical for processing large data sets . Moreover , learning the dictionary and computing the sparse codes are two different tasks . The loss GN takes into account the dictionary and the corresponding approximation ZN ( D ) to evaluate the quality of the solution . However , the dictionary evaluation should reflect its ability to generate the same signals as the ground truth data and not consider an approximate sparse code that can be recomputed afterward . Therefore , we should distinguish the ability of the algorithm to recover a good dictionary from its ability to learn the dictionary and the sparse codes at the same time . In this work , we use the metric proposed in Moreau & Gramfort ( 2020 ) for convolutions to evaluate the quality of the dictionary . We compare the atoms using their correlation and denote as C the cost matrix whose entry i , j compare the atom i of the first dictionary and j of the second . 
We define a sign and permutation invariant metric S ( C ) = maxσ∈Sn 1 n ∑n i=1 |Cσ ( i ) , i| , where Sn is the group of permutations of [ 1 , n ] . This metric corresponds to the best linear sum assignment on the cost matrix C , and it can be computed with the Hungarian algorithm . Note that doing so has several limitations and that evaluating the dictionary is still an open problem . Without loss of generality , let T = 1 and thus z ∈ Rn in the rest of this section . Gradient estimation in dictionary learning . Approximate dictionary learning is a non-convex problem , meaning that good or poor local minima of GN may be reached depending on the initialization , the optimization path , and the structure of the problem . Therefore , a gradient descent onGN has no guarantee to find an adequate minimizer of G. While complete theoretical analysis of these problems is arduous , we propose to study the correlation between the gradient obtained with GN and the actual gradient of G , as a way to ensure that the optimization dynamics are similar . Once z∗ ( D ) is known , Danskin ( 1967 , Thm 1 ) states that g∗ ( D ) = ∇G ( D ) is equal to∇2F ( z∗ ( D ) , D ) , where∇2 indicates that the gradient is computed relatively to the second variable in F . Even though the inner problem is non-smooth , this result holds as long as the solution z∗ ( D ) is unique . In the following , we will assume that D > D is invertible on the support of z∗ ( D ) , which implies the uniqueness of z∗ ( D ) . This occurs with probability one if D is sampled from a continuous distribution ( Tibshirani , 2013 ) . AM and DDL differ in how they estimate the gradient of G. AM relies on the analytical formula of g∗ and uses an approximation zN of z∗ , leading to the approximate gradient g1N ( D ) = ∇2F ( zN ( D ) , D ) . We evaluate how well g1N approximates g∗ in Proposition 2.2 . Proposition 2.2 Let D ∈ Rm×n . 
Then, there exists a constant $L_1 > 0$ such that for every number of iterations $N$, $\|g^1_N - g^*\| \leq L_1 \|z_N(D) - z^*(D)\|$. Proposition 2.2 shows that $g^1_N$ converges as fast as the iterates of ISTA converge. DDL computes the gradient automatically through $z_N(D)$. As opposed to AM, this directly minimizes the loss $G_N(D)$. Automatic differentiation yields a sub-gradient $g^2_N(D)$ such that

$g^2_N(D) \in \nabla_2 F(z_N(D), D) + J_N^+(\partial_1 F(z_N(D), D))$,   (4)

where $J_N : \mathbb{R}^{m \times n} \to \mathbb{R}^n$ is the weak Jacobian of $z_N(D)$ with respect to $D$ and $J_N^+$ denotes its adjoint. The product between $J_N^+$ and $\partial_1 F(z_N(D), D)$ is computed via automatic differentiation.

Proposition 2.3 Let $D \in \mathbb{R}^{m \times n}$. Let $S^*$ be the support of $z^*(D)$, $S_N$ be the support of $z_N$, and $\tilde{S}_N = S_N \cup S^*$. Let $f(z, D) = \frac{1}{2}\|Dz - y\|_2^2$ be the data-fitting term in $F$. Let $R(J, \tilde{S}) = J^+(\nabla^2_{1,1} f(z^*, D)\, \mathbf{1}_{\tilde{S}}) + \nabla^2_{2,1} f(z^*, D)\, \mathbf{1}_{\tilde{S}}$. Then there exists a constant $L_2 > 0$ and a subsequence of (F)ISTA iterates $z_{\varphi(N)}$ such that for all $N \in \mathbb{N}$:

$\exists\, g^2_{\varphi(N)} \in \nabla_2 f(z_{\varphi(N)}, D) + J^+_{\varphi(N)}\big(\nabla_1 f(z_{\varphi(N)}, D) + \lambda\, \partial\|\cdot\|_1(z_{\varphi(N)})\big)$ s.t. $\big\|g^2_{\varphi(N)} - g^*\big\| \leq \big\|R(J_{\varphi(N)}, \tilde{S}_{\varphi(N)})\big\| \big\|z_{\varphi(N)} - z^*\big\| + \frac{L_2}{2} \big\|z_{\varphi(N)} - z^*\big\|^2$.

This subsequence $z_{\varphi(N)}$ corresponds to iterates on the support of $z^*$. Proposition 2.3 shows that $g^2_N$ may converge faster than $g^1_N$ once the support is reached. Ablin et al. (2020) and Tolooshams & Ba (2021) have studied the behavior of strongly convex functions, as is the case on the support, and found similar results. This allowed Tolooshams & Ba (2021) to focus on support identification and show that automatic differentiation leads to a better gradient estimation in dictionary learning on the support under minor assumptions. However, we are also interested in characterizing the behavior outside of the support, where the gradient estimation is difficult because of the sub-differential.
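For the Lasso inner problem $F(z, D) = \frac{1}{2}\|Dz - y\|_2^2 + \lambda\|z\|_1$, the analytical gradient used by AM has a simple closed form, since the $\ell_1$ term does not depend on $D$. A small NumPy sketch (the function names are ours):

```python
import numpy as np

def am_gradient(z_N, D, y):
    """AM gradient estimate g1_N(D) = grad_D F(z_N(D), D) with z_N held
    fixed. For F(z, D) = 0.5*||Dz - y||^2 + lam*||z||_1, the l1 term
    does not depend on D, so the gradient is the rank-one outer product
    (D z_N - y) z_N^T."""
    residual = D @ z_N - y
    return np.outer(residual, z_N)
```

The quality of this estimate is then entirely governed by how close $z_N$ is to $z^*$, as stated in Proposition 2.2.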
In practice, automatic differentiation uses the sign operator as a sub-gradient of $\|\cdot\|_1$. The convergence behavior of $g^2_N$ is also driven by $R(J_N, \tilde{S}_N)$ and thus by the weak Jacobian computed via back-propagation. We first compute a closed-form expression of the weak Jacobian of $z^*(D)$ and $z_N(D)$. We then show that $R(J_N, \tilde{S}_N) \leq L \|J_N - J^*\|$ and we analyze the convergence of $J_N$ towards $J^*$.

Study of the Jacobian. The computation of the Jacobian can be done by differentiating through ISTA. In Theorem 2.4, we show that $J_{N+1}$ depends on $J_N$ and the past iterate $z_N$, and converges towards a fixed point. This formula can be used to compute the Jacobian during the forward pass, avoiding the computational cost of back-propagation and saving memory.

Theorem 2.4 At iteration $N + 1$ of ISTA, the weak Jacobian of $z_{N+1}$ relative to $D_l$, where $D_l$ is the $l$-th row of $D$, is given by induction:

$\frac{\partial z_{N+1}}{\partial D_l} = \mathbf{1}_{|z_{N+1}| > 0} \left( \frac{\partial z_N}{\partial D_l} - \frac{1}{L} \left( D_l z_N^\top + (D_l^\top z_N - y_l) I_n + D^\top D\, \frac{\partial z_N}{\partial D_l} \right) \right).$

$\frac{\partial z_N}{\partial D_l}$ will be denoted by $J_N^l$. It converges towards the weak Jacobian $J^*_l$ of $z^*$ relative to $D_l$, whose values are $J^*_{l, S^*} = -\big(D_{:,S^*}^\top D_{:,S^*}\big)^{-1} \big(D_l z^{*\top} + (D_l^\top z^* - y_l) I_n\big)_{S^*}$ on the support $S^*$ of $z^*$, and $0$ elsewhere. Moreover, $R(J^*, S^*) = 0$.

This result is similar to Bertrand et al. (2020), where the Jacobian of $z$ is computed over $\lambda$ to perform hyper-parameter optimization in Lasso-type models. Using $R(J^*, S^*) = 0$, we can write

$\|R(J_N, \tilde{S}_N)\| \leq \|R(J_N, \tilde{S}_N) - R(J^*, S^*)\| \leq L \|J_N - J^*\|$,   (5)

as $\|\nabla^2_{1,1} f(z^*, D)\|_2 = L$. If the back-propagation were to output an accurate estimate $J_N$ of the weak Jacobian $J^*$, $\|R(J_N, \tilde{S}_N)\|$ would be $0$, and the convergence rate of $g^2_N$ could be twice as fast as the one of $g^1_N$. To quantify this, we now analyze the convergence of $J_N$ towards $J^*$.
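The induction of Theorem 2.4 can be propagated alongside the ISTA iterates in the forward pass. The sketch below (our own helper names; the row $D_l$ is treated as a vector in $\mathbb{R}^n$ and the step size $1/L$ is held fixed so that the recursion matches the theorem) computes $z_N$ and $J_N^l$ jointly:

```python
import numpy as np

def soft_threshold(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def ista_jacobian(D, y, lam, l, n_iter, L=None):
    """Run ISTA for the Lasso and propagate the weak Jacobian
    J_N = dz_N/dD_l (Theorem 2.4) in the forward pass. J is n x n:
    column j is the derivative of z with respect to D[l, j]."""
    m, n = D.shape
    if L is None:
        L = np.linalg.norm(D, ord=2) ** 2
    DtD = D.T @ D
    d = D[l]
    z = np.zeros(n)
    J = np.zeros((n, n))
    for _ in range(n_iter):
        z_new = soft_threshold(z - D.T @ (D @ z - y) / L, lam / L)
        active = (np.abs(z_new) > 0).astype(float)[:, None]
        # J_{k+1} = 1_{|z_{k+1}|>0} (J_k - (1/L)(D_l z_k^T + (D_l^T z_k - y_l) I_n + D^T D J_k))
        J = active * (J - (np.outer(d, z) + (d @ z - y[l]) * np.eye(n) + DtD @ J) / L)
        z = z_new
    return z, J
```

Because the recursion is the exact derivative of the $N$-step ISTA map (wherever it is differentiable), it agrees with finite differences of $z_N$ with respect to the entries of $D_l$.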
In Proposition 2.5, we compute an upper bound of $\|J_N^l - J^*_l\|$ with possible usage of truncated back-propagation (Shaban et al., 2019). Truncated back-propagation of depth $K$ corresponds to an initial estimate of the Jacobian $J_{N-K} = 0$ and iterating the induction in Theorem 2.4.

Proposition 2.5 Let $N$ be the number of iterations and $K$ be the back-propagation depth. We assume that $\forall n \geq N - K$, $S^* \subset S_n$. Let $\bar{E}_n = S_n \setminus S^*$, let $L$ be the largest eigenvalue of $D_{:,S^*}^\top D_{:,S^*}$, and let $\mu_n$ be the smallest eigenvalue of $D_{:,S_n}^\top D_{:,S_n}$. Let $B_n = \big\|P_{\bar{E}_n} - D_{:,\bar{E}_n}^\top D_{:,S^*}^{\dagger\top} P_{S^*}\big\|$, where $P_S$ is the projection on $\mathbb{R}^S$ and $D^\dagger$ is the pseudo-inverse of $D$. We have

$\|J_N^l - J^*_l\| \leq \prod_{k=1}^{K} \Big(1 - \frac{\mu_{N-k}}{L}\Big) \|J^*_l\| + \frac{2}{L} \|D_l\| \sum_{k=0}^{K-1} \prod_{i=1}^{k} \Big(1 - \frac{\mu_{N-i}}{L}\Big) \big( \|z_{N-k} - z^*\| + B_{N-k} \|z^*\| \big).$

Proposition 2.5 reveals multiple stages in the Jacobian estimation. First, one can see that if all iterates used for the back-propagation lie on the support $S^*$, the Jacobian estimate has a quasi-linear convergence, as shown in the following corollary.

Corollary 2.6 Let $\mu > 0$ be the smallest eigenvalue of $D_{:,S^*}^\top D_{:,S^*}$. Let $K \leq N$ be the back-propagation depth and let $\Delta_N = F(z_N, D) - F(z^*, D) + \frac{L}{2}\|z_N - z^*\|^2$. Suppose that $\forall n \in [N - K, N]$, $S_n \subset S^*$. Then, we have

$\|J^*_l - J_N^l\| \leq \Big(1 - \frac{\mu}{L}\Big)^K \|J^*_l\| + K \Big(1 - \frac{\mu}{L}\Big)^{K-1} \|D_l\| \frac{4\Delta_{N-K}}{L^2}.$

Once the support is reached, ISTA also converges with the same linear rate $(1 - \frac{\mu}{L})$. Thus the gradient estimate $g^2_N$ converges almost twice as fast as $g^1_N$ in the best case – with optimal sub-gradient – as $O(K (1 - \frac{\mu}{L})^{2K})$. This is similar to Ablin et al. (2020, Proposition 5) and Tolooshams & Ba (2021). Second, Proposition 2.5 shows that $\|J^*_l - J_N^l\|$ may increase when the support is not well-estimated, leading to a deterioration of the gradient estimate. This is due to an accumulation of errors, materialized by the sum on the right-hand side of the inequality, as the term $B_N \|z^*\|$ may not vanish to $0$ as long as $S_N \not\subset S^*$.
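Truncated back-propagation of depth $K$ can be mimicked in the forward pass by initializing the Jacobian estimate to $J_{N-K} = 0$ and iterating the induction of Theorem 2.4 only over the last $K$ steps. A sketch (helper names are ours, not from the paper):

```python
import numpy as np

def soft_threshold(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def ista_truncated_jacobian(D, y, lam, l, n_iter, K):
    """ISTA with a Jacobian estimate corresponding to truncated
    back-propagation of depth K (Shaban et al., 2019): J_{N-K} = 0 and
    the induction of Theorem 2.4 is iterated over the last K steps."""
    m, n = D.shape
    L = np.linalg.norm(D, ord=2) ** 2
    DtD = D.T @ D
    d = D[l]
    z = np.zeros(n)
    J = np.zeros((n, n))
    for k in range(n_iter):
        z_new = soft_threshold(z - D.T @ (D @ z - y) / L, lam / L)
        if k >= n_iter - K:  # only back-propagate through the last K iterations
            active = (np.abs(z_new) > 0).astype(float)[:, None]
            J = active * (J - (np.outer(d, z) + (d @ z - y[l]) * np.eye(n) + DtD @ J) / L)
        z = z_new
    return z, J
```

The depth $K$ only affects the Jacobian (and hence the gradient) estimate, never the iterate $z_N$ itself, which is why truncation can stabilize training without changing the sparse codes.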
Interestingly, once the support is reached at iteration $S < N$, the errors converge linearly towards $0$, and we recover the fast estimation of $g^*$ with $g^2$. Therefore, Lasso-based DDL should either be used with a low number of steps or with truncated back-propagation to ensure stability. These results apply to all linear dictionaries, including convolutions.

Numerical illustrations. We now illustrate these theoretical results depending on the number $N$ of unrolled iterations. The data are generated from a random Gaussian dictionary $D$ of size $30 \times 50$, with Bernoulli-Gaussian sparse codes $z$ (sparsity 0.3, $\sigma_z^2 = 1$) and Gaussian noise ($\sigma_{noise}^2 = 0.1$) – more details in Appendix A. Figure 1 confirms the linear convergence of $J_N^l$ once the support is reached. However, the convergence might become unstable when the number of iterations grows, leading to exploding gradients, as illustrated in the second case. When this happens, using a small number of iterations or truncated back-propagation becomes necessary to prevent accumulating errors. It is also of interest to look at the proportion of unstable Jacobians (see Figure 2). We recover the behaviors observed in the first and second cases of Figure 1. 40% of samples suffer from numerical instabilities in this example. This has a negative impact on the gradient estimation outside of the support. We display the convergence behavior of the gradients estimated by AM and by DDL with different back-propagation depths (20, 50, full) for simulated data and images in Figure 3. We unroll FISTA instead of ISTA to make the convergence faster. We observed similar
The authors investigate the asymptotic behavior of unrolling applied to dictionary learning. Unrolling applies here because dictionary learning can be reformulated as a bilevel optimization problem, where the lower-level (or inner) problem is a sparse coding problem (a LASSO problem in the case considered here). Unrolling means replacing the argmin in the lower-level problem with the $N$-th iterate of a suitable optimization algorithm. Gradients of the upper-level loss can then be computed by means of backpropagation through the algorithmic iterates. The authors study the behavior of the resulting gradients depending on $N$ and draw a comparison with alternating minimization. They find that unrolling constitutes a scalable alternative to alternating minimization, where unrolling a relatively small number of iterations or using truncated backpropagation is favorable to ensure stable approximate gradients.
Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting
1 INTRODUCTION. In the simplest case, time series forecasting deals with a scalar time-varying signal and aims to predict or forecast its values in the near future; countless applications in finance, healthcare, production automation, etc. (Carta et al., 2021; Cao et al., 2018; Sagheer & Kotb, 2019) can benefit from an accurate forecasting solution. Often not just a single scalar signal is of interest, but multiple at once, and further time-varying signals are available and even known for the future. For example, if one aims to forecast the energy consumption of a house, it likely depends on the social time one seeks to forecast for (such as the next hour or day), and also on features of these time points (such as weekday, daylight, etc.), which are already known for the future. This is also the case in model predictive control (Camacho & Alba, 2013): when one is interested in forecasting the expected value realized by some planned action, this action is also known at the time of forecast. More generally, time series forecasting nowadays deals with quadruples (x, y, x′, y′) of known past predictors x, known past targets y, known future predictors x′ and sought future targets y′ (Figure 3 in appendix section A provides a simple illustration). Time series problems can often be addressed by methods developed initially for images, treating them as 1-dimensional images. Especially for time-series classification, many typical time series encoder architectures have been adapted from models for images (Wang et al., 2017; Zou et al., 2019).
Time series forecasting then is closely related to image outpainting (Van Hoorick, 2019), the task of predicting how an image likely extends to the left, right, top or bottom, as well as to the more well-known task of image segmentation, where for each input pixel an output pixel has to be predicted, whose channels encode pixel-wise classes such as vehicle, road, or pedestrian, say, for road scenes. Time series forecasting combines aspects from both problem settings: information about targets from shifted positions (e.g., the past targets y, as in image outpainting) and information about other channels from the same positions (e.g., the future predictors x′, as in image segmentation). One of the most successful, principled architectures for the image segmentation task is the U-Net, introduced in Ronneberger et al. (2015), an architecture that successively downsamples / coarsens its inputs and then upsamples / refines the latent representation with deconvolutions, also using the latent representations of the same detail level, tightly coupling down- and upsampling procedures and thus yielding latent features at the same resolution as the inputs. Following the great success in Natural Language Processing (NLP) applications, attention-based, especially transformer-based, architectures (Vaswani et al., 2017) that model pairwise interactions between sequence elements have recently been adapted for time series forecasting. One of the significant challenges is that the length of the time series is often one or two orders of magnitude larger than in the (sentence-level) NLP problems. Plenty of approaches aim to mitigate the quadratic complexity O(T²) in the sequence/time series length T to at most O(T log T). For example, the Informer architecture (Zhou et al.
, 2020), arguably one of the most accurate forecasting models researched so far, adapts the transformer with a sparse attention mechanism and a successive downsampling/coarsening of the past time series. As in the original transformer, only the coarsest representation is fed into the decoder. Possibly to remedy the loss in resolution caused by this procedure, the Informer feeds its input a second time into the decoder network, this time without any coarsening. While forecasting problems share many commonalities with image segmentation problems, transformer-based architectures like the Informer do not involve coupled down- and upscaling procedures to yield predictions at the same resolution as the inputs. Thus, we propose a novel Y-shaped architecture called Yformer that 1. couples downscaling/upscaling to leverage both coarse and fine-grained features for time series forecasting, 2. combines the coupled scaling mechanism with sparse attention modules to capture long-range effects on all scale levels, and 3. stabilizes encoder and decoder stacks by reconstructing the recent past.

2 RELATED WORK. Deep Learning Based Time Series Forecasting: While Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based architectures (Salinas et al., 2020; Rangapuram et al., 2018) outperform traditional methods like ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018), the addition of attention layers (Vaswani et al., 2017) to model time series forecasting has proven to be very beneficial across different problem settings (Fan et al., 2019; Qin et al., 2017; Lai et al., 2018). Attention allows direct pair-wise interaction with eccentric events (like holidays) and can model temporal dynamics inherently, unlike RNNs and CNNs, which fail to capture long-range dependencies directly. Recent work like the Reformer (Kitaev et al., 2020), Linformer (Wang et al.
, 2020) and Informer (Zhou et al., 2020) has focused on reducing the quadratic complexity of modeling pair-wise interactions to O(T log T) with the introduction of restricted attention layers. Consequently, these models can predict for longer forecasting horizons, but are hindered in their capability to aggregate features and maintain the resolution required for far horizon forecasting.

U-Net: The Yformer model is inspired by the famous U-Net architecture introduced in Ronneberger et al. (2015), originating from the field of medical image segmentation. The U-Net architecture is capable of compressing information by aggregating over the inputs and of up-sampling embeddings from their compressed latent features to the same resolution as that of the inputs. Current transformer architectures like the Informer (Zhou et al., 2020) do not utilize up-sampling techniques even though the network produces intermediate multi-resolution feature maps. Our work aims to capitalize on these multi-resolution feature maps and use the U-Net shape effectively for the task of time series forecasting. Previous works like Stoller et al. (2019) and Perslev et al. (2019) have successfully applied the U-Net architecture to the tasks of sequence modeling and time series segmentation, illustrating superior results in the respective tasks. These works motivate the use of a U-Net-inspired architecture for time series forecasting, as current methods fail to couple a sparse attention mechanism with the U-Net shaped architecture. An additional related works section is decoupled from the main text and presented in appendix section B.

3 PROBLEM FORMULATION. By a time series $x$ with $M$ channels, we mean a finite sequence of vectors in $\mathbb{R}^M$; we denote their space by $\mathbb{R}^{*\times M} := \bigcup_{T\in\mathbb{N}} \mathbb{R}^{T\times M}$, and their length by $|x| := T$ (for $x \in \mathbb{R}^{T\times M}$, $M \in \mathbb{N}$). We write $(x, y) \in \mathbb{R}^{*\times (M+O)}$ to denote two time series of the same length with $M$ and $O$ channels, respectively.
We model a time series forecasting instance as a quadruple $(x, y, x', y') \in \mathbb{R}^{*\times (M+O)} \times \mathbb{R}^{*\times (M+O)}$, where $x, y$ denote the past predictors and targets until a reference time point $T$, and $x', y'$ denote the future predictors and targets from the reference point $T$ over the next $\tau$ time steps. Here, $\tau = |x'|$ is called the forecast horizon. For a Time Series Forecasting Problem, given (i) a sample $\mathcal{D} := \{(x_1, y_1, x'_1, y'_1), \ldots, (x_N, y_N, x'_N, y'_N)\}$ from an unknown distribution $p$ of time series forecasting instances and (ii) a function $\ell : \mathbb{R}^{*\times (O+O)} \to \mathbb{R}$ called loss, we attempt to find a function $\hat{y} : \mathbb{R}^{*\times (M+O)} \times \mathbb{R}^{*\times M} \to \mathbb{R}^{*\times O}$ (with $|\hat{y}(x, y, x')| = |x'|$) with minimal expected loss

$\mathbb{E}_{(x, y, x', y') \sim p}\; \ell(y', \hat{y}(x, y, x'))$   (1)

The loss $\ell$ usually is the mean absolute error (MAE) or mean squared error (MSE) averaged over future time points:

$\ell_{mae}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \|y'_t - \hat{y}_t\|_1, \qquad \ell_{mse}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \|y'_t - \hat{y}_t\|_2^2$   (2)

Furthermore, if there is only one target channel and no predictor channels ($O = 1$, $M = 0$), the time series forecasting problem is called univariate, otherwise multivariate.

4 BACKGROUND. Our work incorporates a restricted-attention-based transformer in a U-Net inspired architecture. For this reason, we base our work on the current state-of-the-art sparse attention model Informer, introduced in Zhou et al. (2020). We provide a brief overview of the sparse attention mechanism (ProbSparse) and the encoder block (Contracting ProbSparse Self-Attention Blocks) used in the Informer model for completeness. ProbSparse Attention: The ProbSparse attention mechanism restricts the canonical attention (Vaswani et al., 2017) by selecting a subset $u$ of dominant queries having the largest variance across all the keys.
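Returning to the losses of equation (2): for arrays of shape $(\tau, O)$, averaging the per-step, per-channel errors reduces to a plain mean over all entries. A minimal NumPy sketch (function names are ours):

```python
import numpy as np

def mae_loss(y_true, y_pred):
    """l_mae of equation (2): the average over time steps of the
    per-step mean absolute channel error equals the mean over all
    entries of the (tau, O) arrays."""
    return np.mean(np.abs(y_true - y_pred))

def mse_loss(y_true, y_pred):
    """l_mse of equation (2), reduced in the same way."""
    return np.mean((y_true - y_pred) ** 2)
```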
Consequently, the query matrix $Q \in \mathbb{R}^{L_Q \times d}$ in the canonical attention is replaced by a sparse query matrix $\bar{Q} \in \mathbb{R}^{L_Q \times d}$ consisting of only the $u$ dominant queries. ProbSparse attention can hence be defined as:

$A_{ProbSparse}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{\bar{Q} K^\top}{\sqrt{d}}\right) V$   (3)

where $d$ denotes the input dimension to the attention module. For more details on the ProbSparse attention mechanism, we refer the reader to Zhou et al. (2020). Contracting ProbSparse Self-Attention Blocks: The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence $(x, y)$ in a pyramid structure motivated from the image domain (Lin et al., 2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation $h_i$ from the $i$-th block and projects the hidden representation into query, key and value for self-attention. This is followed by multiple layers of convolution (Conv1d), and finally the MaxPool operation reduces the latent dimension by effectively distilling out redundant information at each block. We refer the reader to Algorithm 2 in appendix section C, where these operations are presented in an algorithmic structure for a comprehensive overview.

5 METHODOLOGY. The Yformer model is a Y-shaped (Figure 1b) symmetric encoder-decoder architecture that is specifically designed to take advantage of the multi-resolution embeddings generated by the Contracting ProbSparse Self-Attention Blocks. The fundamental design consideration is the adoption of U-Net-inspired connections to extract encoder features at multiple resolutions and provide a direct connection to the corresponding symmetric decoder block (a simple illustration is provided in Figure 4, appendix section A). Furthermore, the addition of a reconstruction loss helps the model learn generalized embeddings that better approximate the data generating distribution.
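The ProbSparse idea of equation (3) can be sketched in a few lines of NumPy. This is our own simplification, not the exact Informer procedure: we score queries with a max-minus-mean heuristic and let the non-selected "lazy" queries fall back to the mean of $V$, following the spirit of Zhou et al. (2020).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def probsparse_attention(Q, K, V, u):
    """Simplified ProbSparse attention sketch: only the u most
    'dominant' queries attend; the remaining queries output the mean
    of V (a cheap default, as in the Informer's lazy queries)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (L_Q, L_K)
    # Heuristic query-sparsity measurement: max minus mean of each row.
    sparsity = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(-sparsity)[:u]                  # u dominant queries
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))   # lazy queries -> mean(V)
    out[top] = softmax(scores[top], axis=1) @ V      # active queries -> attention
    return out
```

With `u` equal to the number of queries, the sketch reduces to full canonical attention, which makes the restriction easy to sanity-check.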
The Y-Past Encoder of the Yformer is designed using a similar encoder structure as that of the Informer. The Y-Past Encoder embeds the past sequence $(x, y)$ into a scalar projection along with the addition of positional and temporal embeddings. Multiple Contracting ProbSparse Self-Attention Blocks are used to generate encoder embeddings at various resolutions. The Informer model uses the final low-dimensional embedding as the input to the decoder (Figure 1a), whereas the Yformer retains the embeddings at multiple resolutions to be passed on to the decoder. This allows the Yformer to use high-dimensional lower-level embeddings effectively. The Y-Future Encoder of the Yformer mitigates the issue of the redundant reprocessing of parts of the past sequence $(x, y)$ used as tokens $(x_{token}, y_{token})$ in the Informer architecture. The Informer model uses only the coarsest representation from the encoder embedding, leading to a loss in resolution and forcing the Informer to pass part of the past sequence as tokens $(x_{token}, y_{token})$ to the decoder (Figure 1a). The Yformer separates the future predictors from the past sequence $(x, y)$ by passing the future predictors $(x')$ through a separate encoder, and utilizes the multi-resolution embeddings to dismiss the need for tokens entirely. Unlike in the Y-Past Encoder, the attention blocks in the Y-Future Encoder are based on the masked canonical self-attention mechanism (Vaswani et al., 2017). Masking the attention ensures that there is no information leak from the future time steps to the past. Moreover, a masked canonical self-attention mechanism helps reduce the complexity, as half of the query-key interactions are restricted by design. Thus, the Y-Future Encoder is designed by stacking multiple Contracting ProbSparse Self-Attention Blocks where the ProbSparse attention is replaced by the masked attention.
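The masking described above can be sketched as a generic causal mask on canonical self-attention (not the paper's exact implementation; for brevity, $X$ serves as queries, keys and values):

```python
import numpy as np

def masked_self_attention(X, d):
    """Masked canonical self-attention sketch (Vaswani et al., 2017):
    position t may only attend to positions <= t, so no information
    leaks from future time steps to the past. X has shape (T, d)."""
    T = X.shape[0]
    scores = X @ X.T / np.sqrt(d)
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # True above the diagonal
    scores[mask] = -np.inf                            # forbid attending to the future
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X
```

Because the upper triangle is set to $-\infty$ before the softmax, roughly half of all query-key interactions are never computed in a fused implementation, which is the complexity saving mentioned above.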
We name these blocks Contracting Masked Self-Attention Blocks (Algorithm 3, appendix section C). The Yformer processes the past inputs and the future predictors separately within its encoders. However, considering the time steps, the future predictors are a continuation of the past time steps. For this reason, the Yformer model concatenates (represented by the symbol ++) the past encoder embedding and the future encoder embedding along the time dimension after each encoder block, preserving the continuity between the past input time steps and the future time steps. Let $i$ represent the index of an encoder block; then $e^{past}_{i+1}$ and $e^{fut}_{i+1}$ represent the outputs from the past encoder and the future encoder, respectively. The final concatenated encoder embedding $e_{i+1}$ is calculated as

$e^{past}_{i+1} = \mathrm{ContractingProbSparseSelfAttentionBlock}(e^{past}_i)$
$e^{fut}_{i+1} = \mathrm{ContractingMaskedSelfAttentionBlock}(e^{fut}_i)$
$e_{i+1} = e^{past}_{i+1} \mathbin{+\!+} e^{fut}_{i+1}$   (4)

The encoder embeddings, represented by $E = [e_0, \ldots, e_I]$ (where $I$ is the number of encoder layers), contain the combination of past and future embeddings at multiple resolutions. The Y-Decoder of the Yformer consists of two parts. The first part takes as input the final concatenated low-dimensional embedding $e_I$ and performs a multi-head canonical self-attention mechanism. Here, the past encoder embedding $e^{past}_I$ is allowed to attend to itself as well as to the future encoder embedding $e^{fut}_I$ in an unrestricted fashion. The encoder embedding $e_I$ is the low-dimensional distilled embedding, and skipping query-key interactions within these low-dimensional embeddings might deny the model useful pair-wise interactions. Therefore, it is by design that this is the only part of the Yformer model that uses canonical self-attention, in comparison to the Informer that uses canonical attention within its repeating decoder block, as shown in Figure 1a.
Since the canonical self-attention layer is separated from the repeating attention blocks within the decoder, the Yformer complexity from this full attention module does not increase with the number of decoder blocks. The U-Net architecture inspires the second part of the Y-Decoder. Consequently, the decoder is structured in a symmetric expanding path identical to the contracting encoder. We realize this architecture by introducing upsampling on the ProbSparse attention mechanism using the Expanding ProbSparse Cross-Attention Block. The Expanding ProbSparse Cross-Attention Block within the Yformer decoder performs two tasks: (1) upsample the compressed encoder embedding $e_I$ and (2) perform restricted cross-attention between the expanding decoder embedding $d_{I-i}$ and the corresponding encoder embedding $e_i$ (represented in Figure 4, appendix section A). We accomplish both tasks by introducing the Expanding ProbSparse Cross-Attention Block as illustrated in Algorithm 1.

Algorithm 1 Expanding ProbSparse Cross-Attention Block
Input: $d_{I-i}$, $e_i$; Output: $d_{I-i+1}$
  $d_{I-i+1}$ ← ProbSparseCrossAttn($d_{I-i}$, $e_i$)
  $d_{I-i+1}$ ← Conv1d($d_{I-i+1}$)
  $d_{I-i+1}$ ← Conv1d($d_{I-i+1}$)
  $d_{I-i+1}$ ← LayerNorm($d_{I-i+1}$)
  $d_{I-i+1}$ ← ELU(ConvTranspose1d($d_{I-i+1}$))

The Expanding ProbSparse Cross-Attention Blocks within the Yformer decoder use a ProbSparseCrossAttn to construct direct connections between the lower levels of the encoder and the corresponding symmetric higher levels of the decoder. Direct connections from the encoder to the decoder are an essential component of the majority of models in the image domain. For example, ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) have demonstrated that direct connections between previous feature maps strengthen feature propagation, reduce parameters, mitigate vanishing gradients and encourage feature reuse.
However, current transformer-based architectures like the Informer fail to utilize such direct connections. The ProbSparseCrossAttn takes as input the decoder embedding from the previous layer $d_{I-i}$ as queries and the corresponding encoder embedding $e_i$ as keys. The Yformer uses the ProbSparse restricted attention so that the model remains scalable with an increase in the number of decoder blocks. We utilize ConvTranspose1d, popularly known as deconvolution, for incrementally increasing the embedding space. The famous U-Net architecture uses a symmetric expanding path built from such deconvolution layers. This property enables the model not only to aggregate over the input but also to upscale the latent dimensions, improving the overall expressivity of the architecture. The decoder of the Yformer follows a similar strategy by employing deconvolution to expand the embedding space of the encoded output. We describe the different operators used in appendix section C. A fully connected layer (LinearLayer) predicts the future time steps $\hat{y}^{fut}$ from the final decoder layer $d_I$ and additionally reconstructs the past input targets $\hat{y}^{past}$:

$[\hat{y}^{past}, \hat{y}^{fut}] = \mathrm{LinearLayer}(d_I)$   (5)

The addition of the reconstruction loss to the Yformer as an auxiliary loss serves two significant purposes. Firstly, the reconstruction loss acts as a data-dependent regularization term that reduces overfitting by learning embeddings that are more general (Ghasedi Dizaji et al., 2017; Jarrett & van der Schaar, 2020). Secondly, the reconstruction loss helps in producing future output in a distribution similar to that of the inputs (Bank et al., 2020). For far horizon forecasting, we are interested in learning a future-output distribution. However, the future-output distribution and the past-input distribution arise from the same data generating process. Therefore, having an auxiliary reconstruction loss directs the gradients towards a better approximation of the data generating process.
The Yformer model is trained on the combined loss $\ell$,

$\ell = \alpha\, \ell_{mse}(y, \hat{y}^{past}) + (1 - \alpha)\, \ell_{mse}(y', \hat{y}^{fut})$   (6)

where the first term learns the past targets $y$ and the second term learns the future targets $y'$. We use the reconstruction factor $\alpha$ to vary the relative importance of reconstruction and future prediction, and tune it as a hyperparameter.
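The combined objective of equation (6) can be sketched directly; the value of $\alpha$ below is an arbitrary illustration, and the helper names are ours:

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def yformer_loss(y_past, y_past_hat, y_fut, y_fut_hat, alpha=0.3):
    """Combined training loss of equation (6): alpha weighs the
    auxiliary reconstruction of the past targets against the future
    prediction term (alpha = 0.3 is an illustrative value, to be tuned
    as a hyperparameter)."""
    return alpha * mse(y_past, y_past_hat) + (1.0 - alpha) * mse(y_fut, y_fut_hat)
```

Setting `alpha=0` recovers a pure forecasting loss, while `alpha=1` trains only the reconstruction, which makes the role of the reconstruction factor easy to inspect.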
Recent works such as the Informer have used efficient attention mechanisms and shown significant performance improvements on long-sequence time-series forecasting problems. However, the authors argue that using only the coarsest past representations for the decoder could be a major limitation. In this paper, the authors propose the Yformer model by combining the Informer and U-Net architectures. They adopt direct connections from the multi-resolution encoder to the decoder to leverage both coarse and fine-grained representations. The authors demonstrate the effectiveness of the proposed method on the three benchmark datasets used in the Informer paper.
SP:7ce909645e709416d43fa0e795f6cf2831a0757e
Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting
1 INTRODUCTION . In the most simple case , time series forecasting deals with a scalar time-varying signal and aims to predict or forecast its values in the near future ; for example , countless applications in finance , healthcare , production automatization , etc . ( Carta et al. , 2021 ; Cao et al. , 2018 ; Sagheer & Kotb , 2019 ) can benefit from an accurate forecasting solution . Often not just a single scalar signal is of interest , but multiple at once , and further time-varying signals are available and even known for the future . For example , suppose one aims to forecast the energy consumption of a house , it likely depends on the social time that one seeks to forecast for ( such as the next hour or day ) , and also on features of these time points ( such as weekday , daylight , etc . ) , which are known already for the future . This is also the case in model predictive control ( Camacho & Alba , 2013 ) , where one is interested to forecast the expected value realized by some planned action , then this action is also known at the time of forecast . More generally , time series forecasting , nowadays deals with quadruples ( x , y , x′ , y′ ) of known past predictors x , known past targets y , known future predictors x′ and sought future targets y′ . ( Figure 3 in appendix section A provides a simple illustration ) Time series problems can often be addressed by methods developed initially for images , treating them as 1-dimensional images . Especially for time-series classification many typical time series encoder architectures have been adapted from models for images ( Wang et al. , 2017 ; Zou et al. , 2019 ) . 
Time series forecasting then is closely related to image outpainting ( Van Hoorick , 2019 ) , the task to predict how an image likely extends to the left , right , top or bottom , as well as to the more well-known task of image segmentation , where for each input pixel , an output pixel has to be predicted , whose channels encode pixel-wise classes such as vehicle , road , pedestrian say for road scenes . Time series forecasting combines aspects from both problem settings : information about targets from shifted positions ( e.g. , the past targets y as in image outpainting ) and information about other channels from the same positions ( e.g. , the future predictors x′ as in image segmentation ) . One of the most successful , principled architectures for the image segmentation task are U-Nets introduced in Ronneberger et al . ( 2015 ) , an architecture that successively downsamples / coarsens its inputs and then upsamples / refines the latent representation with deconvolutions also using the latent representations of the same detail level , tightly coupling down- and upsampling procedures and thus yielding latent features on the same resolution as the inputs . Following the great success in Natural Language Processing ( NLP ) applications , attention-based , esp . transformer-based architectures ( Vaswani et al. , 2017 ) that model pairwise interactions between sequence elements have been recently adapted for time series forecasting . One of the significant challenges , is that the length of the time series , are often one or two magnitudes of order larger than the ( sentence-level ) NLP problems . Plenty of approaches aim to mitigate the quadratic complexity O ( T 2 ) in the sequence/time series length T to at mostO ( T log T ) . For example , the Informer architecture ( Zhou et al. 
, 2020), arguably one of the most accurate forecasting models researched so far, adapts the transformer with a sparse attention mechanism and a successive downsampling/coarsening of the past time series. As in the original transformer, only the coarsest representation is fed into the decoder. Possibly to remedy the loss in resolution caused by this procedure, the Informer feeds its input a second time into the decoder network, this time without any coarsening. While forecasting problems share many commonalities with image segmentation problems, transformer-based architectures like the Informer do not involve coupled down- and upscaling procedures to yield predictions at the same resolution as the inputs. Thus, we propose a novel Y-shaped architecture called Yformer that
1. couples downscaling/upscaling to leverage both coarse and fine-grained features for time series forecasting,
2. combines the coupled scaling mechanism with sparse attention modules to capture long-range effects on all scale levels, and
3. stabilizes encoder and decoder stacks by reconstructing the recent past.

2 RELATED WORK. Deep Learning Based Time Series Forecasting: While Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based architectures (Salinas et al., 2020; Rangapuram et al., 2018) outperform traditional methods like ARIMA (Box & Jenkins, 1968) and exponential smoothing methods (Hyndman & Athanasopoulos, 2018), the addition of attention layers (Vaswani et al., 2017) for time series forecasting has proven to be very beneficial across different problem settings (Fan et al., 2019; Qin et al., 2017; Lai et al., 2018). Attention allows direct pair-wise interaction with eccentric events (like holidays) and can model temporal dynamics inherently, unlike RNNs and CNNs, which fail to capture long-range dependencies directly. Recent work like Reformer (Kitaev et al., 2020), Linformer (Wang et al.
, 2020) and Informer (Zhou et al., 2020) has focused on reducing the quadratic complexity of modeling pair-wise interactions to O(T log T) with the introduction of restricted attention layers. Consequently, these models can predict for longer forecasting horizons, but they are limited in their capability to aggregate features and maintain the resolution required for far horizon forecasting. U-Net: The Yformer model is inspired by the famous U-Net architecture introduced in Ronneberger et al. (2015), originating from the field of medical image segmentation. The U-Net architecture is capable of compressing information by aggregating over the inputs and of up-sampling embeddings from their compressed latent features back to the same resolution as the inputs. Current transformer architectures like the Informer (Zhou et al., 2020) do not utilize up-sampling techniques even though the network produces intermediate multi-resolution feature maps. Our work aims to capitalize on these multi-resolution feature maps and use the U-Net shape effectively for the task of time series forecasting. Previous works like Stoller et al. (2019) and Perslev et al. (2019) have successfully applied the U-Net architecture to sequence modeling and time series segmentation, illustrating superior results in the respective tasks. These works motivate the use of a U-Net-inspired architecture for time series forecasting, as current methods fail to couple a sparse attention mechanism with the U-Net shaped architecture. An additional related work section is decoupled from the main text and presented in appendix section B.

3 PROBLEM FORMULATION. By a time series x with M channels, we mean a finite sequence of vectors in $\mathbb{R}^M$; we denote their space by $\mathbb{R}^{*\times M} := \bigcup_{T\in\mathbb{N}} \mathbb{R}^{T\times M}$ and their length by $|x| := T$ (for $x \in \mathbb{R}^{T\times M}$, $M \in \mathbb{N}$). We write $(x, y) \in \mathbb{R}^{*\times(M+O)}$ to denote two time series of the same length with M and O channels, respectively.
We model a time series forecasting instance as a quadruple $(x, y, x', y') \in \mathbb{R}^{*\times(M+O)} \times \mathbb{R}^{*\times(M+O)}$, where x, y denote the past predictors and targets until a reference time point T, and x′, y′ denote the future predictors and targets from the reference point T over the next τ time steps. Here, $\tau = |x'|$ is called the forecast horizon. For a Time Series Forecasting Problem, given (i) a sample $D := \{(x_1, y_1, x'_1, y'_1), \ldots, (x_N, y_N, x'_N, y'_N)\}$ from an unknown distribution p of time series forecasting instances and (ii) a function $\ell : \mathbb{R}^{*\times(O+O)} \to \mathbb{R}$ called loss, we attempt to find a function $\hat{y} : \mathbb{R}^{*\times(M+O)} \times \mathbb{R}^{*\times M} \to \mathbb{R}^{*\times O}$ (with $|\hat{y}(x, y, x')| = |x'|$) with minimal expected loss

$$\mathbb{E}_{(x, y, x', y') \sim p}\; \ell(y', \hat{y}(x, y, x')) \quad (1)$$

The loss $\ell$ usually is the mean absolute error (MAE) or mean squared error (MSE) averaged over future time points:

$$\ell_{mae}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \| y'_t - \hat{y}_t \|_1, \qquad \ell_{mse}(y', \hat{y}) := \frac{1}{|y'|} \sum_{t=1}^{|y'|} \frac{1}{O} \| y'_t - \hat{y}_t \|_2^2 \quad (2)$$

Furthermore, if there is only one target channel and no predictor channels (O = 1, M = 0), the time series forecasting problem is called univariate, otherwise multivariate.

4 BACKGROUND. Our work incorporates a restricted-attention-based transformer in a U-Net inspired architecture. For this reason, we base our work on the current state-of-the-art sparse attention model Informer, introduced in Zhou et al. (2020). We provide a brief overview of the sparse attention mechanism (ProbSparse) and the encoder block (Contracting ProbSparse Self-Attention Blocks) used in the Informer model for completeness. ProbSparse Attention: The ProbSparse attention mechanism restricts the canonical attention (Vaswani et al., 2017) by selecting a subset of u dominant queries having the largest variance across all the keys.
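For concreteness, the two losses of eq. (2) can be written as a minimal numpy sketch; the function and argument names here are illustrative, not from the paper's code.

```python
import numpy as np

def l_mae(y_true, y_pred):
    # eq. (2): mean over time points of the per-step L1 norm, divided by O channels
    T, O = y_true.shape
    return (np.abs(y_true - y_pred).sum(axis=1) / O).mean()

def l_mse(y_true, y_pred):
    # eq. (2): mean over time points of the per-step squared L2 norm, divided by O
    T, O = y_true.shape
    return (((y_true - y_pred) ** 2).sum(axis=1) / O).mean()
```

Both reduce to the plain elementwise mean over the forecast window, which is how they are typically implemented in practice.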
Consequently, the query matrix $Q \in \mathbb{R}^{L_Q \times d}$ in the canonical attention is replaced by a sparse query matrix $\bar{Q} \in \mathbb{R}^{L_Q \times d}$ consisting of only the u dominant queries. ProbSparse attention can hence be defined as:

$$A_{ProbSparse}(Q, K, V) = \mathrm{Softmax}\left(\frac{\bar{Q} K^T}{\sqrt{d}}\right) V \quad (3)$$

where d denotes the input dimension to the attention module. For more details on the ProbSparse attention mechanism, we refer the reader to Zhou et al. (2020). Contracting ProbSparse Self-Attention Blocks: The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence (x, y) in a pyramid structure motivated by the image domain (Lin et al., 2017). The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation $h_i$ from the i-th block and projects it into query, key and value for self-attention. This is followed by multiple layers of convolution (Conv1d), and finally a MaxPool operation reduces the latent dimension, effectively distilling out redundant information at each block. We refer the reader to Algorithm 2 in appendix section C, where these operations are presented in algorithmic form for a comprehensive overview.

5 METHODOLOGY. The Yformer model is a Y-shaped (Figure 1b) symmetric encoder-decoder architecture that is specifically designed to take advantage of the multi-resolution embeddings generated by the Contracting ProbSparse Self-Attention Blocks. The fundamental design consideration is the adoption of U-Net-inspired connections to extract encoder features at multiple resolutions and provide a direct connection to the corresponding symmetric decoder block (a simple illustration is provided in Figure 4, appendix section A). Furthermore, the addition of a reconstruction loss helps the model learn generalized embeddings that better approximate the data generating distribution.
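For reference, the ProbSparse mechanism of eq. (3) can be sketched in numpy as follows. This is a deliberate simplification: we score queries by the variance of their attention-score row (as the description above suggests), whereas Zhou et al. (2020) use a sampled max-mean sparsity measurement; the fallback of lazy queries to the mean of V mirrors the Informer's self-attention variant.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def probsparse_attention(Q, K, V, u):
    """Only the u queries with the largest score variance attend over all
    keys; the remaining ("lazy") queries output the mean of V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (L_Q, L_K)
    dominant = np.argsort(scores.var(axis=1))[-u:]   # indices of dominant queries
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))   # lazy queries -> mean of values
    out[dominant] = softmax(scores[dominant]) @ V
    return out
```

Setting u equal to the number of queries recovers canonical attention exactly.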
The Y-Past Encoder of the Yformer is designed using a similar encoder structure as that of the Informer. The Y-Past Encoder embeds the past sequence (x, y) into a scalar projection along with the addition of positional and temporal embeddings. Multiple Contracting ProbSparse Self-Attention Blocks are used to generate encoder embeddings at various resolutions. The Informer model uses the final low-dimensional embedding as the input to the decoder (Figure 1a), whereas the Yformer retains the embeddings at multiple resolutions to be passed on to the decoder. This allows the Yformer to use high-dimensional lower-level embeddings effectively. The Y-Future Encoder of the Yformer mitigates the issue of the redundant reprocessing of parts of the past sequence (x, y) used as tokens $(x_{token}, y_{token})$ in the Informer architecture. The Informer model uses only the coarsest representation from the encoder embedding, leading to a loss in resolution and forcing the Informer to pass part of the past sequence as tokens $(x_{token}, y_{token})$ to the decoder (Figure 1a). The Yformer separates the future predictors from the past sequence (x, y) by passing the future predictors (x′) through a separate encoder and utilizing the multi-resolution embeddings to dismiss the need for tokens entirely. Unlike in the Y-Past Encoder, the attention blocks in the Y-Future Encoder are based on the masked canonical self-attention mechanism (Vaswani et al., 2017). Masking the attention ensures that there is no information leak from the future time steps to the past. Moreover, a masked canonical self-attention mechanism helps reduce the complexity, as half of the query-key interactions are restricted by design. Thus, the Y-Future Encoder is designed by stacking multiple Contracting ProbSparse Self-Attention Blocks where the ProbSparse attention is replaced by the masked attention.
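The masked canonical self-attention used in the Y-Future Encoder can be sketched as below; the causal mask zeroes out exactly the upper triangle of the score matrix, which is the "half of the query-key interactions" restricted by design. Names and the single-head setup are illustrative simplifications.

```python
import numpy as np

def masked_self_attention(X, Wq, Wk, Wv):
    """Single-head causal self-attention: every time step may only attend
    to itself and earlier steps, so no future information leaks backwards."""
    d = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    scores[np.triu_indices_from(scores, k=1)] = -np.inf  # forbid t -> t' > t
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

A quick sanity check: the first time step can only attend to itself, so its output equals its own value vector.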
We name these blocks Contracting Masked Self-Attention Blocks (Algorithm 3, appendix section C). The Yformer processes the past inputs and the future predictors separately within its encoders. However, considering the time steps, the future predictors are a continuation of the past time steps. For this reason, the Yformer model concatenates (represented by the symbol ++) the past encoder embedding and the future encoder embedding along the time dimension after each encoder block, preserving the continuity between the past input time steps and the future time steps. Let i represent the index of an encoder block; then $e^{past}_{i+1}$ and $e^{fut}_{i+1}$ represent the outputs from the past encoder and the future encoder, respectively. The final concatenated encoder embedding $e_{i+1}$ is calculated as

$$e^{past}_{i+1} = \mathrm{ContractingProbSparseSelfAttentionBlock}(e^{past}_i)$$
$$e^{fut}_{i+1} = \mathrm{ContractingMaskedSelfAttentionBlock}(e^{fut}_i)$$
$$e_{i+1} = e^{past}_{i+1} \mathbin{+\!+} e^{fut}_{i+1} \quad (4)$$

The encoder embeddings, represented by $E = [e_0, \ldots, e_I]$ (where I is the number of encoder layers), contain the combination of past and future embeddings at multiple resolutions. The Y-Decoder of the Yformer consists of two parts. The first part takes as input the final concatenated low-dimensional embedding $e_I$ and performs multi-head canonical self-attention. Here, the past encoder embedding $e^{past}_I$ is allowed to attend to itself as well as to the future encoder embedding $e^{fut}_I$ in an unrestricted fashion. The encoder embedding $e_I$ is the low-dimensional distilled embedding, and skipping query-key interactions within these low-dimensional embeddings might deny the model useful pair-wise interactions. Therefore, it is by design that this is the only part of the Yformer model that uses canonical self-attention, in comparison to the Informer, which uses canonical attention within its repeating decoder block, as shown in Figure 1a.
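The lock-step contraction and per-level concatenation of eq. (4) can be sketched as follows. Here each contracting block is stood in for by plain stride-2 average pooling over time (attention and convolution layers omitted), so only the bookkeeping of the multi-resolution embeddings E is illustrated; the function names are ours, not the paper's.

```python
import numpy as np

def contract(e):
    # stand-in for a Contracting ...Block: stride-2 average pooling over time
    T = e.shape[0] - e.shape[0] % 2
    return e[:T].reshape(T // 2, 2, -1).mean(axis=1)

def encode(e_past, e_fut, num_blocks):
    """Eq. (4): run the past and future encoders in lock-step and
    concatenate (++) their embeddings along the time axis at every level."""
    E = [np.concatenate([e_past, e_fut], axis=0)]
    for _ in range(num_blocks):
        e_past, e_fut = contract(e_past), contract(e_fut)
        E.append(np.concatenate([e_past, e_fut], axis=0))
    return E
```

With a past window of 8 steps and a future window of 4, two blocks yield concatenated embeddings of lengths 12, 6 and 3, i.e. the multi-resolution pyramid the decoder later consumes.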
Since the canonical self-attention layer is separated from the repeating attention blocks within the decoder, the Yformer complexity contributed by this full attention module does not increase with the number of decoder blocks. The U-Net architecture inspires the second part of the Y-Decoder. Consequently, the decoder is structured as a symmetric expanding path mirroring the contracting encoder. We realize this architecture by introducing upsampling on the ProbSparse attention mechanism using the Expanding ProbSparse Cross-Attention Block. The Expanding ProbSparse Cross-Attention Block within the Yformer decoder performs two tasks: (1) upsample the compressed encoder embedding $e_I$ and (2) perform restricted cross attention between the expanding decoder embedding $d_{I-i}$ and the corresponding encoder embedding $e_i$ (represented in Figure 4, appendix section A). We accomplish both tasks by introducing an Expanding ProbSparse Cross-Attention Block as illustrated in Algorithm 1.

Algorithm 1 Expanding ProbSparse Cross-Attention Block
Input: $d_{I-i}$, $e_i$
Output: $d_{I-i+1}$
  $d_{I-i+1} \leftarrow \mathrm{ProbSparseCrossAttn}(d_{I-i}, e_i)$
  $d_{I-i+1} \leftarrow \mathrm{Conv1d}(d_{I-i+1})$
  $d_{I-i+1} \leftarrow \mathrm{Conv1d}(d_{I-i+1})$
  $d_{I-i+1} \leftarrow \mathrm{LayerNorm}(d_{I-i+1})$
  $d_{I-i+1} \leftarrow \mathrm{ELU}(\mathrm{ConvTranspose1d}(d_{I-i+1}))$

The Expanding ProbSparse Cross-Attention Blocks within the Yformer decoder use a ProbSparseCrossAttn to construct direct connections between the lower levels of the encoder and the corresponding symmetric higher levels of the decoder. Direct connections from the encoder to the decoder are an essential component of the majority of models within the image domain. For example, ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) have demonstrated that direct connections between previous feature maps strengthen feature propagation, reduce parameters, mitigate vanishing gradients and encourage feature reuse.
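The essential shape behaviour of Algorithm 1 can be sketched as below. This is a heavily simplified stand-in: canonical cross-attention replaces ProbSparseCrossAttn, nearest-neighbour repetition replaces the stride-2 ConvTranspose1d, and the Conv1d / LayerNorm / ELU layers are omitted entirely.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def expanding_block(d_prev, e_skip):
    """Simplified Algorithm 1: decoder embedding d_prev supplies queries,
    the encoder skip connection e_skip supplies keys/values, then the
    time dimension is upsampled by a factor of two."""
    d = d_prev.shape[-1]
    attn = softmax(d_prev @ e_skip.T / np.sqrt(d)) @ e_skip
    return np.repeat(attn, 2, axis=0)  # stand-in for stride-2 ConvTranspose1d
```

The point to note is the output length: a decoder embedding of 3 time steps attended against a 6-step encoder skip connection comes out at 6 steps, matching the resolution of the corresponding encoder level.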
However, current transformer-based architectures like the Informer fail to utilize such direct connections. The ProbSparseCrossAttn takes as input the decoder embedding from the previous layer $d_{I-i}$ as queries and the corresponding encoder embedding $e_i$ as keys. The Yformer uses the restricted ProbSparse attention so that the model remains scalable as the number of decoder blocks increases. We utilize ConvTranspose1d, popularly known as deconvolution, for incrementally increasing the embedding space. The famous U-Net architecture uses a symmetric expanding path built from such deconvolution layers. This enables the model not only to aggregate over the input but also to upscale the latent dimensions, improving the overall expressivity of the architecture. The decoder of the Yformer follows a similar strategy by employing deconvolution to expand the embedding space of the encoded output. We describe the different operators used in appendix section C. A fully connected layer (LinearLayer) predicts the future time steps $\hat{y}^{fut}$ from the final decoder layer $d_I$ and additionally reconstructs the past input targets $\hat{y}^{past}$:

$$[\hat{y}^{past}, \hat{y}^{fut}] = \mathrm{LinearLayer}(d_I) \quad (5)$$

The addition of a reconstruction loss to the Yformer as an auxiliary loss serves two significant purposes. Firstly, the reconstruction loss acts as a data-dependent regularization term that reduces overfitting by learning embeddings that are more general (Ghasedi Dizaji et al., 2017; Jarrett & van der Schaar, 2020). Secondly, the reconstruction loss helps in producing future output in a distribution similar to that of the inputs (Bank et al., 2020). For far horizon forecasting, we are interested in learning a future-output distribution. However, the future-output distribution and the past-input distribution arise from the same data generating process. Therefore, having an auxiliary reconstruction loss directs the gradients towards a better approximation of the data generating process.
The Yformer model is trained on the combined loss $\ell$,

$$\ell = \alpha\, \ell_{mse}(y, \hat{y}^{past}) + (1 - \alpha)\, \ell_{mse}(y', \hat{y}^{fut}) \quad (6)$$

where the first term learns the past targets y and the second term learns the future targets y′. We use the reconstruction factor α to vary the relative importance of reconstruction and future prediction and tune it as a hyperparameter.
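The combined training objective of eq. (6) is a one-liner; a minimal sketch (names ours, MSE taken as the plain elementwise mean, which coincides with eq. (2)):

```python
import numpy as np

def combined_loss(y_past, yhat_past, y_fut, yhat_fut, alpha):
    """Eq. (6): convex combination of the past-reconstruction MSE and the
    future-forecast MSE, weighted by the reconstruction factor alpha."""
    mse = lambda a, b: ((a - b) ** 2).mean()
    return alpha * mse(y_past, yhat_past) + (1 - alpha) * mse(y_fut, yhat_fut)
```

At α = 0 the model trains purely on the forecast, at α = 1 purely on reconstruction; intermediate values trade the two off, which is why α is tuned as a hyperparameter.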
Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting
1 INTRODUCTION . In the most simple case , time series forecasting deals with a scalar time-varying signal and aims to predict or forecast its values in the near future ; for example , countless applications in finance , healthcare , production automatization , etc . ( Carta et al. , 2021 ; Cao et al. , 2018 ; Sagheer & Kotb , 2019 ) can benefit from an accurate forecasting solution . Often not just a single scalar signal is of interest , but multiple at once , and further time-varying signals are available and even known for the future . For example , suppose one aims to forecast the energy consumption of a house , it likely depends on the social time that one seeks to forecast for ( such as the next hour or day ) , and also on features of these time points ( such as weekday , daylight , etc . ) , which are known already for the future . This is also the case in model predictive control ( Camacho & Alba , 2013 ) , where one is interested to forecast the expected value realized by some planned action , then this action is also known at the time of forecast . More generally , time series forecasting , nowadays deals with quadruples ( x , y , x′ , y′ ) of known past predictors x , known past targets y , known future predictors x′ and sought future targets y′ . ( Figure 3 in appendix section A provides a simple illustration ) Time series problems can often be addressed by methods developed initially for images , treating them as 1-dimensional images . Especially for time-series classification many typical time series encoder architectures have been adapted from models for images ( Wang et al. , 2017 ; Zou et al. , 2019 ) . 
Time series forecasting then is closely related to image outpainting ( Van Hoorick , 2019 ) , the task to predict how an image likely extends to the left , right , top or bottom , as well as to the more well-known task of image segmentation , where for each input pixel , an output pixel has to be predicted , whose channels encode pixel-wise classes such as vehicle , road , pedestrian say for road scenes . Time series forecasting combines aspects from both problem settings : information about targets from shifted positions ( e.g. , the past targets y as in image outpainting ) and information about other channels from the same positions ( e.g. , the future predictors x′ as in image segmentation ) . One of the most successful , principled architectures for the image segmentation task are U-Nets introduced in Ronneberger et al . ( 2015 ) , an architecture that successively downsamples / coarsens its inputs and then upsamples / refines the latent representation with deconvolutions also using the latent representations of the same detail level , tightly coupling down- and upsampling procedures and thus yielding latent features on the same resolution as the inputs . Following the great success in Natural Language Processing ( NLP ) applications , attention-based , esp . transformer-based architectures ( Vaswani et al. , 2017 ) that model pairwise interactions between sequence elements have been recently adapted for time series forecasting . One of the significant challenges , is that the length of the time series , are often one or two magnitudes of order larger than the ( sentence-level ) NLP problems . Plenty of approaches aim to mitigate the quadratic complexity O ( T 2 ) in the sequence/time series length T to at mostO ( T log T ) . For example , the Informer architecture ( Zhou et al. 
, 2020 ) , arguably one of the most accurate forecasting models researched so far , adapts the transformer by a sparse attention mechanism and a successive downsampling/coarsening of the past time series . As in the original transformer , only the coarsest representation is fed into the decoder . Possibly to remedy the loss in resolution by this procedure , the Informer feeds its input a second time into the decoder network , this time without any coarsening . While forecasting problems share many commonalities with image segmentation problems , transformer-based architectures like the Informer do not involve coupled down- and upscaling procedures to yield predictions on the same resolution as the inputs . Thus , we propose a novel Y-shaped architecture called Yformer that 1 . Couples downscaling/upscaling to leverage both , coarse and fine-grained features for time series forecasting , 2 . Combines the coupled scaling mechanism with sparse attention modules to capture longrange effects on all scale levels , and 3 . Stabilizes encoder and decoder stacks by reconstructing the recent past . 2 RELATED WORK . Deep Learning Based Time Series Forecasting : While Convolutional Neural Network ( CNN ) and Recurrent Neural network ( RNN ) based architectures ( Salinas et al. , 2020 ; Rangapuram et al. , 2018 ) outperform traditional methods like ARIMA ( Box & Jenkins , 1968 ) and exponential smoothing methods ( Hyndman & Athanasopoulos , 2018 ) , the addition of attention layers ( Vaswani et al. , 2017 ) to model time series forecasting has proven to be very beneficial across different problem settings ( Fan et al. , 2019 ; Qin et al. , 2017 ; Lai et al. , 2018 ) . Attention allows direct pair-wise interaction with eccentric events ( like holidays ) and can model temporal dynamics inherently unlike RNN ’ s and CNN ’ s that fail to capture long-range dependencies directly . Recent work like Reformer ( Kitaev et al. , 2020 ) , Linformer ( Wang et al. 
, 2020 ) and Informer ( Zhou et al. , 2020 ) have focused on reducing the quadratic complexity of modeling pair-wise interactions to O ( T log T ) with the introduction of restricted attention layers . Consequently , they can predict for longer forecasting horizons but are hindered by their capability of aggregating features and maintaining the resolution required for far horizon forecasting . U-Net : The Yformer model is inspired by the famous U-Net architecture introduced in Ronneberger et al . ( 2015 ) originating from the field of medical image segmentation . The U-net architecture is capable of compressing information by aggregating over the inputs and up-sampling embeddings to the same resolutions as that of the inputs from their compressed latent features . Current transformer architectures like the Informer ( Zhou et al. , 2020 ) do not utilize up-sampling techniques even though the network produces intermediate multi resolution feature maps . Our work aims to capitalize on these multi resolution feature maps and use the U-net shape effectively for the task of time series forecasting . Previous works like Stoller et al . ( 2019 ) and Perslev et al . ( 2019 ) have successfully applied U-Net architecture for the task of sequence modeling and time series segmentation , illustrating superior results in the respective tasks . These work motivate the use of a U-Net-inspired architecture for time series forecasting as current methods fail to couple sparse attention mechanism with the U-Net shaped architecture . Additional related works section is decoupled from the main text and is presented in the appendix section B . 3 PROBLEM FORMULATION . By a time series x with M channels , we mean a finite sequence of vectors in RM , denote their space by R∗×M : = ⋃ T∈N RT×M , and their length by |x| : = T ( for x ∈ RT×M , M ∈ N ) . We write ( x , y ) ∈ R∗× ( M+O ) to denote two time series of same length with M and O channels , respectively . 
We model a time series forecasting instance as a quadruple ( x , y , x′ , y′ ) ∈ R∗× ( M+O ) × R∗× ( M+O ) , where x , y denote the past predictors and targets until a reference time point T and x′ , y′ denote the future predictors and targets from the reference point T to the next τ time steps . Here , τ = |x′| is called the forecast horizon . For a Time Series Forecasting Problem , given ( i ) a sample D : = { ( x1 , y1 , x′1 , y′1 ) , . . . , ( xN , yN , x ′ N , y ′ N ) } from an unknown distribution p of time series forecasting instances and ( ii ) a function ` : R∗× ( O+O ) → R called loss , we attempt to find a function ŷ : R∗× ( M+O ) ×R∗×M → R∗×O ( with |ŷ ( x , y , x′ ) | = |x′| ) with minimal expected loss E ( x , y , x′ , y′ ) ∼p ` ( y′ , ŷ ( x , y , x′ ) ) ( 1 ) The loss ` usually is the mean absolute error ( MAE ) or mean squared error ( MSE ) averaged over future time points : ` mae ( y′ , ŷ ) : = 1 |y′| |y′|∑ t=1 1 O ||y′t − ŷt||1 , ` mse ( y′ , ŷ ) : = 1 |y′| |y′|∑ t=1 1 O ||y′t − ŷt||22 ( 2 ) Furthermore , if there is only one target channel and no predictor channels ( O = 1 , M = 0 ) , the time series forecasting problem is called univariate , otherwise multivariate . 4 BACKGROUND . Our work incorporates restricted attention based Transformer in a U-Net inspired architecture . For this reason , we base our work on the current state of the art sparse attention model Informer , introduced in Zhou et al . ( 2020 ) . We provide a brief overview of the sparse attention mechanism ( ProbSparse ) and the encoder block ( Contracting ProbSparse Self-Attention Blocks ) used in the Informer model for completeness . ProbSparse Attention : The ProbSparse attention mechanism restricts the canonical attention ( Vaswani et al. , 2017 ) by selecting a subset u of dominant queries having the largest variance across all the keys . 
Consequently , the query Q ∈ RLQ×d in the canonical attention is replaced by a sparse query matrix Q ∈ RLQ×d consisting of only the u dominant queries . ProbSparse attention can hence be defined as : APropSparse ( Q , K , V ) = Softmax ( QK T √ d ) V ( 3 ) where d denotes the input dimension to the attention module . For more details on the ProbSparse attention mechanism , we refer the reader to Zhou et al . ( 2020 ) . Contracting ProbSparse Self-Attention Blocks : The Informer model uses Contracting ProbSparse Self-Attention Blocks to distill out redundant information from the long history input sequence ( x , y ) in a pyramid structure motivated from the image domain ( Lin et al. , 2017 ) . The sequence of operations within a block begins with a ProbSparse self-attention that takes as input the hidden representation hi from the ith block and projects the hidden representation into query , key and value for self-attention . This is followed by multiple layers of convolution ( Conv1d ) , and finally the MaxPool operation reduces the latent dimension by effectively distilling out redundant information at each block . We refer the reader to Algorithm 2 in the appendix section C where these operations are presented in an algorithmic structure for a comprehensive overview . 5 METHODOLOGY . The Yformer model is a Y-shaped ( Figure 1b ) symmetric encoder-decoder architecture that is specifically designed to take advantage of the multi-resolution embeddings generated by the Contracting ProbSparse Self-Attention Blocks . The fundamental design consideration is the adoption of U-Netinspired connections to extract encoder features at multiple resolutions and provide direct connection to the corresponding symmetric decoder block ( simple illustration provided in Figure 4 , appendix section A ) . Furthermore , the addition of reconstruction loss helps the model learn generalized embeddings that better approximate the data generating distribution . 
The Y-Past Encoder of the Yformer is designed using a similar encoder structure as that of the Informer . The Y-Past Encoder embeds the past sequence ( x , y ) into a scalar projection along with the addition of positional and temporal embeddings . Multiple Contracting ProbSparse Self-Attention Blocks are used to generate encoder embeddings at various resolutions . The Informer model uses the final low-dimensional embedding as the input to the decoder ( Figure 1a ) whereas , the Yformer retains the embeddings at multiple resolutions to be passed on to the decoder . This allows the Yformer to use high-dimensional lower-level embeddings effectively . The Y-Future Encoder of the Yformer mitigates the issue of the redundant reprocessing of parts of the past sequence ( x , y ) used as tokens ( xtoken , ytoken ) in the Informer architecture . The Informer model uses only the coarsest representation from the encoder embedding , leading to a loss in resolution and forcing the Informer to pass part of the past sequence as tokens ( xtoken , ytoken ) to the decoder ( Figure 1a ) . The Yformer separates the future predictors and the past sequence ( x , y ) bypassing the future predictors ( x′ ) through a separate encoder and utilizing the multi-resolution embeddings to dismiss the need for tokens entirely . Unlike the Y-Past Encoder , the attention blocks in the Y-Future encoder are based on masked canonical self-attention mechanism ( Vaswani et al. , 2017 ) . Masking the attention ensures that there is no information leak from the future time steps to the past . Moreover , a masked canonical self-attention mechanism helps reduce the complexity , as half of the query-key interactions are restricted by design . Thus , the Y-Future Encoder is designed by stacking multiple Contracting ProbSparse Self-Attention Blocks where the ProbSparse attention is replaced by the Masked Attention . 
We name these blocks Contracting Masked Self-Attention Blocks ( Algorithm 3 appendix section C ) . The Yformer processes the past inputs and the future predictors separately within its encoders . However , considering the time steps , the future predictors are a continuation of the past time steps . For this reason , the Yformer model concatenates ( represented by the symbol + ) the past encoder embedding and the future encoder embedding along the time dimension after each encoder block , preserving the continuity between the past input time steps and the future time steps . Let i represent the index of an encoder block , then epasti+1 and e fut i+1 represent the output from the past encoder and the future encoder respectively . The final concatenated encoder embedding ( ei+1 ) is calculated as , epasti+1 = ContractingProbSparseSelfAttentionBlock ( e past i ) efuti+1 = ContractingMaskedSelfAttentionBlock ( e fut i ) ei+1 = e past i+1 ++ e fut i+1 ( 4 ) The encoder embeddings represented by E = [ e0 , . . . , eI ] ( where I is the number of encoder layers ) contain the combination of past and future embeddings at multiple resolutions . The Y-Decoder of the Yformer consists of two parts . The first part takes as input the final concatenated low-dimensional embedding ( eI ) and performs a multi-head canonical self-attention mechanism . Here , the past encoder embedding ( epastI ) is allowed to attend to itself as well as the future encoder embedding ( efutI ) in an unrestricted fashion . The encoder embedding ( eI ) is the lowdimensional distilled embedding , and skipping query-key interaction within these low-dimensional embeddings might deny the model useful pair-wise interactions . Therefore , it is by design that this is the only part of the Yformer model that uses canonical self-attention in comparison to the Informer that uses canonical attention within its repeating decoder block , as shown in Figure 1a . 
Since the canonical self-attention layer is separated from the repeating attention blocks within the decoder , the Yformer complexity from this full attention module does not increase with an increase in the number of decoder blocks . The U-Net architecture inspires the second part of the Y-Decoder . Consequently , the decoder is structured in a symmetric expanding path identical to the contracting encoder . We realize this architecture by introducing upsampling on the ProbSparse attention mechanism using Expanding ProbSparse Cross-Attention Block . The Expanding ProbSparse Cross-Attention Block within the Yformer decoder performs two tasks : ( 1 ) upsample the compressed encoder embedding eI and ( 2 ) perform restricted cross attention between the expanding decoder embedding dI−i and the corresponding encoder embedding ei ( represented in Figure 4 appendix section A ) . We accomplish both the tasks by introducing an Expanding ProbSparse Cross-Attention Block as illustrated in Algorithm 1 . Algorithm 1 Expanding ProbSparse Cross-Attention Block Input : dI−i , ei Output : dI−i+1 dI−i+1 ← ProbSparseCrossAttn ( dI−i , ei ) dI−i+1 ← Conv1d ( dI−i+1 ) dI−i+1 ← Conv1d ( dI−i+1 ) dI−i+1 ← LayerNorm ( dI−i+1 ) dI−i+1 ← ELU ( ConvTranspose1d ( dI−i+1 ) ) ) The Expanding ProbSparse Cross-Attention Blocks within the Yformer decoder uses a ProbSparseCrossAttn to construct direct connections between the lower levels of the encoder and the corresponding symmetric higher levels of the decoder . Direct connections from the encoder to the decoder are an essential component for majority of the models within the image domain . For example , ResNet ( He et al. , 2016 ) , and DenseNet ( Huang et al. , 2017 ) have demonstrated that direct connections between previous feature maps , strengthen feature propagation , reduce parameters , mitigate vanishing gradients and encourage feature reuse . 
However, current transformer-based architectures like the Informer fail to utilize such direct connections. The ProbSparseCrossAttn takes as input the decoder embedding from the previous layer, $d_{I-i}$, as queries and the corresponding encoder embedding $e_i$ as keys. The Yformer uses the restricted ProbSparse attention so that the model remains scalable as the number of decoder blocks increases. We use ConvTranspose1d, popularly known as Deconvolution, to incrementally expand the embedding space. The U-Net architecture builds its symmetric expanding path from such Deconvolution layers; this enables the model not only to aggregate over the input but also to upscale the latent dimensions, improving the overall expressivity of the architecture. The Yformer decoder follows a similar strategy, employing Deconvolution to expand the embedding space of the encoded output. We describe the different operators used in appendix section C. A fully connected layer (LinearLayer) predicts the future time steps $\hat{y}^{\text{fut}}$ from the final decoder layer $d_I$ and additionally reconstructs the past input targets $\hat{y}^{\text{past}}$:

$$[\hat{y}^{\text{past}}, \hat{y}^{\text{fut}}] = \mathrm{LinearLayer}(d_I) \tag{5}$$

Adding the reconstruction loss to the Yformer as an auxiliary loss serves two significant purposes. Firstly, the reconstruction loss acts as a data-dependent regularization term that reduces overfitting by learning more general embeddings (Ghasedi Dizaji et al., 2017; Jarrett & van der Schaar, 2020). Secondly, the reconstruction loss helps produce future outputs in a distribution similar to that of the inputs (Bank et al., 2020). For far-horizon forecasting, we are interested in learning a future-output distribution. However, the future-output distribution and the past-input distribution arise from the same data-generating process. Therefore, an auxiliary reconstruction loss directs the gradients toward a better approximation of the data-generating process.
The Yformer model is trained on the combined loss $\ell$,

$$\ell = \alpha\,\ell_{\text{mse}}(y, \hat{y}^{\text{past}}) + (1-\alpha)\,\ell_{\text{mse}}(y', \hat{y}^{\text{fut}}) \tag{6}$$

where the first term tries to learn the past targets $y$ and the second term learns the future targets $y'$. We use the reconstruction factor $\alpha$ to vary the importance of reconstruction and future prediction, and tune it as a hyperparameter.
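The combined objective of Eq. 6 is straightforward to express in code; the sketch below uses plain NumPy MSE terms, with function names chosen for illustration.

```python
import numpy as np

def mse(a, b):
    """l_mse: mean squared error between targets and predictions."""
    return float(np.mean((a - b) ** 2))

def yformer_loss(y_past, y_hat_past, y_fut, y_hat_fut, alpha=0.5):
    """Eq. 6: alpha weights past-target reconstruction against
    future prediction; alpha itself is a tuned hyperparameter."""
    return alpha * mse(y_past, y_hat_past) + (1 - alpha) * mse(y_fut, y_hat_fut)

# perfect future prediction, fully wrong reconstruction: only the
# alpha-weighted reconstruction term contributes
loss = yformer_loss(np.ones(4), np.zeros(4), np.ones(4), np.ones(4), alpha=0.3)
print(loss)  # 0.3 * 1.0 + 0.7 * 0.0 = 0.3
```

Setting $\alpha = 0$ recovers a standard forecasting loss, which makes the reconstruction term easy to ablate.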
The authors propose a new Transformer-based architecture for long-sequence time-series forecasting (LSTF) that uses the ProbSparse attention mechanism to efficiently capture long-term dependencies with $O(L \log L)$ complexity. The Yformer builds on the Informer architecture with three key innovations: 1. Distinct encoders capture historical and known future information separately. This improves representation learning for time series data while still maintaining computational efficiency through ProbSparse attention. 2. A common decoder processes the encoder representations jointly. It also contains an upsampling step inspired by U-Net, although the benefits of upsampling are not explicitly evaluated. 3. An auxiliary reconstruction loss uses the reconstruction error of past targets to regularise training.
SP:7ce909645e709416d43fa0e795f6cf2831a0757e
Capturing Structural Locality in Non-parametric Language Models
Structural locality is a ubiquitous feature of real-world datasets , wherein data points are organized into local hierarchies . Some examples include topical clusters in text or project hierarchies in source code repositories . In this paper , we explore utilizing this structural locality within non-parametric language models , which generate sequences that reference retrieved examples from an external source . We propose a simple yet effective approach for adding locality information into such models by adding learned parameters that improve the likelihood of retrieving examples from local neighborhoods . Experiments on two different domains , Java source code and Wikipedia text , demonstrate that locality features improve model efficacy over models without access to these features , with interesting differences . We also perform an analysis of how and where locality features contribute to improved performance and why the traditionally used contextual similarity metrics alone are not enough to grasp the locality structure . 1 INTRODUCTION . Language models ( LMs ) predict a probability distribution over sequences , and are most widely studied to model and generate natural languages ( Bengio et al. , 2003 ; Merity et al. , 2018 ; Baevski & Auli , 2018 ; Brown et al. , 2020 ) . Advances in LMs benefit many natural language processing downstream tasks , such as machine translation ( Bahdanau et al. , 2015 ) , dialog systems ( Sordoni et al. , 2015 ) , question answering ( Yang et al. , 2019 ; Raffel et al. , 2019 ) , and general representation learning for natural language ( Devlin et al. , 2018 ; Liu et al. , 2019 ) . Recently , LMs have also been adopted to model sequences other than text , such as source code written in programming language ( Hindle et al. , 2016 ; Hellendoorn & Devanbu , 2017 ; Alon et al. , 2020 ; Karampatsis et al. , 2020 ) , which can enable useful downstream tasks like code completion ( Raychev et al. , 2014 ) . 
Most current neural LMs are based on parametric neural networks, using RNN (Mikolov et al., 2010) or Transformer (Vaswani et al., 2017) architectures. These models make predictions solely using a fixed set of neural network parameters. Recently, more and more neural LMs also incorporate non-parametric components (Grave et al., 2017; Guu et al., 2018; He et al., 2020; Khandelwal et al., 2020), which usually first select examples from an external source and then reference them during prediction. For example, Khandelwal et al. (2020) model the token-level probability by interpolating the parametric LM probability with a probability obtained from the nearest context-token pairs in an external datastore. Using such non-parametric components in LMs is beneficial because the model no longer needs to memorize everything about the language in its parameters. For such non-parametric LMs, one important concept is a distance metric between the current context and other contexts in the datastore. One example of such a metric is the $\ell_2$ distance between context vectors calculated by the parametric model (Khandelwal et al., 2020). This distance can be used in both retrieval and probability calculation; items in the datastore that are less distant from the current context are more likely to be retrieved and have a higher influence on the final probability. However, given that non-parametric datastores are typically very large, containing a myriad of contexts from disparate sources, calculating a metric that accurately reflects semantic similarities is non-trivial; as we demonstrate in experiments, there is much room for improvement in current practice. In this paper, we argue that the relevance of contexts may be correlated not only with contextual distance, but also with structural characteristics of the underlying data.
Specifically, we take advantage of a property we dub structural locality: the propensity of text to be divided into local groups sharing common hierarchical attributes. This property is ubiquitous across many kinds of texts and can provide additional information on how closely related two different examples are to each other. Throughout this paper, we provide two case studies of this phenomenon. First, in the domain of programs written in source code, if two source files originate from the same project, they are more likely to be related than files from other projects, and even more so if they are from the exact same package (Hellendoorn & Devanbu, 2017). Second, in natural language, two sections of Wikipedia text may be more related if they fall within the same topical domain, are from similarly titled sections, or are even from the same article (as in Figure 1). Notably, this locality often manifests itself at different levels, such as the levels of "project", "subdirectory", and "file" cited above for source code. In this paper, we hypothesize that by using multiple levels of structural locality, we can better calibrate the distance metrics used to retrieve examples from non-parametric datastores, thereby improving LM performance. Specifically, we propose a simple yet effective approach that can easily be applied to non-parametric LMs: we use different levels of structural locality to define functions that modify the contextual distance metrics used by the non-parametric module. We evaluate our method on two drastically different domains, Java source code and natural-language Wikipedia articles, achieving noticeable LM performance gains in both by adding just 5 and 7 parameters respectively.
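The exact form of the locality-modified distance is given later in the paper (Section 5), so the following is only a hypothetical illustration of the idea: a handful of learned parameters, here one scale and one offset per locality level, reshape the contextual distance before the softmax over neighbors.

```python
import numpy as np

def adjusted_neg_distance(distances, locality_levels, a, b):
    """Illustrative locality-adjusted score for retrieved neighbors.

    distances: raw contextual distances d(k_i, f(c_t))
    locality_levels: index of the locality feature l_k for each neighbor
    a, b: per-level scale and offset -- the small set of tuned parameters
    """
    d = np.asarray(distances, dtype=float)
    lv = np.asarray(locality_levels)
    # neighbors in closer localities can be made to score higher
    return -(a[lv] * d + b[lv])

# two neighbors at the same raw distance, but different locality levels
scores = adjusted_neg_distance(
    [1.0, 1.0], [0, 2],
    a=np.array([1.0, 0.9, 0.8]), b=np.array([0.0, -0.1, -0.3]))
print(scores)  # the level-2 (most local) neighbor scores higher
```

The parameter count scales with the number of locality levels rather than the datastore size, which is consistent with the handful of added parameters reported above.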
Moreover, we perform an in-depth analysis showing how the traditionally used contextual similarity metrics alone are not enough to capture the locality structure, providing evidence for why adding the locality features is indeed useful. We also compare programming languages and natural languages to highlight several interesting differences in terms of how, and how much, locality helps improve LM performance.

2 NON-PARAMETRIC LANGUAGE MODELS

Given a linguistic context consisting of a sequence of tokens $c_t = (w_1, \ldots, w_{t-1})$, autoregressive parametric LMs estimate $p(w_t \mid c_t; \theta)$, the probability distribution over the next token $w_t$. Such parametric LMs store information regarding the language being modeled in the parameters $\theta$. The size of $\theta$ is fixed in advance based on the hyperparameters of the model architecture, in recent years typically a neural network (Grave et al., 2016; Baevski & Auli, 2018; Dai et al., 2019; Brown et al., 2020). In contrast, a non-parametric LM's number of parameters is determined not just by the model architecture, but also by the underlying data used to train the model. While non-parametric LMs using Bayesian statistics have existed for some time (Wood et al., 2011; Shareghi et al., 2017; He et al., 2020), they have recently seen increased prevalence through the introduction of neural LMs that retrieve relevant examples from an external datastore (Hashimoto et al., 2018; Guu et al., 2018). In particular, we focus on kNN-LMs (Khandelwal et al., 2020), a variety of such models that uses a nearest-neighbor retrieval mechanism to augment a pre-trained parametric LM, achieving impressive results without any additional training. Neural network-based LMs usually map the context $c$ to a fixed-length vector representation with a trained function $f(c)$. In kNN-LMs, the non-parametric component consists of a collection $\mathcal{D}$ of contexts for the kNN to retrieve from.
Denoting these contexts and their corresponding next tokens as key-value pairs $(k_i, v_i) = (f(c_i), w_i)$ yields the datastore $\mathcal{D} = \{(k_i, v_i)\}$. (1)

During inference, the parametric component of the LM generates the output distribution over next tokens $p_{\text{LM}}(w_t \mid c_t; \theta)$ and the corresponding context representation $f(c_t)$, given the test input context $c_t$. Then the non-parametric component of the LM queries the datastore with the representation $f(c_t)$ to retrieve its $k$ nearest neighbors $\mathcal{N}$ according to a distance function $d(\cdot, \cdot)$. We can then compute a probability distribution over these neighbors using the softmax of their negative distances. The model aggregates the probability mass for each vocabulary item across all its occurrences in the retrieved targets. This distribution is then interpolated with the parametric LM distribution $p_{\text{LM}}$ to produce the final kNN-LM distribution:

$$p_{\text{kNN}}(w_t \mid c_t) \propto \sum_{(k_i, v_i) \in \mathcal{N}} \mathbb{1}_{w_t = v_i} \exp(-d(k_i, f(c_t))) \tag{2}$$

$$p(w_t \mid c_t; \theta) = \lambda\, p_{\text{kNN}}(w_t \mid c_t) + (1-\lambda)\, p_{\text{LM}}(w_t \mid c_t; \theta) \tag{3}$$

In our experiments, we follow Khandelwal et al. (2020) in setting the interpolation factor $\lambda$ to 0.25.

3 DEFINING STRUCTURAL LOCALITY

We define structural locality as a categorical feature calculated between a pair of contexts $(c_i, c_j)$ in a collection of data, describing whether the pair shares some common, potentially hierarchical, attributes (e.g., the section title of a Wikipedia article section, or the directory path of a source code file). For each domain, a set of hierarchical attributes $\{l_0, l_1, \ldots, l_n\}$ can be defined based on prior knowledge of the domain. We denote by $l_k(c_i, c_j) \in \{0, 1\}$ the boolean locality feature value for the context pair, representing whether $c_i$ and $c_j$ share the same hierarchical attribute $l_k$. Here, $l_0$ is reserved for "no locality", in case the pair shares none of the attributes. Without loss of generality, we set the constraint $\sum_k l_k(c_i, c_j) = 1$, as new features can be introduced by conjunction and negation of the attributes if needed.
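The kNN-LM computation of Eqs. 2-3 can be sketched compactly; this is an illustrative toy, with neighbors given as explicit (distance, token) pairs rather than retrieved from a real datastore.

```python
import numpy as np

def knn_lm_prob(p_lm, neighbors, vocab_size, lam=0.25):
    """Eqs. 2-3: neighbors vote for their stored next token with the
    softmax of their negative distances; the result is interpolated
    with the parametric LM distribution p_lm.

    neighbors: list of (distance d(k_i, f(c_t)), next_token_id v_i) pairs
    """
    dists = np.array([d for d, _ in neighbors])
    weights = np.exp(-dists)
    weights /= weights.sum()            # softmax over negative distances
    p_knn = np.zeros(vocab_size)
    for w, (_, v) in zip(weights, neighbors):
        p_knn[v] += w                   # aggregate mass per vocabulary item
    return lam * p_knn + (1 - lam) * p_lm

p_lm = np.array([0.5, 0.5, 0.0])
# three equidistant neighbors: two stored token 0, one stored token 1
p = knn_lm_prob(p_lm, [(0.0, 0), (0.0, 0), (0.0, 1)], vocab_size=3)
print(p)  # the kNN vote breaks the LM's tie in favor of token 0
```

Note that the locality features proposed in this paper enter precisely through the distance term inside the exponential, which is what makes the method a lightweight add-on to this pipeline.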
Specific Instantiations. We instantiate these features on our two case studies of Wikipedia text and Java source code, as summarized in Table 1. In Wikipedia, for every context $c_i$ we define four mutually exclusive hierarchical attributes, $l_0$-$l_3$. We calculate these features based on the Wikipedia article and section titles, using simple pattern matching. We then link each article to a set of categories (one article may belong to multiple categories) using the knowledge graph WikiData (https://www.wikidata.org/), by aggregating all the category entities involving two properties: P31 (instance of) and P279 (subclass of). The criterion for "same section title" is exact string match (Hayashi et al., 2020). If there is at least one common category between the category sets of two articles, the pair is assigned "same article category". For Java source code, we define three mutually exclusive attributes, $l_0$-$l_2$, based on the location of the code. For each source file, we use the full file path to obtain two attributes: project name and sub-directory path. For example, the full path .../Journal.IO/src/main/java/journal/io/api/DataFile.java has project Journal.IO and sub-directory src/main/java/journal/io/api/ for package journal.io.api. The criterion for both "same project" and "same subdirectory" is exact string match. Note that these features are strictly hierarchical, hence only two features are needed to capture specific locality here.

An Aside: Connections to Domain Adaptation. Domain adaptation typically refers to reusing existing information about a given problem (e.g., data or model) to solve a task in a new domain. Domain adaptation for neural models generally focuses on fine-tuning models on in-domain data (Sennrich et al., 2016; Chu et al., 2017) or making direct modifications to the model to consider domain information (Britz et al.
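For the Java case, the hierarchical attribute assignment can be sketched directly from file paths. This is our own illustrative helper, not the paper's code; it assumes the path's first component is the project name and returns the index of the single attribute that fires (per the constraint that exactly one $l_k$ equals 1).

```python
def java_locality(path_i, path_j):
    """Hierarchical locality for Java files (Table 1):
    2 = same sub-directory, 1 = same project only, 0 = no locality."""
    proj_i, proj_j = path_i.split("/")[0], path_j.split("/")[0]
    dir_i = path_i.rsplit("/", 1)[0]  # full path minus the file name
    dir_j = path_j.rsplit("/", 1)[0]
    if dir_i == dir_j:
        return 2  # same sub-directory (which implies same project)
    if proj_i == proj_j:
        return 1  # same project, different sub-directory
    return 0      # no shared attribute

a = "Journal.IO/src/main/java/journal/io/api/DataFile.java"
b = "Journal.IO/src/main/java/journal/io/api/Journal.java"
c = "OtherProj/src/Main.java"
print(java_locality(a, b), java_locality(a, c))  # 2 0
```

The hierarchy shows up in the early-return structure: the most specific attribute is checked first, so broader levels only fire when narrower ones do not.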
, 2017) or latent topic features (Khudanpur & Wu, 2000; Mikolov & Zweig, 2012; Wang & Cho, 2016). Most of these methods do not natively support new test-time contexts that were not seen at training time. In comparison, one immediate advantage of non-parametric LMs is the ability to adapt to different domains at test time without re-training (Merity et al., 2016; Grave et al., 2016; 2017; Khandelwal et al., 2020). For example, some adaptive LMs (Grave et al., 2016; 2017) make use of the previous hidden states of test documents dynamically during inference. Similarly, our proposed locality features do not require re-training on the training set. Note that, within the scope of this paper, the proposed structural locality, although connected, is a different concept from domain. We consider domains as higher-level classifications describing the text, where one example belongs to one domain label; e.g., a section about Kim Kardashian's early life belongs to a category of texts describing celebrities. On the other hand, with structural locality a user can define multiple levels of locality: to that same section we can assign not only the domain label but also the section title "Early Life". The lightweight nature of our model combined with non-parametric LMs also makes adding more levels of features straightforward, as the features only need to be calculated for the top nearest neighbors, and the number of parameters that need tuning in our proposed method (Section 5) is only about twice the number of locality features.
This work concerns utilizing the structural locality inherent in real-world datasets to improve the effectiveness of non-parametric language models. It claims that (a) structural locality is not implicitly fully captured by the distance metric used in non-parametric language models, and (b) explicitly plugging structural locality into non-parametric language models can improve their effectiveness. It validates this claim first by analysing two datasets with the help of custom locality functions, and then by plugging the locality functions into a non-parametric language model via learnable parameters.
SP:6bbcd3b93df28e77efc11cae45e2c7bfe5f2b398
The paper is about modelling structural locality in non-parametric language models. The key hypothesis is that one should model not only co-occurrence characteristics but also structural characteristics such as locality. The paper explains its key claims via case studies conducted on source code and Wikipedia datasets. The modelling paradigm is based on non-parametric language models; a key difference from the parametric counterpart is that a non-parametric model's parameters are determined not only by the model architecture but also by the underlying data. Structural locality, which is distinct from simple co-occurrence counts, models the structural relationships between pairs of items, e.g., whether they belong to the same or different directories in the case of source code. The optimisation model is presented in Equation 7, where the authors need a small sample set from the same domain to train the model. The authors then conduct experiments demonstrating that the method improves upon existing work, showing both qualitative and quantitative results.
Capturing Structural Locality in Non-parametric Language Models
Structural locality is a ubiquitous feature of real-world datasets , wherein data points are organized into local hierarchies . Some examples include topical clusters in text or project hierarchies in source code repositories . In this paper , we explore utilizing this structural locality within non-parametric language models , which generate sequences that reference retrieved examples from an external source . We propose a simple yet effective approach for adding locality information into such models by adding learned parameters that improve the likelihood of retrieving examples from local neighborhoods . Experiments on two different domains , Java source code and Wikipedia text , demonstrate that locality features improve model efficacy over models without access to these features , with interesting differences . We also perform an analysis of how and where locality features contribute to improved performance and why the traditionally used contextual similarity metrics alone are not enough to grasp the locality structure . 1 INTRODUCTION . Language models ( LMs ) predict a probability distribution over sequences , and are most widely studied to model and generate natural languages ( Bengio et al. , 2003 ; Merity et al. , 2018 ; Baevski & Auli , 2018 ; Brown et al. , 2020 ) . Advances in LMs benefit many natural language processing downstream tasks , such as machine translation ( Bahdanau et al. , 2015 ) , dialog systems ( Sordoni et al. , 2015 ) , question answering ( Yang et al. , 2019 ; Raffel et al. , 2019 ) , and general representation learning for natural language ( Devlin et al. , 2018 ; Liu et al. , 2019 ) . Recently , LMs have also been adopted to model sequences other than text , such as source code written in programming language ( Hindle et al. , 2016 ; Hellendoorn & Devanbu , 2017 ; Alon et al. , 2020 ; Karampatsis et al. , 2020 ) , which can enable useful downstream tasks like code completion ( Raychev et al. , 2014 ) . 
Most current neural LMs are based on parametric neural networks , using RNN ( Mikolov et al. , 2010 ) or Transformer ( Vaswani et al. , 2017 ) architectures . These models make predictions solely using a fixed set of neural network parameters . Recently , more and more neural LMs also incorporate non-parametric components ( Grave et al. , 2017 ; Guu et al. , 2018 ; He et al. , 2020 ; Khandelwal et al. , 2020 ) , which usually first select examples from an external source and then reference them during the prediction . For example , Khandelwal et al . ( 2020 ) model the token-level probability by interpolating the parametric LM probability with a probability obtained from the nearest context-token pairs in an external datastore . Using such non-parametric components in LMs is beneficial because the model no longer needs to memorize everything about the language in its parameters . For such non-parametric LMs , one important concept is a distance metric between the current context and other contexts in the datastore . One example of such a metric is the ℓ2 distance between context vectors calculated by the parametric model ( Khandelwal et al. , 2020 ) . This distance can be used in both retrieval and probability calculation ; items in the datastore that are less distant from the current context are more likely to be retrieved and have a higher influence on the final probability . However , given that non-parametric datastores are typically very large , containing a myriad of contexts from disparate sources , calculating a metric that accurately reflects semantic similarities is non-trivial ; as we demonstrate in experiments , there is much room for improvement in current practice . In this paper , we argue that the relevance of contexts may be correlated with not only contextual distance , but also structural characteristics of the underlying data . 
Specifically , we take advantage of a property we dub structural locality , the propensity of text to be divided into local groups sharing common hierarchical attributes . This property is ubiquitous across many kinds of texts and can provide additional information on how closely related two different examples are to each other . Throughout this paper , we will provide two case-studies of this phenomenon . First , in the domain of programs written in source code , if two source files originate from the same project , they are more likely to be related than files from other projects , and even more so if they are from the exact same package ( Hellendoorn & Devanbu , 2017 ) . Second , in natural language , two sections of Wikipedia text may be more related if they fall within the same topical domain , are from similarly titled sections , or even are from the same article ( as in Figure 1 ) . Notably , this locality often manifests itself at different levels , such as the levels of “ project ” , “ subdirectory ” , and “ file ” cited above for source code . In this paper , we hypothesize that by using multiple levels of structural locality , we can better calibrate the distance metrics used to retrieve examples from non-parametric datastores , thereby improving LM performance . Specifically , we propose a simple-yet-effective approach that can easily be applied to non-parametric LMs : we use different levels of structural locality to define functions that modify the contextual distance metrics used by the non-parametric module . We evaluate our method on two drastically different domains : Java programming language source code , and natural language Wikipedia articles , achieving noticeable LM performance gains in both by adding just 5 and 7 parameters , respectively . 
Moreover , we perform an in-depth analysis showing how the traditionally used contextual similarity metrics alone are not enough to grasp the locality structure , providing evidence for why adding the locality features is indeed useful . We also compare programming languages and natural languages to highlight several interesting differences in terms of how , and how much , the locality helps improve LM performance . 2 NON-PARAMETRIC LANGUAGE MODELS . Given a linguistic context consisting of a sequence of tokens ct = ( w1 , ... wt−1 ) , autoregressive parametric LMs estimate p ( wt|ct ; θ ) , the probability distribution over the next token wt . Such parametric LMs store information regarding the language being modeled in the parameters θ . The size of θ is fixed in advance based on the hyperparameters of the model architecture , in recent years typically a neural network ( Grave et al. , 2016 ; Baevski & Auli , 2018 ; Dai et al. , 2019 ; Brown et al. , 2020 ) . In contrast , a non-parametric LM ’ s number of parameters is not determined by just the model architecture , but also by the underlying data used to train the model . While non-parametric LMs using Bayesian statistics have existed for some time ( Wood et al. , 2011 ; Shareghi et al. , 2017 ; He et al. , 2020 ) , they have recently seen increased prevalence through the introduction of neural LMs that retrieve relevant examples from an external datastore ( Hashimoto et al. , 2018 ; Guu et al. , 2018 ) . In particular , we focus on kNN-LMs ( Khandelwal et al. , 2020 ) , a variety of such models that uses a nearest neighbor retrieval mechanism to augment a pre-trained parametric LM , achieving impressive results without any additional training . Neural network-based LMs usually map the context c to a fixed-length vector representation , with a trained function f ( c ) . In kNN-LMs , the non-parametric component consists of a collection ( D ) of contexts for the kNN to retrieve from . 
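The datastore construction sketched above (each training context c maps to a key f(c) paired with its observed next token) can be illustrated as follows; the toy encoder is purely illustrative and stands in for the trained representation function f:

```python
import numpy as np

def build_datastore(token_seqs, encode_context):
    """Build a kNN-LM datastore: one (key, value) pair per token position.

    `encode_context` stands in for the trained context encoder f(c); here it
    can be any function mapping a token prefix to a fixed-length vector.
    """
    keys, values = [], []
    for seq in token_seqs:
        for t in range(1, len(seq)):
            keys.append(encode_context(seq[:t]))  # key f(c_t) for prefix c_t
            values.append(seq[t])                 # value: observed next token w_t
    return np.stack(keys), values

# Toy encoder: bag-of-token-ids hashed into a small vector (illustrative only).
def toy_encoder(prefix, dim=8):
    v = np.zeros(dim)
    for tok in prefix:
        v[hash(tok) % dim] += 1.0
    return v

K, V = build_datastore([["a", "b", "c"], ["a", "b", "d"]], toy_encoder)
```

In practice the keys would be hidden states of a trained LM and stored in an approximate nearest-neighbor index rather than a dense array.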
Denoting these contexts and their corresponding next tokens as keys and values $(k_i , v_i) = (f(c_i) , w_i)$ , the datastore is $D = \{(k_i , v_i)\}$ (1). During inference , the parametric component of the LM generates the output distribution over next tokens pLM ( wt|ct ; θ ) and the corresponding context representation f ( ct ) , given the test input context ct . Then the non-parametric component of the LM queries the datastore with the f ( ct ) representation to retrieve its k-nearest neighbors N according to a distance function d ( · , · ) . We can then compute a probability distribution over these neighbors using the softmax of their negative distances . The model aggregates the probability mass for each vocabulary item across all its occurrences in the retrieved targets . This distribution is then interpolated with the parametric LM distribution pLM to produce the final kNN-LM distribution : $p_{\mathrm{kNN}}(w_t \mid c_t) \propto \sum_{(k_i , v_i) \in N} \mathbb{1}_{w_t = v_i} \exp(-d(k_i , f(c_t)))$ (2) ; $p(w_t \mid c_t ; \theta) = \lambda\, p_{\mathrm{kNN}}(w_t \mid c_t) + (1 - \lambda)\, p_{\mathrm{LM}}(w_t \mid c_t ; \theta)$ (3). In our experiments , we follow Khandelwal et al . ( 2020 ) in setting the interpolation factor λ to 0.25 . 3 DEFINING STRUCTURAL LOCALITY . We define structural locality as a categorical feature calculated between a pair of contexts ( ci , cj ) in a collection of data , that describes whether the pair share some common , potentially hierarchical , attributes ( e.g. , the section title of a Wikipedia article section , or the directory path of a source code file ) . For each domain , a set of hierarchical attributes { l0 , l1 , ... , ln } can be defined based on prior knowledge of the domain . We denote $l_k(c_i , c_j) \in \{0 , 1\}$ as the boolean locality feature value for the context pair , representing whether ci and cj share the same hierarchical attribute lk . Here , l0 is reserved for “ no locality ” , in case the pair shares none of the attributes . Without loss of generality , we set a constraint that $\sum_k l_k(c_i , c_j) = 1$ , as new features can be introduced by conjunction and negation of the attributes if needed . 
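A minimal sketch of the retrieval and interpolation step (Eqs. 2–3), using squared ℓ2 distance for d(·,·) and the paper's λ = 0.25; the brute-force search here stands in for the approximate nearest-neighbor index used in practice:

```python
import numpy as np

def knn_lm_prob(query, keys, values, p_lm, vocab, k=3, lam=0.25):
    """Interpolate a parametric LM distribution with a kNN distribution.

    `query` is f(c_t); `keys`/`values` form the datastore; `p_lm` is the
    parametric distribution over `vocab`. Returns the kNN-LM distribution.
    """
    d = np.sum((keys - query) ** 2, axis=1)   # distance to every datastore key
    nn = np.argsort(d)[:k]                    # indices of the k nearest neighbors
    weights = np.exp(-d[nn])                  # softmax of negative distances (Eq. 2)
    weights /= weights.sum()
    p_knn = np.zeros(len(vocab))
    for i, w in zip(nn, weights):             # aggregate mass per vocabulary item
        p_knn[vocab.index(values[i])] += w
    return lam * p_knn + (1 - lam) * p_lm     # interpolation (Eq. 3)
```

Neighbors with smaller distance receive exponentially more weight, so less distant datastore items have a higher influence on the final probability, as described above.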
Specific Instantiations . We instantiate these features on our two case studies of Wikipedia text and Java source code , as summarized in Table 1 . In Wikipedia , for every context ci , we define four mutually exclusive hierarchical attributes , l0–l3 . We calculate these features based on the Wikipedia article and section titles , using simple pattern matching . We then link each article to a set of categories ( one article may belong to multiple categories ) using the knowledge graph WikiData¹ , by aggregating all the category entities involving two properties : P31 ( instance of ) and P279 ( subclass of ) . The criterion for “ same section title ” is exact string match ( Hayashi et al. , 2020 ) . If there is at least one common category between the sets of categories for two articles , the pair is assigned the “ same article category ” . For Java source code , we define 3 mutually exclusive attributes , l0–l2 , based on the location of the code . For each source file , we use the full file path to obtain the two attributes : project name and sub-directory path² . The criterion for both “ same project ” and “ same subdirectory ” is exact string match . Note that these features are strictly hierarchical , hence only two features are used to capture specific locality here . An Aside : Connections to Domain Adaptation . Domain adaptation typically refers to reusing existing information about a given problem ( e.g. , data or model ) to solve a task in a new domain . Domain adaptation for neural models generally focuses on fine-tuning models on in-domain data ( Sennrich et al. , 2016 ; Chu et al. , 2017 ) or making direct modifications to the model to consider domain information ( Britz et al. ¹ https://www.wikidata.org/ ² For example , full path ... /Journal.IO/src/main/java/journal/io/api/DataFile.java has project Journal.IO and sub-directory src/main/java/journal/io/api/ for package journal.io.api . 
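For the Java case, the mutually exclusive features l0–l2 can be computed from two file paths by exact string match, as described above; the path layout assumed here ("<project>/<sub/dirs>/<File>.java") is a simplification of real repository layouts:

```python
def java_locality_features(path_a, path_b):
    """Mutually exclusive locality features l0-l2 for two Java source files.

    l2 = same subdirectory (implies same project), l1 = same project only,
    l0 = no locality. Exactly one feature is 1, so they sum to 1.
    """
    def split(path):
        parts = path.strip("/").split("/")
        return parts[0], "/".join(parts[1:-1])   # (project, sub-directory)

    proj_a, sub_a = split(path_a)
    proj_b, sub_b = split(path_b)
    if proj_a == proj_b and sub_a == sub_b:
        return {"l0": 0, "l1": 0, "l2": 1}       # same subdirectory
    if proj_a == proj_b:
        return {"l0": 0, "l1": 1, "l2": 0}       # same project only
    return {"l0": 1, "l1": 0, "l2": 0}           # no locality
```

Because the features are strictly hierarchical, checking the finer level first and falling through to coarser levels enforces mutual exclusivity.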
, 2017 ) or latent topic features ( Khudanpur & Wu , 2000 ; Mikolov & Zweig , 2012 ; Wang & Cho , 2016 ) . Most of these methods do not natively support new test-time contexts that were not seen at training time . In comparison , one immediate advantage of non-parametric LMs is the ability to adapt to different domains at test time without re-training ( Merity et al. , 2016 ; Grave et al. , 2016 ; 2017 ; Khandelwal et al. , 2020 ) . For example , some adaptive LMs ( Grave et al. , 2016 ; 2017 ) make use of the previous hidden states of test documents dynamically during inference . Similarly , our proposed locality features do not require re-training on the training set . Note that within the scope of this paper , although connected , the proposed structural locality is a different concept from domain . We consider domains as higher-level classifications describing the text where one example belongs to one domain label ; e.g. , a section about Kim Kardashian ’ s early life belongs to a category of texts describing celebrities . On the other hand , with structural locality , a user could define multiple levels of locality : to that same section , we can assign not only the domain label , but also the section title “ Early Life ” . The lightweight nature of our model combined with non-parametric LMs also makes adding more levels of features straightforward , as the features only need to be calculated for the top nearest neighbors , and the number of parameters that need tuning in our proposed method ( Section 5 ) is only about twice the number of locality features .
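To make concrete how a handful of learned parameters can modify the contextual distance metric per locality level, here is a hedged sketch; the linear form a[l]·d + b[l] is an illustrative assumption, not necessarily the paper's exact parameterization:

```python
import numpy as np

def locality_adjusted_scores(dists, levels, a, b):
    """Re-score retrieved neighbors using per-level learned parameters.

    `dists[i]` is the contextual distance of neighbor i; `levels[i]` is the
    index of the single locality feature it shares with the query (0 = no
    locality). `a[l]` and `b[l]` would be the handful of learned parameters,
    roughly two per locality level as noted above.
    """
    adj = np.array([a[l] * d + b[l] for d, l in zip(dists, levels)])
    w = np.exp(-adj)                 # softmax of negative adjusted distances
    return w / w.sum()
```

With suitably learned parameters (e.g. a negative bias for local levels), a neighbor from the same project or article can outweigh an equally distant neighbor with no shared structure.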
The authors propose an approach to complement context by adding 'locality' information in examples present in external stores of non-parametric language models. The locality information captures the hierarchical structure, and deems two contexts more similar (or having less distance) if they share common hierarchical structure/attributes. The authors conduct experiments on source code as well as natural language articles. They analyze the results to point out reasons for improvement, and also highlight differences between the domains.
Task-Agnostic Graph Neural Explanations
1 INTRODUCTION . Graph neural networks ( GNNs ) ( Kipf & Welling , 2017 ; Veličković et al. , 2018 ; Xu et al. , 2019 ) have achieved remarkable success in learning from real-world graph-structured data due to their unique ability to capture both feature-wise and topological information . Extending their success , GNNs are widely applied in various research fields and industrial applications including quantum chemistry ( Gilmer et al. , 2017 ) , drug discovery ( Wu et al. , 2018 ; Wang et al. , 2020 ) , social networks ( Fan et al. , 2019 ) , and recommender systems ( Ying et al. , 2018 ) . While multiple approaches have been proposed and studied to improve GNN performance , GNN explainability is an emerging area and has a smaller body of research behind it . Recently , explainability has gained more attention due to an increasing desire for GNNs with more security , fairness , and reliability . Being able to provide a good explanation to a GNN prediction increases model reliability and reduces the risk of incorrect predictions , which is crucial in fields such as molecular biology , chemistry , fraud detection , etc . Existing methods adapting the explanation methods for convolutional neural networks ( CNNs ) or specifically designed for GNNs have shown promising explanations on multiple types of graph data . A recent survey ( Yuan et al. , 2020 ) categorizes existing explanation methods into gradient-based , perturbation , decomposition , and surrogate methods . In particular , perturbation methods involve learning or optimization ( Ying et al. , 2019 ; Luo et al. , 2020 ; Yuan et al. , 2021 ; Lin et al. , 2021 ) and , while bearing higher computational costs , generally achieve state-of-the-art performance in terms of explanation quality . These methods train post-hoc explanation models on top of the prediction model to be explained . Earlier approaches like GNNExplainer ( Ying et al. 
, 2019 ) require training or optimizing an individual explainer for each data instance , i.e. , a graph or a node to be explained . In contrast , PGExplainer ( Luo et al. , 2020 ) performs inductive learning , i.e. , it only requires a one-time training , and the explainer can be generalized to explain all data instances without individual optimization . Compared to other optimization-based explanation methods , PGExplainer significantly improves efficiency in terms of time cost , without loss of explanation performance . However , even state-of-the-art explanation methods like PGExplainer are still task-specific at training and hence suffer from two crucial drawbacks . First , current methods are inefficient in explaining multitask prediction for graph-structured data . For example , one may need to predict multiple chemical properties in drug discovery for a molecular graph . In particular , ToxCast from MoleculeNet has 167 prediction tasks . In these cases , it is common to apply a single GNN model with multiple output dimensions to make predictions for all tasks . However , one is unable to employ a single explainer to explain the above model , since current explainers are trained specifically to explain one prediction task . As a result , in the case of ToxCast , one must train 167 explainers to explain the GNN model . Second , in industry settings , it is common to train GNN models in a two-stage fashion due to scaling , latency , and label sparsity issues . The first stage trains a GNN-based embedding model with a massive amount of unlabeled data in an unsupervised manner to learn embeddings for nodes or graphs . The second stage trains lightweight models such as multilayer perceptrons ( MLPs ) using the frozen embeddings as input to predict the downstream tasks . In the first stage , the downstream tasks are usually unknown or undefined , and existing task-specific explainers cannot be applied . 
Also , there can be tens to hundreds of downstream tasks trained on these GNN embeddings , and training a separate explainer for each task is undesirable and often impractical . To address the above limitations , we present a new task-agnostic explanation pipeline , shown in Figure 1 , where we decompose a prediction model into a GNN embedding model and a downstream model , designing separate explainers for each component . We design the downstream explainers to cooperate with the embedding explainer . The embedding explainer is trained using a self-supervised training framework , which we dub Task-Agnostic GNN Explainer ( TAGE ) , with no knowledge of downstream tasks , models , or labels . In contrast to existing explainers , the learning objective for TAGE is computed on the graph or node embeddings without involving task-related predictions . In addition to eliminating the need for downstream tasks in TAGE , we argue that the self-supervision performed on the embeddings can bring an additional performance boost in terms of explanation quality compared to existing task-specific baselines such as GNNExplainer and PGExplainer . We summarize our contributions as follows : 1 ) We introduce the task-agnostic explanation problem and propose a two-stage explanation pipeline involving an embedding explainer and a downstream explainer . This enables the explanation of multiple downstream tasks with a single embedding explainer . 2 ) We propose a self-supervised training framework TAGE , which is based on conditioned contrastive learning to train the embedding explainer . TAGE requires no knowledge of downstream tasks . 3 ) We perform experiments on real-world datasets and observe that TAGE outperforms existing learning-based explanation baselines in terms of explanation quality , universal explanation ability , and the time required for training and inference . 2 TASK-AGNOSTIC EXPLANATIONS . 2.1 NOTATIONS AND LEARNING-BASED GNN EXPLANATION . 
Our study considers the attributed graph G with node set V and edge set E . We formulate the attributed graph as a tuple of matrices ( A , X ) , where $A \in \{0, 1\}^{|V| \times |V|}$ denotes the adjacency matrix and $X \in \mathbb{R}^{|V| \times d_f}$ denotes the feature matrix with feature dimension $d_f$ . We assume that the prediction model F that is to be explained operates on graph-structured data through two components : a GNN-based embedding model and lighter downstream models . Denoting the input space by $\mathcal{G}$ , a node-level embedding model $E_n : \mathcal{G} \to \mathbb{R}^{|V| \times d}$ takes a graph as input and computes embeddings of dimension d for all nodes in the graph , whereas a graph-level embedding model $E_g : \mathcal{G} \to \mathbb{R}^{1 \times d}$ computes an embedding for the input graph . Subsequently , the downstream model $D : \mathbb{R}^d \to \mathbb{R}$ computes predictions for the downstream task based on the embeddings . Typical GNN explainers consider a task-specific GNN-based model as a complete unit , i.e. , F := D ◦ E . Given a graph G and the GNN-based model F to be explained , our goal is to identify the subgraph Gsub that contributes the most to the final prediction made by F . In other words , we claim that a given prediction is made because F captures crucial information provided by some subgraph Gsub . The learning-based ( or optimization-based ) GNN explanation employs a parametric explainer Tθ associated with the GNN model F to compute the subgraph Gsub of the given graph data . Concretely , the explainer Tθ computes the importance score for each node or edge , denoted as wi or wij , or masks for node attributes denoted as m . It then selects the subgraph Gsub induced by important nodes and edges , i.e. , those whose scores exceed a threshold t , and by masking the unimportant attributes . In our study , we follow Luo et al . ( 2020 ) , focusing on the importance of edges to provide explanations to GNNs . Formally , we have $G_{sub} := (V , E_{sub}) = T_\theta(G)$ , where $E_{sub} = \{(v_i , v_j) : (v_i , v_j) \in E , w_{ij} \geq t\}$ . 2.2 TASK-AGNOSTIC EXPLANATIONS . 
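The final formal step above, selecting E_sub from per-edge importance scores against a threshold t, is a one-liner:

```python
def explanation_subgraph(edges, scores, t=0.5):
    """Select E_sub = {(v_i, v_j) in E : w_ij >= t}, given per-edge
    importance scores produced by an explainer T_theta."""
    return [e for e, w in zip(edges, scores) if w >= t]

edges = [(0, 1), (1, 2), (2, 3)]
scores = [0.9, 0.2, 0.7]
# keeps only the edges whose importance score meets the threshold
```

In practice one may instead keep the top-k edges rather than use a fixed threshold; the selection logic is otherwise the same.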
As introduced in Section 1 , all existing learning-based or optimization-based explanation approaches are task-specific and hence suffer from infeasibility or inefficiency in many real application scenarios . In particular , they are of limited use when downstream tasks are unknown or undefined , and a single explainer cannot be employed to explain a multitask prediction model . To enable the explanation of GNNs in two-stage training and multitask scenarios , we introduce a new explanation paradigm called the task-agnostic explanation . The task-agnostic explanation considers a whole prediction model as an embedding model followed by any number of downstream models . It focuses on explaining the embedding model regardless of the number or the existence of downstream models . In particular , the task-agnostic explanation trains only one explainer $T^{(tag)}_\theta$ to explain the embedding model E , which should satisfy the following features . First , given an input graph G , the explainer $T^{(tag)}_\theta$ should be able to provide different explanations according to the specific downstream task being studied . Table 1 compares the properties of common GNN explanation methods and the desired task-agnostic explainers in multitask scenarios . Second , the explainer $T^{(tag)}_\theta$ can be trained when only the embedding model is available , e.g. , at the first stage of a two-stage training paradigm , regardless of the presence of downstream tasks . When downstream tasks and models are unknown , $T^{(tag)}_\theta$ can still identify which components of the input graph are important for certain embedding dimensions of interest . 3 THE TAGE FRAMEWORK . Our explanation framework TAGE follows the typical scheme of GNN explanation introduced in the previous section . It provides explanations by identifying important edges in a given graph , i.e. , edges whose removal leads to significant changes in the final prediction . 
Specifically , the goal of TAGE is to predict the importance score for each edge in a given graph . Different from existing methods , the proposed TAGE breaks down typical end-to-end GNN explainers into two components . We now provide general descriptions and detailed formulations of the proposed framework . 3.1 TASK-AGNOSTIC EXPLANATION PIPELINE . Following the principle of the desired task-agnostic explanations , we introduce the task-agnostic explanation pipeline , where a typical explanation procedure is performed in two steps . In particular , we decompose the typical end-to-end learning-based GNN explainer into two parts : the embedding explainer TE and the downstream explainer Tdown , corresponding to the two components in the two-stage training and prediction procedure . We compare the typical explanation pipeline and the two-stage explanation pipeline in Figure 1 . The embedding explainer and downstream explainers can be trained or constructed independently from each other . In addition , the embedding explainer can cooperate with any downstream explainer to perform end-to-end explanations on input graphs . The downstream explainer aims to explain task-specific downstream models . As downstream models are usually lightweight MLPs , we simply adopt gradient-based explainers for downstream explainers without training . The downstream explainer takes a downstream model and the graph or node embedding vector as inputs and computes the importance score of each dimension of the embedding vector . The importance scores then serve as a condition vector input to the embedding explainer . Given the condition vector , the embedding explainer explains the GNN-based embedding model by identifying an important subgraph from the input graph data . In other words , given different condition vectors associated with different downstream tasks or models , the embedding explainer can provide corresponding explanations for the same embedding model . 
Formally , we denote the downstream explainer for models from D by Tdown : D × Rd → Rd , which maps input models and embeddings into importance scores m for all embedding dimensions . We denote the embedding explainer associated with the embedding model E by TE : Rd × G → G , which maps a given graph into a subgraph of higher importance , conditioned on the embedding dimension importance m ∈ Rd . The training procedures of the embedding explainer are independent of downstream tasks or downstream explainers . In particular , the downstream explainer is obtained from the downstream model only , and the training of the embedding explainer only requires the embedding model and the input graphs . As downstream models are usually constructed as stacked fully connected ( FC ) layers and the explanation of FC layers has been well studied , our study mainly focuses on the non-trivial training procedure and design of the embedding explainer .
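Since the downstream model is a lightweight MLP, a gradient-based downstream explainer is cheap. For the special case of a linear downstream model D(z) = w·z, the gradient of the output with respect to the embedding is the weight vector itself, giving a saliency-style condition vector m; this is an illustrative choice, not necessarily the paper's exact attribution method:

```python
import numpy as np

def downstream_condition_vector(weight, embedding):
    """Gradient-based downstream explainer T_down for a linear downstream
    model D(z) = w . z : the gradient dD/dz equals w, so per-dimension
    importance is |w_j * z_j| (gradient times input), normalized to serve
    as the condition vector m for the embedding explainer.
    """
    m = np.abs(weight * embedding)
    return m / (m.sum() + 1e-12)   # small epsilon guards against all-zero scores
```

For a deeper MLP, the gradient would be obtained by backpropagation instead, but the resulting vector m plays the same conditioning role for the embedding explainer T_E.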
The authors propose TAGE, a task-agnostic explanation method for explaining GNNs. TAGE explains GNN embedding models without downstream tasks and allows the explanation of multi-task models. This paper maximizes the mutual information of masked graph embedding and masked subgraph embedding as the objective function. The regularization term is added to restrict the number of edges. Empirical results focus on three aspects: (1) Improvement in fidelity scores brought by TAGE; (2) Visualizations on explanations to the GNN model; and (3) Comparison of computational time cost among GNN explainers.
Task-Agnostic Graph Neural Explanations
1 INTRODUCTION . Graph neural networks ( GNNs ) ( Kipf & Welling , 2017 ; Veličković et al. , 2018 ; Xu et al. , 2019 ) have achieved remarkable success in learning from real-world graph-structured data due to their unique ability to capture both feature-wise and topological information . Extending their success , GNNs are widely applied in various research fields and industrial applications including quantum chemistry ( Gilmer et al. , 2017 ) , drug discovery ( Wu et al. , 2018 ; Wang et al. , 2020 ) , social networks ( Fan et al. , 2019 ) , and recommender systems ( Ying et al. , 2018 ) . While multiple approaches have been proposed and studied to improve GNN performance , GNN explainability is an emerging area and has a smaller body of research behind it . Recently , explainability has gained more attention due to an increasing desire for GNNs with more security , fairness , and reliability . Being able to provide a good explanation to a GNN prediction increases model reliability and reduces the risk of incorrect predictions , which is crucial in fields such as molecular biology , chemistry , fraud detection , etc . Existing methods adapting the explanation methods for convolutional neural networks ( CNNs ) or specifically designed for GNNs have shown promising explanations on multiple types of graph data . A recent survey ( Yuan et al. , 2020 ) categorizes existing explanation methods into gradient-based , perturbation , decomposition , and surrogate methods . In particular , perturbation methods involve learning or optimization ( Ying et al. , 2019 ; Luo et al. , 2020 ; Yuan et al. , 2021 ; Lin et al. , 2021 ) and , while bearing higher computational costs , generally achieve state-of-the-art performance in terms of explanation quality . These methods train post-hoc explanation models on top of the prediction model to be explained . Earlier approaches like GNNExplainer ( Ying et al. 
, 2019 ) require training or optimizing an individual explainer for each data instance , i.e. , a graph or a node to be explained . In contrast , PGExplainer ( Luo et al. , 2020 ) performs inductive learning , i.e. , it only requires a one-time training , and the explainer can be generalized to explain all data instances without individual optimization . Compared to other optimization-based explanation methods , PGExplainer significantly improves the efficiency in terms of time cost without performance loss by learning . However , even state-of-the-art explanation methods like PGExplainer are still task-specific at training and hence suffer from two crucial drawbacks . First , current methods are inefficient in explaining multitask prediction for graph-structured data . For example , one may need to predict multiple chemical properties in drug discovery for a molecular graph . In particular , ToxCast from MoleculeNet has 167 prediction tasks . In these cases , it is common to apply a single GNN model with multiple output dimensions to make predictions for all tasks . However , one is unable to employ a single explainer to explain the above model , since current explainers are trained specifically to explain one prediction task . As a result , in the case of ToxCast , one must train 167 explainers to explain the GNN model . Second , in industry settings , it is common to train GNN models in a two-stage fashion due to scaling , latency , and label sparsity issues . The first stage trains a GNN-based embedding model with a massive amount of unlabeled data in an unsupervised manner to learn embeddings for nodes or graphs . The second stage trains lightweight models such as multilayer perceptrons ( MLPs ) using the frozen embeddings as input to predict the downstream tasks . In the first stage , the downstream tasks are usually unknown or undefined , and existing task-specific explainers can not be applied . 
Also , there can be tens to hundreds of downstream tasks trained on these GNN embeddings , and training a separate explainer for each task is undesirable and downright impossible . To address the above limitations , we present a new task-agnostic explanation pipeline , shown in Figure 1 , where we decompose a prediction model into a GNN embedding model and a downstream model , designing separate explainers for each component . We design the downstream explainers to cooperate with the embedding explainer . The embedding explainer is trained using a self-supervised training framework , which we dub Task-Agnostic GNN Explainer ( TAGE ) , with no knowledge of downstream tasks , models , or labels . In contrast to existing explainers , the learning objective for TAGE is computed at the graph or node embeddings without involving task-related predictions . In addition to eliminating the need for downstream tasks in TAGE , we argue that the self-supervision performed on the embeddings can bring additional performance boost in terms of the explanation quality compared to existing task-specific baselines such as GNNExplainer and PGExplainer . We summarize our contributions as follows : 1 ) We introduce the task-agnostic explanation problem and propose a two-stage explanation pipeline involving an embedding explainer and a downstream explainer . This enables the explanation of multiple downstream tasks with a single embedding explainer . 2 ) We propose a self-supervised training framework TAGE , which is based on conditioned contrastive learning to train the embedding explainer . TAGE requires no knowledge of downstream tasks . 3 ) We perform experiments on real-world datasets and observe that TAGE outperforms existing learning-based explanation baselines in terms of explanation quality , universal explanation ability , and the time required for training and inference . 2 TASK-AGNOSTIC EXPLANATIONS . 2.1 NOTATIONS AND LEARNING-BASED GNN EXPLANATION . 
Our study considers the attributed graph G with node set V and edge set E. We formulate the attributed graph as a tuple of matrices ( A , X ) , where A ∈ { 0 , 1 } |V |×|V | denotes the adjacency matrix and X ∈ R|V |×df denotes the feature matrix with feature dimension of df . We assume that the prediction model F that is to be explained operates on graph-structured data through two components : a GNN-based embedding model and lighter downstream models . Denoting the input space by G , a node-level embedding model En : G → R|V |×d takes a graph as input and computes embeddings of dimension d for all nodes in the graph , whereas a graph-level embedding model Eg : G → R1×d computes an embedding for the input graph . Subsequently , the downstream model D : Rd → R computes predictions for the downstream task based on the embeddings . Typical GNN explainers consider a task-specific GNN-based model as a complete unit , i.e. , F : = D ◦ E . Given a graph G and the GNN-based model F to be explained , our goal is to identify the subgraph Gsub that contributes the most to the final prediction made by F . In other words , we claim that a given prediction is made because F captures crucial information provided by some subgraphGsub . The learning-based ( or optimization-based ) GNN explanation employs a parametric explainer Tθ associated with the GNN model F to compute the subgraph Gsub of the given graph data . Concretely , the explainer Tθ computes the importance score for each node or edge , denoted as wi or wij , or masks for node attributes denoted as m. It then selects the subgraph Gsub induced by important nodes and edges , i.e. , whose scores exceed a threshold t , and by masking the unimportant attributes . In our study , we follow Luo et al . ( 2020 ) , focusing on the importance of edges to provide explanations to GNNs . Formally , we have Gsub : = ( V , Esub ) = Tθ ( G ) , where Esub = { ( vi , vj ) : ( vi , vj ) ∈ E , wij ≥ t } . 2.2 TASK-AGNOSTIC EXPLANATIONS . 
As introduced in Section 1 , all existing learning-based or optimization-based explanation approaches are task-specific and hence suffer from infeasibility or inefficiency in many real-application scenarios . In particular , they are of limited use when downstream tasks are unknown or undefined and fail to employ a single explainer to explain a multitask prediction model . To enable the explanation of GNNs in two-stage training and multitask scenarios , we introduce a new explanation paradigm called the task-agnostic explanation . The task-agnostic explanation considers a whole prediction model as an embedding model followed by any number of downstream models . It focuses on explaining the embedding model regardless of the number or the existence of downstream models . In particular , the task-agnostic explanation trains only one explainer $T^{(tag)}_{\theta}$ to explain the embedding model E , which should satisfy the following features . First , given an input graph G , the explainer $T^{(tag)}_{\theta}$ should be able to provide different explanations according to specific downstream tasks being studied . Table 1 compares the properties of common GNN explanation methods and the desired task-agnostic explainers in multitask scenarios . Second , the explainer $T^{(tag)}_{\theta}$ can be trained when only the embedding model is available , e.g. , at the first stage of a two-stage training paradigm , regardless of the presence of downstream tasks . When downstream tasks and models are unknown , $T^{(tag)}_{\theta}$ can still identify which components of the input graph are important for certain embedding dimensions of interest . 3 THE TAGE FRAMEWORK . Our explanation framework TAGE follows the typical scheme of GNN explanation introduced in the previous section . It provides explanations by identifying important edges in a given graph and removing the edges that lead to significant changes in the final prediction . 
Specifically , the goal of TAGE is to predict the importance score for each edge in a given graph . Different from existing methods , the proposed TAGE breaks down typical end-to-end GNN explainers into two components . We now provide general descriptions and detailed formulations of the proposed framework . 3.1 TASK-AGNOSTIC EXPLANATION PIPELINE . Following the principle of the desired task-agnostic explanations , we introduce the task-agnostic explanation pipeline , where a typical explanation procedure is performed in two steps . In particular , we decompose the typical end-to-end learning-based GNN explainer into two parts : the embedding explainer TE and the downstream explainer Tdown , corresponding to the two components in the two-stage training and prediction procedure . We compare the typical explanation pipeline and the two-stage explanation pipeline in Figure 1 . The embedding explainer and downstream explainers can be trained or constructed independently from each other . In addition , the embedding explainer can cooperate with any downstream explainers to perform end-to-end explanations on input graphs . The downstream explainer aims to explain task-specific downstream models . As downstream models are usually lightweight MLPs , we simply adopt gradient-based explainers for downstream explainers without training . The downstream explainer takes a downstream model and the graph or node embedding vector as inputs and computes the importance score of each dimension on the embedding vector . The importance scores then serve as a condition vector input to the embedding explainer . Given the condition vector , the embedding explainer explains the GNN-based embedding model by identifying an important subgraph from the input graph data . In other words , given different condition vectors associated with different downstream tasks or models , the embedding explainer can provide corresponding explanations for the same embedding model . 
Formally , we denote the downstream explainer for models from D by Tdown : D × Rd → Rd , which maps input models and embeddings into importance scores m for all embedding dimensions . We denote the embedding explainer associated with the embedding model E by TE : Rd × G → G , which maps a given graph into a subgraph of higher importance , conditioned on the embedding dimension importance m ∈ Rd . The training procedures of the embedding explainer are independent of downstream tasks or downstream explainers . In particular , the downstream explainer is obtained from the downstream model only , and the training of the embedding explainer only requires the embedding model and the input graphs . As downstream models are usually constructed as stacked fully connected ( FC ) layers and the explanation of FC layers has been well studied , our study mainly focuses on the non-trivial training procedure and design of the embedding explainer .
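As a rough illustration of the conditioning step, the sketch below implements a gradient-based downstream explainer for a tiny hypothetical MLP head D(z) = v·tanh(Wz): it maps an embedding to normalized per-dimension importance scores m, and two different task heads over the same embedding yield two different condition vectors. The model, sizes, and saliency choice (absolute gradient) are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def downstream_importance(W, v, z):
    """Gradient-based downstream explainer T_down for a small MLP D(z) = v . tanh(W z).

    Returns normalized per-dimension importance scores m in R^d: the absolute
    gradient of the prediction w.r.t. the embedding z (a standard saliency choice).
    """
    h = np.tanh(W @ z)
    grad = W.T @ (v * (1.0 - h ** 2))   # dD/dz via the chain rule
    m = np.abs(grad)
    return m / m.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=8)                   # embedding z = E(G) for one graph (hypothetical)
# two hypothetical downstream task heads sharing the same embedding
W_a, v_a = rng.normal(size=(4, 8)), rng.normal(size=4)
W_b, v_b = rng.normal(size=(4, 8)), rng.normal(size=4)
m_a = downstream_importance(W_a, v_a, z)
m_b = downstream_importance(W_b, v_b, z)
# m_a and m_b are different condition vectors for the same embedding explainer
```

In the pipeline above, such a condition vector m would then be fed to the embedding explainer TE together with the input graph.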
This paper is motivated by the fact that existing task-specific explainers are too expensive to be applied to generating explanations for a model trained on multiple tasks. The authors decompose the typical end-to-end learning-based GNN explainer into two parts: the embedding explainer $\mathcal{T}_{\mathcal{E}}$ and the downstream explainer $\mathcal{T}_d$ ($d$ for downstream). The downstream explainer maps embeddings into importance scores $m$ for all embedding dimensions given a trained model and certain inputs. The embedding explainer is associated with the embedding model $\mathcal{E}$ and maps a given graph into a subgraph of high importance, conditioned on the embedding dimension importance. Finally, $\mathcal{T}_{\mathcal{E}}$ is trained in a self-supervised manner, and a gradient explainer is used for $\mathcal{T}_d$. In the experiments, TAGE outperforms the SOTA explainers w.r.t. fidelity, sparsity, and especially time cost.
SP:fb23ef05b515557e71411a840c28e1dd4d39d5cb
Task-Agnostic Graph Neural Explanations
1 INTRODUCTION . Graph neural networks ( GNNs ) ( Kipf & Welling , 2017 ; Veličković et al. , 2018 ; Xu et al. , 2019 ) have achieved remarkable success in learning from real-world graph-structured data due to their unique ability to capture both feature-wise and topological information . Extending their success , GNNs are widely applied in various research fields and industrial applications including quantum chemistry ( Gilmer et al. , 2017 ) , drug discovery ( Wu et al. , 2018 ; Wang et al. , 2020 ) , social networks ( Fan et al. , 2019 ) , and recommender systems ( Ying et al. , 2018 ) . While multiple approaches have been proposed and studied to improve GNN performance , GNN explainability is an emerging area and has a smaller body of research behind it . Recently , explainability has gained more attention due to an increasing desire for GNNs with more security , fairness , and reliability . Being able to provide a good explanation for a GNN prediction increases model reliability and reduces the risk of incorrect predictions , which is crucial in fields such as molecular biology , chemistry , fraud detection , etc . Existing methods , either adapted from explanation methods for convolutional neural networks ( CNNs ) or specifically designed for GNNs , have shown promising explanations on multiple types of graph data . A recent survey ( Yuan et al. , 2020 ) categorizes existing explanation methods into gradient-based , perturbation , decomposition , and surrogate methods . In particular , perturbation methods involve learning or optimization ( Ying et al. , 2019 ; Luo et al. , 2020 ; Yuan et al. , 2021 ; Lin et al. , 2021 ) and , while bearing higher computational costs , generally achieve state-of-the-art performance in terms of explanation quality . These methods train post-hoc explanation models on top of the prediction model to be explained . Earlier approaches like GNNExplainer ( Ying et al. 
, 2019 ) require training or optimizing an individual explainer for each data instance , i.e. , a graph or a node to be explained . In contrast , PGExplainer ( Luo et al. , 2020 ) performs inductive learning , i.e. , it only requires a one-time training , and the explainer can be generalized to explain all data instances without individual optimization . Compared to other optimization-based explanation methods , PGExplainer significantly improves the efficiency in terms of time cost without performance loss by learning . However , even state-of-the-art explanation methods like PGExplainer are still task-specific at training and hence suffer from two crucial drawbacks . First , current methods are inefficient in explaining multitask prediction for graph-structured data . For example , one may need to predict multiple chemical properties in drug discovery for a molecular graph . In particular , ToxCast from MoleculeNet has 167 prediction tasks . In these cases , it is common to apply a single GNN model with multiple output dimensions to make predictions for all tasks . However , one is unable to employ a single explainer to explain the above model , since current explainers are trained specifically to explain one prediction task . As a result , in the case of ToxCast , one must train 167 explainers to explain the GNN model . Second , in industry settings , it is common to train GNN models in a two-stage fashion due to scaling , latency , and label sparsity issues . The first stage trains a GNN-based embedding model with a massive amount of unlabeled data in an unsupervised manner to learn embeddings for nodes or graphs . The second stage trains lightweight models such as multilayer perceptrons ( MLPs ) using the frozen embeddings as input to predict the downstream tasks . In the first stage , the downstream tasks are usually unknown or undefined , and existing task-specific explainers can not be applied . 
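The two-stage industrial setup described above can be sketched in a few lines: frozen embeddings from an unsupervised stage-1 model are shared by many lightweight stage-2 heads. Everything below (embeddings, labels, the logistic-regression heads) is a hypothetical stand-in for that pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stage 1 (done elsewhere): an unsupervised GNN produced frozen node embeddings.
embeddings = rng.normal(size=(100, 16))          # hypothetical frozen embeddings, d = 16

def train_head(Z, y, epochs=200, lr=0.1):
    """Stage 2: fit a lightweight linear head on frozen embeddings (logistic regression)."""
    w = np.zeros(Z.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w -= lr * Z.T @ (p - y) / len(y)         # gradient step on the cross-entropy loss
    return w

# tens to hundreds of downstream tasks can share the same frozen embeddings
tasks = {f"task_{t}": (rng.random(100) < 0.5).astype(float) for t in range(3)}
heads = {name: train_head(embeddings, y) for name, y in tasks.items()}
```

Training one task-specific explainer per head in such a setup is exactly the cost the task-agnostic approach is meant to avoid.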
In this manuscript, a new explainer for GNNs is proposed. The method aims to provide a task-agnostic explanation for the embedding GNN rather than for a specific downstream task. The motivation is that modern GNNs are often trained in a two-stage manner, where the embedding GNN is trained in the first stage and the task-specific lightweight MLP in the second. The proposed method is trained based on the mutual information (MI) between the whole graph and the subgraph generated by the explainer.
SP:fb23ef05b515557e71411a840c28e1dd4d39d5cb
IGLU: Efficient GCN Training via Lazy Updates
1 INTRODUCTION . The Graph Convolution Network ( GCN ) model has received much attention as an effective graph representation learning technique . It can exploit network topology while embedding data points , enabling superior performance in several applications such as node classification on graphs ( Kipf & Welling , 2017 ) , recommendation systems ( Ying et al. , 2020 ) and program repair ( Yasunaga & Liang , 2020 ) . Their success notwithstanding , training GCNs at scale remains challenging , especially on large , dense graphs and with multiple convolution layers . This is mainly due to the aggregation operation that enables GCNs to adapt to graph topology : a node 's output-layer embedding in a GCN is influenced by embeddings of its neighbors in the previous layer , which recursively depend on embeddings of their own neighbors in the layer before that , and so on . Even in GCNs with 2-3 layers , when processing loss terms corresponding to a small sample of nodes in a mini-batch , this causes back-propagation to update a large multi-hop neighborhood , causing mini-batch SGD-based techniques to scale poorly , especially in dense graphs . Efforts to overcome this problem try to limit the number of nodes that receive updates as a result of a back-propagation step ( Chiang et al. , 2019 ; Hamilton et al. , 2018 ; Zeng et al. , 2020 ) . This is done either by sub-sampling the neighborhood or clustering ( note the distinction between nodes sampled to create a mini-batch and neighborhood sampling done to limit the neighborhood of the mini-batch that receives updates ) . Variance reduction techniques have also been studied ( Chen et al. , 2018a ) to reduce additional variance introduced by neighborhood sampling . However , these techniques often require large graphs to be heavily subsampled , resulting in poor accuracy due to insufficient aggregation . They also do not offer unbiased gradients or rigorous convergence guarantees . 
See Section 2 for a more detailed discussion about state-of-the-art methods for GCN training . Our Contributions : This paper presents IGLU , an efficient technique for training GCNs based on lazy updates . An analysis of the gradient structure in GCNs reveals the most expensive component of the back-propagation step initiated at a node to be ( re- ) computation of forward-pass embeddings for its vast multi-hop neighborhood . Based on this observation , IGLU performs back-propagation with significantly reduced complexity using intermediate computations that are cached at regular intervals . This completely avoids neighborhood sampling and is a stark departure from the state-of-the-art . IGLU is architecture-agnostic and can be readily implemented on a wide range of GCN architectures . Avoiding neighborhood sampling also allows IGLU to completely avoid variance artifacts and offer provable convergence to a first-order stationary point under standard assumptions . In experiments , IGLU offered superior accuracies and accelerated convergence on a range of benchmark datasets . 2 RELATED WORKS . ( Bruna et al. , 2013 ; Defferrard et al. , 2016 ; Kipf & Welling , 2017 ) introduced the GCN architecture for transductive learning on graphs . Later works extended GCNs to inductive settings and explored architectural variants such as the GIN ( Xu et al. , 2019 ) . Much effort has focused on speeding up GCN training . Sampling Based Approaches : The simplest neighborhood sampling strategy , adopted by GraphSAGE ( Hamilton et al. , 2018 ) , iteratively sub-samples the multi-hop neighborhood of a node before proceeding to initiate back-propagation at that node . Only the sub-sampled neighbors participate in the back-propagation step , limiting the amount of computation . Layer sampling strategies such as FastGCN ( Chen et al. , 2018b ) , LADIES ( Zou et al. , 2019 ) and ASGCN ( Huang et al. 
, 2018 ) decrease computation by sampling nodes at each GCN layer , using importance sampling to reduce variance and improve connectivity among sampled nodes . FastGCN used the same sampling distribution for all layers and struggled to maintain connectivity unless large batch-sizes were used . LADIES used a per-layer distribution conditioned on nodes sampled for the layer above allowing more efficient use of samples under a budget . ASGCN used a linear model to jointly infer node importance weights . Recent works such as Cluster-GCN ( Chiang et al. , 2019 ) and GraphSAINT ( Zeng et al. , 2020 ) propose subgraph sampling , creating mini-batches out of subgraphs and restricting back-propagation to nodes within the subgraph . To avoid losing too many edges , large mini-batch sizes are used . Cluster-GCN performs graph clustering and chooses multiple clusters per mini-batch while reinserting cross-cluster edges , whereas GraphSAINT samples large subgraphs directly using random walks . Bias and Variance : Sampling techniques face bias as non-linear activations in the GCN architecture make it difficult to offer unbiased estimates of the loss function , e.g. , Zeng et al . ( 2020 ) offer unbiased estimates only if non-linearities are discarded . Sampling techniques also face increased variance , for which variance-reduction techniques have been proposed such as VR-GCN ( Chen et al. , 2018a ) , MVS-GNN ( Cong et al. , 2020 ) and AS-GCN ( Huang et al. , 2018 ) . VR-GCN samples nodes whose embeddings are to be updated at each layer and uses stale embeddings for the rest and offers variance elimination in the limit under suitable conditions . MVS-GNN handles variance due to mini-batch creation by performing importance weighted sampling to construct mini-batches . Bandit Sampler ( Liu et al. , 2020 ) formulates variance reduction as an adversarial bandit problem . Other Approaches : Recent approaches decouple propagation from prediction as a pre-processing step , e.g . 
PPRGo ( Bojchevski et al. , 2020 ) , APPNP ( Klicpera et al. , 2018 ) and SIGN ( Frasca et al. , 2020 ) . APPNP makes use of the relationship between GCNs and PageRank to construct improved propagation schemes via personalized PageRank . PPRGo extends APPNP by approximating the dense propagation matrix via the push-flow algorithm . SIGN proposes inception-style pre-computation of graph convolutional filters to speed up training and inference . GNNAutoScale ( Fey et al. , 2021 ) builds on VR-GCN and makes use of historical embeddings for scaling GNN training to large graphs . IGLU in Context of Related Work : IGLU avoids neighborhood sampling entirely and instead speeds up learning using stale computations . Intermediate computations are cached and lazily updated at regular intervals , e.g. , once per epoch . We note that IGLU 's caching is distinct and much more aggressive ( e.g. , lasting an entire epoch ) than the internal caching performed by popular frameworks such as TensorFlow and PyTorch ( where caches last only a single iteration ) . Recomputing these values in bulk offers IGLU economies of scale . IGLU faces zero sampling variance but incurs bias due to the use of stale computations . Fortunately , this bias is provably bounded , and can be made arbitrarily small by adjusting the step length and refresh frequency of the stale computations . 3 IGLU : EFFICIENT GCN TRAINING VIA LAZY UPDATES . Problem Statement : Consider the problem of learning a GCN architecture on an undirected graph $G ( V , E )$ with each of the $N$ nodes endowed with an initial feature vector $x^0_i \in \mathbb{R}^{d_0}$ , $i \in V$ . $X^0 \in \mathbb{R}^{N \times d_0}$ denotes the matrix of these initial features stacked together . $N ( i ) \subset V$ denotes the set of neighbors of node $i$ . $A$ denotes the ( normalized ) adjacency matrix of the graph . A multi-layer GCN architecture uses a parameterized function at each layer to construct a node 's embedding for the next layer using embeddings of that node as well as those of its neighbors . 
Specifically , $x^k_i = f ( \{ x^{k-1}_j : j \in \{ i \} \cup N ( i ) \} ; E^k )$ , where $E^k$ denotes the parameters of the $k$-th layer . For example , a standard GCN layer is given by $x^k_i = \sigma \left ( \sum_{j \in V} A_{ij} ( W^k )^\top x^{k-1}_j \right )$ , where $E^k$ is simply the matrix $W^k \in \mathbb{R}^{d_{k-1} \times d_k}$ and $d_k$ is the embedding dimensionality after the $k$-th layer . More involved architectures exist that incorporate operations such as layer normalization , batch normalization , etc . In this paper , $E^k$ will always denote the collection of all parameters of the $k$-th layer , e.g. , offset and scale parameters used in a layer-norm operation . $X^k \in \mathbb{R}^{N \times d_k}$ will denote the matrix of $k$-th layer embeddings stacked together , giving us the handy shorthand $X^k = f ( X^{k-1} ; E^k )$ . Given a $K$-layer GCN and a multi-label/multi-class task with $C$ labels/classes , a fully-connected layer $W^{K+1} \in \mathbb{R}^{d_K \times C}$ and an appropriate activation function such as softmax is applied to get predictions which are then fed into the task loss . We note that IGLU does not require the task loss to decompose over the classes . The convergence proofs only require a smooth training objective function . Neighborhood Explosion : To understand the reason behind neighborhood explosion and the cost of mini-batch SGD training , consider a univariate regression problem with a no-frills 2-layer GCN with unidimensional features and sigmoidal activation within the hidden layers , i.e. , $K = 2$ and $C = 1 = d_0 = \dots = d_K$ . This GCN is parameterized by $w^1 , w^2 , w^3 \in \mathbb{R}$ and offers the output $\hat{y}_i = w^3 \sigma ( z^2_i )$ where $z^2_i = \sum_{j \in V} A_{ij} w^2 x^1_j \in \mathbb{R}$ . In turn , we have $x^1_j = \sigma ( z^1_j )$ where $z^1_j = \sum_{j' \in V} A_{jj'} w^1 x^0_{j'} \in \mathbb{R}$ and $x^0_{j'} \in \mathbb{R}$ are the initial features of the nodes . Given a task loss $\ell : \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ , e.g. , least squares , denoting $\ell'_i = \ell' ( \hat{y}_i , y_i )$ gives us $\frac{\partial \ell ( \hat{y}_i , y_i )}{\partial w^1} = \ell'_i \cdot \frac{\partial \hat{y}_i}{\partial z^2_i} \cdot \frac{\partial z^2_i}{\partial w^1} = \ell'_i \cdot w^3 \sigma' ( z^2_i ) \cdot \sum_{j \in V} A_{ij} w^2 \frac{\partial x^1_j}{\partial w^1} = \ell'_i \cdot w^3 \sigma' ( z^2_i ) \cdot \sum_{j \in V} A_{ij} w^2 \sigma' ( z^1_j ) \cdot \sum_{j' \in V} A_{jj'} x^0_{j'}$ . 
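For concreteness, a standard GCN layer of the form described above, with sigmoid activation, can be sketched in a few lines (the graph, weights, and sizes below are illustrative assumptions):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One plain GCN layer: X^k = sigma(A X^{k-1} W^k), with sigma = sigmoid here.

    Row i of A X mixes node i's own features with its neighbors', so each
    layer extends a node's receptive field by one hop.
    """
    return 1.0 / (1.0 + np.exp(-(A @ X @ W)))

# 4-node path graph with self-loops, row-normalized (illustrative numbers)
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A /= A.sum(axis=1, keepdims=True)
X0 = np.eye(4)                      # one-hot initial features, d0 = 4
W1 = np.full((4, 2), 0.1)           # layer weights W^1, d1 = 2
X1 = gcn_layer(A, X0, W1)           # embeddings after one layer, shape (4, 2)
```

Stacking such layers is what produces the multi-hop dependence discussed next.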
The nesting of the summations is conspicuous and establishes the neighborhood explosion : when seeking gradients in a $K$-layer GCN on a graph with average degree $m$ , up to an $m^{K-1}$-sized neighborhood of a node may be involved in the back-propagation update initiated at that node . Note that the above expression involves terms such as $\sigma' ( z^2_i ) , \sigma' ( z^1_j )$ and that the values of $z^2_i , z^1_j$ , etc. , change whenever the model receives updates . Consequently , for a fresh mini-batch of nodes chosen for a particular iteration , terms such as $\sigma' ( z^2_i ) , \sigma' ( z^1_j )$ need to be computed afresh if the gradient is to be computed exactly . Performing these computations amounts to doing forward-pass operations that frequently involve a large neighborhood of the nodes of the mini-batch . Sampling strategies try to limit this cost by directly restricting the neighborhood over which such forward passes are computed . However , this introduces both bias and variance into the gradient updates as discussed in Section 2 . IGLU instead lazily updates various incomplete gradient ( defined below ) and node embedding terms that participate in the above expression . This completely eliminates sampling variance but introduces a bias due to the use of stale terms . However , this bias is not just provably bounded , but can be made arbitrarily small by adjusting the step length and the frequency of refreshing these terms . Lazy Updates for GCN Training : Consider an arbitrary GCN architecture with the following structure : for some parameterized layer functions we have $X^k = f ( X^{k-1} ; E^k )$ where $E^k$ denotes the collection of all parameters of the $k$-th layer , e.g. , weight matrices , offset and scale parameters used in layer-norm operations , etc . $X^k \in \mathbb{R}^{N \times d_k}$ denotes the matrix of $k$-th layer embeddings stacked together and $X^0 \in \mathbb{R}^{N \times d_0}$ are the initial features . 
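The nested chain-rule expression above can be checked numerically. The sketch below implements the no-frills 2-layer scalar GCN, evaluates the nested formula directly, and compares it against a finite-difference estimate of the squared-loss gradient (the toy graph, features, and weights are made up):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(w1, w2, w3, A, x0):
    """The no-frills 2-layer scalar GCN from the text: yhat_i = w3 * sigma(z2_i)."""
    z1 = A @ (w1 * x0)            # z1_j = sum_j' A_jj' w1 x0_j'
    x1 = sigmoid(z1)
    z2 = A @ (w2 * x1)            # z2_i = sum_j A_ij w2 x1_j
    return w3 * sigmoid(z2), z1, z2

# toy 3-node graph (illustrative numbers)
A = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.5, 0.5]])
x0 = np.array([1.0, -2.0, 0.5])
w1, w2, w3 = 0.7, -1.3, 2.0
y = np.array([1.0, 0.0, 1.0])

yhat, z1, z2 = forward(w1, w2, w3, A, x0)
lp_i = yhat - y                   # l'_i for the squared loss 0.5 * (yhat_i - y_i)^2

# the nested expression: the inner double sum sweeps the 2-hop neighborhood
inner = A @ (w2 * sigmoid(z1) * (1 - sigmoid(z1)) * (A @ x0))
grad_analytic = np.sum(lp_i * w3 * sigmoid(z2) * (1 - sigmoid(z2)) * inner)

# finite-difference check against the full loss L = sum_i 0.5 * (yhat_i - y_i)^2
eps = 1e-6
Lp = 0.5 * np.sum((forward(w1 + eps, w2, w3, A, x0)[0] - y) ** 2)
Lm = 0.5 * np.sum((forward(w1 - eps, w2, w3, A, x0)[0] - y) ** 2)
grad_fd = (Lp - Lm) / (2 * eps)
# grad_analytic and grad_fd agree
```

Even on this 3-node example the gradient w.r.t. w1 touches every node's 2-hop neighborhood, which is the explosion being described.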
For a $K$-layer GCN on a multi-label/multi-class task with $C$ labels/classes , a fully-connected layer $W^{K+1} \in \mathbb{R}^{d_K \times C}$ is additionally included to offer $\hat{y}_i = ( W^{K+1} )^\top x^K_i \in \mathbb{R}^C$ . Let us use the shorthand $\hat{Y} \in \mathbb{R}^{N \times C}$ to denote the matrix where the predicted outputs $\hat{y}_i$ over all the nodes are stacked . We assume a task loss function $\ell : \mathbb{R}^C \times \mathbb{R}^C \to \mathbb{R}_+$ that need not decompose over the classes . We will use the abbreviation $\ell_i := \ell ( \hat{y}_i , y_i )$ . For the sake of simplicity , we assume that the loss function itself includes any activation such as softmax that needs to be applied over the predictions $\hat{y}_i$ . Let $L = \sum_{i \in V} \ell_i$ denote the total loss function . Motivation : We define the loss derivative matrix $G = [ g_{ic} ] \in \mathbb{R}^{N \times C}$ as $g_{ic} := \frac{\partial \ell_i}{\partial \hat{y}_{ic}}$ . As the proof of Lemma 1 ( see Appendix C ) shows , the loss derivative with respect to parameters $E^k$ at any layer has the form $\frac{\partial L}{\partial E^k} = \sum_{j=1}^{N} \sum_{p=1}^{d_k} \left ( \sum_{i \in V} \sum_{c \in [C]} g_{ic} \cdot \frac{\partial \hat{y}_{ic}}{\partial X^k_{jp}} \right ) \frac{\partial X^k_{jp}}{\partial E^k}$ . Note that the loss derivative is expressed in terms of partial derivatives $\frac{\partial X^k_{jp}}{\partial E^k}$ that can be computed for any node using only embeddings of its neighbors in the $( k-1 )$-th layer , i.e. , $X^{k-1}$ , thus avoiding any neighborhood explosion . This means that neighborhood explosion must be encountered while computing the terms encapsulated in the round brackets . Let us first formally recognize these terms as incomplete gradients . The notation $\left . \frac{\partial P}{\partial Q} \right |_{R}$ denotes the partial derivative of $P$ w.r.t . $Q$ while keeping $R$ fixed , i.e. , treated as a constant . Definition 1 . For any layer $k \leq K$ , define its incomplete task gradient to be $\alpha^k = [ \alpha^k_{jp} ] \in \mathbb{R}^{N \times d_k}$ , where $\alpha^k_{jp} := \left . \frac{\partial \langle G , \hat{Y} \rangle}{\partial X^k_{jp}} \right |_{G} = \sum_{i \in V} \sum_{c \in [C]} g_{ic} \cdot \frac{\partial \hat{y}_{ic}}{\partial X^k_{jp}}$ , with $\langle \cdot , \cdot \rangle$ denoting the Frobenius inner product . The following lemma completely characterizes the loss gradients of the GCN but also shows that the incomplete gradient terms $\alpha^k , k \in [K]$ can be efficiently computed using a recursive formulation that also does not involve any neighborhood explosion . Lemma 1 . The following results hold whenever the task loss $L$ is differentiable : 1 . 
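The incomplete-gradient construction can be verified on a toy example. The sketch below uses a 2-layer linear GCN (non-linearities dropped purely for brevity; all sizes and values are made up): the incomplete gradients are computed recursively, one layer at a time, and the resulting layer gradient matches a finite-difference estimate of the exact gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d0, d1, d2, C = 5, 3, 4, 4, 2
A = rng.random((N, N)); A /= A.sum(1, keepdims=True)     # normalized adjacency
X0 = rng.normal(size=(N, d0))
W1 = rng.normal(size=(d0, d1))
W2 = rng.normal(size=(d1, d2))
W3 = rng.normal(size=(d2, C))                            # final FC layer W^{K+1}, K = 2
Y = rng.normal(size=(N, C))

def loss(W1, W2, W3):
    X1 = A @ X0 @ W1                                     # layer 1 (linear for clarity)
    X2 = A @ X1 @ W2                                     # layer 2
    return 0.5 * np.sum((X2 @ W3 - Y) ** 2), X1, X2

L, X1, X2 = loss(W1, W2, W3)
G = X2 @ W3 - Y                                          # g_ic = d l_i / d yhat_ic

# recursive incomplete gradients: each step touches a single layer only
alpha2 = G @ W3.T                                        # alpha^K = G (W^{K+1})^T
alpha1 = A.T @ alpha2 @ W2.T                             # alpha^1 from alpha^2

# layer gradient assembled from the incomplete gradient
dW1 = (A @ X0).T @ alpha1

# finite-difference check of one entry
eps = 1e-6
P = np.zeros_like(W1); P[0, 0] = eps
fd = (loss(W1 + P, W2, W3)[0] - loss(W1 - P, W2, W3)[0]) / (2 * eps)
# dW1[0, 0] matches fd
```

With non-linear layers the per-layer Jacobians change, but the same one-layer-at-a-time recursion applies.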
For the final fully-connected layer, $\frac{\partial L}{\partial W^{K+1}} = (X^K)^\top G$; moreover, for any $k \in [K]$ and any parameter $E^k$ in the $k$th layer,
$$\frac{\partial L}{\partial E^k} = \frac{\partial \langle \alpha^k, X^k \rangle}{\partial E^k}\Bigg|_{\alpha^k} = \sum_{i \in V} \sum_{p=1}^{d_k} \alpha^k_{ip} \cdot \frac{\partial X^k_{ip}}{\partial E^k}.$$

2. For the final layer, $\alpha^K = G (W^{K+1})^\top$; moreover, for any $k < K$,
$$\alpha^k = \frac{\partial \langle \alpha^{k+1}, X^{k+1} \rangle}{\partial X^k}\Bigg|_{\alpha^{k+1}}, \quad \text{i.e.,} \quad \alpha^k_{jp} = \sum_{i \in V} \sum_{q=1}^{d_{k+1}} \alpha^{k+1}_{iq} \cdot \frac{\partial X^{k+1}_{iq}}{\partial X^k_{jp}}.$$

Lemma 1 establishes a recursive definition of the incomplete gradients using terms such as $\partial X^{k+1}_{iq} / \partial X^k_{jp}$ that concern only a single layer. Thus, computing $\alpha^k$ for any $k \in [K]$ does not involve any neighborhood explosion since only the immediate neighbors of a node need be consulted. Lemma 1 also shows that if the $\alpha^k$ are computed and frozen, the loss derivatives $\frac{\partial L}{\partial E^k}$ only require additional computation of terms such as $\partial X^k_{ip} / \partial E^k$, which again involve a single layer and do not cause neighborhood explosion. This motivates lazy updates to the $\alpha^k, X^k$ values in order to accelerate back-propagation. However, performing lazy updates to both $\alpha^k$ and $X^k$ gives suboptimal performance. Hence IGLU adopts two variants, described in Algorithms 1 and 2. The backprop variant* keeps the embeddings $X^k$ stale for an entire epoch but performs eager updates to $\alpha^k$. The inverted variant, on the other hand, keeps the incomplete gradients $\alpha^k$ stale for an entire epoch but performs eager updates to $X^k$.

Algorithm 1 IGLU: backprop order
Input: GCN G, initial features $X^0$, task loss $L$
1: Initialize model parameters $E^k, k \in [K]$, and $W^{K+1}$
2: while not converged do
3:   Do a forward pass to compute $X^k$ for all $k \in [K]$ as well as $\hat{Y}$
4:   Compute $G$, then $\frac{\partial L}{\partial W^{K+1}}$ using Lemma 1 (1), and update $W^{K+1} \leftarrow W^{K+1} - \eta \cdot \frac{\partial L}{\partial W^{K+1}}$
5:   Compute $\alpha^K$ using $G$, $W^{K+1}$ and Lemma 1 (2)
6:   for $k = K \ldots 2$ do
7:     Compute $\frac{\partial L}{\partial E^k}$ using $\alpha^k$, $X^k$ and Lemma 1 (1)
8:     Update $E^k \leftarrow E^k - \eta \cdot \frac{\partial L}{\partial E^k}$
9:     Update $\alpha^k$ from $\alpha^{k+1}$ using Lemma 1 (2)
10:  end for
11: end while

Algorithm 2 IGLU: inverted order
Input: GCN G, initial features $X^0$, task loss $L$
1: Initialize model parameters $E^k, k \in [K]$, and $W^{K+1}$
2: Do an initial forward pass to compute $X^k, k \in [K]$
3: while not converged do
4:   Compute $\hat{Y}$, $G$ and $\alpha^k$ for all $k \in [K]$ using Lemma 1 (2)
5:   for $k = 1 \ldots K$ do
6:     Compute $\frac{\partial L}{\partial E^k}$ using $\alpha^k$, $X^k$ and Lemma 1 (1)
7:     Update $E^k \leftarrow E^k - \eta \cdot \frac{\partial L}{\partial E^k}$
8:     Update $X^k \leftarrow f(X^{k-1}; E^k)$
9:   end for
10:  Compute $\frac{\partial L}{\partial W^{K+1}}$ using Lemma 1 (1) and use it to update $W^{K+1} \leftarrow W^{K+1} - \eta \cdot \frac{\partial L}{\partial W^{K+1}}$
11: end while

SGD Implementation: The update steps in the algorithms (steps 4, 8 in Algorithm 1 and steps 7, 10 in Algorithm 2) are described as a single gradient step over the entire graph to simplify exposition; in practice these steps are implemented using mini-batch SGD. This is done as usual by sampling a mini-batch of nodes $S$ and computing task gradients only w.r.t. $\hat{L}_S = \sum_{i \in S} \ell_i$ instead of $L$.

Contribution: As noted in Section 2, IGLU uses caching in a manner fundamentally different from popular frameworks such as PyTorch or TensorFlow. Their caches are short-lived and always seek to compute exact gradients, unlike IGLU, which computes gradients much faster but with bounded bias. IGLU also extends other caching-based techniques such as VR-GCN, which chooses to cache node embeddings. In contrast, IGLU offers two variants, and the variant that uses the inverted order of updates (Algorithm 2) and caches incomplete gradients outperforms the backprop variant of IGLU (Algorithm 1), which caches node embeddings instead.
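To make the inverted-order variant concrete, here is a minimal full-batch sketch of the update order of Algorithm 2 on a toy 4-node graph with a 2-layer sigmoid GCN and squared loss. The graph, features, targets and initial weights are all made up for illustration, and plain Python lists stand in for a tensor library; this is a sketch of the lazy-update idea, not the authors' implementation.

```python
import math

# -- tiny dense matrix helpers (pure Python, for illustration only) --
def mm(A_, B):  return [[sum(a * b for a, b in zip(r, c)) for c in zip(*B)] for r in A_]
def T(A_):      return [list(c) for c in zip(*A_)]
def had(A_, B): return [[x * y for x, y in zip(r, s)] for r, s in zip(A_, B)]
def sig(A_):    return [[1.0 / (1.0 + math.exp(-x)) for x in r] for r in A_]
def dsig(S):    return [[s * (1.0 - s) for s in r] for r in S]  # sigma'(z) via s = sigma(z)

# Toy row-normalized adjacency with self-loops, features, regression targets.
A = [[.5, .5, 0, 0], [.25, .25, .25, .25], [0, .5, .5, 0], [0, 0, .5, .5]]
X0 = [[1., 0.], [0., 1.], [1., 1.], [.5, 0.]]
Y = [[1.], [0.], [1.], [0.]]
W = {1: [[.1, -.2], [.3, .1]], 2: [[.2, .1], [-.1, .2]], 3: [[.3], [-.3]]}  # W[3] = classifier

def layer(k, Xprev):                       # X^k = f(X^{k-1}; E^k) = sigma(A X^{k-1} W^k)
    return sig(mm(mm(A, Xprev), W[k]))

def step(M, Gd, eta=0.2):                  # in-place SGD step
    for i, row in enumerate(Gd):
        for j, g in enumerate(row):
            M[i][j] -= eta * g

X = {0: X0}
for k in (1, 2):                           # step 2: initial forward pass
    X[k] = layer(k, X[k - 1])

losses = []
for epoch in range(200):
    Yhat = mm(X[2], W[3])
    G = [[yh - y for yh, y in zip(r, s)] for r, s in zip(Yhat, Y)]  # dL/dYhat, squared loss
    losses.append(sum(g * g for r in G for g in r) / 2)
    # step 4: incomplete gradients alpha^k, computed up front and then kept stale
    alpha = {2: mm(G, T(W[3]))}                                     # alpha^K = G (W^{K+1})^T
    alpha[1] = mm(mm(T(A), had(alpha[2], dsig(X[2]))), T(W[2]))     # recursion of Lemma 1 (2)
    for k in (1, 2):                       # steps 5-9: inverted order, eager X refresh
        D = had(alpha[k], dsig(X[k]))                               # alpha^k ⊙ sigma'(Z^k)
        step(W[k], mm(T(mm(A, X[k - 1])), D))                       # dL/dW^k via Lemma 1 (1)
        X[k] = layer(k, X[k - 1])                                   # step 8: refresh X^k
    step(W[3], mm(T(X[2]), G))             # step 10: classifier update
print(losses[0], losses[-1])               # loss decreases despite the stale alphas
```

Despite every $\alpha^k$ being one epoch stale when used, the training loss on this toy problem steadily decreases, which is the behavior the bounded-bias argument is meant to capture.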
Theoretical Analysis: Note that, conditioned on the stale quantities (either $\alpha^k$ or $X^k$, depending on which variant is being executed), the gradients used by IGLU to perform model updates (steps 4, 8 in Algorithm 1 and steps 7, 10 in Algorithm 2) do not exhibit any sampling bias. However, the staleness itself introduces bias into the training process. By controlling the step length $\eta$ and the frequency with which the stale quantities are refreshed, this bias can be provably controlled, resulting in guaranteed convergence to a first-order stationary point. Due to lack of space, we postpone the detailed statement and proof of the convergence guarantee to Appendix C.

Theorem 2 (IGLU Convergence (Informal)). Suppose the task loss function $L$ has $O(1)$-Lipschitz gradients and IGLU is executed with small enough step lengths $\eta$, with model updates (steps 4, 8 in Algorithm 1 and steps 7, 10 in Algorithm 2) carried out either in a full-batch or a mini-batch SGD manner. Then within $T$ iterations, IGLU converges to a model iterate satisfying:

1. $\|\nabla L\|_2^2 \le O(1/T^{2/3})$ if update steps are carried out on the entire graph in a full batch.
2. $\|\nabla L\|_2^2 \le O(1/\sqrt{T})$ if update steps are carried out using mini-batch SGD.

This result holds under minimal assumptions of objective smoothness and boundedness that are standard (Chen et al., 2018a; Cong et al., 2020), yet offers convergence rates comparable to those of standard mini-batch SGD. Moreover, whereas works such as (Chen et al., 2018a) assume bounds on the sup-norm, i.e., the $L_\infty$ norm of the gradients, Theorem 2 only requires an $L_2$ norm bound. Note that objective smoothness requires the architecture to use smooth activation functions. However, IGLU offers similar performance whether using non-smooth activations, e.g., ReLU, or smooth ones, e.g., GELU (see Appendix B.7), as is also observed in other works (Hendrycks & Gimpel, 2020).
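The interplay between the step length and the refresh frequency can be seen even in a one-dimensional toy problem. The sketch below (a made-up illustration of the staleness-bias trade-off, not the paper's analysis) runs gradient descent on $f(x) = x^2/2$ using a cached gradient that is refreshed only every few steps:

```python
# Minimize f(x) = x^2 / 2 using a gradient f'(x) = x that is cached and only
# refreshed every `refresh` steps, mimicking lazily updated quantities.
def lazy_gd(eta, refresh, steps, x0=1.0):
    x, g = x0, x0
    for t in range(steps):
        if t % refresh == 0:
            g = x              # refresh the cached (otherwise stale) gradient
        x -= eta * g           # update uses the possibly stale gradient
    return abs(x)

print(lazy_gd(eta=0.1, refresh=5, steps=200))  # small step: converges toward 0
print(lazy_gd(eta=0.9, refresh=5, steps=200))  # large step + staleness: diverges
```

With a small step length the staleness bias stays controlled and the iterates converge; with a large step length the same refresh schedule diverges, mirroring the "small enough $\eta$" condition in Theorem 2.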
* The backprop variant is named so since it updates model parameters in the order in which back-propagation would have updated them, i.e., $W^{K+1}$ followed by $E^K, E^{K-1}, \ldots$, whereas the inverted variant performs updates in the reverse order, i.e., starting from $E^1, E^2$, all the way to $W^{K+1}$.
This paper tries to tackle the scalability challenge of training GNNs on large graphs. The authors propose IGLU, an architecture-agnostic method that caches intermediate computations and uses a lazy update strategy. A convergence analysis of IGLU is provided, and empirical results show that IGLU outperforms baselines such as GraphSAINT, Cluster-GCN and VR-GCN.
IGLU: Efficient GCN Training via Lazy Updates
1 INTRODUCTION. The Graph Convolution Network (GCN) model has received much attention as an effective graph representation learning technique. It can exploit network topology while embedding data points, enabling superior performance in several applications such as node classification on graphs (Kipf & Welling, 2017), recommendation systems (Ying et al., 2020) and program repair (Yasunaga & Liang, 2020). Their success notwithstanding, training GCNs at scale remains challenging, especially on large, dense graphs and with multiple convolution layers. This is mainly due to the aggregation operation that enables GCNs to adapt to graph topology: a node's output-layer embedding in a GCN is influenced by the embeddings of its neighbors in the previous layer, which recursively depend on the embeddings of their own neighbors in the layer before that, and so on. Even in GCNs with 2-3 layers, when processing loss terms corresponding to a small sample of nodes in a mini-batch, this causes back-propagation to update a large multi-hop neighborhood, causing mini-batch SGD-based techniques to scale poorly, especially on dense graphs. Efforts to overcome this problem try to limit the number of nodes that receive updates as a result of a back-propagation step (Chiang et al., 2019; Hamilton et al., 2018; Zeng et al., 2020). This is done either by sub-sampling the neighborhood or by clustering (note the distinction between nodes sampled to create a mini-batch and neighborhood sampling done to limit the neighborhood of the mini-batch that receives updates). Variance reduction techniques have also been studied (Chen et al., 2018a) to reduce the additional variance introduced by neighborhood sampling. However, these techniques often require large graphs to be heavily subsampled, resulting in poor accuracy due to insufficient aggregation. They also do not offer unbiased gradients or rigorous convergence guarantees.
See Section 2 for a more detailed discussion of state-of-the-art methods for GCN training.

Our Contributions: This paper presents IGLU, an efficient technique for training GCNs based on lazy updates. An analysis of the gradient structure in GCNs reveals that the most expensive component of the back-propagation step initiated at a node is the (re-)computation of forward-pass embeddings for its vast multi-hop neighborhood. Based on this observation, IGLU performs back-propagation with significantly reduced complexity using intermediate computations that are cached at regular intervals. This completely avoids neighborhood sampling and is a stark departure from the state-of-the-art. IGLU is architecture-agnostic and can be readily implemented on a wide range of GCN architectures. Avoiding neighborhood sampling also allows IGLU to completely avoid variance artifacts and to offer provable convergence to a first-order stationary point under standard assumptions. In experiments, IGLU offered superior accuracies and accelerated convergence on a range of benchmark datasets.

2 RELATED WORKS. Bruna et al. (2013); Defferrard et al. (2016); Kipf & Welling (2017) introduced the GCN architecture for transductive learning on graphs. Later works extended GCNs to inductive settings and explored architectural variants such as the GIN (Xu et al., 2019). Much effort has focused on speeding up GCN training.

Sampling Based Approaches: The simplest neighborhood sampling strategy, adopted by GraphSAGE (Hamilton et al., 2018), iteratively sub-samples the multi-hop neighborhood of a node before initiating back-propagation at that node. Only the sub-sampled neighbors participate in the back-propagation step, limiting the amount of computation. Layer sampling strategies such as FastGCN (Chen et al., 2018b), LADIES (Zou et al., 2019) and ASGCN (Huang et al., 2018) decrease computation by sampling nodes at each GCN layer, using importance sampling to reduce variance and improve connectivity among sampled nodes. FastGCN used the same sampling distribution for all layers and struggled to maintain connectivity unless large batch sizes were used. LADIES used a per-layer distribution conditioned on the nodes sampled for the layer above, allowing more efficient use of samples under a budget. ASGCN used a linear model to jointly infer node importance weights. Recent works such as Cluster-GCN (Chiang et al., 2019) and GraphSAINT (Zeng et al., 2020) propose subgraph sampling, creating mini-batches out of subgraphs and restricting back-propagation to nodes within the subgraph. To avoid losing too many edges, large mini-batch sizes are used. Cluster-GCN performs graph clustering and chooses multiple clusters per mini-batch while reinserting cross-cluster edges, whereas GraphSAINT samples large subgraphs directly using random walks.

Bias and Variance: Sampling techniques face bias since the non-linear activations in the GCN architecture make it difficult to offer unbiased estimates of the loss function; e.g., Zeng et al. (2020) offer unbiased estimates only if non-linearities are discarded. Sampling techniques also face increased variance, for which variance-reduction techniques have been proposed, such as VR-GCN (Chen et al., 2018a), MVS-GNN (Cong et al., 2020) and AS-GCN (Huang et al., 2018). VR-GCN samples nodes whose embeddings are to be updated at each layer, uses stale embeddings for the rest, and offers variance elimination in the limit under suitable conditions. MVS-GNN handles variance due to mini-batch creation by performing importance-weighted sampling to construct mini-batches. Bandit Sampler (Liu et al., 2020) formulates variance reduction as an adversarial bandit problem.

Other Approaches: Recent approaches decouple propagation from prediction as a pre-processing step, e.g.,
PPRGo (Bojchevski et al., 2020), APPNP (Klicpera et al., 2018) and SIGN (Frasca et al., 2020). APPNP makes use of the relationship between GCNs and PageRank to construct improved propagation schemes via personalized PageRank. PPRGo extends APPNP by approximating the dense propagation matrix via the push-flow algorithm. SIGN proposes inception-style pre-computation of graph convolutional filters to speed up training and inference. GNNAutoScale (Fey et al., 2021) builds on VR-GCN and makes use of historical embeddings for scaling GNN training to large graphs.

IGLU in Context of Related Work: IGLU avoids neighborhood sampling entirely and instead speeds up learning using stale computations. Intermediate computations are cached and lazily updated at regular intervals, e.g., once per epoch. We note that IGLU's caching is distinct from, and much more aggressive than (e.g., lasting an entire epoch), the internal caching performed by popular frameworks such as TensorFlow and PyTorch (where caches last only a single iteration). Recomputing these values in bulk offers IGLU economies of scale. IGLU faces zero sampling variance but incurs bias due to the use of stale computations. Fortunately, this bias is provably bounded and can be made arbitrarily small by adjusting the step length and the refresh frequency of the stale computations.

3 IGLU: EFFICIENT GCN TRAINING VIA LAZY UPDATES. Problem Statement: Consider the problem of learning a GCN architecture on an undirected graph $G(V, E)$ with each of the $N$ nodes endowed with an initial feature vector $x^0_i \in \mathbb{R}^{d_0}, i \in V$. $X^0 \in \mathbb{R}^{N \times d_0}$ denotes the matrix of these initial features stacked together. $\mathcal{N}(i) \subset V$ denotes the set of neighbors of node $i$, and $A$ denotes the (normalized) adjacency matrix of the graph. A multi-layer GCN architecture uses a parameterized function at each layer to construct a node's embedding for the next layer using the embeddings of that node as well as those of its neighbors.
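The text leaves the normalization of $A$ unspecified. As a concrete illustration, the sketch below builds one common choice, the symmetric normalization $\hat{A} = D^{-1/2}(A + I)D^{-1/2}$ with self-loops from Kipf & Welling (2017); the toy edge list is a made-up input, not one used in the paper:

```python
# Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2} for a toy edge list.
edges = [(0, 1), (1, 2), (2, 3)]   # undirected path graph on N = 4 nodes
N = 4
A = [[0.0] * N for _ in range(N)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0
for i in range(N):                 # add self-loops (the "+ I" term)
    A[i][i] = 1.0
deg = [sum(row) for row in A]      # degrees including self-loops
A_hat = [[A[i][j] / (deg[i] * deg[j]) ** 0.5 for j in range(N)]
         for i in range(N)]
# Rows of A_hat supply the aggregation weights in X^k = sigma(A_hat X^{k-1} W^k).
```

The resulting matrix is symmetric with rows that down-weight high-degree neighbors, which keeps repeated aggregation numerically stable across layers.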
Specifically, $x^k_i = f(\{x^{k-1}_j : j \in \{i\} \cup \mathcal{N}(i)\}; E^k)$, where $E^k$ denotes the parameters of the $k$th layer. For example, a standard GCN layer is given by $x^k_i = \sigma\big(\sum_{j \in V} A_{ij} (W^k)^\top x^{k-1}_j\big)$, where $E^k$ is simply the matrix $W^k \in \mathbb{R}^{d_{k-1} \times d_k}$ and $d_k$ is the embedding dimensionality after the $k$th layer. More involved architectures exist that incorporate operations such as layer normalization, batch normalization, etc. In this paper, $E^k$ will always denote the collection of all parameters of the $k$th layer, e.g., the offset and scale parameters used in a layer-norm operation. $X^k \in \mathbb{R}^{N \times d_k}$ will denote the matrix of $k$th-layer embeddings stacked together, giving us the handy shorthand $X^k = f(X^{k-1}; E^k)$. Given a $K$-layer GCN and a multi-label/multi-class task with $C$ labels/classes, a fully-connected layer $W^{K+1} \in \mathbb{R}^{d_K \times C}$ and an appropriate activation function such as softmax are applied to get predictions, which are then fed into the task loss. We note that IGLU does not require the task loss to decompose over the classes; the convergence proofs only require a smooth training objective.

Neighborhood Explosion: To understand the reason behind neighborhood explosion and the cost of mini-batch SGD training, consider a univariate regression problem with a no-frills 2-layer GCN with unidimensional features and sigmoidal activation within the hidden layers, i.e., $K = 2$ and $C = 1 = d_0 = \ldots = d_K$. This GCN is parameterized by $w^1, w^2, w^3 \in \mathbb{R}$ and offers the output $\hat{y}_i = w^3 \sigma(z^2_i)$ where $z^2_i = \sum_{j \in V} A_{ij} w^2 x^1_j \in \mathbb{R}$. In turn, we have $x^1_j = \sigma(z^1_j)$ where $z^1_j = \sum_{j' \in V} A_{jj'} w^1 x^0_{j'} \in \mathbb{R}$, and $x^0_{j'} \in \mathbb{R}$ are the initial features of the nodes. Given a task loss $\ell$, e.g., least squares, denoting $\ell'_i = \ell'(\hat{y}_i, y_i)$ gives us
$$\frac{\partial \ell(\hat{y}_i, y_i)}{\partial w^1} = \ell'_i \cdot \frac{\partial \hat{y}_i}{\partial z^2_i} \cdot \frac{\partial z^2_i}{\partial w^1} = \ell'_i \cdot w^3 \sigma'(z^2_i) \cdot \sum_{j \in V} A_{ij} w^2 \frac{\partial x^1_j}{\partial w^1} = \ell'_i \cdot w^3 \sigma'(z^2_i) \cdot \sum_{j \in V} A_{ij} w^2 \sigma'(z^1_j) \cdot \sum_{j' \in V} A_{jj'} x^0_{j'}.$$
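As a sanity check on the chain-rule expression above, the following sketch evaluates $\partial \hat{y}_i / \partial w^1$ (the expression without the leading $\ell'_i$ factor) for a toy 3-node graph and compares it against a central finite difference. The adjacency matrix and weights are made-up illustrative values:

```python
import math

sig = lambda z: 1.0 / (1.0 + math.exp(-z))
dsig = lambda z: sig(z) * (1.0 - sig(z))

# toy row-normalized adjacency and scalar node features (assumed inputs)
A = [[.5, .5, 0.], [1/3, 1/3, 1/3], [0., .5, .5]]
x0 = [1.0, -1.0, 0.5]
w1, w2, w3 = 0.7, -0.4, 1.2
V = range(3)

def yhat(i, w1_):
    """Output of the 2-layer scalar GCN at node i for first-layer weight w1_."""
    z1 = [sum(A[j][jp] * w1_ * x0[jp] for jp in V) for j in V]
    z2 = sum(A[i][j] * w2 * sig(z1[j]) for j in V)
    return w3 * sig(z2)

def dyhat_dw1(i):
    """Closed-form derivative from the text: w3 s'(z2_i) sum_j A_ij w2 s'(z1_j) sum_j' A_jj' x0_j'."""
    z1 = [sum(A[j][jp] * w1 * x0[jp] for jp in V) for j in V]
    z2 = sum(A[i][j] * w2 * sig(z1[j]) for j in V)
    return w3 * dsig(z2) * sum(
        A[i][j] * w2 * dsig(z1[j]) * sum(A[j][jp] * x0[jp] for jp in V) for j in V)

i, eps = 0, 1e-6
fd = (yhat(i, w1 + eps) - yhat(i, w1 - eps)) / (2 * eps)  # central finite difference
print(fd, dyhat_dw1(i))  # the two values agree closely
```

Note how evaluating the derivative at node $i$ already forces a pass over every $j$ and $j'$, i.e., the node's 2-hop neighborhood, exactly as the nested summations suggest.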
The nesting of the summations is conspicuous and establishes the neighborhood explosion: when computing gradients in a $K$-layer GCN on a graph with average degree $m$, up to an $m^{K-1}$-sized neighborhood of a node may be involved in the back-propagation update initiated at that node. Note that the above expression involves terms such as $\sigma'(z^2_i)$ and $\sigma'(z^1_j)$, and that the values of $z^2_i, z^1_j$, etc. change whenever the model receives updates. Consequently, for a fresh mini-batch of nodes chosen at a particular iteration, terms such as $\sigma'(z^2_i)$ and $\sigma'(z^1_j)$ need to be computed afresh if the gradient is to be computed exactly. Performing these computations amounts to doing forward-pass operations that frequently involve a large neighborhood of the nodes in the mini-batch. Sampling strategies try to limit this cost by directly restricting the neighborhood over which such forward passes are computed. However, this introduces both bias and variance into the gradient updates, as discussed in Section 2. IGLU instead lazily updates the various incomplete gradient (defined below) and node embedding terms that participate in the above expression. This completely eliminates sampling variance but introduces a bias due to the use of stale terms. However, this bias is not just provably bounded; it can be made arbitrarily small by adjusting the step length and the frequency with which these terms are refreshed.

Lazy Updates for GCN Training: Consider an arbitrary GCN architecture with the following structure: for some parameterized layer functions we have $X^k = f(X^{k-1}; E^k)$, where $E^k$ denotes the collection of all parameters of the $k$th layer, e.g., weight matrices, and offset and scale parameters used in layer-norm operations. $X^k \in \mathbb{R}^{N \times d_k}$ denotes the matrix of $k$th-layer embeddings stacked together and $X^0 \in \mathbb{R}^{N \times d_0}$ are the initial features.
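The $m^{K-1}$ growth is easy to observe empirically. The sketch below counts the $(K-1)$-hop receptive field of a node in a random graph with average degree around 10; the graph model, sizes and seed are arbitrary choices for illustration:

```python
import random

random.seed(0)
N, m = 2000, 10
adj = [set() for _ in range(N)]
while sum(len(a) for a in adj) < N * m:        # random graph with average degree ~m
    u, v = random.randrange(N), random.randrange(N)
    if u != v:
        adj[u].add(v)
        adj[v].add(u)

def receptive_field(src, hops):
    """Number of nodes within `hops` hops of src (inclusive of src)."""
    seen, frontier = {src}, {src}
    for _ in range(hops):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return len(seen)

for K in (1, 2, 3, 4):                         # a K-layer GCN touches a (K-1)-hop neighborhood
    print(K, receptive_field(0, K - 1))        # grows roughly like m^(K-1) until it nears N
```

With just 3-4 layers the receptive field of a single node already covers a large fraction of this 2000-node graph, which is why mini-batch SGD with exact gradients scales poorly.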
The work proposes IGLU, an algorithm to scale up GNN training using stale computations instead of traditional neighborhood sampling. IGLU has bounded bias when the loss and activation functions are smooth. Results on large-scale benchmarks show that IGLU achieves state-of-the-art accuracy and scales better than previous methods.
1 INTRODUCTION . The Graph Convolution Network ( GCN ) model has received much attention as an effective graph representation learning technique . It can exploit network topology while embedding data points enabling superior performance in several applications such as node classification on graphs ( Kipf & Welling , 2017 ) , recommendation systems ( Ying et al. , 2020 ) and program repair ( Yasunaga & Liang , 2020 ) . Their success notwithstanding , training GCNs at scale remains challenging especially on large and dense graphs and multiple convolution layers . This is mainly due to the aggregation operation that enables GCNs to adapt to graph topology – a node ’ s output layer embedding in a GCN is influenced by embeddings of its neighbors in the previous layer which recursively depend on embeddings of their own neighbors in the still previous layer , and so on . Even in GCNs with 2-3 layers , when processing loss terms corresponding to a small sample of nodes in a mini-batch , this causes back propagation to update a large multi-hop neighborhood , causing mini-batch SGD-based techniques to scale poorly especially in dense graphs . Efforts to overcome this problem try to limit the number of nodes that receive updates as a result of a back-propagation step Chiang et al . ( 2019 ) ; Hamilton et al . ( 2018 ) ; Zeng et al . ( 2020 ) . This is done either by sub-sampling the neighborhood or clustering ( note the distinction between nodes sampled to create a mini-batch and neighborhood sampling done to limit the neighborhood of the mini-batch that receives updates ) . Variance reduction techniques have also been studied Chen et al . ( 2018a ) to reduce additional variance introduced by neighborhood sampling . However , these techniques often require large graphs to be heavily subsampled resulting in poor accuracy due to insufficient aggregation . They also do not guarantee unbiased gradients or rigorous convergence guarantees . 
See Section 2 for a more detailed discussion about state-of-the-art methods for GCN training . Our Contributions : This paper presents IGLU , an efficient technique for training GCNs based on lazy updates . An analysis of the gradient structure in GCNs reveals the most expensive component of the back-propagation step initiated at a node to be ( re- ) computation of forward-pass embeddings for its vast multi-hop neighborhood . Based on this observation , IGLU performs back-propagation with significantly reduced complexity using intermediate computations that are cached at regular intervals . This completely avoids neighborhood sampling and is a stark departure from the state-of-the-art . IGLU is architecture-agnostic and can be readily implemented on a wide range of GCN architectures . Avoiding neighborhood sampling also allows IGLU to completely avoid variance artifacts and offer provable convergence to a first-order stationary point under standard assumptions . In experiments , IGLU offered superior accuracies and accelerated convergence on a range of benchmark datasets . 2 RELATED WORKS . ( Bruna et al. , 2013 ; Defferrard et al. , 2016 ; Kipf & Welling , 2017 ) introduced the GCN architecture for transductive learning on graphs . Later works extended to inductive settings and explored architectural variants such as the GIN ( Xu et al. , 2019 ) . Much effort has focused on speeding-up GCN training . Sampling Based Approaches : The simplest neighborhood sampling strategy adopted by GraphSAGE ( Hamilton et al. , 2018 ) iteratively sub-samples the multi-hop neighborhood of a node before proceeding to initiate back-propagation at that node . Only the sub-sampled neighbors participate in the back-propagation step limiting the amount of computation . Layer sampling strategies such as FastGCN ( Chen et al. , 2018b ) , LADIES ( Zou et al. , 2019 ) and ASGCN ( Huang et al. 
, 2018 ) decrease computation by sampling nodes at each GCN layer , using importance sampling to reduce variance and improve connectivity among sampled nodes . FastGCN used the same sampling distribution for all layers and struggled to maintain connectivity unless large batch-sizes were used . LADIES used a per-layer distribution conditioned on nodes sampled for the layer above allowing more efficient use of samples under a budget . ASGCN used a linear model to jointly infer node importance weights . Recent works such as Cluster-GCN ( Chiang et al. , 2019 ) and GraphSAINT ( Zeng et al. , 2020 ) propose subgraph sampling creating mini-batches out of subgraphs and restricting back-propagation to nodes within the subgraph . To avoid losing too many edges , large mini-batch sizes are used . Cluster-GCN performs graph clustering and chooses multiple clusters per mini-batch while reinserting cross-cluster edges whereas GraphSAINT samples large subgraphs directly using random walks . Bias and Variance : Sampling techniques face bias as non-linear activations in the GCN architecture make it difficult to offer unbiased estimates of the loss function e.g . Zeng et al . ( 2020 ) offer un-biased estimates only if non-linearities are discarded . Sampling techniques also face increased variance for which variance-reduction techniques have been proposed such as VR-GCN ( Chen et al. , 2018a ) , MVS-GNN ( Cong et al. , 2020 ) and AS-GCN ( Huang et al. , 2018 ) . VR-GCN samples nodes whose embeddings are to be updated at each layer and uses stale embeddings for the rest and offers variance elimination in the limit under suitable conditions . MVS-GNN handles variance due to mini-batch creation by performing importance weighted sampling to construct mini-batches . Bandit Sampler ( Liu et al. , 2020 ) formulates variance reduction as an adversarial bandit problem . Other Approaches : Recent approaches decouple propagation from prediction as a pre-processing step e.g . 
PPRGo ( Bojchevski et al. , 2020 ) , APPNP ( Klicpera et al. , 2018 ) and SIGN ( Frasca et al. , 2020 ) . APPNP makes use of the relationship between the GCNs and PageRank to construct improved propagation schemes via personalized PageRank . PPRGo extends APPNP by approximating the dense propagation matrix via the push-flow algorithm . SIGN proposes inception style pre-computation of graph convolutional filters to speed up training and inference . GNNAutoScale ( Fey et al. , 2021 ) builds on VR-GCN and makes use of historical embeddings for scaling GNN training to large graphs . IGLU in Context of Related Work : IGLU avoids neighborhood sampling entirely and instead speeds-up learning using stale computations . Intermediate computations are cached and lazily updated at regular intervals e.g . once per epoch . We note that IGLU ’ s caching is distinct and much more aggressive ( e.g . lasting an entire epoch ) than the internal caching performed by popular frameworks such as TensorFlow and PyTorch ( where caches last only a single iteration ) . Recomputing these values in bulk offers IGLU economies of scale . IGLU faces zero sampling variance but incurs bias due to the use of stale computations . Fortunately , this bias is provably bounded , and can be made arbitrarily small by adjusting the step length and refresh frequency of the stale computations . 3 IGLU : EFFICIENT GCN TRAINING VIA LAZY UPDATES . Problem Statement : Consider the problem of learning a GCN architecture on an undirected graph G ( V , E ) with each of the N nodes endowed with an initial feature vector x0i ∈ Rd0 , i ∈ V . X0 ∈ Rn×d0 denotes the matrix of these initial features stacked together . N ( i ) ⊂ V denotes the set of neighbors of node i . A denotes the ( normalized ) adjacency matrix of the graph . A multi-layer GCN architecture uses a parameterized function at each layer to construct a node ’ s embedding for the next layer using embeddings of that node as well as those of its neighbors . 
Specifically, x^k_i = f(x^{k−1}_j, j ∈ {i} ∪ N(i); E^k), where E^k denotes the parameters of the k-th layer. For example, a standard GCN layer is given by x^k_i = σ(Σ_{j∈V} A_ij (W^k)^⊤ x^{k−1}_j), where E^k is simply the matrix W^k ∈ R^{d_{k−1}×d_k} and d_k is the embedding dimensionality after the k-th layer. More involved architectures exist that incorporate operations such as layer normalization, batch normalization, etc. In this paper, E^k will always denote the collection of all parameters of the k-th layer, e.g. the offset and scale parameters used in a layer-norm operation. X^k ∈ R^{N×d_k} will denote the matrix of k-th layer embeddings stacked together, giving us the handy shorthand X^k = f(X^{k−1}; E^k). Given a K-layer GCN and a multi-label/multi-class task with C labels/classes, a fully-connected layer W^{K+1} ∈ R^{d_K×C} and an appropriate activation function such as softmax are applied to get predictions, which are then fed into the task loss. We note that IGLU does not require the task loss to decompose over the classes. The convergence proofs only require a smooth training objective function. Neighborhood Explosion: To understand the reason behind neighborhood explosion and the cost of mini-batch SGD training, consider a univariate regression problem with a no-frills 2-layer GCN with unidimensional features and sigmoidal activation within the hidden layers, i.e. K = 2 and C = 1 = d_0 = ... = d_K. This GCN is parameterized by w^1, w^2, w^3 ∈ R and offers the output ŷ_i = w^3 σ(z^2_i), where z^2_i = Σ_{j∈V} A_ij w^2 x^1_j ∈ R. In turn, we have x^1_j = σ(z^1_j), where z^1_j = Σ_{j′∈V} A_jj′ w^1 x^0_j′ ∈ R and the x^0_j′ ∈ R are the initial features of the nodes. Given a task loss ℓ : R^N × R^N → R, e.g. least squares, denoting ℓ′_i = ℓ′(ŷ_i, y_i) gives us ∂ℓ(ŷ_i, y_i)/∂w^1 = ℓ′_i · (∂ŷ_i/∂z^2_i) · (∂z^2_i/∂w^1) = ℓ′_i · w^3 σ′(z^2_i) · Σ_{j∈V} A_ij w^2 (∂x^1_j/∂w^1) = ℓ′_i · w^3 σ′(z^2_i) · Σ_{j∈V} A_ij w^2 σ′(z^1_j) · Σ_{j′∈V} A_jj′ x^0_j′.
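The 2-layer chain rule above can be checked numerically. Below is a minimal numpy sketch (the toy adjacency, data, and squared loss are illustrative assumptions, not the paper's setup) that evaluates the derived expression for ∂L/∂w^1 and compares it against a finite-difference estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.random((N, N))                   # toy (unnormalized) adjacency weights
x0 = rng.standard_normal(N)              # initial scalar node features
y = rng.standard_normal(N)               # regression targets
w1, w2, w3 = 0.3, -0.5, 0.8

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
dsig = lambda z: sig(z) * (1.0 - sig(z))  # sigma'(z)

def forward(w1):
    z1 = A @ (w1 * x0)                   # z1_j = sum_j' A_jj' w1 x0_j'
    z2 = A @ (w2 * sig(z1))              # z2_i = sum_j A_ij w2 x1_j
    return z1, z2, w3 * sig(z2)          # yhat_i = w3 * sigma(z2_i)

z1, z2, yhat = forward(w1)
lprime = 2 * (yhat - y)                  # ell'_i for a squared loss

# Derived expression: sum_i ell'_i * w3 s'(z2_i) * sum_j A_ij w2 s'(z1_j) * (A x0)_j
dL_dw1 = np.sum(lprime * w3 * dsig(z2) * (A @ (w2 * dsig(z1) * (A @ x0))))

# Finite-difference check on L = sum_i (yhat_i - y_i)^2
eps = 1e-6
Lp = np.sum((forward(w1 + eps)[2] - y) ** 2)
Lm = np.sum((forward(w1 - eps)[2] - y) ** 2)
assert abs(dL_dw1 - (Lp - Lm) / (2 * eps)) < 1e-5
```

Note how evaluating the inner term `A @ (... (A @ x0))` touches 2-hop neighborhoods of each node; this is exactly the m^{K−1} blow-up discussed next.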
The nesting of the summations is conspicuous and establishes the neighborhood explosion: when seeking gradients in a K-layer GCN on a graph with average degree m, up to an m^{K−1}-sized neighborhood of a node may be involved in the back-propagation update initiated at that node. Note that the above expression involves terms such as σ′(z^2_i), σ′(z^1_j), and that the values of z^2_i, z^1_j, etc. change whenever the model receives updates. Consequently, for a fresh mini-batch of nodes chosen for a particular iteration, terms such as σ′(z^2_i), σ′(z^1_j) need to be computed afresh if the gradient is to be computed exactly. Performing these computations amounts to doing forward-pass operations that frequently involve a large neighborhood of the nodes of the mini-batch. Sampling strategies try to limit this cost by directly restricting the neighborhood over which such forward passes are computed. However, this introduces both bias and variance into the gradient updates, as discussed in Section 2. IGLU instead lazily updates the various incomplete gradient (defined below) and node embedding terms that participate in the above expression. This completely eliminates sampling variance but introduces a bias due to the use of stale terms. However, this bias is not just provably bounded, but can be made arbitrarily small by adjusting the step length and the frequency of refreshing these terms. Lazy Updates for GCN Training: Consider an arbitrary GCN architecture with the following structure: for some parameterized layer functions we have X^k = f(X^{k−1}; E^k), where E^k denotes the collection of all parameters of the k-th layer, e.g. weight matrices, offset and scale parameters used in layer-norm operations, etc. X^k ∈ R^{N×d_k} denotes the matrix of k-th layer embeddings stacked together and X^0 ∈ R^{N×d_0} are the initial features.
For a K-layer GCN on a multi-label/multi-class task with C labels/classes, a fully-connected layer W^{K+1} ∈ R^{d_K×C} is additionally included to offer ŷ_i = (W^{K+1})^⊤ x^K_i ∈ R^C. Let us use the shorthand Ŷ ∈ R^{N×C} to denote the matrix where the predicted outputs ŷ_i over all the nodes are stacked. We assume a task loss function ℓ : R^C × R^C → R_+ that need not decompose over the classes. We will use the abbreviation ℓ_i := ℓ(ŷ_i, y_i). For the sake of simplicity, we assume that the loss function itself includes any activation, such as softmax, that needs to be applied over the predictions ŷ_i. Let L = Σ_{i∈V} ℓ_i denote the total loss function. Motivation: We define the loss derivative matrix G = [g_ic] ∈ R^{N×C} as g_ic := ∂ℓ_i/∂ŷ_ic. As the proof of Lemma 1 (see Appendix C) shows, the loss derivative with respect to the parameters E^k at any layer has the form ∂L/∂E^k = Σ_{j=1}^{N} Σ_{p=1}^{d_k} (Σ_{i∈V} Σ_{c∈[C]} g_ic · ∂ŷ_ic/∂X^k_jp) · ∂X^k_jp/∂E^k. Note that the loss derivative is expressed in terms of partial derivatives ∂X^k_jp/∂E^k that can be computed for any node using only the embeddings of its neighbors in the (k−1)-th layer, i.e. X^{k−1}, thus avoiding any neighborhood explosion. This means that neighborhood explosion must be encountered while computing the terms encapsulated in the round brackets. Let us first formally recognize these terms as incomplete gradients. The notation ∂P/∂Q |_R denotes the partial derivative of P w.r.t. Q while keeping R fixed, i.e. treated as a constant. Definition 1. For any layer k ≤ K, define its incomplete task gradient to be α^k = [α^k_jp] ∈ R^{N×d_k}, α^k_jp := ∂⟨G, Ŷ⟩/∂X^k_jp |_G = Σ_{i∈V} Σ_{c∈[C]} g_ic · ∂ŷ_ic/∂X^k_jp. The following lemma completely characterizes the loss gradients of the GCN and also shows that the incomplete gradient terms α^k, k ∈ [K] can be efficiently computed using a recursive formulation that likewise does not involve any neighborhood explosion. Lemma 1. The following results hold whenever the task loss L is differentiable: 1.
For the final fully-connected layer we have ∂L/∂W^{K+1} = (X^K)^⊤ G, and for any k ∈ [K] and any parameter E^k in the k-th layer, ∂L/∂E^k = ∂⟨α^k, X^k⟩/∂E^k |_{α^k} = Σ_{i∈V} Σ_{p=1}^{d_k} α^k_ip · ∂X^k_ip/∂E^k. 2. For the final layer, we have α^K = G (W^{K+1})^⊤, and for any k < K, we have α^k = ∂⟨α^{k+1}, X^{k+1}⟩/∂X^k |_{α^{k+1}}, i.e. α^k_jp = Σ_{i∈V} Σ_{q=1}^{d_{k+1}} α^{k+1}_iq · ∂X^{k+1}_iq/∂X^k_jp. Lemma 1 establishes a recursive definition of the incomplete gradients using terms such as ∂X^{k+1}_iq/∂X^k_jp that only concern a single layer. Thus, computing α^k for any k ∈ [K] does not involve any neighborhood explosion, since only the immediate neighbors of a node need be consulted. Lemma 1 also shows that if the α^k are computed and frozen, the loss derivatives ∂L/∂E^k only involve additional computation of terms such as ∂X^k_ip/∂E^k, which yet again involve a single layer and do not cause neighborhood explosion. This motivates lazy updates to the α^k, X^k values in order to accelerate back-propagation. However, performing lazy updates to both α^k and X^k offers suboptimal performance. Hence IGLU adopts the two variants described in Algorithms 1 and 2. The backprop variant* keeps the embeddings X^k stale for an entire epoch but performs eager updates to the α^k. The inverted variant, on the other hand, keeps the incomplete gradients α^k stale for an entire epoch but performs eager updates to the X^k.

Algorithm 1 IGLU: backprop order
Input: GCN G, initial features X^0, task loss L
1: Initialize model parameters E^k, k ∈ [K], and W^{K+1}
2: while not converged do
3:   Do a forward pass to compute X^k for all k ∈ [K] as well as Ŷ
4:   Compute G, then ∂L/∂W^{K+1} using Lemma 1(1), and update W^{K+1} ← W^{K+1} − η · ∂L/∂W^{K+1}
5:   Compute α^K using G, W^{K+1} and Lemma 1(2)
6:   for k = K, ..., 2 do
7:     Compute ∂L/∂E^k using α^k, X^k and Lemma 1(1)
8:     Update E^k ← E^k − η · ∂L/∂E^k
9:     Update α^{k−1} using α^k via Lemma 1(2)
10:  end for
11: end while

Algorithm 2 IGLU: inverted order
Input: GCN G, initial features X^0, task loss L
1: Initialize model parameters E^k, k ∈ [K], and W^{K+1}
2: Do an initial forward pass to compute X^k, k ∈ [K]
3: while not converged do
4:   Compute Ŷ, G and α^k for all k ∈ [K] using Lemma 1(2)
5:   for k = 1, ..., K do
6:     Compute ∂L/∂E^k using α^k, X^k and Lemma 1(1)
7:     Update E^k ← E^k − η · ∂L/∂E^k
8:     Update X^k ← f(X^{k−1}; E^k)
9:   end for
10:  Compute ∂L/∂W^{K+1} using Lemma 1(1) and use it to update W^{K+1} ← W^{K+1} − η · ∂L/∂W^{K+1}
11: end while

SGD Implementation: The update steps in the algorithms (steps 4, 8 in Algorithm 1 and steps 7, 10 in Algorithm 2) are described as a single gradient step over the entire graph to simplify exposition; in practice these steps are implemented using mini-batch SGD. This is done as usual by sampling a mini-batch of nodes S and computing task gradients only w.r.t. L̂_S = Σ_{i∈S} ℓ_i instead of L. Contribution: As noted in Section 2, IGLU uses caching in a manner fundamentally different from popular frameworks such as PyTorch or TensorFlow. Their caches are short-lived and always seek to compute exact gradients, unlike IGLU, which computes gradients much faster but with bounded bias. IGLU also extends other caching-based techniques such as VR-GCN, which choose to cache node embeddings. In contrast, IGLU offers two variants, and the variant that uses the inverted order of updates (Algorithm 2) and caches incomplete gradients outperforms the backprop variant of IGLU (Algorithm 1) that caches node embeddings instead.
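Lemma 1's recursion is easy to sanity-check numerically. The sketch below (a toy 2-layer sigmoid GCN with squared loss; the shapes and data are illustrative assumptions, not the paper's setup) computes α^2 and α^1 via Lemma 1(2), derives the layer gradients via Lemma 1(1), and checks one entry against a finite-difference estimate of the full loss:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d0, d1, d2, C = 5, 3, 4, 4, 2
A = rng.random((N, N)); A /= A.sum(1, keepdims=True)   # toy row-normalized adjacency
X0 = rng.standard_normal((N, d0))
Y = rng.standard_normal((N, C))
W1 = rng.standard_normal((d0, d1)) * 0.5
W2 = rng.standard_normal((d1, d2)) * 0.5
W3 = rng.standard_normal((d2, C)) * 0.5                # W^{K+1}

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2, W3):
    Z1 = A @ X0 @ W1; X1 = sig(Z1)
    Z2 = A @ X1 @ W2; X2 = sig(Z2)
    return Z1, X1, Z2, X2, X2 @ W3                     # Yhat = X2 W3

Z1, X1, Z2, X2, Yhat = forward(W1, W2, W3)
G = 2 * (Yhat - Y)                                     # g_ic for a squared loss
s1 = sig(Z1) * (1 - sig(Z1))                           # sigma'(Z1)
s2 = sig(Z2) * (1 - sig(Z2))                           # sigma'(Z2)

# Lemma 1(2): alpha^K = G (W^{K+1})^T, then recurse one layer at a time
alpha2 = G @ W3.T
alpha1 = A.T @ (alpha2 * s2) @ W2.T                    # only 1-hop neighbors consulted

# Lemma 1(1): layer gradients from incomplete gradients -- single-layer computations
dW3 = X2.T @ G
dW2 = (A @ X1).T @ (alpha2 * s2)
dW1 = (A @ X0).T @ (alpha1 * s1)

# Check one entry of dW1 against a finite-difference estimate of L
eps = 1e-6
E = np.zeros_like(W1); E[0, 0] = eps
Lp = np.sum((forward(W1 + E, W2, W3)[4] - Y) ** 2)
Lm = np.sum((forward(W1 - E, W2, W3)[4] - Y) ** 2)
assert abs(dW1[0, 0] - (Lp - Lm) / (2 * eps)) < 1e-5
```

Note that α^1 is obtained from α^2 with a single application of A^⊤, i.e. one layer's worth of neighbor consultation; no multi-hop neighborhood is ever materialized, which is the property the lazy-update variants exploit.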
Theoretical Analysis: Note that, conditioned on the stale parameters (either α^k or X^k, depending on which variant is being executed), the gradients used by IGLU to perform model updates (steps 4, 8 in Algorithm 1 and steps 7, 10 in Algorithm 2) do not exhibit any sampling bias. However, the staleness itself introduces bias into the training process. By controlling the step length η and the frequency with which the stale parameters are updated, this bias can be provably controlled, resulting in guaranteed convergence to a first-order stationary point. Due to lack of space, we postpone the detailed statement and proof of the convergence guarantee to Appendix C. Theorem 2 (IGLU Convergence (Informal)). Suppose the task loss function L has O(1)-Lipschitz gradients and IGLU is executed with small enough step lengths η, with model updates (steps 4, 8 in Algorithm 1 and steps 7, 10 in Algorithm 2) carried out either in a full-batch or a mini-batch SGD manner. Then within T iterations, IGLU converges to a model iterate satisfying: 1. ‖∇L‖²₂ ≤ O(1/T^{2/3}) if the update steps are carried out on the entire graph in a full batch. 2. ‖∇L‖²₂ ≤ O(1/√T) if the update steps are carried out using mini-batch SGD. This result holds under minimal assumptions of objective smoothness and boundedness that are standard (Chen et al., 2018a; Cong et al., 2020), yet offers convergence rates comparable to those offered by standard mini-batch SGD. Moreover, whereas works such as Chen et al. (2018a) assume bounds on the sup-norm, i.e. the L∞ norm, of the gradients, Theorem 2 only requires an L2 norm bound. Note that objective smoothness requires the architecture to use smooth activation functions. However, IGLU offers similar performance whether using non-smooth activations, e.g. ReLU, or smooth ones, e.g. GELU (see Appendix B.7), as is also observed by other works (Hendrycks & Gimpel, 2020).
* The backprop variant is so named because it updates model parameters in the order in which back-propagation would have updated them, i.e. W^{K+1} followed by E^K, E^{K−1}, ..., whereas the inverted variant performs updates in the reverse order, i.e. starting from E^1, E^2, all the way up to W^{K+1}.
This paper introduces a new method, IGLU, that caches intermediate computations at various GCN layers. This enables IGLU to perform lazy updates that do not require updating a large number of node embeddings during descent and offers much faster convergence without significantly biasing the gradients. Overall, this paper represents a novel solution towards efficient GNN training.
Multi-objective Optimization by Learning Space Partition
1 INTRODUCTION. Multi-objective optimization (MOO) has been extensively used in many practical scenarios involving trade-offs between multiple objectives. For example, in automobile design (Chang, 2015), we must maximize the performance of the engine while simultaneously minimizing emissions and fuel consumption. In finance (Gunantara, 2018), one prefers a portfolio that maximizes the expected return while minimizing risk. Mathematically, in MOO we optimize M objectives f(x) = [f_1(x), f_2(x), ..., f_M(x)] ∈ R^M: min f_1(x), f_2(x), ..., f_M(x) s.t. x ∈ Ω (1). While we could set arbitrary weights for each objective to turn it into a single-objective optimization (SOO) problem, modern MOO methods aim to find the problem's entire Pareto frontier: the set of solutions that are not dominated by any other feasible solutions¹ (see Fig. 1 for an illustration). The Pareto frontier yields a global picture of optimal solution structures rather than focusing on one specific weighted combination of objectives. As a result, MOO is fundamentally different from SOO. Instead of focusing on a single optimal solution, a strong MOO optimizer should cover the search space broadly to explore the Pareto frontier. Popular quality indicators in MOO, such as hypervolume (HV), capture this aspect by computing the volume of the currently estimated frontier. Specifically, given a reference point R ∈ R^M, as shown in Fig. 1(a), the hypervolume of a finite approximate Pareto set P is the M-dimensional Lebesgue measure λ_M of the space dominated by P and bounded from below by R. That is, HV(P, R) = λ_M(∪_{i=1}^{|P|} [R, y_i]), where [R, y_i] denotes the hyper-rectangle bounded by the reference point R and y_i. Consequently, the optimizer must consider the diversity of solutions in addition to their optimality. ¹Here we define dominance y ≺_f x as f_i(x) ≤ f_i(y) for all objectives f_i, with there existing at least one i s.t.
f_i(x) < f_i(y), 1 ≤ i ≤ M. That is, solution x is always better than solution y, regardless of how the M objectives are weighted. While several previous works have proposed approaches to capture this diversity-optimality trade-off (Deb et al., 2002a; Knowles, 2006; Igel et al., 2007; Deb & Jain, 2014; Daulton et al., 2020), in this paper we take a fundamentally different route by learning promising candidate regions from past explored samples. Ideally, to find the Pareto frontier in as few function evaluations as possible, we want to sample heavily in the Pareto optimal set Ω_P, defined as the region of input vectors that corresponds to the Pareto frontier. One way to focus samples on Ω_P is to gradually narrow the full search space down to the subregion containing Ω_P via partitioning. For example, in the case of quadratic objective functions, Ω_P can be separated from the non-optimal set Ω \ Ω_P via simple linear classifiers (see Observations 1 and 2). Motivated by these observations, we design LaMOO, a novel MOO meta-optimizer that progressively partitions regions into sub-regions and then focuses on the sub-regions that are likely to contain Pareto-optimal points, where existing solvers can help. LaMOO is therefore a meta-algorithm. Unlike cutting-plane methods (Loganathan & Sherali, 1987; Hinder, 2018; Vieira & Lisboa, 2019), which leverage the (sub-)gradient of convex objectives as the cutting plane and come with global optimality guarantees, LaMOO is data-driven: it leverages previous samples to build classifiers that learn the partition, and it focuses future samples on these promising regions. No analytical formula for the objectives or their sub-gradients is needed. LaMOO is a multi-objective extension of recent works (Wang et al., 2020; Yang et al., 2021) that also learn space partitions, but for a single black-box objective.
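For intuition, HV can be computed exactly in two dimensions by a simple sweep over the front. The sketch below is an assumption-laden illustration (not the paper's code): it adopts the minimization convention of Eq. (1), so the reference point is taken componentwise worse (larger) than every solution:

```python
def hypervolume_2d(points, ref):
    """Exact 2-D hypervolume under a minimization convention: the area
    dominated by `points` and bounded by the reference point `ref`,
    which must be componentwise worse (larger) than every useful point."""
    # keep only points strictly better than ref, deduplicate, sweep by f1
    pts = sorted({(p[0], p[1]) for p in points if p[0] < ref[0] and p[1] < ref[1]})
    hv, y_prev = 0.0, ref[1]
    for x, y in pts:
        if y < y_prev:                        # point lies on the current front
            hv += (ref[0] - x) * (y_prev - y)  # add the new vertical slab
            y_prev = y
    return hv

P = [(0.0, 0.5), (0.5, 0.0), (0.6, 0.6)]      # third point is dominated
print(hypervolume_2d(P, ref=(1.0, 1.0)))      # 0.75
```

Each non-dominated point contributes a rectangle between its f2 value and the previous front point's f2 value, so dominated points (like the third one above) add nothing; this is why maximizing HV drives both convergence and diversity.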
Empirically, LaMOO outperforms existing approaches on many benchmarks, including standard benchmarks in multi-objective black-box optimization and real-world multi-objective problems like neural architecture search (NAS) (Cai et al., 2019; 2020) and molecule design. For example, as a meta-algorithm, LaMOO combined with CMA-ES as an inner routine requires only 62.5%, 8%, and 29% as many samples to reach the same hypervolume as the original CMA-ES (Igel et al., 2007a) on BraninCurrin (Belakaria et al., 2019), VehicleSafety (Liao et al., 2008) and NasBench201 (Dong & Yang, 2020), respectively. On average, compared to qEHVI, LaMOO uses 50% of the samples to achieve the same performance on these problems. In addition, LaMOO with qEHVI (Daulton et al., 2020) and with CMA-ES requires 71% and 31% fewer samples on average, compared to naive qEHVI and CMA-ES, to achieve the same performance in molecule discovery. 2 RELATED WORK. Bayesian Optimization (BO) (Zitzler et al., 2003; Knowles, 2006; Ponweiser et al., 2008; Couckuyt et al., 2014; Paria et al., 2018; Yang et al., 2019; Daulton et al., 2020) is a popular family of methods to optimize black-box single and multi-objectives. Using observed samples, BO learns a surrogate model f̂(x), searches for new promising candidates based on an acquisition function built on f̂(x), and queries the quality of these candidates with the ground-truth black-box objective(s). In multi-objective Bayesian optimization (MOBO), most approaches leverage Expected Hypervolume Improvement (EHVI) as their acquisition function (Zitzler et al., 2003; Couckuyt et al., 2014; Yang et al., 2019), since finding the Pareto frontier is equivalent to maximizing the hypervolume given a finite search space (Fleischer, 2003). There are methods (Knowles, 2006; Ponweiser et al.
, 2018) that use different acquisition functions like expected improvement (Jones et al., 1998) and Thompson sampling (Thompson, 1933). EHVI is computationally expensive: its cost increases exponentially with the number of objectives. To address this problem, qEHVI (Daulton et al., 2020) accelerates optimization by computing EHVI in parallel, and has become the state-of-the-art MOBO algorithm. In this paper, we leverage qEHVI as a candidate inner solver in our proposed LaMOO algorithm. Evolutionary algorithms (EAs) (Deb et al., 2002a; Igel et al., 2007a; Zhang & Li, 2007; Beume et al., 2007; Fang et al., 2018) are also popular methods for MOO tasks. One category of MOO EAs (Srinivas & Deb, 1994; Deb et al., 2002a; Deb & Jain, 2014) leverages Pareto dominance to simultaneously optimize all objectives. A second category (e.g., Zhang & Li (2007)) decomposes a multi-objective optimization problem into a number of single-objective sub-problems, converting a difficult MOO into several SOOs. A third category is quality-indicator-based methods, such as (Beume et al., 2007) and (Igel et al., 2007a): these scalarize the current Pareto frontier using quality indicators (e.g., HV) and thus transform the MOO into an SOO. New samples are generated by crossover and mutation operations from existing ones. However, the drawbacks of the non-quality-indicator-based methods (i.e., the first two categories) can not be overlooked. Specifically, for MOO with many objectives, NSGA-II (Deb et al., 2002a) easily gets stuck at dominance-resistant solutions (Pang et al., 2020), which are far from the true Pareto frontier, while MOEA/D performs better on many-objective problems but specifying the weight vector for problems with an unknown Pareto front remains the main obstacle (Deb & Jain, 2014).
In addition, A*-search-based algorithms have also been extended to MOO (Stewart & White, 1991; Tung Tung & Lin Chew, 1992; De la Cruz et al., 2005). Quality Indicators. Besides hypervolume, there are several other quality indicators (Van Veldhuizen & Lamont, 1998; Zitzler et al., 2000; Bosman & Thierens, 2003) for evaluating sample quality, which can be used to scalarize the MOO into an SOO. The performance of a quality indicator can be evaluated along three metrics (Deng et al., 2007; Li et al., 2014): convergence (closeness to the Pareto frontier), uniformity (the extent to which the samples follow a uniform distribution), and spread (the extent of the obtained approximate Pareto frontier). Sec. B specifically illustrates the merits of each quality indicator. Hypervolume is the only metric we explored that simultaneously captures convergence, uniformity, and spread without knowledge of the true Pareto frontier, although it may suffer from expensive calculation in many-objective problems. Therefore, throughout this work, we use HV to evaluate the optimization performance of different algorithms. 3 LEARNING SPACE PARTITIONS: A THEORETICAL UNDERSTANDING. Searching a high-dimensional space to find the optimal solution of a function is in general a challenging problem, especially when the function's properties are unknown to the search algorithm. The difficulty is mainly due to the curse of dimensionality: to adequately cover a d-dimensional space, in general, an exponential number of samples is needed. For this reason, many works use a "coarse-to-fine" approach: partition the search space and then focus on promising regions. Traditionally, manually defined criteria are used, e.g., axis-aligned partitions (Munos, 2011b), Voronoi diagrams (Kim et al., 2020), etc. Recently, (Wang et al., 2019; 2020; Yang et al.
, 2021) learn space partitions based on the data collected thus far, and show strong performance in the NeurIPS black-box optimization challenges (Sazanovich et al.; Kim et al.). On the other hand, there is little quantitative understanding of space partitioning. In this paper, we first give a formal theoretical analysis of why learning plays an important role in space-partition approaches for SOO. Leveraging our understanding of how space partitioning works, we propose LaMOO, which empirically outperforms existing SoTA methods on multiple MOO benchmarks. 3.1 PROBLEM SETTING. Intuitively, learning space partitions will yield strong performance if the classifier can determine which regions are promising given few data points. We formalize this intuition below and show why it is better than fixed, manually defined criteria for space partitioning. Consider the following sequential decision task. We have N samples in a discrete subset S_0 and there exists one sample x* that achieves the minimal value of a scalar function f. Note that f can be any property we want, e.g., membership in the Pareto optimal set. The goal is to construct a subset S_T ⊆ S_0 after T steps, so that (1) x* ∈ S_T and (2) |S_T| is as small as possible. More formally, we define the reward function r as the probability that we get x* by randomly sampling from the resulting subset S_T: r := (1/|S_T|) · P(x* ∈ S_T) (2). It is clear that 0 ≤ r ≤ 1, and r = 1 means that we have already found the optimal sample x*. Here we use the discrete case for simplicity and leave the continuous case (i.e., partitioning a region Ω_0 instead of a discrete set S_0) to future work. Note that N could be large, so here we consider it infeasible to enumerate S_0 to find x*. However, sampling from S_0, as well as comparing the quality of sampled solutions, is allowed. An obvious baseline is to simply set S_T := S_0, which yields r_b = 1/N. Now the question is: can we do better?
Here we seek help from the following oracle. Definition 1 ((α, η)-Oracle). Given a subset S that contains x*, after taking k samples from S, the oracle can find a good subset S_good with |S_good| ≤ |S|/2 and P(x* ∈ S_good | x* ∈ S) ≥ 1 − exp(−k / (η|S|^α)) (3). Lemma 1. The algorithm that uniformly draws k samples from S, picks the best, and returns it is a (1, 1)-oracle. See the Appendix for the proof. Note that a (1, 1)-oracle is very weak, and is of little use in obtaining a higher reward r. We typically hope for an oracle with smaller α and η (i.e., both smaller than 1). Intuitively, such oracles are more sample-efficient: with few samples, they can narrow down the region containing the optimal solution x* with high probability. Note that α < 1 corresponds to semi-parametric models. In these cases, the oracle has a generalization property: with substantially fewer samples than N (i.e., on the order of N^α), the oracle is able to put the optimal solution x* on the right side. In the extreme case when α = 0 (i.e., parametric models), whether we classify the optimal solution x* on the correct side depends only on the absolute number of samples collected in S, and is independent of its size. For example, if the function to be optimized is linear, then with d + 1 samples, we can completely characterize the property of all |S| samples. Relation with cutting planes. Our setting can be regarded as a data-driven extension of cutting-plane methods (Loganathan & Sherali, 1987; Vieira & Lisboa, 2019; Hinder, 2018) in optimization, in which a cutting plane is found at the current solution to reduce the search space. For example, if f is convex and its gradient ∇f(x) is available, then we can set S_good := { x ∈ S_0 : ∇f(x_0)^⊤ (x − x_0) ≤ 0 }, since for any x ∈ S_0 \ S_good, convexity gives f(x) ≥ f(x_0) + ∇f(x_0)^⊤ (x − x_0) > f(x_0), and thus x is not better than the current x_0.
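As a concrete illustration of this gradient cut, the sketch below (a toy convex quadratic; all names and constants are illustrative assumptions) applies the half-space filter to a finite candidate pool and checks the convexity guarantee that no discarded point can beat the current iterate:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
f = lambda x: np.sum((x - 0.3) ** 2, axis=-1)   # convex toy objective, minimized at 0.3*ones
grad = lambda x: 2.0 * (x - 0.3)                # its gradient

S0 = rng.uniform(-1.0, 1.0, size=(n, d))        # finite candidate pool
x0 = np.ones(d)                                  # current (deliberately poor) iterate

# Gradient cut: S_good = { x in S0 : grad(x0)^T (x - x0) <= 0 }
keep = (S0 - x0) @ grad(x0) <= 0.0
S_good = S0[keep]

# Convexity guarantee: every discarded x has f(x) >= f(x0) + grad(x0)^T (x - x0) > f(x0)
assert np.all(f(S0[~keep]) > f(x0))

# Hence the pool's best point survives the cut whenever it beats x0
best = np.argmin(f(S0))
assert f(S0[best]) >= f(x0) or keep[best]
```

The cut needs the analytic gradient at x_0; LaMOO's learned partitions replace this with a classifier fit on previously observed samples, which is why no gradient or convexity assumption is needed there.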
However, the cutting-plane method relies on specific function properties such as convexity. In contrast, learned space partitions can leverage knowledge about the function forms, combined with the samples observed so far, to better partition the space.
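A learned partition needs labels separating promising from unpromising samples; in the multi-objective setting a natural label is Pareto dominance. Below is a minimal sketch (a straightforward O(n²) dominance check, assumed here for illustration; LaMOO's actual labeling and classifier choices may differ) that marks the non-dominated samples on which a classifier could then be trained:

```python
import numpy as np

def pareto_mask(F):
    """F: (n, M) objective values under minimization.
    Returns a boolean mask, True where the row is non-dominated."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # row i is dominated if some row is <= in every objective
        # and strictly < in at least one objective
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask

F = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
print(pareto_mask(F))   # non-dominated: rows 0, 1 and 3
```

With such labels in hand, fitting any off-the-shelf classifier on (sample, label) pairs yields a data-driven region boundary, playing the role the gradient half-space plays for cutting planes.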
This work proposes a novel learning-based method called LaMOO to partition the search space for the multi-objective optimization problem. With the learned partition, the computational budget can be allocated to the small promising regions (e.g., the region close to the Pareto frontier) rather than the whole search space. It is a direct generalization from the closely related works (e.g., LaNAS, LaMCTS, and LaP^3) to multi-objective optimization. The proposed LaMOO has promising experimental results on both synthetic functions and real-world optimization problems.
, 2021 ) learn space partitions based on the data collected thus far , and show strong performance in NeurIPS black box optimization challenges ( Sazanovich et al . ; Kim et al. ) . On the other hand , there is little quantitative understanding of space partition . In this paper , we first give a formal theoretical analysis on why learning plays an important role in space-partition approaches for SOO . Leveraging our understanding of how space partitioning works , we propose LaMOO which empirically outperforms existing SoTA methods on multiple MOO benchmarks . 3.1 PROBLEM SETTING . Intuitively , learning space partitions will yield strong performance if the classifier can determine which regions are promising given few data points . We formalize this intuition below and show why it is better than fixed and manually defined criteria for space partitioning . Consider the following sequential decision task . We have N samples in a discrete subset S0 and there exists one sample x∗ that achieves a minimal value of a scalar function f . Note that f can be any property we want , e.g. , in the Pareto optimal set . The goal is to construct a subset ST ⊆ S0 after T steps , so that ( 1 ) x∗ ∈ ST and ( 2 ) |ST | is as small as possible . More formally , we define the reward function r as the probability that we get x∗ by randomly sampling from the resulting subset ST : r : = 1 |ST | P ( x∗ ∈ ST ) ( 2 ) It is clear that 0 ≤ r ≤ 1. r = 1 means that we already found the optimal sample x∗ . Here we use discrete case for simplicity and leave continuous case ( i.e. , partitioning a region Ω0 instead of a discrete set S0 ) to future work . Note N could be large , so here we consider it infeasible to enumerate S0 to find x∗ . However , sampling from S0 , as well as comparing the quality of sampled solutions are allowed . An obvious baseline is to simply set ST : = S0 , then rb = N−1 . Now the question is : can we do better ? 
Here we seek help from the following oracle : Definition 1 ( ( α , η ) -Oracle ) . Given a subset S that contains x∗ , after taking k samples from S , the oracle can find a good subset Sgood with |Sgood| ≤ |S|/2 and P ( x∗ ∈ Sgood|x∗ ∈ S ) ≥ 1− exp ( − k η|S|α ) ( 3 ) Lemma 1 . The algorithm to uniformly draw k samples in S , pick the best and return is a ( 1 , 1 ) -oracle . See Appendix for proof . Note that a ( 1 , 1 ) -oracle is very weak , and is of little use in obtaining higher reward r. We typically hope for an oracle with smaller α and η ( i.e. , both smaller than 1 ) . Intuitively , such oracles are more sample-efficient : with few samples , they can narrow down the region containing the optimal solution x∗ with high probability . Note that α < 1 corresponds to semi-parametric models . In these cases , the oracle has generalization property : with substantially fewer samples than N ( i.e. , on the order of Nα ) , the oracle is able to put the optimal solution x∗ on the right side . In its extreme case when α = 0 ( or parametric models ) , whether we classify the optimal solution x∗ on the correct side only depends on the absolute number of samples collected in S , and is independent of its size . For example , if the function to be optimized is linear , then with d+ 1 samples , we can completely characterize the property of all |S| samples . Relation with cutting plane . Our setting can be regarded as a data-driven extension of cutting plane methods ( Loganathan & Sherali , 1987 ; Vieira & Lisboa , 2019 ; Hinder , 2018 ) in optimization , in which a cutting plane is found at the current solution to reduce the search space . For example , if f is convex and its gradient∇f ( x ) is available , then we can set Sgood : = { x : ∇f ( x0 ) > ( x− x0 ) ≤ 0 , x ∈ S0 } , since for any x ∈ S0 \ Sgood , convexity gives f ( x ) ≥ f ( x0 ) + ∇f ( x0 ) > ( x − x0 ) > f ( x0 ) and thus x is not better than current x0 . 
However , the cutting plane method relies on certain function properties like convexity . In contrast , learning space partition can leverage knowledge about the function forms , combined with observed samples so far , to better partition the space .
The paper develops an enhancement to multi-objective solvers so as to find better Pareto solutions. The idea is to learn a proxy of the distance of samples to the Pareto frontier and to leverage this information to split the search space via a tree structure. Samples can then be drawn from promising nodes of the tree by other multi-objective algorithms. Numerical results investigate the performance of the method on both synthetic and practical benchmarks.
SP:4085a28c03be5973332b8801f216918b78677483
Multi-objective Optimization by Learning Space Partition
1 INTRODUCTION . Multi-objective optimization ( MOO ) has been extensively used in many practical scenarios involving trade-offs between multiple objectives . For example , in automobile design ( Chang , 2015 ) , we must maximize the performance of the engine while simultaneously minimizing emissions and fuel consumption . In finance ( Gunantara , 2018 ) , one prefers a portfolio that maximizes the expected return while minimizing risk . Mathematically , in MOO we optimize M objectives f ( x ) = [ f1 ( x ) , f2 ( x ) , . . . , fM ( x ) ] ∈ RM : min f1 ( x ) , f2 ( x ) , ... , fM ( x ) ( 1 ) s.t . x ∈ Ω While we could set arbitrary weights for each objective to turn it into a single-objective optimization ( SOO ) problem , modern MOO methods aim to find the problem ’ s entire Pareto frontier : the set of solutions that are not dominated by any other feasible solutions1 ( see Fig . 1 for illustration ) . The Pareto frontier yields a global picture of optimal solution structures rather than focusing on one specific weighted combination of objectives . As a result , MOO is fundamentally different from SOO . Instead of focusing on a single optimal solution , a strong MOO optimizer should cover the search space broadly to explore the Pareto frontier . Popular quality indicators in MOO , such as hypervolume ( HV ) , capture this aspect by computing the volume of the currently estimated frontier . Specifically , given a reference point R ∈ RM , as shown in Fig . 1 ( a ) , the hypervolume of a finite approximate Pareto set P is the M-dimensional Lebesgue measure λM of the space dominated by P and bounded from below by R. That is , HV ( P , R ) = λM ( ∪|P|i=1 [ R , yi ] ) , where [ R , yi ] denotes the hyper-rectangle bounded by reference point R and yi . Consequently , the optimizer must consider the diversity of solutions in addition to their optimality . 1Here we define dominance y ≺f x as fi ( x ) ≤ fi ( y ) for all functions fi , and exists at least one i s.t . 
fi ( x ) < fi ( y ) , 1 ≤ i ≤ M . That is , solution x is always better than solution y , regardless of how the M objectives are weighted . While several previous works have proposed approaches to capture this diversity-optimality trade-off ( Deb et al. , 2002a ; Knowles , 2006 ; Igel et al. , 2007 ; Deb & Jain , 2014 ; Daulton et al. , 2020 ) , in this paper , we take a fundamentally different route by learning promising candidate regions from past explored samples . Ideally , to find the Pareto frontier in as few function evaluations as possible , we want to sample heavily in the Pareto optimal set ΩP , defined as the region of input vectors that corresponds to the Pareto frontier . One way to focus samples on ΩP is to gradually narrow the full search space down to the subregion containing ΩP via partitioning . For example , in the case of quadratic objective functions , ΩP can be separated from the non-optimal set Ω\ΩP via simple linear classifiers ( see Observation 1,2 ) . Motivated by these observations , we thus design LaMOO , a novel MOO meta-optimizer that progressively partitions regions into sub-regions and then focuses on sub-regions that are likely to contain Pareto-optimal regions , where existing solvers can help . Therefore , LaMOO is a meta-algorithm . Unlike cutting-plane methods ( Loganathan & Sherali , 1987 ; Hinder , 2018 ; Vieira & Lisboa , 2019 ) that leverage the ( sub ) -gradient of convex objectives as the cutting plane , with global optimality guarantees , LaMOO is data-driven : it leverages previous samples to build classifiers to learn the partition and focuses future samples in these promising regions . No analytical formula of objectives or their sub-gradients is needed . LaMOO is a multi-objective extension of recent works ( Wang et al. , 2020 ; Yang et al. , 2021 ) that also learn space partitions but for a single black-box objective . 
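The dominance relation defined in the footnote above (for minimization, x dominates y when fi(x) ≤ fi(y) for all i, with strict inequality for at least one i) is easy to sketch in code. A minimal illustration, with function names that are ours rather than the paper's:

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization)."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0) and drops out
```

Note that the remaining three points are mutually non-dominated: none is "better" regardless of how the objectives are weighted, which is exactly why MOO must return a set rather than a single solution.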
Empirically , LaMOO outperforms existing approaches on many benchmarks , including standard benchmarks in multi-objective black-box optimization and real-world multi-objective problems such as neural architecture search ( NAS ) ( Cai et al. , 2019 ; 2020 ) and molecule design . For example , as a meta-algorithm , LaMOO combined with CMA-ES as an inner routine requires only 62.5 % , 8 % , and 29 % as many samples to reach the same hypervolume as the original CMA-ES ( Igel et al. , 2007a ) in BraninCurrin ( Belakaria et al. , 2019 ) , VehicleSafety ( Liao et al. , 2008 ) and Nasbench201 ( Dong & Yang , 2020 ) , respectively . On average , compared to qEHVI , LaMOO uses only 50 % as many samples to achieve the same performance on these problems . In addition , LaMOO with qEHVI ( Daulton et al. , 2020 ) and CMA-ES requires 71 % and 31 % fewer samples on average , compared to naive qEHVI and CMA-ES , to achieve the same performance in molecule discovery . 2 RELATED WORK . Bayesian Optimization ( BO ) ( Zitzler et al. , 2003 ; Knowles , 2006 ; Ponweiser et al. , 2008 ; Couckuyt et al. , 2014 ; Paria et al. , 2018 ; Yang et al. , 2019 ; Daulton et al. , 2020 ) is a popular family of methods for optimizing black-box single- and multi-objective problems . Using observed samples , BO learns a surrogate model f̂ ( x ) , searches for new promising candidates based on an acquisition function built on f̂ ( x ) , and queries the quality of these candidates with the ground-truth black-box objective ( s ) . In multi-objective Bayesian optimization ( MOBO ) , most approaches leverage Expected Hypervolume Improvement ( EHVI ) as their acquisition function ( Zitzler et al. , 2003 ; Couckuyt et al. , 2014 ; Yang et al. , 2019 ) , since finding the Pareto frontier is equivalent to maximizing the hypervolume given a finite search space ( Fleischer , 2003 ) . There are methods ( Knowles , 2006 ; Ponweiser et al. , 2008 ; Paria et al.
, 2018 ) that use different acquisition functions like expected improvement ( Jones et al. , 1998 ) and Thompson sampling ( Thompson , 1933 ) . EHVI is computationally expensive : its cost increases exponentially with the number of objectives . To address this problem , qEHVI ( Daulton et al. , 2020 ) accelerates optimization by computing EHVI in parallel , and has become the state-of-the-art MOBO algorithm . In this paper , we leverage qEHVI as a candidate inner solver in our proposed LaMOO algorithm . Evolutionary algorithms ( EAs ) ( Deb et al. , 2002a ; Igel et al. , 2007a ; Zhang & Li , 2007 ; Beume et al. , 2007 ; Fang et al. , 2018 ) are also popular methods for MOO tasks . One category of MOO EAs ( Srinivas & Deb , 1994 ; Deb et al. , 2002a ; Deb & Jain , 2014 ) leverages Pareto dominance to simultaneously optimize all objectives . A second category ( e.g. , ( Zhang & Li , 2007 ) ) decomposes a multi-objective optimization problem into a number of single-objective sub-problems , converting a difficult MOO into several SOOs . Another category is quality-indicator-based methods , such as ( Beume et al. , 2007 ) and ( Igel et al. , 2007a ) , which scalarize the current Pareto frontier using quality indicators ( e.g. , HV ) and thereby transform a MOO into a SOO . New samples are generated by crossover and mutation operations from existing ones . However , the drawbacks of non-quality-indicator-based methods ( i.e. , the first two categories ) cannot be overlooked . Specifically , for MOO with many objectives , NSGA-II ( Deb et al. , 2002a ) easily gets stuck in dominance-resistant solutions ( Pang et al. , 2020 ) that are far from the true Pareto frontier , while MOEA/D performs better on many-objective problems but requires specifying weight vectors , which is its main obstacle for problems with an unknown Pareto front ( Deb & Jain , 2014 ) .
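Since HV is the scalarization that indicator-based methods (and EHVI-style acquisitions) build on, a concrete two-objective computation helps. The sweep below implements the definition given in the introduction, HV(P, R) = λM of the region dominated by P and bounded by R, for the minimization case; it assumes a non-dominated input set and is our own illustrative sketch, not the paper's implementation:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective non-dominated set `front` (minimization),
    bounded by the reference point `ref`; every point must be <= ref coordinate-wise."""
    pts = sorted(front)               # ascending f1 => descending f2 on a Pareto front
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        area += (ref[0] - f1) * (prev_f2 - f2)  # horizontal strip added by this point
        prev_f2 = f2
    return area

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 11.0
```

Adding a candidate point and measuring the increase in this quantity is exactly the (exponential-in-M, in general) hypervolume improvement that qEHVI estimates efficiently in the multi-objective case.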
In addition , A*-search-based algorithms have also been extended to MOO ( Stewart & White , 1991 ; Tung Tung & Lin Chew , 1992 ; De la Cruz et al. , 2005 ) . Quality Indicators . Besides hypervolume , there are several other quality indicators ( Van Veldhuizen & Lamont , 1998 ; Zitzler et al. , 2000 ; Bosman & Thierens , 2003 ) for evaluating sample quality , which can be used to scalarize a MOO into a SOO . The performance of a quality indicator can be evaluated along three metrics ( Deng et al. , 2007 ; Li et al. , 2014 ) : convergence ( closeness to the Pareto frontier ) , uniformity ( the extent to which the samples are uniformly distributed ) , and spread ( the extent of the obtained approximate Pareto frontier ) . Sec . B specifically illustrates the merits of each quality indicator . Hypervolume is the only metric we explored that can simultaneously evaluate convergence , uniformity , and spread without knowledge of the true Pareto frontier , although its computation may be expensive in many-objective problems . Therefore , throughout this work , we use HV to evaluate the optimization performance of different algorithms . 3 LEARNING SPACE PARTITIONS : A THEORETICAL UNDERSTANDING . Searching a high-dimensional space for the optimal solution of a function is in general a challenging problem , especially when the function ’ s properties are unknown to the search algorithm . The difficulty is mainly due to the curse of dimensionality : to adequately cover a d-dimensional space , in general an exponential number of samples is needed . For this reason , many works use a “ coarse-to-fine ” approach : partition the search space and then focus on promising regions . Traditionally , manually defined criteria are used , e.g. , axis-aligned partitions ( Munos , 2011b ) , Voronoi diagrams ( Kim et al. , 2020 ) , etc . Recently , ( Wang et al. , 2019 ; 2020 ; Yang et al.
, 2021 ) learn space partitions based on the data collected thus far , and show strong performance in NeurIPS black-box optimization challenges ( Sazanovich et al . ; Kim et al. ) . On the other hand , there is little quantitative understanding of space partitioning . In this paper , we first give a formal theoretical analysis of why learning plays an important role in space-partition approaches for SOO . Leveraging our understanding of how space partitioning works , we propose LaMOO , which empirically outperforms existing SoTA methods on multiple MOO benchmarks . 3.1 PROBLEM SETTING . Intuitively , learning space partitions will yield strong performance if the classifier can determine which regions are promising given few data points . We formalize this intuition below and show why it is better than fixed , manually defined criteria for space partitioning . Consider the following sequential decision task . We have N samples in a discrete subset S0 and there exists one sample x∗ that achieves the minimal value of a scalar function f . Note that f can encode any property we want , e.g. , membership in the Pareto optimal set . The goal is to construct a subset ST ⊆ S0 after T steps , so that ( 1 ) x∗ ∈ ST and ( 2 ) |ST | is as small as possible . More formally , we define the reward function r as the probability that we obtain x∗ by randomly sampling from the resulting subset ST : r := ( 1 / |ST | ) P ( x∗ ∈ ST ) ( 2 ) It is clear that 0 ≤ r ≤ 1 , and r = 1 means that we have already found the optimal sample x∗ . Here we use the discrete case for simplicity and leave the continuous case ( i.e. , partitioning a region Ω0 instead of a discrete set S0 ) to future work . Note that N could be large , so here we consider it infeasible to enumerate S0 to find x∗ ; however , sampling from S0 , as well as comparing the quality of sampled solutions , is allowed . An obvious baseline is to simply set ST := S0 , which yields rb = 1/N . Now the question is : can we do better ?
Here we seek help from the following oracle : Definition 1 ( ( α , η ) -Oracle ) . Given a subset S that contains x∗ , after taking k samples from S , the oracle can find a good subset Sgood with |Sgood| ≤ |S|/2 and P ( x∗ ∈ Sgood | x∗ ∈ S ) ≥ 1 − exp ( −k / ( η|S|α ) ) ( 3 ) Lemma 1 . The algorithm that uniformly draws k samples from S , picks the best and returns it is a ( 1 , 1 ) -oracle . See Appendix for proof . Note that a ( 1 , 1 ) -oracle is very weak , and is of little use in obtaining a higher reward r. We typically hope for an oracle with smaller α and η ( i.e. , both smaller than 1 ) . Intuitively , such oracles are more sample-efficient : with few samples , they can narrow down the region containing the optimal solution x∗ with high probability . Note that α < 1 corresponds to semi-parametric models . In these cases , the oracle has a generalization property : with substantially fewer samples than N ( i.e. , on the order of Nα ) , the oracle is able to put the optimal solution x∗ on the right side . In the extreme case when α = 0 ( or parametric models ) , whether we classify the optimal solution x∗ on the correct side depends only on the absolute number of samples collected in S , and is independent of its size . For example , if the function to be optimized is linear , then with d + 1 samples we can completely characterize the property of all |S| samples . Relation with cutting plane . Our setting can be regarded as a data-driven extension of cutting-plane methods ( Loganathan & Sherali , 1987 ; Vieira & Lisboa , 2019 ; Hinder , 2018 ) in optimization , in which a cutting plane is found at the current solution to reduce the search space . For example , if f is convex and its gradient ∇f ( x ) is available , then we can set Sgood := { x : ∇f ( x0 )⊤ ( x − x0 ) ≤ 0 , x ∈ S0 } , since for any x ∈ S0 \ Sgood , convexity gives f ( x ) ≥ f ( x0 ) + ∇f ( x0 )⊤ ( x − x0 ) > f ( x0 ) and thus x is not better than the current x0 .
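The cutting-plane construction of Sgood is concrete enough to run. A small sketch for the convex quadratic f(x) = ||x||², whose gradient is 2x (the function, points, and helper names are illustrative choices of ours):

```python
# Cutting-plane subset for the convex quadratic f(x) = ||x||^2, gradient 2x.
# Every point cut away (outside S_good) is provably no better than x0,
# by the first-order convexity inequality quoted in the text.
def grad(x):
    return [2.0 * xi for xi in x]

def in_S_good(x, x0):
    g = grad(x0)
    return sum(gi * (xi - x0i) for gi, xi, x0i in zip(g, x, x0)) <= 0.0

f = lambda x: sum(xi * xi for xi in x)
x0 = [1.0, 1.0]
S0 = [[0.0, 0.0], [2.0, 2.0], [-1.0, 0.5], [1.5, -2.0]]
S_good = [x for x in S0 if in_S_good(x, x0)]
# convexity guarantees the discarded points cannot improve on x0
assert all(f(x) >= f(x0) for x in S0 if x not in S_good)
print(S_good)
```

LaMOO's learned classifiers play the same role as this half-space, but are fit from samples instead of derived from a gradient, which is why no analytical form of the objectives is required.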
However , cutting-plane methods rely on certain function properties like convexity . In contrast , learned space partitions can leverage knowledge about the function form , combined with the samples observed so far , to better partition the space .
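A data-driven partition of this kind can be caricatured in a few lines: score past samples by how many other samples dominate them, label the less-dominated half "good", and fit a classifier whose decision boundary becomes the partition. The toy objectives and the one-pass perceptron below are stand-ins of ours; the actual classifier used by LaMOO may differ:

```python
import random

def dominates(fx, fy):
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

random.seed(0)
X = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(60)]
F = [(x1, (1 - x1) + x2) for x1, x2 in X]      # toy 2-objective evaluation (minimize both)
dom = [sum(dominates(F[j], F[i]) for j in range(len(X)) if j != i) for i in range(len(X))]
median = sorted(dom)[len(dom) // 2]
y = [1 if d <= median else -1 for d in dom]     # 1 = "good": dominated by few samples

# one-pass perceptron as a stand-in for the learned partition classifier
w, b = [0.0, 0.0], 0.0
for _ in range(50):
    for (x1, x2), yi in zip(X, y):
        if yi * (w[0] * x1 + w[1] * x2 + b) <= 0:
            w[0] += yi * x1; w[1] += yi * x2; b += yi

# future samples would be focused on the side the classifier calls "good"
good_region = [x for x in X if w[0] * x[0] + w[1] * x[1] + b > 0]
```

The key property mirrored here is that the boundary is learned purely from evaluated samples, with no gradient or analytical form of the objectives.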
This paper presents a multi-objective optimization framework based on learned space partitions, using Monte Carlo tree search and a newly proposed metric, the dominance number. Solid theoretical analysis of single-objective optimization (SOO) and observations on multi-objective optimization (MOO) for space partitioning are given. Extensive experiments are conducted on synthetic functions, a vehicle safety problem, the Nasbench201 neural architecture search problem, and molecular design. Comparative results against other baselines show the superiority and effectiveness of the proposed LaMOO.
SP:4085a28c03be5973332b8801f216918b78677483
Multi-Task Neural Processes
Neural processes have recently emerged as a class of powerful neural latent variable models that combine the strengths of neural networks and stochastic processes . As they can encode contextual data in the network ’ s function space , they offer a new way to model task relatedness in multi-task learning . To study this potential , we develop multi-task neural processes , a new variant of neural processes for multi-task learning . In particular , we propose to explore transferable knowledge from related tasks in the function space to provide an inductive bias for improving each individual task . To do so , we derive the function priors in a hierarchical Bayesian inference framework , which enables each task to incorporate the shared knowledge provided by related tasks into its context of the prediction function . Our multi-task neural processes methodologically expand the scope of vanilla neural processes and provide a new way of exploring task relatedness in function spaces for multi-task learning . The proposed multi-task neural processes are capable of learning multiple tasks with limited labeled data and in the presence of domain shift . We perform extensive experimental evaluations on several benchmarks for multi-task regression and classification . The results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning and their superior performance in multi-task classification and brain image segmentation . 1 INTRODUCTION . As deep neural networks are black-box function approximators , it is difficult to introduce prior domain or expert knowledge into a prediction function ( Jakkala , 2021 ) . In contrast , Gaussian processes ( Rasmussen , 2003 ) explicitly define distributions over functions and perform inference over these functions given some training examples . This enables reliable and flexible decision-making .
However , Gaussian processes can suffer from high computational complexity due to the manipulation of kernel matrices . Therefore , there has been continuous interest in bringing together neural networks and Gaussian processes ( Damianou & Lawrence , 2013 ; Wilson et al. , 2016 ; Garnelo et al. , 2018a ; Jakkala , 2021 ) into so-called neural processes . Neural processes ( Garnelo et al. , 2018b ) combine the computational efficiency of neural networks with the uncertainty quantification of stochastic processes . They are a class of neural latent variable models , which deploy a deep neural network to encode context observations into a latent stochastic variable that models the prediction function . Neural processes provide an elegant formalism to efficiently and effectively incorporate multiple datasets into learning distributions over functions . This formalism is also promising for multi-task learning , where individual tasks can be improved by transferring useful contextual knowledge among related tasks . Their capability of estimating uncertainty over predictions also makes them well-suited for multi-task learning with limited data , where each task has only a few training samples . However , neural processes rely on the implicit assumption that the context and target sets are from the same distribution and can be aggregated by a simple average pooling operation ( Kim et al. , 2019 ; Volpp et al. , 2020 ) . This makes it non-trivial to directly apply neural processes to modeling multiple heterogeneous tasks from different domains , where the context data of different tasks come from distinct distributions ( Long et al. , 2017 ) . 1 Our code is available at https://anonymous.4open.science/r/Multi-Task-Neural-Processes/ . In this paper , we develop multi-task neural processes ( MTNPs ) , a methodological extension of neural processes to multi-task learning that fills this gap .
Particularly , we propose to explore task relatedness in the function space by specifying the function priors in a hierarchical Bayesian inference framework . The shared knowledge from related tasks is incorporated into the context of each individual task , where it serves as the inductive bias for making predictions in that task . The hierarchical architecture allows us to design expressive data-dependent priors , which enables the model to capture the complex task relationships in multi-task learning . By leveraging hierarchical modeling , multi-task neural processes are capable of exploring shared knowledge among related tasks in a principled way by specifying the function prior . We validate the effectiveness of the proposed multi-task neural processes by extensive experiments in both multi-task classification and regression . The results demonstrate that multi-task neural processes can effectively capture task relatedness in the function space and consistently improve the performance of each individual task , especially in the limited-data regime . 2 PRELIMINARIES : NEURAL PROCESSES . In this section , we briefly review vanilla neural processes ( Garnelo et al. , 2018b ) , based on which we derive our multi-task neural processes . Suppose we are given a dataset composed of training samples and corresponding labels . In order to better reflect the desired model behaviour at test time ( Garnelo et al. , 2018b ) , the training data is split into a context set D = ( X , Y ) and a target set D∗ = ( X∗ , Y∗ ) , where X = { x1 , · · · , xn } is a subset of the training data , Y = { y1 , · · · , yn } the corresponding set of labels , and n the size of the context set . To ensure both sets have the same data distribution , the context set is split from the target set . Now , given the context set ( X , Y ) , we would like to estimate a function that can predict the labels Y∗ for target samples X∗ . In general , we define a stochastic process by a random function f : X → Y .
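The context/target split described above can be sketched directly. We follow the common neural-process convention that the context is a random subset of the target set (so both share one distribution); the function and variable names are ours:

```python
import random

def split_context_target(xs, ys, n_context, seed=0):
    """Randomly designate n_context points as the context set D = (X, Y);
    the full set serves as the target set D* = (X*, Y*), so the context is
    a subset of the target and both share the same data distribution."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(xs)), n_context)
    context = ([xs[i] for i in idx], [ys[i] for i in idx])
    target = (xs, ys)
    return context, target

xs = [0.1 * i for i in range(20)]
ys = [x * x for x in xs]
(ctx_x, ctx_y), (tgt_x, tgt_y) = split_context_target(xs, ys, n_context=5)
```

At test time the same interface applies: the model conditions on whatever context points are observed and predicts labels at the target inputs.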
Given the context set , we define the joint distribution over the function values { f ( x1 ) , · · · , f ( xn ) } , which in Gaussian processes is a multivariate Gaussian distribution parameterized by a kernel function . The rationale of neural processes is to extract knowledge from the context set to specify the prior over the prediction function . Instead of a kernel function , neural processes adopt a deep neural network to define the prior distribution . Specifically , the model introduces a latent variable z to account for uncertainty in the predictions of Y∗ . The observed context set is encoded into the latent variable , which is conditioned on the context set , i.e. , it follows the prior distribution p ( z|X , Y ) . The latent variable z is a high-dimensional random vector parameterizing the stochastic process by f ( X∗ ) = g ( X∗ , z ) . The function g ( · , · ) is a learnable decoder function , which is also implemented by a neural network . Thus , the neural process model can be formulated as follows : p ( Y∗|X∗ , X , Y ) = ∫ p ( Y∗|X∗ , z ) p ( z|X , Y ) dz . ( 1 ) The graphical model for neural processes is shown in Figure 1 ( a ) . The neural process model is optimized using amortized variational inference . Let q ( z|X∗ , Y∗ ) be a variational posterior of the latent variable z . The evidence lower bound ( ELBO ) for neural processes is given as follows : log p ( Y∗|X∗ , X , Y ) ≥ Eq ( z|X∗ , Y∗ ) [ log p ( Y∗|X∗ , z ) ] − DKL [ q ( z|X∗ , Y∗ ) || p ( z|X , Y ) ] . ( 2 ) In neural processes , the function space defined by deep neural networks allows the model to extract deep features while retaining a probabilistic interpretation ( Jakkala , 2021 ) . In multi-task learning , different tasks can often come from different domains and have their own specific data distributions ( Lawrence & Platt , 2004 ; Long et al. , 2017 ) .
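The marginalization over z in Eq. (1) is generally intractable and is approximated by sampling. A toy Monte-Carlo sketch with a scalar latent z, a hand-picked linear decoder g(x, z) = z·x, and Gaussian observation noise (all three are illustrative assumptions of ours, not the paper's architecture):

```python
import math, random

def gauss_pdf(v, mu, sigma):
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mc_predictive(x_star, y_star, mu_z, sigma_z, noise=0.1, n=20000, seed=0):
    """Monte-Carlo estimate of p(y*|x*, X, Y) = E_{z ~ p(z|X,Y)}[ p(y*|x*, z) ]
    with decoder g(x, z) = z * x, mimicking the integral in Eq. (1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu_z, sigma_z)          # stand-in for the context-conditioned prior
        total += gauss_pdf(y_star, z * x_star, noise)
    return total / n

# with z ~ N(2, 0.5), observing y* = 2.0 at x* = 1.0 is far more likely than y* = 5.0
p_hit = mc_predictive(1.0, 2.0, mu_z=2.0, sigma_z=0.5)
p_miss = mc_predictive(1.0, 5.0, mu_z=2.0, sigma_z=0.5)
assert p_hit > p_miss
```

Training replaces this prior sampling with samples from the variational posterior q(z|X*, Y*) and maximizes the ELBO in Eq. (2) instead of the exact marginal.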
Due to the complex data structure of multi-task learning , it is not straightforward to explore task relatedness in such function spaces . In this paper , we aim to extend the methodology of neural processes to multi-task learning scenarios , learning the knowledge shared among tasks to improve individual tasks . 3 MULTI-TASK NEURAL PROCESSES . The common setup of multi-task learning is that there are multiple related tasks , and we would like to improve the learning of each individual one by sharing information across the different tasks ( Williams et al. , 2007 ) . Multi-task learning has been studied under different settings ( Requeima et al. , 2019 ; Williams et al. , 2007 ; Lawrence & Platt , 2004 ; Yu et al. , 2005 ; Long et al. , 2017 ) . In this paper , we tackle the multi-input multi-output setting , where each task has a different input distribution while all tasks share the same target space ( Lawrence & Platt , 2004 ; Yu et al. , 2005 ; Long et al. , 2017 ; Zhang et al. , 2020 ) . The aim is to improve the overall performance of all tasks simultaneously , in contrast to sequential multi-task learning ( Requeima et al. , 2019 ; Garnelo et al. , 2018a ) . This is a challenging scenario due to the domain shift between tasks , which makes it sub-optimal to directly apply neural processes by incorporating the data from other related tasks into the context of each individual task . 3.1 HIERARCHICAL CONTEXT MODELING . Multi-task learning considers the estimation of random functions fl , l = 1 , ... , L for each of the L related tasks . Each task l has its own training data , which is split into a context set Dl = ( Xl , Yl ) and a target set D∗l = ( X∗l , Y∗l ) . X∗l ∈ R^ { n∗l × d } and Xl ∈ R^ { nl × d } are inputs , while Y∗l ∈ R^ { n∗l × C } and Yl ∈ R^ { nl × C } are outputs . d and C are the sizes of the input space and output space , respectively , and n∗l and nl are the sizes of the target and context set for the l-th task , respectively .
Thus , we obtain the task-specific latent variables fl = fl ( Xl ) ∈ Rnl×C . We use { Dl } Ll=1 to denote all context sets in the dataset , which for brevity we represent as { Dl } and likewise will do for other sets . Formulated this way the goal of multi-task learning is to predict { Y∗l } for given { X∗l } simultaneously with the assistance information { Dl } from all tasks . To this end , we construct a joint prediction distribution with respect to the latent random functions { fl } as : p ( { Y∗l } | { X∗l } , { Dl } ) = L∏ l=1 p ( Y∗l |X∗l , { Dl } ) = L∏ l=1 ∫ p ( Y∗l |X∗l , fl ) p ( fl|M ) dfl . ( 3 ) Here , to enable shared knowledge to be transferred among tasks , we introduce a global variable M which works as a container to collect the useful information from { Dl } of all tasks . In contrast to neural processes for single tasks , the global variable M provides the contextual information from all tasks for each individual task . The concrete formation of M depends on the learning scenarios . For regression tasks , M ∈ RL∗d and each row corresponds to one task , which is the average of all feature vectors from a task . For classification tasks , M ∈ RL∗C∗d where each vector Ml , c is the average of all features of each category from the corresponding task . Similar to Gaussian processes , we assume that the function value is Y∗l = fl ( X ∗ l ) + , where ∼ N ( 0 , σ2 ) is the observation noise . For regression tasks , we can define the predictive likelihood on the target set as p ( Y∗l |X∗l , fl ) = N ( Y∗l |fl ( X∗l ) , σ2 ) . For classification tasks , we use log p ( Y∗l |X∗l , fl ) = ∑n∗l i=1 y ∗ l , ilog ( fl ( x ∗ l , i ) ) as the log-likelihood function , where x ∗ l , i is the i-th target sample from the l-th task . In order to combine Gaussian processes and neural networks in the context of multi-task learning , we define p ( fl|M ) in ( 3 ) as a deep neural network in place of a Gaussian distribution parameterized by a kernel function . 
To be more specific , we assume that fl is parameterized by a random variable ψl by defining fl ( X ) = Xψ > l . We specify a data dependent prior by conditioning ψl on the global variable M. In this way , we incorporate the transferable knowledge into the learning of the prediction function of the current tasks . Thus , the predictive distribution for the l-th task over its target set can be formulated as follows : p ( Y∗l |X∗l , { Dl } ) = ∫ p ( Y∗l |X∗l , ψl ) pθ ( ψl|M ) dψl . ( 4 ) In effect , ψl denotes the parameters of classifier for classification tasks or regressors for regression tasks . Particularly , the introduced latent variable ψl ∈ RC×d denotes the task specific classifier , where d is the dimension of the input feature and C is the number of classes in the dataset . As done in ( Requeima et al. , 2019 ) , we generate each column of ψl independently from the context samples of the corresponding class . In our case , each column of ψl encodes the context information of its class from all tasks pθ ( ψl|M ) = ∏C c=1 pθ ( ψl , c|Mc ) . Directly aggregating M as done in neural processes for single tasks is not applicable for multi-task learning due to the distribution shift between tasks . The data from related tasks should be processed and adapted to the current task as the contextual information . To this end , we introduce a higherlevel latent variable αl to extract the shared knowledge from M , which is conditioned on the data Dl of each task . This results in a hierarchical Bayesian modeling of functions in the neural process : pθ ( ψl|M ) = ∫ pθ1 ( ψl|αl , M ) pθ2 ( αl|Dl ) dαl , ( 5 ) whereαl is the latent variable to control the access to shared knowledge for each task , which is used to explore the relevant knowledge to the task l. pθ1 ( ψl|αl , M ) and pθ2 ( αl|Dl ) are prior distributions of the latent variable ψl and αl , respectively , which are parameterized with neural networks . 
To be specific , we define pθ1 ( ψl|αl , M ) = N ( ψl|µ ( ml ) , Σ ( ml ) ) . Here ml contains the relevant knowledge to the task l , which is adapted from the global variable M by a deterministic function ml = h ( αl , M ) , where h ( · ) is a learnable function implemented with a neural network . By substituting ( 5 ) into ( 4 ) , we obtain the model of multi-task neural processes as follows : p ( { Y∗l } | { X∗l } , { Dl } ) = L∏ l=1 ∫ ∫ p ( Y∗l |X∗l , ψl ) pθ1 ( ψl|αl , M ) pθ2 ( αl|Dl ) dψldαl . ( 6 ) The designed hierarchical context modeling provides a principled way to explore task relatedness in the function space , which allows task-specific function variables to leverage the shared knowledge from related tasks . We provide theoretical proof in Appendix to show that the proposed multi-task neural processes are a valid stochastic processes , which completes the theory of multi-task neural processes . The graphical model of the multi-task neural processes is shown in Figure 1 ( b ) .
This paper presents a multi-task neural processes approach in which function priors are derived in a hierarchical Bayesian inference framework, incorporating shared knowledge into the context of each task's prediction function. The authors introduce a higher-level latent variable, derived from the context data of related tasks, to control the sharing of common knowledge between tasks. Previous works used context data only from a task itself to generate task-specific latent variables, whereas this work also uses data from the other tasks. The shared knowledge from all tasks, added to the context of each individual task, acts as an inductive bias for predictions. The experimental results show that the proposed method performs better than the compared methods.
Multi-Task Neural Processes
Neural processes have recently emerged as a class of powerful neural latent variable models that combine the strengths of neural networks and stochastic processes. As they can encode contextual data in the network's function space, they offer a new way to model task relatedness in multi-task learning. To study this potential, we develop multi-task neural processes, a new variant of neural processes for multi-task learning. In particular, we propose to explore transferable knowledge from related tasks in the function space to provide an inductive bias for improving each individual task. To do so, we derive the function priors in a hierarchical Bayesian inference framework, which enables each task to incorporate the shared knowledge provided by related tasks into the context of its prediction function. Our multi-task neural processes methodologically expand the scope of vanilla neural processes and provide a new way of exploring task relatedness in function spaces for multi-task learning. The proposed multi-task neural processes are capable of learning multiple tasks with limited labeled data and in the presence of domain shift. We perform extensive experimental evaluations on several benchmarks for multi-task regression and classification. The results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks and their superior performance in multi-task classification and brain image segmentation.

1 INTRODUCTION.

As deep neural networks are black-box function approximators, it is difficult to introduce prior domain or expert knowledge into a prediction function (Jakkala, 2021). In contrast, Gaussian processes (Rasmussen, 2003) explicitly define distributions over functions and perform inference over these functions given some training examples. This enables reliable and flexible decision-making.
However, Gaussian processes can suffer from high computational complexity due to the manipulation of kernel matrices. Therefore, there has been continuous interest in bringing together neural networks and Gaussian processes (Damianou & Lawrence, 2013; Wilson et al., 2016; Garnelo et al., 2018a; Jakkala, 2021) into so-called neural processes. Neural processes (Garnelo et al., 2018b) combine the computational efficiency of neural networks with the uncertainty quantification of stochastic processes. They are a class of neural latent variable models, which deploy a deep neural network to encode context observations into a latent stochastic variable that models the prediction function. Neural processes provide an elegant formalism for efficiently and effectively incorporating multiple datasets into learning distributions over functions. This formalism is also promising for multi-task learning, where individual tasks can be improved by transferring useful contextual knowledge among related tasks. Their capability of estimating uncertainty over predictions also makes them well suited for multi-task learning with limited data, where each task has only a few training samples. However, neural processes rely on the implicit assumption that the context and target sets are from the same distribution and can be aggregated by a simple average pooling operation (Kim et al., 2019; Volpp et al., 2020). This makes it non-trivial to directly apply neural processes to modeling multiple heterogeneous tasks from different domains, where the context data of different tasks come from distinct distributions (Long et al., 2017). Our code is available at https://anonymous.4open.science/r/Multi-Task-Neural-Processes/. In this paper, we develop multi-task neural processes (MTNPs), a methodological extension of neural processes for multi-task learning, which fills the theoretical gap between neural processes and multi-task learning.
Particularly, we propose to explore task relatedness in the function space by specifying the function priors in a hierarchical Bayesian inference framework. The shared knowledge from related tasks is incorporated into the context of each individual task, where it serves as an inductive bias for making predictions in that task. The hierarchical architecture allows us to design expressive data-dependent priors, which enables the model to capture the complex task relationships in multi-task learning. By leveraging hierarchical modeling, multi-task neural processes are capable of exploring shared knowledge among related tasks in a principled way by specifying the function prior. We validate the effectiveness of the proposed multi-task neural processes by extensive experiments in both multi-task classification and regression. The results demonstrate that multi-task neural processes can effectively capture task relatedness in the function space and consistently improve the performance of each individual task, especially in the limited-data regime.

2 PRELIMINARIES: NEURAL PROCESSES.

In this section, we briefly review vanilla neural processes (Garnelo et al., 2018b), on which we base our multi-task neural processes. Let a dataset be given, composed of training samples and corresponding labels. In order to better reflect the desired model behaviour at test time (Garnelo et al., 2018b), the training data is split into a context set D = (X, Y) and a target set D* = (X*, Y*), where X = {x_1, ..., x_n} is a subset of the training data, Y = {y_1, ..., y_n} the corresponding set of labels, and n the size of the context set. To ensure that both sets follow the same data distribution, the context set is split off from the target set. Now, given the context set (X, Y), we would like to estimate a function that can predict the labels Y* for the target samples X*. In general, we define a stochastic process by a random function f: X → Y.
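The context/target split described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the split ratio and seed are arbitrary, and note that some neural-process implementations let the target set include the context points, whereas here the two sets are disjoint.

```python
import random

def split_context_target(xs, ys, n_context, seed=0):
    """Split a labeled dataset into a context set and a target set.

    Following the neural-process convention, the context set is a random
    subset of the training data; the remaining points form the target set,
    so both sets are drawn from the same data distribution.
    """
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    rng.shuffle(idx)
    ctx, tgt = idx[:n_context], idx[n_context:]
    X = [xs[i] for i in ctx]
    Y = [ys[i] for i in ctx]
    X_star = [xs[i] for i in tgt]
    Y_star = [ys[i] for i in tgt]
    return (X, Y), (X_star, Y_star)
```

For instance, splitting ten labeled points with `n_context=4` yields a context set of four pairs and a target set of six.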
Given the context set, we define the joint distribution over the function values {f(x_1), ..., f(x_n)}, which in Gaussian processes is a multivariate Gaussian distribution parameterized by a kernel function. The rationale of neural processes is to extract knowledge from the context set to specify the prior over the prediction function. Instead of a kernel function, neural processes adopt a deep neural network to define the prior distribution. Specifically, the model introduces a latent variable z to account for uncertainty in the predictions of Y*. The observed context set is encoded into the latent variable, which is conditioned on the context set, i.e., it follows the prior distribution p(z|X, Y). The latent variable z is a high-dimensional random vector parameterizing the stochastic process by f(X*) = g(X*, z). The function g(·, ·) is a learnable decoder function, which is also implemented by a neural network. Thus, the neural process model can be formulated as follows:

$$p(Y^* \mid X^*, X, Y) = \int p(Y^* \mid X^*, z)\, p(z \mid X, Y)\, dz. \quad (1)$$

The graphical model for neural processes is shown in Figure 1(a). The neural process model is optimized using amortized variational inference. Let q(z|X*, Y*) be a variational posterior of the latent variable z. The evidence lower bound (ELBO) for neural processes is then:

$$\log p(Y^* \mid X^*, X, Y) \geq \mathbb{E}_{q(z \mid X^*, Y^*)}\left[\log p(Y^* \mid X^*, z)\right] - D_{\mathrm{KL}}\left[q(z \mid X^*, Y^*)\,\|\,p(z \mid X, Y)\right]. \quad (2)$$

In neural processes, the function space defined by deep neural networks allows the model to extract deep features while retaining a probabilistic interpretation (Jakkala, 2021). In multi-task learning, different tasks can come from different domains and have their own data distributions (Lawrence & Platt, 2004; Long et al., 2017).
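The generative path of Eq. (1) — encode the context into p(z|X, Y), sample z, then decode predictions with g(X*, z) — can be sketched as below. This is a toy stand-in, not the paper's model: the fixed all-ones matrix replaces the learned encoder MLP, and the decoder is an arbitrary illustrative function.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, Y):
    """Map the context set to the parameters (mu, sigma) of p(z | X, Y).
    A real neural process applies an MLP to each (x, y) pair and mean-pools;
    here mean-pooling plus a fixed linear map stands in for the encoder."""
    pairs = np.concatenate([X, Y], axis=1)   # (n, d_x + d_y)
    r = pairs.mean(axis=0)                   # permutation-invariant aggregation
    W = np.ones((r.size, 2))                 # stand-in for learned weights
    h = r @ W
    return h[0], np.exp(h[1])                # mu, sigma > 0

def decode(X_star, z):
    """g(X*, z): predict function values from target inputs and a z sample."""
    return X_star.sum(axis=1) * z            # toy decoder

# Monte Carlo view of p(Y* | X*, X, Y) = ∫ p(Y* | X*, z) p(z | X, Y) dz:
X, Y = rng.normal(size=(5, 2)), rng.normal(size=(5, 1))
mu, sigma = encode(X, Y)
zs = rng.normal(mu, sigma, size=100)         # samples from p(z | X, Y)
preds = np.stack([decode(X, z) for z in zs]) # one prediction per z sample
mean_pred = preds.mean(axis=0)               # predictive mean over z
```

The spread of `preds` across z samples is exactly the functional uncertainty that the latent variable z is introduced to capture.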
Due to the complex data structure of multi-task learning, it is not straightforward to explore task relatedness in such function spaces. In this paper, we aim to extend the methodology of neural processes to multi-task learning, so as to learn the knowledge shared among tasks and thereby improve individual tasks.

3 MULTI-TASK NEURAL PROCESSES.

The common setup of multi-task learning is that there are multiple related tasks, and we would like to improve the learning of each individual one by sharing information across the different tasks (Williams et al., 2007). Multi-task learning has been studied under different settings (Requeima et al., 2019; Williams et al., 2007; Lawrence & Platt, 2004; Yu et al., 2005; Long et al., 2017). In this paper, we tackle the multi-input multi-output setting, where each task has a different input distribution while all tasks share the same target space (Lawrence & Platt, 2004; Yu et al., 2005; Long et al., 2017; Zhang et al., 2020). The goal is to improve the overall performance of all tasks simultaneously, in contrast to sequential multi-task learning (Requeima et al., 2019; Garnelo et al., 2018a). This is a challenging scenario due to the domain shift between tasks, which makes it sub-optimal to directly apply neural processes by simply incorporating the data from other related tasks into the context of each individual task.

3.1 HIERARCHICAL CONTEXT MODELING.

Multi-task learning considers the estimation of random functions f_l, l = 1, ..., L, one for each of the L related tasks. Each task l has its own training data, which is split into a context set D_l = (X_l, Y_l) and a target set D*_l = (X*_l, Y*_l). X*_l ∈ R^{n*_l × d} and X_l ∈ R^{n_l × d} are inputs, while Y*_l ∈ R^{n*_l × C} and Y_l ∈ R^{n_l × C} are outputs. Here d and C are the dimensions of the input space and output space, respectively, and n*_l and n_l are the sizes of the target and context sets for the l-th task.
Thus, we obtain the task-specific latent variables f_l = f_l(X_l) ∈ R^{n_l × C}. We use {D_l}_{l=1}^{L} to denote all context sets in the dataset, which for brevity we write as {D_l}, and likewise for the other sets. Formulated this way, the goal of multi-task learning is to predict {Y*_l} for given {X*_l} simultaneously, with the assistance of the information {D_l} from all tasks. To this end, we construct a joint prediction distribution with respect to the latent random functions {f_l}:

$$p(\{Y^*_l\} \mid \{X^*_l\}, \{D_l\}) = \prod_{l=1}^{L} p(Y^*_l \mid X^*_l, \{D_l\}) = \prod_{l=1}^{L} \int p(Y^*_l \mid X^*_l, f_l)\, p(f_l \mid M)\, df_l. \quad (3)$$

Here, to enable shared knowledge to be transferred among tasks, we introduce a global variable M, which works as a container to collect the useful information from the context sets {D_l} of all tasks. In contrast to neural processes for single tasks, the global variable M provides each individual task with contextual information from all tasks. The concrete form of M depends on the learning scenario. For regression tasks, M ∈ R^{L × d}, where each row corresponds to one task and is the average of all feature vectors from that task. For classification tasks, M ∈ R^{L × C × d}, where each vector M_{l,c} is the average of all features of category c from task l. Similar to Gaussian processes, we assume that the function value is Y*_l = f_l(X*_l) + ε, where ε ∼ N(0, σ²) is the observation noise. For regression tasks, we can then define the predictive likelihood on the target set as p(Y*_l | X*_l, f_l) = N(Y*_l | f_l(X*_l), σ²). For classification tasks, we use log p(Y*_l | X*_l, f_l) = Σ_{i=1}^{n*_l} y*_{l,i} log f_l(x*_{l,i}) as the log-likelihood function, where x*_{l,i} is the i-th target sample from the l-th task. In order to combine Gaussian processes and neural networks in the context of multi-task learning, we define p(f_l | M) in (3) with a deep neural network in place of a Gaussian distribution parameterized by a kernel function.
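The construction of the global variable M for classification can be sketched as follows: per task and per class, M stores the mean feature vector, giving the stated shape L × C × d. The function name and data layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_global_context(features, labels, num_classes):
    """Build the global variable M for classification tasks.

    features[l] is an (n_l, d) array of context features for task l;
    labels[l] is an (n_l,) integer array of class labels.
    M[l, c] is the mean of the class-c features from task l, so
    M has shape (L, C, d); rows for absent classes stay zero.
    """
    L = len(features)
    d = features[0].shape[1]
    M = np.zeros((L, num_classes, d))
    for l in range(L):
        for c in range(num_classes):
            mask = labels[l] == c
            if mask.any():
                M[l, c] = features[l][mask].mean(axis=0)
    return M
```

The regression variant in the text is the degenerate case of this: one mean vector per task instead of one per task-class pair, giving M ∈ R^{L × d}.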
To be more specific, we assume that f_l is parameterized by a random variable ψ_l by defining f_l(X) = Xψ_l^⊤. We specify a data-dependent prior by conditioning ψ_l on the global variable M. In this way, we incorporate the transferable knowledge into the learning of the prediction function of the current task. Thus, the predictive distribution for the l-th task over its target set can be formulated as follows:

$$p(Y^*_l \mid X^*_l, \{D_l\}) = \int p(Y^*_l \mid X^*_l, \psi_l)\, p_\theta(\psi_l \mid M)\, d\psi_l. \quad (4)$$

In effect, ψ_l denotes the parameters of the classifier for classification tasks or of the regressor for regression tasks. In particular, the latent variable ψ_l ∈ R^{C × d} denotes the task-specific classifier, where d is the dimension of the input features and C is the number of classes in the dataset. As done in (Requeima et al., 2019), we generate each column of ψ_l independently from the context samples of the corresponding class. In our case, each column of ψ_l encodes the context information of its class from all tasks: p_θ(ψ_l | M) = Π_{c=1}^{C} p_θ(ψ_{l,c} | M_c). Directly aggregating M, as done in neural processes for single tasks, is not applicable to multi-task learning due to the distribution shift between tasks. The data from related tasks should be processed and adapted to the current task before serving as its contextual information. To this end, we introduce a higher-level latent variable α_l to extract the shared knowledge from M, which is conditioned on the data D_l of each task. This results in a hierarchical Bayesian model of functions in the neural process:

$$p_\theta(\psi_l \mid M) = \int p_{\theta_1}(\psi_l \mid \alpha_l, M)\, p_{\theta_2}(\alpha_l \mid D_l)\, d\alpha_l, \quad (5)$$

where α_l is the latent variable that controls access to the shared knowledge for each task and is used to extract the knowledge relevant to task l. p_{θ_1}(ψ_l | α_l, M) and p_{θ_2}(α_l | D_l) are prior distributions over the latent variables ψ_l and α_l, respectively, both parameterized with neural networks.
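A sampling pass through the hierarchical prior of Eq. (5) can be sketched as follows for the regression case (M of shape L × d). Everything here is a hedged stand-in: the softmax-attention form of h(α_l, M), the identity μ(m_l), and the constant Σ(m_l) are illustrative choices, since the paper only specifies that these are learnable networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_psi(M, alpha_l, n_samples=1):
    """Draw psi_l ~ p(psi_l | alpha_l, M) = N(mu(m_l), Sigma(m_l)).

    m_l = h(alpha_l, M) adapts the global context M (shape (L, d)) to
    task l; here h is attention over tasks with weights from alpha_l.
    mu and Sigma are stand-ins for small learned networks."""
    scores = M @ alpha_l                 # (L,): relevance of each task to task l
    m_l = softmax(scores) @ M            # (d,): task-adapted context vector
    mu = m_l                             # stand-in for mu(m_l)
    sigma = np.full_like(m_l, 0.1)       # stand-in for diagonal Sigma(m_l)
    return rng.normal(mu, sigma, size=(n_samples, m_l.size))
```

With a strongly peaked α_l, the attention collapses onto one task's row of M, so ψ_l concentrates around that task's context; a flat α_l blends all tasks, which is the knob the higher-level latent variable provides.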
To be specific, we define p_{θ_1}(ψ_l | α_l, M) = N(ψ_l | μ(m_l), Σ(m_l)). Here m_l contains the knowledge relevant to task l, which is adapted from the global variable M by a deterministic function m_l = h(α_l, M), where h(·, ·) is a learnable function implemented with a neural network. By substituting (5) into (4), we obtain the model of multi-task neural processes:

$$p(\{Y^*_l\} \mid \{X^*_l\}, \{D_l\}) = \prod_{l=1}^{L} \iint p(Y^*_l \mid X^*_l, \psi_l)\, p_{\theta_1}(\psi_l \mid \alpha_l, M)\, p_{\theta_2}(\alpha_l \mid D_l)\, d\psi_l\, d\alpha_l. \quad (6)$$

The designed hierarchical context modeling provides a principled way to explore task relatedness in the function space, allowing task-specific function variables to leverage the shared knowledge from related tasks. We provide a theoretical proof in the Appendix showing that the proposed multi-task neural processes are a valid stochastic process, which completes the theory of multi-task neural processes. The graphical model of the multi-task neural processes is shown in Figure 1(b).
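The double integral in Eq. (6) is intractable in general, so per-task predictive estimates are naturally formed by nested Monte Carlo sampling of α_l and ψ_l. The sketch below shows this estimator for the regression likelihood (up to an additive constant); the sampling callbacks and the noise level are hypothetical placeholders for the learned prior networks.

```python
import numpy as np

def log_predictive_mc(X_star, Y_star, sample_alpha, sample_psi_given, n_mc=50):
    """Monte Carlo estimate of log p(Y*_l | X*_l, {D_l}) from Eq. (6).

    Each draw samples alpha_l ~ p(alpha_l | D_l) and then
    psi_l ~ p(psi_l | alpha_l, M); the Gaussian log-likelihood of the
    targets under f_l(X*) = X* psi_l^T is averaged over the draws."""
    sigma = 0.5                                  # assumed observation noise
    log_liks = []
    for _ in range(n_mc):
        alpha = sample_alpha()                   # alpha_l ~ p(alpha_l | D_l)
        psi = sample_psi_given(alpha)            # psi_l ~ p(psi_l | alpha_l, M)
        mean = X_star @ psi.T                    # f_l(X*) = X* psi_l^T
        resid = Y_star - mean
        log_liks.append(-0.5 * np.sum(resid**2) / sigma**2)
    log_liks = np.array(log_liks)
    # log-mean-exp of the per-sample log-likelihoods, computed stably
    return np.log(np.mean(np.exp(log_liks - log_liks.max()))) + log_liks.max()
```

Averaging likelihoods (rather than log-likelihoods) before taking the log matches the outer integrals of Eq. (6); the log-mean-exp form just avoids numerical underflow when the per-sample log-likelihoods are very negative.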
The paper proposes a multi-task learning model with neural processes. The model is based on a hierarchical construction whereby each task is conditioned on global and local information. The paper derives a “hierarchical” ELBO for this model that is evaluated with MC sampling. Experiments are presented to validate the proposed model.
Neural processes have recently emerged as a class of powerful neural latent variable models that combine the strengths of neural networks and stochastic processes . As they can encode contextual data in the network ’ s function space , they offer a new way to model task relatedness in multi-task learning . To study its potential , we develop multi-task neural processes , a new variant of neural processes for multi-task learning . In particular , we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task . To do so , we derive the function priors in a hierarchical Bayesian inference framework , which enables each task to incorporate the shared knowledge provided by related tasks into its context of the prediction function . Our multi-task neural processes methodologically expand the scope of vanilla neural processes and provide a new way of exploring task relatedness in function spaces for multi-task learning . The proposed multi-task neural processes are capable of learning multiple tasks with limited labeled data and in the presence of domain shift . We perform extensive experimental evaluations on several benchmarks for the multi-task regression and classification tasks . The results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning and superior performance in multi-task classification and brain image segmentation1 . 1 INTRODUCTION . As deep neural networks are black-box function approximations , it is difficult to introduce prior domain or expert knowledge into a prediction function ( Jakkala , 2021 ) . In contrast , Gaussian processes ( Rasmussen , 2003 ) explicitly define distributions over functions and perform inference over these functions given some training examples . This enables reliable and flexible decisionmaking . 
However , Gaussian processes can suffer from high computational complexity due to the manipulation of kernel matrices . Therefore , there has been continuous interest in bringing together neural networks and Gaussian processes ( Damianou & Lawrence , 2013 ; Wilson et al. , 2016 ; Garnelo et al. , 2018a ; Jakkala , 2021 ) into so-called neural processes . Neural processes ( Garnelo et al. , 2018b ) combine the computational efficiency of neural networks with the uncertainty quantification of stochastic processes . They are a class of neural latent variables model , which deploy a deep neural network to encode context observations into a latent stochastic variable to model prediction functions . Neural processes provide an elegant formalism to efficiently and effectively incorporate multiple datasets into learning distributions over functions . This formalism is also promising in multi-task learning to improve individual tasks by transferring useful contextual knowledge among related tasks . Their capability of estimating uncertainty over predictions also makes them well-suited for multi-task learning with limited data , where each task has only a few training samples . However , neural processes rely on the implicit assumption that the context and target sets are from the same distribution and can be aggregated by a simple average pooling operation ( Kim et al. , 2019 ; Volpp et al. , 2020 ) . This makes it non-trivial to directly apply neural processes to modeling multiple heterogeneous tasks from different domains , where the context data of different tasks are from distinctive distributions ( Long et al. , 2017 ) . 1 Our code is available in https : //anonymous.4open.science/r/Multi-Task-Neural-Processes/ . In this paper , we develop multi-task neural processes ( MTNPs ) , a methodological extension of neural processes for multi-task learning , which fills the theoretical gap of neural processes for multi-task learning . 
Particularly , we propose to explore task relatedness in the function space by specifying the function priors in a hierarchical Bayesian inference framework . The shared knowledge from related tasks is incorporated into the context of each individual task , which serves as the inductive bias for making predictions in this task . The hierarchical architecture allows us to design expressive datadependent priors . This enables the model to capture the complex task relationships in multitask learning . By leveraging hierarchical modeling , multi-task neural processes are capable of exploring shared knowledge among related tasks in a principled way by specifying the function prior . We validate the effectiveness of the proposed multi-task neural processes by extensive experiments in both multi-task classification and regression . The results demonstrate that multi-task neural processes can effectively capture task relatedness in the function space and consistently improve the performance of each individual task , especially in the limited data regime . 2 PRELIMINARIES : NEURAL PROCESSES . In this section , we briefly review vanilla neural processes ( Garnelo et al. , 2018b ) based on which we derive our multi-task neural processes . Let a data set be given composed of training samples and corresponding labels . In order to better reflect the desired model behaviour at test time ( Garnelo et al. , 2018b ) , the training data is split into the context set D = ( X , Y ) and a target set D∗ = ( X∗ , Y∗ ) , where X = { x1 , · · · , xn } is a subset of the training data , Y = { y1 , · · · , yn } the corresponding set of labels and n the size of the context set . To ensure both sets have the same data distribution , the context set is split from the target set . Now , given the context set ( X , Y ) , we would like to estimate a function that can make predict the labels Y∗ for target samples X∗ . In general , we define a stochastic process by a random function f : X → Y . 
Given the context set , we define the joint distribution over the function values { f ( x1 ) , · · · , f ( xn ) } , which in Gaussian processes is a multivariate Gaussian distribution parameterized by a kernel function . The rationale of neural processes is to extract knowledge from the context set to specify the prior over the prediction function . Instead of a kernel function , neural processes adopt a deep neural network to define the prior distribution . Specifically , the model introduces a latent variable z to account for uncertainty in the predictions of Y∗ . The observed context set is encoded into the latent variable which is conditioned on the context set , i.e . it follows the prior distribution p ( z|X , Y ) . The latent variable z is a high-dimensional random vector parameterising the stochastic process by f ( X∗ ) = g ( X∗ , z ) . The function g ( · , · ) is an extra fixed and learnable decoder function , which is also implemented by a neural network . Thus , the neural process model can be formulated as follows : p ( Y∗|X∗ , X , Y ) = ∫ p ( Y∗|X∗ , z ) p ( z|X , Y ) dz . ( 1 ) The graphical model for neural processes is shown in Figure 1 ( a ) . The neural process model is optimized using amortized variational inference . Let q ( z|X∗ , Y∗ ) be a variational posterior of the latent variable z . The evidence lower-bound ( ELBO ) for neural processes is given as follows : log p ( Y∗|X∗ , X , Y ) ≥ Eq ( z|X∗ , Y∗ ) [ p ( Y∗|X∗ , z ) ] − DKL [ q ( z|X∗ , Y∗ ) ||p ( z|X , Y ) ] . ( 2 ) In neural processes , the function space defined by deep neural networks allows the model to extract deep features while retaining a probabilistic interpretation ( Jakkala , 2021 ) . In multi-task learn- ing , usually different tasks can be from different domains and have their specific data distributions ( Lawrence & Platt , 2004 ; Long et al. , 2017 ) . 
Due to the complex data structure of multi-task learning , it is not straightforward to explore task relatedness in such function spaces . In this paper , we aim to extend the methodology of the neural process to the scenarios of multi-task learning to learn the shared knowledge among tasks for improving individual tasks . 3 MULTI-TASK NEURAL PROCESSES . The common setup of multi-task learning is that there are multiple related tasks for which we would like to improve the learning of each individual one by sharing information across the different tasks ( Williams et al. , 2007 ) . Multi-task learning has been studied under different settings ( Requeima et al. , 2019 ; Williams et al. , 2007 ; Lawrence & Platt , 2004 ; Yu et al. , 2005 ; Long et al. , 2017 ) . In this paper , we tackle the multi-input multi-output setting , where each task has a different distribution while different tasks share the same target space ( Lawrence & Platt , 2004 ; Yu et al. , 2005 ; Long et al. , 2017 ; Zhang et al. , 2020 ) . It aims to improve the overall performance of all multiple tasks simultaneously different from the sequential multi-task leaning ( Requeima et al. , 2019 ; Garnelo et al. , 2018a ) . This is a challenging scenario due to the domain shift between tasks , which makes it sub-optimal to directly apply neural processes by incorporating the data from other related tasks into the context of each individual task . 3.1 HIERARCHICAL CONTEXT MODELING . Multi-task learning considers the estimation of random functions fl , l = 1 , ... , L for each of the L related tasks . Each task l has its own training data , which is split into a context set Dl = ( Xl , Yl ) and a target set D∗l = ( X ∗ l , Y ∗ l ) . X ∗ l ∈ Rn ∗ l×d and Xl ∈ Rnl×d are inputs while Y∗l ∈ Rn ∗ l×C and Yl ∈ Rnl×C are outputs . d and C are the sizes of the input space and output space , respectively . n∗l and nl are the sizes of respectively the target and context set for the l-th task . 
Thus , we obtain the task-specific latent variables fl = fl ( Xl ) ∈ Rnl×C . We use { Dl } Ll=1 to denote all context sets in the dataset , which for brevity we represent as { Dl } and likewise will do for other sets . Formulated this way the goal of multi-task learning is to predict { Y∗l } for given { X∗l } simultaneously with the assistance information { Dl } from all tasks . To this end , we construct a joint prediction distribution with respect to the latent random functions { fl } as : p ( { Y∗l } | { X∗l } , { Dl } ) = L∏ l=1 p ( Y∗l |X∗l , { Dl } ) = L∏ l=1 ∫ p ( Y∗l |X∗l , fl ) p ( fl|M ) dfl . ( 3 ) Here , to enable shared knowledge to be transferred among tasks , we introduce a global variable M which works as a container to collect the useful information from { Dl } of all tasks . In contrast to neural processes for single tasks , the global variable M provides the contextual information from all tasks for each individual task . The concrete formation of M depends on the learning scenarios . For regression tasks , M ∈ RL∗d and each row corresponds to one task , which is the average of all feature vectors from a task . For classification tasks , M ∈ RL∗C∗d where each vector Ml , c is the average of all features of each category from the corresponding task . Similar to Gaussian processes , we assume that the function value is Y∗l = fl ( X ∗ l ) + , where ∼ N ( 0 , σ2 ) is the observation noise . For regression tasks , we can define the predictive likelihood on the target set as p ( Y∗l |X∗l , fl ) = N ( Y∗l |fl ( X∗l ) , σ2 ) . For classification tasks , we use log p ( Y∗l |X∗l , fl ) = ∑n∗l i=1 y ∗ l , ilog ( fl ( x ∗ l , i ) ) as the log-likelihood function , where x ∗ l , i is the i-th target sample from the l-th task . In order to combine Gaussian processes and neural networks in the context of multi-task learning , we define p ( fl|M ) in ( 3 ) as a deep neural network in place of a Gaussian distribution parameterized by a kernel function . 
To be more specific, we assume that $f_l$ is parameterized by a random variable $\psi_l$ by defining $f_l(X) = X\psi_l^\top$. We specify a data-dependent prior by conditioning $\psi_l$ on the global variable $M$. In this way, we incorporate the transferable knowledge into the learning of the prediction function of the current task. Thus, the predictive distribution for the $l$-th task over its target set can be formulated as follows:
$$p(Y^*_l \mid X^*_l, \{\mathcal{D}_l\}) = \int p(Y^*_l \mid X^*_l, \psi_l)\, p_\theta(\psi_l \mid M)\, d\psi_l. \quad (4)$$
In effect, $\psi_l$ denotes the parameters of the classifier for classification tasks or of the regressor for regression tasks. In particular, the latent variable $\psi_l \in \mathbb{R}^{C \times d}$ denotes the task-specific classifier, where $d$ is the dimension of the input feature and $C$ is the number of classes in the dataset. As done in (Requeima et al., 2019), we generate each column of $\psi_l$ independently from the context samples of the corresponding class. In our case, each column of $\psi_l$ encodes the context information of its class from all tasks: $p_\theta(\psi_l \mid M) = \prod_{c=1}^{C} p_\theta(\psi_{l,c} \mid M_c)$. Directly aggregating $M$ as done in neural processes for single tasks is not applicable to multi-task learning due to the distribution shift between tasks. The data from related tasks should be processed and adapted to the current task as contextual information. To this end, we introduce a higher-level latent variable $\alpha_l$ to extract the shared knowledge from $M$, which is conditioned on the data $\mathcal{D}_l$ of each task. This results in a hierarchical Bayesian modeling of functions in the neural process:
$$p_\theta(\psi_l \mid M) = \int p_{\theta_1}(\psi_l \mid \alpha_l, M)\, p_{\theta_2}(\alpha_l \mid \mathcal{D}_l)\, d\alpha_l, \quad (5)$$
where $\alpha_l$ is the latent variable that controls the access to shared knowledge for each task and is used to extract the knowledge relevant to task $l$. $p_{\theta_1}(\psi_l \mid \alpha_l, M)$ and $p_{\theta_2}(\alpha_l \mid \mathcal{D}_l)$ are prior distributions of the latent variables $\psi_l$ and $\alpha_l$, respectively, which are parameterized with neural networks.
To be specific, we define $p_{\theta_1}(\psi_l \mid \alpha_l, M) = \mathcal{N}(\psi_l \mid \mu(m_l), \Sigma(m_l))$. Here $m_l$ contains the knowledge relevant to task $l$, which is adapted from the global variable $M$ by a deterministic function $m_l = h(\alpha_l, M)$, where $h(\cdot)$ is a learnable function implemented with a neural network. By substituting (5) into (4), we obtain the model of multi-task neural processes as follows:
$$p(\{Y^*_l\} \mid \{X^*_l\}, \{\mathcal{D}_l\}) = \prod_{l=1}^{L} \iint p(Y^*_l \mid X^*_l, \psi_l)\, p_{\theta_1}(\psi_l \mid \alpha_l, M)\, p_{\theta_2}(\alpha_l \mid \mathcal{D}_l)\, d\psi_l\, d\alpha_l. \quad (6)$$
The designed hierarchical context modeling provides a principled way to explore task relatedness in the function space, which allows task-specific function variables to leverage the shared knowledge from related tasks. We provide a theoretical proof in the Appendix showing that the proposed multi-task neural processes are valid stochastic processes, which completes the theory of multi-task neural processes. The graphical model of the multi-task neural processes is shown in Figure 1 (b).
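A minimal Monte Carlo sketch of the predictive model in Eq. (6): sample $\alpha_l$, adapt the container via $m_l = h(\alpha_l, M)$, sample $\psi_l$, and average the resulting predictions $X\psi_l^\top$. The toy stand-ins for the networks (a standard-normal prior for $\alpha_l$, a sigmoid-gated read-out for $h$, an isotropic Gaussian for $\psi_l$) are assumptions for illustration only, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_predict(X_star, M_l, num_samples=32):
    """Monte Carlo approximation of the predictive distribution in Eq. (6)
    for one task, with toy stand-ins for the parameterized networks.
    M_l: (C, d) slice of the global container relevant to task l."""
    C, d = M_l.shape
    preds = []
    for _ in range(num_samples):
        alpha = rng.standard_normal(C)                       # alpha_l ~ p(alpha_l | D_l)  (toy prior)
        m = (1.0 / (1.0 + np.exp(-alpha)))[:, None] * M_l    # m_l = h(alpha_l, M): gated access to M
        psi = m + 0.1 * rng.standard_normal((C, d))          # psi_l ~ N(mu(m_l), sigma^2 I)
        preds.append(X_star @ psi.T)                         # f_l(X) = X psi_l^T
    return np.mean(preds, axis=0)                            # average over latent samples
```

In the actual model these samples would come from learned variational posteriors; the averaging over latent draws is the standard way neural-process-style models approximate such double integrals at test time.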
This paper proposes a new multi-task neural process based on the classical neural process. The idea is to introduce additional global variables to share knowledge across different tasks. That is to say, a hierarchical Bayesian model is constructed to link single-task neural processes together. In contrast to the classical neural process, the new model is able to handle multi-task applications.
SP:d2b56456ad62914d187bac8c78ffa995c32fa83b
Learning Neural Implicit Functions as Object Representations for Robotic Manipulation
1 INTRODUCTION . Intelligent agents should be able to interact with objects in the environment, such as grasping and placing an object, or more general tool use, to achieve a certain goal. In robotics, such instances are formalized as manipulation planning, a type of motion planning problem that solves not only for the robot's own movement but also for the objects' motions subject to interaction constraints. Traditional approaches represent objects using meshes or combinations of shape primitives and describe interactions as hand-crafted constraints in terms of that representation. Such traditional geometric representations have long-standing limitations in terms of perception and generalization to large varieties of objects and interaction modes: (i) The representations have to be inferred from raw sensory inputs like images or point clouds, raising the fundamental problem of perception and shape estimation. However, if the aim is manipulation skills, the hard problem of precise shape estimation might be unnecessary for predicting accurate interaction features1, and an end-to-end object representation might be more appropriate than a standard perception pipeline. (ii) With increasing generality of object shapes and interactions, the complexity of representations grows and hand-engineering of the interaction features becomes inefficient. What is a good representation of an object? Considering that the representation will be used to predict interaction features, we expect it to encode primarily task-specific information rather than only geometry. And some of the information should be shared across different interaction modes. In other words, good representations should be task-specific so that the feature prediction can be simplified and, at the same time, task-agnostic to enable synergies between the features. E.g.
, mug handles are called handles because we can handle the mug through them; and once we learn the notion of a handle, we can interact with the mug through it in many different ways. From the perception aspect, good representations should be easy to infer from raw sensory inputs and should be able to trade off their accuracy in favor of the feature prediction. To this end, we propose a novel data-driven approach to learning interaction features. The proposed feature prediction scheme is illustrated in Fig. 1. The whole pipeline is trained end-to-end directly with the task supervision so as to make the representation and perception task-specific and thus to simplify the interaction prediction. The object representation acts as a bottleneck and is shared across multiple feature predictions so that task-agnostic representations can emerge. In particular, the object representation is a neural implicit function over the 3D space (Park et al., 2019; Mildenhall et al., 2020) upon which equality constraint features are trained. (We call an interaction constraint function an interaction feature; when used as equality constraints, the interaction features, analogous to energy potentials, return zero when feasible and non-zero otherwise.) The proposed neural implicit function is pixel-aligned: the function takes images from multiple cameras as input (e.g., stereo) and, assuming known camera poses and intrinsics, the latent representation at a certain spatial location is directly related to pixels of the images. Once learned, the interaction features can be used by a typical constrained optimal control framework to plan dexterous object-robot interaction. We adopt Logic-Geometric Programming (LGP) (Toussaint et al., 2018) as an optimization-based manipulation planning framework and show that this learned-feature-based planning enables computing trajectories that involve various types of interaction modes only from images.
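The pixel-aligned property described above amounts to reprojecting a 3D query point into each image and reading the image feature at that pixel. A minimal single-view sketch, assuming a known world-to-camera transform and pinhole intrinsics (nearest-neighbour lookup for brevity; real systems typically interpolate bilinearly, and the function name is our own):

```python
import numpy as np

def pixel_aligned_feature(p_world, feat_map, T_cam, K):
    """Reproject a 3D world point into an image and read the feature
    at that pixel. T_cam: 4x4 world-to-camera transform; K: 3x3 intrinsics;
    feat_map: (H, W, d) per-pixel feature map."""
    p_cam = T_cam[:3, :3] @ p_world + T_cam[:3, 3]   # world -> camera frame
    uvw = K @ p_cam                                  # perspective projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]          # pixel coordinates
    H, W, _ = feat_map.shape
    iu = int(np.clip(round(u), 0, W - 1))            # nearest pixel, clamped
    iv = int(np.clip(round(v), 0, H - 1))
    return feat_map[iv, iu]                          # feature aligned with the 3D point
```

With multiple views, one feature per camera would be gathered this way and fused (e.g., by averaging) before being passed downstream.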
Due to the representations' generalization, the learned features are directly applicable to manipulation tasks involving unseen objects. To summarize, our main contributions are:
• to represent objects as neural implicit functions upon which interaction features are trained,
• an image-based manipulation planning framework with the learned features as constraints,
• a comparison to non-pixel-aligned, non-implicit-function, and geometric representations,
• demonstrations in various manipulation scenarios ranging from simple pick-and-hang [videos] to longer-horizon manipulations [videos] and zero-shot imitations [videos].
2 RELATED WORK . 2.1 NEURAL IMPLICIT REPRESENTATIONS IN 3D MODELING AND VIEW SYNTHESIS . Neural implicit representations have recently gained increasing attention in 3D modeling. The core idea is to encode an object or a scene in the weights of a neural network, where the network acts as a direct mapping from a 3D spatial location to an implicit representation of the model, such as occupancy measures (Mescheder et al., 2019; Songyou Peng, 2020) or signed distance fields (Park et al., 2019; Gropp et al., 2020; Atzmon & Lipman, 2020). In contrast to explicit representations like voxels, meshes, or point clouds, implicit representations don't require discretization of the 3D space or a fixed shape topology but rather represent the 3D geometry continuously, thereby allowing complex shape geometry to be captured at high resolution in a memory-efficient way. There have been attempts to associate these 3D representations with 2D images using the principles of camera geometry. Exploiting the camera geometry in the forward direction, i.e., 2D projection of 3D representations, yields a differentiable image rendering procedure, and this idea can be used to remove the need for 3D supervision. For example, Sitzmann et al. (2019); Niemeyer et al. (2020); Yariv et al. (2020); Mildenhall et al. (2020); Henzler et al.
(2021); Reizenstein et al. (2021) showed that the representation networks can be trained without 3D supervision by defining the loss function to be the difference between the rendered images and the ground truth. Another notable application of this idea is view synthesis. Based on differentiable rendering, Park et al. (2020); Chen et al. (2020); Yen-Chen et al. (2021) addressed unseen-object pose estimation problems, where the goal is to find the object's pose relative to the camera that produces a rendered image closest to the ground truth. By conditioning 3D representations on 2D input images, one can expect the amortized encoder network to generalize directly to novel 3D geometries without requiring any test-time optimization. This can be done by introducing a bottleneck of a finite-dimensional global latent vector between the images and representations, but these global features often fail to capture fine-grained details of the 3D models (Songyou Peng, 2020). To address this, the camera geometry can be exploited in the inverse direction to obtain pixel-aligned local representations, i.e., 3D reprojection of 2D image features. Saito et al. (2019) and Xu et al. (2019) showed that pixel-aligned methods can establish rich latent features because they easily preserve high-frequency components of the input images. Also, Yu et al. (2021) and Trevithick & Yang (2021) incorporated this idea within the view-synthesis framework and showed that their convolutional encoders generalize strongly. While the above work investigates neural implicit functions to model shapes or appearances, we train them to model physical interaction feasibility and thereby to provide a differentiable constraint model for robot manipulation planning. 2.2 OBJECT/SCENE REPRESENTATIONS FOR ROBOTIC MANIPULATIONS .
Several works have proposed data-driven approaches to learning object representations and/or interaction features conditioned on raw sensory inputs, especially for grasping diverse objects. One popular approach is to train discriminative models for grasp assessment. For example, ten Pas et al. (2017); Mahler et al. (2017); Van der Merwe et al. (2020) trained neural networks that, for given candidate grasp poses, predict their grasp qualities from point clouds. In addition, Breyer et al. (2020); Jiang et al. (2021) proposed 3D convolutional networks that take as inputs a truncated SDF and candidate grasp poses and return the grasp affordances. Similarly, Zeng et al. (2020b;a) addressed more general manipulation scenarios such as throwing or pick-and-place, where a convolutional network outputs a task score image. On the other hand, neural networks have also been used as generative models. For example, Mousavian et al. (2019) and Murali et al. (2020) adopted conditional variational autoencoders to model the feasible grasp pose distribution conditioned on the point cloud. Sundermeyer et al. (2021) proposed a hybrid method, where the network densely generates grasp candidates by assigning grasp scores and orientations to the point cloud. You et al. (2021) addressed object hanging tasks from point clouds, where the framework first makes dense predictions of candidate poses, among which one is picked and refined. Compared to these works, our framework takes advantage of trajectory optimization to jointly optimize an interaction pose sequence instead of relying on exhaustive search or heuristic sampling schemes, and thus suffers from neither the high dimensionality nor the combinatorial complexity of long-horizon planning problems. Another important line of research is learning and utilizing keypoint object representations. Manuelli et al.
(2019); Gao & Tedrake (2021); Qin et al. (2020); Turpin et al. (2021) represented objects using a set of 3D semantic keypoints and formulated manipulation problems in terms of such keypoints. Similarly, Manuelli et al. (2020) learned the object dynamics as a function of keypoints, upon which a model predictive controller is implemented. Despite their strong generalization to unseen objects, keypoint representations require the semantics of the keypoints to be predefined. The representation part of our framework is closely related to the dense object descriptions proposed by Florence et al. (2018; 2019). The idea is to train fully-convolutional neural networks that map a raw input image to pixelwise object representations which directly generalize to unseen objects. Our proposed framework can be seen as an extension of this pixelwise representation to dense representations over the 3D space, which is learned with task supervision and can be seamlessly integrated into general sequential manipulation planning problems. Another recent related work was proposed by Yuan et al. (2021), where learned object-centric representations are used to predict the symbolic predicates of the scene, which in turn enables symbolic-level task planning. In contrast, our framework predicts the task feasibility given a robot configuration and enables trajectory optimization of the lower-level continuous motions. 3 INTERACTION FEATURE PREDICTION VIA IMPLICIT REPRESENTATION . Given $N_{\text{view}}$ images with their camera poses/intrinsics, $\{(I^1, T^1, K^1), \ldots, (I^{N_{\text{view}}}, T^{N_{\text{view}}}, K^{N_{\text{view}}})\}$, we define an interaction feature as a neural implicit function:
$$h = \phi_{\text{task}}(q; \{(I^1, T^1, K^1), \ldots, (I^{N_{\text{view}}}, T^{N_{\text{view}}}, K^{N_{\text{view}}})\}), \quad (1)$$
where $q \in SE(3)$ is the pose of the robot frame interacting with the object. As shown in Fig.
1, the feature prediction framework consists of two parts: the representation network, which we call the backbone, and the feature head networks. The backbone serves as an implicit functional representation of an object which, conditioned on a set of posed images, outputs d-dimensional representation vectors at queried 3D spatial locations. The interaction feature predictions are made through the feature heads, where each head is fed a set of representation vectors obtained by querying the backbone at a set of key interaction points. While the multiple feature heads separately model different interactions, the backbone is shared across the tasks, making it learn more general object representations. The rest of this section introduces each module in detail.
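The two-stage design above (a shared backbone queried at key interaction points, feeding per-task feature heads) can be sketched as follows. The backbone and heads are placeholder callables standing in for the paper's networks; the names are illustrative assumptions:

```python
import numpy as np

def interaction_feature(query_points, backbone, heads):
    """Minimal sketch of the structure behind Eq. (1): the shared backbone
    maps 3D key interaction points (derived from the robot pose q) to
    d-dimensional representation vectors, and each task-specific head maps
    the stacked vectors to its constraint value."""
    reps = np.stack([backbone(p) for p in query_points])       # (n_points, d) shared representation
    return {name: head(reps) for name, head in heads.items()}  # one feature value per interaction mode
```

Because `reps` is computed once and reused by every head, the backbone is pushed toward task-agnostic representations while each head remains task-specific, mirroring the bottleneck argument in the introduction.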
The method proposes an implicit-field-function-based representation, which can be inferred directly from camera images and used for robot manipulation. The proposed method infers the object representation by querying the implicit features at some pre-defined key points. The architecture projects the 3D query point into the 2D images and then exploits the pixel-wise features.
SP:f26e9ce12b57afb93c423645933e30d4b2dec3ef
Learning Neural Implicit Functions as Object Representations for Robotic Manipulation
This paper proposes a method that integrates neural implicit functions (NIFs) with planning methods for robot manipulation. A neural implicit function that represents geometry with SDFs, grasp scores, and hanging scores is learned. This is then integrated with a planner based on Logic-Geometric Programming (LGP). Experiments are run to test mug hanging in a few different scenarios, including hanging a single mug and hanging a mug with handover.
SP:f26e9ce12b57afb93c423645933e30d4b2dec3ef
Learning Neural Implicit Functions as Object Representations for Robotic Manipulation
1 INTRODUCTION . Intelligent agents should be able to interact with objects in the environment , such as grasping and placing an object , or more general tool-use , to achieve a certain goal . In robotics , such instances are formalized as manipulation planning , a type of a motion planning problem that solves not only for the robot ’ s own movement but also for the objects ’ motions subject to interaction constraints . Traditional approaches represent objects using meshes or combinations of shape primitives and describe interactions as hand-crafted constraints in terms of that representation . The approach of using such traditional geometric representations has long-standing limitations in terms of their perception and generalizing to large varieties of objects and interaction modes : ( i ) The representations have to be inferred from raw sensory inputs like images or point clouds – raising the fundamental problem of perception and shape estimation . However , if the aim is manipulation skills , the hard problem of precise shape estimation might be unnecessary to predict accurate interaction features1 , and an end-to-end object representation might be more appropriate than a standard perception pipeline . ( ii ) With increasing generality of object shapes and interaction , the complexity of representations grows and hand-engineering of the interaction features becomes inefficient . What is a good representation of an object ? Considering the representation will be used to predict interaction features , we expect it to encode primarily task-specific information rather than only geometric . And some of the information should to be shared across different interaction modes . In other words , good representations should be task-specific so that the feature prediction can be simplified and , at the same time , be task-agnostic to enable synergies between the features . E.g. 
, mug handles are called handles because we can handle the mug through them and also , once we learn the notion of a handle , we can interact with the mug through them in many different ways . From the perception aspect , good representations should be easy to infer from raw sensory inputs and should be able to trade their accuracy off in favor of the feature prediction . To this end , we propose a novel data-driven approach to learning interaction features . The proposed feature prediction scheme is illustrated in Fig . 1 . The whole pipeline is trained end-to-end directly with the task supervisions so as to make the representation and perception task-specific and thus to simplify the interaction prediction . The object representation acts as a bottleneck and is shared across multiple feature predictions so that the task-agnostic representations can emerge . Particularly , the object representation is a neural implicit function over the 3D space ( Park et al. , 2019 ; Mildenhall et al. , 2020 ) upon which equality constraint features are trained . The proposed neural implicit 1We call an interaction constraint function an interaction feature ; when used as equality constraints , the interaction features , analogous to energy potentials , return zero when feasible and non-zero otherwise . function is pixel-aligned : The function takes images from multiple cameras as input ( e.g . stereo ) and , assuming known camera poses and intrinsics , the latent representation at a certain spatial location is directly related to pixels of the images . Once learned , the interaction features can be used by a typical constrained optimal control framework to plan dexterous object-robot interaction . We adopt Logic-Geometric Programming ( LGP ) ( Toussaint et al. , 2018 ) as an optimization-based manipulation planning framework and show that this learned-feature based planning enables to compute trajectories that involve various types of interaction modes only from images . 
Due to the representations' generalization, the learned features are directly applicable to manipulation tasks involving unseen objects. To summarize, our main contributions are:
• representing objects as neural implicit functions upon which interaction features are trained,
• an image-based manipulation planning framework with the learned features as constraints,
• comparison to non-pixel-aligned, non-implicit-function, and geometric representations,
• demonstration in various manipulation scenarios ranging from simple pick-and-hang [videos] to longer-horizon manipulations [videos] and zero-shot imitations [videos].
2 RELATED WORK . 2.1 NEURAL IMPLICIT REPRESENTATIONS IN 3D MODELING AND VIEW SYNTHESIS . Neural implicit representations have recently gained increasing attention in 3D modeling. The core idea is to encode an object or a scene in the weights of a neural network, where the network acts as a direct mapping from a 3D spatial location to an implicit representation of the model, such as occupancy measures (Mescheder et al., 2019; Songyou Peng, 2020) or signed distance fields (Park et al., 2019; Gropp et al., 2020; Atzmon & Lipman, 2020). In contrast to explicit representations like voxels, meshes or point clouds, implicit representations don't require discretization of the 3D space nor a fixed shape topology but rather represent the 3D geometry continuously, thereby allowing complex shape geometry to be captured at high resolution in a memory-efficient way. There have been attempts to associate these 3D representations with 2D images using the principles of camera geometry. Exploiting the camera geometry in the forward direction, i.e., 2D projection of 3D representations, yields a differentiable image rendering procedure, and this idea can be used to get rid of 3D supervision. For example, Sitzmann et al. (2019); Niemeyer et al. (2020); Yariv et al. (2020); Mildenhall et al. (2020); Henzler et al.
(2021); Reizenstein et al. (2021) showed that the representation networks can be trained without 3D supervision by defining the loss function to be the difference between the rendered images and the ground truth. Another notable application of this idea is view synthesis. Based on differentiable rendering, Park et al. (2020); Chen et al. (2020); Yen-Chen et al. (2021) addressed unseen-object pose estimation problems, where the goal is to find the object's pose relative to the camera that produces a rendered image closest to the ground truth. By conditioning 3D representations on 2D input images, one can expect the amortized encoder network to generalize directly to novel 3D geometries without requiring any test-time optimization. This can be done by introducing a bottleneck of a finite-dimensional global latent vector between the images and representations, but these global features often fail to capture fine-grained details of the 3D models (Songyou Peng, 2020). To address this, the camera geometry can be exploited in the inverse direction to obtain pixel-aligned local representations, i.e., 3D reprojection of 2D image features. Saito et al. (2019) and Xu et al. (2019) showed that pixel-aligned methods can establish rich latent features because they easily preserve high-frequency components of the input images. Also, Yu et al. (2021) and Trevithick & Yang (2021) incorporated this idea within the view-synthesis framework and showed that their convolutional encoders generalize strongly. While the above work investigates neural implicit functions to model shapes or appearances, we train them to model physical interaction feasibility and thereby to provide a differentiable constraint model for robot manipulation planning. 2.2 OBJECT/SCENE REPRESENTATIONS FOR ROBOTIC MANIPULATIONS .
Several works have proposed data-driven approaches to learning object representations and/or interaction features which are conditioned on raw sensory inputs , especially for grasping of diverse objects . One popular approach is to train discriminative models for grasp assessments . For example , ten Pas et al . ( 2017 ) ; Mahler et al . ( 2017 ) ; Van der Merwe et al . ( 2020 ) trained a neural network that , for given candidate grasp poses , predicts their grasp qualities from point clouds . In addition , Breyer et al . ( 2020 ) ; Jiang et al . ( 2021 ) proposed 3D convolutional networks that take as inputs a truncated SDF and candidate grasp poses and return the grasp affordances . Similarly , Zeng et al . ( 2020b ; a ) addressed more general manipulation scenarios such as throwing or pick-and-place , where a convolutional network outputs a task score image . On the other hand , neural networks also have been used as generative models . For example , Mousavian et al . ( 2019 ) and Murali et al . ( 2020 ) adopted the approach of conditional variational autoencoders to model the feasible grasp pose distribution conditioned on the point cloud . Sundermeyer et al . ( 2021 ) proposed a somewhat hybrid method , where the network densely generates grasp candidates by assigning grasp scores and orientations to the point cloud . You et al . ( 2021 ) addressed the object hanging tasks from point clouds where the framework first makes dense predictions of the candidate poses among which one is picked and refined . Compared to these works , our framework takes advantage of a trajectory optimization to jointly optimize an interaction pose sequence instead of relying on exhaustive search or heuristic sampling schemes , thus not suffering from the high dimensionality nor the combinatorial complexity of long-horizon planning problems . Another important line of research is learning and utilizing keypoint object representations . Manuelli et al . 
(2019); Gao & Tedrake (2021); Qin et al. (2020); Turpin et al. (2021) represented objects using a set of 3D semantic keypoints and formulated manipulation problems in terms of such keypoints. Similarly, Manuelli et al. (2020) learned the object dynamics as a function of keypoints upon which a model predictive controller is implemented. Despite their strong generalization to unseen objects, keypoint representations require the semantics of the keypoints to be predefined. The representation part of our framework is closely related to the dense object descriptions proposed by Florence et al. (2018; 2019). The idea is to train fully-convolutional neural networks that map a raw input image to pixelwise object representations which directly generalize to unseen objects. Our proposed framework can be seen as an extension of this pixelwise representation to dense representations over the 3D space which are learned by the task supervisions and can be seamlessly integrated into general sequential manipulation planning problems. Another recent related work was proposed by Yuan et al. (2021), where learned object-centric representations are used to predict the symbolic predicates of the scene, which in turn enables symbolic-level task planning. In contrast, our framework predicts the task feasibility given a robot configuration and enables trajectory optimization of the lower-level continuous motions. 3 INTERACTION FEATURE PREDICTION VIA IMPLICIT REPRESENTATION . Given N_view images with their camera poses/intrinsics, {(I^1, T^1, K^1), ..., (I^{N_view}, T^{N_view}, K^{N_view})}, we define an interaction feature as a neural implicit function:

h = φ_task(q; {(I^1, T^1, K^1), ..., (I^{N_view}, T^{N_view}, K^{N_view})}),   (1)

where q ∈ SE(3) is the pose of the robot frame interacting with the object. As shown in Fig.
1, the feature prediction framework consists of two parts: the representation network, which we call the backbone, and the feature head networks. The backbone serves as an implicit functional representation of an object which, conditioned on a set of posed images, outputs d-dimensional representation vectors at queried 3D spatial locations. The interaction feature predictions are made through the feature heads, where each head is fed a set of representation vectors obtained by querying the backbone at a set of key interaction points. While the multiple feature heads separately model different interactions, the backbone is shared across the tasks, making it learn more general object representations. The rest of this section is devoted to introducing each module in detail.
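To make the pixel-aligned querying concrete, the following is a minimal sketch, not the authors' implementation, of how a 3D interaction point could be mapped to per-view features under a standard pinhole camera model. The function names, the nearest-neighbor sampling, and the mean pooling across views are illustrative assumptions; the paper's backbone uses learned convolutional feature maps and its own aggregation.

```python
import numpy as np

def project(p_world, T_wc, K):
    """Project a 3D world point to pixel coordinates (u, v) and its depth,
    given a world-to-camera pose T_wc (4x4) and intrinsics K (3x3)."""
    p_cam = T_wc[:3, :3] @ p_world + T_wc[:3, 3]   # world -> camera frame
    uvw = K @ p_cam                                 # camera -> image plane
    return uvw[:2] / uvw[2], p_cam[2]

def query_pixel_aligned(p_world, views):
    """Sample each view's feature map at the point's projection and average.
    `views` is a list of (feature_map, T_wc, K) triples."""
    feats = []
    for feat_map, T_wc, K in views:
        (u, v), depth = project(p_world, T_wc, K)
        iu, iv = int(round(u)), int(round(v))       # nearest-neighbor lookup
        h, w = feat_map.shape[:2]
        if depth > 0 and 0 <= iv < h and 0 <= iu < w:
            feats.append(feat_map[iv, iu])
    return np.mean(feats, axis=0)
```

The key property this illustrates is that the representation at a 3D location is tied directly to the pixels it projects to in every view, rather than to a single global latent vector.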
This paper studies the problem of learning representations for robotic manipulation tasks. The authors develop a method that represents objects as neural implicit functions. The method is data-driven: the pipeline from camera images to interaction features is trained end-to-end. At each time step in a motion planning task, the implicit function is queried at interaction points attached to the robot frame. With the pixel-aligned representations, the representation at a certain spatial location corresponds to the associated pixels of the images. Experiments are conducted on a mug hanging task. Videos in simulation environments show the effectiveness of the proposed method.
Scalable Hierarchical Embeddings of Complex Networks
Graph representation learning has become important for understanding and predicting intrinsic structures in complex networks. A variety of embedding methods have been developed in recent years, including the Latent Distance Modeling (LDM) approach. A major challenge is scaling network embedding approaches to very large networks, and a drawback of LDM is the computational cost incurred in evaluating the full likelihood, which has O(N^2) complexity, making such analysis of large networks infeasible. We propose a novel multiscale hierarchical estimate of the full likelihood of LDMs that provides high detail where the likelihood approximation is most important while scaling in complexity as O(N log N). The approach relies on a clustering procedure approximating the Euclidean norm of every node pair according to the imposed multiscale hierarchical structure. We demonstrate the accuracy of our approximation and, for the first time, embed very large networks on the order of a million nodes using LDM, contrasting the predictive performance with prominent scalable graph embedding approaches. We find that our approach significantly outperforms these existing scalable approaches in link prediction, node clustering and classification while utilizing a surprisingly low embedding dimensionality of two to three dimensions, whereas the extracted hierarchical structure facilitates network visualization and interpretation. The developed scalable hierarchical embedding approach enables accurate low-dimensional representations of very large networks, providing detailed visualizations that can further our understanding of their properties and structure. 1 INTRODUCTION . Networks naturally arise in a plethora of scientific areas to model the interactions between entities, from physics to sociology and biology, with many instances such as collaboration, protein-protein interaction, and brain connectivity networks (Newman, 2003).
In recent years Graph Representation Learning ( GRL ) approaches have attracted great interest with their outstanding performance compared to the classical techniques for the challenging network analysis problems such as link prediction ( Liben-Nowell & Kleinberg , 2003 ; Backstrom & Leskovec , 2011 ) , node classification ( Getoor & Taskar , 2007 ; Grover & Leskovec , 2016 ) , and community detection ( Fortunato , 2010 ) . Many existing GRL methods ( Hamilton et al. , 2017b ; Zhang et al. , 2020 ) mainly aim to capture the underlying intrinsic relationships among the nodes by either performing random walks ( Perozzi et al. , 2014 ; Grover & Leskovec , 2016 ) over the network or designing a matrix capturing the first and high order node proximities ( Cao et al. , 2015 ; Ou et al. , 2016 ) . However , they require high computational and space costs because of the exact node sampling procedures or the expensive factorization of dense proximity matrices . The recent Graph Neural Networks ( GNNs ) ( Hamilton et al. , 2017b ; Zhang et al. , 2020 ; Wang et al. , 2016 ) methods provide effective tools in learning the node representations by leveraging the side information such as node attribute features ; nevertheless , they also face computational difficulties , especially for large-scale networks consisting of millions of nodes and edges . Although the recent studies aim to alleviate the computational burden of the algorithms through matrix sparsification tools ( Qiu et al. , 2019 ) or hierarchical representations ( Bhowmick et al. , 2020 ; Chen et al. , 2018 ) , the performance of the methods in the downstream tasks significantly drops , and they require larger embedding sizes to compensate for the loss . Latent Space Models ( LSMs ) for the representation of graphs have been quite popular over the past years , especially for social networks analysis ( Hoff et al. , 2002 ) . 
LSMs utilize the generalized linear model framework to obtain informative latent node embeddings while preserving network characteristics . The choice of latent effects in modeling the link probabilities between the nodes leads to different expressive capabilities characterizing network structure . We consider the Latent Distance Model ( LDM ) ( Hoff et al. , 2002 ) with the Euclidean norm , in which nodes are placed closer in the latent space if they are similar or vice-versa . LDM obeys the triangle inequality and thus naturally represents transitivity and network homophily . These methods are attractive due to their simplicity , as they define well-structured inference problems and are characterized by high explanatory power . The time and space complexities are their main drawbacks , which scale quadratically with the number of nodes in the graph . Many real-world networks can be expressed as hierarchical structures of different scales ( Ravasz & Barabási , 2003 ) . For this purpose , several hierarchical network modeling tools have been proposed , such as the extensions of the stochastic block model to binary and multifurcating hierarchical structures ( Clauset et al. , 2008 ; Roy et al. , 2007 ; Blundell et al. , 2012 ; Herlau et al. , 2012 ; 2013 ) as well as agglomerative ( Blondel et al. , 2008 ; Ahn et al. , 2010 ) and recursive partitioning procedures ( Li et al. , 2020 ) relying on various measures of similarity . Learning the node representations preserving the hierarchical structure of the network is also a very promising task , and it can facilitate the visualization and the understanding of the inner dynamics of the network . In this work , we propose the Scalable Hierarchical Latent Distance Model ( SH-LDM ) combining embedding and hierarchical representations for graph representation learning . 
Importantly, the hierarchical structure imposed in SH-LDM reduces the total time and space complexity of the LDM to linearithmic in the number of nodes (i.e., O(N log N)) while at the same time providing an accurate, interpretable representation of structure at different scales. Using the SH-LDM we embed moderate-sized and large-scale networks containing more than a million nodes and compare the performance of LDM in terms of link prediction and node classification with existing prominent scalable graph embedding approaches. We further highlight how the inferred hierarchical organization can facilitate accurate visualization of network structure even when using only D = 2 dimensional representations, providing favorable performance in all the considered GRL tasks: link prediction, node classification, node clustering, and network reconstruction. In summary, our contributions are to reconcile embedding and hierarchical representations, providing an accurate linearithmic approximation of the full likelihood, efficient inference, enhanced visualization, and network compression utilizing ultra-low embedding dimensions and hierarchical representations. 2 THE SCALABLE HIERARCHICAL-LATENT DISTANCE MODEL . We presently concentrate our study on the case of undirected networks, but we note that our approach generalizes to both directed and bipartite graphs as described in the supplementary material. Let G = (V, E) be a graph where N := |V| is the number of nodes and Y_{N×N} = [y_{i,j}] be the adjacency matrix of the graph such that y_{i,j} = 1 if there is an edge between the nodes v_i and v_j, and y_{i,j} = 0 otherwise, for all 1 ≤ i < j ≤ N. A Latent Space Model (LSM) defines an R^D-dimensional latent space in which every node of the graph is characterized through the unobserved but informative node-specific variables {z_i ∈ R^D}.
These variables are considered sufficient to describe and explain the underlying relationships between the nodes of the network, such as transitivity and homophily. The probability of an edge occurring between an ordered pair of the graph is considered conditionally independent given the unobserved latent positions. Consequently, the total probability distribution of the network can be written as:

P(Y | Z, θ) = ∏_{i<j} p(y_{i,j} | z_i, z_j),   (1)

A popular and convenient parameterization of equation 1 for binary data is through the logistic regression model (Hoff et al., 2002; Handcock et al., 2007; Krivitsky et al., 2009; Hoff, 2005). In contrast, we adopt the Poisson regression model as proposed in Hoff (2005) under a generalized linear model framework for the LSM. The use of a Poisson likelihood for modelling binary relationships in a network does not decrease the predictive performance nor the ability of the model to detect the network structure, as shown in Wind & Mørup (2012), and it also generalizes the analysis to integer-weighted graphs. In addition, the exchange of the logit for a log link function when transitioning from a Bernoulli to a Poisson model yields nice decoupling properties over the predictor variables in the likelihood (Karrer & Newman, 2011; Herlau et al., 2014). Utilizing the Poisson Latent Distance Model (LDM) of the LSM family, the rate of an edge occurring depends on a distance metric between the latent positions of the two nodes. We consider the LDM with node-specific biases or random effects (Hoff, 2005; Krivitsky et al., 2009) such that the expression for the Poisson rate becomes:

λ_{ij} = exp(γ_i + γ_j − d(z_i, z_j)),   (2)

where γ_i denotes the node-specific random effect and d(·, ·) denotes any distance metric obeying the triangle inequality {d_{ij} ≤ d_{ik} + d_{kj}, ∀(i, j, k)}. Considering the variables z as the latent characteristics, Eq.
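As a sanity check on Eq. (2), the rate computation can be sketched in a few lines (illustrative code, not the authors' implementation), using the Euclidean distance that the paper adopts:

```python
import numpy as np

def poisson_rate(z_i, z_j, gamma_i, gamma_j):
    """Poisson rate of Eq. (2): exp(gamma_i + gamma_j - d(z_i, z_j)),
    with d taken to be the Euclidean distance between latent positions."""
    return np.exp(gamma_i + gamma_j - np.linalg.norm(z_i - z_j))
```

Coincident points with zero random effects give a rate of exp(0) = 1; the rate decays exponentially with latent distance and grows with the node-specific effects, which is exactly how the model encodes homophily and degree heterogeneity.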
equation 2 shows that similar nodes will be placed closer in the latent space, yielding a high probability of an edge occurring; the model thus captures homophily and satisfies network transitivity and reciprocity through the triangle inequality, whereas the node-specific bias can account for degree heterogeneity. The conventional LDM utilizing a global bias γ_g corresponds to the special case in which γ_i = γ_j = 0.5 γ_g. As in Hoff et al. (2002), we presently adopt the Euclidean distance as the choice for the distance metric d(·, ·). 2.1 SCALING THE LATENT DISTANCE MODEL . Optimizing the LDM requires the computation of the log-likelihood, which is defined as the sum over each ordered pair of the network:

log P(Y | λ) = ∑_{i<j} ( y_{ij} log(λ_{ij}) − λ_{ij} ) = ∑_{i<j: y_{ij}=1} log(λ_{ij}) − ∑_{i<j} λ_{ij},   (3)

For brevity, we presently ignore the linear scaling of the above log-likelihood with the dimensionality D. Large networks are highly sparse (Barabási & Pósfai, 2016), with the number of edges in very sparse networks being proportional to the number of nodes. As a result, the computation of the link contribution ∑_{y_{i,j}=1} log(λ_{i,j}) is relatively cheap, scaling linearithmically or sub-linearithmically (see also the supplementary material). This is not the case for the second term, which still requires computation over all node pairs, scaling as O(N^2) and making the evaluation of the above likelihood infeasible for large networks. To reduce the complexity, we propose to approximate the O(N^2) non-link term using blocks, i.e., akin to stochastic block models White et al. (1976); Holland et al.
(1983); Nowicki & Snijders (2001): grouping the nodes into K clusters, we define the rate between blocks k and k′ in terms of the distance between their centroids,

∑_{i<j} λ_{ij} ≈ ∑_{k=1}^{K} ( ∑_{i,j∈C_k} e^{γ_i + γ_j − ||z_i − z_j||_2} + ∑_{i∈C_k, j∉C_k} e^{γ_i + γ_j − ||μ_k − μ_{k′}||_2} )
= ( ∑_{k=1}^{K} ∑_{i,j∈C_k} e^{γ_i + γ_j − ||z_i − z_j||_2} ) + ∑_{k>k′} e^{−||μ_k − μ_{k′}||_2} ( ∑_{i∈C_k} e^{γ_i} ) ( ∑_{j∈C_{k′}} e^{γ_j} ),   (4)

where μ_k denotes the k'th cluster centroid of the set C = {C_1, ..., C_K} and has absorbed the dependency on the variables Z. Overall, considering the main principle of the LDM that connected and homophilic nodes will be placed closer in the latent space, this approximation generalizes the principle by introducing a clustering procedure that obeys "cluster-homophily" and "cluster-transitivity" over the latent space. More specifically, we can assume that closely related nodes will be positioned in the same cluster while related or interconnected clusters will also be positioned close in the latent space, providing an accurate approximation scheme. Assuming equally sized clusters of N/K nodes, the first part scales as O(N^2/K) whereas the second part scales as O(K^2). As such, there is an undesirable inherent trade-off in which the first term is reduced by a factor of K but the second term increases quadratically. Thus, by setting K = N/log(N) we reduce the first part to scale as O(N log N), but at the cost of the second term scaling as O(N^2/log(N)^2), which for large networks is still prohibitive.
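A minimal sketch of the block approximation in Eq. (4) (illustrative; cluster assignments are assumed given, whereas the paper infers them through its multiscale hierarchy): intra-cluster pairs keep their exact rates, while inter-cluster pairs share a single centroid-to-centroid distance with the random-effect sums factored out.

```python
import numpy as np

def approx_nonlink(Z, gamma, labels):
    """Block approximation of the non-link term, as in Eq. (4).
    `labels[i]` gives the cluster index of node i."""
    ks = np.unique(labels)
    mus = {k: Z[labels == k].mean(axis=0) for k in ks}   # cluster centroids
    total = 0.0
    # Intra-cluster pairs: exact pairwise rates (O(N^2 / K) for equal clusters).
    for k in ks:
        idx = np.where(labels == k)[0]
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i, j = idx[a], idx[b]
                total += np.exp(gamma[i] + gamma[j]
                                - np.linalg.norm(Z[i] - Z[j]))
    # Inter-cluster pairs: centroid distance, random effects factored (O(K^2)).
    for a in range(len(ks)):
        for b in range(a + 1, len(ks)):
            k, kp = ks[a], ks[b]
            s_k = np.exp(gamma[labels == k]).sum()
            s_kp = np.exp(gamma[labels == kp]).sum()
            total += np.exp(-np.linalg.norm(mus[k] - mus[kp])) * s_k * s_kp
    return total
```

The two loops mirror the two terms of the trade-off discussed above: the first scales as O(N^2/K) and the second as O(K^2), which is exactly why a flat clustering cannot reach O(N log N) and a hierarchy is needed.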
The authors of this paper propose using Latent Distance Modeling (LDM) for embedding networks. LDM is an old model and simply relies on estimating the probability of an edge based on the distance between the embeddings of the two endpoints (plus some fixed-effect terms that capture the degree non-homogeneity). The problem with this approach is that evaluating the likelihood function to be maximized requires computing all pairwise distances between nodes in the graph, and thus scales quadratically with the size of the graph. To tackle this problem the authors propose grouping nodes (based on their embeddings, in each iteration) into clusters and only evaluating the pairwise distances between the centroids of the clusters. This allows for faster evaluation of the likelihood function. Also, according to the authors, the LDM model (with and without fixed-effects terms for each node) is able to achieve higher performance in different tasks (classification and link prediction) than other popular embedding methods (like DeepWalk or Node2Vec) using just 2 to 3 dimensions for the node embeddings. As a reference, state-of-the-art approaches usually use 128 dimensions to achieve good performance.
Scalable Hierarchical Embeddings of Complex Networks
Graph representation learning has become important in order to understand and predict intrinsic structures in complex networks . A variety of embedding methods has in recent years been developed including the Latent Distance Modeling ( LDM ) approach . A major challenge is scaling network embedding approaches to very large networks and a drawback of LDM is the computational cost invoked evaluating the full likelihood having O ( N2 ) complexity , making such analysis of large networks infeasible . We propose a novel multiscale hierarchical estimate of the full likelihood of LDMs providing high-details where the likelihood approximation is most important while scaling in complexity at O ( N logN ) . The approach relies on a clustering procedure approximating the Euclidean norm of every node pair according to the multiscale hierarchical structure imposed . We demonstrate the accuracy of our approximation and for the first time embed very large networks in the order of a million nodes using LDM and contrast the predictive performance to prominent scalable graph embedding approaches . We find that our approach significantly outperforms these existing scalable approaches in the ability to perform link prediction , node clustering and classification utilizing a surprisingly low embedding dimensionality of two to three dimensions whereas the extracted hierarchical structure facilitates network visualization and interpretation . The developed scalable hierarchical embedding approach enables accurate low dimensional representations of very large networks providing detailed visualizations that can further our understanding of their properties and structure . 1 INTRODUCTION . Networks naturally arise in a plethora of scientific areas to model the interactions between entities from physics to sociology and biology , with many instances such as collaboration , protein-protein interaction , and brain connectivity networks ( Newman , 2003 ) . 
In recent years Graph Representation Learning ( GRL ) approaches have attracted great interest with their outstanding performance compared to the classical techniques for the challenging network analysis problems such as link prediction ( Liben-Nowell & Kleinberg , 2003 ; Backstrom & Leskovec , 2011 ) , node classification ( Getoor & Taskar , 2007 ; Grover & Leskovec , 2016 ) , and community detection ( Fortunato , 2010 ) . Many existing GRL methods ( Hamilton et al. , 2017b ; Zhang et al. , 2020 ) mainly aim to capture the underlying intrinsic relationships among the nodes by either performing random walks ( Perozzi et al. , 2014 ; Grover & Leskovec , 2016 ) over the network or designing a matrix capturing the first and high order node proximities ( Cao et al. , 2015 ; Ou et al. , 2016 ) . However , they require high computational and space costs because of the exact node sampling procedures or the expensive factorization of dense proximity matrices . The recent Graph Neural Networks ( GNNs ) ( Hamilton et al. , 2017b ; Zhang et al. , 2020 ; Wang et al. , 2016 ) methods provide effective tools in learning the node representations by leveraging the side information such as node attribute features ; nevertheless , they also face computational difficulties , especially for large-scale networks consisting of millions of nodes and edges . Although the recent studies aim to alleviate the computational burden of the algorithms through matrix sparsification tools ( Qiu et al. , 2019 ) or hierarchical representations ( Bhowmick et al. , 2020 ; Chen et al. , 2018 ) , the performance of the methods in the downstream tasks significantly drops , and they require larger embedding sizes to compensate for the loss . Latent Space Models ( LSMs ) for the representation of graphs have been quite popular over the past years , especially for social networks analysis ( Hoff et al. , 2002 ) . 
LSMs utilize the generalized linear model framework to obtain informative latent node embeddings while preserving network characteristics . The choice of latent effects in modeling the link probabilities between the nodes leads to different expressive capabilities characterizing network structure . We consider the Latent Distance Model ( LDM ) ( Hoff et al. , 2002 ) with the Euclidean norm , in which nodes are placed closer in the latent space if they are similar or vice-versa . LDM obeys the triangle inequality and thus naturally represents transitivity and network homophily . These methods are attractive due to their simplicity , as they define well-structured inference problems and are characterized by high explanatory power . The time and space complexities are their main drawbacks , which scale quadratically with the number of nodes in the graph . Many real-world networks can be expressed as hierarchical structures of different scales ( Ravasz & Barabási , 2003 ) . For this purpose , several hierarchical network modeling tools have been proposed , such as the extensions of the stochastic block model to binary and multifurcating hierarchical structures ( Clauset et al. , 2008 ; Roy et al. , 2007 ; Blundell et al. , 2012 ; Herlau et al. , 2012 ; 2013 ) as well as agglomerative ( Blondel et al. , 2008 ; Ahn et al. , 2010 ) and recursive partitioning procedures ( Li et al. , 2020 ) relying on various measures of similarity . Learning the node representations preserving the hierarchical structure of the network is also a very promising task , and it can facilitate the visualization and the understanding of the inner dynamics of the network . In this work , we propose the Scalable Hierarchical Latent Distance Model ( SH-LDM ) combining embedding and hierarchical representations for graph representation learning . 
Importantly , the hierarchical structure imposed in ( SH-LDM ) reduces the total time and space complexity of the LDM to linearithmic in terms of the number of nodes ( i.e. , O ( N logN ) ) at the same time providing accurate interpretable representation of structure at different scales . Using the SH-LDM we embed moderate sized and large-scale networks containing more than a million nodes and establish the performance of LDM in terms of link prediction and node classification to existing prominent scalable graph embedding approaches . We further highlight how the inferred hierarchical organization can facilitate accurate visualization of network structure even when using only D = 2 dimensional representations providing favorable performance in all the considered GRL tasks ; link-prediction , node classification , node clustering , and network reconstruction . In summary , our contributions are to reconcile embedding and hierarchical representations providing accurate linearithmic approximation of the full likelihood , efficient inference , enhanced visualization and network compression utilizing ultra-low embedding dimensions and hierarchical representations . 2 THE SCALABLE HIERARCHICAL-LATENT DISTANCE MODEL . We presently concentrate our study on the case of undirected networks , but we note that our approach generalizes to both directed and bipartite graphs as described in the supplementary material . Let G = ( V , E ) be a graph where N : = |V | is the number of nodes and YN×N = [ yi , j ] be the adjacency matrix of the graph such that yi , j = 1 if there is an edge between the nodes vi and vj and otherwise it is equal 0 for all 1 ≤ i < j ≤ N . A Latent Space Model ( LSM ) defines a RD-dimensional latent space in which every node of the graph is characterized through the unobserved but informative node-specific variables { zi ∈ RD } . 
These variables are considered sufficient to describe and explain the underlying relationships between the nodes of the network , such as transitivity and homophily . The probability of an occurring edge between an ordered pair of the graph is considered conditionally independent given the unobserved latent positions . Consequently , the total probability distribution of the network can be written as : P ( Y |Z , θ ) = N∏ i < j p ( yi , j |zi , zj ) , ( 1 ) A popular and convenient parameterization of equation 1 for binary data is through the logistic regression model ( Hoff et al. , 2002 ; Handcock et al. , 2007 ; Krivitsky et al. , 2009 ; Hoff , 2005 ) . In contrast , we adopt the Poisson regression model as proposed in Hoff ( 2005 ) under a generalized linear model framework for the LSM . The use of a Poisson likelihood for modelling binary relationships in a network does not decrease the predictive performance nor the ability of the model to detect the network structure , as shown in Wind & Mørup ( 2012 ) and also generalize the analysis to integer weighted graphs . In addition , the exchange of the logit to a log link function when transitioning from a Bernoulli to a Poisson model defines nice decoupling properties over the predictor variables in the likelihood ( Karrer & Newman , 2011 ; Herlau et al. , 2014 ) . Utilizing the Poisson Latent Distance Model ( LDM ) of the LSM family framework , the rate of an occurring edge depends on a distance metric between the latent positions of the two nodes . We consider the LDM with node-specific biases or random-effects ( Hoff , 2005 ; Krivitsky et al. , 2009 ) such that the expression for the Poisson rate becomes : λij = exp ( γi + γj − d ( zi , zj ) ) . ( 2 ) where γi denotes the node-specific random-effects and dij ( · , · ) denotes any distance metric obeying the triangle inequality { dij ≤ dik + dkj , ∀ ( i , j , k ) } . Considering variables z as the latent charac- teristics , Eq . 
Equation 2 shows that similar nodes will be placed closer in the latent space, yielding a high probability of an edge occurring; the model thus captures homophily, and satisfies network transitivity and reciprocity through the triangle inequality, whereas the node-specific biases can account for degree heterogeneity. The conventional LDM utilizing a global bias $\gamma_g$ corresponds to the special case in which $\gamma_i = \gamma_j = 0.5\gamma_g$. As in Hoff et al. (2002), we presently adopt the Euclidean distance as the choice for the distance metric $d(\cdot, \cdot)$.

2.1 SCALING THE LATENT DISTANCE MODEL

Optimizing the LDM requires the computation of the log-likelihood, which is defined as the sum over each ordered pair of the network:

$$\log P(Y \mid \lambda) = \sum_{i<j} \big( y_{ij} \log(\lambda_{ij}) - \lambda_{ij} \big) = \sum_{i<j:\, y_{ij}=1} \log(\lambda_{ij}) - \sum_{i<j} \lambda_{ij}, \qquad (3)$$

For brevity, we presently ignore the linear scaling of the above log-likelihood by the dimensionality $D$. Large networks are highly sparse (Barabási & Pósfai, 2016), with the number of edges of very sparse networks being proportional to the number of nodes. As a result, the computation of the link contribution $\sum_{y_{ij}=1} \log(\lambda_{ij})$ is relatively cheap, scaling linearithmically or sub-linearithmically (see also the supplementary material). This is not the case for the second term, which still requires the computation over all node pairs, scaling as $O(N^2)$ and making the evaluation of the above likelihood infeasible for large networks. To reduce the complexity, we propose to approximate the $O(N^2)$ non-link term using blocks, i.e., akin to stochastic block models (White et al., 1976; Holland et al.,
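The split in equation 3 can be sketched directly: the link term only touches observed edges, while the non-link term iterates over every pair, which is the $O(N^2)$ bottleneck the section describes. A minimal illustration on a hypothetical toy graph (not the authors' code):

```python
import numpy as np

def ldm_loglik(edges, z, gamma):
    """Log-likelihood of eq. (3): the link term touches only observed edges,
    while the non-link term sums the Poisson rate over ALL i<j pairs,
    which is the O(N^2) bottleneck discussed in the text."""
    link = sum(gamma[i] + gamma[j] - np.linalg.norm(z[i] - z[j])
               for i, j in edges)                 # sum of log(lambda_ij) over edges
    iu, ju = np.triu_indices(len(z), k=1)         # every pair i < j
    rates = np.exp(gamma[iu] + gamma[ju]
                   - np.linalg.norm(z[iu] - z[ju], axis=1))
    return link - rates.sum()

# hypothetical toy graph with three nodes and a single edge (0, 1)
z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
gamma = np.zeros(3)
ll = ldm_loglik([(0, 1)], z, gamma)
```

For a sparse network the first loop is linear in the edge count, but `np.triu_indices` materializes all $N(N-1)/2$ pairs, which is exactly what the block approximation of the next section avoids.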
1983; Nowicki & Snijders, 2001), in which, grouping the nodes into $K$ clusters, we define the rate between blocks $k$ and $k'$ in terms of the distance between their centroids:

$$\sum_{i<j} \lambda_{ij} \approx \sum_{k}^{K} \Bigg( \sum_{i,j \in C_k} e^{\gamma_i + \gamma_j - \|z_i - z_j\|_2} + \sum_{i \in C_k,\, j \notin C_k} e^{\gamma_i + \gamma_j - \|\mu_k - \mu_{k'}\|_2} \Bigg) = \Bigg( \sum_{k}^{K} \sum_{i,j \in C_k} e^{\gamma_i + \gamma_j - \|z_i - z_j\|_2} \Bigg) + \sum_{k > k'} e^{-\|\mu_k - \mu_{k'}\|_2} \Bigg( \sum_{i \in C_k} e^{\gamma_i} \Bigg) \Bigg( \sum_{j \in C_{k'}} e^{\gamma_j} \Bigg), \qquad (4)$$

where $\mu_k$ denotes the $k$'th cluster centroid of the set $C = \{C_1, \ldots, C_K\}$ and has absorbed the dependency on the variables $Z$. Overall, considering the main principle of the LDM that connected and homophilic nodes will be placed closer in the latent space, this approximation generalizes the principle by introducing a clustering procedure that obeys "cluster-homophily" and "cluster-transitivity" over the latent space. More specifically, we can assume that closely related nodes will be positioned in the same cluster, while related or interconnected clusters will also be positioned close to each other in the latent space, providing an accurate approximation scheme. Assuming equally sized clusters of $N/K$ nodes, the first part scales as $O(N^2/K)$ whereas the second part scales as $O(K^2)$. As such, there is an undesirable inherent trade-off: the first term is reduced by a factor of $K$ but the second term grows quadratically in $K$. Thus, by setting $K = N/\log(N)$ we reduce the first part to scale as $O(N \log N)$, but at the cost of the second term scaling as $O(N^2/\log(N)^2)$, which for large networks is still prohibitive.
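The block approximation of equation 4 can be sketched and checked against the exact non-link term. The sketch below uses hypothetical toy data (two tight, well-separated clusters, where the centroid approximation should be accurate); it illustrates the structure of the approximation, not the authors' inference code:

```python
import numpy as np

def nonlink_exact(z, gamma):
    """Exact non-link term: sum of lambda_ij over all i<j pairs, O(N^2)."""
    iu, ju = np.triu_indices(len(z), k=1)
    return np.exp(gamma[iu] + gamma[ju]
                  - np.linalg.norm(z[iu] - z[ju], axis=1)).sum()

def nonlink_blocked(z, gamma, labels):
    """Block approximation of eq. (4): within-cluster pairs computed exactly,
    across-cluster pairs approximated via the distance between centroids."""
    clusters = np.unique(labels)
    mu = np.stack([z[labels == c].mean(axis=0) for c in clusters])
    eg = [np.exp(gamma[labels == c]).sum() for c in clusters]
    total = 0.0
    for c in clusters:                             # exact within-cluster part, O(N^2/K)
        idx = np.where(labels == c)[0]
        if len(idx) > 1:
            iu, ju = np.triu_indices(len(idx), k=1)
            total += np.exp(gamma[idx[iu]] + gamma[idx[ju]]
                            - np.linalg.norm(z[idx[iu]] - z[idx[ju]], axis=1)).sum()
    for a in range(len(clusters)):                 # centroid part, O(K^2)
        for b in range(a):
            total += np.exp(-np.linalg.norm(mu[a] - mu[b])) * eg[a] * eg[b]
    return total

# hypothetical toy data: two tight clusters separated by distance ~5
rng = np.random.default_rng(1)
z = np.vstack([rng.normal(scale=0.05, size=(10, 2)),
               rng.normal(scale=0.05, size=(10, 2)) + 5.0])
gamma = np.zeros(20)
labels = np.array([0] * 10 + [1] * 10)
```

When the clusters are compact and far apart, as here, the cross-cluster rates are both small and well approximated by the centroid distance, so the blocked value is close to the exact one, which is the "cluster-homophily" intuition behind the approximation.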
The focus of the paper is on proposing a node embedding method that uses hierarchy to ensure scalability. The proposed method, the "Scalable Hierarchical Latent Distance Model" (SH-LDM), aims to reconcile embedding and hierarchical network representations. The method is based on the following components:
- __Foundational component:__ a Latent Distance Model. Latent Space Models are a special type of node embedding method that uses the generalized linear model framework to obtain latent node embeddings while preserving network characteristics. In the case of Latent Distance Models, this means that nodes are placed closer in the latent space if they are similar. In particular, here, __the edges are represented as sampled from a Poisson distribution__ with parameter $\lambda_{ij}$: $$ \lambda_{ij} = \exp\big(\gamma_i + \gamma_j - \| z_i - z_j\|_2\big) $$ Fitting this model --- which is not itself new --- in terms of the $z_i$ is the object of this paper.
- __Component 2: Hierarchy:__ To make the fitting of this model more scalable, the authors approximate the distance $d(z_i, z_j) = \| z_i - z_j\|$ by the distance between centroids $\| \mu_{C_i} - \mu_{C_j}\|, \quad \forall C_i \neq C_j$.
- __Component 3: Scalable division of the data into clusters:__ The clusters are not known in advance, so the authors use a multiresolution KD-tree to split the data into K clusters of equal sizes. The method further relies on an optimization procedure for k-means clustering with the Euclidean norm, utilizing the auxiliary function framework of Tsutsu & Morikawa, to perform optimization scalably. This allows the partition of the data into $K = \log(N)$ clusters.

This results in a reduction of the total time and space complexity of the LDM to $O(N \log N)$. The authors then proceed to validate their method using a set of experiments:
1. node classification
2. edge prediction
3. clustering and hierarchical structure recovered in the latent space

with reasonable performance on a subset of the datasets.
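Component 3 above mentions a KD-tree-style split into equal-sized clusters. A minimal illustrative sketch of such a partition (a recursive median split along the widest axis; this is a hypothetical simplification for intuition, not the authors' exact multiresolution procedure):

```python
import numpy as np

def equal_size_split(points, idx, depth):
    """Recursively split the index set at the median of the widest coordinate,
    yielding 2**depth equal-sized leaf clusters (a KD-tree-style partition)."""
    if depth == 0:
        return [idx]
    dim = np.argmax(points[idx].max(axis=0) - points[idx].min(axis=0))  # widest axis
    order = idx[np.argsort(points[idx, dim])]   # sort this subset along that axis
    half = len(order) // 2
    return (equal_size_split(points, order[:half], depth - 1)
            + equal_size_split(points, order[half:], depth - 1))

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 2))
leaves = equal_size_split(pts, np.arange(64), depth=3)  # 8 clusters of 8 points
```

Splitting at the median guarantees equal-sized leaves by construction, which is what makes the per-cluster cost analysis in the paper (clusters of $N/K$ nodes) go through.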
SP:5909c76cef387e0e626c8ea0dd165a7b32d896ac
Scalable Hierarchical Embeddings of Complex Networks
Graph representation learning has become important for understanding and predicting intrinsic structures in complex networks. A variety of embedding methods have been developed in recent years, including the Latent Distance Model (LDM) approach. A major challenge is scaling network embedding approaches to very large networks, and a drawback of the LDM is the computational cost of evaluating the full likelihood, which has $O(N^2)$ complexity, making the analysis of large networks infeasible. We propose a novel multiscale hierarchical estimate of the full likelihood of LDMs, providing high detail where the likelihood approximation is most important while scaling in complexity as $O(N \log N)$. The approach relies on a clustering procedure approximating the Euclidean norm of every node pair according to the imposed multiscale hierarchical structure. We demonstrate the accuracy of our approximation and, for the first time, embed very large networks on the order of a million nodes using the LDM, contrasting the predictive performance with prominent scalable graph embedding approaches. We find that our approach significantly outperforms these existing scalable approaches in the ability to perform link prediction, node clustering and classification, utilizing a surprisingly low embedding dimensionality of two to three dimensions, whereas the extracted hierarchical structure facilitates network visualization and interpretation. The developed scalable hierarchical embedding approach enables accurate low-dimensional representations of very large networks, providing detailed visualizations that can further our understanding of their properties and structure.

1 INTRODUCTION

Networks naturally arise in a plethora of scientific areas, from physics to sociology and biology, to model the interactions between entities, with many instances such as collaboration, protein-protein interaction, and brain connectivity networks (Newman, 2003).
In recent years, Graph Representation Learning (GRL) approaches have attracted great interest owing to their outstanding performance, compared to classical techniques, on challenging network analysis problems such as link prediction (Liben-Nowell & Kleinberg, 2003; Backstrom & Leskovec, 2011), node classification (Getoor & Taskar, 2007; Grover & Leskovec, 2016), and community detection (Fortunato, 2010). Many existing GRL methods (Hamilton et al., 2017b; Zhang et al., 2020) mainly aim to capture the underlying intrinsic relationships among the nodes by either performing random walks (Perozzi et al., 2014; Grover & Leskovec, 2016) over the network or designing a matrix capturing the first- and high-order node proximities (Cao et al., 2015; Ou et al., 2016). However, they incur high computational and space costs because of the exact node sampling procedures or the expensive factorization of dense proximity matrices. Recent Graph Neural Network (GNN) methods (Hamilton et al., 2017b; Zhang et al., 2020; Wang et al., 2016) provide effective tools for learning node representations by leveraging side information such as node attribute features; nevertheless, they also face computational difficulties, especially for large-scale networks consisting of millions of nodes and edges. Although recent studies aim to alleviate the computational burden of these algorithms through matrix sparsification tools (Qiu et al., 2019) or hierarchical representations (Bhowmick et al., 2020; Chen et al., 2018), the performance of the methods in downstream tasks drops significantly, and they require larger embedding sizes to compensate for the loss. Latent Space Models (LSMs) for the representation of graphs have been quite popular over the past years, especially for social network analysis (Hoff et al., 2002).
LSMs utilize the generalized linear model framework to obtain informative latent node embeddings while preserving network characteristics. The choice of latent effects in modeling the link probabilities between the nodes leads to different expressive capabilities for characterizing network structure. We consider the Latent Distance Model (LDM) (Hoff et al., 2002) with the Euclidean norm, in which nodes are placed closer in the latent space the more similar they are. The LDM obeys the triangle inequality and thus naturally represents transitivity and network homophily. These methods are attractive due to their simplicity, as they define well-structured inference problems and are characterized by high explanatory power. Their main drawback is their time and space complexity, which scales quadratically with the number of nodes in the graph. Many real-world networks can be expressed as hierarchical structures at different scales (Ravasz & Barabási, 2003). For this purpose, several hierarchical network modeling tools have been proposed, such as extensions of the stochastic block model to binary and multifurcating hierarchical structures (Clauset et al., 2008; Roy et al., 2007; Blundell et al., 2012; Herlau et al., 2012; 2013), as well as agglomerative (Blondel et al., 2008; Ahn et al., 2010) and recursive partitioning procedures (Li et al., 2020) relying on various measures of similarity. Learning node representations that preserve the hierarchical structure of the network is also a very promising task, as it can facilitate the visualization and understanding of the inner dynamics of the network. In this work, we propose the Scalable Hierarchical Latent Distance Model (SH-LDM), combining embedding and hierarchical representations for graph representation learning.
The paper studies the node-level representation learning problem. It proposes SH-LDM, which combines the embedding and hierarchical representations for scalable graph representation learning. The hierarchical structure in SH-LDM reduces the time and space complexity of the LDM to linearithmic in terms of the number of nodes. The proposed model works well on link prediction and node classification with low embedding dimensions.
SP:5909c76cef387e0e626c8ea0dd165a7b32d896ac
Empirical Study of the Decision Region and Robustness in Deep Neural Networks
1 INTRODUCTION

With the steep improvement in the performance of Deep Neural Networks (DNNs), their applications are expanding into the real world, such as autonomous driving and healthcare (LeCun et al., 2015; Miotto et al., 2018; Huang & Chen, 2020). For real-world application, it may be necessary to choose the best model among several candidates. Traditionally, the generalization performance, which measures the objective score on a test dataset excluded from the training phase, is used to evaluate models (Bishop, 2006). However, it is non-trivial to evaluate DNNs based on this single metric. For example, if two networks with the same structure have similar test accuracy, it is ambiguous which is better. Robustness against adversarial attacks, a measure of vulnerability, can be an alternative criterion for evaluating DNNs (Szegedy et al., 2013; Goodfellow et al., 2014; Gu & Rigazio, 2014; Huang et al., 2015; Jakubovitz & Giryes, 2018; Yuan et al., 2019; Zhong et al., 2021). Adversarial attacks aim to induce model misprediction by perturbing the input with a perturbation of small magnitude. Most previous works focused on ways to find adversarial samples by utilizing model properties such as gradients with respect to the loss function. Given that an adversarial attack seeks to find a perturbation path on the model prediction surface over the input space, robustness can be expressed in terms of the geometry of the model. However, few studies have interpreted robustness through the geometric properties of DNNs. From a geometric viewpoint, the internal properties of DNNs are represented by boundaries and regions (Baughman & Liu, 2014). It has been shown that DNNs with piece-wise linear activation layers are composed of many linear regions, and that the maximal number of these regions is mathematically related to the expressivity of DNNs (Montúfar et al., 2014; Xiong et al., 2020).
As these approaches only provide an upper bound on the expressivity of models with the same structure, they do not explain how much information a model actually expresses. In this work, we investigate the relationship between the internal properties of DNNs and robustness. In particular, our approach analyzes the internal characteristics from the perspective of the decision boundary (DB) and the decision region (DR), which are basic components of DNNs (Fawzi et al., 2017). Since the maximal number of linear regions is insensitive to differences among models sharing the same structure, we propose the novel concept of the Populated Region Set (PRS), the set of DRs containing at least one sample from the training dataset. Since the PRS can be considered the feasible complexity of the model, we hypothesize that the size of the PRS is related to the robustness of the network. To validate our hypothesis, we perform systematic experiments with various structures of DNNs and datasets. Our observations are summarized as follows: • Models with the same structure can have different sizes of PRS, even though they have similar generalization performance. In experiments, we observe that this difference leads to different robustness of the network. • We empirically show that the size of the PRS is related to robustness against adversarial attacks. A model with a small PRS tends to show higher robustness than one with a large PRS (Section 4.1). We further observe that when the model achieves a low PRS ratio, the linear classifier that maps the penultimate features to the logits has high cosine similarity between the parameters corresponding to each class (Section 4.2). • We verify that the size of the intersection of the PRS from the training/test datasets is related to the robustness of the model.
A model with a high PRS inclusion ratio of test samples has higher robustness than one with a low PRS inclusion ratio (Section 5). • We identify that a model with a small PRS learns sparse feature representations. Quantitatively, we observe an inverse correlation between the size of the PRS and the sparsity of the feature representation (Section 6).

2 RELATED WORK

Adversarial robustness. For the real-world application of DNNs, adversarial attacks, which reveal the vulnerability of DNNs (Goodfellow et al., 2014), are mainly used to validate the reliability of a trained network. As early approaches, the fast gradient sign method (FGSM) (Goodfellow et al., 2014), based on the gradient with respect to the loss function, and the multi-step iterative method (Kurakin et al., 2016) were proposed to create adversarial examples that change the model prediction with a small perturbation. Recently, many studies on effective attacks in various settings (e.g., white-box or black-box) have been performed to understand the undesirable decisions of networks (Shaham et al., 2018; Madry et al., 2018; Chen et al., 2020). In terms of factors affecting robustness, Yao et al. (2018) provide evidence that training with a large batch size can degrade the robustness of the model against adversarial attacks, from the perspective of the Hessian spectrum. In contrast, Kamath et al. (2019) propose that a model trained with a constant ratio between the learning rate and batch size does not lose robustness even with a large batch size, as it converges to flatter minima. Geometric analysis inside Deep Neural Networks. With increasing interest in the expressive power of DNNs, there have been several attempts to analyze DNNs from a geometric perspective (Dauphin et al., 2014; Choromanska et al., 2015).
In these studies, the characteristics of the decision boundaries or regions formed by DNNs are mainly discussed. Montúfar et al. (2014) show that the cascade of linear layers and nonlinear activations organizes numerous piece-wise linear regions. They show that the complexity of the decision boundary is related to the maximal number of these linear regions, which is determined by the depth and width of the model. Xiong et al. (2020) extend the notion of the linear region to convolutional layers and show the better geometric efficiency of convolutional layers. Fawzi et al. (2018) reveal that classification regions in DNNs are topologically connected and that the decision boundary of natural images is flat in most directions. It has also been shown that the manifolds learned by DNNs and the distributions over them are highly related to the representation capability of a network (Lei et al., 2018). While these studies highlight the benefits of the increasing expressivity of DNNs as the number of regions grows, interpreting the vulnerability of DNNs through geometry is another important topic. Yang et al. (2020) show that a model with thick decision boundaries induces robustness. Moosavi-Dezfooli et al. (2019) show that a decision boundary with small curvature yields high model robustness. These approaches focus on the decision boundaries, while this paper focuses on the decision regions, which are enclosed by the surrounding decision boundaries.

3 INTERNAL PROPERTY OF DNNS

This section describes the internal properties of DNNs from the perspective of decision boundaries (DBs) and decision regions (DRs). The DBs of a DNN classifier are mainly defined as the borderlines between DRs for classification, where the prediction probabilities of class i and a neighboring class j are equal (Fawzi et al., 2018).
To extend the notion of DBs and DRs to the internal feature level, we redefine the DBs of the classifier in a way that generalizes the existing definition. We then propose the novel concept of the Populated Region (PR), which describes the specific DRs the network uses for training samples. The PR is used to analyze the relationship between the trained parameters and the characteristics of networks.

3.1 DECISION BOUNDARY AND REGION

Let the classifier with $L$ layers be $F(x) = f_L(\sigma(f_{L-1}(\sigma(\cdots \sigma(f_1(x)))))) = f_{L:1}(x)$, where $x$ is a sample in the input space $X \subset \mathbb{R}^{D_x}$ and $\sigma(\cdot)$ denotes the non-linear activation function (although there are various activation functions, we only consider the ReLU activation in this paper). For the $l$-th layer, $f_l(\cdot)$ denotes the linear operation and $f^i_{l:1}(\cdot)$ denotes the value of the $i$-th element of the feature vector $f_{l:1}(x) \in \mathbb{R}^{D_l}$. We define the DB for the $i$-th neuron of the $l$-th layer.

Definition 1 (Decision Boundary (DB)). The $i$-th decision boundary at the $l$-th layer is defined as $B^i_l = \{x \mid f^i_{l:1}(x) = 0, \ \forall x \in X\}$.

We note that the internal DB $B^i_l$ ($l < L$) divides the input space $X$ based on the hidden representation of the $l$-th layer (i.e., the existence of a feature and the amount of feature activation). There are a total of $D_l$ boundaries, and the configuration of the DBs is arranged by training. As input samples in the same classification region are considered to belong to the same class, input samples placed on the same side of the internal DB $B^i_l$ share a similar feature representation. In this sense, we define the internal DR, which is surrounded by internal DBs.

Definition 2 (Decision Region (DR)). Let $V_l \in \{-1, +1\}^{D_l}$ be the indicator vector choosing the positive or negative side of the decision boundaries of the $l$-th layer. Then the decision region $DR_{V_l}$, which shares the sign of the feature representation, is defined as $DR_{V_l} = \{x \mid \mathrm{sign}(f_{l:1}(x)) = V_l, \ \forall x \in X\}$.
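To make Definition 2 concrete, the indicator vector $V_l$ can be read off directly as the sign pattern of a ReLU network's layer-$l$ pre-activations. A minimal sketch using a hypothetical randomly initialized two-layer network (not one of the paper's architectures):

```python
import numpy as np

def region_signature(x, weights, biases):
    """Indicator vector V_l of Definition 2: the sign pattern of the
    pre-activations f_{l:1}(x) at the last listed layer of a ReLU network."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)          # ReLU hidden layers
    pre = weights[-1] @ h + biases[-1]          # pre-activation at layer l
    return np.where(pre >= 0, 1, -1)

# hypothetical toy network: 3 inputs -> 5 hidden units -> 4 units at layer l
rng = np.random.default_rng(0)
weights = [rng.normal(size=(5, 3)), rng.normal(size=(4, 5))]
biases = [rng.normal(size=5), rng.normal(size=4)]
v = region_signature(rng.normal(size=3), weights, biases)
# two inputs lie in the same decision region iff their signatures are identical
```

Comparing signatures of two inputs tells you whether they fall in the same internal DR, which is the operation underlying the PRS computation in the next subsection.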
Figure 1 presents the internal properties of two networks trained on CIFAR-10 with similar test accuracy. The right column in Figure 1 depicts the internal DBs and DRs in the network with a high/low PRS ratio (top and bottom). We randomly select two test images (blue and green box) and generate adversarial images for the blue box (orange and purple box) in each network, respectively. We construct a hyperplane through these images to visualize the DBs and DRs in 2D space. We observe that the configurations of DBs and DRs differ, although the two networks have the same structure and similar test accuracy.

3.2 POPULATED REGION SET

It is well studied that the number of DRs is related to the representation power of DNNs (Montúfar et al., 2014; Xiong et al., 2020). In particular, the expressivity of DNNs with piece-wise linear activation functions is quantified by the maximal number of linear regions, and this number is related to the width and depth of the structure. We believe that although the maximal number can be one measure of expressivity, a trained DNN with finite training data cannot use the entire set of regions to solve the task. To consider only the DRs that the network uses in the training process, we devise the train-related regions where training samples are populated. We define the Populated Region Set (PRS), the set of DRs containing at least one sample from the training dataset. The PRS will be used to analyze the relationship between the geometric properties and the robustness of DNNs in a practical aspect.

Definition 3 (Populated Region Set (PRS)). From the set of all DRs of the model $f$ and given the dataset $X$, the Populated Region Set is defined as $PRS(X, f, l) = \{DR_{V_l} \mid \exists x \in X : x \in DR_{V_l}, \ V_l \in \{-1, 1\}^{D_l}\}$. We can then define a Populated Region as the union of the decision regions in the PRS: $PR(X, f, l) = \cup_{DR \in PRS(X, f, l)} DR$.
We note that the size of the PRS is bounded by the size of the given dataset $X$ (in general, the number of training samples is smaller than the maximal number of linear regions). When $|PRS(X, f, l)| = |X|$, each sample in the training dataset is assigned to a distinct DR in the $l$-th layer. To compare the PRS of networks, we define the PRS ratio, $\frac{|PRS(f, X, l)|}{|X|}$, which measures the ratio between the size of the PRS and that of the given dataset. Figure 2 presents a comparison between two equivalent neural networks (A and B) with six convolution blocks (CNN-6) trained on CIFAR-10, varying only the batch size (2048/128). Figure 2(a) presents the PRS ratio over the depth of layers in each model at the 300th epoch. We observe that only the penultimate layer ($l = 8$) shows a different PRS ratio. Figure 2(b) shows that the two networks have largely different PRS ratios despite similar training/test accuracy. From the above observation, and the fact that penultimate layers are widely used for feature extraction, we only consider the PRS ratio at the penultimate layer in the remainder of the paper. Experimental setups. For the systematic experiments, we select three different DNN structures to analyze: (1) a convolutional neural network with six convolution blocks (CNN-6); (2) VGG-16 (Simonyan & Zisserman, 2014); and (3) ResNet-18 (He et al., 2016). We train basic models with five fixed random seeds and four batch sizes (64, 128, 512 and 2048) over three datasets: MNIST (LeCun & Cortes, 2010), F-MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky et al., 2009). For an extensive analysis of the correlation between the PRS ratio and network properties, we extract candidates from each basic model over a grid of epochs. We then apply a test accuracy threshold to guarantee sufficient performance. Finally, we obtain 947 models for analysis.
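The PRS ratio defined above reduces to counting distinct sign patterns among the samples' layer-$l$ signatures. A minimal sketch with hypothetical toy signatures (an illustration of the definition, not the paper's experimental pipeline):

```python
import numpy as np

def prs_ratio(signatures):
    """|PRS(X, f, l)| / |X|: the fraction of distinct decision regions
    (sign patterns) occupied by the samples at layer l."""
    occupied = {tuple(s) for s in signatures}   # each distinct pattern = one populated DR
    return len(occupied) / len(signatures)

# hypothetical toy signatures: 4 samples falling into only 2 distinct regions
sigs = np.array([[1, -1], [1, -1], [-1, 1], [-1, 1]])
ratio = prs_ratio(sigs)  # 2 occupied regions / 4 samples = 0.5
```

A ratio of 1 means every training sample occupies its own region; the paper's observation is that models with a lower ratio at the penultimate layer tend to be more robust.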
The details for the network architecture and the selection procedure are described in Appendix A-C .
The paper proposes a new metric, the size of the populated region set (PRS), as an explanation for why models with similar clean accuracies reach very different accuracies under adversarial attacks. PRS is the set of decision regions that contain training examples. After introducing and defining populated regions, the authors show that the PRS size in the penultimate layer is inversely correlated with robust accuracy. This is shown to hold across MNIST, F-MNIST and CIFAR-10 and for 3 different model architectures (simple CNN, VGG and ResNet). They use two example networks with high and low PRS to show how PRS evolves during training and that the higher resistance to adversarial attacks might stem from the neurons in the last layer having similar decision boundaries in input space. Test samples that also fall into the PRS are shown to be more robust, and models with small PRS have more test samples in the PRS than models with large PRS. Additionally, it is shown that models with small PRS produce sparser features.
SP:3b16daad8d1675a80905dd147cade67558c50fd2
Empirical Study of the Decision Region and Robustness in Deep Neural Networks
1 INTRODUCTION . With the steep improvement in the performance of Deep Neural Networks ( DNNs ) , their applications are expanding to the real world , such as autonomous driving and healthcare ( LeCun et al. , 2015 ; Miotto et al. , 2018 ; Huang & Chen , 2020 ) . For real-world applications , it may be necessary to choose the best model among several candidates . Traditionally , the generalization performance , which measures the objective score on a test dataset excluded from the training phase , is used to evaluate models ( Bishop , 2006 ) . However , it is non-trivial to evaluate DNNs based on this single metric . For example , if two networks with the same structure have similar test accuracy , it is ambiguous which is better . Robustness against adversarial attacks , a measure of vulnerability , can be an alternative way to evaluate DNNs ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ; Gu & Rigazio , 2014 ; Huang et al. , 2015 ; Jakubovitz & Giryes , 2018 ; Yuan et al. , 2019 ; Zhong et al. , 2021 ) . Adversarial attacks aim to induce model misprediction by perturbing the input with a small magnitude . Most previous works focused on ways to find adversarial samples by utilizing model properties such as gradients with respect to the loss function . Given that an adversarial attack seeks to find a perturbation path on the model prediction surface over the input space , robustness can be expressed in terms of the geometry of the model . However , few studies have interpreted robustness through the geometric properties of DNNs . From a geometric viewpoint , the internal properties of DNNs are represented by boundaries and regions ( Baughman & Liu , 2014 ) . It has been shown that DNNs with piece-wise linear activation layers are composed of many linear regions , and the maximal number of these regions is mathematically related to the expressivity of DNNs ( Montúfar et al. , 2014 ; Xiong et al. , 2020 ) .
As these approaches only provide an upper bound on the expressivity of models with the same structure , they do not explain how much information a model actually expresses . In this work , we investigate the relationship between the internal properties of DNNs and their robustness . In particular , our approach analyzes the internal characteristics from the perspective of the decision boundary ( DB ) and the decision region ( DR ) , which are basic components of DNNs ( Fawzi et al. , 2017 ) . Since the maximal number of linear regions is insensitive to training for models with the same structure , we propose the novel concept of the Populated Region Set ( PRS ) , which is the set of DRs containing at least one sample from the training dataset . Since the PRS can be considered as the feasible complexity of the model , we hypothesize that the size of the PRS is related to the robustness of the network . To validate our hypothesis , we perform systematic experiments with various DNN structures and datasets . Our observations are summarized as follows : • Models with the same structure can have different sizes of PRS , although they have similar generalization performance . In experiments , we observe that this difference leads to different robustness of the network . • We empirically show that the size of the PRS is related to robustness against adversarial attacks . A model with a small PRS tends to show higher robustness than one with a large PRS ( in Section 4.1 ) . We further observe that when the model achieves a low PRS ratio , the linear classifier that maps the penultimate features to the logits has high cosine similarity between the parameters corresponding to each class ( in Section 4.2 ) . • We verify that the size of the intersection of the PRS from the training/test datasets is related to the robustness of the model .
The model with a high PRS inclusion ratio of test samples has higher robustness than that with a low PRS inclusion ratio ( in Section 5 ) . • We identify that the model with a small PRS learns a sparse feature representation . Quantitatively , we observe an inverse correlation between the size of the PRS and the sparsity of the feature representation ( in Section 6 ) . 2 RELATED WORK . Adversarial robustness For the real-world application of DNNs , adversarial attacks , which reveal the vulnerability of DNNs ( Goodfellow et al. , 2014 ) , are mainly used to validate the reliability of a trained network . Early adversarial attacks include the fast gradient sign method ( FGSM ) ( Goodfellow et al. , 2014 ) , based on the gradient with respect to the loss function , and the multi-step iterative method ( Kurakin et al. , 2016 ) , which create adversarial examples that change the model prediction with a small perturbation . Recently , many studies on effective attacks in various settings ( e.g. , white-box or black-box ) have been performed to understand the undesirable decisions of networks ( Shaham et al. , 2018 ; Madry et al. , 2018 ; Chen et al. , 2020 ) . In terms of factors affecting robustness , Yao et al . ( 2018 ) provide evidence that training with a large batch size can degrade the robustness of the model against adversarial attacks , from the perspective of the Hessian spectrum . In contrast , Kamath et al . ( 2019 ) propose that a model trained with a constant ratio between the learning rate and batch size does not lose robustness even with a large batch size , as it converges to flatter minima . Geometric Analysis inside Deep Neural Networks With increasing interest in the expressive power of DNNs , there have been several attempts to analyze DNNs from a geometric perspective ( Dauphin et al. , 2014 ; Choromanska et al. , 2015 ) .
In these studies , the characteristics of the decision boundaries or regions formed by DNNs are mainly discussed . Montúfar et al . ( 2014 ) show that the cascade of linear layers and nonlinear activations organizes numerous piece-wise linear regions . They show that the complexity of the decision boundary is related to the maximal number of these linear regions , which is determined by the depth and the width of the model . Xiong et al . ( 2020 ) extend the notion of the linear region to convolutional layers and show the better geometric efficiency of convolutional layers . Fawzi et al . ( 2018 ) reveal that classification regions in DNNs are topologically connected and that the decision boundary around natural images is flat in most directions . It has also been shown that the manifolds learned by DNNs and the distributions over them are highly related to the representation capability of a network ( Lei et al. , 2018 ) . While these studies highlight the benefits of the increasing expressivity of DNNs as the number of regions grows , interpreting the vulnerability of DNNs through geometry is another important topic . Yang et al . ( 2020 ) show that a model with thick decision boundaries induces robustness . Moosavi-Dezfooli et al . ( 2019 ) show that a decision boundary with small curvature yields high model robustness . These approaches focus on the decision boundaries , while this paper suggests focusing on the decision regions , which are composed of the surrounding decision boundaries . 3 INTERNAL PROPERTY OF DNNS . This section describes the internal properties of DNNs from the perspective of decision boundaries ( DBs ) and regions ( DRs ) . The DBs of a DNN classifier are typically defined as the borderlines between DRs for classification , where the prediction probabilities of class i and the neighboring class j are the same ( Fawzi et al. , 2018 ) .
To expand the notion of DBs and DRs to the internal feature level , we redefine the DBs of the classifier in a way that generalizes the existing definition of DBs . We then propose the novel concept of the Populated Region ( PR ) , which describes the specific DRs used by the network for training samples . The PR is used to analyze the relationship between the trained parameters and the characteristics of networks . 3.1 DECISION BOUNDARY AND REGION . Let the classifier with L layers be F ( x ) = fL ( σ ( fL−1 ( · · · σ ( f1 ( x ) ) · · · ) ) ) = fL:1 ( x ) , where x is a sample in the input space X ⊂ RDx and σ ( · ) denotes the non-linear activation function ( although there are various activation functions , we only consider the ReLU activation in this paper ) . For the l-th layer , fl ( · ) denotes the linear operation and f il:1 ( · ) denotes the value of the i-th element of the feature vector fl:1 ( x ) ∈ RDl . We define the DB for the i-th neuron of the l-th layer . Definition 1 ( Decision Boundary ( DB ) ) The i-th decision boundary at the l-th layer is defined as Bil = { x | f il:1 ( x ) = 0 , x ∈ X } . We note that the internal DB Bil ( l < L ) divides the input space X based on the hidden representation of the l-th layer ( i.e. , the existence of a feature and the amount of feature activation ) . There are a total of Dl boundaries , and the configuration of the DBs is arranged by training . As input samples in the same classification region are considered to belong to the same class , input samples placed on the same side of the internal DB Bil share a similar feature representation . In this sense , we define the internal DR , which is surrounded by internal DBs . Definition 2 ( Decision Region ( DR ) ) Let Vl ∈ { −1 , +1 } Dl be the indicator vector that chooses the positive or negative side of the decision boundaries of the l-th layer . Then the decision region DRVl , which shares the sign of the feature representation , is defined as DRVl = { x | sign ( fl:1 ( x ) ) = Vl , x ∈ X } .
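Definitions 1 and 2 can be illustrated with a small sketch. This is not the authors' code: the toy two-layer ReLU network below uses assumed random weights, and simply reads off the sign pattern Vl of the pre-activation features fl:1 ( x ) , which identifies the decision region DRVl that an input falls into.

```python
import numpy as np

# Toy two-layer ReLU network with assumed random weights.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)  # layer 1, D_1 = 4
W2, b2 = rng.standard_normal((5, 4)), rng.standard_normal(5)  # layer 2, D_2 = 5

def region_indicator(x, layer):
    """Return V_l in {-1, +1}^{D_l}: the side of each internal decision
    boundary B_l^i on which the input x lies."""
    z1 = W1 @ x + b1                      # f_1(x)
    if layer == 1:
        return np.where(z1 >= 0, 1, -1)
    z2 = W2 @ np.maximum(z1, 0) + b2      # f_2(sigma(f_1(x)))
    return np.where(z2 >= 0, 1, -1)

x = rng.standard_normal(3)
print(region_indicator(x, layer=1))  # inputs sharing this pattern lie in the same DR
print(region_indicator(x, layer=2))
```

Two inputs are in the same internal DR of layer l exactly when their indicator vectors are equal, which is the property the PRS construction in the next subsection counts over the training set.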
Figure 1 presents the internal properties of two networks trained on CIFAR-10 with similar test accuracy . The right column in Figure 1 depicts the internal DBs and DRs in the network with a high/low PRS ratio ( top and bottom ) . We randomly select two test images ( blue and green box ) and generate adversarial images for the blue box ( orange and purple box ) in each network , respectively . We make a hyperplane with these images to visualize the DBs and DRs in the 2D space . We identify that the configuration of DBs and DRs appears to be different , although the two networks have the same structure and similar test accuracy . 3.2 POPULATED REGION SET . It is well studied that the number of DRs is related to the representation power of DNNs ( Montúfar et al. , 2014 ; Xiong et al. , 2020 ) . In particular , the expressivity of DNNs with piece-wise linear activation functions is quantified by the maximal number of linear regions , and this number is related to the width and depth of the structure . We believe that although this maximal number can be one measure of expressivity , a DNN trained on finite data ( in general , the number of training samples is smaller than the maximal number of linear regions ) cannot use all of these regions to solve the task . To consider only the DRs that the network uses in the training process , we devise the train-related regions where training samples are populated . We define the Populated Region Set ( PRS ) , which is the set of DRs containing at least one sample from the training dataset . The PRS will be used to analyze the relationship between this geometrical property and the robustness of DNNs from a practical viewpoint . Definition 3 ( Populated Region Set ( PRS ) ) Given the set of all DRs of the model f and the dataset X , the Populated Region Set is defined as PRS ( X , f , l ) = { DRVl | ∃ x ∈ X such that x ∈ DRVl , Vl ∈ { −1 , +1 } Dl } . We can then define a Populated Region as the union of the decision regions in the PRS : PR ( X , f , l ) = ∪DR∈PRS ( X , f , l ) DR .
We note that the size of the PRS is bounded by the size of the given dataset X . When |PRS ( X , f , l ) | = |X| , each sample in the training dataset is assigned to a distinct DR in the l-th layer . To compare the PRS of networks , we define the PRS ratio , |PRS ( X , f , l ) | / |X| , which measures the ratio between the size of the PRS and the size of the given dataset . Figure 2 presents a comparison between two equivalent neural networks ( A and B ) with six convolution blocks ( CNN-6 ) trained on CIFAR-10 , varying only the batch size ( 2048/128 ) . Figure 2 ( a ) presents the PRS ratio over the depth of layers in each model at the 300th epoch . We observe that only the penultimate layer ( l = 8 ) shows a different PRS ratio . Figure 2 ( b ) shows that the two networks have largely different PRS ratios with similar training/test accuracy . From the above observation , and the fact that penultimate layers are widely used for feature extraction , we only consider the PRS ratio of the penultimate layer in the remainder of the paper . Experimental setups For the systematic experiments , we select three different structures of DNNs to analyze : ( 1 ) a convolutional neural network with six convolution blocks ( CNN-6 ) ; ( 2 ) VGG-16 ( Simonyan & Zisserman , 2014 ) ; and ( 3 ) ResNet-18 ( He et al. , 2016 ) . We train basic models with five fixed random seeds and four batch sizes ( 64 , 128 , 512 and 2048 ) over three datasets : MNIST ( LeCun & Cortes , 2010 ) , F-MNIST ( Xiao et al. , 2017 ) , and CIFAR-10 ( Krizhevsky et al. , 2009 ) . For the extensive analysis of the correlation between the PRS ratio and properties of the network , we extract candidates from each basic model over a grid of epochs . Then we apply a test accuracy threshold to guarantee sufficient performance . Finally , we obtain 947 models for analysis .
The details for the network architecture and the selection procedure are described in Appendix A-C .
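The PRS ratio described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the linear layer below is an assumed stand-in for a trained network's penultimate layer, and the ratio is simply the number of distinct activation sign patterns over the dataset, divided by the dataset size.

```python
import numpy as np

# Assumed stand-in for a trained penultimate layer (D_l = 8, input dim 6).
rng = np.random.default_rng(1)
W, b = rng.standard_normal((8, 6)), rng.standard_normal(8)

def prs_ratio(X):
    """|PRS(X, f, l)| / |X| with l fixed to the penultimate layer:
    count distinct sign patterns of f_{l:1}(x) over the dataset X."""
    Z = X @ W.T + b                                     # features for all x in X
    patterns = {tuple(np.where(z >= 0, 1, -1)) for z in Z}
    return len(patterns) / len(X)

X = rng.standard_normal((100, 6))
print(prs_ratio(X))  # 1.0 would mean every sample occupies its own region
```

A ratio near 1 means almost every training sample sits in its own decision region, while identical inputs always collapse to a single region, driving the ratio toward 1 / |X|.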
This work empirically studies the relationship between (1) the robustness of deep networks and (2) their decision surface. A novel metric is proposed, the Populated Region Set (PRS): essentially the number of regions in decision space that contain at least one training sample. The authors claim the metric has a "strong relationship" to robustness, as measured by correlation, and present a number of experiments to support their claim.
This paper aims to understand the robustness of DNNs from the perspective of decision regions. Towards that, the authors introduce a new metric, the so-called Populated Region Set (PRS), whose ratio is later used to investigate the robustness of a selection of DNNs empirically. Based on the respective empirical evidence, the paper claims that a lower PRS ratio (roughly, #decision regions with at least one training data point / size of training data) leads to enhanced robustness, better representation of test instances in the populated regions, and sparser feature representations. The empirical evidence is collected using the models CNN, ResNet-18, and VGG-16 over the datasets MNIST, F-MNIST, and CIFAR-10, under perturbations with various $\epsilon$ and untargeted/targeted attacks.
CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models
1 INTRODUCTION . Grounding natural language in fine-grained image regions is essential for a broad variety of vision-language tasks , such as robotic navigation ( Tellex et al. , 2011 ; Anderson et al. , 2018b ) , visual question answering ( Antol et al. , 2015 ; Anderson et al. , 2018a ) , visual dialogue ( Das et al. , 2017 ) , and visual commonsense reasoning ( Zellers et al. , 2019 ) . Recently , Pre-Trained Vision-Language Models ( VL-PTMs ) have shown promising capabilities in visual grounding . Typically , generic cross-modal representations are first pre-trained on large-scale image-caption data in a self-supervised fashion , and then fine-tuned to adapt to downstream tasks ( Lu et al. , 2019 ; Su et al. , 2019 ; Li et al. , 2020 ; Radford et al. , 2021 ) . This pre-training-then-fine-tuning paradigm of VL-PTMs has greatly pushed forward the state-of-the-art of many cross-modal tasks . Despite the success , we note that there exists a significant gap between the objective forms of pre-training and fine-tuning of VL-PTMs . As illustrated in Figure 1 , during pre-training , most VL-PTMs are optimized based on the masked language modeling objective , trying to recover the masked token from the cross-modal context . However , during fine-tuning , downstream tasks are usually conducted by classifying unmasked token representations into semantic labels , where task-specific parameters are typically introduced . The gap hinders the effective adaptation of VL-PTMs to downstream tasks . As a result , a large amount of labeled data is typically required to stimulate the visual grounding capabilities of VL-PTMs for downstream tasks . In this work , inspired by recent progress in pre-trained language models in natural language processing ( Brown et al. , 2020 ; Schick & Schütze , 2021a ; Liu et al. , 2021 ) , we present Cross-modal Prompt Tuning ( CPT , alternatively , Colorful Prompt Tuning ) , a novel paradigm for tuning VL-PTMs .
The key insight is that by adding color-based co-referential markers to both image and text, visual grounding can be reformulated as a fill-in-the-blank problem, maximally mitigating the gap between pre-training and fine-tuning. As shown in Figure 1, to ground natural language expressions in image data, CPT consists of two components: (1) a visual sub-prompt that uniquely marks image regions with colored blocks or segmentation masks, and (2) a textual sub-prompt that puts the query text into a color-based query template. Explicit grounding to the target image region is then achieved by recovering the corresponding color text from the masked token in the query template. In addition, we present a principled method to search for high-quality cross-modal prompt configurations (i.e., the visual appearances and texts of colors) for CPT. By mitigating the gap from pre-training, CPT enables strong few-shot and even zero-shot visual grounding with VL-PTMs. Experimental results show that prompt-tuned VL-PTMs outperform their fine-tuned counterparts by a large margin. For example, using colored blocks as visual sub-prompts, CPT achieves a 17.3% absolute accuracy improvement and a 73.8% relative standard deviation reduction on average with one shot in RefCOCO evaluation. In the same setting, when equipped with colored segmentation masks as visual sub-prompts, CPT further achieves a 20.0% absolute accuracy improvement and a 76.2% relative standard deviation reduction compared with the vanilla fine-tuning approach. Our contributions are threefold: (1) We present a novel cross-modal prompt tuning paradigm for VL-PTMs. To the best of our knowledge, this is the first attempt at both cross-modal prompt tuning for VL-PTMs and zero- and few-shot visual grounding independent of object types. (2) We present a principled approach to search for high-quality cross-modal prompt configurations for CPT.
(3) We conduct comprehensive experiments which demonstrate the effectiveness of CPT. 2 PRELIMINARY. In the literature, visual grounding is typically formulated as a referring expression comprehension (REC) problem (Plummer et al., 2015; Mao et al., 2016). Given an image I and a query text of a referring expression q, REC aims to locate the target region in I that corresponds to q. In this section, we introduce the vanilla fine-tuning approach for VL-PTMs. A common practice for REC is to first detect a set of region proposals {v_1, v_2, ..., v_n} via object detectors, and then classify or rank the proposals to select the target region (Lu et al., 2019; Chen et al., 2020). Specifically, visual and textual inputs are first transformed into a sequence of input tokens {[IMG], v_1, v_2, ..., v_n, [CLS], w_1, w_2, ..., w_m, [SEP]}, where {w_1, w_2, ..., w_m} are the textual tokens of q, and [IMG], [CLS] and [SEP] are special tokens. To obtain input representations, the features of image regions are extracted by visual encoders, and the embeddings of textual and special tokens are obtained from a lookup table. The input representations are then fed into the pre-trained transformers to produce the hidden representations {h_[IMG], h_v^1, h_v^2, ..., h_v^n, h_[CLS], h_w^1, h_w^2, ..., h_w^m, h_[SEP]}. Finally, the hidden representation of the target region is optimized against negative ones via a classification or ranking loss, where new task-specific parameters are introduced. As a result, fine-tuned VL-PTMs need a large amount of labeled instances to stimulate the visual grounding capability. 3 CROSS-MODAL PROMPT TUNING (CPT). In this section, we introduce the framework of CPT, and how to apply CPT to zero-shot, few-shot and fully supervised visual grounding. 3.1 OVERVIEW. The key to visual grounding is to establish fine-grained connections between image regions and textual expressions.
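The vanilla input construction described above can be sketched as a small helper; `build_rec_input` is a hypothetical name, and the region "features" and word tokens stand in for the encoder outputs and tokenizer results a real VL-PTM pipeline would produce:

```python
def build_rec_input(region_feats, query_tokens):
    """Assemble the cross-modal input sequence used in vanilla fine-tuning:
    [IMG] v_1 ... v_n [CLS] w_1 ... w_m [SEP].

    region_feats: per-region items (placeholders for visual-encoder features)
    query_tokens: textual tokens of the query q
    """
    return ["[IMG]", *region_feats, "[CLS]", *query_tokens, "[SEP]"]
```

For example, `build_rec_input(["v1", "v2"], ["the", "horse"])` yields the sequence with the special tokens interleaved as in the paper's notation.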
Therefore, a good cross-modal prompt tuning framework should take full advantage of co-referential signals from both image and text, and maximally mitigate the gap between pre-training and tuning. To this end, CPT reformulates visual grounding as a fill-in-the-blank problem, as shown in Figure 1. Specifically, the CPT framework consists of two components: (1) a visual sub-prompt that uniquely marks the image regions with colored blocks or segmentation masks, and (2) a textual sub-prompt that puts the query text into a color-based query template. Equipped with CPT, it is then straightforward for VL-PTMs to ground the query text by filling the masked token with the color text of the target image region, where the objective form is identical to pre-training. 3.2 VISUAL SUB-PROMPT. Given an image I and its region proposals R = {v_1, v_2, ..., v_n}, the visual sub-prompt aims to uniquely mark the image regions with natural visual markers. Interestingly, we note that colored bounding boxes are widely used in the literature to uniquely mark objects in images for visualization. Inspired by this, we bridge the image regions and the query text through a set of colors C, where each color c^i = (c_v^i, c_w^i) ∈ C is defined by its visual appearance c_v^i (e.g., RGB (255, 0, 0)) and its color text c_w^i (e.g., red). We then mark each region proposal v_i in the image with a unique color c_v^i for grounding, resulting in a set of colored image proposals Ψ(R; C), where Ψ(·) denotes the visual sub-prompt. As for the shape of the visual sub-prompt, in principle there are multiple plausible choices for marking the regions with colors, including colored bounding boxes, solid blocks, or solid object segmentation masks. In our experiments, we find that coloring the object with solid blocks or segmentation masks yields better results than bounding boxes, since solid colors that fit the outlines of objects are more common in real-world images (e.g.
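A minimal sketch of the visual sub-prompt Ψ(·) with solid colored blocks, including the transparency α the paper applies so the raw region content stays visible. The function name and the box/color representations are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def apply_visual_subprompt(image, regions, colors, alpha=0.5):
    """Overlay a translucent solid color block on each region proposal.

    image:   H x W x 3 float array with values in [0, 1]
    regions: list of (x0, y0, x1, y1) boxes
    colors:  list of RGB tuples in [0, 1], one unique color per region
    alpha:   transparency hyperparameter in (0, 1)
    """
    out = image.copy()
    for (x0, y0, x1, y1), rgb in zip(regions, colors):
        block = np.array(rgb, dtype=out.dtype)
        # Blend: keep (1 - alpha) of the raw pixels so content stays visible.
        out[y0:y1, x0:x1] = (1 - alpha) * out[y0:y1, x0:x1] + alpha * block
    return out
```

Segmentation-mask coloring would replace the rectangular slice with a boolean mask of the object's pixels; the blending step is otherwise identical.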
, red shirt and blue car). Note that adding the visual sub-prompt to the raw image does not change the architecture or parameters of VL-PTMs. 3.3 TEXTUAL SUB-PROMPT. The textual sub-prompt aims to prompt VL-PTMs to establish the connections between the query text and the image regions marked by the visual sub-prompt. Specifically, the query text q (e.g., "the horse watched by the woman") is transformed into a fill-in-the-blank query using a template T_g(·) as: T_g(q) = [CLS] q is in [MASK] color [SEP]. In this way, VL-PTMs are prompted to decide which region's color is more appropriate to fill in the mask (e.g., red or blue) as follows: P(v = v_i | R, q) = P([MASK] = c_w^i | Ψ(R; C), T_g(q)) = exp(h_[MASK]^⊤ c_w^i) / Σ_{c^j ∈ C} exp(h_[MASK]^⊤ c_w^j), (1) where v is the target region, and c_w^i here denotes the embedding of the color text c_w^i in the pre-trained MLM head. Note that the procedure does not introduce any new parameters; it also mitigates the gap between pre-training and tuning, and therefore improves the data efficiency of tuning VL-PTMs. 3.4 TRAINING AND INFERENCE. Equipped with CPT, VL-PTMs can readily perform zero-shot visual grounding without any labeled data, since the cross-modal representations of colors and their compositions with other concepts (e.g., objects, attributes and relations) have been well learned by VL-PTMs during pre-training. When a few or full labeled instances are available, VL-PTMs can be further tuned by CPT using the entropy-based objective: L = − Σ_{(R, q, v*) ∈ D_train} log P(v* | R, q), where D_train is the training set. Although it is appealing to bridge image and text through a color-based prompt, we identify two key challenges in its design: (1) how to determine the configuration of the color set C, and (2) how to deal with the large number of image regions given the limited number of pre-trained colors. Cross-Modal Prompt Search.
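Equation 1 and the tuning objective reduce to a softmax over color-text embeddings followed by a negative log-likelihood. The sketch below assumes a given [MASK] hidden state and MLM-head embedding matrix; `color_distribution` and `grounding_loss` are hypothetical names for illustration:

```python
import numpy as np

def color_distribution(h_mask, color_embeddings):
    """Eq. 1: probability that [MASK] decodes to each color text, and hence
    that each colored region is the target.

    h_mask:           d-dim hidden state of the [MASK] token
    color_embeddings: |C| x d matrix of MLM-head embeddings c_w
    """
    logits = color_embeddings @ h_mask   # h_[MASK]^T c_w for every color
    logits = logits - logits.max()       # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def grounding_loss(h_mask, color_embeddings, target_idx):
    """Per-instance term of L = -log P(v* | R, q)."""
    return -np.log(color_distribution(h_mask, color_embeddings)[target_idx])
```

Because the distribution is computed against the pre-trained MLM-head embeddings, no new parameters are introduced, matching the paper's claim.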
Previous works on textual prompt tuning show that prompt configurations (e.g., textual templates) have a significant influence on performance (Jiang et al., 2020). In this work, we make the first investigation into searching for the cross-modal prompt configuration (i.e., the color set C). Intuitively, C should consist of the colors to which VL-PTMs are most sensitive. To obtain a color c^i = (c_v^i, c_w^i), a naive approach is to adopt the most frequent color text in the pre-training text as c_w^i, and its standard RGB value as c_v^i (e.g., c^i = ((255, 0, 0), red)). However, this solution is sub-optimal, since it determines the color text without considering its visual appearance, and the visual appearance of a color in real-world images often differs from its standard RGB value. To address the challenge, we present a principled cross-modal prompt search (CPS) algorithm for CPT, which jointly considers visual and textual semantics in real-world cross-modal data. Specifically, we first identify a candidate set of color texts Ĉ_w and visual appearances Ĉ_v. For each visual appearance candidate ĉ_v ∈ Ĉ_v, we feed into the VL-PTM a pseudo-data instance consisting of a pure colored block of ĉ_v and the text: "[CLS] a photo in [MASK] color [SEP]". We then compute the decoding score s(ĉ_v, ĉ_w) for each color text candidate ĉ_w ∈ Ĉ_w as in Equation 1, where a larger decoding score indicates a higher correlation between ĉ_v and ĉ_w. To select the color texts to which VL-PTMs are most sensitive, we retain the color texts that achieve the largest decoding scores over the visual appearance candidates: C_w = {c_w | c_w = arg max_{ĉ_w^j ∈ Ĉ_w} s(ĉ_v^i, ĉ_w^j), ĉ_v^i ∈ Ĉ_v}. Similarly, we obtain the visual appearances according to the largest decoding scores, resulting in the color set: C = {(c_v, c_w) | c_v = arg max_{ĉ_v^i ∈ Ĉ_v} s(ĉ_v^i, c_w^j), c_w^j ∈ C_w}. We refer readers to Section B for the pseudo-code of the algorithm.
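Given a precomputed matrix of decoding scores s(ĉ_v, ĉ_w), the two arg-max selection steps of CPS can be sketched as below. This is a reading of the set-builder expressions above, assuming the scores have already been obtained from the "a photo in [MASK] color" probe; the function name is illustrative:

```python
import numpy as np

def cross_modal_prompt_search(score):
    """Sketch of CPS over a decoding-score matrix.

    score[i, j] = s(c_v^i, c_w^j): decoding score of color text j for a pure
    colored block of visual appearance i.
    Returns (appearance_idx, text_idx) pairs forming the color set C.
    """
    # Step 1: for each visual appearance, keep its best-scoring color text
    # (this yields the retained color-text set C_w).
    retained_texts = set(score.argmax(axis=1).tolist())
    # Step 2: for each retained color text, pick the visual appearance that
    # decodes to it with the largest score.
    return [(int(score[:, j].argmax()), j) for j in sorted(retained_texts)]
```

With a diagonal-dominant score matrix, each color text is paired with the appearance that most strongly evokes it.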
In experiments, we find that the resulting colors yield better results than the naive ones. To keep the raw content of the colored image regions visible to VL-PTMs, a transparency hyperparameter α ∈ (0, 1) is further applied to the color visual appearances in practice. Image Region Batching. In visual grounding, the number of region proposals in an image usually exceeds the size of C (∼10). Moreover, we observe that heavily overlapping colored blocks can hinder visual grounding. Therefore, we divide the image regions into batches, where each batch contains a handful of moderately overlapping image regions, and mark each batch with its own visual sub-prompt. To handle batches that do not contain the target region, we further introduce a new candidate text, none, into the decoding vocabulary, indicating that there is no target region in the batch.
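One way to realize the region batching step is a greedy assignment that caps batch size at |C| and rejects heavily overlapping pairs. The paper does not specify the grouping procedure, so the thresholds and the IoU-based criterion below are assumptions for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def batch_regions(regions, max_batch=10, max_iou=0.5):
    """Greedily split proposals into batches of at most max_batch regions
    whose pairwise overlap stays below max_iou, so that colored blocks do
    not occlude each other; each batch gets its own visual sub-prompt."""
    batches = []
    for r in regions:
        for b in batches:
            if len(b) < max_batch and all(iou(r, o) < max_iou for o in b):
                b.append(r)
                break
        else:  # no compatible batch found: open a new one
            batches.append([r])
    return batches
```

Batches that happen to exclude the true target are handled at decoding time by the extra none candidate, so the per-batch predictions can be compared across batches.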
This paper proposes CPT, Colorful Prompt Tuning, for visual grounding with pre-trained vision-language models. By adding color-based co-referential markers to both image and text, CPT recasts visual grounding as a fill-in-the-blank problem and mitigates the gap between pre-training and fine-tuning. Experiments conducted on three visual grounding tasks demonstrate the effectiveness of CPT.
SP:558d63e97493a29608da9b7700ef586c0047d592
CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models
1 INTRODUCTION . Grounding natural language in fine-grained image regions is essential for a broad variety of visionlanguage tasks , such as robotic navigation ( Tellex et al. , 2011 ; Anderson et al. , 2018b ) , visual question answering ( Antol et al. , 2015 ; Anderson et al. , 2018a ) , visual dialogue ( Das et al. , 2017 ) , and visual commonsense reasoning ( Zellers et al. , 2019 ) . Recently Pre-Trained Vision-Language Models ( VL-PTMs ) have shown promising capabilities in visual grounding . Typically , generic cross-modal representations are first pre-trained on large-scale image-caption data in a self-supervised fashion , and then fine-tuned to adapt to downstream tasks ( Lu et al. , 2019 ; Su et al. , 2019 ; Li et al. , 2020 ; Radford et al. , 2021 ) . This pre-training-then-fine-tuning paradigm of VL-PTMs has greatly pushed forward the state-of-the-art of many cross-modal tasks . Despite the success , we note that there exists a significant gap between the objective forms of pretraining and fine-tuning of VL-PTMs . As illustrated in Figure 1 , during pre-training , most VL-PTMs are optimized based on the masked language modeling objective , trying to recover the masked token from the cross-modal context . However , during fine-tuning , downstream tasks are usually conducted by classifying unmasked token representations into semantic labels , where task-specific parameters are typically introduced . The gap hinders the effective adaptation of VL-PTMs to downstream tasks . As a result , a large amount of labeled data is typically required to stimulate the visual grounding capabilities of VL-PTMs for downstream tasks . In this work , inspired by recent progress in pre-trained language models in natural language processing ( Brown et al. , 2020 ; Schick & Schütze , 2021a ; Liu et al. , 2021 ) , we present Cross-modal Prompt Tuning ( CPT , alternatively , Colorful Prompt Tuning ) , a novel paradigm for tuning VLPTMs . 
The key insight is that by adding color-based co-referential markers in both image and text , visual grounding can be reformulated into a fill-in-the-blank problem , maximally mitigating the gap between pre-training and fine-tuning . As shown in Figure 1 , to ground natural language expressions in image data , CPT consists of two components : ( 1 ) a visual sub-prompt that uniquely marks image regions with colored blocks or segmentation masks , and ( 2 ) a textual sub-prompt that puts the query text into a color-based query template . Explicit grounding to the target image region can then be achieved by recovering the corresponding color text from the masked token in the query template . In addition , we present a principled method to search for high-quality cross-modal prompt configurations ( i.e. , visual appearances and texts of colors ) for CPT . By mitigating the gap from pre-training , CPT enables strong few-shot and even zero-shot visual grounding capabilities of VL-PTMs . Experimental results show that the prompt-tuned VL-PTMs outperform their fine-tuned counterparts by a large margin . For example , using colored blocks as visual sub-prompts , CPT achieves 17.3 % absolute accuracy improvement , and 73.8 % relative standard deviation reduction on average with one shot in RefCOCO evaluation . In the same setting , when equipped with colored segmentation masks as visual sub-prompts , CPT can further achieve 20.0 % absolute accuracy improvement , and 76.2 % relative standard deviation reduction than the vanilla fine-tuning approach . Our contributions are summarized as threefold : ( 1 ) We present a novel cross-modal prompt tuning paradigm for VL-PTMs . To the best of our knowledge , this is the first attempt in both cross-modal prompt tuning for VL-PTMs , and zero- and few-shot visual grounding independent of object types . ( 2 ) We present a principled approach to search for high-quality cross-modal prompt configurations for CPT . 
( 3 ) We conduct comprehensive experiments which demonstrate the effectiveness of CPT . 2 PRELIMINARY . In the literature , visual grounding is typically formulated as a referring expression comprehension ( REC ) problem ( Plummer et al. , 2015 ; Mao et al. , 2016 ) . Given an image I and a query text of referring expression q , REC aims to locate the target region in I that corresponds to q . In this section , we introduce the vanilla fine-tuning approach for VL-PTMs . A common practice for REC is to first detect a set of region proposals { v1 , v2 , . . . , vn } via object detectors , and then classify or rank the proposals to select the target region ( Lu et al. , 2019 ; Chen et al. , 2020 ) . Specifically , visual and textual inputs are first transformed into a sequence of input tokens { [ IMG ] , v1 , v2 , . . . , vn , [ CLS ] , w1 , w2 , . . . , wm , [ SEP ] } , where { w1 , w2 , . . . , wm } are textual tokens of q , and [ IMG ] , [ CLS ] and [ SEP ] are special tokens . To obtain input representations , the feature of image regions is extracted by visual encoders , and the embeddings of textual and special tokens are obtained by a lookup table . Then input representations are fed into the pre-trained transformers to produce the hidden representations { h [ IMG ] , h1v , h2v , . . . , hnv , h [ CLS ] , h1w , h2w , . . . , hmw , h [ SEP ] } . Finally the hidden representation of the target region is optimized against negative ones via classification or ranking loss , where new task-specific parameters are introduced . As a result , fine-tuned VL-PTMs need a large mount of labeled instances to stimulate the visual grounding capability . 3 CROSS-MODAL PROMPT TUNING ( CPT ) . In this section , we introduce the framework of CPT , and how to apply CPT to zero-shot , few-shot and fully supervised visual grounding . 3.1 OVERVIEW . The key to visual grounding is to establish fine-grained connections between image regions and textual expressions . 
Therefore , a good cross-modal prompt tuning framework should take full advantage of co-referential signals from both image and text , and maximally mitigate the gap between pre-training and tuning . To this end , CPT reformulates visual grounding into a fill-in-the-blank problem , as shown in Figure 1 . Specifically , the CPT framework consists of two components : ( 1 ) a visual sub-prompt that uniquely marks the image regions with colored blocks or segmentation masks , and ( 2 ) a textual sub-prompt that puts the query text into a color-based query template . Equipped with CPT , it is then straightforward for VL-PTMs to ground the query text by filling the masked token with the color text of the target image region , where the objective form is identical to pre-training . 3.2 VISUAL SUB-PROMPT . Given an image I and its region proposalsR = { v1 , v2 , . . . , vn } , visual sub-prompt aims to uniquely mark the image regions with natural visual makers . Interestingly , we note that colored bounding boxes are widely used to uniquely mark objects in images for visualization in the literature . Inspired by this , we bridge the image regions and query text through a set of colors C , where each color ci = ( c i v , c i w ) ∈ C is defined by its visual appearance civ ( e.g. , RGB ( 255 , 0 , 0 ) ) and color text ciw ( e.g. , red ) . Then we mark each region proposal vi in the image with a unique color civ for grounding , resulting in a set of colored image proposals Ψ ( R ; C ) , where Ψ ( · ) denotes visual sub-prompt . As for the shape of the visual sub-prompt , in principle , there are multiple plausible choices to mark the regions with colors , including colored bounding boxes , solid blocks , or solid object segmentation masks . In our experiments , we find that coloring the object with solid blocks and segmentation masks yields better results than bounding boxes , since solid colors that fit the outlines of objects are more common in real-world images ( e.g. 
, red shirt and blue car). Note that the addition of the visual sub-prompt to the raw image does not change the architecture or parameters of VL-PTMs.

3.3 TEXTUAL SUB-PROMPT. The textual sub-prompt aims to prompt VL-PTMs to establish the connections between the query text and the image regions marked by the visual sub-prompt. Specifically, the query text $q$ (e.g., "the horse watched by the woman") is transformed into a fill-in-the-blank query using a template $\mathcal{T}_g(\cdot)$ as:

$$\mathcal{T}_g(q) = \text{[CLS] } q \text{ is in [MASK] color [SEP]}$$

In this way, VL-PTMs are prompted to decide the color of which region is more appropriate to fill in the mask (e.g., red or blue) as follows:

$$P(v = v_i \mid \mathcal{R}, q) = P([\text{MASK}] = c_w^i \mid \Psi(\mathcal{R}; \mathcal{C}), \mathcal{T}_g(q)) = \frac{\exp(h_{[\text{MASK}]}^\top \mathbf{c}_w^i)}{\sum_{c^j \in \mathcal{C}} \exp(h_{[\text{MASK}]}^\top \mathbf{c}_w^j)}, \quad (1)$$

where $v$ is the target region and $\mathbf{c}_w^i$ is the embedding of $c_w^i$ in the pre-trained MLM head. Note that the procedure does not introduce any new parameters, and also mitigates the gap between pre-training and tuning, therefore improving the data efficiency of tuning VL-PTMs.

3.4 TRAINING AND INFERENCE. Equipped with CPT, VL-PTMs can readily perform zero-shot visual grounding without any labeled data, since the cross-modal representations of colors and their composition with other concepts (e.g., objects, attributes and relations) have been well learned by VL-PTMs during pre-training. When a few or full labeled instances are available, VL-PTMs can be further tuned by CPT using the entropy-based objective:

$$\mathcal{L} = -\sum_{(\mathcal{R}, q, v^\star) \in \mathcal{D}_{train}} \log P(v^\star \mid \mathcal{R}, q),$$

where $\mathcal{D}_{train}$ is the training set. Although it is appealing to bridge image and text through a color-based prompt, we identify two key challenges in its design: (1) how to determine the configuration of the color set $\mathcal{C}$, and (2) how to deal with the large number of image regions given the limited number of pre-trained colors. Cross-Modal Prompt Search.
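The decoding step in Equation (1) is just a softmax over dot products between the [MASK] hidden state and the color-text embeddings in the MLM head. A minimal sketch, assuming toy 3-dimensional vectors and hypothetical embedding values rather than actual VL-PTM weights:

```python
import math

def mask_color_probs(h_mask, color_embeddings):
    # dot product between the [MASK] hidden state and each color-text
    # embedding, followed by a numerically stable softmax (Equation 1)
    scores = {c: sum(hi * ci for hi, ci in zip(h_mask, emb))
              for c, emb in color_embeddings.items()}
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

# toy example: hypothetical [MASK] hidden state and MLM-head embeddings
h = [0.2, 1.0, -0.5]
C = {"red": [1.0, 0.5, 0.0],
     "blue": [0.0, 0.2, 1.0],
     "green": [0.3, 0.1, 0.2]}
probs = mask_color_probs(h, C)
grounded = max(probs, key=probs.get)  # the region marked with this color wins
```

The region whose color text receives the highest probability is returned as the grounding result, with no new parameters beyond the pre-trained MLM head.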
Previous works in textual prompt tuning show that prompt configurations (e.g., textual templates) have a significant influence on performance (Jiang et al., 2020). In this work, we make the first investigation into searching for the cross-modal prompt configuration (i.e., the color set $\mathcal{C}$). Intuitively, $\mathcal{C}$ should consist of the colors to which VL-PTMs are the most sensitive. To obtain a color $c_i = (c_v^i, c_w^i)$, a naive approach is to adopt the most frequent color text in the pre-training text as $c_w^i$, and its standard RGB value as $c_v^i$ (e.g., $c_i = ((255, 0, 0), \text{red})$). However, this solution is sub-optimal, since it determines the color text without considering its visual appearance, and the visual appearance of a color in real-world images often differs from its standard RGB value. To address this challenge, we present a principled cross-modal prompt search (CPS) algorithm for CPT, which jointly considers visual and textual semantics in real-world cross-modal data. Specifically, we first identify a candidate set of color texts $\hat{\mathcal{C}}_w$ and visual appearances $\hat{\mathcal{C}}_v$. For each visual appearance candidate $\hat{c}_v \in \hat{\mathcal{C}}_v$, we feed into VL-PTMs a pseudo-data instance consisting of a pure colored block of $\hat{c}_v$ and the text: "[CLS] a photo in [MASK] color [SEP]". Then we compute the decoding score $s(\hat{c}_v, \hat{c}_w)$ for each color text candidate $\hat{c}_w \in \hat{\mathcal{C}}_w$ as in Equation 1, where a larger decoding score indicates a higher correlation between $\hat{c}_v$ and $\hat{c}_w$. To select the color texts to which VL-PTMs are sensitive, we retain the color texts that achieve the largest decoding scores for the visual appearance candidates:

$$\mathcal{C}_w = \{c_w \mid c_w = \arg\max_{\hat{c}_w^j \in \hat{\mathcal{C}}_w} s(\hat{c}_v^i, \hat{c}_w^j),\ \hat{c}_v^i \in \hat{\mathcal{C}}_v\}.$$

Similarly, we obtain the visual appearances according to the largest decoding scores, resulting in the color set:

$$\mathcal{C} = \{(c_v, c_w) \mid c_v = \arg\max_{\hat{c}_v^i \in \hat{\mathcal{C}}_v} s(\hat{c}_v^i, c_w^j),\ c_w^j \in \mathcal{C}_w\}.$$

We refer readers to Section B for the pseudo-code of the algorithm.
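The two argmax steps of CPS can be sketched directly on a table of decoding scores. The function below is a simplified rendering of the selection logic only; the nested-dict layout and the toy score values are assumptions for illustration, not the paper's actual candidate sets:

```python
def cross_modal_prompt_search(scores):
    """scores[cv][cw] holds the decoding score s(cv, cw) between a visual
    appearance candidate cv and a color text candidate cw."""
    # step 1: keep each color text that wins the argmax for some visual candidate
    Cw = {max(ws, key=ws.get) for ws in scores.values()}
    # step 2: for each retained color text, keep its best visual appearance
    return {(max(scores, key=lambda cv: scores[cv][cw]), cw) for cw in Cw}

# hypothetical decoding scores for three visual candidates and two color texts
scores = {
    "rgb(255,0,0)":   {"red": 0.90, "blue": 0.10},
    "rgb(200,30,30)": {"red": 0.95, "blue": 0.05},
    "rgb(0,0,255)":   {"red": 0.20, "blue": 0.80},
}
C = cross_modal_prompt_search(scores)
```

Note that in this toy table the non-standard "rgb(200,30,30)" beats the standard red for the text "red", mirroring the observation that a color's appearance in real images often differs from its standard RGB.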
In experiments , we find that the resultant colors yield better results than the naive ones . To make the raw content of the colored image regions available to VL-PTMs , a transparency hyperparameter α ∈ ( 0 , 1 ) is further applied to color visual appearances in practice . Image Region Batching . In visual grounding , the number of region proposals in an image usually exceeds the size of C ( ∼ 10 ) . Besides , we observe that heavily overlapped colored blocks can hinder visual grounding . Therefore , we divide the image regions into batches , where each batch contains a handful of moderately overlapping image regions , and mark each batch with a visual sub-prompt respectively . To handle the batches that do not contain the target region , we further introduce a new candidate text none in the decoding vocabulary , to indicate that there is no target region in the batch .
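The image region batching described above can be sketched as a greedy assignment: each region joins the first batch that still has a free color and whose members it overlaps only moderately. The 1-D interval IoU and the overlap threshold below are stand-ins for real box IoU, chosen purely for illustration:

```python
def batch_regions(regions, num_colors, iou, max_iou=0.5):
    # greedily place each region into the first batch that has a free color
    # and where it overlaps every existing member by less than max_iou
    batches = []
    for r in regions:
        for b in batches:
            if len(b) < num_colors and all(iou(r, o) < max_iou for o in b):
                b.append(r)
                break
        else:
            batches.append([r])
    return batches

def interval_iou(a, b):
    # 1-D stand-in for box IoU, enough to exercise the batching logic
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

# two heavily overlapping regions are split across batches; the distant one
# shares a batch with the first because their IoU is zero
batches = batch_regions([(0, 10), (1, 11), (20, 30)], num_colors=2,
                        iou=interval_iou)
```

Each resulting batch would then be colored with its own visual sub-prompt, with the extra candidate text none covering batches that miss the target region.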
This paper proposes a novel paradigm named Cross-Modal Prompt Tuning (CPT) that reformulates visual grounding into a fill-in-the-blank problem. Specifically, CPT applies a unique color mask to each visual region in the input image and then uses a pre-defined template to wrap the input text, where the model must identify the color of the region that contains the described object. Experiments are conducted on RefCOCO, RefCOCO+ and RefCOCOg, and promising results are achieved.
AdaAug: Learning Class- and Instance-adaptive Data Augmentation Policies
1 INTRODUCTION Data augmentation is a common way to enhance the robustness of deep learning models by augmenting the datasets used for model training . Applying popular data augmentation operations such as randomized cropping , horizontal flipping , and color shifting to image data has become a standard procedure in modern image recognition models ( Krizhevsky et al. , 2012 ; Shorten & Khoshgoftaar , 2019 ) . Over the years , various augmentation methods using advanced operations have been proposed . Examples include occlusion-based operations like Cutout ( Devries & Taylor , 2017 ) that randomly masks part of an image to avoid overfitting , label-mixing operations like CutMix ( Yun et al. , 2019 ) that replaces the occluded part in Cutout with a different image patch , and Mixup ( Zhang et al. , 2018 ) that interpolates two images with their corresponding one-hot encoded labels . While these hand-crafted data augmentation methods can improve model generalization , choosing the operations and their corresponding parameters is often decided manually to make the augmentation scheme effective for the task at hand . Despite the manual efforts involved , an augmentation policy that is useful for a particular dataset often does not generalize well to other datasets ( Cubuk et al. , 2019 ) . To tackle this problem , a series of recent studies has been conducted to automate the process of finding an effective data augmentation policy for a target dataset . These automated data augmentation ( AutoDA ) methodologies show impressive results on several benchmark image datasets ( Cubuk et al. , 2019 ; Ho et al. , 2019 ; Lim et al. , 2019 ; Cubuk et al. , 2020 ; Hataya et al. , 2020 ; Hendrycks et al. , 2020 ; Li et al. , 2020 ; Cheung & Yeung , 2021 ) . The manual tuning of augmentation parameters can also be addressed by using generative approaches , such as training a generative adversarial network ( GAN ) to create new artificial images directly ( Antoniou et al. 
, 2017 ; Tran et al. , 2017 ; Jha & Cecotti , 2020 ; Yorioka et al. , 2020 ; Zhao et al. , 2020 ) . However , these generative models are often hard to implement and computationally expensive to train . In supervised learning , data augmentation is considered as a naive way to inject inductive biases , such as translational invariance , to a classifier . With the recent advances in representation learning , data augmentation has become a major approach to the learning of good representations . For example , there is an increasing trend to replace convolutional neural networks ( CNNs ) with transformers in computer vision ( Carion et al. , 2020 ; Dosovitskiy et al. , 2021 ) . Being a more generic architecture , transformers do not come with additional inductive bias like translational invariance in CNNs , thereby requiring more data to learn the effective invariance properties . Data augmentation serves as an effective way to achieve this goal ( Touvron et al. , 2020 ) . In addition , recent self-supervised models rely heavily on data augmentation to create different views of the same data and learn the robust representations through contrastive learning ( Chen et al. , 2020 ; Grill et al. , 2020 ; He et al. , 2020 ) . However , improper choice of augmentation operations , especially with excessive strength , may impose wrong inductive bias to the models and lead to performance degradation ( Chen et al. , 2020 ; Xiao et al. , 2021 ) . Consequently , there is a need to enrich the current data augmentation methods , especially with a better choice of the data augmentation policy , to take advantage of the recent advances in representation learning . Previous AutoDA methods attempt to find an optimal augmentation policy to augment a given dataset . However , most of the discovered policies are not adaptive to variation of the dataset . 
Even in a dataset in which an augmentation policy is found to be effective , the same augmentation scheme is applied equally to all classes . This can limit the potential of data diversity brought by data augmentation . For example , in digit classification , flip-invariance is useful for the digits “ 0 ” , “ 1 ” and “ 8 ” but not for the other digits ; in shape classification , shearing-invariance is useful for “ triangles ” but not for “ rectangles ” . None of the previous AutoDA methods can learn an adaptive class-dependent augmentation policy . To address this limitation , we propose AdaAug , as an AutoDA method , to learn a class-dependent and potentially instance-dependent augmentation policy efficiently . Despite the attractive potential of such an adaptive scheme if it can indeed be realized , actually learning an adaptive augmentation policy poses at least two major technical challenges . First , the search space for the per-class , per-instance augmentation policies is very large , rendering it intractable when trying to maintain an individual policy for each class or even each realization of the input data . Second , the gradient information of the augmentation parameters is hard to obtain as the operation selection process and the transformations are non-differentiable . Optimizing the augmentation policy efficiently is a challenging problem to address . In this work , AdaAug employs a recognition model to learn the underlying augmentation policy for each data instance and takes an alternating exploit-and-explore procedure to update the augmentation policy using a differentiable workflow . In the exploitation pass , AdaAug trains a classifier for a number of steps , followed by the exploration pass which validates the classifier and updates the policy to minimize the validation loss . The intuition behind our design of AdaAug is that such an alternating procedure can learn augmentations that help generalize the trained model to unseen validation data . 
For example, rotational invariance would be found to be a desirable inductive bias that the model should learn if the validation set contains similar but rotated versions of the training images. Our goal is to capture such information and, in this case, assign a higher probability to using rotation for augmentation. An application scenario would be a computer vision task for drones, where the unseen data may contain different kinds of rotated images. To summarize, our contributions are as follows:
• We introduce a novel AutoDA method to learn a class-dependent and potentially instance-dependent augmentation policy for each data instance.
• We propose a differentiable workflow to search for the augmentation policy efficiently.
• We demonstrate that the policies learned by our method transfer better to unseen datasets, such as Oxford Flowers, Oxford-IIIT Pets, FGVC Aircraft, and Stanford Cars, when compared to other AutoDA baselines.
• We demonstrate state-of-the-art performance on the CIFAR-10, CIFAR-100, and SVHN datasets.

2 RELATED WORK. Automated Data Augmentation. Several AutoDA methods have been proposed to compose augmentation operations automatically. AutoAugment (Cubuk et al., 2019) learns to generate, as a policy, the probability and magnitude of applying different augmentation operations using reinforcement learning (RL). It alternately generates an augmentation policy to train a child model and updates the policy generator using the validation performance as the reward. Since it is computationally expensive to train the child models repeatedly, several techniques have subsequently been proposed in an attempt to reduce the search effort. Fast AutoAugment (Lim et al., 2019) uses Bayesian optimization (BO) to tune the augmentation parameters. Population-Based Augmentation (PBA) (Ho et al.
, 2019 ) exploits population-based training ( PBT ) to search for an optimal augmentation policy schedule by training multiple parallel child models using an evolutionary approach . RandAugment ( Cubuk et al. , 2020 ) applies the augmentation operations uniformly and reduces the search space significantly by covering only the number of operators and the global augmentation magnitude . Instead of using the validation performance to evaluate the augmentation quality , Adversarial AutoAugment ( Zhang et al. , 2020 ) uses an adversarial objective to learn the augmentation policy . MODALS ( Cheung & Yeung , 2021 ) utilizes PBA to search for an optimal latent space augmentation policy and augment data from any modality not limited to image data . Differentiable Data Augmentation . In addition to using RL , BO and PBT to optimize the augmentation parameters , there exist related methods that modify the otherwise discrete search procedure to make it end-to-end differentiable . This results in a more efficient optimization procedure and a more precise policy than RL and PBT as the search space is continuous . In AdaAug , part of our contribution is to design a differentiable workflow to learn the augmentation policy . For previous differentiable augmentation approaches , Faster AutoAugment ( Hataya et al. , 2020 ) proposes a differentiable Relaxed Bernoulli distribution to sample the candidate augmentation functions and estimate the gradients of the non-differentiable augmentation magnitude using the Stop Gradient estimator . Specifically , it optimizes a density matching loss between the training and validation data . DADA ( Li et al. 
, 2020) differentiates through the discrete policy sampling process using the Gumbel-Softmax trick. While AutoDA and differentiable data augmentation have been shown to be successful in improving the generalization performance of deep learning models, the learned augmentation policy is often applied uniformly to the whole dataset, meaning that all classes and instances share the same augmentation policy. In contrast, in our proposed method each class and even each data instance receives its own adaptive augmentation policy. Adaptive Data Augmentation. Attempts have been made to apply adaptive data augmentation at a class or subgroup level. Hauberg et al. (2016) proposed a statistical approach to model the transformations within each class and use statistical models to augment the dataset. The approach shows improvement on the small MNIST dataset and its variants. However, the augmentation operations are limited to spatial transformations. In addition, observations of the data must be locatable and alignable, making it difficult to extend the approach to most other computer vision tasks. Recently, CAMEL (Goel et al., 2021) adopted a finer data-generation method that fixes classifiers failing on a subgroup of a class. It uses CycleGAN to learn different variations of the same training data within a subgroup. However, CAMEL requires specifying the subgroup information manually and assumes that subgroups only exist within the same class. MetaAugment (Zhou et al., 2021) learns a sample-wise weighting scheme and a global probability parameter to control the sampling of augmentation transformations. In AdaAug, the learned policy can automatically capture class-dependent transformations as well as instance-dependent information, such as the light intensity of an image, across different classes.

3 ADAAUG

3.1 SEARCH SPACE. Let $\mathbb{T}$ be a set of augmentation operations where $\tau_j$ denotes the $j$-th operation (e.g., "rotation") in the set.
We formulate an augmentation policy as the probability $p$ and magnitude $\lambda$ of applying the augmentation operations. Here, $p$ is a probability vector with entries $p_j \in [0, 1]$ satisfying $\sum_{j=1}^{|\mathbb{T}|} p_j = 1$, and $\lambda$ is a vector with entries $\lambda_j \in [0, 1]$, where $p_j$ and $\lambda_j$ are the probability and magnitude, respectively, of applying the operation $\tau_j$. Mathematically, $\tau_j : \mathcal{X} \to \mathcal{X}$ is a mapping from the input space $\mathcal{X}$ to itself. For an image $x \in \mathcal{X}$, $\tau_j$ transforms it with the magnitude parameter $\lambda_j$ that specifies the strength of the transformation (e.g., degree of rotation), i.e., $x \mapsto \tau_j(x; \lambda_j)$. Note that some operations like flipping do not depend on the magnitude parameter. In a training pass, given an input $x$, an augmentation policy $(p, \lambda)$, and the number of operators $k$, we sample $k$ operations according to $p$ and apply them with their corresponding magnitudes specified by $\lambda$:

$$T(x; p, \lambda) = \tau_j(x; \lambda_j), \quad j \sim p; \qquad \hat{x} = T^{(k)} \circ \cdots \circ T^{(1)}(x; p, \lambda) \quad (1)$$

Here, $T^{(t)}$, $1 \le t \le k$, denotes the application of the $t$-th sampled operation. Our goal is to learn an augmentation policy function $\pi_\theta : x \mapsto (p, \lambda)$ that generates an adaptive, input-dependent augmentation policy optimizing the generalization performance (see Figure 1 and Algorithm 1).

3.2 SEARCH ALGORITHM. Exploitation. AdaAug uses a feature extraction network $f_\alpha : \mathcal{X} \to \mathcal{Z}$ to map the input space to a latent space, a dense layer $g_\beta : \mathcal{Z} \to \mathcal{Y}$ to map the latent space to the label space, and a projection function $h_\gamma : \mathcal{Z} \to \mathcal{P} \times \Lambda$ to map a latent representation to the probability and magnitude spaces, where $\mathcal{P} = \{p \in [0, 1]^{|\mathbb{T}|} : \|p\|_1 = 1\}$ and $\Lambda = [0, 1]^{|\mathbb{T}|}$. In our case, the functions $f$, $g$, $h$ (with the subscripts dropped for notational simplicity) are implemented as neural networks with weights $\alpha$, $\beta$, $\gamma$, respectively.
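Equation (1) amounts to drawing k operation indices from p and composing the corresponding transforms. A minimal sketch, with a plain number standing in for an image and a hypothetical additive "brighten" operation in place of real augmentations:

```python
import random

def apply_policy(x, ops, p, lam, k=1, rng=random):
    """Equation (1): draw k operation indices j ~ p and compose
    tau_j(.; lambda_j) from T^(1) up to T^(k)."""
    for _ in range(k):
        j = rng.choices(range(len(ops)), weights=p, k=1)[0]
        x = ops[j](x, lam[j])
    return x

# toy operations: `brighten` adds its magnitude, `identity` ignores it
brighten = lambda x, m: x + m
identity = lambda x, m: x

# with all probability mass on brighten, three draws add 0.5 each time
x_hat = apply_policy(0.0, [brighten, identity],
                     p=[1.0, 0.0], lam=[0.5, 0.0], k=3)
```

With a non-degenerate p the composition is stochastic, which is why the gradient of the classifier weights does not flow through the policy network in the exploitation pass.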
The softmax operation is applied to the first half of the output of $h$ to get the probabilities, and the sigmoid function is applied to the other half to get the magnitudes. The policy network $\pi_\theta(x)$ can then connect the class and image information to the augmentation policy via $h \circ f(x)$ with parameters $\theta = (\gamma, \alpha)$. In an exploitation pass, given the training data $x$, the policy function generates the data-dependent augmentation policy $(p, \lambda) = \pi_\theta(x)$ and augments $x$ into $\hat{x}$ using Equation (1). Here, $\hat{x}$ is treated as a new unseen training example and is used to train the classification model $g \circ f(\hat{x})$ by minimizing the cross-entropy loss: $\min_{\alpha, \beta} L_{train}(\alpha, \beta)$. Note that there is a discrete sampling procedure in Equation (1). During the update, the gradient of $\alpha$ does not involve the computations in the policy network. Exploration. In the exploration pass, AdaAug first generates the augmentation policy $(p, \lambda) = \pi_\theta(x)$ given the validation data $x$. Then, it applies all $|\mathbb{T}|$ augmentation operations to $x$ separately with the corresponding magnitudes in $\lambda$. The augmented validation data are passed to the feature extraction network $f$ individually to get the latent representations. The latent representations are then summed, weighted by the entries of the probability vector $p$. The mixed representation is passed to $g$ to compute the predicted labels:

$$\hat{y} = g\Big(\sum_{j=1}^{|\mathbb{T}|} p_j \cdot f \circ \tau_j(x; \lambda_j)\Big), \quad (p, \lambda) = \pi_\theta(x) \quad (2)$$

AdaAug updates the projection parameters $\gamma$ to minimize the validation loss: $\min_\gamma L_{valid}(\alpha, \beta, \gamma)$. As we make no assumption that the provided augmentation operations are differentiable, we follow prior approaches (Bengio et al., 2013; Li et al., 2020) in using a straight-through gradient estimator to optimize the augmentation magnitudes. Specifically, the gradient of the magnitudes is estimated with respect to each pixel value $x_{h,w}$ of the augmented data, i.e., $\partial \hat{x}_{h,w} / \partial \lambda_j = 1$.
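The exploration pass of Equation (2) can be sketched as follows: every operation is applied separately, each augmented view is embedded, and the latents are mixed with the weights p before classification. Here f and g are placeholder callables standing in for the real networks, and lists stand in for tensors:

```python
def explore_logits(x, ops, p, lam, f, g):
    """Equation (2): augment x with every operation separately, embed each
    view with f, mix the latents weighted by p, then classify with g."""
    z_mix = None
    for pj, op, lj in zip(p, ops, lam):
        z = f(op(x, lj))                      # latent of one augmented view
        contrib = [pj * zi for zi in z]       # weight it by its probability
        z_mix = contrib if z_mix is None else [m + c
                                               for m, c in zip(z_mix, contrib)]
    return g(z_mix)

# toy setup: a shift operation, identity feature extractor, sum classifier
shift = lambda x, m: [v + m for v in x]
y_hat = explore_logits([1.0, 2.0], [shift, shift],
                       p=[0.5, 0.5], lam=[1.0, 2.0],
                       f=lambda z: z, g=sum)
```

Because the mixture is a differentiable function of p (and, via the straight-through estimator, of λ), the validation loss can be backpropagated into the projection parameters γ.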
The gradient can then be calculated by:

$$\frac{\partial L_{valid}}{\partial \lambda_j} = \sum_{h,w} \frac{\partial L_{valid}}{\partial \hat{x}_{h,w}} \frac{\partial \hat{x}_{h,w}}{\partial \lambda_j} = \sum_{h,w} \frac{\partial L_{valid}}{\partial \hat{x}_{h,w}} \quad (3)$$

Algorithm 1 Search algorithm
1: procedure SEARCH(Dtrain, Dvalid, T, k, m, r)
2:   Initialize α, β, γ
3:   for r steps do   # Exploitation
4:     Sample a mini-batch dtrain ⊂ Dtrain
5:     for (x, y) ∈ dtrain do
6:       (p, λ) = πθ(x)
7:       x̂ = augment(x)   ▷ augment computes Eq. 1
8:       ŷ = g ∘ f(x̂)
9:       Ltrain(α, β) = CrossEntropyLoss(ŷ, y)
10:    (α, β) ← argminα,β Ltrain(α, β)
11:    if r is divisible by m then   # Exploration
12:      Sample a mini-batch dvalid ⊂ Dvalid
13:      for (x, y) ∈ dvalid do
14:        (p, λ) = πθ(x)
15:        ŷ = explore(x)   ▷ explore computes Eq. 2
16:        Lvalid(α, β, γ) = CrossEntropyLoss(ŷ, y)
17:      γ ← argminγ Lvalid(α, β, γ)
18: return α, γ

Relation to Neural Architecture Search (NAS). Tuning of the augmentation policy parameters bears some similarity to the optimization of the network architecture weights in DARTS from the NAS literature (Liu et al., 2019). DARTS prescribes different computation paths to different operation cells and relaxes the computation to a mixture of the operations weighted by learnable weights for each path. With $w$ as the model parameters and $\alpha$ as the weights for the computation paths, the search algorithm solves a bi-level optimization problem:

$$\min_\alpha L_{valid}(w^*(\alpha), \alpha) \quad \text{s.t.} \quad w^*(\alpha) = \arg\min_w L_{train}(w, \alpha) \quad (4)$$

While DARTS optimizes the architecture weights, AdaAug optimizes the projection parameter $\gamma$ that decides the augmentation weights and magnitudes in the exploration. DARTS solves the optimization by using a first-order, finite-difference approximation of the architecture gradient. In AdaAug, we sample the augmentation operations and treat the augmented data as unseen missing training data $\hat{X}_{train}$.
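Under the straight-through assumption ∂x̂_{h,w}/∂λ_j = 1, Equation (3) collapses to a plain sum of the pixel-wise loss gradients. A one-line sketch, with a nested list of floats standing in for the gradient image:

```python
def magnitude_grad(dL_dxhat):
    """Equation (3) with the straight-through estimator: since each
    partial x_hat[h][w] / partial lambda_j is taken to be 1, the magnitude
    gradient is simply the sum of the loss gradient over all pixels."""
    return sum(sum(row) for row in dL_dxhat)

# toy 2x2 gradient of the validation loss w.r.t. the augmented image
g = magnitude_grad([[1.0, 2.0],
                    [3.0, 4.0]])
```

This is a biased but cheap estimator; its whole point is to let the non-differentiable augmentation magnitudes receive a usable training signal.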
By absorbing the augmentation parameters into the training dataset, we avoid the complex bi-level optimization and simplify the exploitation procedure into training a standard classifier:

$$\min_\gamma L_{valid}(\alpha^*, \beta^*, \gamma; X_{valid}) \quad \text{s.t.} \quad \alpha^*, \beta^* = \arg\min_{\alpha, \beta} L_{train}(\alpha, \beta; \hat{X}_{train}) \quad (5)$$

Relation to Density Matching. Data augmentation can be regarded as a density matching problem between the training and validation data (Ratner et al., 2017; Tran et al., 2017; Hataya et al., 2020; Lim et al., 2019). From this perspective, AdaAug improves model generalization by matching the density of $D_{train}$ with the density of the augmented $D_{valid}$. In the outer optimization objective in Equation (5), AdaAug minimizes the classification loss with the augmentation parameters $\gamma$ over the same optimal model parameters $\alpha^*, \beta^*$ learned from $\hat{D}_{train}$. In so doing, it approximately reduces the distance between the densities of the augmented $D_{valid}$ and $D_{train}$.

3.3 INFERENCE. Like most automated augmentation pipelines (Cubuk et al., 2019; Ho et al., 2019; Lim et al., 2019; Hataya et al., 2020; Li et al., 2020), AdaAug searches for the augmentation policy on a small dataset using a small model and then applies the learned policy network $\pi_\theta$ to train a larger dataset or model. We refer to the process of applying a searched policy to augment a new dataset as inference time. Regarding this workflow, RandAugment argues that the different search spaces at search time and inference time make the augmentation policy unable to adjust its regularization strength to different target datasets and models (Cubuk et al., 2020). Therefore, it proposes to search for the global magnitude and number of operators for each case. To address this concern, AdaAug utilizes three diversity parameters to fine-tune the regularization strength of the found policy for large datasets and models.
First, the number of operators $k$ is set to 1 at search time, but a larger value of $k$ can be used at inference time. To encourage the selection of more diverse operations, a temperature parameter $T$ is introduced in the softmax function:

$$p_i = \frac{\exp[h(z_i)/T]}{\sum_{j=1}^{|\mathbb{T}|} \exp[h(z_j)/T]}.$$

Setting a larger value of $T$ allows operations with lower probability to be sampled more often while preserving their relative ordering. Last, AdaAug perturbs the magnitude value with the parameter $\delta$, so the perturbed magnitude $\hat{\lambda}$ is drawn as $\hat{\lambda} \sim \text{Uniform}(\lambda - \delta, \lambda + \delta)$. This allows the augmented data to show slight variations even when the same operation is applied to the same input data. In practice, we can perform a grid search over these three diversity parameters together with the other hyperparameters using a holdout validation set.

4 EXPERIMENTS AND RESULTS. In this section, we explain our experimental design and then present the results. We evaluate the empirical performance of AdaAug in two experiments: AdaAug-transfer and AdaAug-direct. We select comparison baselines that use the validation performance as the evaluation method to learn the augmentation policy. For all experiments, we report the average test-set error rate as the performance metric. Each model is evaluated three times with different random initializations. Augmentation operations. We match the operations adopted by AutoAugment. In addition to the 16 operations proposed previously (ShearX, ShearY, TranslateX, TranslateY, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, Cutout, and Sample Pairing), we add the Identity operation for not applying data augmentation. For the simple baseline, we apply random horizontal flip, color jittering, color normalization, and Cutout with a 16 × 16 patch size. Our method and the other baselines apply the found policy on top of these standard augmentations. Policy search.
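The two continuous diversity knobs (temperature and magnitude perturbation; a larger k is simply repeated sampling) can be sketched as a temperature-T softmax plus a clipped uniform perturbation of the magnitudes. The function signature and argument layout below are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def diversify(logits, lam, T=3.0, delta=0.3, rng=random):
    """Inference-time diversity controls: temperature-T softmax over the
    policy logits h(z), and a uniform perturbation of each magnitude within
    [lambda - delta, lambda + delta], clipped back to [0, 1]."""
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    p = [e / total for e in exps]
    lam_hat = [min(1.0, max(0.0, l + rng.uniform(-delta, delta)))
               for l in lam]
    return p, lam_hat

# higher temperature flattens the sampling distribution over operations
p1, _ = diversify([2.0, 0.0], [0.5], T=1.0, rng=random.Random(0))
p3, lam_hat = diversify([2.0, 0.0], [0.5], T=3.0, delta=0.3,
                        rng=random.Random(0))
```

With T = 3 the dominant operation keeps its rank but loses probability mass to the others, which is exactly the "more diverse operations, same ordering" behavior described above.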
We follow the setup adopted by AutoAugment ( Cubuk et al. , 2019 ) to use 4,000 training images for CIFAR-10 and CIFAR-100 , and 1,000 training images for SVHN . The remaining images are used as the validation set . We use Wide-ResNet-40-2 ( Zagoruyko & Komodakis , 2016 ) as the feature extraction network for all searches . We implement h as a linear layer and update the policy parameter γ after every 10 training steps using the Adam optimizer with learning rate 0.001 and a batch size of 128 . AdaAug-transfer . In the first experiment , we investigate how well the learned augmentation policy can transfer to unseen datasets . We search for the optimal augmentation policy on the CIFAR-100 dataset and use the learned policy to train with four fine-grained classification datasets : Oxford 102 Flowers ( Nilsback & Zisserman , 2008 ) , Oxford-IIIT Pets ( Em et al. , 2017 ) , FGVC Aircraft ( Maji et al. , 2013 ) , and Stanford Cars ( Krause et al. , 2013 ) . We compare the test error rate with AutoAugment , Fast AutoAugment , DADA , and RandAugment using their published policies for CIFAR-100 . For all the tested datasets , we compare the transfer results when training the ResNet-50 model ( He et al. , 2016 ) for 180 epochs from scratch and fine-tuning the ResNet-50 model pretrained on ImageNet for 100 epochs . We use the cosine learning rate decay with one annealing cycle ( Loshchilov & Hutter , 2017 ) , initial learning rate of 0.1 , weight decay 1e-4 and gradient clipping parameter 5 . AdaAug-direct . In the second experiment , we search for the optimal augmentation policy on a small subset of the target dataset and use the learned policy to train the full dataset . The purpose of the experiment is to demonstrate that while the AdaAug policy can adapt to other unseen datasets , it can also achieve competitive performance on the seen datasets with more training data . 
We compare AdaAug-direct with state-of-the-art AutoDA methods using the same evaluation datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and SVHN (Netzer et al., 2011). We test our method using Wide-ResNet-40-2 and Wide-ResNet-28-10. At inference time, we set the temperature T = 3 and magnitude perturbation δ = 0.3, and search for the number of operators k ∈ {1, 2, 3, 4} using a holdout validation set, like RandAugment (Cubuk et al., 2020). For the hyperparameters, we follow AutoAugment, PBA, and Fast AutoAugment where possible.

Table 1 and Table 2 show that the AdaAug policy outperforms the other baselines when training and fine-tuning the ResNet-50 model on the Flowers, Pets, Aircraft, and Cars datasets, respectively. These baselines apply the same augmentation policy to all datasets; such a policy may not be optimal for the target domain. In contrast, AdaAug adapts the augmentation policy to individual image classes and instances automatically. The AdaAug policy network applies different augmentation policies to unseen images according to their similarity to the classes that AdaAug has seen during the search. To further justify our claim, we show the distribution of the augmentation parameters for different tasks in Appendix A.5.4. The four datasets have many classes but few training examples per class. The negative effect of using non-adaptive data augmentation is likely to be more pronounced here than in a situation where the dataset has fewer classes but more examples per class.

4.2 ADAAUG-DIRECT. CIFAR-10 and CIFAR-100. The policies learned by AdaAug mostly achieve either comparable or better performance than the baselines for both WRN-40-2 and WRN-28-10 models on the CIFAR-10 and CIFAR-100 datasets (see Table 3).
We visualize the augmentation policy for CIFAR-10 by applying the policy to the validation data and averaging the predicted augmentation probability of each image for each class (see Figure 2). The policy includes all types of augmentation to some moderate degree. Among the operations, Flip dominates the policy. This is in line with manually selected augmentations, as horizontal flipping is widely found to improve prediction accuracy on CIFAR-10. The policy learned by AdaAug puts higher emphasis on Invert and Equalize, which are also reported by PBA. The focus on Brightness is also aligned with the policy found by AutoAugment. Although there are minor variations in the importance of some operations between the policies learned by AdaAug, AutoAugment, and PBA, their empirical performance is similar. SVHN. AdaAug performs comparably to the baselines on the core set of SVHN. We visualize the augmentation policy for SVHN in Figure 2. We find that AutoContrast, Invert, and Solarize receive large attention in SVHN. This makes sense because the specific color of the digit and its background is irrelevant to the prediction. This is consistent with the findings of AutoAugment and PBA. The Flip operation receives a significantly higher probability for the digits "0", "1", and "8". This is likely because these three digits appear similar after flipping. It shows that AdaAug captures not only a dataset-specific augmentation policy but also class-dependent augmentation, which cannot be achieved by the other baselines. In addition to the augmentation probability, we visualize the learned augmentation magnitude in Appendix A.5.2. ImageNet. We also validate our method on the large-scale ImageNet dataset. AdaAug improves the top-1 accuracy by 1% over the ResNet-50 baseline (see Appendix A.1.1).
Although some baselines like AutoAugment produce similar performance or slightly outperform our method in the AdaAug-direct experiment, AdaAug uses far less computational effort in searching for the policy. Specifically, AutoAugment takes 5,000 GPU hours to search for the CIFAR-10 policy, while AdaAug takes only 3.3 GPU hours on an old GeForce GTX 1080 GPU card (see Appendix A.4).

5 DISCUSSION. Instance-dependent augmentation. The AdaAug-direct experimental results show that the learned AdaAug policy can capture dataset- and class-dependent augmentations on CIFAR-10, CIFAR-100, and SVHN. We further investigate whether AdaAug can learn instance-dependent information based on image features. By design, AdaAug takes the output of the last layer of a CNN as the image representation and uses it to predict the augmentation parameters. The image representation contains the class information and potentially some image features. Empirically, we first examine whether AdaAug generates different augmentation policies for different instances even within the same class. In Appendix A.5.3, we plot the standard deviations of the predicted augmentation probabilities of the image instances for each class. We observe that even within the same class, the predicted augmentation policy differs slightly across instances. This is a clue that the AdaAug policy captures some instance-level augmentation information. Qualitatively, we show augmented examples of a flower image under the AdaAug policy in Figure 3. The input image is relatively darker than the other images. Apparently, AdaAug is aware of this property and applies more brightness-related augmentation to lighten up the image. We observe similar behaviour of AdaAug in other classes, but there is inadequate empirical support to conclude a general rule on how the network decides the instance-aware augmentation, as policies are learned in a data-driven way.
Quality of augmented data. We compare the augmented images from AdaAug with those from RandAugment under different augmentation strengths in Figure 3. In terms of augmentation diversity, RandAugment produces more variations of the input image. However, not all augmented images it produces are plausible. As RandAugment applies the augmentations uniformly, some augmented flower images show a strange color different from the original image. With an increasing number of operators and magnitude, the flower object is sometimes translated out of the frame, resulting in a black image. This may hurt learning performance, as color can be an important feature in classifying different species of flowers. For AdaAug, the augmented images are more visually plausible.
Ablation study. In this section, we study the effects of AdaAug-transfer on Oxford 102 Flowers and AdaAug-direct on CIFAR-10 using three alternative search configurations. First, does the use of a nonlinear projection deliver better performance? We replace the linear layer in h with a 2-layer MLP with a hidden size of 128 and ReLU activation. Second, we study whether the class-dependent and instance-dependent configuration improves model accuracy. We remove the class and instance information by replacing the policy network with a fixed vector, which decides the augmentation probabilities and magnitudes for the entire dataset. In the third setting, we mix the augmented images in the input space instead of the latent space and observe the effects. Our results show that the use of a class- and instance-adaptive augmentation policy contributes a larger improvement in AdaAug-direct. In AdaAug-transfer, using a nonlinear projection harms the prediction performance. A possible reason is that the nonlinear projection is more likely to overfit the search dataset and fail to generalize to unseen datasets. Moreover, combining the augmentation paths in the latent space learns a better policy (see Table 4).
In Appendix A.3, we provide further analysis of using AdaAug with different diversity parameters.
6 CONCLUSION
In this work, we propose a novel AutoDA approach, AdaAug, to learn class- and instance-adaptive augmentation policies efficiently. We demonstrate that the found policy transfers well to unseen datasets while achieving state-of-the-art results on the seen datasets. We provide evidence that the learned adaptive policy captures class- and instance-level information. We expect AdaAug to show further gains over existing baselines when applied to datasets with more dissimilar underlying augmentation rules among the data classes and with fewer training examples per class. It is also promising to investigate whether the proposed adaptive augmentation method can improve the performance of other computer vision and representation learning tasks.
REFERENCES
Antreas Antoniou, Amos J. Storkey, and Harrison Edwards. Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340, 2017.
Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In 16th European Conference on Computer Vision, ECCV 2020, volume 12346, pp. 213–229. Springer, 2020.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, volume 119, pp. 1597–1607. PMLR, 2020.
Tsz Him Cheung and Dit Yan Yeung. MODALS: Modality-agnostic automated data augmentation in the latent space. In 9th International Conference on Learning Representations, ICLR 2021, 2021.
Ekin D.
Cubuk , Barret Zoph , Dandelion Mané , Vijay Vasudevan , and Quoc V. Le . AutoAugment : Learning augmentation strategies from data . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR 2019 , pp . 113–123 . IEEE , 2019 . Ekin D. Cubuk , Barret Zoph , Jonathon Shlens , and Quoc V. Le . RandAugment : Practical automated data augmentation with a reduced search space . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR Workshops 2020 , pp . 3008–3017 . IEEE , 2020 . Terrance Devries and Graham W. Taylor . Improved regularization of convolutional neural networks with cutout . arXiv preprint arXiv:1708.04552 , 2017 . Alexey Dosovitskiy , Lucas Beyer , Alexander Kolesnikov , Dirk Weissenborn , Xiaohua Zhai , Thomas Unterthiner , Mostafa Dehghani , Matthias Minderer , Georg Heigold , Sylvain Gelly , Jakob Uszkoreit , and Neil Houlsby . An image is worth 16x16 words : Transformers for image recognition at scale . In 9th International Conference on Learning Representations , ICLR 2021 , 2021 . Yan Em , Feng Gao , Yihang Lou , Shiqi Wang , Tiejun Huang , and Ling-Yu Duan . Incorporating intra-class variance to fine-grained visual recognition . In 2017 IEEE International Conference on Multimedia and Expo , ICME 2017 , pp . 1452–1457 . IEEE , 2017 . Karan Goel , Albert Gu , Yixuan Li , and Christopher Re . Model patching : Closing the subgroup performance gap with data augmentation . In 9th International Conference on Learning Representations , ICLR 2021 , 2021 . Jean-Bastien Grill , Florian Strub , Florent Altché , Corentin Tallec , Pierre H. Richemond , Elena Buchatskaya , Carl Doersch , Bernardo Ávila Pires , Zhaohan Guo , Mohammad Gheshlaghi Azar , Bilal Piot , Koray Kavukcuoglu , Rémi Munos , and Michal Valko . Bootstrap your own latent - A new approach to self-supervised learning . In Advances in Neural Information Processing Systems 33 : Annual Conference on Neural Information Processing Systems 2020 , NeurIPS 2020 , 2020 . 
Ryuichiro Hataya , Jan Zdenek , Kazuki Yoshizoe , and Hideki Nakayama . Faster AutoAugment : learning augmentation strategies using backpropagation . In 16th European Conference on Computer Vision , ECCV 2020 , volume 12370 , pp . 1–16 . Springer , 2020 . Søren Hauberg , Oren Freifeld , Anders Boesen Lindbo Larsen , John W. Fisher III , and Lars Kai Hansen . Dreaming more data : Class-dependent distributions over diffeomorphisms for learned data augmentation . In Arthur Gretton and Christian C. Robert ( eds . ) , Proceedings of the 19th International Conference on Artificial Intelligence and Statistics , AISTATS 2016 , volume 51 of JMLR Workshop and Conference Proceedings , pp . 342–350 . JMLR , 2016 . Kaiming He , Xiangyu Zhang , Shaoqing Ren , and Jian Sun . Deep residual learning for image recognition . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR 2016 , pp . 770– 778 . IEEE , 2016 . Kaiming He , Haoqi Fan , Yuxin Wu , Saining Xie , and Ross B. Girshick . Momentum contrast for unsupervised visual representation learning . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR 2020 , pp . 9726–9735 . IEEE , 2020 . Dan Hendrycks , Norman Mu , Ekin Dogus Cubuk , Barret Zoph , Justin Gilmer , and Balaji Lakshminarayanan . AugMix : A simple data processing method to improve robustness and uncertainty . In 8th International Conference on Learning Representations , ICLR 2020 , 2020 . Daniel Ho , Eric Liang , Xi Chen , Ion Stoica , and Pieter Abbeel . Population Based Augmentation : efficient learning of augmentation policy schedules . In Proceedings of the 36th International Conference on Machine Learning , ICML 2019 , volume 97 , pp . 2731–2741 . PMLR , 2019 . Ganesh Jha and Hubert Cecotti . Data augmentation for handwritten digit recognition using generative adversarial networks . Multim . Tools Appl. , 79 ( 47 ) :35055–35068 , 2020 . J. Krause , Jun Deng , Michael Stark , and Li Fei-Fei . 
Collecting a large-scale dataset of fine-grained cars . In Second Workshop on Fine-Grained Visual Categorization , 2013 . A. Krizhevsky and G. Hinton . Learning multiple layers of features from tiny images . Technical report , University of Toronto , 2009 . Alex Krizhevsky , Ilya Sutskever , and Geoffrey E. Hinton . ImageNet classification with deep convolutional neural networks . In Advances in Neural Information Processing Systems 25 : 26th Annual Conference on Neural Information Processing Systems 2012. , pp . 1106–1114 , 2012 . Hankook Lee , Kibok Lee , Kimin Lee , Honglak Lee , and Jinwoo Shin . Improving transferability of representations via augmentation-aware self-supervision . Advances in Neural Information Processing Systems 34 : Annual Conference on Neural Information Processing Systems 2021 , NeurIPS 2021 , 2021 . Yonggang Li , Guosheng Hu , Yongtao Wang , Timothy M. Hospedales , Neil Martin Robertson , and Yongxing Yang . DADA : Differentiable automatic data augmentation . 16th European Conference on Computer Vision , ECCV 2020 , 2020 . Sungbin Lim , Ildoo Kim , Taesup Kim , Chiheon Kim , and Sungwoong Kim . Fast AutoAugment . In Advances in Neural Information Processing Systems 32 : Annual Conference on Neural Information Processing Systems 2019 , NeurIPS 2019 , pp . 6662–6672 , 2019 . Hanxiao Liu , Karen Simonyan , and Yiming Yang . DARTS : Differentiable architecture search . In 7th International Conference on Learning Representations , ICLR 2019 , 2019 . Ilya Loshchilov and Frank Hutter . SGDR : Stochastic gradient descent with warm restarts . In 5th International Conference on Learning Representations , ICLR 2017 , 2017 . Subhransu Maji , Esa Rahtu , Juho Kannala , Matthew B. Blaschko , and Andrea Vedaldi . Fine-grained visual classification of aircraft . arXiv preprint arXiv:1306.5151 , 2013 . Yuval Netzer , Tao Wang , Adam Coates , Alessandro Bissacco , Bo Wu , and Andrew Ng . Reading digits in natural images with unsupervised feature learning . 
In NIPS Workshop on Deep Learning and Un-supervised Feature Learning , 2011 . Maria-Elena Nilsback and Andrew Zisserman . Automated flower classification over a large number of classes . In Sixth Indian Conference on Computer Vision , Graphics & Image Processing , ICVGIP 2008 , pp . 722–729 . IEEE , 2008 . Alexander J. Ratner , Henry R. Ehrenberg , Zeshan Hussain , Jared Dunnmon , and Christopher Ré . Learning to compose domain-specific transformations for data augmentation . In Advances in Neural Information Processing Systems 30 : Annual Conference on Neural Information Processing Systems 2017 , pp . 3236–3246 , 2017 . Connor Shorten and Taghi M. Khoshgoftaar . A survey on image data augmentation for deep learning . Journal of Big Data , 6:60 , 2019 . Hugo Touvron , Matthieu Cord , Matthijs Douze , Francisco Massa , Alexandre Sablayrolles , and Hervé Jégou . Training data-efficient image transformers & distillation through attention . arXiv preprint arXiv:2012.12877 , 2020 . Toan Tran , Trung Pham , Gustavo Carneiro , Lyle J. Palmer , and Ian D. Reid . A Bayesian data augmentation approach for learning deep models . In Advances in Neural Information Processing Systems 30 : Annual Conference on Neural Information Processing Systems 2017 , pp . 2797–2806 , 2017 . Tete Xiao , Xiaolong Wang , Alexei A Efros , and Trevor Darrell . What should not be contrastive in contrastive learning . In 9th International Conference on Learning Representations , ICLR 2021 , 2021 . Daiki Yorioka , Hyunho Kang , and Keiichi Iwamura . Data augmentation for deep learning using generative adversarial networks . In 9th IEEE Global Conference on Consumer Electronics , GCCE 2020 , pp . 516–518 . IEEE , 2020 . Sangdoo Yun , Dongyoon Han , Sanghyuk Chun , Seong Joon Oh , Youngjoon Yoo , and Junsuk Choe . Cutmix : Regularization strategy to train strong classifiers with localizable features . In 2019 IEEE International Conference on Computer Vision , ICCV 2019 , pp . 6022–6031 . IEEE , 2019 . 
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, BMVC 2016. BMVA Press, 2016.
Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, 2018.
Xinyu Zhang, Qiang Wang, Jian Zhang, and Zhao Zhong. Adversarial AutoAugment. In 8th International Conference on Learning Representations, ICLR 2020, 2020.
Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient GAN training. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 2020.
Fengwei Zhou, Jiawei Li, Chuanlong Xie, Fei Chen, Lanqing Hong, Rui Sun, and Zhenguo Li. MetaAugment: Sample-aware data augmentation policy learning. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI, pp. 11097–11105. AAAI Press, 2021.
A APPENDIX
A.1 ADDITIONAL EXPERIMENTS
A.1.1 LARGE-SCALE DATASET
On the large-scale ImageNet dataset, AdaAug improves the top-1 accuracy by 1% over the ResNet-50 baseline (see Table 5). The performance gain is similar to that of previous AutoDA methods and validates the positive effect of AdaAug on complex datasets.
A.1.2 ARCHITECTURE TRANSFER
We also provide the experimental results of using the learned augmentation policy to train the Shake-Shake (26 2x96d) model on Reduced CIFAR-10 and Reduced SVHN in Table 6.
A.2 ADDITIONAL ABLATION STUDY
To clarify the improvements of AdaAug, we compare its performance under different settings in Table 7: Simple, where standard data augmentation is applied; Random, where AdaAug is applied with a randomly initialized hγ while keeping the diversity parameters the same; AdaAug (w/o diversity), where AdaAug is applied without the diversity parameters; and AdaAug, where AdaAug is applied with the learned hγ and the diversity parameters.
A.3 FINE-TUNING OF THE DIVERSITY PARAMETERS
The following experiments study the sensitivity of the temperature T and magnitude perturbation δ on the Flower dataset by varying T and δ around the default values used in our original experiments. Fine-tuning the diversity parameters reduces the test-set error rate of AdaAug-transfer on the Flower dataset from 3.63 to 3.49.
A.4 EFFICIENCY OF POLICY SEARCH
We compare the GPU hours needed to search for the augmentation policy across different automated data augmentation methods in Table 9. Among the baselines, AdaAug is more efficient than AutoAugment, PBA, and Fast AutoAugment.
A.5 MORE ANALYSIS ON ADAAUG POLICY LEARNING
A.5.1 CONVERGENCE OF POLICY LEARNING
Here, we provide some insights and empirical evidence for the convergence of policy training. In particular, Figure 4 shows the training and validation losses when learning the CIFAR-10 augmentation policy. Both losses converge to fixed values towards the end of training. In addition, we visualize the change of the augmentation parameters (p and λ) in Figure 5. The magnitude parameters start at smaller values and converge towards the end of training. For the augmentation probability, most of the candidates stabilize after a certain number of epochs, while some others are updated more frequently. This evidence supports the convergence of our proposed policy training method.
We leave a more thorough analysis of policy convergence as future work.
A.5.2 ANALYSIS OF LEARNED AUGMENTATION MAGNITUDE
Complementing the augmentation probability in Figure 2, Figure 6 shows the augmentation magnitude for CIFAR-10 and SVHN. We observe that the policy magnitude λ also shows slight variations among different classes and instances. Although the observation is less prominent and harder to interpret than the augmentation probability p, the learned augmentation magnitude adapts to different data samples.
A.5.3 VARIANCE OF LEARNED AUGMENTATION POLICY GROUPED BY CLASS
In Figure 7, we plot the standard deviations of the predicted augmentation probabilities of the image instances grouped by their class labels. We observe that even within the same class, the predicted augmentation policy differs across instances, a clue that the AdaAug policy captures some instance-level augmentation information.
A.5.4 DISTRIBUTIONS OF THE AUGMENTATION PARAMETERS IN ADAAUG-TRANSFER
In Figure 8, we show the distributions of the augmentation parameters when transferring the learned policy to the Flower, Pet, Car, and Aircraft datasets. The distributions of the augmentation parameters show slight differences between tasks. In particular, we observe that colour transformations such as Colour, Invert, and Solarize are less preferred in the Flower and Pet datasets, while shearing operations are relatively more favourable in the Car and Aircraft datasets. The observation of using fewer colour transformations for the Flower dataset aligns with the findings of Lee et al. (2021). Although some differences in the augmentation distributions may be less prominent and harder to interpret than in the AdaAug-direct cases, the policy shows its adaptation to different tasks.
A.6 LIMITATIONS
Although AdaAug is differentiable and efficient in search, it requires forming different augmentation paths and passing the images through each path. Compared to standard training, where we forward b images in one mini-batch, AdaAug processes b · |T| images in the exploration pass. At inference time, AdaAug keeps a pre-trained policy network (πθ) to augment the new dataset. This adds extra computational effort compared to other AutoDA policies, which may be a concern if the image resolution and model size are large but the computational resources are limited.
This paper introduces a data augmentation method, AdaAug, that learns adaptive augmentation policies in a class-dependent and potentially instance-dependent manner to improve the generalization capability of deep learning models. Concretely, it proposes an efficient exploitation-exploration workflow to search for an augmentation policy that optimizes the generalization performance. Experimental results on datasets under transfer and direct settings show the efficacy of this approach.
AdaAug: Learning Class- and Instance-adaptive Data Augmentation Policies
1 INTRODUCTION
Data augmentation is a common way to enhance the robustness of deep learning models by augmenting the datasets used for model training. Applying popular data augmentation operations such as randomized cropping, horizontal flipping, and color shifting to image data has become a standard procedure in modern image recognition models (Krizhevsky et al., 2012; Shorten & Khoshgoftaar, 2019). Over the years, various augmentation methods using more advanced operations have been proposed. Examples include occlusion-based operations like Cutout (Devries & Taylor, 2017), which randomly masks part of an image to avoid overfitting, label-mixing operations like CutMix (Yun et al., 2019), which replaces the occluded part in Cutout with a patch from a different image, and Mixup (Zhang et al., 2018), which interpolates two images along with their one-hot encoded labels. While these hand-crafted data augmentation methods can improve model generalization, the operations and their corresponding parameters are usually chosen manually to make the augmentation scheme effective for the task at hand. Despite the manual effort involved, an augmentation policy that is useful for a particular dataset often does not generalize well to other datasets (Cubuk et al., 2019). To tackle this problem, a series of recent studies has automated the process of finding an effective data augmentation policy for a target dataset. These automated data augmentation (AutoDA) methods show impressive results on several benchmark image datasets (Cubuk et al., 2019; Ho et al., 2019; Lim et al., 2019; Cubuk et al., 2020; Hataya et al., 2020; Hendrycks et al., 2020; Li et al., 2020; Cheung & Yeung, 2021). The manual tuning of augmentation parameters can also be addressed by generative approaches, such as training a generative adversarial network (GAN) to create new artificial images directly (Antoniou et al.
, 2017; Tran et al., 2017; Jha & Cecotti, 2020; Yorioka et al., 2020; Zhao et al., 2020). However, these generative models are often hard to implement and computationally expensive to train. In supervised learning, data augmentation can be viewed as a simple way to inject inductive biases, such as translational invariance, into a classifier. With the recent advances in representation learning, data augmentation has become a major tool for learning good representations. For example, there is an increasing trend to replace convolutional neural networks (CNNs) with transformers in computer vision (Carion et al., 2020; Dosovitskiy et al., 2021). Being a more generic architecture, transformers do not come with built-in inductive biases like the translational invariance of CNNs, and therefore require more data to learn effective invariance properties. Data augmentation serves as an effective way to achieve this goal (Touvron et al., 2020). In addition, recent self-supervised models rely heavily on data augmentation to create different views of the same data and learn robust representations through contrastive learning (Chen et al., 2020; Grill et al., 2020; He et al., 2020). However, an improper choice of augmentation operations, especially with excessive strength, may impose the wrong inductive biases on the models and lead to performance degradation (Chen et al., 2020; Xiao et al., 2021). Consequently, there is a need to enrich current data augmentation methods, especially with a better choice of the data augmentation policy, to take advantage of the recent advances in representation learning. Previous AutoDA methods attempt to find an optimal augmentation policy for a given dataset. However, most of the discovered policies are not adaptive to variations within the dataset.
Even in a dataset for which an augmentation policy is found to be effective, the same augmentation scheme is applied equally to all classes. This limits the data diversity that augmentation can bring. For example, in digit classification, flip-invariance is useful for the digits "0", "1", and "8" but not for the other digits; in shape classification, shearing-invariance is useful for "triangles" but not for "rectangles". None of the previous AutoDA methods can learn an adaptive class-dependent augmentation policy. To address this limitation, we propose AdaAug, an AutoDA method that learns a class-dependent and potentially instance-dependent augmentation policy efficiently. Despite the attractive potential of such an adaptive scheme, actually learning an adaptive augmentation policy poses at least two major technical challenges. First, the search space for per-class, per-instance augmentation policies is very large, making it intractable to maintain an individual policy for each class or even each realization of the input data. Second, the gradient information of the augmentation parameters is hard to obtain, as the operation selection process and the transformations are non-differentiable. Optimizing the augmentation policy efficiently is thus a challenging problem. In this work, AdaAug employs a recognition model to learn the underlying augmentation policy for each data instance and takes an alternating exploit-and-explore procedure to update the augmentation policy using a differentiable workflow. In the exploitation pass, AdaAug trains a classifier for a number of steps; in the exploration pass, it validates the classifier and updates the policy to minimize the validation loss. The intuition behind this design is that such an alternating procedure can learn augmentations that help the trained model generalize to unseen validation data.
For example, rotational invariance would be found to be a desirable inductive bias for the model to learn if the validation set contains similar but rotated versions of the training images. Our goal is to capture such information and, in this case, assign a higher probability to using rotation for augmentation. An application scenario is a computer vision task for drones, where the unseen data may contain various kinds of rotated images. To summarize, our contributions are as follows:
• We introduce a novel AutoDA method to learn a class-dependent and potentially instance-dependent augmentation policy for each data instance.
• We propose a differentiable workflow to search for the augmentation policy efficiently.
• We demonstrate that the policies learned by our method transfer better to unseen datasets, such as Oxford Flowers, Oxford-IIIT Pet, FGVC Aircraft, and Stanford Cars, when compared to other AutoDA baselines.
• We demonstrate state-of-the-art performance on the CIFAR-10, CIFAR-100, and SVHN datasets.
2 RELATED WORK
Automated Data Augmentation. Several AutoDA methods have been proposed to compose augmentation operations automatically. AutoAugment (Cubuk et al., 2019) learns to generate the probability and magnitude of applying different augmentation operations as a policy using reinforcement learning (RL). It alternately generates an augmentation policy to train a child model and updates the policy generator using the validation performance as the reward. Since it is computationally expensive to train the child models repeatedly, several techniques have subsequently been proposed to reduce the search effort. Fast AutoAugment (Lim et al., 2019) uses Bayesian optimization (BO) to tune the augmentation parameters. Population-based Augmentation (PBA) (Ho et al.
, 2019 ) exploits population-based training ( PBT ) to search for an optimal augmentation policy schedule by training multiple parallel child models using an evolutionary approach . RandAugment ( Cubuk et al. , 2020 ) applies the augmentation operations uniformly and reduces the search space significantly by covering only the number of operators and the global augmentation magnitude . Instead of using the validation performance to evaluate the augmentation quality , Adversarial AutoAugment ( Zhang et al. , 2020 ) uses an adversarial objective to learn the augmentation policy . MODALS ( Cheung & Yeung , 2021 ) utilizes PBA to search for an optimal latent space augmentation policy and augment data from any modality not limited to image data . Differentiable Data Augmentation . In addition to using RL , BO and PBT to optimize the augmentation parameters , there exist related methods that modify the otherwise discrete search procedure to make it end-to-end differentiable . This results in a more efficient optimization procedure and a more precise policy than RL and PBT as the search space is continuous . In AdaAug , part of our contribution is to design a differentiable workflow to learn the augmentation policy . For previous differentiable augmentation approaches , Faster AutoAugment ( Hataya et al. , 2020 ) proposes a differentiable Relaxed Bernoulli distribution to sample the candidate augmentation functions and estimate the gradients of the non-differentiable augmentation magnitude using the Stop Gradient estimator . Specifically , it optimizes a density matching loss between the training and validation data . DADA ( Li et al. 
, 2020) differentiates through the discrete policy sampling process using the Gumbel-Softmax trick. While AutoDA and differentiable data augmentation have been shown to improve the generalization performance of deep learning models, the learned augmentation policy is usually applied uniformly to the whole dataset, meaning that all classes and instances share the same augmentation policy. In contrast, each class, and even each data instance, receives an adaptive policy in our proposed method.
Adaptive Data Augmentation. Attempts have been made to apply adaptive data augmentation at a class or subgroup level. Hauberg et al. (2016) proposed a statistical approach that models the transformations within each class and uses the resulting statistical models to augment the dataset. The approach shows improvement on the small MNIST dataset and its variants. However, the augmentation operations are limited to spatial transformations. In addition, observations of the data must be locatable and alignable, making it difficult to extend to most other computer vision tasks. More recently, CAMEL (Goel et al., 2021) adopts a finer data-generation method by patching classifiers that fail on a subgroup of a class. It uses CycleGAN to learn different variations of the same training data within a subgroup. However, CAMEL requires specifying the subgroup information manually and assumes that the subgroups only exist within the same class. MetaAugment (Zhou et al., 2021) learns a sample-wise weighting scheme and a global probability parameter to control the sampling of augmentation transformations. In AdaAug, the learned policy automatically captures class-dependent transformations as well as instance-dependent information, such as the light intensity of an image, across different classes.
3 ADAAUG
3.1 SEARCH SPACE
Let T be a set of augmentation operations, where τj denotes the j-th operation (e.g., "rotation") in the set.
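As background for the Gumbel-Softmax trick mentioned above (used by DADA, not by AdaAug itself), the following numpy sketch shows how adding Gumbel noise to categorical logits and applying a temperature-controlled softmax yields a differentiable, approximately one-hot sample. All names here are illustrative.

```python
import numpy as np

# Gumbel-Softmax sampling: perturb the logits with Gumbel(0, 1) noise and
# apply a softmax with temperature tau. Low tau gives near-one-hot samples
# while keeping the whole computation differentiable in the logits.
def gumbel_softmax(logits, tau, rng):
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))          # Gumbel(0, 1) noise
    z = (logits + g) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.2])    # a toy categorical distribution
sample = gumbel_softmax(np.log(probs), tau=0.5, rng=rng)
assert np.isclose(sample.sum(), 1.0)
```

With a small temperature the output concentrates most of its mass on a single entry, approximating a discrete draw from the categorical distribution while remaining usable in backpropagation.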
We formulate an augmentation policy as the probability p and magnitude λ of applying the augmentation operations. Here, p is a probability vector with entries pj ∈ [0, 1] satisfying ∑_{j=1}^{|T|} pj = 1, and λ is a vector with entries λj ∈ [0, 1], where pj and λj are the probability and magnitude, respectively, of applying the operation τj. Mathematically, τj : X → X is a mapping from the input space X to itself. For an image x ∈ X, τj transforms it with the magnitude parameter λj that specifies the strength of the transformation (e.g., degree of rotation), i.e., x ↦ τj(x; λj). Note that some operations, like flipping, do not depend on the magnitude parameter. In a training pass, given an input x, an augmentation policy (p, λ), and the number of operators k, we sample k operations according to p and apply them with their corresponding magnitudes specified by λ:

T(x; p, λ) = τj(x; λj), j ∼ p
x̂ = T^(k) ◦ · · · ◦ T^(1)(x; p, λ)    (1)

Here, T^(t), 1 ≤ t ≤ k, denotes applying the t-th sampled operation. Our goal is to learn an augmentation policy function πθ : x ↦ (p, λ) that generates an adaptive augmentation policy which optimizes the generalization performance and depends on the input data (see Figure 1 and Algorithm 1).
3.2 SEARCH ALGORITHM
Exploitation. AdaAug uses a feature extraction network fα : X → Z to map the input space to a latent space, a dense layer gβ : Z → Y to map the latent space to the label space, and a projection function hγ : Z → P × Λ to map a latent representation to a probability and magnitude space, where P = { p ∈ [0, 1]^|T| : ‖p‖1 = 1 } and Λ = [0, 1]^|T|. In our case, the functions f, g, h (with subscripts dropped for notational simplicity) are implemented as neural networks with weights α, β, γ, respectively.
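Equation (1) can be sketched in a few lines of numpy. The two toy operations below (a brightness shift and a horizontal flip) stand in for the real augmentation set T and are purely illustrative; the point is the sampling of k operation indices according to p and their composition with the magnitudes in λ.

```python
import numpy as np

def brightness(x, lam):
    # shift pixel values by an amount scaled by the magnitude lam in [0, 1]
    return np.clip(x + (lam - 0.5), 0.0, 1.0)

def hflip(x, lam):
    # flipping ignores its magnitude parameter, as noted in the text
    return x[:, ::-1]

OPS = [brightness, hflip]   # a toy stand-in for the operation set T

def apply_policy(x, p, lam, k, rng):
    # sample k operation indices j ~ p and compose tau_j(x; lam_j), Eq. (1)
    idx = rng.choice(len(OPS), size=k, p=p)
    for j in idx:
        x = OPS[j](x, lam[j])
    return x

rng = np.random.default_rng(0)
x = rng.random((8, 8))               # a toy 8x8 "image" with values in [0, 1)
p = np.array([0.7, 0.3])             # per-operation probabilities, sum to 1
lam = np.array([0.9, 0.5])           # per-operation magnitudes
x_hat = apply_policy(x, p, lam, k=2, rng=rng)
assert x_hat.shape == x.shape
```

In AdaAug the vectors p and λ would come from the learned policy network πθ(x) rather than being fixed as here.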
A softmax is applied to the first half of the output of h to obtain the probabilities, and a sigmoid is applied to the other half to obtain the magnitudes. The policy network πθ(x) then connects the class and image information to the augmentation policy via h ◦ f(x) with parameters θ = (γ, α). In an exploitation pass, given training data x, the policy function generates the data-dependent augmentation policy (p, λ) = πθ(x) and augments x into x̂ using Equation (1). Here, x̂ is treated as a new unseen training example and is used to train the classification model g ◦ f(x̂) by minimizing the cross-entropy loss: min_{α,β} Ltrain(α, β). Note that there is a discrete sampling procedure in Equation (1); during this update, the gradient of α does not involve the computations in the policy network.
Exploration. In the exploration pass, AdaAug first generates the augmentation policy (p, λ) = πθ(x) for the validation data x. It then applies all |T| augmentation operations to x separately with the corresponding magnitudes in λ. Each augmented copy of the validation data is passed through the feature extraction network f to obtain its latent representation, and the latent representations are summed, weighted by the entries of the probability vector p. The mixed representation is passed to g to compute the predicted label:

ŷ = g( ∑_{j=1}^{|T|} pj · f ◦ τj(x; λj) ),  (p, λ) = πθ(x)    (2)

AdaAug updates the projection parameters γ to minimize the validation loss: min_γ Lvalid(α, β, γ). As we make no assumption that the provided augmentation operations are differentiable, we follow prior approaches (Bengio et al., 2013; Li et al., 2020) and use a straight-through gradient estimator to optimize the augmentation magnitudes. Specifically, the gradient of the augmented data with respect to each magnitude is taken to be ∂x̂h,w / ∂λj = 1 for every pixel value x̂h,w.
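The softmax/sigmoid split of the projection output can be sketched as follows. The linear weights here are random placeholders for the learned parameters γ, and the dimensions are arbitrary; the sketch only shows how one output vector yields a valid probability vector p and magnitudes λ in [0, 1].

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def project(latent, W, num_ops):
    # linear projection h_gamma, then split: first |T| outputs -> softmax
    # probabilities, remaining |T| outputs -> sigmoid magnitudes
    out = W @ latent
    p = softmax(out[:num_ops])
    lam = sigmoid(out[num_ops:])
    return p, lam

rng = np.random.default_rng(0)
num_ops, dim = 4, 16                       # toy |T| and latent dimension
W = rng.standard_normal((2 * num_ops, dim))  # placeholder for learned gamma
z = rng.standard_normal(dim)               # placeholder latent f(x)
p, lam = project(z, W, num_ops)
assert np.isclose(p.sum(), 1.0)
assert np.all((lam >= 0) & (lam <= 1))
```

By construction, p lies in the simplex P and λ in Λ = [0, 1]^|T|, matching the search space defined above.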
The gradient can then be calculated by:

∂L_valid/∂λ_j = Σ_{w,h} (∂L_valid/∂x̂_{w,h}) · (∂x̂_{w,h}/∂λ_j) = Σ_{w,h} ∂L_valid/∂x̂_{w,h}    (3)

Algorithm 1 Search algorithm
1: procedure SEARCH(D_train, D_valid, T, k, m, r)
2:   Initialize α, β, γ
3:   for r steps do    # Exploitation
4:     Sample a mini-batch d_train ∈ D_train
5:     for (x, y) ∈ d_train do
6:       (p, λ) = π_θ(x)
7:       x̂ = augment(x)    ⊲ augment computes Eq. 1
8:       ŷ = g ∘ f(x̂)
9:       L_train(α, β) = CrossEntropyLoss(ŷ, y)
10:    (α, β) ← argmin_{α,β} L_train(α, β)
11:    if r is divisible by m then    # Exploration
12:      Sample a mini-batch d_valid ∈ D_valid
13:      for (x, y) ∈ d_valid do
14:        (p, λ) = π_θ(x)
15:        ŷ = explore(x)    ⊲ explore computes Eq. 2
16:        L_valid(α, β, γ) = CrossEntropyLoss(ŷ, y)
17:      γ ← argmin_γ L_valid(α, β, γ)
   return α, γ

Relation to Neural Architecture Search (NAS). Tuning the augmentation policy parameters bears some similarity to optimizing the network architecture weights in DARTS from the NAS literature (Liu et al., 2019). DARTS prescribes different computation paths for different operation cells and relaxes the computation to a mixture of the operations weighted by learnable weights for each path. With w as the model parameters and α as the weights for the computation paths, the search algorithm solves a bi-level optimization problem:

min_α L_valid(w*(α), α)    s.t.    w*(α) = argmin_w L_train(w, α)    (4)

While DARTS optimizes the architecture weights, AdaAug optimizes the projection parameter γ that decides the augmentation weights and magnitudes in the exploration. DARTS solves the optimization using a first-order, finite-difference approximation of the architecture gradient. In AdaAug, we instead sample the augmentation operations and treat the augmented data as unseen training data X̂_train.
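The latent mixture of Equation (2) and the exploit/explore alternation of Algorithm 1 can be sketched as follows (a minimal sketch with hypothetical helper names, not the paper's code; `exploit_step` and `explore_step` stand in for the actual gradient updates of (α, β) and γ):

```python
def explore_logits(x, ops, p, lam, f, g):
    """Equation (2): pass each of the |T| augmented views of x through f,
    mix the latent features as a convex combination weighted by p, classify."""
    feats = [f(op(x, m)) for op, m in zip(ops, lam)]
    mixed = sum(pj * z for pj, z in zip(p, feats))
    return g(mixed)

def search(train_batches, valid_batches, exploit_step, explore_step, m):
    """Skeleton of Algorithm 1: every step updates the model weights (α, β)
    on augmented training data; every m-th step additionally updates the
    projection parameters γ on a validation mini-batch."""
    for step, batch in enumerate(train_batches, start=1):
        exploit_step(batch)                    # exploitation: min L_train(α, β)
        if step % m == 0:
            explore_step(next(valid_batches))  # exploration: min L_valid(γ)
```

Mixing in the latent space keeps the exploration pass differentiable in p even though sampling a single operation would not be.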
By absorbing the augmentation parameters into the training dataset, we avoid the complex bi-level optimization and simplify the exploitation procedure into training a standard classifier:

min_γ L_valid(α*, β*, γ; X_valid)    s.t.    (α*, β*) = argmin_{α,β} L_train(α, β; X̂_train)    (5)

Relation to Density Matching. Data augmentation can be regarded as a density matching problem between the training and validation data (Ratner et al., 2017; Tran et al., 2017; Hataya et al., 2020; Lim et al., 2019). From this perspective, AdaAug improves model generalization by matching the density of D_train with the density of the augmented D_valid. In the outer optimization objective in Equation (5), AdaAug minimizes the classification loss with respect to the augmentation parameters γ over the same optimal model parameters α*, β* learned from D̂_train. In so doing, it approximately reduces the distance between the densities of the augmented D_valid and D_train.

3.3 INFERENCE

Like most automated augmentation pipelines (Cubuk et al., 2019; Ho et al., 2019; Lim et al., 2019; Hataya et al., 2020; Li et al., 2020), AdaAug searches for the augmentation policy on a small dataset using a small model and then applies the learned policy network π_θ to train a larger dataset or model. We refer to the process of applying a searched policy to augment a new dataset as inference time. Regarding this workflow, RandAugment argues that the mismatch between the search-time and inference-time settings makes the augmentation policy unable to adjust its regularization strength to different target datasets and models (Cubuk et al., 2020), and therefore proposes to search for the global magnitude and number of operators in each case. To address this concern, AdaAug utilizes three diversity parameters to fine-tune the regularization strength of the found policy for large datasets and models.
First, the number of operators k is set to 1 at search time, but a larger value of k can be used at inference time. Second, to encourage the selection of more diverse operations, a temperature parameter T is introduced into the softmax function:

p_i = exp[h(z_i)/T] / Σ_{j=1}^{|T|} exp[h(z_j)/T]

Setting a larger T allows operations with lower probability to be sampled more often while preserving the ordering of the probabilities. Last, AdaAug perturbs the magnitude value with a parameter δ, so the perturbed magnitude λ̂ is sampled as λ̂ ∼ Uniform(λ − δ, λ + δ). This allows the augmented data to show slight variations even when the same operation is applied to the same input. In practice, we can grid-search these three diversity parameters together with the other hyperparameters using a holdout validation set.

4 EXPERIMENTS AND RESULTS

In this section, we describe our experimental design and then present the results. We evaluate the empirical performance of AdaAug in two experiments: AdaAug-transfer and AdaAug-direct. We select comparison baselines that use the validation performance to learn the augmentation policy. For all experiments, we report the average test-set error rate as the performance metric; each model is evaluated three times with different random initializations.

Augmentation operations. We match the operations adopted by AutoAugment. In addition to the 16 operations proposed previously (ShearX, ShearY, TranslateX, TranslateY, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, Cutout, and Sample Pairing), we add the Identity operation for not applying data augmentation. For the simple baseline, we apply random horizontal flip, color jittering, color normalization, and Cutout with a 16 × 16 patch size. Our method and the other baselines apply the found policy on top of these standard augmentations.

Policy search.
We follow the setup adopted by AutoAugment (Cubuk et al., 2019), using 4,000 training images for CIFAR-10 and CIFAR-100 and 1,000 training images for SVHN; the remaining images are used as the validation set. We use Wide-ResNet-40-2 (Zagoruyko & Komodakis, 2016) as the feature extraction network for all searches. We implement h as a linear layer and update the policy parameter γ after every 10 training steps using the Adam optimizer with learning rate 0.001 and a batch size of 128.

AdaAug-transfer. In the first experiment, we investigate how well the learned augmentation policy transfers to unseen datasets. We search for the optimal augmentation policy on the CIFAR-100 dataset and use the learned policy to train on four fine-grained classification datasets: Oxford 102 Flowers (Nilsback & Zisserman, 2008), Oxford-IIIT Pets (Em et al., 2017), FGVC Aircraft (Maji et al., 2013), and Stanford Cars (Krause et al., 2013). We compare the test error rate with AutoAugment, Fast AutoAugment, DADA, and RandAugment using their published policies for CIFAR-100. For all the tested datasets, we compare the transfer results when training the ResNet-50 model (He et al., 2016) for 180 epochs from scratch and when fine-tuning the ResNet-50 model pretrained on ImageNet for 100 epochs. We use cosine learning rate decay with one annealing cycle (Loshchilov & Hutter, 2017), an initial learning rate of 0.1, weight decay 1e-4, and gradient clipping parameter 5.

AdaAug-direct. In the second experiment, we search for the optimal augmentation policy on a small subset of the target dataset and use the learned policy to train on the full dataset. The purpose of this experiment is to demonstrate that while the AdaAug policy can adapt to other unseen datasets, it also achieves competitive performance on the seen datasets with more training data.
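The cosine learning rate schedule with one annealing cycle used in the transfer setup above can be sketched with the standard formula (a generic sketch, not the authors' exact implementation):

```python
import math

def cosine_lr(step, total_steps, lr0=0.1):
    """Cosine decay with a single annealing cycle: starts at lr0, anneals to 0."""
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * step / total_steps))
```

With 180 epochs from scratch, `total_steps` would be the total number of training iterations and `lr0` the initial learning rate of 0.1.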
We compare AdaAug-direct with state-of-the-art AutoDA methods using the same evaluation datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and SVHN (Netzer et al., 2011). We test our method using Wide-ResNet-40-2 and Wide-ResNet-28-10. At inference time, we set the temperature T = 3 and magnitude perturbation δ = 0.3, and search for the number of operators k ∈ {1, 2, 3, 4} using a holdout validation set, like RandAugment (Cubuk et al., 2020). For the other hyperparameters, we follow AutoAugment, PBA, and Fast AutoAugment where possible.

4.1 ADAAUG-TRANSFER

Table 1 and Table 2 show that the AdaAug policy outperforms the other baselines when training and fine-tuning the ResNet-50 model on the Flowers, Pets, Aircraft, and Cars datasets, respectively. These baselines apply the same augmentation policy to all datasets; such a policy may not be optimal for the target domain. In contrast, AdaAug adapts the augmentation policy to individual image classes and instances automatically. The AdaAug policy network applies different augmentation policies to unseen images according to their similarity to the classes that AdaAug has seen during the search. To further support this claim, we show the distribution of the augmentation parameters for different tasks in Appendix A.5.4. The four datasets have many classes but few training examples per class; the negative effect of using non-adaptive data augmentation is likely to be more pronounced here than in a situation where the dataset has fewer classes but more examples per class.

4.2 ADAAUG-DIRECT

CIFAR-10 and CIFAR-100. The policies learned by AdaAug mostly achieve comparable or better performance than the baselines for both WRN-40-2 and WRN-28-10 models on the CIFAR-10 and CIFAR-100 datasets (see Table 3).
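The inference-time diversity controls used here, the temperature T and the magnitude perturbation δ from Section 3.3, can be sketched as follows; the function names are illustrative, not from the paper's code:

```python
import math
import random

def tempered_probs(logits, T):
    """Temperature-scaled softmax: a larger T flattens p, so low-probability
    operations are sampled more often while their ordering is preserved."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def perturb_magnitude(lam, delta, rng=random):
    """Sample the perturbed magnitude from Uniform(λ - δ, λ + δ),
    clipped to the valid range [0, 1]."""
    return min(1.0, max(0.0, rng.uniform(lam - delta, lam + delta)))
```

With T = 3 and δ = 0.3 as used in AdaAug-direct, the sampled operations become more diverse and the same image can receive slightly different magnitudes on each pass.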
We visualize the augmentation policy for CIFAR-10 by applying the policy to the validation data and averaging the predicted augmentation probability of each image for each class (see Figure 2). The policy contains all types of augmentation to some moderate degree. Among the operations, Flip dominates the policy. This is in line with manually selected augmentations, as horizontal flipping is commonly found to improve prediction accuracy on CIFAR-10. The policy learned by AdaAug puts higher emphasis on Invert and Equalize, which is also reported by PBA, and its focus on Brightness is aligned with the policy found by AutoAugment. Although there are minor variations in the importance of some operations among the policies learned by AdaAug, AutoAugment, and PBA, their empirical performance is similar.

SVHN. AdaAug performs comparably to the baselines on the core set of SVHN. We visualize the augmentation policy for SVHN in Figure 2. We find that AutoContrast, Invert, and Solarize receive large attention in SVHN. This makes sense because the specific color of the number and background is irrelevant to the prediction, and it is consistent with the findings of AutoAugment and PBA. The Flip operation receives a significantly higher probability for the digits "0", "1", and "8", likely because these three digits appear similar after flipping. This shows that AdaAug captures not only a dataset-specific augmentation policy but also class-dependent augmentation, which cannot be achieved by the other baselines. In addition to the augmentation probability, we visualize the learned augmentation magnitudes in Appendix A.5.2.

ImageNet. We also validate our method on the large-scale ImageNet dataset. AdaAug improves top-1 accuracy by 1% over the ResNet-50 baseline (see Appendix A.1.1).
Although some baselines such as AutoAugment produce similar performance or slightly outperform our method in the AdaAug-direct experiment, AdaAug uses far less computational effort to search for the policy. Specifically, AutoAugment takes 5,000 GPU hours to search for the CIFAR-10 policy, while AdaAug takes only 3.3 GPU hours on an older GeForce GTX 1080 GPU (see Appendix A.4).

5 DISCUSSION

Instance-dependent augmentation. The AdaAug-direct experimental results show that the learned AdaAug policy can capture dataset- and class-dependent augmentations on CIFAR-10, CIFAR-100, and SVHN. We further investigate whether AdaAug can learn instance-dependent information based on image features. By design, AdaAug takes the output of the last layer of a CNN as the image representation and uses it to predict the augmentation parameters; this representation contains the class information and potentially some image features. Empirically, we first examine whether AdaAug generates different augmentation policies for different instances even within the same class. In Appendix A.5.3, we plot the standard deviations of the predicted augmentation probabilities of the image instances for each class. Even within the same class, the predicted augmentation policy differs slightly across instances, which suggests that the AdaAug policy captures some instance-level augmentation information. Qualitatively, we show the augmented examples of a flower image under the AdaAug policy in Figure 3. The input image is relatively darker than the other images; AdaAug appears to be aware of this property and applies more brightness-related augmentation to lighten the image. We observe similar behaviour of AdaAug in other classes, but there is inadequate empirical support to conclude a general rule for how the network decides the instance-aware augmentation, as policies are learned in a data-driven way.
Quality of augmented data. We compare the augmented images from AdaAug with those from RandAugment under different augmentation strengths in Figure 3. In terms of augmentation diversity, RandAugment produces more variations of the input image. However, not all of the augmented images it produces are plausible. As RandAugment applies the augmentations uniformly, some augmented flower images show a strange color different from the original image, and with an increasing number of operators and larger magnitudes, the flower object is sometimes translated out of the frame, resulting in a black image. This may hurt learning performance, as color can be an important feature in classifying different species of flowers. The images augmented by AdaAug are more visually plausible.

Ablation study. In this section, we study the effects of AdaAug-transfer on Oxford 102 Flowers and AdaAug-direct on CIFAR-10 using three alternative search configurations. First, does the use of a nonlinear projection deliver better performance? We replace the linear layer in h with a 2-layer MLP with a hidden size of 128 and ReLU activation. Second, we study whether the class- and instance-dependent configuration improves model accuracy: we remove the class and instance information by replacing the policy network with a fixed vector, which decides the augmentation probabilities and magnitudes for the entire dataset. In the third setting, we mix the augmented images in the input space instead of the latent space and observe the effects. Our results show that the class- and instance-adaptive augmentation policy contributes a larger improvement in AdaAug-direct. In AdaAug-transfer, using a nonlinear projection harms the prediction performance; a possible reason is that the nonlinear projection is more likely to overfit the search dataset and fail to generalize to unseen datasets. Moreover, combining the augmentation paths in the latent space learns a better policy (see Table 4).
In Appendix A.3, we provide further analysis of using AdaAug with different diversity parameters.

6 CONCLUSION

In this work, we propose a novel AutoDA approach, AdaAug, to learn class- and instance-adaptive augmentation policies efficiently. We demonstrate that the found policy transfers well to unseen datasets while achieving state-of-the-art results on the seen datasets, and we provide evidence that the learned adaptive policy captures class- and instance-level information. We expect AdaAug to show further gains over existing baselines when applied to datasets with more dissimilar underlying augmentation rules among the data classes and with fewer training examples per class. It is also promising to investigate whether the proposed adaptive augmentation method can improve the performance of other computer vision and representation learning tasks.

REFERENCES

Antreas Antoniou, Amos J. Storkey, and Harrison Edwards. Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340, 2017.
Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In 16th European Conference on Computer Vision, ECCV 2020, volume 12346, pp. 213–229. Springer, 2020.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, volume 119, pp. 1597–1607. PMLR, 2020.
Tsz Him Cheung and Dit Yan Yeung. MODALS: Modality-agnostic automated data augmentation in the latent space. In 9th International Conference on Learning Representations, ICLR 2021, 2021.
Ekin D.
Cubuk, Barret Zoph, Dandelion Mané, Vijay Vasudevan, and Quoc V. Le. AutoAugment: Learning augmentation strategies from data. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, pp. 113–123. IEEE, 2019.
Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. RandAugment: Practical automated data augmentation with a reduced search space. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2020, pp. 3008–3017. IEEE, 2020.
Terrance Devries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, 2021.
Yan Em, Feng Gao, Yihang Lou, Shiqi Wang, Tiejun Huang, and Ling-Yu Duan. Incorporating intra-class variance to fine-grained visual recognition. In 2017 IEEE International Conference on Multimedia and Expo, ICME 2017, pp. 1452–1457. IEEE, 2017.
Karan Goel, Albert Gu, Yixuan Li, and Christopher Re. Model patching: Closing the subgroup performance gap with data augmentation. In 9th International Conference on Learning Representations, ICLR 2021, 2021.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: A new approach to self-supervised learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 2020.
Ryuichiro Hataya, Jan Zdenek, Kazuki Yoshizoe, and Hideki Nakayama. Faster AutoAugment: Learning augmentation strategies using backpropagation. In 16th European Conference on Computer Vision, ECCV 2020, volume 12370, pp. 1–16. Springer, 2020.
Søren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John W. Fisher III, and Lars Kai Hansen. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. In Arthur Gretton and Christian C. Robert (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, volume 51 of JMLR Workshop and Conference Proceedings, pp. 342–350. JMLR, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pp. 770–778. IEEE, 2016.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020, pp. 9726–9735. IEEE, 2020.
Dan Hendrycks, Norman Mu, Ekin Dogus Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. AugMix: A simple data processing method to improve robustness and uncertainty. In 8th International Conference on Learning Representations, ICLR 2020, 2020.
Daniel Ho, Eric Liang, Xi Chen, Ion Stoica, and Pieter Abbeel. Population Based Augmentation: Efficient learning of augmentation policy schedules. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, volume 97, pp. 2731–2741. PMLR, 2019.
Ganesh Jha and Hubert Cecotti. Data augmentation for handwritten digit recognition using generative adversarial networks. Multim. Tools Appl., 79(47):35055–35068, 2020.
J. Krause, Jun Deng, Michael Stark, and Li Fei-Fei. Collecting a large-scale dataset of fine-grained cars. In Second Workshop on Fine-Grained Visual Categorization, 2013.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, pp. 1106–1114, 2012.
Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, and Jinwoo Shin. Improving transferability of representations via augmentation-aware self-supervision. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, 2021.
Yonggang Li, Guosheng Hu, Yongtao Wang, Timothy M. Hospedales, Neil Martin Robertson, and Yongxing Yang. DADA: Differentiable automatic data augmentation. In 16th European Conference on Computer Vision, ECCV 2020, 2020.
Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast AutoAugment. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, pp. 6662–6672, 2019.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In 7th International Conference on Learning Representations, ICLR 2019, 2019.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017, 2017.
Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B. Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing, ICVGIP 2008, pp. 722–729. IEEE, 2008.
Alexander J. Ratner, Henry R. Ehrenberg, Zeshan Hussain, Jared Dunnmon, and Christopher Ré. Learning to compose domain-specific transformations for data augmentation. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pp. 3236–3246, 2017.
Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6:60, 2019.
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020.
Toan Tran, Trung Pham, Gustavo Carneiro, Lyle J. Palmer, and Ian D. Reid. A Bayesian data augmentation approach for learning deep models. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pp. 2797–2806, 2017.
Tete Xiao, Xiaolong Wang, Alexei A. Efros, and Trevor Darrell. What should not be contrastive in contrastive learning. In 9th International Conference on Learning Representations, ICLR 2021, 2021.
Daiki Yorioka, Hyunho Kang, and Keiichi Iwamura. Data augmentation for deep learning using generative adversarial networks. In 9th IEEE Global Conference on Consumer Electronics, GCCE 2020, pp. 516–518. IEEE, 2020.
Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. CutMix: Regularization strategy to train strong classifiers with localizable features. In 2019 IEEE International Conference on Computer Vision, ICCV 2019, pp. 6022–6031. IEEE, 2019.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, BMVC 2016. BMVA Press, 2016.
Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, 2018.
Xinyu Zhang, Qiang Wang, Jian Zhang, and Zhao Zhong. Adversarial AutoAugment. In 8th International Conference on Learning Representations, ICLR 2020, 2020.
Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient GAN training. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 2020.
Fengwei Zhou, Jiawei Li, Chuanlong Xie, Fei Chen, Lanqing Hong, Rui Sun, and Zhenguo Li. MetaAugment: Sample-aware data augmentation policy learning. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, pp. 11097–11105. AAAI Press, 2021.

A APPENDIX

A.1 ADDITIONAL EXPERIMENTS

A.1.1 LARGE-SCALE DATASET

On the large-scale ImageNet dataset, AdaAug improves top-1 accuracy by 1% over the ResNet-50 baseline (see Table 5). The performance gain is similar to previous AutoDA methods and validates the positive effect of AdaAug on complex datasets.

A.1.2 ARCHITECTURE TRANSFER

We also provide the experimental results when using the learned augmentation policy to train the Shake-Shake (26 2x96d) model on Reduced CIFAR-10 and Reduced SVHN in Table 6.
A.2 ADDITIONAL ABLATION STUDY

To clarify the improvements of AdaAug, we compare the performance of AdaAug under different settings in Table 7: Simple: standard data augmentation is applied; Random: AdaAug is applied with a randomly initialized h_γ while keeping the diversity parameters the same; AdaAug (w/o diversity): AdaAug is applied without the diversity parameters; AdaAug: AdaAug is applied with the learned h_γ and the diversity parameters.

A.3 FINE-TUNING OF THE DIVERSITY PARAMETERS

The following experiments study the sensitivity of the temperature T and magnitude perturbation δ on the Flowers dataset by varying T and δ around the default values used in our original experiments. Fine-tuning the diversity parameters reduces the test-set error rate of AdaAug-transfer on the Flowers dataset from 3.63 to 3.49.

A.4 EFFICIENCY OF POLICY SEARCH

We compare the GPU hours needed to search for the augmentation policy across automated data augmentation methods in Table 9. Among the baselines, AdaAug is more efficient than AutoAugment, PBA, and Fast AutoAugment.

A.5 MORE ANALYSIS ON ADAAUG POLICY LEARNING

A.5.1 CONVERGENCE OF POLICY LEARNING

Here, we provide some insights and empirical evidence for the convergence of policy training. In particular, Figure 4 shows the training and validation losses when learning the CIFAR-10 augmentation policy; both converge to fixed values towards the end of training. In addition, we visualize the change of the augmentation parameters (p and λ) in Figure 5. The magnitude parameters start at smaller values and converge towards the end of training. For the augmentation probability, most of the candidates stabilize after a certain number of epochs, while some others are updated more frequently. This evidence shows the convergence of our proposed policy training method.
We leave a more thorough analysis of policy convergence as future work.

A.5.2 ANALYSIS OF LEARNED AUGMENTATION MAGNITUDE

Complementing the augmentation probability in Figure 2, Figure 6 shows the augmentation magnitudes for CIFAR-10 and SVHN. We observe that the policy magnitude λ also shows slight variations among different classes and instances. Although the observation is less prominent and harder to interpret than for the augmentation probability p, the learned augmentation magnitude adapts to different data samples.

A.5.3 VARIANCE OF LEARNED AUGMENTATION POLICY GROUPED BY CLASS

In Figure 7, we plot the standard deviations of the predicted augmentation probabilities of the image instances grouped by class label. Even within the same class, the predicted augmentation policy differs across instances, which suggests that the AdaAug policy captures some instance-level augmentation information.

A.5.4 DISTRIBUTIONS OF THE AUGMENTATION PARAMETERS IN ADAAUG-TRANSFER

In Figure 8, we show the distributions of the augmentation parameters when transferring the learned policy to the Flowers, Pets, Cars, and Aircraft datasets. The distributions show slight differences between tasks: color transformations such as Color, Invert, and Solarize are less preferred for the Flowers and Pets datasets, while shearing operations are relatively more favourable for the Cars and Aircraft datasets. The observation of using fewer color transformations for the Flowers dataset aligns with the findings of Lee et al. (2021). Although some differences in the augmentation distributions may be less prominent and harder to interpret than in the AdaAug-direct cases, the policy adapts to different tasks.
A.6 LIMITATIONS

Although AdaAug is differentiable and efficient in search, it requires forming different augmentation paths and passing the images through each path. Compared to standard training, where we forward b images in one mini-batch, AdaAug processes b · |T| images in the exploration pass. At inference time, AdaAug keeps a pre-trained policy network (π_θ) to augment the new dataset. This adds extra computational effort compared to other AutoDA policies, which may be a concern when the image resolution and model size are large but computational resources are limited.
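To make this overhead concrete, the exploration-time batch expansion can be sketched as follows (an illustrative sketch with a hypothetical helper name):

```python
def exploration_views(batch, ops, lam):
    """Exploration expands each of the b inputs into one augmented view per
    operation, so b * |T| images are forwarded through f per mini-batch."""
    return [op(x, m) for x in batch for op, m in zip(ops, lam)]
```

For a batch of b = 128 images and |T| = 17 operations, a single exploration pass therefore forwards 2,176 images instead of 128.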
AdaAug: Learning Class- and Instance-adaptive Data Augmentation Policies
1 INTRODUCTION

Data augmentation is a common way to enhance the robustness of deep learning models by augmenting the datasets used for model training. Applying popular data augmentation operations such as randomized cropping, horizontal flipping, and color shifting to image data has become a standard procedure in modern image recognition models (Krizhevsky et al., 2012; Shorten & Khoshgoftaar, 2019). Over the years, various augmentation methods using advanced operations have been proposed. Examples include occlusion-based operations like Cutout (Devries & Taylor, 2017) that randomly masks part of an image to avoid overfitting, label-mixing operations like CutMix (Yun et al., 2019) that replaces the occluded part in Cutout with a different image patch, and Mixup (Zhang et al., 2018) that interpolates two images with their corresponding one-hot encoded labels. While these hand-crafted data augmentation methods can improve model generalization, the operations and their corresponding parameters are often chosen manually to make the augmentation scheme effective for the task at hand. Despite the manual effort involved, an augmentation policy that is useful for a particular dataset often does not generalize well to other datasets (Cubuk et al., 2019). To tackle this problem, a series of recent studies has been conducted to automate the process of finding an effective data augmentation policy for a target dataset. These automated data augmentation (AutoDA) methodologies show impressive results on several benchmark image datasets (Cubuk et al., 2019; Ho et al., 2019; Lim et al., 2019; Cubuk et al., 2020; Hataya et al., 2020; Hendrycks et al., 2020; Li et al., 2020; Cheung & Yeung, 2021). The manual tuning of augmentation parameters can also be addressed by using generative approaches, such as training a generative adversarial network (GAN) to create new artificial images directly (Antoniou et al.
, 2017; Tran et al., 2017; Jha & Cecotti, 2020; Yorioka et al., 2020; Zhao et al., 2020). However, these generative models are often hard to implement and computationally expensive to train. In supervised learning, data augmentation is considered a naive way to inject inductive biases, such as translational invariance, into a classifier. With the recent advances in representation learning, data augmentation has become a major approach to learning good representations. For example, there is an increasing trend to replace convolutional neural networks (CNNs) with transformers in computer vision (Carion et al., 2020; Dosovitskiy et al., 2021). Being a more generic architecture, transformers do not come with additional inductive biases like the translational invariance in CNNs, thereby requiring more data to learn the effective invariance properties. Data augmentation serves as an effective way to achieve this goal (Touvron et al., 2020). In addition, recent self-supervised models rely heavily on data augmentation to create different views of the same data and learn robust representations through contrastive learning (Chen et al., 2020; Grill et al., 2020; He et al., 2020). However, an improper choice of augmentation operations, especially with excessive strength, may impose a wrong inductive bias on the models and lead to performance degradation (Chen et al., 2020; Xiao et al., 2021). Consequently, there is a need to enrich current data augmentation methods, especially with a better choice of the data augmentation policy, to take advantage of the recent advances in representation learning. Previous AutoDA methods attempt to find an optimal augmentation policy to augment a given dataset. However, most of the discovered policies are not adaptive to variations of the dataset.
Even in a dataset for which an augmentation policy is found to be effective, the same augmentation scheme is applied equally to all classes. This can limit the potential data diversity brought by data augmentation. For example, in digit classification, flip-invariance is useful for the digits "0", "1", and "8" but not for the other digits; in shape classification, shearing-invariance is useful for "triangles" but not for "rectangles". None of the previous AutoDA methods can learn an adaptive class-dependent augmentation policy. To address this limitation, we propose AdaAug, an AutoDA method that learns a class-dependent and potentially instance-dependent augmentation policy efficiently. Despite the attractive potential of such an adaptive scheme, actually learning an adaptive augmentation policy poses at least two major technical challenges. First, the search space for per-class, per-instance augmentation policies is very large, rendering it intractable to maintain an individual policy for each class or even each realization of the input data. Second, the gradient information of the augmentation parameters is hard to obtain, as the operation selection process and the transformations are non-differentiable. Optimizing the augmentation policy efficiently is thus a challenging problem. In this work, AdaAug employs a recognition model to learn the underlying augmentation policy for each data instance and takes an alternating exploit-and-explore procedure to update the augmentation policy using a differentiable workflow. In the exploitation pass, AdaAug trains a classifier for a number of steps, followed by the exploration pass, which validates the classifier and updates the policy to minimize the validation loss. The intuition behind our design of AdaAug is that such an alternating procedure can learn augmentations that help the trained model generalize to unseen validation data.
For example, rotational invariance would be found to be a desirable inductive bias that the model should learn if the validation set contains some similar but rotated versions of the training images. Our goal is to capture such information and assign a higher probability to use rotation for augmentation in this case. An application scenario would be a computer vision task for drones, where the unseen data may contain different kinds of rotated images. To summarize, our contributions are as follows:
• We introduce a novel AutoDA method to learn a class-dependent and potentially instance-dependent augmentation policy for each data instance.
• We propose a differentiable workflow to search for the augmentation policy efficiently.
• We demonstrate that the policies learned by our method transfer better to unseen datasets, such as Oxford Flowers, Oxford-IIIT Pets, FGVC Aircraft, and Stanford Cars, when compared to other AutoDA baselines.
• We demonstrate state-of-the-art performance on the CIFAR-10, CIFAR-100, and SVHN datasets.

2 RELATED WORK

Automated Data Augmentation. Several AutoDA methods have been proposed to compose the augmentation operations automatically. AutoAugment (Cubuk et al., 2019) learns to generate the probability and magnitude of applying different augmentation operations as a policy using reinforcement learning (RL). It alternately generates an augmentation policy to train a child model and updates the policy generator using the validation performance as reward. Since it is computationally expensive to train the child models repeatedly, several techniques have been proposed subsequently in an attempt to reduce the search effort. Fast AutoAugment (Lim et al., 2019) uses Bayesian optimization (BO) to tune the augmentation parameters. Population-based Augmentation (PBA) (Ho et al.
, 2019) exploits population-based training (PBT) to search for an optimal augmentation policy schedule by training multiple parallel child models using an evolutionary approach. RandAugment (Cubuk et al., 2020) applies the augmentation operations uniformly and reduces the search space significantly by covering only the number of operators and the global augmentation magnitude. Instead of using the validation performance to evaluate the augmentation quality, Adversarial AutoAugment (Zhang et al., 2020) uses an adversarial objective to learn the augmentation policy. MODALS (Cheung & Yeung, 2021) utilizes PBA to search for an optimal latent-space augmentation policy and can augment data from any modality, not limited to images.

Differentiable Data Augmentation. In addition to using RL, BO, and PBT to optimize the augmentation parameters, there exist related methods that modify the otherwise discrete search procedure to make it end-to-end differentiable. This results in a more efficient optimization procedure and a more precise policy than RL and PBT, as the search space is continuous. In AdaAug, part of our contribution is to design a differentiable workflow to learn the augmentation policy. Among previous differentiable augmentation approaches, Faster AutoAugment (Hataya et al., 2020) proposes a differentiable Relaxed Bernoulli distribution to sample the candidate augmentation functions and estimates the gradients of the non-differentiable augmentation magnitude using the Stop Gradient estimator. Specifically, it optimizes a density matching loss between the training and validation data. DADA (Li et al.
, 2020) differentiates through the discrete policy sampling process using the Gumbel-Softmax trick. While AutoDA and differentiable data augmentation have been shown to be successful in improving the generalization performance of deep learning models, the learned augmentation policy is often applied uniformly to the whole dataset, meaning that all classes and instances share the same augmentation policy. In contrast, each class and even each data instance receives an adaptive augmentation policy in our proposed method.

Adaptive Data Augmentation. Attempts have been made to apply adaptive data augmentation at a class or subgroup level. Hauberg et al. (2016) proposed a statistical approach to model the transformations within each class and use statistical models to augment the dataset. The approach shows improvement on the small MNIST dataset and its variants. However, the augmentation operations are limited to spatial transformations. In addition, observations of the data must be locatable and alignable, making it difficult to extend to most other computer vision tasks. Recently, CAMEL (Goel et al., 2021) adopts a finer data-generation method by fixing those classifiers that fail on a subgroup of a class. It uses CycleGAN to learn different variations of the same training data within a subgroup. However, CAMEL requires specifying the subgroup information manually and assumes that the subgroups only exist within the same class. MetaAugment (Zhou et al., 2021) learns a sample-wise weighting scheme and a global probability parameter to control the sampling of augmentation transformations. In AdaAug, the learned policy can automatically capture class-dependent transformations and also instance-dependent information, such as the light intensity of an image, across different classes.

3 ADAAUG

3.1 SEARCH SPACE

Let T be a set of augmentation operations where τj denotes the j-th operation (e.g., "rotation") in the set.
We formulate an augmentation policy as the probability p and magnitude λ of applying the augmentation operations. Here, p is a probability vector with each entry $p_j \in [0, 1]$ and $\sum_{j=1}^{|T|} p_j = 1$, and λ is a vector with each entry $\lambda_j \in [0, 1]$, where $p_j$ and $\lambda_j$ are the probability and magnitude, respectively, of applying the operation $\tau_j$. Mathematically, $\tau_j : X \to X$ is a mapping from the input space X to itself. For an image $x \in X$, $\tau_j$ transforms it with the magnitude parameter $\lambda_j$ that specifies the strength of the transformation (e.g., degree of rotation), i.e., $x \mapsto \tau_j(x; \lambda_j)$. Note that some operations like flipping do not depend on the magnitude parameter. In a training pass, given an input x, an augmentation policy (p, λ), and the number of operators k, we sample k operations according to p and apply them with their corresponding magnitudes specified by λ:

$$T(x; p, \lambda) = \tau_j(x; \lambda_j), \quad j \sim p, \qquad \hat{x} = T^{(k)} \circ \cdots \circ T^{(1)}(x; p, \lambda) \tag{1}$$

Here, $T^{(t)}$, $1 \le t \le k$, denotes applying the t-th operation. Our goal is to learn an augmentation policy function $\pi_\theta : x \mapsto (p, \lambda)$ that generates an adaptive, input-dependent augmentation policy optimizing the generalization performance (see Figure 1 and Algorithm 1).

3.2 SEARCH ALGORITHM

Exploitation. AdaAug uses a feature extraction network $f_\alpha : X \to Z$ to map the input space to a latent space, a dense layer $g_\beta : Z \to Y$ to map the latent space to the label space, and a projection function $h_\gamma : Z \to P \times \Lambda$ to map a latent representation to a probability and magnitude space, where $P = \{p \in [0,1]^{|T|} : \|p\|_1 = 1\}$ and $\Lambda = [0,1]^{|T|}$. In our case, the functions f, g, h (with the subscripts dropped for notational simplicity) are implemented as neural networks with weights α, β, γ, respectively.
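The sampling and composition in Equation (1) can be sketched as follows. This is a minimal Python sketch with toy scalar "images"; `apply_policy` and the example operations are illustrative stand-ins, not the paper's implementation:

```python
import random

def apply_policy(x, ops, p, lam, k=1, rng=None):
    """Sample k operations according to p and compose them, as in Equation (1)."""
    rng = rng or random.Random()
    for _ in range(k):
        # Draw one operation index j ~ p, then apply it with magnitude lam[j].
        j = rng.choices(range(len(ops)), weights=p, k=1)[0]
        x = ops[j](x, lam[j])  # each op maps X -> X, with magnitude in [0, 1]
    return x

# Toy example: the "image" is a scalar and the operations are scalar transforms.
ops = [
    lambda x, m: x + m,        # a brightness-like shift
    lambda x, m: x * (1 + m),  # a contrast-like scaling
    lambda x, m: x,            # identity: the magnitude is ignored
]
p = [0.5, 0.3, 0.2]
lam = [0.2, 0.5, 0.0]
x_hat = apply_policy(1.0, ops, p, lam, k=2, rng=random.Random(0))
```

With k = 2 and these toy operations, the output always lies between 1.0 (identity twice) and 2.25 (the scaling twice), illustrating how the composed operators stay bounded by the magnitudes in λ.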
The softmax operation is applied to the first half of the output from h to get the probabilities, and the sigmoid function is applied to the other half to get the magnitudes. The policy network $\pi_\theta(x)$ then connects the class and image information to the augmentation policy via $h \circ f(x)$, with parameters $\theta = (\gamma, \alpha)$. In an exploitation pass, given the training data x, the policy function generates the data-dependent augmentation policy $(p, \lambda) = \pi_\theta(x)$ and augments x to give x̂ using Equation (1). Here, x̂ is treated as a new unseen training example and is used to train the classification model $g \circ f(\hat{x})$ by minimizing the cross-entropy loss: $\min_{\alpha,\beta} L_{train}(\alpha, \beta)$. Note that there is a discrete sampling procedure in Equation (1); during the update, the gradient of α does not involve the computations in the policy network.

Exploration. In the exploration pass, AdaAug first generates the augmentation policy $(p, \lambda) = \pi_\theta(x)$ given the validation data x. Then, it applies all |T| augmentation operations to x separately with the corresponding magnitudes in λ. The augmented validation data are passed to the feature extraction network f individually to get the latent representations, which are summed weighted by the probability vector p. The mixed representation is passed to g to compute the predicted labels:

$$\hat{y} = g\left(\sum_{j=1}^{|T|} p_j \cdot f \circ \tau_j(x; \lambda_j)\right), \quad (p, \lambda) = \pi_\theta(x) \tag{2}$$

AdaAug updates the projection parameters γ to minimize the validation loss: $\min_\gamma L_{valid}(\alpha, \beta, \gamma)$. As we make no assumption that the provided augmentation operations are differentiable, we follow prior approaches (Bengio et al., 2013; Li et al., 2020) and use a straight-through gradient estimator to optimize the augmentation magnitudes. Specifically, the gradient of the magnitudes is estimated with respect to each pixel value $x_{h,w}$ of the augmented data as $\partial \hat{x}_{h,w} / \partial \lambda_j = 1$.
The gradient can then be calculated by:

$$\frac{\partial L_{valid}}{\partial \lambda_j} = \sum_{w,h} \frac{\partial L_{valid}}{\partial \hat{x}_{w,h}} \frac{\partial \hat{x}_{w,h}}{\partial \lambda_j} = \sum_{w,h} \frac{\partial L_{valid}}{\partial \hat{x}_{w,h}} \tag{3}$$

Algorithm 1 Search algorithm
1: procedure SEARCH(Dtrain, Dvalid, T, k, m, r)
2:   Initialize α, β, γ
3:   for r steps do                        ▷ Exploitation
4:     Sample a mini-batch dtrain ∈ Dtrain
5:     for (x, y) ∈ dtrain do
6:       (p, λ) = πθ(x)
7:       x̂ = augment(x)                   ▷ augment computes Eq. 1
8:       ŷ = g ∘ f(x̂)
9:       Ltrain(α, β) = CrossEntropyLoss(ŷ, y)
10:    (α, β) ← argmin α,β Ltrain(α, β)
11:    if r is divisible by m then         ▷ Exploration
12:      Sample a mini-batch dvalid ∈ Dvalid
13:      for (x, y) ∈ dvalid do
14:        (p, λ) = πθ(x)
15:        ŷ = explore(x)                  ▷ explore computes Eq. 2
16:        Lvalid(α, β, γ) = CrossEntropyLoss(ŷ, y)
17:      γ ← argmin γ Lvalid(α, β, γ)
18:  return α, γ

Relation to Neural Architecture Search (NAS). Tuning the augmentation policy parameters bears some similarity to optimizing the network architecture weights in DARTS from the NAS literature (Liu et al., 2019). DARTS prescribes different computation paths for different operation cells and relaxes the computation to a mixture of the operations weighted by learnable weights for each path. With w as the model parameters and α as the weights for the computation paths, the search algorithm solves a bi-level optimization problem:

$$\min_\alpha L_{valid}(w^*(\alpha), \alpha) \quad \text{s.t.} \quad w^*(\alpha) = \arg\min_w L_{train}(w, \alpha) \tag{4}$$

While DARTS optimizes the architecture weights, AdaAug optimizes the projection parameter γ that decides the augmentation weights and magnitudes in the exploration. DARTS solves the optimization by using a first-order, finite-difference approximation of the architecture gradient. In AdaAug, we sample the augmentation operations and treat the augmented data as unseen missing training data X̂train.
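The exploration computation in Equations (2) and (3) can be sketched as below. This is a minimal NumPy sketch in which the toy `f`, `g`, and operations stand in for the actual networks and augmentations:

```python
import numpy as np

def explore_step(x, ops, lam, p, f, g):
    # Equation (2): apply every operation separately, embed each augmented
    # view with the feature extractor f, mix the latents weighted by p,
    # then classify the mixture with g.
    latents = np.stack([f(op(x, m)) for op, m in zip(ops, lam)])
    mixed = (np.asarray(p)[:, None] * latents).sum(axis=0)
    return g(mixed)

def magnitude_grad(dL_dxhat):
    # Equation (3), straight-through estimator: with d x_hat / d lambda_j
    # taken as 1 per pixel, the magnitude gradient is just the sum of the
    # pixel-wise loss gradients.
    return float(np.sum(dL_dxhat))

# Toy stand-ins: 4-pixel "images", 2-dim latents, 3-class logits.
f = lambda v: v[:2] + v[2:]                    # toy feature extractor
g = lambda z: np.array([z.sum(), z[0], z[1]])  # toy linear classifier
ops = [lambda v, m: v + m,                     # brightness-like shift
       lambda v, m: v * (1.0 - m)]             # darkening-like scale
x = np.ones(4)
logits = explore_step(x, ops, [0.1, 0.5], [0.7, 0.3], f, g)
```

Because the latents are mixed with the differentiable weights p before classification, the validation loss can backpropagate into the probability head, while the magnitudes rely on the straight-through estimate.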
By absorbing the augmentation parameters into the training dataset, we avoid the complex bi-level optimization and simplify the exploitation procedure into training a standard classifier:

$$\min_\gamma L_{valid}(\alpha^*, \beta^*, \gamma; X_{valid}) \quad \text{s.t.} \quad \alpha^*, \beta^* = \arg\min_{\alpha,\beta} L_{train}(\alpha, \beta; \hat{X}_{train}) \tag{5}$$

Relation to Density Matching. Data augmentation can be regarded as a density matching problem between the training and validation data (Ratner et al., 2017; Tran et al., 2017; Hataya et al., 2020; Lim et al., 2019). From this perspective, AdaAug improves model generalization by matching the density of Dtrain with the density of the augmented Dvalid. In the outer optimization objective in Equation (5), AdaAug minimizes the classification loss with the augmentation parameters γ over the same optimal model parameters α*, β* learned from X̂train. In so doing, it approximately reduces the distance between the densities of the augmented Dvalid and Dtrain.

3.3 INFERENCE

Like most automated augmentation pipelines (Cubuk et al., 2019; Ho et al., 2019; Lim et al., 2019; Hataya et al., 2020; Li et al., 2020), AdaAug searches for the augmentation policy on a small dataset using a small model and then applies the learned policy network πθ to train a larger dataset or model. We call the process of applying a searched policy to augment a new dataset inference time. Regarding this workflow, RandAugment argues that the different search spaces at search time and inference time make the augmentation policy unable to adjust the regularization strength to different target datasets and models (Cubuk et al., 2020); it therefore proposes to search for the global magnitude and number of operators in each case. To address this concern, AdaAug utilizes three diversity parameters to fine-tune the regularization strength of the found policy for large datasets and models.
First, the number of operators k is set to 1 at search time, but a larger value of k can be used at inference time. Second, to control the selection of more diverse operations, a temperature parameter T is introduced in the softmax function:

$$p_i = \frac{\exp[h(z_i)/T]}{\sum_{j=1}^{|T|} \exp[h(z_j)/T]}$$

Setting a larger T allows operations with lower probability to be sampled more often while preserving their relative order. Last, AdaAug perturbs the magnitude value with a parameter δ, so the perturbed magnitude is given by $\hat{\lambda} \sim \text{Uniform}(\lambda - \delta, \lambda + \delta)$. This allows the augmented data to show slight variations even when the same operation is applied to the same input. In practice, we can perform grid search on these three diversity parameters together with the other hyperparameters using a holdout validation set.

4 EXPERIMENTS AND RESULTS

In this section, we explain our experimental design and present the results. We evaluate the empirical performance of AdaAug in two experiments: AdaAug-transfer and AdaAug-direct. We select comparison baselines that use the validation performance to learn the augmentation policy. For all experiments, we report the average test-set error rate as the performance metric; each model is evaluated three times with different random initializations.

Augmentation operations. We match the operations adopted by AutoAugment. In addition to the 16 operations proposed previously (ShearX, ShearY, TranslateX, TranslateY, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, Cutout, and Sample Pairing), we add the Identity operation for not applying data augmentation. For the simple baseline, we apply random horizontal flip, color jittering, color normalization, and Cutout with 16 × 16 patch size. Our method and the other baselines apply the found policy on top of these standard augmentations.

Policy search.
We follow the setup adopted by AutoAugment (Cubuk et al., 2019), using 4,000 training images for CIFAR-10 and CIFAR-100, and 1,000 training images for SVHN; the remaining images are used as the validation set. We use Wide-ResNet-40-2 (Zagoruyko & Komodakis, 2016) as the feature extraction network for all searches. We implement h as a linear layer and update the policy parameter γ after every 10 training steps using the Adam optimizer with learning rate 0.001 and a batch size of 128.

AdaAug-transfer. In the first experiment, we investigate how well the learned augmentation policy transfers to unseen datasets. We search for the optimal augmentation policy on the CIFAR-100 dataset and use the learned policy to train on four fine-grained classification datasets: Oxford 102 Flowers (Nilsback & Zisserman, 2008), Oxford-IIIT Pets (Em et al., 2017), FGVC Aircraft (Maji et al., 2013), and Stanford Cars (Krause et al., 2013). We compare the test error rate with AutoAugment, Fast AutoAugment, DADA, and RandAugment using their published policies for CIFAR-100. For all tested datasets, we compare the transfer results when training the ResNet-50 model (He et al., 2016) for 180 epochs from scratch and when fine-tuning the ResNet-50 model pretrained on ImageNet for 100 epochs. We use cosine learning rate decay with one annealing cycle (Loshchilov & Hutter, 2017), an initial learning rate of 0.1, weight decay 1e-4, and gradient clipping parameter 5.

AdaAug-direct. In the second experiment, we search for the optimal augmentation policy on a small subset of the target dataset and use the learned policy to train on the full dataset. The purpose of this experiment is to demonstrate that while the AdaAug policy can adapt to other unseen datasets, it can also achieve competitive performance on the seen datasets with more training data.
We compare AdaAug-direct with state-of-the-art AutoDA methods using the same evaluation datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and SVHN (Netzer et al., 2011). We test our method using Wide-ResNet-40-2 and Wide-ResNet-28-10. At inference time, we set the temperature T = 3 and the magnitude perturbation δ = 0.3, and search for the number of operators k ∈ {1, 2, 3, 4} using a holdout validation set, like RandAugment (Cubuk et al., 2020). For the other hyperparameters, we follow AutoAugment, PBA, and Fast AutoAugment where possible.

4.1 ADAAUG-TRANSFER

Table 1 and Table 2 show that the AdaAug policy outperforms the other baselines when training and fine-tuning the ResNet-50 model on the Flowers, Pets, Aircraft, and Cars datasets. These baselines apply the same augmentation policy to all datasets; such a policy may not be optimal for the target domain. In contrast, AdaAug adapts the augmentation policy to individual image classes and instances automatically. The AdaAug policy network applies different augmentation policies to unseen images according to their similarity to the classes that AdaAug has seen during the search. To further justify this claim, we show the distribution of the augmentation parameters for different tasks in Appendix A.5.4. The four datasets have many classes but few training examples per class; the negative effect of using non-adaptive data augmentation is likely to be more pronounced here than in a situation where the dataset has fewer classes but more examples per class.

4.2 ADAAUG-DIRECT

CIFAR-10 and CIFAR-100. The policies learned by AdaAug mostly achieve comparable or better performance than the baselines for both WRN-40-2 and WRN-28-10 models on the CIFAR-10 and CIFAR-100 datasets (see Table 3).
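The inference-time diversity parameters used above (the softmax temperature T and the magnitude perturbation δ from Section 3.3) can be sketched as follows; the function names are illustrative, not from the paper's code:

```python
import numpy as np

def temperature_softmax(h_out, T=1.0):
    # Softmax with temperature T: a larger T flattens p, so operations with
    # lower probability are sampled more often while keeping the same order.
    e = np.exp(np.asarray(h_out, dtype=float) / T)
    return e / e.sum()

def perturb_magnitude(lam, delta, rng):
    # Perturbed magnitude: lam_hat ~ Uniform(lam - delta, lam + delta).
    return rng.uniform(lam - delta, lam + delta)

scores = [2.0, 0.5, 0.1]                 # toy projection-head outputs h(z)
p_sharp = temperature_softmax(scores, T=1.0)
p_flat = temperature_softmax(scores, T=3.0)   # T = 3 as used at inference
lam_hat = perturb_magnitude(0.5, 0.3, np.random.default_rng(0))
```

Raising T from 1 to 3 shrinks the probability of the dominant operation without changing the ranking, which is exactly the "more diverse sampling" effect described in Section 3.3.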
We visualize the augmentation policy for CIFAR-10 by applying the policy to the validation data and averaging the predicted augmentation probability of each image for each class (see Figure 2). The policy includes all types of augmentations to a moderate degree. Among the operations, Flip dominates the policy; this is in line with manually selected augmentations, as horizontal flipping is widely found to improve prediction accuracy on CIFAR-10. The policy learned by AdaAug puts higher emphasis on Invert and Equalize, which are also reported by PBA. The focus on Brightness also aligns with the policy in AutoAugment. Although there are minor variations in the importance of some operations between the policies learned by AdaAug, AutoAugment, and PBA, their empirical performance is similar.

SVHN. AdaAug performs comparably to the baselines on the core set of SVHN. We visualize the augmentation policy for SVHN in Figure 2. We find that AutoContrast, Invert, and Solarize receive large attention in SVHN. This makes sense because the specific color of the number and background is irrelevant to the prediction, and is consistent with the findings of AutoAugment and PBA. The Flip operation receives a significantly higher probability for the digits "0", "1", and "8", likely because these three digits appear similar after flipping. This shows that AdaAug captures not only a dataset-specific augmentation policy but also class-dependent augmentation, which cannot be achieved by the other baselines. In addition to the augmentation probability, we visualize the learned augmentation magnitude in Appendix A.5.2.

ImageNet. We also validate our method on the large-scale ImageNet dataset. AdaAug improves the top-1 accuracy by 1% over the ResNet-50 baseline (see Appendix A.1.1).
Although some baselines like AutoAugment produce similar performance or slightly outperform our method in the AdaAug-direct experiment, AdaAug uses far less computational effort in searching for the policy. Specifically, AutoAugment takes 5,000 GPU hours to search for the CIFAR-10 policy, while AdaAug takes only 3.3 GPU hours on an older GeForce GTX 1080 GPU (see Appendix A.4).

5 DISCUSSION

Instance-dependent augmentation. The AdaAug-direct experimental results show that the learned AdaAug policy can capture dataset- and class-dependent augmentations on CIFAR-10, CIFAR-100, and SVHN. We further investigate whether AdaAug can learn instance-dependent information based on image features. By its architecture, AdaAug takes the output from the last layer of a CNN as the image representation and uses it to predict the augmentation parameters; this representation contains the class information and potentially some image features. Empirically, we first examine whether AdaAug generates different augmentation policies for different instances even within the same class. In Appendix A.5.3, we plot the standard deviations of the predicted augmentation probabilities of the image instances for each class. We observe that even within the same class, the predicted augmentation policy is slightly different across instances, a clue that the AdaAug policy captures some instance-level augmentation information. Qualitatively, we show augmented examples of a flower image under the AdaAug policy in Figure 3. The input image is darker than the other images, and AdaAug appears to be aware of this property, applying more brightness-related augmentation to lighten the image. We observe similar behaviour of AdaAug in other classes, but there is inadequate empirical support to conclude a general rule on how the network decides the instance-aware augmentation, as the policies are learned in a data-driven way.
Quality of augmented data. We compare the augmented images from AdaAug with those from RandAugment under different augmentation strengths in Figure 3. In terms of augmentation diversity, RandAugment produces more variations of the input image. However, not all of the augmented images produced are plausible. As RandAugment applies the augmentations uniformly, some augmented flower images show a strange color different from the original image. With an increasing number of operators and magnitude, the flower object is sometimes translated out of the frame, resulting in a black image. This may affect the learning performance, as color can be an important feature in classifying different species of flowers. For AdaAug, the augmented images are more visually appealing.

Ablation study. We study the effects of AdaAug-transfer on Oxford 102 Flowers and AdaAug-direct on CIFAR-10 using three alternative search configurations. First, does the use of a nonlinear projection deliver better performance? We replace the linear layer in h with a 2-layer MLP with a hidden size of 128 and ReLU activation. Second, we study whether the class-dependent and instance-dependent configuration improves model accuracy. We remove the class and instance information by replacing the policy network with a fixed vector, which decides the augmentation probabilities and magnitudes for the entire dataset. In the third setting, we mix the augmented images in the input space instead of the latent space and observe the effects. Our results show that the use of a class- and instance-adaptive augmentation policy contributes a larger improvement in AdaAug-direct. In AdaAug-transfer, using a nonlinear projection harms the prediction performance; a possible reason is that the nonlinear projection is more likely to overfit the search dataset and fail to generalize to unseen datasets. Moreover, combining the augmentation paths in the latent space learns a better policy (see Table 4).
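The two projection-head variants compared in the ablation (a linear h versus a 2-layer MLP with ReLU, both ending in the softmax/sigmoid split of Section 3.2) can be sketched as below; shapes and random weights are illustrative:

```python
import numpy as np

def linear_head(z, W, b):
    # Projection h as a single linear layer: the first half of the output
    # goes through a softmax (operation probabilities p), the second half
    # through a sigmoid (magnitudes lambda).
    out = z @ W + b
    half = out.shape[-1] // 2
    e = np.exp(out[:half] - out[:half].max())  # numerically stable softmax
    p = e / e.sum()
    lam = 1.0 / (1.0 + np.exp(-out[half:]))
    return p, lam

def mlp_head(z, W1, b1, W2, b2):
    # Ablation variant: a 2-layer MLP with ReLU (hidden size 128 in the
    # paper) feeding the same softmax/sigmoid split.
    hidden = np.maximum(z @ W1 + b1, 0.0)
    return linear_head(hidden, W2, b2)

rng = np.random.default_rng(0)
n_ops, d, hdim = 5, 8, 128
z = rng.normal(size=d)  # latent representation from the feature extractor f
p, lam = linear_head(z, rng.normal(size=(d, 2 * n_ops)), np.zeros(2 * n_ops))
p2, lam2 = mlp_head(z, rng.normal(size=(d, hdim)), np.zeros(hdim),
                    rng.normal(size=(hdim, 2 * n_ops)), np.zeros(2 * n_ops))
```

Both heads produce a valid probability vector and magnitudes in [0, 1]; the ablation finding is that the extra MLP capacity tends to overfit the search dataset in the transfer setting.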
In Appendix A.3, we provide further analysis of using AdaAug with different diversity parameters.

6 CONCLUSION

In this work, we propose a novel AutoDA approach, AdaAug, to learn class- and instance-adaptive augmentation policies efficiently. We demonstrate that the found policy transfers well to unseen datasets while achieving state-of-the-art results on the seen datasets. We provide evidence that the learned adaptive policy captures class- and instance-level information. We believe that AdaAug can show further gains over existing baselines when applied to datasets with more dissimilar underlying augmentation rules among the data classes and with fewer training examples per class. It is also promising to investigate whether the proposed adaptive augmentation method can improve the performance of other computer vision and representation learning tasks.

REFERENCES

Antreas Antoniou, Amos J. Storkey, and Harrison Edwards. Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340, 2017.

Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In 16th European Conference on Computer Vision, ECCV 2020, volume 12346, pp. 213–229. Springer, 2020.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, volume 119, pp. 1597–1607. PMLR, 2020.

Tsz Him Cheung and Dit Yan Yeung. MODALS: Modality-agnostic automated data augmentation in the latent space. In 9th International Conference on Learning Representations, ICLR 2021, 2021.

Ekin D.
Cubuk , Barret Zoph , Dandelion Mané , Vijay Vasudevan , and Quoc V. Le . AutoAugment : Learning augmentation strategies from data . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR 2019 , pp . 113–123 . IEEE , 2019 . Ekin D. Cubuk , Barret Zoph , Jonathon Shlens , and Quoc V. Le . RandAugment : Practical automated data augmentation with a reduced search space . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR Workshops 2020 , pp . 3008–3017 . IEEE , 2020 . Terrance Devries and Graham W. Taylor . Improved regularization of convolutional neural networks with cutout . arXiv preprint arXiv:1708.04552 , 2017 . Alexey Dosovitskiy , Lucas Beyer , Alexander Kolesnikov , Dirk Weissenborn , Xiaohua Zhai , Thomas Unterthiner , Mostafa Dehghani , Matthias Minderer , Georg Heigold , Sylvain Gelly , Jakob Uszkoreit , and Neil Houlsby . An image is worth 16x16 words : Transformers for image recognition at scale . In 9th International Conference on Learning Representations , ICLR 2021 , 2021 . Yan Em , Feng Gao , Yihang Lou , Shiqi Wang , Tiejun Huang , and Ling-Yu Duan . Incorporating intra-class variance to fine-grained visual recognition . In 2017 IEEE International Conference on Multimedia and Expo , ICME 2017 , pp . 1452–1457 . IEEE , 2017 . Karan Goel , Albert Gu , Yixuan Li , and Christopher Re . Model patching : Closing the subgroup performance gap with data augmentation . In 9th International Conference on Learning Representations , ICLR 2021 , 2021 . Jean-Bastien Grill , Florian Strub , Florent Altché , Corentin Tallec , Pierre H. Richemond , Elena Buchatskaya , Carl Doersch , Bernardo Ávila Pires , Zhaohan Guo , Mohammad Gheshlaghi Azar , Bilal Piot , Koray Kavukcuoglu , Rémi Munos , and Michal Valko . Bootstrap your own latent - A new approach to self-supervised learning . In Advances in Neural Information Processing Systems 33 : Annual Conference on Neural Information Processing Systems 2020 , NeurIPS 2020 , 2020 . 
Ryuichiro Hataya , Jan Zdenek , Kazuki Yoshizoe , and Hideki Nakayama . Faster AutoAugment : learning augmentation strategies using backpropagation . In 16th European Conference on Computer Vision , ECCV 2020 , volume 12370 , pp . 1–16 . Springer , 2020 . Søren Hauberg , Oren Freifeld , Anders Boesen Lindbo Larsen , John W. Fisher III , and Lars Kai Hansen . Dreaming more data : Class-dependent distributions over diffeomorphisms for learned data augmentation . In Arthur Gretton and Christian C. Robert ( eds . ) , Proceedings of the 19th International Conference on Artificial Intelligence and Statistics , AISTATS 2016 , volume 51 of JMLR Workshop and Conference Proceedings , pp . 342–350 . JMLR , 2016 . Kaiming He , Xiangyu Zhang , Shaoqing Ren , and Jian Sun . Deep residual learning for image recognition . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR 2016 , pp . 770– 778 . IEEE , 2016 . Kaiming He , Haoqi Fan , Yuxin Wu , Saining Xie , and Ross B. Girshick . Momentum contrast for unsupervised visual representation learning . In IEEE Conference on Computer Vision and Pattern Recognition , CVPR 2020 , pp . 9726–9735 . IEEE , 2020 . Dan Hendrycks , Norman Mu , Ekin Dogus Cubuk , Barret Zoph , Justin Gilmer , and Balaji Lakshminarayanan . AugMix : A simple data processing method to improve robustness and uncertainty . In 8th International Conference on Learning Representations , ICLR 2020 , 2020 . Daniel Ho , Eric Liang , Xi Chen , Ion Stoica , and Pieter Abbeel . Population Based Augmentation : efficient learning of augmentation policy schedules . In Proceedings of the 36th International Conference on Machine Learning , ICML 2019 , volume 97 , pp . 2731–2741 . PMLR , 2019 . Ganesh Jha and Hubert Cecotti . Data augmentation for handwritten digit recognition using generative adversarial networks . Multim . Tools Appl. , 79 ( 47 ) :35055–35068 , 2020 . J. Krause , Jun Deng , Michael Stark , and Li Fei-Fei . 
Collecting a large-scale dataset of fine-grained cars . In Second Workshop on Fine-Grained Visual Categorization , 2013 . A. Krizhevsky and G. Hinton . Learning multiple layers of features from tiny images . Technical report , University of Toronto , 2009 . Alex Krizhevsky , Ilya Sutskever , and Geoffrey E. Hinton . ImageNet classification with deep convolutional neural networks . In Advances in Neural Information Processing Systems 25 : 26th Annual Conference on Neural Information Processing Systems 2012. , pp . 1106–1114 , 2012 . Hankook Lee , Kibok Lee , Kimin Lee , Honglak Lee , and Jinwoo Shin . Improving transferability of representations via augmentation-aware self-supervision . Advances in Neural Information Processing Systems 34 : Annual Conference on Neural Information Processing Systems 2021 , NeurIPS 2021 , 2021 . Yonggang Li , Guosheng Hu , Yongtao Wang , Timothy M. Hospedales , Neil Martin Robertson , and Yongxing Yang . DADA : Differentiable automatic data augmentation . 16th European Conference on Computer Vision , ECCV 2020 , 2020 . Sungbin Lim , Ildoo Kim , Taesup Kim , Chiheon Kim , and Sungwoong Kim . Fast AutoAugment . In Advances in Neural Information Processing Systems 32 : Annual Conference on Neural Information Processing Systems 2019 , NeurIPS 2019 , pp . 6662–6672 , 2019 . Hanxiao Liu , Karen Simonyan , and Yiming Yang . DARTS : Differentiable architecture search . In 7th International Conference on Learning Representations , ICLR 2019 , 2019 . Ilya Loshchilov and Frank Hutter . SGDR : Stochastic gradient descent with warm restarts . In 5th International Conference on Learning Representations , ICLR 2017 , 2017 . Subhransu Maji , Esa Rahtu , Juho Kannala , Matthew B. Blaschko , and Andrea Vedaldi . Fine-grained visual classification of aircraft . arXiv preprint arXiv:1306.5151 , 2013 . Yuval Netzer , Tao Wang , Adam Coates , Alessandro Bissacco , Bo Wu , and Andrew Ng . Reading digits in natural images with unsupervised feature learning . 
In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing, ICVGIP 2008, pp. 722–729. IEEE, 2008. Alexander J. Ratner, Henry R. Ehrenberg, Zeshan Hussain, Jared Dunnmon, and Christopher Ré. Learning to compose domain-specific transformations for data augmentation. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pp. 3236–3246, 2017. Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6:60, 2019. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020. Toan Tran, Trung Pham, Gustavo Carneiro, Lyle J. Palmer, and Ian D. Reid. A Bayesian data augmentation approach for learning deep models. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pp. 2797–2806, 2017. Tete Xiao, Xiaolong Wang, Alexei A. Efros, and Trevor Darrell. What should not be contrastive in contrastive learning. In 9th International Conference on Learning Representations, ICLR 2021, 2021. Daiki Yorioka, Hyunho Kang, and Keiichi Iwamura. Data augmentation for deep learning using generative adversarial networks. In 9th IEEE Global Conference on Consumer Electronics, GCCE 2020, pp. 516–518. IEEE, 2020. Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. CutMix: Regularization strategy to train strong classifiers with localizable features. In 2019 IEEE International Conference on Computer Vision, ICCV 2019, pp. 6022–6031. IEEE, 2019.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, BMVC 2016. BMVA Press, 2016. Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, 2018. Xinyu Zhang, Qiang Wang, Jian Zhang, and Zhao Zhong. Adversarial AutoAugment. In 8th International Conference on Learning Representations, ICLR 2020, 2020. Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient GAN training. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 2020. Fengwei Zhou, Jiawei Li, Chuanlong Xie, Fei Chen, Lanqing Hong, Rui Sun, and Zhenguo Li. MetaAugment: Sample-aware data augmentation policy learning. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI, pp. 11097–11105. AAAI Press, 2021. A APPENDIX A.1 ADDITIONAL EXPERIMENTS A.1.1 LARGE-SCALE DATASET For the large-scale dataset, AdaAug improves top-1 accuracy by 1% over the ImageNet ResNet-50 baseline (see Table 5). The performance gain is similar to previous AutoDA methods and validates the positive effect of AdaAug on complex datasets. A.1.2 ARCHITECTURE TRANSFER We also provide the experimental results when using the learned augmentation policy to train the Shake-Shake (26 2x96d) model on Reduced CIFAR-10 and Reduced SVHN in Table 6.
A.2 ADDITIONAL ABLATION STUDY To clarify the improvements of AdaAug, we compare the performance of AdaAug under different settings in Table 7: Simple: standard data augmentation is applied; Random: AdaAug is applied with a randomly initialized hγ while keeping the diversity parameters the same; AdaAug (w/o diversity): AdaAug is applied without the diversity parameters; AdaAug: AdaAug is applied with the learned hγ and the diversity parameters. A.3 FINE-TUNING OF THE DIVERSITY PARAMETERS The following experiments study the sensitivity of the temperature T and magnitude perturbation δ on the Flower dataset by changing the values of T and δ around the default values used in our original experiments. Fine-tuning the diversity parameters reduces the test-set error rate of AdaAug-transfer on the Flower dataset from 3.63 to 3.49. A.4 EFFICIENCY OF POLICY SEARCH We compare the GPU hours needed to search the augmentation policy between different automated data augmentation methods in Table 9. Among the baselines, AdaAug is more efficient than AutoAugment, PBA and Fast AutoAugment. A.5 MORE ANALYSIS ON ADAAUG POLICY LEARNING A.5.1 CONVERGENCE OF POLICY LEARNING Here, we provide some insights and empirical evidence for the convergence of policy training. In particular, Figure 4 shows the training and validation losses when learning the CIFAR-10 augmentation policy. The training and validation losses converge to fixed values towards the end of training. In addition, we also visualize the change of the augmentation parameters (p and µ) in Figure 5. The magnitude parameters start at a smaller value and converge towards the end of training. For the augmentation probability, most of the candidates stabilize after a certain number of epochs, while some others are updated more frequently. This evidence shows the convergence of our proposed policy training method.
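The role the two diversity parameters are described as playing can be sketched as follows. This is a hypothetical illustration only: we assume T acts as a softmax temperature over the policy's operation logits (larger T gives flatter, more diverse operation choices) and δ jitters the predicted magnitude µ, which matches the description in this appendix but not necessarily the authors' code; all names are ours.

```python
import math
import random

def sample_policy(logits, mu, T=1.0, delta=0.1, rng=None):
    """Hypothetical sketch of AdaAug's diversity parameters.

    logits : per-operation scores from the policy network.
    mu     : predicted augmentation magnitude in [0, 1].
    T      : temperature flattening/sharpening the operation probabilities.
    delta  : half-width of the uniform perturbation applied to mu.
    """
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = rng or random.Random(0)
    # Jitter the magnitude within [mu - delta, mu + delta], clipped to [0, 1].
    m = min(max(mu + rng.uniform(-delta, delta), 0.0), 1.0)
    return probs, m
```

Under this reading, increasing T or δ trades policy fidelity for sample diversity, which is consistent with the sensitivity study above.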
We leave a more thorough analysis of policy convergence as future work. A.5.2 ANALYSIS OF LEARNED AUGMENTATION MAGNITUDE Complementing the augmentation probability in Figure 2, Figure 6 shows the augmentation magnitude for CIFAR-10 and SVHN. We observe that the policy magnitude λ also shows slight variations among different classes and instances. Although the observation is less prominent and harder to interpret than for the augmentation probability p, the learned augmentation magnitude adapts to different data samples. A.5.3 VARIANCE OF LEARNED AUGMENTATION POLICY GROUPED BY CLASS In Figure 7, we plot the standard deviations of the predicted augmentation probabilities of the image instances grouped by their class labels. We observe that even within the same class, the predicted augmentation policy differs across instances. This suggests that the AdaAug policy captures some instance-level augmentation information. A.5.4 DISTRIBUTIONS OF THE AUGMENTATION PARAMETERS IN ADAAUG-TRANSFER In Figure 8, we show the distributions of the augmentation parameters when transferring the learned policy to the Flower, Pet, Car and Aircraft datasets. The distributions of the augmentation parameters show slight differences between tasks. In particular, we observe that colour transformations, for example Colour, Invert and Solarize, are less preferred in the Flower and Pet datasets, while shearing operations are relatively more favourable in the Car and Aircraft datasets. The observation of using fewer colour transformations for the Flower dataset aligns with the findings of Lee et al. (2021). Although some differences in the augmentation distributions may be less prominent and harder to interpret than in the AdaAug-direct cases, the policy shows its adaptation to different tasks.
A.6 LIMITATIONS Although AdaAug is differentiable and efficient in search , it requires forming different augmentation paths and passing the images through each path . Compared to standard training , where we forward b images in one mini-batch , AdaAug processes b · |T| images in the exploration pass . At inference time , AdaAug keeps a pre-trained policy network ( πθ ) to augment the new dataset . This adds extra computational effort when compared to other AutoDA policies , which may be a concern if the image resolution and model size are large but the computational resources are limited .
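The b · |T| overhead of the exploration pass amounts to the following simple count (the function name is ours; only the formula comes from the text above):

```python
def exploration_batch_size(b, num_candidates):
    """Images processed per optimization step in AdaAug's exploration
    pass: every one of the b mini-batch images is forwarded through each
    of the |T| candidate augmentation paths."""
    return b * num_candidates

# e.g., a mini-batch of 128 images and 16 candidate operations means
# forwarding 128 * 16 = 2048 augmented images instead of 128.
```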
This paper presents an adaptive data augmentation method named AdaAug that searches for augmentation policies in a class-dependent and potentially instance-dependent manner to improve the generalisation capability of deep learning models. The paper proposes an efficient exploitation-exploration workflow to search for an augmentation policy that optimizes generalization performance. Empirical studies on several datasets show that performance improves with AdaAug.
Unsupervised Object Learning via Common Fate
1 INTRODUCTION. Machine learning excels if sufficient training data is available that is representative of the task at hand. In recent years, this i.i.d. data paradigm has been shown to apply not only to pattern recognition problems but also to generative modeling (Goodfellow et al., 2014). In practice, the amount of data required to reach a given level of performance depends on the dimensionality of the data. The generation of high-dimensional images thus either requires huge amounts of data (Karras et al., 2020) or clever methods that exploit prior information, for instance on multi-scale structure or compositionality (Razavi et al., 2019). Imagine we would like to automatically generate realistic images of yearbook group photos. A "brute force" approach would be to collect a massive dataset and train a large GAN (Goodfellow et al., 2014), hoping that the model will not only learn typical backgrounds but also the shape of individual humans (or human faces), and arrange them into a group. A more modular approach, in contrast, would be to learn object models (e.g., for faces or humans) and learn in which positions and arrangements they appear, as well as typical backgrounds. This approach would be more data-efficient: each training image would contain multiple humans, and we would thus effectively have more data for the object learning task. In addition, the sub-task would be lower-dimensional than the original task. Finally, if we leave the i.i.d. setting (by, say, having a second task with different group sizes), the modular approach would lend itself more readily to knowledge transfer. Object-centric approaches aim to capture this compositionality and have been considerably improved over the past few years (e.g., Locatello et al. 2020; Engelcke et al. 2021). However, these models tend to be difficult to train and do not yet scale well to visually more complex scenes.
In addition, the commonly employed end-to-end learning approaches make it difficult to dissect the causes of these difficulties and to identify which principles may be crucial to facilitate unsupervised object learning. In human vision, the Principle of Common Fate of Gestalt psychology (Wertheimer, 2012) has been shown to play an important role in object learning (Spelke, 1990). It posits that elements that move together tend to be perceived as one, a perceptual bias that may have evolved to enable the recognition of camouflaged predators (Troscianko et al., 2009). In our work, we show that this principle can also be used successfully for machine vision by employing it in a multi-stage object learning approach (Fig. 1): First, we use unsupervised motion segmentation to obtain a candidate segmentation of a video frame. Second, we train generative object and background models on this segmentation. While the regions obtained by the motion segmentation are caused by objects moving in 3D, only visible parts can be segmented. To learn the actual objects (i.e., the causes), a crucial task for the object model is learning to generalize beyond the occlusions present in its input data. To measure success, we provide a dataset including object ground truth. As the last stage, we show that the learned object and background models can be combined into a flexible scene model that allows sampling manipulated novel scenes. Thus, in contrast to existing object-centric models trained end-to-end, our work aims at decomposing object learning into evaluable subproblems and testing the potential of exploiting object motions for building scalable object-centric models that allow for causally meaningful interventions in generation.
Summing up, the present work makes the following contributions: • We provide the novel FISHBOWL dataset, positioned between simplistic toy scenarios and real-world data, providing ground-truth information for evaluating causal scene models. • We show that the Common Fate Principle can be successfully used for object learning by proposing a multi-stage object learning approach based on this principle. • We demonstrate that the generative object and background models learned in this way can be combined into flexible scene models allowing for controlled out-of-distribution sampling. The dataset with rendering code and models including training code will be made publicly available. 2 RELATED WORK. Modular scene modeling. The idea to individually represent objects in a scene is not new. One approach, motivated by the analysis-by-synthesis paradigm from cognitive science (Bever & Poeppel, 2010), assumes a detailed specification of the generative process and infers a scene representation by trying to invert this process (Kulkarni et al., 2015; Wu et al., 2017; Jampani et al., 2015). Many methods instead aim to also learn the generative process in an unsupervised way; see Greff et al. (2020) for a recent survey. Several models use a recurrent approach to sequentially decompose a given scene into objects (Eslami et al., 2016; Stelzner et al., 2019; Kosiorek et al., 2018; Gregor et al., 2015; Mnih et al., 2014; Yuan et al., 2019; Engelcke et al., 2020; Weis et al., 2020; von Kügelgen et al., 2020; Burgess et al., 2019), or directly learn a partial ordering (Heess et al., 2011; Le Roux et al., 2011). This sequential approach has also been extended with spatially-parallel components (Dittadi & Winther, 2019; Jiang et al., 2019; Lin et al., 2020b; Chen et al., 2020). Other methods infer all object representations in parallel (Greff et al., 2017; van Steenkiste et al.
, 2018), with subsequent iterative refinement (Greff et al., 2019; Veerapaneni et al., 2019; Locatello et al., 2020; Nanbo et al., 2020). Whereas most of the above models are trained using a reconstruction objective—usually in a variational framework (Kingma & Welling, 2014; Rezende et al., 2014)—several works have also extended GANs (Goodfellow et al., 2014) to generate scenes in a modular way (Yang et al., 2017; Turkoglu et al., 2019; Nguyen-Phuoc et al., 2020; Ehrhardt et al., 2020; Niemeyer & Geiger, 2021). Those approaches typically use additional supervision such as ground-truth segmentation or additional views, with Ehrhardt et al. (2020); Niemeyer & Geiger (2021) being notable exceptions. While most methods can decompose a given scene into its constituent objects, only few are fully generative in the sense that they can generate novel scenes (Lin et al., 2020a; Ehrhardt et al., 2020; Engelcke et al., 2020; von Kügelgen et al., 2020; Engelcke et al., 2021; Niemeyer & Geiger, 2021; Dittadi & Winther, 2019). Our approach differs from previous works in the following three key aspects. First, previous approaches typically train a full scene model in an end-to-end fashion and include architectural biases that lead to the models decomposing scenes into objects. While elegant in principle, those methods have not been shown to scale to more realistic datasets yet. Using a multi-stage approach, as in the present work, enables re-use of existing computer vision methods (such as unsupervised motion segmentation) for well-studied sub-tasks and therefore scales more easily to visually complex scenes. Second, while some existing methods make use of temporal information from videos (Lin et al., 2020a; Crawford & Pineau, 2020; Kosiorek et al., 2018; Weis et al., 2020), they do not explicitly use motion signals to discover (i.e., segment) objects.
Inspired by the development of the human visual system (Spelke, 1990), we instead explicitly include this segmentation cue in our approach. Third, most existing fully-generative approaches use a spatial mixture model to compose objects into a scene (Ehrhardt et al., 2020; Engelcke et al., 2020; 2021). While this simplifies training, it clearly does not match the true, underlying scene generation process. In this work, we instead follow the dead-leaves approach of von Kügelgen et al. (2020) and scale it to more complex scenes. Motion segmentation. We require an unsupervised motion segmentation method that is able to segment multiple object instances. For this, we build on a line of work that tracks points with optical flow and then performs clustering in the space of the resulting point trajectories (Brox & Malik, 2010; Ochs et al., 2014; Ochs & Brox, 2012; Keuper et al., 2015). Motion segmentation methods that require supervision (Dave et al., 2019; Xie et al., 2019; Tokmakov et al., 2017a;b) or only perform binary motion segmentation (Yang et al., 2021; 2019; Ranjan et al., 2019) are not applicable in our unsupervised setting. Learning from motion. In the present work, we propose exploiting motion information to decompose a scene into objects and to learn generative object and scene models. Motion information is believed to be an important cue for the development of the human visual system (Spelke, 1990) and has also been used as a training signal for computer vision (Pathak et al., 2017; Dorfman et al., 2013; Mahendran et al., 2018a;b). While similar in spirit, these works do not address the learning of generative object and scene models. 3 THE FISHBOWL DATASET. Several video datasets have been used for object-centric representation learning before (Weis et al., 2020; Yi et al., 2019; Ehrhardt et al., 2020; Kosiorek et al., 2018).
The ground-truth object masks and appearances provided by those datasets, however, only cover the visible parts of the scene. In order to evaluate the capability of the object model to infer and represent full objects even in the presence of occlusions, we propose the novel FISHBOWL dataset, positioned between complex real-world data and simplistic toy datasets. This dataset consists of 20,000 training and 1,000 validation and test videos recorded from a publicly available WebGL demo of an aquarium (http://webglsamples.org/aquarium/aquarium.html, 3-clause BSD license), each with a resolution of 480×320px and 128 frames. We adapted the rendering to obtain ground-truth segmentations of the scene and the ground-truth unoccluded background and objects (Fig. 2). More details regarding the recording setup can be found in the supplement. 4 A MULTI-STAGE APPROACH FOR UNSUPERVISED SCENE MODELLING. We model an image x as a composition of a background (m0 = 1, x0) and an ordered list of objects (mi, xi), each represented as a binary mask mi and appearance xi. The background and objects are composed into a scene using a simple "dead leaves" model, i.e., the value of each pixel (u, v) is determined by the foremost object covering that pixel. We propose a multi-stage approach (Fig. 1) for learning generative models representing objects, backgrounds and scenes in this fashion. STAGE 1: MOTION SEGMENTATION—OBTAINING CANDIDATE OBJECTS FROM VIDEOS. As a first step, we use unsupervised motion segmentation to obtain candidate segmentations of the input videos. We build on the minimum cost multicut method of Keuper et al. (2015), which tracks a subset of the pixels through the video using optical flow and then, inspired by the Common Fate Principle mentioned earlier, clusters the trajectories based on pairwise motion affinities.
We use the original implementation of the authors, but replace the postprocessing required to obtain a dense segmentation with a simpler but faster non-parametric watershed algorithm (Beucher & Meyer, 1993) followed by computing spatiotemporal connected components (Silversmith, 2021). The quality of the motion segmentation critically depends on the quality of the optical flow estimation, so we explore different models for that step. ARFlow (Liu et al., 2020) is the current state-of-the-art self-supervised optical flow method, combining a common warping-based objective with self-supervision using various augmentations. We use the published pretrained models as well as a variant trained on the Fishbowl dataset (see supplement for details). Similar augmentations as used by ARFlow can alternatively be used to synthesize training data for supervised methods, as done for generating the FlyingChairs and FlyingThings datasets (Dosovitskiy et al., 2015; Mayer et al., 2016). We experiment with FlowNet 2.0 (Ilg et al., 2017) and the more recent RAFT (Teed & Deng, 2020) trained on those two datasets. To obtain background masks for training the background model, it is not necessary to differentiate between multiple object instances. We aim for a low rate of foreground pixels being mistaken for background pixels, while background pixels mistaken for foreground are of less concern. Hence, we use an ensemble of different background-foreground segmentation models from the bgslibrary (Sobral, 2013). Based on early experiments, we used PAWCS (St-Charles et al., 2016), LOBSTER (St-Charles & Bilodeau, 2014), Σ−∆ estimation (Manzanera & Richefeu, 2007) and static frame differences, and label every pixel detected as foreground by any of these methods as a foreground pixel. We found that this rather simple model can faithfully remove the foreground objects in most cases. We provide additional details in the appendix.
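The ensemble rule described above (a pixel counts as foreground if any method flags it) can be sketched as follows, using plain nested lists in place of image arrays; the function name is ours:

```python
def ensemble_foreground(masks):
    """Union of binary foreground masks (sketch of the ensemble rule).

    A pixel is labeled foreground if ANY of the background-subtraction
    methods in the ensemble marks it as foreground.  This favours a low
    rate of foreground pixels being mistaken for background, at the cost
    of more false foreground detections, which matters less here.
    """
    h, w = len(masks[0]), len(masks[0][0])
    return [[int(any(m[i][j] for m in masks)) for j in range(w)]
            for i in range(h)]
```

For instance, two 2×2 masks flagging different pixels combine into the union of their foreground regions.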
STAGE 2A: OBJECT MODEL—LEARNING TO GENERATE UNOCCLUDED, MASKED OBJECTS. Object extraction. We use the bounding boxes of the candidate segmentation to extract object crops from the original videos and rescale them to a common size of 128×64px. We filter out degenerate masks by ignoring all masks with an area smaller than 64 pixels and only considering bounding boxes with a minimum distance of 16px to the frame boundary. Accordingly, we extract the candidate segmentation masks $m_0, \ldots, m_K$ for each crop. For notational convenience, we take $m_0$ and $m_1$ to correspond to the background and the object of interest (i.e., that whose bounding box was used to create the crop), respectively, so that $m_k$ with $k \geq 2$ correspond to the masks of other objects. Task. We use the segmented object crops for training a β-VAE-based generative object model (Higgins et al., 2017). Input to the model is the object crop without the segmentation; output is the reconstructed object appearance including the binary object mask. We train the model with the standard β-VAE loss with an adapted reconstruction term including both the appearance and the mask. For an input batch, let $c$ and $m_{0:K}$ be the ground-truth crops with candidate segmentations, and $\hat{c}$ and $\hat{m}$ the reconstructed object appearances (RGB values for each pixel) and shapes (foreground probability for each pixel). The reconstruction loss $\mathcal{L}_R$ for these objects is then the weighted sum of the pixel-wise MSE for the appearance and the pixel-wise binary cross-entropy for the mask:
$$\mathcal{L}_{R,\text{appear}} = \sum_i \frac{\sum_{u,v} m_1^{(i)}(u,v)\, \big\| c^{(i)}(u,v) - \hat{c}^{(i)}(u,v) \big\|_2^2}{\sum_{u,v} m_1^{(i)}(u,v)},$$
$$\mathcal{L}_{R,\text{mask}} = \sum_i \frac{\sum_{u,v} \big[m_0^{(i)} + m_1^{(i)}\big](u,v)\, \mathrm{BCE}\big[m_1^{(i)}(u,v),\, \hat{m}^{(i)}(u,v)\big]}{\sum_{u,v} \big[m_0^{(i)} + m_1^{(i)}\big](u,v)}.$$
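These masked losses can be sketched in plain Python for a single greyscale crop (the batched RGB version sums over crops i and pixels (u, v); the flat pixel list and helper name are our simplifications):

```python
import math

def masked_losses(c, c_hat, m_hat, m0, m1):
    """Per-crop reconstruction losses of the object model (sketch).

    c, c_hat : ground-truth and reconstructed pixel values (flat lists).
    m_hat    : predicted foreground probability per pixel.
    m0, m1   : binary background / object candidate masks per pixel.
    Pixels covered by other objects (m0 + m1 == 0) contribute to neither
    term, so occluded object parts can be completed without penalty.
    """
    n = len(c)
    eps = 1e-7
    # Appearance: squared error, restricted to the object mask m1.
    appear = sum(m1[p] * (c[p] - c_hat[p]) ** 2 for p in range(n))
    appear /= max(sum(m1), eps)
    # Mask: binary cross-entropy on object and background pixels only.
    bce = sum((m0[p] + m1[p]) * -(m1[p] * math.log(m_hat[p] + eps)
                                  + (1 - m1[p]) * math.log(1.0 - m_hat[p] + eps))
              for p in range(n))
    mask = bce / max(sum(m0[p] + m1[p] for p in range(n)), eps)
    return appear, mask
```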
As the task of the object model is to represent only the central object in each crop, we restrict the appearance loss to the candidate mask of the object ($m_1$) and the mask loss to the union of the candidate masks of the object and the background ($m_0 + m_1$). Importantly, the reconstruction loss is not evaluated for pixels belonging to other objects according to the candidate masks. Therefore, the object model is not penalized for completing object parts that are occluded by another object. Learning object completion via artificial occlusions. To encourage the model to correctly complete partial objects, we use artificial occlusions as an augmentation during training. Similar to a denoising autoencoder (Vincent et al., 2008), we compute the reconstruction loss using the unaugmented object crop. We consider two types of artificial occlusions: first, we use a cutout augmentation (DeVries & Taylor, 2017), placing a variable number of grey rectangles on the input image. As an alternative, we use the candidate segmentation to place another, randomly shifted object from the same input batch onto each crop. Model. We use a β-VAE with 128 latent dimensions. The encoder is a ten-layer CNN; the appearance decoder is a corresponding CNN using transposed convolutions (Dumoulin & Visin, 2018) and one additional convolutional decoding layer. We use a second decoder with the same architecture but only a single output channel to decode the object masks. During each epoch, we use crops from two random frames for every object. We train our model for 60 epochs using Adam (Kingma & Ba, 2015) with a learning rate of 10⁻⁴, which we decrease by a factor of 10 after 40 epochs. We chose the optimal hyperparameters for this architecture using grid searches. More details regarding the model architecture and the hyperparameters are provided in the supplement. STAGE 2B: BACKGROUND MODEL—LEARNING TO GENERATE UNOCCLUDED BACKGROUNDS. Task.
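The first occlusion type, cutout, can be sketched as follows (after DeVries & Taylor, 2017); the greyscale representation and all parameter values are illustrative, not the paper's configuration:

```python
import random

def cutout(crop, max_holes=2, max_size=8, fill=0.5, rng=None):
    """Cutout-style artificial occlusion (sketch).

    Places a variable number of grey rectangles on a copy of the crop
    (here an H x W list of greyscale values; the paper uses RGB crops).
    The reconstruction target remains the unaugmented crop, as in a
    denoising autoencoder, so the model must inpaint the occluded parts.
    """
    rng = rng or random.Random(0)
    h, w = len(crop), len(crop[0])
    out = [row[:] for row in crop]
    for _ in range(rng.randint(1, max_holes)):
        rh, rw = rng.randint(1, max_size), rng.randint(1, max_size)
        y, x = rng.randint(0, h - 1), rng.randint(0, w - 1)
        for i in range(y, min(y + rh, h)):
            for j in range(x, min(x + rw, w)):
                out[i][j] = fill
    return out
```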
We use an ensemble of background extraction techniques outlined above to estimate background scenes for each frame. We train a β-VAE on these backgrounds using the appearance loss L_{R,appear} with the inferred background mask, without any additional cutout or object augmentation. Architecture. The β-VAE has the same architecture as the object model, but only uses a single decoder for the background appearance. We do not focus on a detailed reconstruction of background samples and limit the resolution to 96×64px. When sampling scenes, the outputs are upsampled to the original resolution of 480×320px using bilinear interpolation. STAGE 3 : SCENE MODEL—LEARNING TO GENERATE COHERENT SCENES In the final stage, we combine the object and background model into a scene model that allows sampling novel scenes. As the scene model can reuse the decoders from the previous stages, its main task is to model the parameters defining the scene composition, such as object counts, locations, and dependencies between the background and the object latents. Compared to an end-to-end approach, the complexity of the learning problem is greatly reduced in this setting. It is straightforward to generalize the scene model beyond the training distribution: e.g., it is easy to sample more objects than observed in the input scenes. We use a scene model following the causal graph depicted in Fig. 3: First, we sample a background latent z_bg, which describes global properties of the scene such as its composition and illumination; z_bg is then decoded by the background model into a background image x_0. Conditioned on the background latent, we sequentially sample K tuples (z_k^{app}, z_k^{pos}, z_k^{scale}) of latents encoding appearance, position, and scale of object k, respectively; the number of objects K is sampled conditional on z_bg as well.
Each appearance latent z_k^{app} is decoded by the object model into a masked object o_k = (m_k, x_k), which is subsequently re-scaled by z_k^{scale} and placed in the scene at position z_k^{pos} according to a dead-leaves model (i.e., occluding previously visible pixels at the same location). Due to the formulation of the model, we are flexible in specifying the conditional and prior distributions needed to generate samples. A particularly simple special case is to sample all latents (indicated as circles in Fig. 3) independently. This can be done using informed prior distributions, or by leveraging the training dataset. In the former case, we sample z_bg and all z_k^{app} from the standard normal prior of the β-VAE, but reject objects for which the binary entropy of the mask (averaged across all pixels) exceeds a threshold (for figures in the main paper, 100 bits). We found empirically that this entropy threshold can be used to trade diversity of samples for higher-quality samples (cf. supplement). For the coordinates, a uniform prior within the image yields reasonable samples, and scales can be sampled from a uniform distribution between 64×32px and 192×96px at a fixed 2:1 ratio. Alternatively, all distributions can be fit based on values obtained from the motion segmentation (object and background latents, distribution of sizes, distribution of coordinates). We provide a detailed analysis in the supplement.
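The sampling-and-placement procedure just described can be sketched as follows. The decoder APIs, the uniform placement, and especially how per-pixel mask entropies are aggregated against the 100-bit threshold are illustrative assumptions (we sum entropies over pixels here):

```python
import numpy as np

def mask_entropy_bits(p, eps=1e-8):
    """Total binary entropy of a predicted mask in bits (summed over pixels;
    the paper's exact aggregation is an assumption here)."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)).sum())

def sample_scene(decode_bg, decode_obj, rng, n_objects=3,
                 latent_dim=128, entropy_threshold=100.0, max_tries=20):
    """Sample a scene: background first, then objects placed back-to-front
    according to a dead-leaves model (each object occludes whatever is
    already at its location)."""
    scene = decode_bg(rng.standard_normal(latent_dim))   # (H, W, 3)
    H, W = scene.shape[:2]
    for _ in range(n_objects):
        # Rejection step: discard objects with too-uncertain masks.
        for _ in range(max_tries):
            appearance, mask = decode_obj(rng.standard_normal(latent_dim))
            if mask_entropy_bits(mask) <= entropy_threshold:
                break
        h, w = mask.shape
        y = rng.integers(0, H - h + 1)                   # uniform position prior
        x = rng.integers(0, W - w + 1)
        region = scene[y:y + h, x:x + w]
        m = (mask > 0.5)[..., None]                      # binarized object mask
        scene[y:y + h, x:x + w] = np.where(m, appearance, region)
    return scene
```

Rescaling by z_k^{scale} is omitted here; it would resize (appearance, mask) before placement.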
This paper provides a multi-stage solution to unsupervised, frame-wise segmentation in videos. The common fate heuristic is used to provide initial object detections and segmentations. Then a VAE-based model is used to refine the initial results. A new simulated dataset is proposed. Results show that the proposed method successfully segments out moving objects and is highly scalable.
SP:ad56e4793625bbd44652ebff8b82d56b9efbd612
Unsupervised Object Learning via Common Fate
1 INTRODUCTION. Machine learning excels if sufficient training data is available that is representative of the task at hand. In recent years, this i.i.d. data paradigm has been shown to apply not only to pattern recognition problems, but also to generative modeling (Goodfellow et al., 2014). In practice, the amount of data required to reach a given level of performance will depend on the dimensionality of the data. The generation of high-dimensional images thus either requires huge amounts of data (Karras et al., 2020) or clever methods that exploit prior information, for instance on multi-scale structure or compositionality (Razavi et al., 2019). Imagine we would like to automatically generate realistic images of yearbook group photos. A "brute force" approach would be to collect a massive dataset and train a large GAN (Goodfellow et al., 2014), hoping that the model will not only learn typical backgrounds, but also the shape of individual humans (or human faces), and arrange them into a group. A more modular approach, in contrast, would be to learn object models (e.g., for faces or humans), and learn in which positions and arrangements they appear, as well as typical backgrounds. This approach would be more data-efficient: each training image would contain multiple humans, and we would thus effectively have more data for the object learning task. In addition, the sub-task would be lower-dimensional than the original task. Finally, if we leave the i.i.d. setting (by, say, having a second task with different group sizes), the modular approach would lend itself more readily to knowledge transfer. Object-centric approaches aim to capture this compositionality and have been considerably improved over the past few years (e.g., Locatello et al. 2020; Engelcke et al. 2021). However, these models tend to be difficult to train and do not yet scale well to visually more complex scenes.
In addition, the commonly employed end-to-end learning approaches make it difficult to dissect the causes of these difficulties and what principles may be crucial to facilitate unsupervised object learning. In human vision, the Principle of Common Fate of Gestalt psychology (Wertheimer, 2012) has been shown to play an important role in object learning (Spelke, 1990). It posits that elements that are moving together tend to be perceived as one—a perceptual bias that may have evolved to be able to recognize camouflaged predators (Troscianko et al., 2009). In our work, we show that this principle can also be successfully used for machine vision by employing it in a multi-stage object learning approach (Fig. 1): First, we use unsupervised motion segmentation to obtain a candidate segmentation of a video frame. Second, we train generative object and background models on this segmentation. While the regions obtained by the motion segmentation are caused by objects moving in 3D, only visible parts can be segmented. To learn the actual objects (i.e., the causes), a crucial task for the object model is learning to generalize beyond the occlusions present in its input data. To measure success, we provide a dataset including object ground truth. As the last stage, we show that the learned object and background models can be combined into a flexible scene model that allows sampling manipulated novel scenes. Thus, in contrast to existing object-centric models trained end-to-end, our work aims at decomposing object learning into evaluable subproblems and testing the potential of exploiting object motions for building scalable object-centric models that allow for causally meaningful interventions in generation.
Summing up, the present work makes the following contributions: • We provide the novel FISHBOWL dataset, positioned between simplistic toy scenarios and real-world data, providing ground truth information for evaluating causal scene models. • We show that the Common Fate Principle can be successfully used for object learning by proposing a multi-stage object learning approach based on this principle. • We demonstrate that the generative object and background models learned in this way can be combined into flexible scene models allowing for controlled out-of-distribution sampling. The dataset with rendering code and models including training code will be made publicly available. 2 RELATED WORK. Modular scene modeling. The idea to individually represent objects in a scene is not new. One approach, motivated by the analysis-by-synthesis paradigm from cognitive science (Bever & Poeppel, 2010), assumes a detailed specification of the generative process and infers a scene representation by trying to invert this process (Kulkarni et al., 2015; Wu et al., 2017; Jampani et al., 2015). Many methods instead aim to also learn the generative process in an unsupervised way; see Greff et al. (2020) for a recent survey. Several models use a recurrent approach to sequentially decompose a given scene into objects (Eslami et al., 2016; Stelzner et al., 2019; Kosiorek et al., 2018; Gregor et al., 2015; Mnih et al., 2014; Yuan et al., 2019; Engelcke et al., 2020; Weis et al., 2020; von Kügelgen et al., 2020; Burgess et al., 2019), or directly learn a partial ordering (Heess et al., 2011; Le Roux et al., 2011). This sequential approach has also been extended with spatially-parallel components (Dittadi & Winther, 2019; Jiang et al., 2019; Lin et al., 2020b; Chen et al., 2020). Other methods infer all object representations in parallel (Greff et al., 2017; van Steenkiste et al.
, 2018), with subsequent iterative refinement (Greff et al., 2019; Veerapaneni et al., 2019; Locatello et al., 2020; Nanbo et al., 2020). Whereas most of the above models are trained using a reconstruction objective—usually in a variational framework (Kingma & Welling, 2014; Rezende et al., 2014)—several works have also extended GANs (Goodfellow et al., 2014) to generate scenes in a modular way (Yang et al., 2017; Turkoglu et al., 2019; Nguyen-Phuoc et al., 2020; Ehrhardt et al., 2020; Niemeyer & Geiger, 2021). Those approaches typically use additional supervision such as ground-truth segmentation or additional views, with Ehrhardt et al. (2020) and Niemeyer & Geiger (2021) being notable exceptions. While most methods can decompose a given scene into its constituent objects, only few are fully generative in the sense that they can generate novel scenes (Lin et al., 2020a; Ehrhardt et al., 2020; Engelcke et al., 2020; von Kügelgen et al., 2020; Engelcke et al., 2021; Niemeyer & Geiger, 2021; Dittadi & Winther, 2019). Our approach differs from previous works in the following three key aspects. First, previous approaches typically train a full scene model in an end-to-end fashion and include architectural biases that lead to the models decomposing scenes into objects. While elegant in principle, those methods have not been shown to scale to more realistic datasets yet. Using a multi-stage approach, as in the present work, enables re-use of existing computer vision methods (such as unsupervised motion segmentation) for well-studied sub-tasks and therefore scales more easily to visually complex scenes. Second, while some existing methods make use of temporal information from videos (Lin et al., 2020a; Crawford & Pineau, 2020; Kosiorek et al., 2018; Weis et al., 2020), they do not explicitly use motion signals to discover (i.e., segment) objects.
Inspired by the development of the human visual system (Spelke, 1990), we instead explicitly include this segmentation cue in our approach. Third, most existing fully-generative approaches use a spatial mixture model to compose objects into a scene (Ehrhardt et al., 2020; Engelcke et al., 2020; 2021). While this simplifies training, it clearly does not match the true, underlying scene generation process. In this work, we instead follow the dead leaves model approach of von Kügelgen et al. (2020) and scale it to more complex scenes. Motion Segmentation. We require an unsupervised motion segmentation method that is able to segment multiple object instances. For this, we build on a line of work that tracks points with optical flow and then performs clustering in the space of the resulting point trajectories (Brox & Malik, 2010; Ochs et al., 2014; Ochs & Brox, 2012; Keuper et al., 2015). Motion segmentation methods that require supervision (Dave et al., 2019; Xie et al., 2019; Tokmakov et al., 2017a;b) or only perform binary motion segmentation (Yang et al., 2021; 2019; Ranjan et al., 2019) are not applicable in our unsupervised setting. Learning from motion. In the present work, we propose exploiting motion information to decompose a scene into objects, and to learn generative object and scene models. Motion information is believed to be an important cue for the development of the human visual system (Spelke, 1990) and has also been used as a training signal for computer vision (Pathak et al., 2017; Dorfman et al., 2013; Mahendran et al., 2018a;b). While similar in spirit, these works do not address learning of generative object and scene models. 3 THE FISHBOWL DATASET. Several video datasets have been used for object-centric representation learning before (Weis et al., 2020; Yi et al., 2019; Ehrhardt et al., 2020; Kosiorek et al., 2018).
The ground truth object masks and appearances provided by those datasets, however, only cover the visible parts of the scene. In order to evaluate the capabilities of the object model to infer and represent full objects even in the presence of occlusions, we propose the novel FISHBOWL dataset, positioned between complex real-world data and simplistic toy datasets. This dataset consists of 20,000 training and 1,000 validation and test videos recorded from a publicly available WebGL demo of an aquarium,1 each with a resolution of 480×320px and 128 frames. We adapted the rendering to obtain ground truth segmentations of the scene and the ground truth unoccluded background and objects (Fig. 2). More details regarding the recording setup can be found in the supplement. 4 A MULTI-STAGE APPROACH FOR UNSUPERVISED SCENE MODELLING. We model an image x as a composition of a background (m_0 = 1, x_0) and an ordered list of objects (m_i, x_i), each represented as a binary mask m_i and appearance x_i. The background and objects are composed into a scene using a simple "dead leaves" model, i.e., the value of each pixel (u, v) is determined by the foremost object covering that pixel. We propose a multi-stage approach (Fig. 1) for learning generative models representing objects, backgrounds and scenes in this fashion. 1http://webglsamples.org/aquarium/aquarium.html, 3-clause BSD license STAGE 1: MOTION SEGMENTATION—OBTAINING CANDIDATE OBJECTS FROM VIDEOS As a first step, we use unsupervised motion segmentation to obtain candidate segmentations of the input videos. We build on the minimum cost multicut method by Keuper et al. (2015), which tracks a subset of the pixels through the video using optical flow and then, inspired by the Common Fate Principle mentioned earlier, clusters the trajectories based on pairwise motion affinities.
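The common-fate grouping behind this stage can be illustrated with a toy sketch. Note this drastically simplifies the minimum cost multicut formulation of Keuper et al. (2015): here we merely threshold pairwise motion distances between point trajectories and take connected components (the distance measure and threshold are assumptions for illustration):

```python
import numpy as np

def common_fate_clusters(trajectories, tol=0.5):
    """Group point trajectories that move together.

    trajectories : (N, T, 2) array of N tracked points over T frames
    Returns an (N,) array of integer cluster labels.
    """
    # Per-trajectory velocities, flattened across time.
    v = np.diff(trajectories, axis=1).reshape(len(trajectories), -1)
    # Pairwise motion distance between trajectories.
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    adj = d <= tol
    # Connected components via depth-first search.
    labels = -np.ones(len(trajectories), dtype=int)
    cur = 0
    for i in range(len(trajectories)):
        if labels[i] >= 0:
            continue
        stack = [i]
        while stack:
            j = stack.pop()
            if labels[j] >= 0:
                continue
            labels[j] = cur
            stack.extend(np.flatnonzero(adj[j] & (labels < 0)).tolist())
        cur += 1
    return labels
```

Points on the same rigidly moving object share near-identical velocities and end up in one cluster; the multicut method generalizes this to noisy, partially observed affinities.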
We use the original implementation of the authors, but replace the postprocessing required to obtain a dense segmentation with a simpler but faster non-parametric watershed algorithm (Beucher & Meyer, 1993) followed by computing spatiotemporal connected components (Silversmith, 2021). The quality of the motion segmentation critically depends on the quality of the optical flow estimation, so we explore different models for that step. ARFlow (Liu et al., 2020) is the current state-of-the-art self-supervised optical flow method; it combines a common warping-based objective with self-supervision using various augmentations. We use the published pretrained models as well as a variant trained on the Fishbowl dataset (see supplement for details). Similar augmentations as used by ARFlow can alternatively be used to synthesize training data for supervised methods, as done for generating the FlyingChairs and FlyingThings datasets (Dosovitskiy et al., 2015; Mayer et al., 2016). We experiment with FlowNet 2.0 (Ilg et al., 2017) and the more recent RAFT (Teed & Deng, 2020) trained on those two datasets. To obtain background masks for training the background model, it is not necessary to differentiate between multiple object instances. We aim for a low rate of foreground pixels being mistaken for background pixels, while background pixels mistaken for foreground are of less concern. Hence, we use an ensemble of different background-foreground segmentation models from the bgslibrary (Sobral, 2013). Based on early experiments, we used PAWCS (St-Charles et al., 2016), LOBSTER (St-Charles & Bilodeau, 2014), Σ-Δ estimation (Manzanera & Richefeu, 2007) and static frame differences, and label every pixel detected as foreground by any of the methods as a foreground pixel. We found that this rather simple model can faithfully remove the foreground objects in most cases. We provide additional details in the appendix.
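The ensemble rule for background masks, labeling a pixel as foreground if any member detects it, amounts to a union of binary masks (a minimal sketch; the individual bgslibrary models are abstracted away here):

```python
import numpy as np

def ensemble_foreground(masks):
    """Union of binary foreground masks from several background-subtraction
    methods: a pixel is foreground if *any* method flags it.

    This biases the ensemble toward few foreground pixels leaking into
    the background estimate, at the cost of over-marking foreground,
    matching the stated design goal.
    """
    fg = np.zeros(masks[0].shape, dtype=bool)
    for m in masks:
        fg |= m.astype(bool)
    return fg
```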
The paper introduces an object-centric generative model for visual scenes. The model decouples the problem into three tasks: 1) modelling the 2D appearance and shape of individual objects with a variational auto encoder; 2) same thing for background; 3) sampling the position, size and appearance of individual objects (i.e. scene composition) conditioned on the background. To my understanding, these three components are trained independently. Contributions: 1) Decoupling these three tasks allows for a more interpretable representation compared to existing end-to-end methods, which allows for better interactions with users (e.g. change the number of objects or their position) 2) This decoupling potentially makes the third task easier, because it only needs to learn relationships between positions and size given a latent representation (as opposed to learning everything jointly). 3) Training the tasks independently requires a way to obtain "ground-truth" object segmentations - the paper achieves this automatically from videos using motion segmentation with good results
SP:ad56e4793625bbd44652ebff8b82d56b9efbd612
Unsupervised Object Learning via Common Fate
1 INTRODUCTION . Machine learning excels if sufficient training data is available that is representative of the task at hand . In recent years , this i.i.d . data paradigm has been shown not only to apply for pattern recognition problems , but also for generative modeling ( Goodfellow et al. , 2014 ) . In practice , the amount of data required to reach a given level of performance will depend on the dimensionality of the data . The generation of high-dimensional images thus either requires huge amounts of data ( Karras et al. , 2020 ) or clever methods that exploit prior information , for instance on multi-scale structure or compositionality ( Razavi et al. , 2019 ) . Imagine we would like to automatically generate realistic images of yearbook group photos . A “ brute force ” approach would be to collect a massive dataset and train a large GAN ( Goodfellow et al. , 2014 ) , hoping that the model will not only learn typical backgrounds , but also the shape of individual humans ( or human faces ) , and arrange them into a group . A more modular approach , in contrast , would be to learn object models ( e.g. , for faces , or humans ) , and learn in which positions and arrangements they appear , as well as typical backgrounds . This approach would be more dataefficient : each training image would contain multiple humans , and we would thus effectively have more data for the object learning task . In addition , the sub-task would be lower-dimensional than the original task . Finally , if we leave the i.i.d . setting ( by , say , having a second task with different group sizes ) , the modular approach would lend itself more readily to knowledge transfer . Object-centric approaches aim to capture this compositionality and have been considerably improved over the past few years ( e.g. , Locatello et al . 2020 ; Engelcke et al . 2021 ) . However , these models tend to be difficult to train and do not yet scale well to visually more complex scenes . 
In addition , the commonly employed end-to-end learning approaches make it difficult to dissect the causes of these difficulties and what principles may be crucial to facilitate unsupervised object learning . In human vision , the Principle of Common Fate of Gestalt Psychology ( Wertheimer , 2012 ) has been shown to play an important role for object learning ( Spelke , 1990 ) . It posits that elements that are moving together tend to be perceived as one—a perceptual bias that may have evolved to be able to recognize camouflaged predators ( Troscianko et al. , 2009 ) . In our work , we show that this principle can be successfully used also for machine vision by using it in a multi-stage object learning approach ( Fig . 1 ) : First , we use unsupervised motion segmentation to obtain a candidate segmentation of a video frame . Second , we train generative object and background models on this segmentation . While the regions obtained by the motion segmentation are caused by objects moving in 3D , only visible parts can be segmented . To learn the actual objects ( i.e. , the causes ) , a crucial task for the object model is learning to generalize beyond the occlusions present in its input data . To measure success , we provide a dataset including object ground truth . As the last stage , we show that the learned object and background models can be combined into a flexible scene model that allows sampling manipulated novel scenes . Thus , in contrast to existing object-centric models trained end-to-end , our work aims at decomposing object learning into evaluable subproblems and testing the potential of exploiting object motions for building scalable object-centric models that allow for causally meaningful interventions in generation . 
Summing up , the present work makes the following contributions : • We provide the novel FISHBOWL dataset , positioned between simplistic toy scenarios and real world data , providing ground truth information for evaluating causal scene models . • We show that the Common Fate Principle can be succesfully used for object learning by proposing a multi-stage object learning approach based on this principle . • We demonstrate that the generative object and background models learned in this way can be combined into flexible scene models allowing for controlled out-of-distribution sampling . The dataset with rendering code and models including training code will be made publicly available . 2 RELATED WORK . Modular scene modeling . The idea to individually represent objects in a scene is not new . One approach , motivated by the analysis-by-synthesis paradigm from cognitive science ( Bever & Poeppel , 2010 ) , assumes a detailed specification of the generative process and infers a scene representation by trying to invert this process ( Kulkarni et al. , 2015 ; Wu et al. , 2017 ; Jampani et al. , 2015 ) . Many methods instead aim to also learn the generative process in an unsupervised way , see Greff et al . ( 2020 ) for a recent survey . Several models use a recurrent approach to sequentially decompose a given scene into objects ( Eslami et al. , 2016 ; Stelzner et al. , 2019 ; Kosiorek et al. , 2018 ; Gregor et al. , 2015 ; Mnih et al. , 2014 ; Yuan et al. , 2019 ; Engelcke et al. , 2020 ; Weis et al. , 2020 ; von Kügelgen et al. , 2020 ; Burgess et al. , 2019 ) , or directly learn a partial ordering ( Heess et al. , 2011 ; Le Roux et al. , 2011 ) . This sequential approach has also been extended with spatially-parallel components in ( Dittadi & Winther , 2019 ; Jiang et al. , 2019 ; Lin et al. , 2020b ; Chen et al. , 2020 ) . Other methods infer all object representations in parallel ( Greff et al. , 2017 ; van Steenkiste et al. 
, 2018 ) , with subsequent iterative refinement ( Greff et al. , 2019 ; Veerapaneni et al. , 2019 ; Locatello et al. , 2020 ; Nanbo et al. , 2020 ) . Whereas most of the above models are trained using a reconstruction objective—usually in a variational framework ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) —several works have also extended GANs ( Goodfellow et al. , 2014 ) to generate scenes in a modular way ( Yang et al. , 2017 ; Turkoglu et al. , 2019 ; Nguyen-Phuoc et al. , 2020 ; Ehrhardt et al. , 2020 ; Niemeyer & Geiger , 2021 ) . Those approaches typically use additional supervision such as ground-truth segmentation or additional views , with Ehrhardt et al . ( 2020 ) ; Niemeyer & Geiger ( 2021 ) being notable exceptions . While most methods can decompose a given scene into its constituent objects , only few are fully-generative in the sense that they can generate novel scenes ( Lin et al. , 2020a ; Ehrhardt et al. , 2020 ; Engelcke et al. , 2020 ; von Kügelgen et al. , 2020 ; Engelcke et al. , 2021 ; Niemeyer & Geiger , 2021 ; Dittadi & Winther , 2019 ) . Our approach differs from previous works in the following three key aspects . First , previous approaches typically train a full scene model in an end-to-end fashion and include architectural biases that lead to the models decomposing scenes into objects . While elegant in principle , those methods have not be shown to scale to more realistic datasets yet . Using a multi-stage approach , as in the present work , enables re-use of existing computer vision methods ( such as unsupervised motion segmentation ) for well-studied sub-tasks and therefore scales more easily to visually complex scenes . Second , while some existing methods make use of temporal information from videos ( Lin et al. , 2020a ; Crawford & Pineau , 2020 ; Kosiorek et al. , 2018 ; Weis et al. , 2020 ) , they do not explicitely use motion signals to discover ( i.e. , segment ) objects . 
Inspired by the development of the human visual system (Spelke, 1990), we instead explicitly include this segmentation cue in our approach. Third, most existing fully generative approaches use a spatial mixture model to compose objects into a scene (Ehrhardt et al., 2020; Engelcke et al., 2020; 2021). While this simplifies training, it clearly does not match the true, underlying scene generation process. In this work, we instead follow the dead leaves model approach of von Kügelgen et al. (2020) and scale it to more complex scenes. Motion Segmentation. We require an unsupervised motion segmentation method that is able to segment multiple object instances. For this, we build on a line of work that tracks points with optical flow and then performs clustering in the space of the resulting point trajectories (Brox & Malik, 2010; Ochs et al., 2014; Ochs & Brox, 2012; Keuper et al., 2015). Motion segmentation methods that require supervision (Dave et al., 2019; Xie et al., 2019; Tokmakov et al., 2017a; b) or only perform binary motion segmentation (Yang et al., 2021; 2019; Ranjan et al., 2019) are not applicable in our unsupervised setting. Learning from motion. In the present work, we propose exploiting motion information to decompose a scene into objects, and to learn generative object and scene models. Motion information is believed to be an important cue for the development of the human visual system (Spelke, 1990) and has also been used as a training signal for computer vision (Pathak et al., 2017; Dorfman et al., 2013; Mahendran et al., 2018a; b). While similar in spirit, these works do not address the learning of generative object and scene models. 3 THE FISHBOWL DATASET. Several video datasets have been used for object-centric representation learning before (Weis et al., 2020; Yi et al., 2019; Ehrhardt et al., 2020; Kosiorek et al., 2018).
The ground-truth object masks and appearances provided by those datasets, however, only cover the visible parts of the scene. In order to evaluate the capability of the object model to infer and represent full objects even in the presence of occlusions, we propose the novel FISHBOWL dataset, positioned between complex real-world data and simplistic toy datasets. This dataset consists of 20,000 training and 1,000 validation and test videos recorded from a publicly available WebGL demo of an aquarium,1 each with a resolution of 480×320px and 128 frames. We adapted the rendering to obtain ground-truth segmentations of the scene as well as the ground-truth unoccluded background and objects (Fig. 2). More details regarding the recording setup can be found in the supplement. 4 A MULTI-STAGE APPROACH FOR UNSUPERVISED SCENE MODELLING. We model an image x as a composition of a background (m0 = 1, x0) and an ordered list of objects (mi, xi), each represented as a binary mask mi and appearance xi. The background and objects are composed into a scene using a simple “dead leaves” model, i.e., the value of each pixel (u, v) is determined by the foremost object covering that pixel. We propose a multi-stage approach (Fig. 1) for learning generative models representing objects, backgrounds and scenes in this fashion. 1http://webglsamples.org/aquarium/aquarium.html, 3-clause BSD license. STAGE 1: MOTION SEGMENTATION—OBTAINING CANDIDATE OBJECTS FROM VIDEOS. As a first step, we use unsupervised motion segmentation to obtain candidate segmentations of the input videos. We build on the minimum cost multicut method by Keuper et al. (2015), which tracks a subset of the pixels through the video using optical flow and then, inspired by the Common Fate Principle mentioned earlier, clusters the trajectories based on pairwise motion affinities.
We use the original implementation of the authors, but replace the postprocessing required to obtain a dense segmentation with a simpler but faster non-parametric watershed algorithm (Beucher & Meyer, 1993) followed by computing spatiotemporal connected components (Silversmith, 2021). The quality of the motion segmentation critically depends on the quality of the optical flow estimation, so we explore different models for that step. ARFlow (Liu et al., 2020) is a current state-of-the-art self-supervised optical flow method that combines a common warping-based objective with self-supervision using various augmentations. We use the published pretrained models as well as a variant trained on the Fishbowl dataset (see supplement for details). Similar augmentations as used by ARFlow can alternatively be used to synthesize training data for supervised methods, as done for generating the FlyingChairs and FlyingThings datasets (Dosovitskiy et al., 2015; Mayer et al., 2016). We experiment with FlowNet 2.0 (Ilg et al., 2017) and the more recent RAFT (Teed & Deng, 2020) trained on those two datasets. To obtain background masks for training the background model, it is not necessary to differentiate between multiple object instances. We aim for a low rate of foreground pixels being mistaken for background pixels, while background pixels mistaken for foreground are of less concern. Hence, we use an ensemble of different background-foreground segmentation models from the bgslibrary (Sobral, 2013). Based on early experiments, we used PAWCS (St-Charles et al., 2016), LOBSTER (St-Charles & Bilodeau, 2014), Σ−∆ estimation (Manzanera & Richefeu, 2007) and static frame differences, and label every pixel detected as foreground by any of the methods as a foreground pixel. We found that this rather simple model can faithfully remove the foreground objects in most cases. We provide additional details in the appendix.
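The ensemble rule described above (a pixel counts as foreground if any method in the ensemble flags it, biasing toward clean background estimates) reduces to a pixelwise union. The sketch below is illustrative only and does not use the actual bgslibrary API; mask representation and function names are assumptions.

```python
def ensemble_foreground(masks):
    """Combine binary foreground masks (1 = foreground) by pixelwise union."""
    h, w = len(masks[0]), len(masks[0][0])
    return [[1 if any(m[u][v] for m in masks) else 0 for v in range(w)]
            for u in range(h)]

# Two toy 2x2 masks from different segmentation methods:
m1 = [[0, 1], [0, 0]]
m2 = [[0, 0], [1, 0]]
print(ensemble_foreground([m1, m2]))  # [[0, 1], [1, 0]]
```

Taking the union rather than a majority vote matches the stated asymmetry: it is cheap to discard true background pixels, but costly to let foreground leak into the background training data.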
STAGE 2A: OBJECT MODEL—LEARNING TO GENERATE UNOCCLUDED, MASKED OBJECTS. Object extraction. We use the bounding boxes of the candidate segmentation to extract object crops from the original videos and rescale them to a common size of 128×64px. We filter out degenerate masks by ignoring all masks with an area smaller than 64 pixels and only considering bounding boxes with a minimum distance of 16px to the frame boundary. Accordingly, we extract the candidate segmentation masks m_0, ..., m_K for each crop. For notational convenience, we take m_0 and m_1 to correspond to the background and the object of interest (i.e., the object whose bounding box was used to create the crop), respectively, so that the m_k with k ≥ 2 correspond to the masks of other objects. Task. We use the segmented object crops for training a β-VAE-based generative object model (Higgins et al., 2017). Input to the model is the object crop without the segmentation; output is the reconstructed object appearance including the binary object mask. We train the model with the standard β-VAE loss with an adapted reconstruction term including both the appearance and the mask. For an input batch, let c and m_{0:K} be the ground-truth crops with candidate segmentations, and ĉ and m̂ the reconstructed object appearances (RGB values for each pixel) and shapes (foreground probability for each pixel). The reconstruction loss L_R for these objects is then the weighted sum of the pixel-wise MSE for the appearance and the pixel-wise binary cross-entropy for the mask:

$$L_{R,\text{appear}} = \sum_i \frac{\sum_{u,v} m_1^{(i)}(u,v)\,\big\lVert c^{(i)}(u,v) - \hat{c}^{(i)}(u,v)\big\rVert_2^2}{\sum_{u,v} m_1^{(i)}(u,v)},$$

$$L_{R,\text{mask}} = \sum_i \frac{\sum_{u,v} \big[m_0^{(i)} + m_1^{(i)}\big](u,v)\cdot \mathrm{BCE}\big[m_1^{(i)}(u,v),\, \hat{m}^{(i)}(u,v)\big]}{\sum_{u,v} \big[m_0^{(i)} + m_1^{(i)}\big](u,v)}.$$
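The two loss terms above can be read as masked averages: appearance error is averaged over the object mask m1, and mask cross-entropy over the union m0 + m1, so pixels assigned to other objects contribute to neither term. A minimal pure-Python reading for a single single-channel crop (illustrative only, not the authors' implementation):

```python
import math

def masked_losses(c, c_hat, m_hat, m0, m1, eps=1e-7):
    """Per-crop appearance (masked MSE) and mask (masked BCE) losses."""
    se = n_app = bce = n_mask = 0.0
    for u in range(len(c)):
        for v in range(len(c[0])):
            if m1[u][v]:                      # appearance: object pixels only
                se += (c[u][v] - c_hat[u][v]) ** 2
                n_app += 1
            if m0[u][v] or m1[u][v]:          # mask: object and background pixels
                p = min(max(m_hat[u][v], eps), 1.0 - eps)
                t = m1[u][v]
                bce += -(t * math.log(p) + (1 - t) * math.log(1.0 - p))
                n_mask += 1
    return se / max(n_app, 1.0), bce / max(n_mask, 1.0)

# 1x2 crop: one object pixel (error 0.5), one background pixel.
la, lm = masked_losses([[1.0, 0.0]], [[0.5, 0.9]], [[0.9, 0.1]], [[0, 1]], [[1, 0]])
print(round(la, 4), round(lm, 4))  # 0.25 0.1054
```

A pixel covered by some other object's mask (m_k with k ≥ 2) would fall through both branches, which is exactly why the model is not penalized for hallucinating the occluded parts of the central object.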
As the task for the object model is to only represent the central object in each crop, we restrict the appearance loss to the candidate mask of the object (m_1) and the mask loss to the union of the candidate masks of the object and the background (m_0 + m_1). Importantly, the reconstruction loss is not evaluated for pixels belonging to other objects according to the candidate masks. Therefore, the object model is not penalized for completing object parts that are occluded by another object. Learning object completion via artificial occlusions. To encourage the model to correctly complete partial objects, we use artificial occlusions as an augmentation during training. Similar to a denoising autoencoder (Vincent et al., 2008), we compute the reconstruction loss using the unaugmented object crop. We consider two types of artificial occlusions: first, we use a cutout augmentation (DeVries & Taylor, 2017), placing a variable number of grey rectangles on the input image. As an alternative, we use the candidate segmentation to place another, randomly shifted object from the same input batch onto each crop. Model. We use a β-VAE with 128 latent dimensions. The encoder is a ten-layer CNN; the appearance decoder is a corresponding CNN using transposed convolutions (Dumoulin & Visin, 2018) and one additional convolutional decoding layer. We use a second decoder with the same architecture but only a single output channel to decode the object masks. During each epoch, we use crops from two random frames of every object. We train our model for 60 epochs using Adam (Kingma & Ba, 2015) with a learning rate of 10−4, which we decrease by a factor of 10 after 40 epochs. We chose the optimal hyperparameters for this architecture using grid searches. More details regarding the model architecture and the hyperparameters are provided in the supplement. STAGE 2B: BACKGROUND MODEL—LEARNING TO GENERATE UNOCCLUDED BACKGROUNDS. Task.
We use an ensemble of the background extraction techniques outlined above to estimate background scenes for each frame. We train a β-VAE on these backgrounds using the appearance loss L_{R,appear} with the inferred background mask, without any additional cutout or object augmentation. Architecture. The β-VAE has the same architecture as the object model, but only uses a single decoder for the background appearance. We do not focus on a detailed reconstruction of background samples and limit the resolution to 96×64px. When sampling scenes, the outputs are upsampled to the original resolution of 480×320px using bilinear interpolation. STAGE 3: SCENE MODEL—LEARNING TO GENERATE COHERENT SCENES. In the final stage, we combine the object and background model into a scene model that allows sampling novel scenes. As the scene model can reuse the decoders from the previous stages, its main task is to model the parameters defining the scene composition, such as object counts, locations, and dependencies between the background and the object latents. Compared to an end-to-end approach, the complexity of the learning problem is greatly reduced in this setting. It is straightforward to generalize the scene model beyond the training distribution: e.g., it is easy to sample more objects than observed in the input scenes. We use a scene model following the causal graph depicted in Fig. 3: first, we sample a background latent z_bg which describes global properties of the scene such as its composition and illumination; z_bg is then decoded by the background model into a background image x_0. Conditioned on the background latent, we sequentially sample K tuples (z_k^app, z_k^pos, z_k^scale) of latents encoding appearance, position, and scale of object k, respectively; the number of objects K is sampled conditional on z_bg as well.
Each appearance latent z_k^app is decoded by the object model into a masked object o_k = (m_k, x_k), which is subsequently rescaled by z_k^scale and placed in the scene at position z_k^pos according to a dead-leaves model (i.e., occluding previously visible pixels at the same location). Due to the formulation of the model, we are flexible in specifying the conditional and prior distributions needed to generate samples. A particularly simple special case is to sample all latents (indicated as circles in Fig. 3) independently. This can be done using informed prior distributions, or by leveraging the training dataset. In the former case, we sample z_bg and all z_k^app from the standard normal prior of the β-VAE, but reject objects for which the binary entropy of the mask (averaged across all pixels) exceeds a threshold (for figures in the main paper, 100 bits). We found empirically that this entropy threshold can be used to trade diversity of samples for higher-quality samples (cf. supplement). For the coordinates, a uniform prior within the image yields reasonable samples, and scales can be sampled from a uniform distribution between 64×32px and 192×96px at a fixed 2:1 aspect ratio. Alternatively, all distributions can be fit based on values obtained from the motion segmentation (object and background latents, distribution of sizes, distribution of coordinates). We provide a detailed analysis in the supplement.
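Once the latents are sampled and decoded, the dead-leaves composition itself reduces to pasting objects onto the background in order, each one overwriting whatever was previously visible under its mask. A toy single-channel sketch (the data layout and names are illustrative assumptions, not the paper's code):

```python
def compose_scene(background, objects):
    """objects: list of (mask, appearance, top, left) tuples, pasted in order.
    Later objects occlude earlier ones, as in a dead-leaves model."""
    scene = [row[:] for row in background]  # copy; background stays intact
    for mask, appearance, top, left in objects:
        for u in range(len(mask)):
            for v in range(len(mask[0])):
                if mask[u][v]:
                    scene[top + u][left + v] = appearance[u][v]
    return scene

bg = [[0, 0, 0], [0, 0, 0]]
fish = ([[1, 1]], [[7, 7]], 0, 1)    # 1x2 object placed at row 0, column 1
print(compose_scene(bg, [fish]))      # [[0, 7, 7], [0, 0, 0]]
```

Because composition is just ordered overwriting, sampling more objects than ever seen in training (the out-of-distribution case mentioned above) requires no change to this step.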
This work proposes an object-centric generative model. It consists of motion segmentation, object model, background model, and scene model. The object model is trained to reconstruct an object as if it were not occluded. The authors introduce a new dataset, called Fishbowl, which provides modal and amodal segmentation masks of objects.
Equivariant Transformers for Neural Network based Molecular Potentials
1 INTRODUCTION. Quantum mechanics is essential for the computational analysis and design of molecules and materials. However, the complete solution of the Schrödinger equation is analytically and computationally impractical, which has motivated the study of approximations over the past decades (Szabo & Ostlund, 1996). A common quantum mechanical approximation method is to model atomic systems according to density functional theory (DFT), which can provide energy estimates with sufficiently high accuracy for different applications in biology, physics, chemistry, and materials science. Even more accurate techniques like coupled-cluster exist, but both still lack the computational efficiency to be applied on a larger scale, although recent advances are promising in the case of quantum Monte Carlo (Pfau et al., 2020; Hermann et al., 2020). Other methods include force-field and semiempirical quantum mechanical theories, which provide very efficient estimates but lack accuracy. The field of machine learning molecular potentials is relatively young. The first important contributions are rooted in the Behler-Parrinello (BP) representation (Behler & Parrinello, 2007) and the seminal work of Rupp et al. (2012). One of the best transferable machine learning potentials for biomolecules, called ANI (Smith et al., 2017a), is based on BP. A second class of methods, mainly developed in the fields of materials science and quantum chemistry, uses more modern graph convolutions (Schütt et al., 2018; Unke & Meuwly, 2019; Qiao et al., 2020; Schütt et al., 2021). SchNet (Schütt et al., 2017b; 2018), for example, uses continuous-filter convolutions in a graph network architecture to predict the energy of a system and computes forces by direct differentiation of the neural network with respect to atomic coordinates. Outside of its original use case, this approach has been extended to coupled-cluster solvers (Hermann et al.
, 2020) and protein folding using coarse-grained systems (Wang et al., 2019; Husic et al., 2020; Doerr et al., 2021). Recently, other work has shown that a shift towards rotationally equivariant networks (Anderson et al., 2019; Fuchs et al., 2020; Schütt et al., 2021), which are particularly useful when the predicted quantities are vectors and tensors, can also improve the accuracy on scalars (e.g., energy). Besides the parametric group of neural network based methods, a nonparametric class of approaches exists. These are usually based on kernel methods and are particularly used in materials science. In this work, we focus on parametric neural network potentials (NNPs) because they scale better to large amounts of data, while kernel methods usually work best in a scarce-data regime. Previous deep learning based work in the domain of quantum chemistry has focused largely on graph neural network (GNN) architectures with different levels of handcrafted and learned features (Schütt et al., 2017b; Qiao et al., 2020; Klicpera et al., 2020b; Unke & Meuwly, 2019; Liu et al., 2020; Schütt et al., 2021). For example, Qiao et al. (2020) first perform a low-cost mean-field electronic structure calculation, from which different quantities are used as input to their neural network. Recently proposed neural network architectures in this context usually include some form of attention (Luong et al., 2015) inside the GNN's message passing step (Qiao et al., 2020; Unke & Meuwly, 2019; Liu et al., 2020). In this work, we introduce TorchMD-NET, an equivariant Transformer (ET) architecture for the prediction of quantum mechanical properties. By building on top of the Transformer architecture (Vaswani et al., 2017), we center the design around the attention mechanism, achieving state-of-the-art accuracy on multiple benchmarks while relying solely on a learned featurization of atomic types and coordinates.
Furthermore, we gain insights into the black-box predictions of neural networks by analyzing the Transformer's attention weights and comparing latent representations between different types of data such as energy-minimized (QM9 (Ramakrishnan et al., 2014)), molecular dynamics (MD17 (Chmiela et al., 2017)) and normal mode sampled data (ANI-1 (Smith et al., 2017b)). 2 METHODS. The traditional Transformer architecture as proposed by Vaswani et al. (2017) operates on a sequence of tokens. In the context of chemistry, however, the natural data structure for the representation of molecules is a graph. To work on graphs, one can interpret self-attention as constructing a fully connected graph over input tokens and computing interactions between nodes. We leverage this concept and extend it to include information stored in the graph's edges, corresponding to interatomic distances in the context of molecular data. This requires a modified attention mechanism, which we introduce in the following sections, along with the overall architecture of our equivariant Transformer. The equivariant Transformer is made up of three main blocks. An embedding layer encodes atom types Z and the atomic neighborhood of each atom into a dense feature vector x_i. Then, a series of update layers compute interactions between pairs of atoms through a modified multi-head attention mechanism, with which the latent atomic representations are updated. Finally, a layer normalization (Ba et al., 2016) followed by an output network computes scalar atomwise predictions using gated equivariant blocks (Weiler et al., 2018; Schütt et al., 2021), which are aggregated into a single molecular prediction. This can be matched with a scalar target variable or differentiated against atomic coordinates, providing force predictions. An illustration of the architecture is given in Figure 1. 2.1 NOTATION.
To differentiate between the concepts of scalar and vector features, this work follows a certain notation. Scalar features are written as $x \in \mathbb{R}^F$, while we refer to vector features as $\vec{v} \in \mathbb{R}^{3\times F}$. The vector norm $\lVert\cdot\rVert$ and scalar product $\langle\cdot,\cdot\rangle$ of vector features are applied to the spatial dimension, while all other operations act on the feature dimension. Upper case letters denote matrices $A \in \mathbb{R}^{N\times M}$. 2.2 EMBEDDING LAYER. The embedding layer assigns two learned vectors to each atom type $z_i$. One is used to encode information specific to an atom, the other takes the role of a neighborhood embedding. The neighborhood embedding, which is an embedding of the types of neighboring atoms, is multiplied by a distance filter. This operation resembles a continuous-filter convolution (Schütt et al., 2017b) but, as it is used in the first layer, allows the model to store atomic information in two separate weight matrices. These can be thought of as containing information that is intrinsic to an atom versus information about the interaction of two atoms. The distance filter is generated from expanded interatomic distances using a linear transformation $W^F$. First, the distance $d_{ij}$ between two atoms $i$ and $j$ is expanded via a set of exponential normal radial basis functions $e^{\text{RBF}}$, defined as

$$e^{\text{RBF}}_k(d_{ij}) = \phi(d_{ij})\exp\!\big(-\beta_k(\exp(-d_{ij}) - \mu_k)^2\big) \quad (1)$$

where $\beta_k$ and $\mu_k$ are fixed parameters specifying the center and width of radial basis function $k$. The $\mu$ vector is initialized with values equally spaced between $\exp(-d_{\text{cut}})$ and 1; $\beta_k$ is initialized as $\big(2K^{-1}(1-\exp(-d_{\text{cut}}))\big)^{-2}$ for all $k$, as proposed by Unke & Meuwly (2019). The cutoff distance $d_{\text{cut}}$ was set to 5 Å. The cosine cutoff $\phi(d_{ij})$ is used to ensure a smooth transition to 0 as $d_{ij}$ approaches $d_{\text{cut}}$ in order to avoid jumps in the regression landscape. It is given by

$$\phi(d_{ij}) = \begin{cases} \frac{1}{2}\big(\cos\!\big(\frac{\pi d_{ij}}{d_{\text{cut}}}\big) + 1\big), & \text{if } d_{ij} \le d_{\text{cut}} \\ 0, & \text{if } d_{ij} > d_{\text{cut}}. \end{cases}$$
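The RBF expansion and cosine cutoff above translate directly into code. The sketch below follows the stated β, µ initialization and the d_cut = 5 Å default; the number of basis functions K is not fixed by the text, so K = 32 here is an arbitrary assumption, and none of this is the TorchMD-NET implementation.

```python
import math

def cosine_cutoff(d, d_cut=5.0):
    """Smooth cosine cutoff: 1 at d = 0, exactly 0 beyond d_cut."""
    return 0.5 * (math.cos(math.pi * d / d_cut) + 1.0) if d <= d_cut else 0.0

def exp_normal_rbf(d, K=32, d_cut=5.0):
    """Exponential normal RBF expansion of a distance d (in Angstrom)."""
    lo = math.exp(-d_cut)
    # centers equally spaced between exp(-d_cut) and 1
    mu = [lo + k * (1.0 - lo) / (K - 1) for k in range(K)]
    # beta = (2 K^-1 (1 - exp(-d_cut)))^-2, identical for every k
    beta = (2.0 / K * (1.0 - lo)) ** -2
    phi = cosine_cutoff(d, d_cut)
    return [phi * math.exp(-beta * (math.exp(-d) - m) ** 2) for m in mu]

print(len(exp_normal_rbf(2.0)), cosine_cutoff(6.0))  # 32 0.0
```

Because every basis function is multiplied by the cutoff, the entire feature vector vanishes for pairs beyond d_cut, which keeps the regression landscape free of jumps as atoms leave the neighborhood.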
(2) The neighborhood embedding $n_i$ for atom $i$ is then defined as

$$n_i = \sum_{j=1}^{N} \text{embed}_{\text{nbh}}(z_j)\, W^F e^{\text{RBF}}(d_{ij}) \quad (3)$$

with $\text{embed}_{\text{nbh}}$ being the neighborhood embedding function and $N$ the number of atoms in the graph. The final atomic embedding $x_i$ is calculated as a linear projection of the concatenated intrinsic embedding and neighborhood embedding $[\text{embed}_{\text{int}}(z_i), n_i]$, resulting in

$$x_i = W^C[\text{embed}_{\text{int}}(z_i), n_i] + b^C \quad (4)$$

with $\text{embed}_{\text{int}}$ being the intrinsic embedding function. The vector features $\vec{v}_i$ are initially set to 0. 2.3 MODIFIED ATTENTION MECHANISM. We use a modified multi-head attention mechanism (Figure 1c), extending dot-product attention, in order to include edge data in the calculation of attention weights. First, the feature vectors are passed through a layer normalization. Then, edge data, i.e., interatomic distances $r_{ij}$, are projected into two multidimensional filters $D^K$ and $D^V$, according to

$$D^K = \sigma\big(W^{D^K} e^{\text{RBF}}(r_{ij}) + b^{D^K}\big), \qquad D^V = \sigma\big(W^{D^V} e^{\text{RBF}}(r_{ij}) + b^{D^V}\big) \quad (5)$$

The attention weights are computed via an extended dot product, i.e., an elementwise multiplication and subsequent sum over the feature dimension, of the three input vectors: query $Q$, key $K$ and distance projection $D^K$:

$$Q = W^Q x_i \quad \text{and} \quad K = W^K x_i \quad (6)$$

$$\text{dot}(Q, K, D^K) = \sum_k^{F} Q_k \cdot K_k \cdot D^K_k \quad (7)$$

The resulting matrix is passed through a nonlinear activation function and is weighted by a cosine cutoff $\phi$ (see equation 2), ensuring that atoms with a distance larger than $d_{\text{cut}}$ do not interact:

$$A = \text{SiLU}\big(\text{dot}(Q, K, D^K)\big) \cdot \phi(d_{ij}) \quad (8)$$

Traditionally, the resulting attention matrix $A$ is passed through a softmax activation; however, we replace this step with a SiLU function to preserve the distance cutoff. The softmax scaling factor of $\sqrt{d_k}^{-1}$, which normally rescales small gradients from the softmax function, is left out. Work by Choromanski et al.
(2021) suggests that replacing the softmax activation function in Transformers with ReLU-like functions might even improve accuracy, supporting the idea of switching to SiLU in this case. We place a continuous-filter graph convolution (Schütt et al., 2017b) in the attention mechanism's value pathway. This enables the model to not only consider interatomic distances in the attention weights but also incorporate this information into the feature vectors directly. The resulting representation is split into three equally sized vectors $s^1_{ij}, s^2_{ij}, s^3_{ij} \in \mathbb{R}^F$. The vector $s^3_{ij}$ is scaled by the attention matrix $A$ and aggregated over the value dimension, leading to an updated list of feature vectors. The linear transformation $O$ is used to combine the attention heads' outputs into a single feature vector $y_i \in \mathbb{R}^{384}$:

$$s^1_{ij}, s^2_{ij}, s^3_{ij} = \text{split}\big(V_j D^V_{ij}\big), \qquad y_i = O\Big(\sum_j^{N} A_{ij} \cdot s^3_{ij}\Big) \quad (9)$$

The attention mechanism's output, therefore, corresponds to the updated scalar feature vectors $y_i$ and scalar filters $s^1_{ij}$ and $s^2_{ij}$, which are used to weight the directional information inside the update layer.
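Reading equations (5)-(9) together, the per-pair computation can be sketched in plain Python for a single head, with the learned projections (W^Q, W^K, the distance filters, V, O) taken as given inputs. All names are illustrative assumptions, not the TorchMD-NET code:

```python
import math

def silu(x):
    return x / (1.0 + math.exp(-x))

def cosine_cutoff(d, d_cut=5.0):
    return 0.5 * (math.cos(math.pi * d / d_cut) + 1.0) if d <= d_cut else 0.0

def attention_weight(q, k, d_filter, d_ij, d_cut=5.0):
    """Eqs. (7)-(8): elementwise triple product summed over features,
    SiLU-activated and damped by the cosine cutoff (no softmax)."""
    dot = sum(qf * kf * df for qf, kf, df in zip(q, k, d_filter))
    return silu(dot) * cosine_cutoff(d_ij, d_cut)

def split3(vec):
    """Eq. (9): split a filtered value vector into (s1, s2, s3)."""
    f = len(vec) // 3
    return vec[:f], vec[f:2 * f], vec[2 * f:]

def aggregate_values(filtered_values, attn_weights):
    """Eq. (9): attention-weighted sum of each neighbor's s3 chunk."""
    s3 = [split3(v)[2] for v in filtered_values]
    return [sum(a * s[i] for a, s in zip(attn_weights, s3))
            for i in range(len(s3[0]))]

# Beyond the cutoff the weight is exactly zero, so distant atoms cannot interact:
print(attention_weight([1.0], [1.0], [1.0], d_ij=6.0))  # 0.0
```

The hard zero beyond d_cut is the reason softmax is dropped: a softmax would renormalize the weights and let cut-off pairs regain influence, whereas SiLU leaves them exactly at zero.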
The authors introduce a novel architecture for ML force fields, the Equivariant transformer (ET). It is based on the Transformer approach and can be used to predict energies (and forces) and other molecular properties (e.g., QM targets). The performance on standard benchmarks such as QM9 and MD17 is impressive. The authors inspect the attention weights.
Equivariant Transformers for Neural Network based Molecular Potentials
1 INTRODUCTION . Quantum mechanics are essential for the computational analysis and design of molecules and materials . However , the complete solution of the Schrödinger equation is analytically and computationally not practical , which initiated the study of approximations in the past decades ( Szabo & Ostlund , 1996 ) . A common quantum mechanics approximation method is to model atomic systems according to density functional theory ( DFT ) , which can provide energy estimates with sufficiently high accuracy for different application cases in biology , physics , chemistry , and materials science . Even more accurate techniques like coupled-cluster exist but both still lack the computational efficiency to be applied on a larger scale , although recent advances are promising in the case of quantum Monte Carlo ( Pfau et al. , 2020 ; Hermann et al. , 2020 ) . Other methods include force-field and semiempirical quantum mechanical theories , which provide very efficient estimates but lack accuracy . The field of machine learning molecular potentials is relatively novel . The first important contributions are rooted in the Behler-Parrinello ( BP ) representation ( Behler & Parrinello , 2007 ) and the seminal work from Rupp et al . ( 2012 ) . One of the best transferable machine learning potentials for biomolecules , called ANI ( Smith et al. , 2017a ) , is based on BP . A second class of methods , mainly developed in the field of materials science and quantum chemistry , uses more modern graph convolutions ( Schütt et al. , 2018 ; Unke & Meuwly , 2019 ; Qiao et al. , 2020 ; Schütt et al. , 2021 ) . SchNet ( Schütt et al. , 2017b ; 2018 ) , for example , uses continuous filter convolutions in a graph network architecture to predict the energy of a system and computes forces by direct differentiation of the neural network against atomic coordinates . Outside of its original use case , this approach has been extended to coupled-cluster solvers ( Hermann et al. 
, 2020 ) and protein folding using coarse-grained systems ( Wang et al. , 2019 ; Husic et al. , 2020 ; Doerr et al. , 2021 ) . Recently , other work has shown that a shift towards rotationally equivariant networks ( Anderson et al. , 2019 ; Fuchs et al. , 2020 ; Schütt et al. , 2021 ) , particularly useful when the predicted quantities are vectors and tensors , can also improve the accuracy on scalars ( e.g . energy ) . Next to the parametric group of neural network based methods , a nonparametric class of approaches exists . These are usually based on kernel methods , particularly used in materials science . In this work , we will focus on parametric neural network potentials ( NNPs ) because they have a scaling advantage to large amounts of data , while kernel methods usually work best in a scarce data regime . Previous deep learning based work in the domain of quantum chemistry focused largely on graph neural network architectures ( GNNs ) with different levels of handcrafted and learned features ( Schütt et al. , 2017b ; Qiao et al. , 2020 ; Klicpera et al. , 2020b ; Unke & Meuwly , 2019 ; Liu et al. , 2020 ; Schütt et al. , 2021 ) . For example , Qiao et al . ( 2020 ) first perform a low-cost mean-field electronic structure calculation , from which different quantities are used as input to their neural network . Recently proposed neural network architectures in this context usually include some form of attention ( Luong et al. , 2015 ) inside the GNN ’ s message passing step ( Qiao et al. , 2020 ; Unke & Meuwly , 2019 ; Liu et al. , 2020 ) . In this work , we introduce TorchMD-NET , an equivariant Transformer ( ET ) architecture for the prediction of quantum mechanical properties . By building on top of the Transformer ( Vaswani et al. , 2017 ) architecture , we are centering the design around the attention mechanism , achieving state-of-the-art accuracy on multiple benchmarks while relying solely on a learned featurization of atomic types and coordinates . 
Furthermore , we gain insights into the black box prediction of neural networks by analyzing the Transformer ’ s attention weights and comparing latent representations between different types of data such as energy-minimized ( QM9 ( Ramakrishnan et al. , 2014 ) ) , molecular dynamics ( MD17 ( Chmiela et al. , 2017 ) and normal mode sampled data ( ANI-1 ( Smith et al. , 2017b ) ) . 2 METHODS . The traditional Transformer architecture as proposed by Vaswani et al . ( 2017 ) operates on a sequence of tokens . In the context of chemistry , however , the natural data structure for the representation of molecules is a graph . To work on graphs , one can interpret self-attention as constructing a fully connected graph over input tokens and computing interactions between nodes . We leverage this concept and extend it to include information stored in the graph ’ s edges , corresponding to interatomic distances in the context of molecular data . This requires the use of a modified attention mechanism , which we introduce in the following sections , along with the overall architecture of our equivariant Transformer . The equivariant Transformer is made up of three main blocks . An embedding layer encodes atom types Z and the atomic neighborhood of each atom into a dense feature vector xi . Then , a series of update layers compute interactions between pairs of atoms through a modified multi-head attention mechanism , with which the latent atomic representations are updated . Finally , a layer normalization ( Ba et al. , 2016 ) followed by an output network computes scalar atomwise predictions using gated equivariant blocks ( Weiler et al. , 2018 ; Schütt et al. , 2021 ) , which get aggregated into a single molecular prediction . This can be matched with a scalar target variable or differentiated against atomic coordinates , providing force predictions . An illustration of the architecture is given in Figure 1 . 2.1 NOTATION . 
To differentiate between the concepts of scalar and vector features, this work follows a fixed notation. Scalar features are written as x ∈ R^F, while vector features are written as ~v ∈ R^{3×F}. The vector norm ‖·‖ and scalar product 〈·, ·〉 of vector features are applied to the spatial dimension, while all other operations act on the feature dimension. Upper-case letters denote matrices A ∈ R^{N×M}.

2.2 EMBEDDING LAYER

The embedding layer assigns two learned vectors to each atom type z_i. One is used to encode information specific to an atom; the other takes the role of a neighborhood embedding. The neighborhood embedding, which is an embedding of the types of neighboring atoms, is multiplied by a distance filter. This operation resembles a continuous-filter convolution (Schütt et al., 2017b) but, as it is used in the first layer, allows the model to store atomic information in two separate weight matrices. These can be thought of as containing information that is intrinsic to an atom versus information about the interaction of two atoms.

The distance filter is generated from expanded interatomic distances using a linear transformation W^F. First, the distance d_ij between two atoms i and j is expanded via a set of exponential normal radial basis functions e^RBF, defined as

    e^RBF_k(d_ij) = φ(d_ij) · exp(−β_k (exp(−d_ij) − µ_k)²)    (1)

where β_k and µ_k are fixed parameters specifying the center and width of radial basis function k. The µ vector is initialized with values equally spaced between exp(−d_cut) and 1; β_k is initialized as (2K⁻¹(1 − exp(−d_cut)))⁻² for all k, as proposed by Unke & Meuwly (2019). The cutoff distance d_cut was set to 5 Å. The cosine cutoff φ(d_ij) is used to ensure a smooth transition to 0 as d_ij approaches d_cut, in order to avoid jumps in the regression landscape. It is given by

    φ(d_ij) = (1/2)(cos(π d_ij / d_cut) + 1) if d_ij ≤ d_cut, and 0 if d_ij > d_cut.    (2)

The neighborhood embedding n_i for atom i is then defined as

    n_i = Σ_{j=1}^{N} embed_nbh(z_j) · W^F e^RBF(d_ij)    (3)

with embed_nbh being the neighborhood embedding function and N the number of atoms in the graph. The final atomic embedding x_i is calculated as a linear projection of the concatenated intrinsic embedding and neighborhood embedding [embed_int(z_i), n_i], resulting in

    x_i = W^C [embed_int(z_i), n_i] + b^C    (4)

with embed_int being the intrinsic embedding function. The vector features ~v_i are initially set to 0.

2.3 MODIFIED ATTENTION MECHANISM

We use a modified multi-head attention mechanism (Figure 1c), extending dot-product attention, in order to include edge data in the calculation of attention weights. First, the feature vectors are passed through a layer normalization. Then, edge data, i.e. interatomic distances r_ij, are projected into two multidimensional filters D^K and D^V, according to

    D^K = σ(W^{D^K} e^RBF(r_ij) + b^{D^K}),    D^V = σ(W^{D^V} e^RBF(r_ij) + b^{D^V})    (5)

The attention weights are computed via an extended dot product, i.e. an elementwise multiplication and subsequent sum over the feature dimension, of the three input vectors: query Q, key K and distance projection D^K:

    Q = W^Q x_i  and  K = W^K x_i    (6)

    dot(Q, K, D^K) = Σ_{k=1}^{F} Q_k · K_k · D^K_k    (7)

The resulting matrix is passed through a nonlinear activation function and is weighted by a cosine cutoff φ (see equation 2), ensuring that atoms with a distance larger than d_cut do not interact:

    A = SiLU(dot(Q, K, D^K)) · φ(d_ij)    (8)

Traditionally, the resulting attention matrix A is passed through a softmax activation; however, we replace this step with a SiLU function to preserve the distance cutoff. The softmax scaling factor of 1/√d_k, which normally rescales small gradients from the softmax function, is left out. Work by Choromanski et al.
(2021) suggests that replacing the softmax activation function in Transformers with ReLU-like functions might even improve accuracy, supporting the choice of SiLU in this case.

We place a continuous-filter graph convolution (Schütt et al., 2017b) in the attention mechanism's value pathway. This enables the model not only to consider interatomic distances in the attention weights but also to incorporate this information into the feature vectors directly. The resulting representation is split into three equally sized vectors s¹_ij, s²_ij, s³_ij ∈ R^F. The vector s³_ij is scaled by the attention matrix A and aggregated over the value dimension, leading to an updated list of feature vectors. The linear transformation O is used to combine the attention heads' outputs into a single feature vector y_i ∈ R^384:

    s¹_ij, s²_ij, s³_ij = split(V_j · D^V_ij),    y_i = O Σ_{j=1}^{N} A_ij · s³_ij    (9)

The attention mechanism's output, therefore, consists of the updated scalar feature vectors y_i and the scalar filters s¹_ij and s²_ij, which are used to weight the directional information inside the update layer.
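The radial basis expansion, cosine cutoff and attention-weight computation of equations 1, 2, 7 and 8 can be sketched in a few lines of plain Python. The number of basis functions K and the toy query/key/filter vectors below are illustrative assumptions; the learned projections of equations 5 and 6 are replaced by fixed inputs rather than the authors' trained weights.

```python
import math

D_CUT = 5.0  # cutoff distance in Angstrom (the value used in the paper)

def cosine_cutoff(d):
    # eq. 2: smooth decay to zero at d_cut
    if d > D_CUT:
        return 0.0
    return 0.5 * (math.cos(math.pi * d / D_CUT) + 1.0)

def erbf(d, K=16):
    # eq. 1: exponential normal radial basis functions; mu equally spaced
    # between exp(-d_cut) and 1, beta = (2 K^-1 (1 - exp(-d_cut)))^-2
    lo = math.exp(-D_CUT)
    mus = [lo + k * (1.0 - lo) / (K - 1) for k in range(K)]
    beta = (2.0 / K * (1.0 - lo)) ** -2
    return [cosine_cutoff(d) * math.exp(-beta * (math.exp(-d) - mu) ** 2)
            for mu in mus]

def silu(x):
    return x / (1.0 + math.exp(-x))

def attention_weight(q, k, dk, d):
    # eqs. 7-8: extended dot product over the feature dimension,
    # SiLU activation (replacing softmax) weighted by the cosine cutoff
    dot = sum(qi * ki * di for qi, ki, di in zip(q, k, dk))
    return silu(dot) * cosine_cutoff(d)

# atoms beyond the cutoff never interact
w_far = attention_weight([1.0, 2.0], [0.5, 1.0], [1.0, 1.0], d=6.0)  # 0.0
```

Because the cutoff multiplies the activated dot product, pairs with d_ij > d_cut contribute exactly zero attention, which is why SiLU is used instead of softmax (a softmax would redistribute weight onto them).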
The paper presents an equivariant Transformer model for predicting quantum mechanical properties from an atomic graph. The model obtains SOTA or near-SOTA results on three popular datasets while maintaining good computational efficiency. The primary novelty in their method is a new way to compute the attention score using edge features. The paper also presents a detailed analysis of the attention weights, which gives insight into what the model is attending over. This is interesting from a chemistry perspective.
SP:7ba40525c4aeb9f391027539b4019374231ddfdb
Equivariant Transformers for Neural Network based Molecular Potentials
1 INTRODUCTION

Quantum mechanics is essential for the computational analysis and design of molecules and materials. However, the complete solution of the Schrödinger equation is analytically and computationally impractical, which has motivated the study of approximations over the past decades (Szabo & Ostlund, 1996). A common quantum mechanical approximation method is to model atomic systems according to density functional theory (DFT), which can provide energy estimates with sufficiently high accuracy for different application cases in biology, physics, chemistry, and materials science. Even more accurate techniques like coupled-cluster exist, but both DFT and coupled-cluster still lack the computational efficiency to be applied at larger scale, although recent advances are promising in the case of quantum Monte Carlo (Pfau et al., 2020; Hermann et al., 2020). Other methods include force-field and semiempirical quantum mechanical theories, which provide very efficient estimates but lack accuracy.

The field of machine learning molecular potentials is relatively young. The first important contributions are rooted in the Behler-Parrinello (BP) representation (Behler & Parrinello, 2007) and the seminal work of Rupp et al. (2012). One of the best transferable machine learning potentials for biomolecules, called ANI (Smith et al., 2017a), is based on BP. A second class of methods, mainly developed in the fields of materials science and quantum chemistry, uses more modern graph convolutions (Schütt et al., 2018; Unke & Meuwly, 2019; Qiao et al., 2020; Schütt et al., 2021). SchNet (Schütt et al., 2017b; 2018), for example, uses continuous-filter convolutions in a graph network architecture to predict the energy of a system and computes forces by direct differentiation of the neural network against atomic coordinates. Outside of its original use case, this approach has been extended to coupled-cluster solvers (Hermann et al.
, 2020) and protein folding using coarse-grained systems (Wang et al., 2019; Husic et al., 2020; Doerr et al., 2021). Recently, other work has shown that a shift towards rotationally equivariant networks (Anderson et al., 2019; Fuchs et al., 2020; Schütt et al., 2021), particularly useful when the predicted quantities are vectors and tensors, can also improve the accuracy on scalars (e.g. energy).

Next to the parametric group of neural network based methods, a nonparametric class of approaches exists. These are usually based on kernel methods and are particularly used in materials science. In this work, we will focus on parametric neural network potentials (NNPs) because they scale better to large amounts of data, while kernel methods usually work best in a scarce-data regime. Previous deep learning based work in the domain of quantum chemistry has focused largely on graph neural network architectures (GNNs) with different levels of handcrafted and learned features (Schütt et al., 2017b; Qiao et al., 2020; Klicpera et al., 2020b; Unke & Meuwly, 2019; Liu et al., 2020; Schütt et al., 2021). For example, Qiao et al. (2020) first perform a low-cost mean-field electronic structure calculation, from which different quantities are used as input to their neural network. Recently proposed neural network architectures in this context usually include some form of attention (Luong et al., 2015) inside the GNN's message passing step (Qiao et al., 2020; Unke & Meuwly, 2019; Liu et al., 2020).

In this work, we introduce TorchMD-NET, an equivariant Transformer (ET) architecture for the prediction of quantum mechanical properties. By building on top of the Transformer architecture (Vaswani et al., 2017), we center the design around the attention mechanism, achieving state-of-the-art accuracy on multiple benchmarks while relying solely on a learned featurization of atomic types and coordinates.
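The force-from-energy idea mentioned above (a scalar energy prediction differentiated against atomic coordinates) can be illustrated with a toy stand-in for the network. The quadratic pair potential below is a hypothetical example, not the authors' model, and central finite differences stand in for the automatic differentiation used in practice.

```python
# Toy illustration of F = -dE/dr from a scalar energy prediction.
# The pairwise energy is a made-up stand-in for a neural network output.

def energy(pos):
    # scalar "molecular prediction": sum of squared pairwise distances
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            e += sum((pos[i][k] - pos[j][k]) ** 2 for k in range(3))
    return e

def forces(pos, h=1e-5):
    # central finite differences of the energy w.r.t. each coordinate;
    # a real NNP would use autograd instead
    out = []
    for i in range(len(pos)):
        fi = []
        for k in range(3):
            plus = [list(p) for p in pos]
            minus = [list(p) for p in pos]
            plus[i][k] += h
            minus[i][k] -= h
            fi.append(-(energy(plus) - energy(minus)) / (2 * h))
        out.append(fi)
    return out

# two atoms one unit apart along x: equal and opposite forces
f = forces([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # f[0][0] ≈ 2.0, f[1][0] ≈ -2.0
```

For this quadratic toy energy the analytic force on atom i is −2 Σ_j (r_i − r_j), which the finite-difference result reproduces; the same differentiation applied to a trained network yields the force predictions discussed in the text.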
This paper proposes equivariant Transformers, a neural network based algorithm to predict properties of molecules. The architecture is built upon the traditional Transformer architecture, combined with modifications specific to molecular property prediction tasks, such as exponential normal radial basis functions, the SiLU activation function and the design of the update layers. It is shown that the proposed model has good performance on the QM9, MD17 and ANI-1 datasets. Ablation studies analyze the attention weights and give some insight into how the model works.
Distributed Skellam Mechanism: a Novel Approach to Federated Learning with Differential Privacy
1 INTRODUCTION

Deep neural networks, especially large-scale ones such as GPT-3 (Brown et al., 2020), are known for their excellent memorization capabilities (Song et al., 2017; Feldman, 2020; Zhang et al., 2021). However, it is rather difficult to control what exactly the neural net memorizes, and unintended data memorization can be a serious concern when the underlying training data contains sensitive information (Carlini et al., 2019). For instance, consider a bank that trains a GPT-like language model on call center transcripts. Due to data memorization, it is possible to extract sensitive information by letting the model auto-complete a prefix, e.g., "my account number is:". Clearly, if such a model (or its API) is ever exposed to an adversary, it becomes a litigation machine, as attackers can attempt various prefixes to extract sensitive data and subsequently sue the bank for privacy violations. Shokri et al. (2017) report that simple and intuitive measures often fail to provide sufficient protection, and the only way found to completely address the issue is to train the model with the rigorous guarantees of differential privacy (DP) (Dwork et al., 2006).

This paper focuses on the scenario in which multiple individual participants jointly train a machine learning model using federated learning (FL) (McMahan et al., 2017) through distributed stochastic gradient descent (SGD) (McDonald et al., 2010; Dean et al., 2012; Coates et al., 2013; Abadi et al., 2016a). Specifically, in every iteration, each individual computes the gradients with respect to the current model weights based on her own data; then, gradients from all participants are aggregated to update the model. Note that the gradients from each individual may reveal sensitive information about her private dataset (Shokri et al., 2017; Pyrgelis et al., 2018; Yeom et al., 2018; Nasr et al., 2019; Melis et al., 2019).
A common approach to addressing this problem is to employ a secure multiparty computation (MPC) protocol (Yao, 1986; Chaum et al., 1987; Gennaro et al., 2002; Ishai et al., 2010; Beimel et al., 2014; Cramer et al., 2015; Ananth et al., 2018), which computes the aggregate gradients while preserving the confidentiality of the gradients from each individual participant. One advantage of MPC is that it is a decentralized approach that does not require a trusted third party, which can be difficult to establish in some applications, e.g., in finance and healthcare. Note that although MPC protects individuals' privacy in the gradient update process by concealing the gradient values of each participant, it does not provide any protection against data extraction attacks caused by unintended data memorization (Dwork et al., 2015; Song & Shmatikov, 2019; Melis et al., 2019; Song & Shmatikov, 2020). As mentioned earlier, an effective methodology to defend against such attacks is to perturb the gradients to satisfy differential privacy (Shokri et al., 2017). Since there is no trusted third party in our setting, such gradient perturbations need to be done in a decentralized fashion, i.e., each FL participant adds noise to her own gradients, such that the aggregated gradients over all participants satisfy DP; this is referred to as distributed differential privacy (Goryczka et al., 2013; Kairouz et al., 2021).

Although gradient perturbation under DP has been studied in previous work (notably, DPSGD (Abadi et al., 2016b)), it is far from trivial to adapt centralized DP solutions to our setting, due to a fundamental problem: the MPC protocol requires gradients to be represented as integers (more precisely, finite field elements (Paillier, 1999; Bonawitz et al., 2017; Bell et al., 2020)). DPSGD, on the other hand, injects real-valued Gaussian noise into the gradients.
Although real numbers can be quantized and (approximately) represented using large integers, the quantized random noise has rather different mathematical properties, which renders a tight privacy cost analysis much more difficult, especially in the decentralized setting of FL. For instance, a nice property of the continuous Gaussian distribution is that summing up n continuous noise values following i.i.d. unit-variance Gaussian distributions results in an amplified continuous Gaussian noise of variance n. This property does not hold, however, if the Gaussian noise values are quantized before being aggregated. Further, the privacy analysis (specifically, the moment accountant analysis technique) of the DPSGD algorithm also relies on other important properties of the continuous Gaussian distribution, which do not hold when the noise is quantized. Hence, DPSGD does not directly apply to our setting. This issue has been neglected by many existing distributed DP solutions, e.g., (Goryczka et al., 2013; Valovich & Aldà, 2017; Truex et al., 2019).

Existing Solutions. Agarwal et al. (2018) propose cpSGD, which injects binomial noise (i.e., the sum of multiple binary values drawn from independent Bernoulli trials) into the discretized gradients at each participant of FL, to satisfy DP. Similar to Gaussian noise in the continuous domain, binomial noise can also be aggregated, i.e., the sum of multiple i.i.d. binomial noise values also follows a binomial distribution. However, compared to the continuous Gaussian distribution, existing theoretical tools for analyzing binomial noise aggregation lead to rather loose bounds; further, the binomial distribution is incompatible with the moment accountant analysis technique in DPSGD (Kairouz et al., 2021). Consequently, cpSGD leads to poor utility, as demonstrated in Section 4. Recently, the distributed discrete Gaussian mechanism (DDG) (Kairouz et al.
, 2021) addresses the above issues by injecting independent discrete Gaussian noise (Canonne et al., 2020) into the gradients at each participant. Similar to the binomial distribution, the discrete Gaussian distribution is also defined over an integer domain; meanwhile, DDG is fully compatible with the moment accountant analysis technique in DPSGD and thus enjoys a tight privacy cost analysis. However, the discrete Gaussian distribution is not aggregatable, meaning that the sum of noise drawn from multiple i.i.d. discrete Gaussian distributions does not follow another discrete Gaussian distribution, which renders analysis difficult in the decentralized setting of FL and leads to looser bounds in the privacy analysis. Further, the privacy guarantee of the aggregated noise in DDG degrades linearly with the dimensionality d of the gradients, leading to poor scalability to large neural networks.

Our contribution. In this work, we propose a new mechanism for enforcing distributed differential privacy in federated learning: the distributed Skellam mechanism (DSM), which injects random noise drawn from the symmetric Skellam distribution. Although the Skellam distribution has been used before in the DP literature (Valovich & Aldà, 2017), the privacy analysis therein does not cover the decentralized setting of FL, or the iterative, sampling-based SGD algorithm, both of which require highly non-trivial mathematical analysis. Specifically, we prove that DSM satisfies both Rényi-DP and (ε, δ)-DP, defined in Section 2. Similar to our competitor DDG described above, DSM is compatible with the DPSGD framework and its moment accountant analysis technique, leading to tight bounds in the privacy loss analysis. Meanwhile, unlike DDG, the privacy guarantees of DSM are independent of the dimensionality of the gradients, so DSM scales well to large models.
Further, similar to the continuous Gaussian distribution (and unlike the discrete Gaussian distribution in DDG), i.i.d. Skellam noise values can be aggregated to form an amplified noise that still follows the Skellam distribution, which leads to clean and elegant proofs in the decentralized setting of FL, and tight bounds in the privacy analysis. We apply DSM to federated learning with distributed SGD, with quantized gradients, e.g., as required by the MPC protocol, and present the complete training algorithm. Extensive experiments using benchmark datasets show that our solution leads to consistent and significant utility gains over its competitors, under a variety of settings with different privacy and communication constraints.

2 PRELIMINARIES

A random variable Y follows a Poisson distribution with parameter λ if its probability distribution is

    Pr[Y = k] = exp(−λ) λ^k / k!,    k = 0, 1, 2, . . .

Both the mean and the variance of Y are λ. A random variable Z follows a Skellam distribution if it is the difference between two independent Poisson variables Y_1 and Y_2. In this work, we restrict our attention to the case where Y_1 and Y_2 have the same parameter λ. In that case, the probability distribution of Z is

    Pr[Z = k] = exp(−2λ) I_{|k|}(2λ),    k = 0, ±1, ±2, . . . ,    (1)

where I_v(u) := Σ_{h=0}^{∞} 1/(h! Γ(h + v + 1)) · (u/2)^{2h+v} is the modified Bessel function of the first kind. We write Z ∼ Sk(λ, λ). By linearity of expectation, Z has mean 0 and variance 2λ.

We say that two datasets X and X′ are neighboring if one can be obtained by adding or removing one tuple from the other. The main idea of differential privacy (DP) is to ensure that the outcomes of a randomized mechanism on neighboring datasets are always similar; intuitively, this provides plausible deniability on whether a given data record x belongs to the dataset X or not, and thus protects the privacy of the individual whose record is x.
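The aggregation property quoted above (the sum of n i.i.d. Sk(λ, λ) variables is Sk(nλ, nλ), with mean 0 and variance 2nλ) can be checked empirically. The sampler below is a generic textbook construction, not the authors' implementation: a Skellam variate is drawn as the difference of two Poisson variates, using Knuth's algorithm (adequate for small λ).

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative method (fine for small lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def skellam(lam, rng):
    # Z ~ Sk(lam, lam): difference of two independent Poisson(lam) draws
    return poisson(lam, rng) - poisson(lam, rng)

# each of n participants adds Sk(lam, lam) noise; the aggregate noise
# is Sk(n*lam, n*lam), i.e. mean 0 and variance 2*n*lam
rng = random.Random(0)
n, lam, trials = 5, 2.0, 20000
sums = [sum(skellam(lam, rng) for _ in range(n)) for _ in range(trials)]
mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials
# mean should be near 0 and var near 2*n*lam = 20
```

This infinite divisibility is exactly what lets each FL participant add a small share Sk(λ, λ) locally while the server-side aggregate behaves like a single, well-characterized Skellam noise draw.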
A classic definition of differential privacy is (ε, δ)-DP (Dwork et al., 2006), as follows.

Definition 1 ((ε, δ)-Differential Privacy (Dwork et al., 2006)). A randomized mechanism M satisfies (ε, δ)-differential privacy (DP) if

    Pr[M(X) ∈ O] ≤ exp(ε) · Pr[M(X′) ∈ O] + δ,    (2)

for any set of outputs O ⊆ Range(M) and any neighboring datasets X and X′.

Note that (ε, δ)-DP can be considered a worst-case privacy guarantee for a mechanism, as it enforces an upper bound on the probability ratio of all possible outcomes. An alternative definition called Rényi differential privacy (RDP) (Mironov, 2017), which is built upon the concept of Rényi divergence, considers the average-case privacy guarantee instead.

Definition 2 (Rényi Divergence (van Erven & Harremoës, 2014)). Assuming that distributions P and Q are defined over the same domain, and P is absolutely continuous with respect to Q, the Rényi divergence of P from Q of finite order α ∈ (0, 1) ∪ (1, ∞) is defined as

    D_α(P‖Q) = 1/(α − 1) · log E_{X∼P}[(P(X)/Q(X))^{α−1}],    (3)

where we adopt the conventions that 0/0 = 0 and y/0 = ∞ for any y > 0, and the logarithm is with base e.

Definition 3 (Rényi Differential Privacy (Mironov, 2017)). A randomized mechanism M satisfies (α, τ)-Rényi differential privacy (RDP) if D_α(M(X)‖M(X′)) ≤ τ for all neighboring datasets X and X′.

Given a function of interest, the canonical way to make it differentially private is to perturb its outcome through noise injection. Roughly speaking, the scale of the noise should be calibrated to the sensitivity of the function of interest (Dwork et al., 2006), formally defined as follows.

Definition 4 (Sensitivity). The sensitivity of a function F : D → R^d, denoted as S(F), is defined as

    S(F) = max_{X∼X′} ‖F(X) − F(X′)‖,

where X ∼ X′ denotes that X and X′ are neighboring datasets, and ‖·‖ is a norm.
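For discrete distributions, the Rényi divergence of equation 3 reduces to D_α(P‖Q) = 1/(α − 1) · log Σ_x P(x)^α Q(x)^{1−α}, which is a direct rewriting of the expectation form. A minimal sketch, assuming P is absolutely continuous with respect to Q (the example distributions are made up):

```python
import math

def renyi_divergence(P, Q, alpha):
    # D_alpha(P || Q) for discrete distributions given as dicts mapping
    # outcomes to probabilities; requires alpha != 1 and Q[x] > 0
    # wherever P[x] > 0 (absolute continuity)
    total = sum(p ** alpha * Q[x] ** (1.0 - alpha)
                for x, p in P.items() if p > 0)
    return math.log(total) / (alpha - 1.0)

uniform = {0: 0.5, 1: 0.5}
skewed = {0: 0.75, 1: 0.25}

d_self = renyi_divergence(uniform, uniform, alpha=2.0)  # 0.0
d_2 = renyi_divergence(skewed, uniform, alpha=2.0)      # log(1.25) ≈ 0.223
```

Identical distributions have zero divergence at every order, and the divergence grows as P moves away from Q, which is what the (α, τ)-RDP bound constrains for mechanism outputs on neighboring datasets.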
In particular, injecting continuous Gaussian noise sampled from N(0, σ²) into each dimension of a function F satisfies (α, αS²(F)/(2σ²))-RDP (Mironov, 2017), where S(F) stands for the L2 sensitivity of F. In many applications, we also need to analyze the overall privacy guarantee of a mechanism consisting of multiple components (e.g., training neural networks with SGD). We have the following composition and subsampling lemmata for RDP mechanisms.

Lemma 1 (Composition Lemma (Mironov, 2017)). If mechanisms M_1, . . . , M_T satisfy (α, τ_1), . . . , (α, τ_T)-RDP, respectively, then M_1 ◦ . . . ◦ M_T satisfies (α, Σ_{t=1}^{T} τ_t)-RDP.

Lemma 2 (Subsampling Lemma (Zhu & Wang, 2019; Mironov et al., 2019)). Let M be a mechanism that satisfies (l, τ(l))-RDP for l = 2, . . . , α (α ∈ Z, α > 2), and let S_q be a procedure that uniformly samples each record of the input data with probability q. Then M ◦ S_q satisfies (α, τ)-RDP with

    τ = 1/(α − 1) · log( (1 − q)^{α−1} (αq − q + 1) + Σ_{l=2}^{α} (α choose l) (1 − q)^{α−l} q^l e^{(l−1)τ(l)} ).

Finally, any mechanism that satisfies (α, τ)-RDP also satisfies (ε, δ)-DP, for values of ε and δ as follows.

Lemma 3 (Converting (α, τ)-RDP to (ε, δ)-DP (Canonne et al., 2020)). For any α ∈ (1, ∞), if D_α(M(X)‖M(X′)) ≤ τ for any neighboring datasets X and X′, then M satisfies (ε, δ)-DP for

    ε = τ + ( log(1/δ) + (α − 1) log(1 − 1/α) − log(α) ) / (α − 1).    (4)
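Lemma 3's conversion (equation 4) is a one-liner; combined with the Gaussian RDP bound τ(α) = αS²(F)/(2σ²) quoted above and a minimization over Rényi orders α, it yields an (ε, δ) guarantee. The α grid below is an arbitrary choice for illustration, not the accountant used by the authors.

```python
import math

def rdp_to_dp(alpha, tau, delta):
    # eq. 4: an (alpha, tau)-RDP mechanism satisfies (eps, delta)-DP with
    return tau + (math.log(1.0 / delta)
                  + (alpha - 1.0) * math.log(1.0 - 1.0 / alpha)
                  - math.log(alpha)) / (alpha - 1.0)

def gaussian_eps(sigma, sensitivity, delta, alphas=None):
    # Gaussian mechanism: tau(alpha) = alpha * S^2 / (2 sigma^2).
    # T-fold composition would simply multiply tau by T (Lemma 1).
    # Minimize the converted epsilon over a grid of Renyi orders.
    alphas = alphas or [1 + x / 10.0 for x in range(5, 1000)]
    return min(rdp_to_dp(a, a * sensitivity ** 2 / (2.0 * sigma ** 2), delta)
               for a in alphas)

eps_loose = gaussian_eps(sigma=1.0, sensitivity=1.0, delta=1e-5)
eps_tight = gaussian_eps(sigma=2.0, sensitivity=1.0, delta=1e-5)
# more noise yields a smaller epsilon: eps_tight < eps_loose
```

The same recipe is what makes DSM practical: once a τ(α) curve is established for the Skellam mechanism, composition (Lemma 1), subsampling (Lemma 2) and the conversion above turn it into the familiar (ε, δ) accounting of DPSGD.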
### Update after the discussions

I am satisfied with the authors' responses and the updated paper. I therefore recommend accepting the paper.

---

The paper considers the problem of distributed differentially private (DP) learning using black-box secure multi-party computation (MPC) for aggregating the gradients used to learn the model. In principle, using e.g. secure aggregation allows each party to add a small amount of noise, scaled so that the noise level after aggregation matches what one would need with a trusted central party. However, since the existing tools (mainly the Gaussian mechanism) typically assume a continuous space while MPC works with discrete values, this does not work in practice: the main problem is that the discretised noise is not infinitely divisible, so the sum is not guaranteed to follow the same distribution as the individual contributions. To remedy the problem, the authors propose the distributed Skellam mechanism, which is both discrete and infinitely divisible. The authors show that the Skellam mechanism's privacy cost can be calculated using Rényi DP (RDP), and continue to show that it performs significantly better than existing methods based on binomial noise and discrete Gaussian noise, using MNIST and Fashion-MNIST data for testing.
SP:b42d2b125c1877e8fe644c0ed5ed77ea64602430
Distributed Skellam Mechanism: a Novel Approach to Federated Learning with Differential Privacy
1 INTRODUCTION . Deep neural networks , especially large-scale ones such as GPT-3 ( Brown et al. , 2020 ) , are known for their excellent memorization capabilities ( Song et al. , 2017 ; Feldman , 2020 ; Zhang et al. , 2021 ) . However , it is rather difficult to control what exactly the neural net memorizes , and unintended data memorization can be a serious concern when the underlying training data contains sensitive information ( Carlini et al. , 2019 ) . For instance , consider a bank that trains a GPT-like language model on call center transcripts . Due to data memorization , it is possible to extract sensitive information by letting the model auto-complete a prefix , e.g. , “ my account number is : ” . Clearly , if such a model ( or its API ) is ever exposed to the adversary , it becomes a ligation machine as attackers can attempt with various prefixes to extract sensitive data , and subsequently sue the bank for privacy violations . Shokri et al . ( 2017 ) report that simple and intuitive measures often fail to provide sufficient protection , and the only way found to completely address the issue is to train the model with the rigorous guarantees of differential privacy ( DP ) ( Dwork et al. , 2006 ) . This paper focuses on the scenario that multiple individual participants jointly train a machine learning model using federated learning ( FL ) ( McMahan et al. , 2017 ) through distributed stochastic gradient descent ( SGD ) ( McDonald et al. , 2010 ; Dean et al. , 2012 ; Coates et al. , 2013 ; Abadi et al. , 2016a ) . Specifically , in every iteration , each individual computes the gradients with respect to the current model weights based on her own data ; then , gradients from all participants are aggregated to update the model . Note that the gradients from each individual may reveal sensitive information about her private dataset ( Shokri et al. , 2017 ; Pyrgelis et al. , 2018 ; Yeom et al. , 2018 ; Nasr et al. , 2019 ; Melis et al. , 2019 ) . 
A common approach to addressing this problem is by employing a secure multiparty computation ( MPC ) protocol ( Yao , 1986 ; Chaum et al. , 1987 ; Gennaro et al. , 2002 ; Ishai et al. , 2010 ; Beimel et al. , 2014 ; Cramer et al. , 2015 ; Ananth et al. , 2018 ) , which computes the aggregate gradients while preserving the confidentiality of the gradients from each individual participant . One advantage of MPC is that it is a decentralized approach that does not require a trusted third party , which can be difficult to establish in some applications , e.g. , in finance and healthcare . Note that although MPC protects individuals ’ privacy in the gradient update process by concealing the gradient values of each participant , it does not provide any protection against data extraction attacks caused by unintended data memorization ( Dwork et al. , 2015 ; Song & Shmatikov , 2019 ; Melis et al. , 2019 ; Song & Shmatikov , 2020 ) . As mentioned earlier , an effective methodology to defend against such attacks is to perturb the gradients to satisfy differential privacy ( Shokri et al. , 2017 ) . Since there is no trusted third-party in our setting , such gradient perturbations need to done in a decentralized fashion , i.e. , each FL participant adds noise to her own gradients , such that the aggregated gradients over all participants satisfies DP , which is referred to as distributed differential privacy ( Goryczka et al. , 2013 ; Kairouz et al. , 2021 ) . Although gradient perturbation under DP has been studied in previous work ( notably , DPSGD ( Abadi et al. , 2016b ) ) , it is far from trivial to adapt centralized DP solutions to our setting , due to a fundamental problem : that the MPC protocol requires gradients to be represented as integers ( more precisely , finite field elements ( Paillier , 1999 ; Bonawitz et al. , 2017 ; Bell et al. , 2020 ) ) . DPSGD , on the other hand , injects real-valued Gaussian noise to the gradients . 
Although real numbers can be quantized and (approximately) represented using large integers, the quantized random noise has rather different mathematical properties, which render a tight privacy cost analysis much more difficult, especially under the decentralized setting of FL. For instance, a nice property of the continuous Gaussian distribution is that summing up n i.i.d. unit-variance continuous Gaussian noise values results in an amplified continuous Gaussian noise of variance n. This property does not hold, however, if the Gaussian noise values are first quantized before being aggregated. Further, the privacy analysis (specifically, the moment accountant analysis technique) of the DPSGD algorithm also relies on other important properties of the continuous Gaussian distribution, which do not hold when the noise is quantized. Hence, DPSGD does not directly apply to our setting. This issue has been neglected by many existing distributed DP solutions, e.g., (Goryczka et al., 2013; Valovich & Aldà, 2017; Truex et al., 2019). Existing Solutions. Agarwal et al. (2018) propose cpSGD, which injects binomial noise (i.e., the sum of multiple binary values drawn from independent Bernoulli trials) into the discretized gradients at each participant of FL, to satisfy DP. Similar to Gaussian noise in the continuous domain, binomial noise can also be aggregated, i.e., the sum of multiple i.i.d. binomial noise values also follows a binomial distribution. However, compared to the continuous Gaussian distribution, existing theoretical tools for analyzing binomial noise aggregation lead to rather loose bounds; further, the binomial distribution is incompatible with the moment accountant analysis technique in DPSGD (Kairouz et al., 2021). Consequently, cpSGD leads to poor utility, as demonstrated in Section 4. Recently, the distributed discrete Gaussian mechanism (DDG) (Kairouz et al.
, 2021) addresses the above issues by injecting independent discrete Gaussian noise (Canonne et al., 2020) into the gradients at each participant. Similar to the binomial distribution, the discrete Gaussian distribution is also defined over an integer domain; meanwhile, DDG is fully compatible with the moment accountant analysis technique in DPSGD and thus enjoys a tight privacy cost analysis. However, the discrete Gaussian distribution is not aggregatable, meaning that the sum of noise drawn from multiple i.i.d. discrete Gaussian distributions does not follow another discrete Gaussian distribution, which renders the analysis difficult in the decentralized setting of FL and leads to looser bounds in the privacy analysis. Further, the privacy guarantee of the aggregated noise in DDG degrades linearly with the dimensionality d of the gradients, leading to poor scalability to large neural networks. Our contribution. In this work, we propose a new mechanism for enforcing distributed differential privacy for federated learning: the distributed Skellam mechanism (DSM), which injects random noise drawn from the symmetric Skellam distribution. Although the Skellam distribution has been used before in the DP literature (Valovich & Aldà, 2017), the privacy analysis therein does not cover the decentralized setting of FL, or the iterative, sampling-based SGD algorithm, both of which require highly non-trivial mathematical analysis. Specifically, we prove that DSM satisfies both Rényi-DP and (ε, δ)-DP, defined in Section 2. Similar to our competitor DDG described above, DSM is compatible with the DPSGD framework and its moment accountant analysis technique, leading to tight bounds in the privacy loss analysis. Meanwhile, unlike DDG, the privacy guarantees of DSM are independent of the dimensionality of the gradients, which scales well to large models.
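The non-aggregatability of the discrete Gaussian noted above can be checked numerically. The sketch below (a toy illustration; the parameter σ = 0.5 and the truncation range are arbitrary choices) convolves the pmfs of two i.i.d. discrete Gaussians and shows that the result differs measurably from the natural candidate, a discrete Gaussian with parameter √2·σ:

```python
import numpy as np

def discrete_gaussian_pmf(sigma, K):
    """pmf proportional to exp(-k^2 / (2 sigma^2)) on the integers -K..K."""
    w = np.exp(-np.arange(-K, K + 1)**2 / (2 * sigma**2))
    return w / w.sum()

K = 30
dg = discrete_gaussian_pmf(0.5, K)
dg_sum = np.convolve(dg, dg)          # pmf of the sum of two i.i.d. draws
# the natural candidate, sqrt(2) * sigma, does NOT reproduce the summed noise
candidate = discrete_gaussian_pmf(0.5 * np.sqrt(2), 2 * K)
gap = np.max(np.abs(dg_sum - candidate))
assert gap > 1e-3                     # the two pmfs differ noticeably
```

The gap shrinks as σ grows but never vanishes, which is why DDG needs extra slack in its decentralized privacy analysis.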
Further, similar to the continuous Gaussian distribution (and unlike the discrete Gaussian distribution in DDG), i.i.d. Skellam noise values can be aggregated to form an amplified noise that still follows the Skellam distribution, which leads to clean and elegant proofs in the decentralized setting of FL, and tight bounds in the privacy analysis. We apply DSM to federated learning with distributed SGD, with quantized gradients, e.g., as required by the MPC protocol, and present the complete training algorithm. Extensive experiments using benchmark datasets show that our solution leads to consistent and significant utility gains over its competitors, under a variety of settings with different privacy and communication constraints. 2 PRELIMINARIES. A random variable Y follows a Poisson distribution of parameter λ if its probability distribution is $\Pr[Y = k] = \frac{e^{-\lambda} \lambda^k}{k!}$, for k = 0, 1, 2, .... Both the mean and the variance of Y equal λ. A random variable Z follows a Skellam distribution if it is the difference between two independent Poisson variables Y1 and Y2. In this work, we restrict our attention to the case where Y1 and Y2 have the same parameter λ. In that case, the probability distribution of Z is $\Pr[Z = k] = e^{-2\lambda} I_{|k|}(2\lambda)$, for k = 0, ±1, ±2, ..., (1) where $I_v(u) \triangleq \sum_{h=0}^{\infty} \frac{1}{h!\,\Gamma(h+v+1)} \left(\frac{u}{2}\right)^{2h+v}$ is the modified Bessel function of the first kind. We write Z ∼ Sk(λ, λ). By linearity of expectation, Z has mean 0; since Y1 and Y2 are independent, Z has variance 2λ. We say that two datasets X and X′ are neighboring if one can be obtained by adding or removing one tuple from the other. The main idea of differential privacy (DP) is to ensure that the outcomes of a randomized mechanism on neighboring datasets are always similar; intuitively, this provides plausible deniability on whether a given data record x belongs to the dataset X or not, and thus protects the privacy of the individual whose record is x.
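A quick numerical sanity check of the Skellam facts above (a sketch; the truncation range K is an arbitrary choice): building the pmf of Z = Y1 − Y2 by convolving two truncated Poisson pmfs reproduces mean 0 and variance 2λ, and convolving two Skellam pmfs reproduces Sk(2λ, 2λ), i.e., the aggregation property stated at the start of this paragraph.

```python
import math
import numpy as np

def poisson_pmf(lam, K):
    return np.array([math.exp(-lam) * lam**k / math.factorial(k) for k in range(K + 1)])

def skellam_pmf(lam, K):
    """pmf of Sk(lam, lam) on {-K, ..., K}: Z = Y1 - Y2, Y1, Y2 ~ Poisson(lam)."""
    p = poisson_pmf(lam, K)
    return np.convolve(p, p[::-1])            # cross-correlation of p with itself

lam, K = 2.0, 20
pmf = skellam_pmf(lam, K)
support = np.arange(-K, K + 1)
assert abs(pmf.sum() - 1.0) < 1e-9            # truncation error is negligible here
assert abs(support @ pmf) < 1e-12             # mean 0
assert abs(support**2 @ pmf - 2 * lam) < 1e-6 # variance 2*lambda

# closure under summation: Sk(lam, lam) + Sk(lam, lam) ~ Sk(2*lam, 2*lam)
sk_sum = np.convolve(pmf, skellam_pmf(lam, K))
assert np.max(np.abs(sk_sum - skellam_pmf(2 * lam, 2 * K))) < 1e-9
```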
A classic definition of differential privacy is (ε, δ)-DP (Dwork et al., 2006), as follows. Definition 1 ((ε, δ)-Differential Privacy (Dwork et al., 2006)). A randomized mechanism M satisfies (ε, δ)-differential privacy (DP) if $\Pr[\mathcal{M}(X) \in O] \le e^{\varepsilon} \cdot \Pr[\mathcal{M}(X') \in O] + \delta$, (2) for any set of outputs O ⊆ Range(M) and any neighboring datasets X and X′. Note that (ε, δ)-DP can be considered a worst-case privacy guarantee for a mechanism, as it enforces an upper bound on the probability ratio of all possible outcomes. An alternative definition called Rényi Differential Privacy (RDP) (Mironov, 2017), which is built upon the concept of Rényi divergence, considers the average-case privacy guarantee instead. Definition 2 (Rényi Divergence (van Erven & Harremoës, 2014)). Assuming that distributions P and Q are defined over the same domain, and P is absolutely continuous with respect to Q, the Rényi divergence of P from Q of finite order α ∈ (0, 1) ∪ (1, ∞) is defined as: $D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \mathbb{E}_{X \sim P}\!\left[\left(\frac{P(X)}{Q(X)}\right)^{\alpha - 1}\right]$, (3) where we adopt the convention that 0/0 = 0 and y/0 = ∞ for any y > 0, and the logarithm is with base e. Definition 3 (Rényi Differential Privacy (Mironov, 2017)). A randomized mechanism M satisfies (α, τ)-Rényi differential privacy (RDP) if $D_\alpha(\mathcal{M}(X) \,\|\, \mathcal{M}(X')) \le \tau$ for all neighboring datasets X and X′. Given a function of interest, the canonical way to make it differentially private is to perturb its outcome through noise injection. Roughly speaking, the scale of the noise should be calibrated to the sensitivity of the function of interest (Dwork et al., 2006), formally defined as follows. Definition 4 (Sensitivity). The sensitivity of a function F : D → R^d, denoted S(F), is defined as $S(F) = \max_{X \sim X'} \|F(X) - F(X')\|$, where X ∼ X′ denotes that X and X′ are neighboring datasets, and ‖·‖ is a norm.
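Definition 2 can be evaluated directly for finite distributions. The toy sketch below (the distributions and tolerances are illustrative choices) checks that the divergence vanishes for identical distributions, is positive otherwise, and approaches the KL divergence as α → 1:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) for finite distributions p, q (Definition 2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

p, q = [0.5, 0.5], [0.6, 0.4]
assert abs(renyi_divergence(p, p, 2.0)) < 1e-12          # identical distributions
assert renyi_divergence(p, q, 2.0) > 0                   # positive when p != q
kl = sum(pi * np.log(pi / qi) for pi, qi in zip(p, q))   # KL divergence
assert abs(renyi_divergence(p, q, 1.0001) - kl) < 1e-3   # alpha -> 1 recovers KL
```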
In particular, injecting continuous Gaussian noise sampled from N(0, σ²) into each dimension of function F satisfies $(\alpha, \frac{\alpha S^2(F)}{2\sigma^2})$-RDP (Mironov, 2017), where S(F) stands for the L2 sensitivity of F. In many applications, we also need to analyze the overall privacy guarantee of a mechanism consisting of multiple components (e.g., training neural networks with SGD). We have the following composition and sub-sampling lemmata for RDP mechanisms. Lemma 1 (Composition Lemma (Mironov, 2017)). If mechanisms M1, ..., MT satisfy (α, τ1)-, ..., (α, τT)-RDP, respectively, then M1 ◦ ... ◦ MT satisfies $(\alpha, \sum_{t=1}^{T} \tau_t)$-RDP. Lemma 2 (Subsampling Lemma (Zhu & Wang, 2019; Mironov et al., 2019)). Let M be a mechanism that satisfies (l, τ(l))-RDP for l = 2, ..., α (α ∈ Z, α > 2), and let Sq be a procedure that uniformly samples each record of the input data with probability q. Then M ◦ Sq satisfies (α, τ)-RDP with $\tau = \frac{1}{\alpha - 1} \log\!\left( (1-q)^{\alpha-1}(\alpha q - q + 1) + \sum_{l=2}^{\alpha} \binom{\alpha}{l} (1-q)^{\alpha-l}\, q^l\, e^{(l-1)\tau(l)} \right)$. Finally, any mechanism that satisfies (α, τ)-RDP also satisfies (ε, δ)-DP, for values of ε and δ as follows. Lemma 3 (Converting (α, τ)-RDP to (ε, δ)-DP (Canonne et al., 2020)). For any α ∈ (1, ∞), if $D_\alpha(\mathcal{M}(X) \,\|\, \mathcal{M}(X')) \le \tau$ for any neighboring databases X and X′, then M satisfies (ε, δ)-DP for $\varepsilon = \tau + \frac{\log(1/\delta) + (\alpha - 1)\log(1 - 1/\alpha) - \log(\alpha)}{\alpha - 1}$. (4)
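Lemma 1 and Lemma 3 together give the usual accounting recipe: compose the per-step RDP costs, then convert to (ε, δ)-DP, minimizing over the order α. A minimal sketch using the Gaussian per-step cost from the start of this section (T, σ, and δ here are illustrative values, not the paper's settings):

```python
import math

def gaussian_rdp(alpha, sensitivity, sigma):
    """Per-release RDP cost of the Gaussian mechanism (Mironov, 2017)."""
    return alpha * sensitivity**2 / (2 * sigma**2)

def rdp_to_dp(tau, alpha, delta):
    """Eq. (4): convert an (alpha, tau)-RDP guarantee to (eps, delta)-DP."""
    return tau + (math.log(1 / delta)
                  + (alpha - 1) * math.log(1 - 1 / alpha)
                  - math.log(alpha)) / (alpha - 1)

# compose T identical releases (Lemma 1), then pick the best conversion order
T, sens, sigma, delta = 100, 1.0, 8.0, 1e-5
eps = min(rdp_to_dp(T * gaussian_rdp(a, sens, sigma), a, delta)
          for a in range(2, 256))
assert eps > 0
```

Scanning over integer orders α is the same trick the moment accountant uses; larger noise σ trades utility for a smaller final ε.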
This paper studies federated learning under the distributed DP framework [KLS 2021] and proposes the distributed Skellam mechanism (DSM). Compared to the existing approach [KLS 2021], which uses distributed discrete Gaussian (DDG) noise, DSM perturbs each local gradient with independent Skellam noise. This gives the advantage that the privacy guarantee is independent of the dimensionality of the gradients; further, DSM allows tight privacy accounting due to the nice composition and sub-sampling properties of the Skellam distribution, and hence enjoys a better constant compared to DDG. Experimental results also indicate that DSM improves on the previous DDG scheme (proposed in [KLS 2021]) when communication is limited, say to 12 bits per parameter.
Distributed Skellam Mechanism: a Novel Approach to Federated Learning with Differential Privacy
This paper presents a mechanism based on the Skellam distribution, called the Distributed Skellam Mechanism (DSM), to prevent privacy leakage in federated learning. It provides an analysis of the privacy guarantee in the decentralized setting. Specifically, DSM is shown to satisfy both RDP and (ε, δ)-DP. Also, DSM is applied to differentially private federated learning with distributed SGD and quantized gradients.
Distribution Matching in Deep Generative Models with Kernel Transfer Operators
1 Introduction. Generative modeling, in its unconditional form, refers to the problem of estimating the data-generating distribution: given i.i.d. samples X with an unknown distribution P_X, a generative model seeks to find a parametric distribution that closely resembles P_X. In modern deep generative models, we often approach this problem via a latent variable: we assume that there is some variable Z ∈ Z associated with the observed data X ∈ X that follows a known distribution P_Z (also referred to as the prior in generative models). Thus, we can learn a mapping f : Z → X such that the distribution after transformation, denoted by P_{f(Z)}, aligns well with the data-generating distribution P_X. Sampling from P_X then becomes convenient, since P_Z can be efficiently sampled. Frequently, f is parameterized by deep neural networks and optimized with stochastic gradient descent (SGD). Existing generative modeling methods variously optimize the transformation f, most commonly modeling it as a maximum likelihood estimation (MLE) or distribution matching problem. For instance, given data X = {x1, ..., xn}, a variational autoencoder (VAE) (Kingma & Welling, 2013) first constructs Z through the approximate posterior q_{Z|X} and maximizes a lower bound on the likelihood p_{f(Z)}(X). Generative adversarial networks (GANs) (Goodfellow et al., 2014) rely on a simultaneously learned discriminator such that samples of P_{f(Z)} are indistinguishable from X. Results in (Arjovsky et al., 2017; Li et al., 2017) suggest that GANs minimize the distributional discrepancies between P_{f(Z)} and P_X. Flow-based generative models optimize p_{f(Z)}(X) explicitly through the change-of-variables rule, efficiently calculating the Jacobian determinant of the inverse mapping f^{-1}.
In all the examples above, the architecture or objective notwithstanding, the common goal is to find a suitable function f that reduces the difference between P_{f(Z)} and P_X. Thus, a key component in many deep generative models is to learn a forward operator as defined below. Definition 1.1 (Forward operator). A forward operator f⋆ ∈ C : Z → X is defined to be a mapping associated with some latent variable Z ∼ P_Z such that $f^\star = \arg\min_{f \in \mathcal{C}} d(P_{f(Z)}, P_X)$ for some function class C and a distance measure d(·, ·). Motivation: The specifics of the forward operator may differ from case to case, but its properties, and how it is estimated numerically, greatly influence the empirical performance of the model. For instance, mode collapse issues in GANs are well known and solutions continue to emerge (Srivastava et al., 2017). To learn the forward operator, VAEs use an approximate posterior q_{Z|X} that may sometimes fail to align with the prior (Kingma et al., 2016; Dai & Wipf, 2019). Flow-based generative models enable direct access to the posterior likelihood, yet in order to tractably evaluate the Jacobian of the transformation during training, one must either restrict the expressiveness at each layer (Dinh et al., 2017; Kingma & Dhariwal, 2018) or use more involved solutions (Chen et al., 2018). Of course, developing solutions to mitigate these weaknesses (Ho et al., 2019) remains an active area of research. The starting point of our work is to evaluate the extent to which we can radically simplify the forward operator in deep generative models. Consider some desirable properties of a hypothetical forward operator (in Def. (1.1)): (a) Upon convergence, the learned operator f⋆ minimizes the distance between P_X and P_{f(Z)} over all possible operators of a certain class. (b) The training directly learns the mapping from the prior distribution P_Z, rather than a variational approximation. (c) The forward operator f⋆
can be efficiently learned, and sample generation is also efficient. It would appear that these criteria violate the "no free lunch" rule, and some compromise must be involved. Our goal is to investigate this trade-off: which design choices can make this approach work? Specifically, a well-studied construct in dynamical systems, namely the Perron-Frobenius operator (Lemmens & Nussbaum, 2012), suggests an alternative linear route to model the forward operator. Here, we show that if we are willing to give up on a few features in existing models (this may be acceptable depending on the downstream use case), then the forward operator in generative models can be efficiently approximated via the estimation of a closed-form linear operator in a reproducing kernel Hilbert space (RKHS). With simple adjustments of existing results, we identify a novel way to replace the expensive training for generative tasks with a simple, principled kernel approach. Contributions. Our results are largely based on results in kernel methods and dynamical systems, but we demonstrate their relevance in generative modeling and complement recent ideas that emphasize links between deep generative models and dynamical systems. Our contributions are: (a) We propose a non-parametric method for transferring a known prior density linearly in an RKHS to an unknown data density, which is equivalent to learning a nonlinear forward operator in the input space. When compared to its functionally analogous module used in other deep generative methods, our method avoids multiple expensive training steps, yielding significant efficiency gains; (b) We evaluate this idea in multiple scenarios and show competitive generation performance and efficiency benefits with pre-trained autoencoders on popular image datasets including MNIST, CIFAR-10, CelebA and FFHQ; (c) As a special use case, we demonstrate the advantages over other methods in limited-data settings.
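Definition 1.1 can be made concrete with a deliberately tiny example: if the function class C is restricted to 1-D affine maps f(z) = az + b and the distance is moment-based, the minimizer is available in closed form (a toy sketch only; the Gaussian data and the moment-matching distance are illustrative assumptions, not the method of this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 0.5, size=2000)   # samples from an "unknown" P_X
z = rng.normal(size=2000)                # samples from the prior P_Z = N(0, 1)

# for C = {z -> a*z + b} and a first/second-moment distance, the arg min
# of Def. 1.1 is simply moment matching: a = std(data), b = mean(data)
a, b = data.std(), data.mean()
fz = a * z + b                           # samples from P_{f(Z)}

assert abs(fz.mean() - data.mean()) < 0.05
assert abs(fz.std() - data.std()) < 0.05
```

Real forward operators need a far richer class C, which is exactly where the linear-operator view developed next becomes useful.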
2 Preliminaries. We briefly introduce the reproducing kernel Hilbert space (RKHS) and the kernel embedding of probability distributions, concepts we will use frequently. Definition 2.1 (RKHS (Aronszajn, 1950)). For a set X, let H be a set of functions g : X → R. Then H is a reproducing kernel Hilbert space (RKHS) with inner product ⟨·, ·⟩_H if there exists a function k : X × X → R (called a reproducing kernel) such that (i) ∀x ∈ X, g ∈ H: g(x) = ⟨g, k(x, ·)⟩_H; (ii) H = cl(span({k(x, ·), x ∈ X})), where cl(·) is the set closure. The function φ(x) = k(x, ·) : X → H is referred to as the feature mapping of the induced RKHS H. A useful identity derived from feature mappings is the kernel mean embedding: it defines a mapping from a probability measure on X to an element in the RKHS. Definition 2.2 (Kernel Mean Embedding (Smola et al., 2007)). Given a probability measure p on X with an associated RKHS H equipped with a reproducing kernel k such that $\sup_{x \in \mathcal{X}} k(x, x) < \infty$, the kernel mean embedding of p in the RKHS H, denoted by μ_p ∈ H, is defined as $\mu_p = \mathbb{E}_p[\varphi(x)] = \int k(x, \cdot)\, p(x)\, dx$, and the mean embedding operator E : L¹(X) → H is defined by μ_p = Ep. Remark 1. For characteristic kernels, the operator E is injective. Thus, two distributions (p, q) on X are identical iff Ep = Eq. This property allows the use of Maximum Mean Discrepancy (MMD) for distribution matching (Gretton et al., 2012; Li et al., 2017) and is common; see (Muandet et al., 2017; Zhou et al., 2018). For a finite number of samples {x_i}_{i=1}^n drawn from the probability measure p, an unbiased empirical estimate of μ_H is $\hat{\mu}_\mathcal{H} = \frac{1}{n} \sum_{i=1}^{n} k(x_i, \cdot)$, such that $\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} k(x_i, \cdot) = \mu_\mathcal{H}$. Next, we review the covariance/cross-covariance operators, two widely used identities in kernel methods (Fukumizu et al., 2013; Song et al., 2013) and building blocks of our approach.
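Remark 1 is easy to exercise empirically: the squared RKHS distance between two empirical mean embeddings is the (biased) MMD estimate. A small sketch with a Gaussian RBF kernel (the bandwidth, sample sizes, and the 2-D Gaussians are arbitrary illustrative choices):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel matrix between two sample sets."""
    d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of ||mu_P - mu_Q||_H^2 from samples x ~ P, y ~ Q."""
    return rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean() - 2 * rbf(x, y, gamma).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(2.0, 1.0, size=(200, 2))   # clearly shifted distribution
assert mmd2(x, x) < 1e-12                 # identical samples: embeddings coincide
assert mmd2(x, y) > 0.1                   # distinct distributions: MMD is large
```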
Definition 2.3 (Covariance/Cross-covariance Operator). Let X, Z be random variables defined on X × Z with joint distribution P_{X,Z} and marginal distributions P_X, P_Z. Let (k, φ, H) and (l, ψ, G) be two triples of (a) a bounded kernel, (b) its corresponding feature map, and (c) its induced RKHS, respectively. The (uncentered) covariance operator C_ZZ : H → H and cross-covariance operator C_XZ : H → G are defined as $C_{ZZ} \triangleq \mathbb{E}_{z \sim P_Z}[\varphi(z) \otimes \varphi(z)]$ and $C_{XZ} \triangleq \mathbb{E}_{(x,z) \sim P_{X,Z}}[\psi(x) \otimes \varphi(z)]$, (1) where ⊗ is the outer product operator. 3 Simplifying the estimation of the forward operator. Forward operator as a dynamical system: The dynamical system view of generative models has been described by others (Chen et al., 2018; Grathwohl et al., 2019; Behrmann et al., 2019). These strategies model the evolution of latent variables in a residual neural network in terms of its dynamics over continuous or discrete time t, and consider the output function f as the evaluation function at a predetermined boundary condition t = t1. Specifically, given an input (i.e., initial condition) z(t0), f is defined as $f(z(t_0)) = z(t_0) + \int_{t_0}^{t_1} \Delta_t(z(t))\, dt$, (2) where Δ_t is a time-dependent neural network function and z(t) is the intermediate solution at time t. This view of generative models is not limited to specific methods or model archetypes, but is generally useful, for example, by viewing the outputs of each hidden layer as evaluations in discrete-time dynamics. After applying f to a random variable Z ∈ Z, the marginal density of the output over any subspace Λ ⊆ X can be expressed as $\int_\Lambda p_{f(Z)}(x)\, dx = \int_{z \in f^{-1}(\Lambda)} p_Z(z)\, dz$. (3) If there exists some neural network instance Δ⋆_t such that the corresponding output function f⋆ satisfies P_X = P_{f⋆(Z)}, then by Def. 1.1, f⋆ is a forward operator. Let X be a set of i.i.d. samples drawn from P_X.
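For intuition, with finite-dimensional identity feature maps the operators in Definition 2.3 reduce to ordinary (cross-)covariance matrices, and the composition C_XZ C_ZZ^{-1} recovers linear dynamics exactly (a toy sketch; the matrix A and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [-1.0, 1.5]])   # ground-truth linear dynamics x = A z
Z = rng.normal(size=(5000, 2))
X = Z @ A.T

# with identity feature maps, Eq. (1) becomes plain (cross-)covariance matrices
C_zz = Z.T @ Z / len(Z)
C_xz = X.T @ Z / len(Z)
A_hat = C_xz @ np.linalg.inv(C_zz)        # finite-dimensional analogue of C_XZ C_ZZ^{-1}
assert np.allclose(A_hat, A)              # the linear map is recovered exactly
```

The kernelized construction in the next section plays the same game, except the "features" are infinite-dimensional, so the composition must be estimated through Gram matrices.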
In typical generative learning , either maximizing the likelihood 1|X| ∑ x∈X pf ( Z ) ( x ) or minimizing the distributional divergence d ( Pf ( Z ) , PX ) requires evaluating and differentiating through f or f−1 many times . Towards a one-step estimation of forward operator : Since f and f−1 in ( 3 ) will be highly nonlinear in practice , evaluating and computing the gradients can be expensive . Nevertheless , the dynamical systems literature suggests a linear extension of f∗ , namely the Perron-Frobenius operator or transfer operator , that conveniently transfers pZ to pX . Definition 3.1 ( Perron-Frobenius operator ( Mayer , 1980 ) ) . Given a dynamical system f : X → X , the Perron-Frobenius ( PF ) operator P : L1 ( X ) → L1 ( X ) is an infinitedimensional linear operator defined as ∫ Λ ( PpZ ) ( x ) dx = ∫ z∈f−1 ( Λ ) pZ ( z ) dz for all Λ ⊆ X . Although in Def . 3.1 , the PF operator P is defined for self-maps , it is trivial to extend P to mappings f : Z → X by restricting the RHS integral ∫ z∈f−1 ( Λ ) pZ ( z ) dz to Z . It can be seen that , for the forward operator f∗ , the corresponding PF operator P satisfies pX = PpZ . ( 4 ) If P can be efficiently computed , transferring the tractable density pZ to the target density pX can be accomplished simply by applying P. However , since P is an infinite-dimensional operator on L1 ( X ) , it is impractical to instantiate it explicitly and exactly . Nonetheless , there exist several methods for estimating the Perron-Frobenius operator , including Ulam ’ s method ( Ulam , 1960 ) and the Extended Dynamical Mode Decomposition ( EDMD ) ( Williams et al. , 2015a ) . Both strategies project P onto a finite number of hand-crafted basis functions – this may suffice in many settings but may fall short in modeling highly complex dynamics . Kernel-embedded form of PF operator : A natural extension of PF operator is to represent P by an infinite set of functions ( Klus et al. , 2020 ) , e.g. 
, projecting it onto the bases of an RKHS via the kernel trick . There , for a characteristic kernel l , the kernel mean embedding uniquely identifies an element µX = ElpX ∈ G for any pX ∈ L1 ( X ) . Thus , to approximate P , we may alternatively solve for the dynamics from pZ to pX in their embedded form . Using Tab . 1 notations , we have the following linear operator that defines the dynamics between two embedded densities . Definition 3.2 ( Kernel-embedded Perron-Frobenius operator ( Klus et al. , 2020 ) ) . Given pZ ∈ L1 ( X ) and pX ∈ L1 ( X ) . Denote k as the input kernel and l as the output kernel . Let µX = ElpX and µZ = EkpZ be their corresponding mean kernel embeddings . The kernel-embedded Perron-Frobenius ( kPF ) operator , denoted by PE : H → G , is defined as PE = CXZC−1ZZ ( 5 ) Proposition 3.1 ( Song et al . ( 2013 ) ) . With the above definition , PE satisfies µX = PEµZ ( 6 ) under the conditions : ( i ) CZZ is injective ( ii ) µt ∈ range ( CZZ ) ( iii ) E [ g ( X ) |Z = · ] ∈ H for any g ∈ G. The last two assumptions can sometimes be difficult to satisfy for certain RKHS ( see Theorem 2 of Fukumizu et al . ( 2013 ) ) . In such cases , a relaxed solution can be constructed by replacing C−1ZZ by a regularized inverse ( CZZ + λI ) −1 or a Moore-Penrose pseudoinverse C † ZZ . The following proposition shows commutativity between the ( kernel-embedded ) PF operator and the mean embedding operator , showing its equivalence to P when l is characteristic . Proposition 3.2 ( ( Klus et al. , 2020 ) ) . With the above notations , El ◦ P = PE ◦ Ek . Transferring embedded densities with the kPF operator : The kPF operator is a powerful tool that allows transferring embedded densities in RKHS . The main steps are : ( 1 ) Use mean embedding operator El on pZ . Let us denote it by µZ . ( 2 ) Transfer µZ using kPF operator PE to get the mean embedded pX , given by µX . 
Of course , in practice with finite data , { xi } i∈ [ n ] ∼ PX and { zi } i∈ [ n ] ∼ PX , PE must be estimated empirically ( see Klus et al . ( 2020 ) for an error analysis ) . P̂E = ĈXZ ( ĈZZ ) −1 ≈ Ψ ( ΦTΦ + λnI ) −1ΦT ≈ Ψ ( ΦTΦ ) †ΦT where Φ = [ k ( z1 , · ) , · · · , k ( zn , · ) ] , Ψ = [ l ( x1 , · ) , · · · , l ( xn , · ) ] are simply the corresponding feature matrices for samples of PX and PZ , and λ is a small penalty term . Learning kPF for unconditional generative modeling : Some generative modeling methods such as VAEs and flow-based formulations explicitly model the latent variable Z as conditionally dependent on the data variable X . This allows deriving/optimizing the likelihood pf ( Z ) ( X ) . This is desirable but may not be essential in all applications . To learn a kPF , however , X and Z can be independent RVs . While it may not be immediately obvious why we could assume this independence , we can observe the following property for the empirical kPF operator , assuming that the empirical covariance operator ĈZZ is non-singular : P̂E µ̂Z = ĈXZ Ĉ−1ZZ µ̂Z = ΨΦ > ︸ ︷︷ ︸ ĈXZ ( ΦΦ > ︸ ︷︷ ︸ ĈZZ ) −1Φ1n = Ψ ( Φ > Φ ) −1Φ > Φ1n = Ψ1n = µ̂X ( 7 ) Suppose that { xi } i∈ [ n ] and { zj } i∈ [ n ] are independently sampled from the marginals PX and PZ . It is easy to verify that ( 7 ) holds for any pairing { ( xi , zj ) } ( i , j ) ∈ [ n ] × [ n ] . However , instantiating the RVs in this way rules out the use of kPF for certain downstream tasks such as controlled generation or mode detection , since Z does not contain information regarding X . Nevertheless , if sampling is our only goal , then this instantiation of kPF will suffice . Mapping Z to G : Now , since PE is a deterministic linear operator , we can easily set up a scheme to map samples of Z to elements of G where the expectation of the mapped samples equals µX Define φ ( z ) = k ( z , · ) and ψ ( x ) = l ( x , · ) as feature maps of kernels k and l. 
We can rewrite µX as µX = PEEkpZ = PEEZ [ φ ( Z ) ] = EZ [ PE ( φ ( Z ) ) ] = EZ [ ψ ( ψ−1 ( PEφ ( Z ) ) ) ] ( 8 ) Here ψ−1 is the inverse or the preimage map of ψ . Such an inverse , in general , may not exist ( Kwok & Tsang , 2004 ; Honeine & Richard , 2011 ) . We will discuss a procedure to approximate ψ−1 in §4.1 . In what follows , we will temporarily assume that an exact preimage map exists and is tractable to compute . Define Ψ∗ = P̂Eφ ( Z ) as the transferred sample in G using the empirical embedded PF operator P̂E . Then the next result shows that asymptotically the transferred samples converge in distribution to the target distribution . Proposition 3.3 . As n → ∞ , ψ−1 ( Ψ∗ ) D→ PX . That is , the preimage of the transferred sample approximately conforms to PX under previous assumptions when n is large . Proof . Since P̂E asymp.→ P , the proof immediately follows from ( 8 ) .
The authors propose a new type of generative model. The new scheme is based on a kernel transfer operator that leads to a cheap method for distribution matching. The authors rely on rigorous RKHS theory and propose a framework for transferring a prior distribution linearly (in RKHS) to the data distribution. The authors demonstrate that the proposed approach leads to improved approximations of observed distributions. Specifically, the new approach can generate new images from a given distribution and requires less training time compared to existing baselines. The paper is mostly well written, and the method relies on solid justifications. The authors demonstrate the usefulness of the method and compare it to several existing methods.
Distribution Matching in Deep Generative Models with Kernel Transfer Operators
1 Introduction. Generative modeling, in its unconditional form, refers to the problem of estimating the data-generating distribution: given i.i.d. samples X with an unknown distribution PX, a generative model seeks a parametric distribution that closely resembles PX. In modern deep generative models, we often approach this problem via a latent variable – i.e., we assume that there is some variable Z ∈ Z associated with the observed data X ∈ X that follows a known distribution PZ (also referred to as the prior in generative models). Thus, we can learn a mapping f : Z → X such that the distribution after transformation, denoted by Pf(Z), aligns well with the data-generating distribution PX. Sampling from PX then becomes convenient, since PZ can be sampled efficiently. Frequently, f is parameterized by deep neural networks and optimized with stochastic gradient descent (SGD). Existing generative modeling methods variously optimize the transformation f, most commonly casting it as a maximum likelihood estimation (MLE) or distribution matching problem. For instance, given data X = {x1, ..., xn}, a variational autoencoder (VAE) (Kingma & Welling, 2013) first constructs Z through the approximate posterior qZ|X and maximizes a lower bound on the likelihood pf(Z)(X). Generative adversarial networks (GANs) (Goodfellow et al., 2014) rely on a simultaneously learned discriminator that pushes samples of Pf(Z) to be indistinguishable from X. Results in (Arjovsky et al., 2017; Li et al., 2017) suggest that GANs minimize the distributional discrepancy between Pf(Z) and PX. Flow-based generative models optimize pf(Z)(X) explicitly through the change-of-variables rule, efficiently calculating the Jacobian determinant of the inverse mapping f−1.
In all examples above, the architecture or objective notwithstanding, the common goal is to find a suitable function f that reduces the difference between Pf(Z) and PX. Thus, a key component in many deep generative models is to learn a forward operator as defined below.

Definition 1.1 (Forward operator). A forward operator f⋆ ∈ C : Z → X is defined to be a mapping associated with some latent variable Z ∼ PZ such that f⋆ = arg min_{f∈C} d(Pf(Z), PX) for some function class C and a distance measure d(·, ·).

Motivation: The specifics of the forward operator may differ from case to case, but its properties, and how it is estimated numerically, greatly influence the empirical performance of the model. For instance, mode collapse issues in GANs are well known, and solutions continue to emerge (Srivastava et al., 2017). To learn the forward operator, VAEs use an approximate posterior qZ|X that may sometimes fail to align with the prior (Kingma et al., 2016; Dai & Wipf, 2019). Flow-based generative models enable direct access to the posterior likelihood, yet in order to tractably evaluate the Jacobian of the transformation during training, one must either restrict the expressiveness of each layer (Dinh et al., 2017; Kingma & Dhariwal, 2018) or use more involved solutions (Chen et al., 2018). Of course, solutions to mitigate these weaknesses (Ho et al., 2019) remain an active area of research. The starting point of our work is to evaluate the extent to which we can radically simplify the forward operator in deep generative models. Consider some desirable properties of a hypothetical forward operator (in Def. (1.1)): (a) Upon convergence, the learned operator f⋆ minimizes the distance between PX and Pf(Z) over all possible operators of a certain class. (b) The training directly learns the mapping from the prior distribution PZ, rather than a variational approximation. (c) The forward operator f⋆
can be efficiently learned, and sample generation is also efficient. It would appear that these criteria violate the “no free lunch” rule, and some compromise must be involved. Our goal is to investigate this trade-off: which design choices can make this approach work? Specifically, a well-studied construct in dynamical systems, namely the Perron-Frobenius operator (Lemmens & Nussbaum, 2012), suggests an alternative linear route to model the forward operator. Here, we show that if we are willing to give up on a few features of existing models – this may be acceptable depending on the downstream use case – then the forward operator in generative models can be efficiently approximated via the estimation of a closed-form linear operator in a reproducing kernel Hilbert space (RKHS). With simple adjustments of existing results, we identify a novel way to replace the expensive training for generative tasks with a simple, principled kernel approach.

Contributions. Our results are largely based on results in kernel methods and dynamical systems, but we demonstrate their relevance in generative modeling and complement recent ideas that emphasize links between deep generative models and dynamical systems. Our contributions are: (a) We propose a non-parametric method for transferring a known prior density linearly in RKHS to an unknown data density – equivalent to learning a nonlinear forward operator in the input space. Compared to its functionally analogous module in other deep generative methods, our method avoids multiple expensive training steps, yielding significant efficiency gains. (b) We evaluate this idea in multiple scenarios and show competitive generation performance and efficiency benefits with pre-trained autoencoders on popular image datasets including MNIST, CIFAR-10, CelebA and FFHQ. (c) As a special use case, we demonstrate the advantages over other methods in limited-data settings.
2 Preliminaries

We briefly introduce reproducing kernel Hilbert spaces (RKHS) and kernel embeddings of probability distributions, concepts we will use frequently.

Definition 2.1 (RKHS (Aronszajn, 1950)). For a set X, let H be a set of functions g : X → R. Then, H is a reproducing kernel Hilbert space (RKHS) with inner product 〈·, ·〉H if there exists a function k : X × X → R (called a reproducing kernel) such that (i) ∀x ∈ X, g ∈ H, g(x) = 〈g, k(x, ·)〉H; (ii) H = cl(span({k(x, ·), x ∈ X})), where cl(·) is the set closure.

The function φ(x) = k(x, ·) : X → H is referred to as the feature map of the induced RKHS H. A useful identity derived from feature maps is the kernel mean embedding: it defines a mapping from a probability measure on X to an element of the RKHS.

Definition 2.2 (Kernel Mean Embedding (Smola et al., 2007)). Given a probability measure p on X with an associated RKHS H equipped with a reproducing kernel k such that sup_{x∈X} k(x, x) < ∞, the kernel mean embedding of p in the RKHS H, denoted by µp ∈ H, is defined as µp = Ep[φ(x)] = ∫ k(x, ·) p(x) dx, and the mean embedding operator E : L1(X) → H is defined by Ep = µp.

Remark 1. For characteristic kernels, the operator E is injective. Thus, two distributions p, q on X are identical iff Ep = Eq. This property allows the use of Maximum Mean Discrepancy (MMD) for distribution matching (Gretton et al., 2012; Li et al., 2017) and is common, see (Muandet et al., 2017; Zhou et al., 2018). For a finite number of samples {xi}_{i=1}^{n} drawn from the probability measure p, an unbiased empirical estimate of µp is µ̂p = (1/n) Σ_{i=1}^{n} k(xi, ·), and lim_{n→∞} (1/n) Σ_{i=1}^{n} k(xi, ·) = µp.

Next, we review the covariance/cross-covariance operators, two widely used constructions in kernel methods (Fukumizu et al., 2013; Song et al., 2013) and building blocks of our approach.
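The mean embedding makes distribution comparison concrete: with a characteristic kernel, the squared MMD is simply the RKHS distance between two empirical mean embeddings, ‖µ̂p − µ̂q‖²_H, computable entirely from Gram matrices. A minimal NumPy sketch (the Gaussian kernel, bandwidth, and sample sizes are our illustrative choices, not from the paper):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gram matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=0.5):
    # biased estimate of ||mu_X - mu_Y||_H^2 expanded via the kernel trick
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
P1 = rng.normal(0, 1, (500, 2))   # two sample sets from the same distribution
P2 = rng.normal(0, 1, (500, 2))
Q  = rng.normal(3, 1, (500, 2))   # a shifted distribution
# embeddings of identical distributions are close; shifted ones are not
assert mmd2(P1, P2) < mmd2(P1, Q)
```

Since the kernel is characteristic (Gaussian), MMD → 0 only when the two distributions coincide, which is exactly the injectivity of E used in Remark 1.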
Definition 2.3 (Covariance/Cross-covariance Operator). Let X, Z be random variables defined on X × Z with joint distribution PX,Z and marginal distributions PX, PZ. Let (k, φ, H) and (l, ψ, G) be two triples consisting of (a) a bounded kernel, (b) its corresponding feature map, and (c) its induced RKHS, respectively. The (uncentered) covariance operator CZZ : H → H and cross-covariance operator CXZ : H → G are defined as

CZZ := E_{z∼PZ}[φ(z) ⊗ φ(z)],   CXZ := E_{(x,z)∼PX,Z}[ψ(x) ⊗ φ(z)]   (1)

where ⊗ is the outer product.

3 Simplifying the estimation of the forward operator

Forward operator as a dynamical system: The dynamical-systems view of generative models has been described by others (Chen et al., 2018; Grathwohl et al., 2019; Behrmann et al., 2019). These strategies model the evolution of latent variables in a residual neural network in terms of its dynamics over continuous or discrete time t, and consider the output function f as the evaluation at a predetermined boundary condition t = t1. Specifically, given an input (i.e., initial condition) z(t0), f is defined as

f(z(t0)) = z(t0) + ∫_{t0}^{t1} ∆t(z(t)) dt   (2)

where ∆t is a time-dependent neural network function and z(t) is the intermediate solution at time t. This view of generative models is not limited to specific methods or model archetypes, but is generally useful, for example, by viewing the outputs of each hidden layer as evaluations in discrete-time dynamics. After applying f to a random variable Z ∈ Z, the marginal density of the output over any subspace Λ ⊆ X can be expressed as

∫_Λ p_{f(Z)}(x) dx = ∫_{z∈f−1(Λ)} pZ(z) dz   (3)

If there exists some neural network instance ∆⋆t such that the corresponding output function f⋆ satisfies PX = P_{f⋆(Z)}, then by Def. 1.1, f⋆ is a forward operator. Let X be a set of i.i.d. samples drawn from PX.
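For intuition about (1): with a linear kernel (so the feature map is the identity), the covariance and cross-covariance operators reduce to ordinary moment matrices, and the averaged-outer-product definition coincides with the feature-matrix form (ΦΦᵀ-style) used later for the empirical estimators. A small sketch (the linear-kernel choice and the synthetic data are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
Z = rng.normal(size=(n, d))           # pretend phi(z) = z (linear kernel)
X = Z @ rng.normal(size=(d, d))       # X correlated with Z

# Definition (1) as an empirical average of outer products ...
C_zz = sum(np.outer(z, z) for z in Z) / n
C_xz = sum(np.outer(x, z) for x, z in zip(X, Z)) / n

# ... agrees with the compact feature-matrix form used for estimation later
assert np.allclose(C_zz, Z.T @ Z / n)
assert np.allclose(C_xz, X.T @ Z / n)
```

With implicit (possibly infinite-dimensional) feature maps, the same identities hold with Φ and Ψ interpreted as operators, which is what makes the Gram-matrix estimators in §3 tractable.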
In typical generative learning, either maximizing the likelihood (1/|X|) Σ_{x∈X} p_{f(Z)}(x) or minimizing the distributional divergence d(P_{f(Z)}, PX) requires evaluating and differentiating through f or f−1 many times.

Towards a one-step estimation of the forward operator: Since f and f−1 in (3) will be highly nonlinear in practice, evaluating them and computing their gradients can be expensive. Nevertheless, the dynamical-systems literature suggests a linear extension of f⋆, namely the Perron-Frobenius operator or transfer operator, that conveniently transfers pZ to pX.

Definition 3.1 (Perron-Frobenius operator (Mayer, 1980)). Given a dynamical system f : X → X, the Perron-Frobenius (PF) operator P : L1(X) → L1(X) is an infinite-dimensional linear operator defined by ∫_Λ (P pZ)(x) dx = ∫_{z∈f−1(Λ)} pZ(z) dz for all Λ ⊆ X.

Although in Def. 3.1 the PF operator P is defined for self-maps, it is trivial to extend P to mappings f : Z → X by restricting the RHS integral ∫_{z∈f−1(Λ)} pZ(z) dz to Z. It can be seen that, for the forward operator f⋆, the corresponding PF operator P satisfies

pX = P pZ.   (4)

If P can be efficiently computed, transferring the tractable density pZ to the target density pX can be accomplished simply by applying P. However, since P is an infinite-dimensional operator on L1(X), it is impractical to instantiate it explicitly and exactly. Nonetheless, there exist several methods for estimating the Perron-Frobenius operator, including Ulam's method (Ulam, 1960) and Extended Dynamic Mode Decomposition (EDMD) (Williams et al., 2015a). Both strategies project P onto a finite number of hand-crafted basis functions – this may suffice in many settings but may fall short in modeling highly complex dynamics.

Kernel-embedded form of the PF operator: A natural extension of the PF operator is to represent P with an infinite set of functions (Klus et al., 2020), e.g.
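Ulam's method, mentioned above, projects P onto indicator functions over a partition: each column of the resulting stochastic matrix is the sampled pushforward of one cell. A toy sketch for the doubling map on [0,1), whose invariant density is uniform (the bin count, sample budget, and map are our illustrative choices):

```python
import numpy as np

def ulam_matrix(f, n_bins=50, samples_per_bin=200, rng=None):
    # Ulam's method: project the PF operator onto bin-indicator functions.
    # Column i approximates the pushforward of the uniform density on bin i.
    if rng is None:
        rng = np.random.default_rng(0)
    M = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        pts = rng.uniform(i / n_bins, (i + 1) / n_bins, samples_per_bin)
        dest = np.minimum((f(pts) * n_bins).astype(int), n_bins - 1)
        for j in dest:
            M[j, i] += 1.0 / samples_per_bin
    return M                      # column-stochastic finite approximation of P

doubling = lambda x: (2 * x) % 1.0
P = ulam_matrix(doubling)
u = np.full(50, 1 / 50)           # binned uniform density
# the uniform density is (approximately) invariant: P u ≈ u, i.e. eq. (4)
assert np.allclose(P @ u, u, atol=1e-2)
```

This illustrates the limitation noted in the text: the fixed indicator basis must be chosen a priori, which is what the kernel-embedded formulation below avoids.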
, projecting it onto the bases of an RKHS via the kernel trick. There, for a characteristic kernel l, the kernel mean embedding uniquely identifies an element µX = El pX ∈ G for any pX ∈ L1(X). Thus, to approximate P, we may alternatively solve for the dynamics from pZ to pX in their embedded form. Using the notation of Tab. 1, we have the following linear operator that defines the dynamics between two embedded densities.

Definition 3.2 (Kernel-embedded Perron-Frobenius operator (Klus et al., 2020)). Given pZ ∈ L1(Z) and pX ∈ L1(X), denote k as the input kernel and l as the output kernel, and let µZ = Ek pZ and µX = El pX be their corresponding kernel mean embeddings. The kernel-embedded Perron-Frobenius (kPF) operator, denoted by PE : H → G, is defined as

PE = CXZ C−1ZZ   (5)

Proposition 3.1 (Song et al. (2013)). With the above definition, PE satisfies

µX = PE µZ   (6)

under the conditions: (i) CZZ is injective, (ii) µZ ∈ range(CZZ), (iii) E[g(X) | Z = ·] ∈ H for any g ∈ G.

The last two assumptions can sometimes be difficult to satisfy for certain RKHS (see Theorem 2 of Fukumizu et al. (2013)). In such cases, a relaxed solution can be constructed by replacing C−1ZZ with a regularized inverse (CZZ + λI)−1 or a Moore-Penrose pseudoinverse C†ZZ. The following proposition shows commutativity between the (kernel-embedded) PF operator and the mean embedding operator, establishing its equivalence to P when l is characteristic.

Proposition 3.2 ((Klus et al., 2020)). With the above notations, El ◦ P = PE ◦ Ek.

Transferring embedded densities with the kPF operator: The kPF operator is a powerful tool that allows transferring embedded densities in RKHS. The main steps are: (1) apply the mean embedding operator Ek to pZ, and denote the result by µZ; (2) transfer µZ using the kPF operator PE to obtain the mean embedding of pX, given by µX.
Of course, in practice with finite data, {xi}_{i∈[n]} ∼ PX and {zi}_{i∈[n]} ∼ PZ, the operator PE must be estimated empirically (see Klus et al. (2020) for an error analysis):

P̂E = ĈXZ (ĈZZ)−1 ≈ Ψ (ΦTΦ + λnI)−1 ΦT ≈ Ψ (ΦTΦ)† ΦT

where Φ = [k(z1, ·), · · · , k(zn, ·)] and Ψ = [l(x1, ·), · · · , l(xn, ·)] are simply the corresponding feature matrices for the samples of PZ and PX, and λ is a small penalty term.

Learning a kPF for unconditional generative modeling: Some generative modeling methods such as VAEs and flow-based formulations explicitly model the latent variable Z as conditionally dependent on the data variable X. This allows deriving/optimizing the likelihood p_{f(Z)}(X). This is desirable but may not be essential in all applications. To learn a kPF, however, X and Z can be independent RVs. While it may not be immediately obvious why we could assume this independence, we can observe the following property for the empirical kPF operator, assuming that the empirical covariance operator ĈZZ is non-singular:

P̂E µ̂Z = ĈXZ Ĉ−1ZZ µ̂Z = ΨΦT (ΦΦT)−1 Φ 1n = Ψ (ΦTΦ)−1 ΦTΦ 1n = Ψ 1n = µ̂X   (7)

where we used ĈXZ = ΨΦT and ĈZZ = ΦΦT. Suppose that {xi}_{i∈[n]} and {zj}_{j∈[n]} are independently sampled from the marginals PX and PZ. It is easy to verify that (7) holds for any pairing {(xi, zj)}_{(i,j)∈[n]×[n]}. However, instantiating the RVs in this way rules out the use of the kPF for certain downstream tasks such as controlled generation or mode detection, since Z does not contain information regarding X. Nevertheless, if sampling is our only goal, then this instantiation of the kPF will suffice.

Mapping Z to G: Now, since PE is a deterministic linear operator, we can easily set up a scheme to map samples of Z to elements of G such that the expectation of the mapped samples equals µX. Define φ(z) = k(z, ·) and ψ(x) = l(x, ·) as the feature maps of kernels k and l.
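Identity (7) can be checked numerically. Below, an explicit finite-dimensional polynomial feature map stands in for φ and ψ (our simplification; the paper works with implicit kernel features), and the samples of X and Z are drawn independently, exactly as in the independent-pairing argument. Because the feature map contains a constant component, the empirical kPF operator maps µ̂Z to µ̂X exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
Z = rng.normal(size=(n, 1))           # prior samples, independent of X
X = rng.uniform(-1, 1, size=(n, 1))   # data samples

def feat(v):
    # finite-dimensional stand-in for the feature maps phi, psi
    return np.hstack([np.ones_like(v), v, v**2, v**3])

Phi, Psi = feat(Z), feat(X)           # rows are phi(z_i), psi(x_i)
C_zz = Phi.T @ Phi / n                # empirical covariance operator
C_xz = Psi.T @ Phi / n                # empirical cross-covariance operator
P_E = C_xz @ np.linalg.inv(C_zz)      # empirical kPF operator, eq. (5)

mu_Z = Phi.mean(axis=0)               # empirical mean embedding of P_Z
mu_X = Psi.mean(axis=0)               # empirical mean embedding of P_X
assert np.allclose(P_E @ mu_Z, mu_X)  # identity (7): P_E mu_Z = mu_X
```

The same computation with Gram matrices (the Ψ(ΦᵀΦ + λnI)⁻¹Φᵀ form above) only differs in using the kernel trick instead of explicit features.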
We can rewrite µX as

µX = PE Ek pZ = PE EZ[φ(Z)] = EZ[PE φ(Z)] = EZ[ψ(ψ−1(PE φ(Z)))]   (8)

Here ψ−1 is the inverse, or preimage map, of ψ. Such an inverse, in general, may not exist (Kwok & Tsang, 2004; Honeine & Richard, 2011). We will discuss a procedure to approximate ψ−1 in §4.1. In what follows, we temporarily assume that an exact preimage map exists and is tractable to compute. Define Ψ∗ = P̂E φ(Z) as the transferred sample in G obtained with the empirical embedded PF operator P̂E. Then the next result shows that, asymptotically, the transferred samples converge in distribution to the target distribution.

Proposition 3.3. As n → ∞, ψ−1(Ψ∗) →D PX. That is, the preimage of the transferred sample approximately conforms to PX under the previous assumptions when n is large.

Proof. Since P̂E converges asymptotically to PE, the claim follows immediately from (8).
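To make the preimage map ψ−1 concrete, a classical construction for the Gaussian kernel is a fixed-point iteration (in the spirit of the preimage literature cited above, e.g., Kwok & Tsang, 2004). The sketch below is a standard scheme and not the paper's own procedure (which is deferred to §4.1); all names and parameters are illustrative:

```python
import numpy as np

def rbf_preimage(Xtrain, c, gamma=1.0, iters=50):
    # Fixed-point iteration for a Gaussian-kernel preimage of sum_i c_i psi(x_i):
    # x <- sum_i c_i k(x, x_i) x_i / sum_i c_i k(x, x_i).
    # May converge only to a local solution; a true inverse need not exist.
    x = (c @ Xtrain) / c.sum()            # barycentric initialization
    for _ in range(iters):
        w = c * np.exp(-gamma * ((Xtrain - x) ** 2).sum(axis=1))
        x = (w @ Xtrain) / w.sum()
    return x

rng = np.random.default_rng(0)
Xtrain = rng.normal(size=(20, 2))
c = np.zeros(20); c[7] = 1.0              # the embedding of one training point
# the preimage of psi(x_7) is x_7 itself (an exact fixed point)
assert np.allclose(rbf_preimage(Xtrain, c), Xtrain[7])
```

In the generative pipeline, the coefficient vector c would come from applying P̂E to φ(z) for a prior sample z, so each generated point is a preimage of a transferred RKHS element.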
The paper presents a novel (or an unusual type of) generative modeling approach that is kernel-based and non-parametric. The basic idea is to use an operator that maps from the RKHS of Z to the RKHS of X, so that data generation can be done by mapping the prior distribution $p_{prior}(z)$ into the RKHS of Z, applying the operator, and projecting the result in the RKHS of X back down to the data space. The operator is constructed as the RKHS analogue of the desired conditional distribution p(x|z) (which makes $\int p(x|z) p_{prior}(z) dz = p_{data}(x)$), so that it transforms the RKHS embedding of $p_{prior}(z)$ into that of $p_{data}(x)$. However, to guarantee a desired $p(x|z)$, the operator is chosen by assuming $x$ and $z$ are independent. The operator can be estimated from samples of the joint distribution, which under the independence assumption amounts to samples of $p_{data}(x)$ (i.e., the training dataset) and samples from $p_{prior}(z)$; the maps between the sample space and the RKHS can also be estimated. Experiments show the utility of the method for data generation for densely supported distributions.