| paper_name | text | summary | paper_id |
|---|---|---|---|
Exploring the Limits of Large Scale Pre-training | 1 INTRODUCTION. Recent impressive progress on transfer and few-shot learning (Brown et al., 2020; Goyal et al., 2021; Kolesnikov et al., 2019; Pham et al., 2020; Dosovitskiy et al., 2020; Dumoulin et al., 2021; Radford et al., 2021) suggests an emerging view that scaling up models and training them on a huge corpus of data is the main avenue towards better performance on downstream tasks with less or no data. These developments implicitly encourage two consistent views: 1) scaling up the model and data size improves performance significantly; 2) the performance improvement transfers to downstream tasks in a desirable way. In a more focused empirical study in support of the first view, Kaplan et al. (2020) show that scaling up the model size, data, and compute appropriately in the language modeling task results in a non-saturating return in performance. Bello et al. (2021) and Tan & Le (2019) show that favorable scaling can be achieved in image recognition tasks as well. The second view has also been a subject of recent focused studies. Hernandez et al. (2021) show that favorable scaling laws similar to those of (Kaplan et al., 2020; Tay et al., 2021b) hold in transfer and few-shot settings in NLP tasks. In perhaps the closest prior work to ours, Kornblith et al. (2019) observe a linear relationship between the performances on ImageNet (Russakovsky et al., 2015) and downstream image recognition tasks. Adopting the above views has major implications moving forward. These views suggest that spending compute and research effort on improving the performance on one massive corpus would pay off, because that would enable us to solve many downstream tasks almost for free. It also means that while improving our upstream performance, we do not need to worry about downstream tasks, as their improvement is predictable based on a linear trend.
While the aforementioned studies provide a compelling story, they suffer from a major shortcoming: due to compute limitations, performance for different choices of hyper-parameter values is not reported. Scaling plots seem more favorable if the hyper-parameter chosen for each scale is fixed or determined by a simple scaling function. (Footnote 1: The linear relationship in (Kornblith et al., 2019) is achieved after proper logit scaling of accuracy values. We show that with logit or linear scaling, the relationship is not linear.) Moreover, often the goal is improving state-of-the-art results, hence naturally most of the effort in hyper-parameter selection is focused on higher scales, which significantly biases the scaling plots. However, when studying scaling, we are concerned with the best downstream performance of models given all possible values of the hyper-parameters. Additionally, most scaling studies report the behavior within a limited range, and simply extrapolating that scaling without further understanding of the dynamics of scaling can be detrimental, as there is no reason, a priori, for the scaling to hold outside of the studied range. In this paper, we systematically investigate the transferability of improvements on a large-scale upstream task to a wide range of downstream tasks in both few-shot and transfer learning scenarios. To address the above shortcomings, part of our work is a meta-study of more than 4800 Vision Transformer (Dosovitskiy et al., 2020), MLP-Mixer (Tolstikhin et al., 2021) and ResNet (He et al., 2016) models. The models are pre-trained on either JFT (Sun et al., 2017) with 303M images and 18K classes or ImageNet21K (Deng et al., 2009) with 14M images and 21K classes, and evaluated on a variety of downstream datasets in few-shot and transfer learning settings. Our 25 downstream tasks cover a wide range of standard datasets that are included in benchmarks like VTAB (Zhai et al.
, 2019), MetaDataset (Triantafillou et al., 2019), Wilds (Koh et al., 2020) and medical imaging. We provide strong empirical evidence that scaling (and hyper-parameter tuning) does not lead to a one-model-fits-all solution. There are still many unresolved challenges, and at the center is the problem of data diversity for downstream tasks. We provide the first large-scale and systematic investigation of this phenomenon and discuss the reasons behind it. In Figure 1, we present the downstream (DS) vs upstream (US) performance plot for a variety of models and downstream tasks. We observe that, as we increase US accuracy, in most cases DS accuracy saturates to a value considerably below 100%. Also, saturating behavior is not an exception but rather the common trend, and it is robust to the choice of number of shots and US tasks (see Figure F.1). We establish that this gap is not due to noise or any other factor that solely depends on the DS task; rather, it depends on the relationship between the US and DS tasks. Moreover, given a set of models with similar US accuracy, the best model for different DS tasks varies. Contributions. Our main contributions in this paper are as follows: • We establish through extensive study that as we improve the performance of the upstream (US) task, either by scaling up or by hyper-parameter and architectural choices, the performance of downstream (DS) tasks shows a saturating behavior. In our experiments, several DS tasks reach full saturation within the studied range (Section 2). • We demonstrate that given a set of models with similar US accuracy, the best model for a DS task TDS1 might have much worse performance on another DS task TDS2 compared to the best model for TDS2 (Figure 5). • Given the scale of experiments, it is crucial for the proposed model to not be impacted by the density of the points in the DS-vs-US plot.
We argue and demonstrate that fitting a power law to the convex hull of experiments circumvents the effect of sampling biases on the prediction of downstream accuracy, and we show the robustness of our model to sample-size variations (Section 2.2). • Having observed the nonlinear relationship between upstream and downstream accuracy, in order to predict downstream performance for a given upstream accuracy, we model their relationship with a power-law curve and establish that it captures the behavior well even with a small number of samples (Section 2.2). • We study how scaling up model size, data size, and compute affects DS performance and show that these parameters impact DS performance mainly through the US performance (Section 3). • We investigate the reasons behind the DS performance saturation and show that this behavior can be captured by the usefulness of the feature representation in higher layers of the pre-trained model (Section 4). • We further explore the discrepancy between US and DS performances and show that for some choices of hyper-parameters, they might be at odds with each other. In particular, we showcase how the optimal hyper-parameters for the head used in pre-training (upstream task) are different for US and DS. We then uncover the reason behind this discrepancy (Appendices C, D). • Finally, we show our observations are robust to several choices such as the size of US data, common scalings of accuracy, number of shots, transfer vs few-shot setting, and architecture (Appendix E). Related Work. The closest work to ours is that of Kornblith et al. (2019). They investigate the effect of ImageNet (Russakovsky et al., 2015) pre-training on image classification performance across 12 datasets for few-shot, transfer and random-initialization scenarios. They show that performance on ImageNet translates linearly (in logit scaling) to performance on DS tasks. However, they do not consider extrapolation of the values.
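The idea of fitting a power law to the convex hull of DS-vs-US points can be sketched in a few lines. The excerpt does not reproduce the paper's Equation 1, so the saturating form `ds_vs_us` below (DS accuracy approaching a ceiling of 1 − e_ir as US accuracy approaches 1), along with all function names and parameter values, is a hypothetical stand-in used purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def upper_convex_hull(xs, ys):
    """Upper convex hull of 2-D points via a left-to-right monotone chain."""
    pts = sorted(zip(xs, ys))
    hull = []
    for x, y in pts:
        # Pop while the last hull point makes a non-clockwise turn with the
        # new point, so only the upper (best-DS-per-US) boundary survives.
        while len(hull) >= 2 and (
            (hull[-1][0] - hull[-2][0]) * (y - hull[-2][1])
            - (hull[-1][1] - hull[-2][1]) * (x - hull[-2][0])
        ) >= 0:
            hull.pop()
        hull.append((x, y))
    return np.array(hull)

def ds_vs_us(us_acc, e_ir, a, b):
    """Hypothetical saturating power law: DS accuracy approaches 1 - e_ir
    (a saturation level below 100%) as US accuracy approaches 1."""
    return 1.0 - e_ir - a * (1.0 - us_acc) ** b

# Synthetic DS-vs-US scatter: frontier models lie on the curve, the rest
# fall below it (mimicking runs with sub-optimal hyper-parameters).
rng = np.random.default_rng(0)
us = np.linspace(0.50, 0.95, 10)
ds = ds_vs_us(us, 0.10, 0.50, 1.5)
us_noise = rng.uniform(0.55, 0.90, 40)
ds_noise = ds_vs_us(us_noise, 0.10, 0.50, 1.5) - rng.uniform(0.05, 0.2, 40)

# Fit only the hull, so the dense sub-optimal points cannot bias the fit.
hull = upper_convex_hull(np.r_[us, us_noise], np.r_[ds, ds_noise])
popt, _ = curve_fit(ds_vs_us, hull[:, 0], hull[:, 1],
                    p0=[0.05, 0.3, 1.0], bounds=(0.0, [1.0, 5.0, 5.0]))
```

Because the 40 noisy points sit strictly below the frontier, they are excluded from the hull, and the fitted curve recovers the frontier regardless of where the sampling density is concentrated, which is exactly the robustness to sampling bias the bullet describes.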
While both works investigate the effect of pre-training via various experiments, there are two main differences in our responses to the question “does better upstream performance transfer to better downstream performance?”. First, we establish that a clear “saturation” phenomenon exists when looking at DS-vs-US performance. In Figure 1, there are various cases when comparing two models A and B where model A has a much higher US accuracy but lower DS accuracy, and these are not exceptions to a rule but rather the majority of cases. Essentially, for each DS-vs-US plot, any two points where one is to the right of but lower than the other are an instance of such a case. Second, we also establish that for each DS task the best-performing models scale with a power law as in Equation 1, but for each architecture the best-performing models differ across DS tasks, and this depends on training hyper-parameters (see Figure 5). In other words, when considering two DS tasks TDS1 and TDS2, we have numerous cases where model A has better performance on US and TDS1, but one cannot conclude better performance on TDS2. We suspect the difference in conclusions is due to the earlier work being limited in the range of accuracy values it considers. In addition to this difference in conclusions, we investigate the reasons behind this saturation behavior. Moreover (in Appendix C), we consider cases where US and DS performance are at odds with each other, specifically scenarios where worse performance on US leads to performance improvement on DS. Inspired by (Zhai et al., 2021), who noted that increasing head weight decay during pre-training leads to worse performance on US while improving DS performance, we investigate head hyper-parameters (both weight decay and learning rate) further and show that the effect can be explained by noting that these manipulations push the information stored in the head down to lower layers.
Additional related work is covered in Appendix A. 1.1 EXPERIMENTAL SETUP. The analysis in this paper is based on a study of an extensive set of large-scale experiments on image recognition tasks, as well as a set of controlled experiments we conducted to ablate our setup and deepen our understanding of the studied phenomena. We investigate more than 4800 experiments with Vision Transformers, MLP-Mixers and ResNets in different configurations, pre-trained on a large amount of data in a supervised fashion and evaluated on several downstream image recognition tasks through few-shot learning and fine-tuning. For more details, see Appendix G. We emphasize that the large set of experiments we investigate were not trained for the purpose of this paper; rather, we have aggregated models trained by different researchers for different purposes in order to perform a meta-study on them. This, in fact, positions this meta-study in a unique spot. First, it may not be feasible to run such a number of large-scale trials for the purpose of studying a particular phenomenon. Second, no implicit or explicit assumption was made in these experiments with respect to the type of analysis we conducted on them afterwards, hence minimizing the systematic biases of the analysis process in the findings. We note that there might potentially be other biases. For example, researchers usually focus hyper-parameter tuning on improving SOTA on a specific downstream task (usually ImageNet), which may lead to not performing a grid search over the high-dimensional space of all possible hyper-parameters, possibly affecting the plots. In Section 3, we investigate this and show that in this case the observed trend is similar to that obtained from a grid search. In the main body of the paper, we report results over eight downstream tasks and provide results for more than 20 downstream tasks in Appendix F.
Moreover, the plots corresponding to pre-training on JFT and ImageNet21K are in the main part and Appendix F, respectively. | The paper presents an empirical study (more precisely, a meta-study) of large-scale supervised pre-training for image recognition tasks. By analysing a large number of experiments with varying model sizes, dataset sizes and training durations, the paper reaches the conclusion that simply scaling them up for a generic pre-training task will not lead to proportional gains for downstream tasks that build on the pre-trained model. On the contrary, the paper suggests that with increasing pre-training effort the downstream performance will reach a saturation level below the Bayes-optimal accuracy. Moreover, that saturation level appears to vary across different downstream tasks (i.e., image characteristics and class definitions), which is seen as a sign that a generic one-fits-all feature extractor cannot be found by just scaling up. | SP:95f32813140f9e12b9d0d6f3ecad90c1ad00b0a0 |
Understanding Graph Learning with Local Intrinsic Dimensionality | 1 INTRODUCTION. Graphs are widely used to model real-life problems owing to their flexible structure and ability to carry different types of information. Graph learning has thus become essential for a wide range of applications in biomedicine (Zitnik et al., 2018), physics (Battaglia et al., 2016) and traffic networks (Yu et al., 2018). Whilst the rise of Graph Neural Networks (GNNs) has enabled important breakthroughs in graph learning (Senior et al., 2020; Ying et al., 2018), there is still a lack of understanding of the intrinsic properties of graphs and their impact on learning. In this paper, we narrow this gap by characterizing and analyzing the intrinsic dimensionality (ID) of graphs and graph representations based on an expansion-based intrinsic dimensionality measure: Local Intrinsic Dimensionality (LID). Such an analysis helps the community better understand the intrinsic difficulty of a graph learning task and can motivate advanced GNNs and learning methods. The intrinsic dimensionality of a dataset measures the dimension of its underlying manifold, or the minimum number of parameters needed to represent the intrinsic structure of the data (Bennett, 1969; Nakada & Imaizumi, 2020). According to the manifold hypothesis (Fefferman et al., 2016) in machine learning, the intrinsic dimensionality is often much lower than the representation dimensionality (the number of features) for real-world high-dimensional data (Tenenbaum et al., 2000; Fodor, 2002; Cayton, 2005; Lin et al., 2006). LID is an expansion-based ID measure associated with the local neighborhood of data points. In other words, the LID of a point measures the intrinsic dimensionality of the local submanifold surrounding the point, and the average LID over all points in a set depicts the dimensionality of the entire manifold.
The LID metric has been applied to study the intrinsic complexity of many forms of data, such as images, texts and tabular data (Pope et al., 2020; Aghajanyan et al., 2020; Ansuini et al., 2019), as well as the learning and generalization behaviors of deep neural networks. For instance, it has been shown that the LID characteristic of image datasets is closely related to learning difficulty and generalization performance (Pope et al., 2020). For graph learning, we are interested in the intrinsic dimensionality of node features, graph structure and representations learned by GNNs, and in what these measures indicate about final performance. To this end, we apply LID to a diverse set of graph datasets and estimate the Feature LID (FLID), Structure LID (SLID) and Representation LID (RLID) for each node. The three graph LID measures are then averaged over all nodes in the graph to reflect the overall intrinsic dimensionality. FLID has the same interpretation as the LID for non-graph data. The SLID of a graph can be interpreted as the expansion rate of the graph as the local neighborhood size of its nodes grows. Both FLID and SLID characterize the properties of the raw graph. RLID, on the other hand, characterizes the properties of the integrated representation of both the node features and the graph structure. With the three LID measures, we provide the following key insights: • FLID and SLID are good indicators of graph complexity relative to node features and graph structures, respectively. This is verified on synthetic graphs generated using singular value decomposition (SVD) and random geometric graphs (RGG). • With FLID, we study 12 popular graph datasets from 5 categories, including co-author graphs, co-purchase graphs, webpage graphs, citation graphs and Wikipedia graphs, and show that graphs with low FLIDs are generally easier to learn, with different GNNs likely to achieve higher accuracies on downstream node classification tasks.
• With RLID and 4 representative GNN models, we show that graph learning is a process that maps the node features and graph structure together onto a simpler manifold of much lower RLID. We also showcase that RLID can be leveraged as a regularizer to improve existing GNN models. • With SLID, we reveal that the underlying graph converges to a complete graph of SLID = 0.5 as the layers of message-passing-based GNNs go deep, causing the over-smoothing problem. 2 RELATED WORK. Intrinsic dimensionality analysis plays an important role in dimensionality reduction (DeMers & Cottrell, 1993), manifold learning (Law & Jain, 2006), classification (Gong et al., 2019), outlier detection (Houle et al., 2018), generative modeling (Li et al., 2019), adversarial example detection (Ma et al., 2018b), and deep learning understanding (Ma et al., 2018b; Ansuini et al., 2019; Pope et al., 2020). The intrinsic dimensionality of a data representation can be estimated either globally, on the entire dataset, via Principal Component Analysis (PCA) (Wold et al., 1987), graph-based methods (Costa & Hero, 2003) and fractal models (Camastra & Staiano, 2016), or locally, around individual data points, via Local Intrinsic Dimensionality (LID) and its variants (Amsaleg et al., 2015; Houle, 2017; Amsaleg et al., 2019). Different from the global ID measures, LID provides a local view of the intrinsic geometry of the data (see formal definitions in Section 3). LID has been related to the robustness of DNNs to adversarial attacks (Amsaleg et al., 2017; Ma et al., 2018a) and noisy labels (Ma et al., 2018b). It has been shown that the subspaces around adversarial examples are of much higher LID than those around normal examples in the deep representation space of DNNs (Ma et al., 2018a).
When there are noisy labels in the training data, DNN learning exhibits two distinctive phases, from dimensionality compression to dimensionality expansion, and the expansion phase is when the model starts to overfit the noisy labels (Ma et al., 2018b). The LID of the representations learned by DNNs has also been found to be a good indicator of generalization performance (Ansuini et al., 2019). Both LID and global ID have been applied to characterize the intrinsic dimensionality of image datasets and representations (Gong et al., 2019; Pope et al., 2020). The intrinsic dimensionality of the objective space (defined by the loss function and model parameters) has also been studied in both natural language processing (Aghajanyan et al., 2020) and computer vision (Li et al., 2018a) to help understand the parameterization redundancy in DNNs. These understandings have motivated either model compression techniques (Li et al., 2018a) or new theories (with the intrinsic parameters) for DNNs (Aghajanyan et al., 2020). The current understanding of graphs is mostly focused on the expressive power of GNNs. For example, GNNs have been shown to have discriminative power equivalent to the Weisfeiler-Lehman graph isomorphism test (Weisfeiler & Leman, 1968). Xu et al. (2019) showed that GNNs are at most as powerful as the 1-WL test in distinguishing graph structures. Geerts et al. (2021) further proved that degree-aware Message Passing Neural Networks (MPNNs) may be one step ahead of the WL algorithm because of the degree information. Balcilar et al. (2021a) proposed an MPNN model which is experimentally as powerful as a 3-WL test. The learning of GNNs has also been investigated from a spectral perspective. Hoang & Maehara (2019) argued that GNNs only work as a low-pass filter, which was then verified by Balcilar et al. (2021b) by reformulating most existing GNNs into one common framework.
Oono & Suzuki (2019) investigated the asymptotic behaviors of GNNs as the layer size tends to infinity and related the expressive power of GNNs to topological information in the spectral domain. In this work, we apply LID to explore the intrinsic complexity of graphs and graph representations, and provide a set of new and complementary insights into graph learning. 3 LOCAL INTRINSIC DIMENSIONALITY FOR GRAPHS. 3.1 LOCAL INTRINSIC DIMENSIONALITY. Given a dataset $X \subset \mathbb{R}^n$, $X$ is said to have an intrinsic dimension of $m$ if its elements lie entirely, without information loss, within an $m$-dimensional manifold of $\mathbb{R}^n$, where $m < n$ (Fukunaga, 1982). Before introducing LID, we first explain the intuition behind it in terms of expansion-based modeling of dimensionality. Among the family of dimensionality models, the expansion dimension (ED) (Karger & Ruhl, 2002) quantifies the ID in the vicinity of a point of interest in the data domain. More precisely, it assesses the rate of growth in the number of data points encountered as the distance from the reference point increases. As an example, in the Euclidean space $\mathbb{R}^m$, one can measure the volumes $V_i$ of $m$-balls of radii $r_i$ with $i \in \{1, 2\}$; taking the logarithm of their ratio reveals the dimension $m$: $\frac{V_2}{V_1} = \left(\frac{r_2}{r_1}\right)^m \Rightarrow m = \frac{\ln(V_2/V_1)}{\ln(r_2/r_1)}$. Transferring the concept of expansion dimension to the statistical setting with neighborhood distance distributions gives the formal definition of LID (Houle, 2017). Definition 1 (Local Intrinsic Dimensionality) Given a data sample $x \in X$, let $R > 0$ be a random variable denoting the distance from $x$ to other data samples.
If the cumulative distribution function $F(r)$ of $R$ is positive and continuously differentiable at distance $r > 0$, the LID of $x$ at distance $r$ is given by: $$\mathrm{LID}_F(r) \triangleq \lim_{\epsilon \to 0} \frac{F((1+\epsilon)\,r) - F(r)}{\epsilon \cdot F(r)} = \frac{r \cdot F'(r)}{F(r)}, \qquad (1)$$ The local intrinsic dimension at $x$ is then defined as the limit as the radius $r$ tends to zero, i.e., $\mathrm{LID}_F \triangleq \lim_{r \to 0} \mathrm{LID}_F(r)$. Here, the CDF $F(r)$ is analogous to the volume in the Euclidean example. Since $F(r)$ is unknown, estimators are needed for LID. A number of LID estimators already exist in the literature (Levina & Bickel, 2005; Amsaleg et al., 2015; Liao et al., 2014). In the following, we introduce one commonly used LID estimator and show how it can be applied to graphs. 3.2 FEATURE AND REPRESENTATION LID. For graphs, we are interested in the LIDs of the node features, the structure of the graph itself, and the node representations learned by GNNs. Node features and graph structure are two fundamental sources of information about the graph, while the learned representation of a node is an integration of its features and the structural information. Existing LID estimators developed for non-graph data can be directly applied to node features and node representations. Here, we first introduce the LID estimation for Feature LID (FLID) and Representation LID (RLID). Amongst the existing LID estimators, the Maximum Likelihood Estimator (MLE) (Levina & Bickel, 2005; Amsaleg et al., 2015) is one of the most cited. It treats the neighbors of each point $x \in X$ as events in a Poisson process and the distance $r_{(j)}(x)$ between $x$ and its $j$-th nearest neighbor as the event's arrival time. Since this process depends on the dimensionality $d$, MLE estimates the intrinsic dimension by maximizing the log-likelihood of the observed process. The node features or representations are represented as vectors in the Euclidean space.
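As a quick consistency check on Definition 1 (not in the original text): for data distributed uniformly in an $m$-dimensional ball of radius $w$ around $x$, the neighborhood CDF grows as $F(r) = (r/w)^m$ for $r \le w$, and Equation 1 recovers exactly $m$:

```latex
F(r) = \left(\frac{r}{w}\right)^m
\quad\Longrightarrow\quad
\mathrm{LID}_F(r)
= \frac{r \cdot F'(r)}{F(r)}
= \frac{r \cdot m\,r^{m-1}/w^m}{r^m/w^m}
= m .
```

This also explains why $F(r)$ plays the role of the volume in the Euclidean example above.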
Thus, FLID and RLID can be directly estimated by MLE. Let $x$ denote the feature/representation vector of a particular node; the FLID/RLID of $x$ can be estimated as follows: $$\mathrm{FLID/RLID}(x, k) = \left( \frac{1}{k} \sum_{j=1}^{k} \log \frac{r_{(k+1)}(x)}{r_{(j)}(x)} \right)^{-1}, \qquad (2)$$ where $k$ is the neighborhood size (i.e., $k$-nearest) and $r_{(i)}(x)$ is the Euclidean distance between $x$ and its $i$-th nearest neighbor. Averaging $\mathrm{FLID}(x, k)$ across all nodes $x$ in $\{x_i\}_{i=1}^{N}$ yields the FLID of the entire graph, i.e., $\mathrm{FLID}_G(k) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{FLID}(x_i, k)$. Similarly, we can obtain $\mathrm{RLID}_G(k) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{RLID}(x_i, k)$. | In this work, the authors investigate the Local Intrinsic Dimensionality (LID), especially the Feature LID (FLID), Structure LID (SLID) and Representation LID (RLID), of a graph. Through experimental analysis, the authors demonstrate that FLID and SLID are well correlated with graph complexity, and that real-world graphs have a much lower intrinsic dimensionality. In addition, the authors interpret the over-smoothing problem associated with GNN models from the perspective of the SLID's convergence. | SP:ab4f7885ce56867b46ac82f3ded3daa83f556c62 |
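The MLE estimator of Equation 2, together with the graph-level average, can be sketched directly in NumPy. The function name `mle_lid` and the brute-force pairwise-distance computation are our own choices for illustration (a k-d tree or approximate nearest-neighbor index would be used at scale); the formula itself follows Equation 2:

```python
import numpy as np

def mle_lid(X, k=20):
    """Per-point MLE of LID (Levina & Bickel, 2005; Amsaleg et al., 2015),
    following Equation 2: LID(x, k) = (1/k * sum_j log(r_(k+1)/r_(j)))^-1,
    where r_(j) is the distance from x to its j-th nearest neighbor."""
    # Brute-force pairwise Euclidean distances (fine for small N).
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    lids = np.empty(len(X))
    for i in range(len(X)):
        r = np.sort(d[i])[1 : k + 2]   # drop the zero self-distance, keep k+1
        lids[i] = 1.0 / np.mean(np.log(r[k] / r[:k]))
    return lids

# Sanity check: points drawn uniformly from a 4-d hypercube should have an
# average LID close to the true intrinsic dimension 4.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 4))
flid_g = mle_lid(X, k=20).mean()       # graph-level FLID = mean over "nodes"
```

Applied to node feature vectors this gives FLID, and applied to the hidden representations of a trained GNN it gives RLID; only the input matrix changes.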
Understanding Graph Learning with Local Intrinsic Dimensionality | 1 INTRODUCTION . Graphs are widely used to model real-life problems owing to their flexible structure and ability to carry different types of information . Graph learning has thus become essential for a wide range of applications in biomedicine ( Zitnik et al. , 2018 ) , physics ( Battaglia et al. , 2016 ) and traffic network ( Yu et al. , 2018 ) . Whilst the rise of Graph Neural Networks ( GNNs ) has enabled important breakthroughs in graph learning ( Senior et al. , 2020 ; Ying et al. , 2018 ) , there is still a lack of understandings of the intrinsic properties of graphs and their impact on learning . In this paper , we narrow this gap by characterizing and analyzing the intrinsic dimensionality ( ID ) of graphs and graph representations based on an expansion-based intrinsic dimensionality measure : Local Intrinsic Dimensionality ( LID ) . Such an analysis is beneficial for the community to better understand the intrinsic difficulty of a graph learning task and motivate advanced GNNs and learning methods . The intrinsic dimensionality of a dataset measures the dimension of its underlying manifold or the minimum number of parameters needed to represent the intrinsic structure of the data ( Bennett , 1969 ; Nakada & Imaizumi , 2020 ) . According to the manifold hypothesis ( Fefferman et al. , 2016 ) in machine learning , the intrinsic dimensionality is often much lower than the representation dimensionality ( the number of features ) for real-world high-dimensional data ( Tenenbaum et al. , 2000 ; Fodor , 2002 ; Cayton , 2005 ; Lin et al. , 2006 ) . LID is an expansion-based ID measure associated with the local neighborhood of data points . In other words , the LID of a point measures the intrinsic dimensionality of the local submanifold surrounding the point and the average LID over all points in a set depicts the dimensionality of the entire manifold . 
The LID metric has been applied to study the intrinsic complexity of many forms of data , such as images , texts and tabular data ( Pope et al. , 2020 ; Aghajanyan et al. , 2020 ; Ansuini et al. , 2019 ) , as well as the learning and generalization behaviors of deep neural networks . For instance , it has been shown that the LID characteristic of image datasets is closely related to the learning difficulty and generalization performance ( Pope et al. , 2020 ) . For graph learning , we are interested in the intrinsic dimensionality of node features , graph structure , representations learned by GNNs and its indication of the final performance . To this end , we apply LID on a diverse set of graph datasets and estimate the Feature LID ( FLID ) , Structure LID ( SLID ) and Representation LID ( RLID ) for each node . The three graph LID measures are then averaged over all nodes in the graph to reflect the overall intrinsic dimensionality . FLID has the same interpretation as the LID for non-graph data . The SLID of a graph can be interpreted as the expansion rate of the graph as the local neighborhood size of its nodes grows . Both FLID and SLID characterize the properties of the raw graph . RLID , on the other hand , characterizes the properties of the integrated representation of both the node feature and the graph structure . With the three LID measures , we provide the following key insights : • FLID and SLID are good indicators of graph complexity relative to node features and graph structures , respectively . This is verified on synthetic graphs generated using singular value decomposition ( SVD ) and random geometric graph ( RGG ) . • With FLID , we study 5 categories of 12 popular graph datasets including co-author graphs , co-purchase graphs , webpage graphs , citation graphs and Wikipedia graphs , and show that graphs of low FLIDs are generally easier to learn and different GNNs are likely to achieve higher accuracies in downstream node classification tasks . 
• With RLID and 4 representative GNN models , we show that graph learning is a process that maps the node features and graph structure together onto a simpler manifold that is of a much lower RLID . We also showcase that RLID can be leveraged as a regularizer to improve existing GNN models . • With SLID , we reveal that the underlying graph converges to a complete graph of SLID = 0.5 as the layers of message-passing based GNNs go deep , causing the over-smoothing problem . 2 RELATED WORK . Intrinsic dimensionality analysis plays an important role in dimensionality reduction ( DeMers & Cottrell , 1993 ) , manifold learning ( Law & Jain , 2006 ) , classification ( Gong et al. , 2019 ) , outlier detection ( Houle et al. , 2018 ) , generative modeling ( Li et al. , 2019 ) , adversarial example detection ( Ma et al. , 2018b ) , and deep learning understanding ( Ma et al. , 2018b ; Ansuini et al. , 2019 ; Pope et al. , 2020 ) . The intrinsic dimensionality of a data representation can be estimated either globally on the entire dataset via Principal Component Analysis ( PCA ) ( Wold et al. , 1987 ) , graph based methods ( Costa & Hero , 2003 ) , and fractal models ( Camastra & Staiano , 2016 ) or locally around the individual data points via Local Intrinsic Dimensionality ( LID ) and its variants ( Amsaleg et al. , 2015 ; Houle , 2017 ; Amsaleg et al. , 2019 ) . Different from the global ID measures , LID provides a local view of the intrinsic geometry of the data ( see formal definitions in Section 3 ) . LID has been related to the robustness properties of DNNs to adversarial attacks ( Amsaleg et al. , 2017 ; Ma et al. , 2018a ) and noisy labels ( Ma et al. , 2018b ) . It has been shown that the subspaces around adversarial examples are of much higher LID than of the normal examples in the deep representation space of DNNs ( Ma et al. , 2018a ) . 
And when there are noisy labels in the training data , DNN learning exhibits two distinctive phases from dimensionality compression to dimensionality expansion and the expansion phase is when the model starts to overfit the noisy labels ( Ma et al. , 2018b ) . The LID of the representations learned by DNNs has also been found to be a good indicator of the generalization performance ( Ansuini et al. , 2019 ) . Both LID and global ID have been applied to characterize the intrinsic dimensionality of image datasets and representations ( Gong et al. , 2019 ; Pope et al. , 2020 ) . The intrinsic dimensionality of the objective space ( defined by the loss function and model parameters ) has also been studied in both natural language processing ( Aghajanyan et al. , 2020 ) and computer vision ( Li et al. , 2018a ) to help understand the parameterization redundancy in DNNs . These understandings have motivated either model compression techniques ( Li et al. , 2018a ) or new theories ( with the intrinsic parameters ) for DNNs ( Aghajanyan et al. , 2020 ) . The current understandings of graphs are mostly focused on the expressive power of GNNs . For example , GNNs have been shown to have equivalent discriminative power to the Weisfeiler-Lehman graph isomorphism test ( Weisfeiler & Leman , 1968 ) . Xu et al . ( 2019 ) showed that GNNs are at most as powerful as the 1-WL test in distinguishing graph structures . Geerts et al . ( 2021 ) further proved that degree-aware Message Passing Neural Networks ( MPNNs ) may be one step ahead of the WL algorithm because of the degree information . Balcilar et al . ( 2021a ) proposed a MPNN model which is experimentally as powerful as a 3-WL test . The learning of GNNs has also been investigated from a spectral perspective . Hoang & Maehara ( 2019 ) argued that GNNs only work as a low-pass filter , which was then verified in Balcilar et al . ( 2021b ) by reformulating most of existing GNNs into one common framework . 
Oono & Suzuki (2019) investigated the asymptotic behaviors of GNNs as the number of layers tends to infinity and related the expressive power of GNNs to the topological information in the spectral domain. In this work, we apply LID to explore the intrinsic complexity of graphs and graph representations, and provide a set of new and complementary insights into graph learning. 3 LOCAL INTRINSIC DIMENSIONALITY FOR GRAPHS. 3.1 LOCAL INTRINSIC DIMENSIONALITY. Given a data set X ⊂ R^n, X is said to have an intrinsic dimension of m if its elements lie entirely, without information loss, within an m-dimensional manifold of R^n, where m < n (Fukunaga, 1982). Before introducing LID, we first explain the intuition behind it based on expansion-based modeling of dimensionality. Among the family of dimensionality models, the expansion dimension (ED) (Karger & Ruhl, 2002) quantifies the ID in the vicinity of a point of interest in the data domain. More precisely, it assesses the rate of growth in the number of data points encountered as the distance from the reference point increases. As an example, in the Euclidean space R^m, one can measure the volume V_i of an m-ball of radius r_i with i ∈ {1, 2}; taking the logarithm of the volume ratio reveals the dimension m: V_2/V_1 = (r_2/r_1)^m ⇒ m = ln(V_2/V_1) / ln(r_2/r_1). Transferring the concept of expansion dimension to the statistical setting with neighborhood distance distributions gives the formal definition of LID (Houle, 2017). Definition 1 (Local Intrinsic Dimensionality) Given a data sample x ∈ X, let R > 0 be a random variable denoting the distance from x to other data samples.
If the cumulative distribution function F(r) of R is positive and continuously differentiable at distance r > 0, the LID of x at distance r is given by: LID_F(r) ≜ lim_{ϵ→0} [F((1 + ϵ)r) − F(r)] / [ϵ · F(r)] = r · F′(r) / F(r). (1) The local intrinsic dimension at x is then defined as the limit as the radius r tends to zero, i.e., LID_F ≜ lim_{r→0} LID_F(r). Here, the CDF F(r) is analogous to the volume in the Euclidean example. Since F(r) is unknown, estimators are needed for LID. There already exist a number of LID estimators in the literature (Levina & Bickel, 2005; Amsaleg et al., 2015; Liao et al., 2014). In the following, we introduce one commonly used LID estimator and how it can be applied on graphs. 3.2 FEATURE AND REPRESENTATION LID. For graphs, we are interested in the LIDs of the node features, of the graph structure itself, and of the node representations learned by GNNs. Node features and graph structure are two fundamental sources of information in a graph, while the learned representation of a node is an integration of its features and the structural information. The existing LID estimators developed for non-graph data can be directly applied to node features and node representations. Here, we first introduce the LID estimation for Feature LID (FLID) and Representation LID (RLID). Amongst the existing LID estimators, the Maximum Likelihood Estimator (MLE) (Levina & Bickel, 2005; Amsaleg et al., 2015) is one of the most cited. It treats the neighbors of each point x ∈ X as events in a Poisson process and the distance r_(j)(x) between x and its j-th nearest neighbor as the event's arrival time. Since this process depends on the dimensionality d, MLE estimates the intrinsic dimension by maximizing the log-likelihood of the observed process. The node features or representations are represented as vectors in the Euclidean space.
Thus, FLID and RLID can be directly estimated by MLE. Let x denote the feature/representation vector of a particular node; the FLID/RLID of x can be estimated as follows: FLID/RLID(x, k) = ( (1/k) Σ_{j=1}^{k} log [ r_(k+1)(x) / r_(j)(x) ] )^{−1}, (2) where k is the neighborhood size (i.e., k-nearest) and r_(i)(x) is the Euclidean distance between x and its i-th nearest neighbor. Averaging FLID(x, k) across all nodes x in {x_i}_{i=1}^{N} leads to the FLID of the entire graph, i.e., FLID_G(k) = (1/N) Σ_{i=1}^{N} FLID(x_i, k). Similarly, we can obtain RLID_G(k) = (1/N) Σ_{i=1}^{N} RLID(x_i, k). The k-nearest neighbors are identified based on the pairwise distances between all nodes in the graph. | The paper characterizes the intrinsic dimensionality of node features, graph structures, and representations learned by GNNs via the Local Intrinsic Dimensionality (LID) measure, with the aim of helping the community understand the difficulty of an underlying graph learning task. In addition, estimators for Feature LID (FLID), Structure LID (SLID), and Representation LID (RLID) are introduced. This work showed that real-world graphs have much lower intrinsic dimensionality compared to their extrinsic dimensionality. | SP:ab4f7885ce56867b46ac82f3ded3daa83f556c62
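The MLE estimator of Equation (2) above can be sketched in a few lines (an illustrative NumPy sketch, not the authors' implementation; function names are ours):

```python
import numpy as np

def lid_mle(x, data, k=20):
    """MLE estimate of LID at point x (Eq. 2): log-ratios of the
    (k+1)-th nearest-neighbor distance against the first k."""
    d = np.linalg.norm(data - x, axis=1)
    d = np.sort(d)
    d = d[d > 0][: k + 1]          # drop x itself (zero distance)
    r_max = d[k]                   # r_(k+1)(x)
    return 1.0 / np.mean(np.log(r_max / d[:k]))

def lid_graph(features, k=20):
    """Average per-node LID over all nodes, i.e. FLID_G / RLID_G."""
    return float(np.mean([lid_mle(x, features, k) for x in features]))
```

Applied to node-feature vectors this gives FLID_G, and to GNN embeddings RLID_G; a quick sanity check is that points drawn from a low-dimensional subspace embedded in a higher-dimensional space recover roughly the subspace dimension.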
Pseudo Numerical Methods for Diffusion Models on Manifolds | 1 INTRODUCTION. Denoising Diffusion Probabilistic Models (DDPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) are a class of generative models that model the data distribution through an iterative denoising process reversing a multi-step noising process. DDPMs have been applied successfully to a variety of applications, including image generation (Ho et al., 2020; Song et al., 2020b), text generation (Hoogeboom et al., 2021; Austin et al., 2021), 3D point cloud generation (Luo & Hu, 2021), text-to-speech (Kong et al., 2021; Chen et al., 2020) and image super-resolution (Saharia et al., 2021). Unlike Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which require careful hyperparameter tuning according to different model structures and datasets, DDPMs can use similar model structures and be trained by a simple denoising objective which makes the models fit the noise in the data. To generate samples, the iterative denoising process starts from white noise and progressively denoises it into the target domain according to the noise predicted by the model at every step. However, a critical drawback of DDPMs is that they require hundreds to thousands of iterations to produce high-quality samples and need to pass through a network at least once at every step, which makes the generation of a large number of samples extremely slow and infeasible. In contrast, GANs only need one pass through a network. There have been many recent works focusing on improving the speed of the denoising process. Some works search for better variance schedules, including Nichol & Dhariwal (2021) and Watson et al. (2021). Some works focus on changing the inference equation, including Song et al. (2020a) and Song et al. (2020b). (∗Corresponding author. 1 Our implementation is available at https://github.com/luping-liu/PNDM.)
Denoising Diffusion Implicit Models (DDIMs) (Song et al., 2020a), relying on a non-Markovian process, accelerate the denoising process by taking multiple steps every iteration. Probability Flows (PFs) (Song et al., 2020b) build a connection between the denoising process and solving ordinary differential equations, and use numerical methods for differential equations to accelerate the denoising process. Additionally, we introduce more related works in Appendix A.1. However, this direct connection between DDPMs and numerical methods (e.g., the forward Euler method, linear multi-step method and Runge-Kutta method (Timothy, 2017)) has weaknesses in both speed and effect (see Section 3.1). Some numerical methods are straightforward, like the forward Euler method, but they can only trade quality for speed. Some numerical methods can accelerate the reverse process without loss of quality, like the Runge-Kutta method, but they need more forward passes through the neural network at every step. Furthermore, we also notice that numerical methods can introduce noticeable noise at a high speedup rate, which makes high-order numerical methods (e.g., the Runge-Kutta method) even less effective than DDIMs. This phenomenon is also mentioned in Salimans & Ho (2022). To figure out the reason for the performance degradation of classical numerical methods, we conduct some analyses and find that classical numerical methods may sample data far away from the main distribution area of the data, and that the inference equations of DDPMs do not satisfy a necessary condition of numerical methods at the last several steps (see Section 3.2). To tackle these problems, we design new numerical methods, called pseudo numerical methods for diffusion models (PNDMs), to generate samples along a specific manifold in R^n, which is the high-density region of the data.
We first compute the corresponding differential equations of diffusion models directly and self-consistently, which builds a theoretical connection between DDPMs and numerical methods. Considering that classical numerical methods cannot guarantee that samples are generated on certain manifolds, we provide brand-new numerical methods, called pseudo numerical methods, based on our theoretical analyses. We also find that DDIMs are simple cases of pseudo numerical methods, which means that we also provide a new way to understand DDIMs better. Furthermore, we find that the pseudo linear multi-step method is the fastest method for diffusion models under similar generation quality. Besides, we provide a detailed theoretical analysis of our new theory and give visualization results to support our theory intuitively. According to our experiments, our methods have several advantages: • Our methods successfully combine the benefits of DDIMs and high-order numerical methods. We theoretically prove that our new methods, PNDMs, are second-order convergent while DDIMs are first-order convergent, which makes PNDMs 20x faster without loss of quality on Cifar10 and CelebA. • Our methods can reduce the best FID of pre-trained models with even shorter sampling time. With only 250 steps, our new denoising process can reduce the best FID by around 0.4 points on Cifar10 and CelebA. We achieve a new SOTA FID score of 2.71 on CelebA. • Our methods work well with different variance schedules, which means that our methods generalize well and can be used together with those works introducing better variance schedules to accelerate the denoising process further. 2 BACKGROUND. In this section, we introduce some background. Firstly, we present the classical understanding of DDPMs. Then we provide another understanding based on Song et al. (2020b), which inspires us to use numerical methods to accelerate the denoising process of diffusion models.
After that, we introduce some background on the numerical methods used later in this paper. 2.1 DENOISING DIFFUSION PROBABILISTIC MODELS. DDPMs model the data distribution from a Gaussian distribution to the image distribution through an iterative denoising process. Let x_0 be an image; then the diffusion process is a Markov process, and the reverse process has a similar form to the diffusion process, satisfying: x_{t+1} ∼ N(√(1 − β_t) x_t, β_t I), t = 0, 1, · · · , N − 1; x_{t−1} ∼ N(μ_θ(x_t, t), β_θ(x_t, t) I), t = N, N − 1, · · · , 1. (1) Here, β_t controls the speed of adding noise to the data, and {β_t} is called the variance schedule. N is the total number of steps of the denoising process. μ_θ and β_θ are two neural networks, and θ are their parameters. Ho et al. (2020) derive statistical estimates of μ_θ and β_θ. According to the properties of the conditional Gaussian distribution, we have: q(x_t | x_0) = N(√ᾱ_t x_0, (1 − ᾱ_t) I), q(x_{t−1} | x_t, x_0) = N(μ̄_t(x_t, x_0), β̄_t I). (2) Here, α_t = 1 − β_t, ᾱ_t = ∏_{i=1}^{t} α_i, μ̄_t = (√ᾱ_{t−1} β_t / (1 − ᾱ_t)) x_0 + (√α_t (1 − ᾱ_{t−1}) / (1 − ᾱ_t)) x_t, and β̄_t = ((1 − ᾱ_{t−1}) / (1 − ᾱ_t)) β_t. The paper then sets β_θ = β̄_t and designs an objective function to help the neural network represent μ_θ. Objective Function The objective function is defined by: L_{t−1} = E_q[ ‖μ̄_t(x_t, x_0) − μ_θ(x_t, t)‖² ] = E_{x_0, ϵ}[ ‖(1/√α_t)(x_t(x_0, ϵ) − (β_t / √(1 − ᾱ_t)) ϵ) − μ_θ(x_t(x_0, ϵ), t)‖² ] = E_{x_0, ϵ}[ (β_t² / (α_t (1 − ᾱ_t))) ‖ϵ − ϵ_θ(√ᾱ_t x_0 + √(1 − ᾱ_t) ϵ, t)‖² ]. (3) Here, x_t(x_0, ϵ) = √ᾱ_t x_0 + √(1 − ᾱ_t) ϵ, ϵ ∼ N(0, I), and ϵ_θ is an estimate of the noise ϵ. The relationship between μ_θ and ϵ_θ is μ_θ = (1/√α_t)(x_t − (β_t / √(1 − ᾱ_t)) ϵ_θ). Because ϵ ∼ N(0, I), we assume that the mean and variance of ϵ_θ are 0 and 1. 2.2 STOCHASTIC DIFFERENTIAL EQUATION. According to Song et al. (2020b), there is another understanding of DDPMs.
The diffusion process can be treated as solving a certain stochastic differential equation dx = (√(1 − β(t)) − 1) x(t) dt + √(β(t)) dw. According to Anderson (1982), the denoising process also satisfies a similar stochastic differential equation: dx = ((√(1 − β(t)) − 1) x(t) − β(t) ϵ_θ(x(t), t)) dt + √(β(t)) dw̄. (4) These are Variance Preserving stochastic differential equations (VP-SDEs). Here, we change the domain of t from [1, N] to [0, 1]. When N tends to infinity, {β_i}_{i=1}^{N} and {x_i}_{i=1}^{N} become continuous functions β(t) and x(t) on [0, 1]. Song et al. (2020b) also show that this equation has an ordinary differential equation (ODE) version with the same marginal probability density as Equation (4): dx = ((√(1 − β(t)) − 1) x(t) − (1/2) β(t) ϵ_θ(x(t), t)) dt. (5) This different denoising equation with no random term, together with the same diffusion equation, constitutes Probability Flows (PFs). These two denoising equations show us a new possibility: we can use numerical methods to accelerate the reverse process. As far as we know, DDIMs were the first to try to remove this random term, so PFs can also be treated as an acceleration of DDIMs, while VP-SDEs are an acceleration of DDPMs. 2.3 NUMERICAL METHOD. Many classical numerical methods can be used to solve ODEs, including the forward Euler method, the Runge-Kutta method and the linear multi-step method (Timothy, 2017). Forward Euler Method Consider a differential equation satisfying dx/dt = f(x, t). The simplest numerical method is the forward Euler method, satisfying x_{t+δ} = x_t + δ f(x_t, t). Runge-Kutta Method The Runge-Kutta method uses more information at every step, so it can achieve higher accuracy. The (fourth-order) Runge-Kutta method satisfies: k_1 = f(x_t, t), k_2 = f(x_t + (δ/2) k_1, t + δ/2), k_3 = f(x_t + (δ/2) k_2, t + δ/2), k_4 = f(x_t + δ k_3, t + δ), x_{t+δ} = x_t + (δ/6)(k_1 + 2k_2 + 2k_3 + k_4).
(6) Linear Multi-Step Method The linear multi-step method is another numerical method and satisfies: x_{t+δ} = x_t + (δ/24)(55 f_t − 59 f_{t−δ} + 37 f_{t−2δ} − 9 f_{t−3δ}), f_t = f(x_t, t). (7) 3 PSEUDO NUMERICAL METHOD FOR DDPM. In this section, we first compute the corresponding differential equations of diffusion models to build a direct connection between DDPMs and numerical methods. As a byproduct, we can directly use pre-trained models from DDPMs. After establishing this connection, we provide detailed analyses of the weaknesses of classical numerical methods. To solve the problems of classical numerical methods, we dive into the structure of numerical methods by dividing their equations into a gradient part and a transfer part, and define pseudo numerical methods by introducing nonlinear transfer parts. We find that DDIMs can be regarded as simple pseudo numerical methods. Then, we explore the pros and cons of different numerical methods and choose the linear multi-step method to make numerical methods faster. Finally, we summarize our findings and analyses and propose our novel pseudo numerical methods for diffusion models (PNDMs), which combine our proposed transfer part with the gradient part of the linear multi-step method. Furthermore, we analyze the convergence order of pseudo numerical methods to demonstrate the effectiveness of our methods theoretically. | The paper proposes a new efficient method for denoising diffusion probabilistic models (DDPMs) (generative models that optimize for the closest solution on a manifold), based on the observation that sampling can be seen as solving a set of differential equations on a manifold. This allows efficient pseudo numerical methods to be applied here, which have many advantageous properties over classical optimization methods, including fewer optimization steps and guaranteed on-manifold solutions, due to separating the gradient part from the transfer part in the optimization.
Results are shown on four datasets, comparing to two reasonable but not state-of-the-art baselines. | SP:9be34b13f59e6e33820e863d7ed33f0479fc368e
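The three classical solvers discussed in Section 2.3 can be made concrete on a toy ODE dx/dt = −x (a self-contained illustrative sketch, not the paper's code; the RK4 warm-up for the multi-step method is our assumption):

```python
import math

def euler_step(f, x, t, h):
    # Forward Euler: x_{t+h} = x_t + h * f(x_t, t)
    return x + h * f(x, t)

def rk4_step(f, x, t, h):
    # Classical 4th-order Runge-Kutta (Eq. 6): four gradient evaluations per step
    k1 = f(x, t)
    k2 = f(x + h / 2 * k1, t + h / 2)
    k3 = f(x + h / 2 * k2, t + h / 2)
    k4 = f(x + h * k3, t + h)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def ab4_step(f, x, t, h, hist):
    # 4th-order Adams-Bashforth linear multi-step (Eq. 7):
    # reuses the three previous gradients, only one new evaluation per step
    hist.append(f(x, t))
    f0, f1, f2, f3 = hist[-1], hist[-2], hist[-3], hist[-4]
    return x + h / 24 * (55 * f0 - 59 * f1 + 37 * f2 - 9 * f3)

def solve(step, f, x0, t0, t1, n):
    h = (t1 - t0) / n
    x, t, hist = x0, t0, []
    for i in range(n):
        if step is ab4_step:
            if i < 3:                 # warm up until 3 past gradients exist
                hist.append(f(x, t))
                x = rk4_step(f, x, t, h)
            else:
                x = ab4_step(f, x, t, h, hist)
        else:
            x = step(f, x, t, h)
        t += h
    return x
```

This illustrates the trade-off the paper describes: Euler is cheap but low-order, RK4 is accurate but costs four network evaluations per step, and the linear multi-step method reaches high order with a single new evaluation per step.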
Pseudo Numerical Methods for Diffusion Models on Manifolds | 1 INTRODUCTION. Denoising Diffusion Probabilistic Models (DDPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) are a class of generative models that model the data distribution through an iterative denoising process reversing a multi-step noising process. DDPMs have been applied successfully to a variety of applications, including image generation (Ho et al., 2020; Song et al., 2020b), text generation (Hoogeboom et al., 2021; Austin et al., 2021), 3D point cloud generation (Luo & Hu, 2021), text-to-speech (Kong et al., 2021; Chen et al., 2020) and image super-resolution (Saharia et al., 2021). Unlike Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which require careful hyperparameter tuning according to different model structures and datasets, DDPMs can use similar model structures and be trained by a simple denoising objective which makes the models fit the noise in the data. To generate samples, the iterative denoising process starts from white noise and progressively denoises it into the target domain according to the noise predicted by the model at every step. However, a critical drawback of DDPMs is that they require hundreds to thousands of iterations to produce high-quality samples and need to pass through a network at least once at every step, which makes the generation of a large number of samples extremely slow and infeasible. In contrast, GANs only need one pass through a network. There have been many recent works focusing on improving the speed of the denoising process. Some works search for better variance schedules, including Nichol & Dhariwal (2021) and Watson et al. (2021). Some works focus on changing the inference equation, including Song et al. (2020a) and Song et al. (2020b). (∗Corresponding author. 1 Our implementation is available at https://github.com/luping-liu/PNDM.)
Denoising Diffusion Implicit Models (DDIMs) (Song et al., 2020a), relying on a non-Markovian process, accelerate the denoising process by taking multiple steps every iteration. Probability Flows (PFs) (Song et al., 2020b) build a connection between the denoising process and solving ordinary differential equations, and use numerical methods for differential equations to accelerate the denoising process. Additionally, we introduce more related works in Appendix A.1. However, this direct connection between DDPMs and numerical methods (e.g., the forward Euler method, linear multi-step method and Runge-Kutta method (Timothy, 2017)) has weaknesses in both speed and effect (see Section 3.1). Some numerical methods are straightforward, like the forward Euler method, but they can only trade quality for speed. Some numerical methods can accelerate the reverse process without loss of quality, like the Runge-Kutta method, but they need more forward passes through the neural network at every step. Furthermore, we also notice that numerical methods can introduce noticeable noise at a high speedup rate, which makes high-order numerical methods (e.g., the Runge-Kutta method) even less effective than DDIMs. This phenomenon is also mentioned in Salimans & Ho (2022). To figure out the reason for the performance degradation of classical numerical methods, we conduct some analyses and find that classical numerical methods may sample data far away from the main distribution area of the data, and that the inference equations of DDPMs do not satisfy a necessary condition of numerical methods at the last several steps (see Section 3.2). To tackle these problems, we design new numerical methods, called pseudo numerical methods for diffusion models (PNDMs), to generate samples along a specific manifold in R^n, which is the high-density region of the data.
We first compute the corresponding differential equations of diffusion models directly and self-consistently, which builds a theoretical connection between DDPMs and numerical methods. Considering that classical numerical methods cannot guarantee that samples are generated on certain manifolds, we provide brand-new numerical methods, called pseudo numerical methods, based on our theoretical analyses. We also find that DDIMs are simple cases of pseudo numerical methods, which means that we also provide a new way to understand DDIMs better. Furthermore, we find that the pseudo linear multi-step method is the fastest method for diffusion models under similar generation quality. Besides, we provide a detailed theoretical analysis of our new theory and give visualization results to support our theory intuitively. According to our experiments, our methods have several advantages: • Our methods successfully combine the benefits of DDIMs and high-order numerical methods. We theoretically prove that our new methods, PNDMs, are second-order convergent while DDIMs are first-order convergent, which makes PNDMs 20x faster without loss of quality on Cifar10 and CelebA. • Our methods can reduce the best FID of pre-trained models with even shorter sampling time. With only 250 steps, our new denoising process can reduce the best FID by around 0.4 points on Cifar10 and CelebA. We achieve a new SOTA FID score of 2.71 on CelebA. • Our methods work well with different variance schedules, which means that our methods generalize well and can be used together with those works introducing better variance schedules to accelerate the denoising process further. 2 BACKGROUND. In this section, we introduce some background. Firstly, we present the classical understanding of DDPMs. Then we provide another understanding based on Song et al. (2020b), which inspires us to use numerical methods to accelerate the denoising process of diffusion models.
After that, we introduce some background on the numerical methods used later in this paper. 2.1 DENOISING DIFFUSION PROBABILISTIC MODELS. DDPMs model the data distribution from a Gaussian distribution to the image distribution through an iterative denoising process. Let x_0 be an image; then the diffusion process is a Markov process, and the reverse process has a similar form to the diffusion process, satisfying: x_{t+1} ∼ N(√(1 − β_t) x_t, β_t I), t = 0, 1, · · · , N − 1; x_{t−1} ∼ N(μ_θ(x_t, t), β_θ(x_t, t) I), t = N, N − 1, · · · , 1. (1) Here, β_t controls the speed of adding noise to the data, and {β_t} is called the variance schedule. N is the total number of steps of the denoising process. μ_θ and β_θ are two neural networks, and θ are their parameters. Ho et al. (2020) derive statistical estimates of μ_θ and β_θ. According to the properties of the conditional Gaussian distribution, we have: q(x_t | x_0) = N(√ᾱ_t x_0, (1 − ᾱ_t) I), q(x_{t−1} | x_t, x_0) = N(μ̄_t(x_t, x_0), β̄_t I). (2) Here, α_t = 1 − β_t, ᾱ_t = ∏_{i=1}^{t} α_i, μ̄_t = (√ᾱ_{t−1} β_t / (1 − ᾱ_t)) x_0 + (√α_t (1 − ᾱ_{t−1}) / (1 − ᾱ_t)) x_t, and β̄_t = ((1 − ᾱ_{t−1}) / (1 − ᾱ_t)) β_t. The paper then sets β_θ = β̄_t and designs an objective function to help the neural network represent μ_θ. Objective Function The objective function is defined by: L_{t−1} = E_q[ ‖μ̄_t(x_t, x_0) − μ_θ(x_t, t)‖² ] = E_{x_0, ϵ}[ ‖(1/√α_t)(x_t(x_0, ϵ) − (β_t / √(1 − ᾱ_t)) ϵ) − μ_θ(x_t(x_0, ϵ), t)‖² ] = E_{x_0, ϵ}[ (β_t² / (α_t (1 − ᾱ_t))) ‖ϵ − ϵ_θ(√ᾱ_t x_0 + √(1 − ᾱ_t) ϵ, t)‖² ]. (3) Here, x_t(x_0, ϵ) = √ᾱ_t x_0 + √(1 − ᾱ_t) ϵ, ϵ ∼ N(0, I), and ϵ_θ is an estimate of the noise ϵ. The relationship between μ_θ and ϵ_θ is μ_θ = (1/√α_t)(x_t − (β_t / √(1 − ᾱ_t)) ϵ_θ). Because ϵ ∼ N(0, I), we assume that the mean and variance of ϵ_θ are 0 and 1. 2.2 STOCHASTIC DIFFERENTIAL EQUATION. According to Song et al. (2020b), there is another understanding of DDPMs.
The diffusion process can be treated as solving a certain stochastic differential equation dx = (√(1 − β(t)) − 1) x(t) dt + √(β(t)) dw. According to Anderson (1982), the denoising process also satisfies a similar stochastic differential equation: dx = ((√(1 − β(t)) − 1) x(t) − β(t) ϵ_θ(x(t), t)) dt + √(β(t)) dw̄. (4) These are Variance Preserving stochastic differential equations (VP-SDEs). Here, we change the domain of t from [1, N] to [0, 1]. When N tends to infinity, {β_i}_{i=1}^{N} and {x_i}_{i=1}^{N} become continuous functions β(t) and x(t) on [0, 1]. Song et al. (2020b) also show that this equation has an ordinary differential equation (ODE) version with the same marginal probability density as Equation (4): dx = ((√(1 − β(t)) − 1) x(t) − (1/2) β(t) ϵ_θ(x(t), t)) dt. (5) This different denoising equation with no random term, together with the same diffusion equation, constitutes Probability Flows (PFs). These two denoising equations show us a new possibility: we can use numerical methods to accelerate the reverse process. As far as we know, DDIMs were the first to try to remove this random term, so PFs can also be treated as an acceleration of DDIMs, while VP-SDEs are an acceleration of DDPMs. 2.3 NUMERICAL METHOD. Many classical numerical methods can be used to solve ODEs, including the forward Euler method, the Runge-Kutta method and the linear multi-step method (Timothy, 2017). Forward Euler Method Consider a differential equation satisfying dx/dt = f(x, t). The simplest numerical method is the forward Euler method, satisfying x_{t+δ} = x_t + δ f(x_t, t). Runge-Kutta Method The Runge-Kutta method uses more information at every step, so it can achieve higher accuracy. The (fourth-order) Runge-Kutta method satisfies: k_1 = f(x_t, t), k_2 = f(x_t + (δ/2) k_1, t + δ/2), k_3 = f(x_t + (δ/2) k_2, t + δ/2), k_4 = f(x_t + δ k_3, t + δ), x_{t+δ} = x_t + (δ/6)(k_1 + 2k_2 + 2k_3 + k_4).
(6) Linear Multi-Step Method The linear multi-step method is another numerical method and satisfies: x_{t+δ} = x_t + (δ/24)(55 f_t − 59 f_{t−δ} + 37 f_{t−2δ} − 9 f_{t−3δ}), f_t = f(x_t, t). (7) 3 PSEUDO NUMERICAL METHOD FOR DDPM. In this section, we first compute the corresponding differential equations of diffusion models to build a direct connection between DDPMs and numerical methods. As a byproduct, we can directly use pre-trained models from DDPMs. After establishing this connection, we provide detailed analyses of the weaknesses of classical numerical methods. To solve the problems of classical numerical methods, we dive into the structure of numerical methods by dividing their equations into a gradient part and a transfer part, and define pseudo numerical methods by introducing nonlinear transfer parts. We find that DDIMs can be regarded as simple pseudo numerical methods. Then, we explore the pros and cons of different numerical methods and choose the linear multi-step method to make numerical methods faster. Finally, we summarize our findings and analyses and propose our novel pseudo numerical methods for diffusion models (PNDMs), which combine our proposed transfer part with the gradient part of the linear multi-step method. Furthermore, we analyze the convergence order of pseudo numerical methods to demonstrate the effectiveness of our methods theoretically. | Highlighting the high computational complexity of sampling from Denoising Diffusion Probabilistic Models (DDPMs) (e.g., compared to GANs), the authors build on the connection between diffusion processes and ODEs to propose efficient (pseudo-)numerical methods that sample data from the data manifold. The main idea is to combine the discrete update proposed in DDIMs with a fourth-order gradient estimate given by the Runge-Kutta or linear multi-step methods, the motivation being that such a gradient estimator should yield trajectories that stay closer to the data manifold.
They empirically assess their methods on CIFAR10 and CelebA in terms of sample quality (measured by FID), and show that they get a ~20x speed-up with respect to DDIMs, or a significant improvement in FID with the same number of steps. | SP:9be34b13f59e6e33820e863d7ed33f0479fc368e
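The closed-form forward process q(x_t | x_0) from the background section above (its Equation 2) can be sketched as follows (an illustrative NumPy sketch with an assumed linear β schedule, not the paper's code):

```python
import numpy as np

def make_alpha_bar(betas):
    # alpha_t = 1 - beta_t ; alpha_bar_t = prod_{i <= t} alpha_i
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)
    in one shot -- the sampling used inside the denoising objective (Eq. 3)."""
    eps = rng.normal(size=np.shape(x0))
    xt = np.sqrt(alpha_bar[t]) * np.asarray(x0) + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

With a common linear schedule β ∈ [1e-4, 0.02] over N = 1000 steps, ᾱ_N is vanishingly small, so x_N is essentially white noise, which is exactly the starting point of the reverse (denoising) process.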
DKM: Differentiable k-Means Clustering Layer for Neural Network Compression | Deep neural network (DNN) model compression for efficient on-device inference is becoming increasingly important to reduce memory requirements and keep user data on-device. To this end, we propose a novel differentiable k-means clustering layer (DKM) and its application to train-time weight-clustering for DNN model compression. DKM casts k-means clustering as an attention problem and enables joint optimization of the DNN parameters and clustering centroids. Unlike prior works that rely on additional parameters and regularizers, DKM-based compression keeps the original loss function and model architecture fixed. We evaluated DKM-based compression on various DNN models for computer vision and natural language processing (NLP) tasks. Our results demonstrate that DKM delivers a superior compression-accuracy trade-off on the ImageNet1k and GLUE benchmarks. For example, DKM-based compression can offer 74.5% top-1 ImageNet1k accuracy on ResNet50 with a 3.3MB model size (29.4x model compression factor). For MobileNet-v1, which is a challenging DNN to compress, DKM delivers 63.9% top-1 ImageNet1k accuracy with a 0.72MB model size (22.4x model compression factor). This result is 6.8% higher top-1 accuracy with a 33% relatively smaller model size than the current state-of-the-art DNN compression algorithms. DKM also compressed a DistilBERT model by 11.8x with minimal (1.1%) accuracy loss on GLUE NLP benchmarks. 1 INTRODUCTION. Deep neural networks (DNNs) have demonstrated super-human performance on many cognitive tasks (Silver et al., 2018). While a fully-trained uncompressed DNN is commonly used for server-side inference, on-device inference is preferred to enhance user experience by reducing latency and keeping user data on-device.
Many such on-device platforms are battery-powered and resource-constrained, demanding that a DNN meet stringent resource requirements on power consumption, compute budget and storage overhead (Wang et al., 2019b; Wu et al., 2018). One solution is to design a more efficient and compact DNN, such as MobileNet (Howard et al., 2017), by innovating on the network architecture or by leveraging Neural Architecture Search (NAS) methods (Liu et al., 2019; Tan et al., 2019). Another solution is to compress a model with small accuracy degradation so that it takes less storage and reduces System-on-Chip (SoC) memory bandwidth utilization, which can minimize power consumption and latency. To this end, various DNN compression techniques have been proposed (Wang et al., 2019b; Dong et al., 2020; Park et al., 2018; Rastegari et al., 2016; Fan et al., 2021; Stock et al., 2020; Zhou et al., 2019; Park et al., 2019; Yu et al., 2018; Polino et al., 2018). Among them, weight-clustering/sharing (Han et al., 2016; Wu et al., 2018; Ullrich et al., 2017; Stock et al., 2020) has shown a high DNN compression ratio, where weights are clustered into a few shareable weight values (or centroids) based on k-means clustering. Once weights are clustered, to shrink the model size, one can store indices (2 bits, 4 bits, etc., depending on the number of clusters) with a lookup table rather than actual floating-point values. Designing a compact DNN architecture and enabling weight-clustering together could provide the best solution in terms of efficient on-device inference. However, the existing model compression approaches do not usefully compress an already-compact DNN like MobileNet, presumably because the model itself does not have significant redundancy.
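The index-plus-lookup-table idea described above can be sketched as follows (a minimal 1-D k-means sketch in the spirit of DeepCompression-style weight-sharing, not any paper's actual implementation; the cluster count, iteration budget and random initialization are illustrative):

```python
import numpy as np

def cluster_weights(w, n_clusters=16, iters=25, seed=0):
    """Hard k-means weight-sharing: returns uint8 indices (n_clusters <= 256)
    plus a small float codebook (the lookup table of shared centroids)."""
    rng = np.random.default_rng(seed)
    flat = w.ravel()
    centroids = rng.choice(flat, size=n_clusters, replace=False)
    for _ in range(iters):
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            members = flat[assign == c]
            if members.size:           # skip empty clusters
                centroids[c] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return assign.astype(np.uint8).reshape(w.shape), centroids

def decompress(indices, codebook):
    return codebook[indices]           # shared weights via table lookup
```

Storing 4-bit indices plus a 16-entry fp32 codebook in place of fp32 weights gives roughly an 8x size reduction for a large layer, which is the compression mechanism the clustering-based methods above rely on.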
We conjecture that such a limitation comes from the fact that weight-clustering through the k-means algorithm (both weight-cluster assignment and weight update) has not been fully optimized with respect to the target task. The fundamental complexity in applying k-means clustering to weight-sharing comes from the following: a) both the weights and the corresponding k-means centroids are free to move (general k-means clustering with fixed observations is already NP-hard); b) the weight-to-cluster assignment is a discrete process, which makes k-means clustering non-differentiable and prevents effective optimization. In this work, we propose a new layer without learnable parameters for differentiable k-means clustering, DKM, based on an attention mechanism (Bahdanau et al., 2015) to capture the weight and cluster interactions seamlessly, and further apply it to enable train-time weight-clustering for model compression. Our major contributions include the following: • We propose a novel differentiable k-means clustering layer (DKM) for deep learning, which serves as a generic neural layer to develop clustering behavior on input and output. • We demonstrate that DKM can perform multi-dimensional k-means clustering efficiently and can offer a high-quality model for a given compression ratio target. • We apply DKM to compress DNN models and demonstrate state-of-the-art results on both computer vision and natural language models and tasks. 2 RELATED WORKS. Model compression using clustering: DeepCompression (Han et al., 2016) proposed applying k-means clustering for model compression. DeepCompression initially clusters the weights using the k-means algorithm. All the weights that belong to the same cluster share the same weight value, which is initially the cluster centroid. In the forward pass, the shared weight is used for each weight. In the backward pass, the gradient for each shared weight is calculated and used to update the shared value.
This approach might degrade model quality because it cannot formulate the weight-cluster assignment during gradient back-propagation (Yin et al., 2019). ESCQ (Choi et al., 2017; 2020) optimizes the clusters to minimize the change in the loss by considering the Hessian. It therefore preserves the current model state, instead of searching for a fundamentally better model state for compression. HAQ (Wang et al., 2019b) uses reinforcement learning to search for the optimal quantization policy on different tasks. For model compression, HAQ uses k-means clustering similar to DeepCompression, yet with flexible bit-widths on different layers. Our work is orthogonal to this work, because the k-means clustering can be replaced with our DKM under a similarly flexible configuration. The "And The Bit Goes Down" algorithm (Stock et al., 2020) is based on Product Quantization and Knowledge Distillation. It evenly splits a weight vector of N elements into N/d contiguous d-dimensional sub-vectors, and clusters the sub-vectors using weighted k-means clustering to minimize the activation change from that of a teacher network. GOBO (Zadeh et al., 2020) first separates outlier weights far from the average of the weights of each layer and stores them uncompressed, while clustering the other weights with an algorithm similar to k-means. Model compression using regularization: Directly incorporating k-means clustering in the training process is not straightforward (Wu et al., 2018). Hence, Ullrich et al. (2017) model weight-clustering as a Gaussian Mixture Model (GMM) and fit the weight distribution to the GMM with additional learnable parameters using KL divergence (i.e., forcing the weight distribution to follow k Gaussian distributions with small variance). Wu et al. (2018) proposed deep k-means to enable weight-clustering during re-training.
By forcing weights that have already been clustered to stay around their assigned centers, hard weight-clustering is approximated with additional parameters. Both Ullrich et al. (2017) and Wu et al. (2018) leverage regularization to enforce weight-clustering with additional parameters, which interferes with the original loss target and requires additional updates for the new variables (e.g., singular value decomposition (SVD) in Wu et al. (2018)). Also, relying on the modified loss cannot capture the dynamic interaction between weight distributions and cluster centroids within a batch, thus requiring an additional training flow for re-training. Enhancing model compression using dropout: Quant-Noise (Fan et al., 2021) is a structured dropout which quantizes only a random subset of weights (using any quantization technique) and thus can improve the predictive power of a compressed model. For example, Fan et al. (2021) showed a good compression vs. accuracy trade-off on ResNet50 for ImageNet1k. Model quantization: Besides clustering and regularization methods, model quantization can also reduce the model size, and training-time quantization techniques have been developed to improve the accuracy of quantized models (Li et al., 2019; Zhao et al., 2019). EWGS (J. Lee, 2021) adjusts gradients by scaling them up or down based on a per-layer Hessian approximation. PROFIT (Park & Yoo, 2020) adopts an iterative process and freezes layers based on activation instability. Efficient networks: Memory-efficient DNNs include MobileNet (Howard et al., 2017; Sandler et al., 2018), EfficientNet (Tan & Le, 2019; 2021), and ESPNet (Mehta et al., 2019). MobileNet-v1 (Howard et al., 2017) on the ImageNet1k dataset has a top-1 accuracy of 70.3% with 16.1 MB of memory, in comparison to ResNet18, which has 69.3% accuracy with a 44.6 MB model size.
Our method can be applied to these compact networks to reduce their model sizes further. 3 ALGORITHM. 3.1 MOTIVATION. Popular weight-clustering techniques for DNN model compression (J. Lee, 2021; Han et al., 2016; Dong et al., 2020; Stock et al., 2020) are based on k-means clustering along with enhancements such as gradient scaling/approximation. Using k-means clustering, the weights are clustered and assigned to the nearest centroids, which are used for forward/backward propagation during training, as illustrated in Fig. 1(a). Such conventional methods with clustering have two critical drawbacks: • The weight-to-cluster assignment in conventional approaches is not optimized through back-propagation of the training loss function. • Gradients for the weights are computed in an ad-hoc fashion: the gradient of a centroid is re-purposed as the gradient of the weights assigned to that centroid. These limitations are more pronounced for weights on the boundary, such as i and j in Fig. 1(a). In the conventional approaches, i and j are assigned to the centroids C2 and C1 respectively, simply because of their marginal difference in a distance metric. However, assigning i to C0 and j to C2 could be better for the training loss, as their difference in distance is so small (Nagel et al., 2020). Such opportunity cost is especially high with a smaller number of centroids (or fewer bits for quantization), as each unfortunate hard assignment can degrade the training loss significantly. We overcome such limitations with DKM by interpreting weight-centroid assignment as distance-based attention optimization (Bahdanau et al., 2015), as in Fig. 1(b), and letting each weight interact with all the centroids. Such an attention mechanism naturally casts differentiable and iterative k-means clustering into a parameter-free layer, as in Fig. 2.
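To make the attention interpretation concrete, here is a minimal NumPy sketch of a soft, distance-based clustering pass in the spirit of Fig. 1(b): each weight attends to all centroids with weights given by a softmax over negative distances, and centroids are updated by attention-weighted averaging. This is not the paper's implementation: the temperature `tau`, the 1-D weights, and the function names are assumptions, and in the actual method such a pass would run inside an autograd framework so that gradients flow through the attention matrix.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dkm_iteration(w, c, tau=0.05, n_iters=5):
    """Soft k-means pass: distance-based attention between weights and centroids.

    w: (N,) flattened weights; c: (K,) centroids; tau: temperature
    (an illustrative knob, not a parameter named in the paper).
    """
    for _ in range(n_iters):
        # closer centroid -> larger attention; rows sum to 1 over centroids
        a = softmax(-np.abs(w[:, None] - c[None, :]) / tau, axis=1)  # (N, K)
        # centroid update: attention-weighted average of the weights
        c = (a * w[:, None]).sum(axis=0) / a.sum(axis=0)
    # soft reconstruction: each weight becomes a convex combination of centroids
    w_soft = a @ c
    return w_soft, c, a
```

Because every step is a smooth function of `w` and `c`, a framework like PyTorch or JAX could back-propagate the training loss through `a` and `c`, which is the key difference from hard nearest-centroid assignment.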
Therefore, during backward propagation, attention allows the gradient of a weight to be a product of the attentions and the gradients of the centroids, which in turn impacts how the clustering and assignment will be done in the next batch. Overall, our weight assignment aligns with the loss function, and can be highly effective for DNN compression. | The paper is concerned with reducing the size of deep neural networks using weight sharing. The paper proposes a new building block that performs a soft k-means algorithm where each weight is assigned a convex combination of the cluster centers. At test time, the weights are assigned to their closest cluster center such that real (hard) weight sharing is obtained. The method achieves state-of-the-art accuracy using various architectures on several tasks. | SP:67efe87f8db28e0aa68246cc5b34ddc028188df7 |
| This paper proposes a novel differentiable k-means clustering layer (DKM) for deep neural network model compression. The DKM utilizes an attention mechanism to align the weight-to-cluster assignment with the training loss function. Overall, the idea is novel but the paper is not prepared enough. | SP:67efe87f8db28e0aa68246cc5b34ddc028188df7 |
Faster Reinforcement Learning with Value Target Lower Bounding | 1 INTRODUCTION. In temporal difference (TD) learning, the value function is adjusted toward its Bellman target, which is the reward of the current step plus the discounted value of the next state. This forms the basis of many state-of-the-art reinforcement learning (RL) algorithms such as DQN (Mnih et al., 2013), DDPG (Lillicrap et al., 2015), TD3 (Fujimoto et al., 2018), and SAC (Haarnoja et al., 2018). The value of the next state is typically estimated using a "bootstrapped value" based on the value function itself, which is being actively learned during training. The bootstrapped values can be random or very inaccurate, especially at the initial stage of training. Consequently, the Bellman value targets, as well as the learned value, are usually far from the optimal value. Naturally, this leads to the following idea: if we can make the value target closer to the optimal value, we may speed up TD learning. For example, we know that the optimal value is just the expected discounted return of the optimal policy, which always upper bounds the expected return of any policy. For episodic RL tasks, we could use the observed discounted return up to the episode end from the training trajectories to lower bound the value target. This makes the new value target closer to the optimal value when the empirical return is higher than the Bellman target. Will such a way of lower bounding the value target work: Will it still converge? Will it converge to the optimal value? Will it speed up value learning? 2 THEORETICAL RESULTS FOR THE TABULAR CASE. For the tabular case, value target lower bounding converges to the same optimal value as the original Bellman value learning, and the proof is straightforward. 2.1 BACKGROUND. In finite MDPs with a limited number of states and actions, a table can be used to keep track of the value of each state.
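As a concrete reminder of the Bellman target described above, a minimal tabular TD(0) update might look like the following sketch; the function name, step size, and toy values are illustrative, not from the paper.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99, done=False):
    """One TD(0) step: move V[s] toward the Bellman target r + gamma * V[s']."""
    target = r if done else r + gamma * V[s_next]
    V[s] += alpha * (target - V[s])
    return target

# toy example: two states, one observed transition 0 -> 1 with reward 1
V = {0: 0.0, 1: 0.0}
target = td0_update(V, s=0, r=1.0, s_next=1)
```

Early in training `V[s_next]` is arbitrary, so `target` can be far from the optimal value, which is the motivation for lower bounding it.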
Using dynamic programming algorithms such as value iteration, values are guaranteed to converge to the optimal through Bellman updates (Chapter 4.4, Sutton & Barto, 2018).

Algorithm 1: Bellman value iteration with value target lower bounding
  Data: finite MDP p(s', r | s, g, a), convergence threshold θ
  Result: state value v(s)
  v(s) ← 0
  repeat
    Δ ← 0
    for each state s do
      v ← v(s)
      v(s) ← max(f, max_a Σ_{s', r} p(s', r | s, g, a) [r + γ v(s')])
      Δ ← max(Δ, |v(s) − v|)
    end for
  until Δ < θ

The core of the algorithm is the Bellman update of the value function, B(v):

  B(v)(s) := max_a Σ_{s', r} p(s', r | s, g, a) [r + γ v(s')]   (1)

It is well known that the Bellman operator B is a contraction mapping over value functions (Denardo, 1967). That is, for any two value functions v1 and v2, |B(v1) − B(v2)| ≤ γ |v1 − v2| for the discount factor γ ∈ [0, 1). This guarantees that any value function under the algorithm converges to the optimal value.¹

2.2 VALUE TARGET LOWER BOUNDING CONVERGENCE THEOREM.

Theorem 1. Suppose the optimal value under the Bellman operator is B^∞(v). For any value function f that lower bounds the optimal value, i.e., ∀s, f(s) ≤ B^∞(v)(s), if we define the lower bounded Bellman operator as M_f ∘ B(v) := max(B(v), f), then (M_f ∘ B)^∞(v) converges to B^∞(v).

A few things to note about the proof (see Appendix A.1). First, this only proves convergence, not contraction under the original ||v1 − v2||_∞ metric. In the case of the Bellman operator, contraction shows that for all value functions v1, v2, ||B(v1) − B(v2)||_∞ ≤ γ ||v1 − v2||_∞. For value target lower bounding, there can be counterexamples where M_f ∘ B does not always contract in the original metric space for value functions. Here, convergence relies on the convergence of the Bellman value iteration and the existence of the fixed point v*.
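Algorithm 1 and Theorem 1 can be illustrated on a toy problem. The sketch below is an assumption-laden NumPy rendering: the 3-state deterministic chain, the transition/reward tables, and the choice f = v* − 0.5 are all invented for illustration (the goal argument g is dropped for brevity). It shows plain value iteration and the lower-bounded variant reaching the same fixed point, as the theorem predicts for any f below the optimal value.

```python
import numpy as np

def lb_value_iteration(P, R, f=None, gamma=0.9, theta=1e-8):
    """Tabular value iteration with an optional value-target lower bound f.

    P[s, a] = next state, R[s, a] = reward (deterministic MDP for brevity).
    With f=None this is plain Bellman value iteration; otherwise each
    target is clamped from below by f[s], as in Algorithm 1.
    """
    n_states, n_actions = R.shape
    v = np.zeros(n_states)
    while True:
        targets = np.array([
            max(R[s, a] + gamma * v[P[s, a]] for a in range(n_actions))
            for s in range(n_states)
        ])
        if f is not None:
            targets = np.maximum(targets, f)  # value target lower bounding
        delta = np.max(np.abs(targets - v))
        v = targets
        if delta < theta:
            return v

# 3-state chain: action 0 moves right, action 1 stays; reward 1 on reaching state 2
P = np.array([[1, 0], [2, 1], [2, 2]])
R = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
v_plain = lb_value_iteration(P, R)
# any f below the optimal value leaves the fixed point unchanged (Theorem 1)
v_lb = lb_value_iteration(P, R, f=v_plain - 0.5)
```

Note the caveat from the text: with lower bounding, the Δ < θ stopping rule is no longer guaranteed by contraction, though it still terminates on this toy example.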
One difficulty caused by this change is that the stopping criterion in Algorithm 1 (Δ < θ) no longer works, as we do not have access to the converged value during learning. This is perhaps not a serious concern in practice, as algorithms are often trained for a fixed number of iterations or time steps. Second, based on the proof, the new algorithm is at least as fast as the original. When the lower bound actually improves the value target, i.e., f(s) > B(v1)(s), there is a chance for the convergence to be faster. Convergence is strictly faster when the lower bound f has an impact on the L∞ distance between the current value and the optimal value, i.e., it increases the value target for the states where the differences between the value target and the optimal value are the largest. Third, the lower bound function doesn't have to be static during training. As long as there is a single f during each iteration, the convergence property is preserved. Fourth, the theory works even when the underlying MDP is stochastic. Only the lower bounds based on empirical return introduced below require the MDP to be deterministic. ¹See, for example, page 8 of https://people.eecs.berkeley.edu/~pabbeel/cs287-fa09/lecture-notes/lecture5-2pp.pdf for the gist of the proof. 3 EXAMPLE LOWER BOUND FUNCTIONS. We show a few cases where lower bound functions can be readily obtained from the training experience. Future work may investigate alternative lower bounds. 3.1 EPISODIC TASKS. In episodic tasks, discounted return is only accumulated up to the last step of an episode. In this case, we can wait until an episode ends, and compute the future discounted returns of all time steps inside the episode. This discounted return is guaranteed to be a lower bound of the optimal value if the environment is deterministic, i.e., the reward sequence can be repeated using the exact same sequence of actions.
(The behavior policy need not be deterministic, as long as the policy class contains the deterministic optimal policy.) To make training efficient, we can compute and store such discounted returns in the replay buffer for each time step, and simply read them out during training. We call this variant lb-DR, short for lower bounding with discounted return. 3.1.1 EPISODIC WITH HINDSIGHT RELABELED GOALS. In goal-conditioned tasks, one helpful technique is hindsight goal relabeling (Andrychowicz et al., 2017). It takes a future state that is d time steps away from the current state as the hindsight/relabeled goal for the current state. When the goal is reached, a reward of 0 is given; otherwise a −1 reward is given for each time step. In this case, we know it took d steps to reach the hindsight goal, so the discounted future return is:

  R_d = Σ_{i=0}^{d−1} (−1) γ^i = −(1 − γ^d) / (1 − γ)   (2)

This calculation can be done on the fly as hindsight relabeling happens, requiring no extra space and very little computation. We call this variant lb-GD, short for lower bounding with goal-distance based return. Additionally, we can apply lb-DR and lb-GD together, with discounted-return lower bounding (lb-DR) on the original experience and goal-distance return lower bounding (lb-GD) on the hindsight experience, giving the lb-DR+GD variant, which was used by Fujita et al. (2020) independently. 3.2 NON-EPISODIC TASKS WITH POSITIVE REWARDS. When the task is continuing, without an episode end, the discounted return needs to be accumulated all the way to infinity. This makes it difficult to lower bound the value if rewards can be negative. When rewards are always non-negative, one can still use the discounted return of the future n steps to lower bound the value. Chapter 3.3 of Sutton & Barto (2018) has more details on episodic vs. continuing tasks. 4 INTEGRATION INTO RL ALGORITHMS. 4.1 BACKGROUND.
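A small sketch of how the lb-DR and lb-GD lower bounds could be computed. The function names are illustrative assumptions; lb-DR assumes the episode has already finished so all its rewards are known, and lb-GD is the closed form of Eq. (2).

```python
def discounted_returns(rewards, gamma=0.99):
    """lb-DR: future discounted return at every step of one finished episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g  # accumulate backward from the episode end
        returns.append(g)
    return returns[::-1]

def goal_distance_return(d, gamma=0.99):
    """lb-GD: return for a hindsight goal reached after d steps of -1 reward, Eq. (2)."""
    return -(1.0 - gamma ** d) / (1.0 - gamma)
```

For a hindsight goal reached after d steps of −1 reward, the backward accumulation and the closed form of Eq. (2) agree, which is why lb-GD needs no stored returns at all.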
The value target lower bounds can be readily plugged into RL algorithms that regress the value to a target, e.g., DQN, DDPG, or SAC. In these algorithms, the action value q(s, a) is learned through a squared loss with the target value y. With one-step TD return, for a batch B of experience {s, a → r, s'}, the loss is:

  L_q := Σ_{(s, a, r, s') ∈ B} |q(s, a) − y|²   (3)

In one-step TD return, y is the one-step TD return q̂(s, a, r, s'):

  q̂(s, a, r, s') := r(s, a) + γ q'(s', μ'(s'))   (4)

Here, q' and μ' are the bootstrap value and policy functions, typically following the value and policy functions on a delayed schedule during training. (They are also called the "target value" and "target policy", and are very different from the "value target" y in this paper.) 4.2 VALUE TARGET LOWER BOUNDING. With lower bounding, we replace the value target y with the lower bounded target:

  y ← max(f, q̂(s, a, r, s')) = max(f, r + γ q'(s', μ'(s')))   (5)

This is subtly but importantly different from lower bounding the q value directly (Oh et al., 2018; Tang, 2020): q(s, a) ← max(f, q(s, a)), which stays overestimated if q(s, a) initially overestimates. This is the same as was done by Fujita et al. (2020) (confirmed via personal communication). This way of simply lower bounding the value target does not require any tuning parameter, but one can always interpolate between the two value targets using a mixing weight α:

  y ← (1 − α) q̂(s, a) + α max(f, q̂(s, a))   (6)

A small α dampens the effect of the new value target, and may be desirable in practice when assumptions of the theorem can be violated, e.g., for non-deterministic tasks. See Appendix A.2 for an illustrative example of how value target lower bounding works in practice.
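Eqs. (4)-(6) reduce to a one-line target computation. The sketch below is a NumPy illustration under stated assumptions: the function name and the toy batch are invented, and a real implementation would operate on framework tensors and zero out the bootstrap term at terminal states.

```python
import numpy as np

def lower_bounded_target(r, q_next, f, gamma=0.99, alpha=1.0):
    """TD target with value target lower bounding, per Eqs. (4)-(6).

    r: batch rewards; q_next: bootstrapped q'(s', mu'(s')); f: stored lower
    bounds (e.g., lb-DR discounted returns). alpha=1 recovers
    y = max(f, q_hat); alpha=0 recovers the plain Bellman target.
    """
    q_hat = np.asarray(r) + gamma * np.asarray(q_next)           # Eq. (4)
    return (1.0 - alpha) * q_hat + alpha * np.maximum(f, q_hat)  # Eqs. (5)/(6)

# toy batch: the empirical return f lifts the target only where it exceeds q_hat
r = np.array([0.0, 1.0])
q_next = np.array([0.5, 2.0])
f = np.array([1.0, 0.0])
y = lower_bounded_target(r, q_next, f, gamma=0.9)
```

Note that the clamp is applied to the target y, not to q(s, a) itself, matching the distinction the text draws against lower bounding the q value directly.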
| This paper proposed value target lower bounding as a simple modification to the Bellman operator that intends to improve convergence speed. It proves that using such a lower bound in the Bellman backup does not change the fixed point in the tabular setting. The paper then proposes two instantiations of particular value target lower bounds: first by using the return in deterministic episodic MDPs and second by hindsight relabeling in goal-directed tasks. Finally, the paper offers some experiments using the proposed lower bounds. | SP:7f894264c42d9e9670233250810e71c20d2f7fcf |
Faster Reinforcement Learning with Value Target Lower Bounding | 1 INTRODUCTION . In temporal difference ( TD ) learning , the value function is adjusted toward its Bellman target , which is the reward of the current step plus the discounted value of the next state . This forms the basis of many state of the art reinforcement learning ( RL ) algorithms such as DQN ( Mnih et al. , 2013 ) , DDPG ( Lillicrap et al. , 2015 ) , TD3 ( Fujimoto et al. , 2018 ) , and SAC ( Haarnoja et al. , 2018 ) . The value of the next state is typically estimated using a “ bootstrapped value ” based on the value function itself , which is being actively learned during training . The bootstrapped values can be random or very inaccurate , especially at the initial stage of training . Consequently , the Bellman value targets as well as the learned value are usually far away from the optimal value . Naturally , this leads to the following idea : If we can make the value target closer to the optimal value , we may speedup TD learning . For example , we know that the optimal value is just the expected discounted return of the optimal policy , which always upper bounds the expected return of any policy . For episodic RL tasks , we could use the observed discounted return up to episode end from the training trajectories to lower bound the value target . This makes the new value target closer to the optimal value , when the empirical return is higher than the Bellman target . Will such a way of lower bounding the value target work : Will it still converge ? Will it converge to the optimal value ? Will it speed up value learning ? 2 THEORETICAL RESULTS FOR THE TABULAR CASE . For the tabular case , value target lower bounding converges to the same optimal value as the original Bellman value learning , and the proof is also straightforward . 2.1 BACKGROUND . In finite MDPs with a limited number of states and actions , a table can be used to keep track of the value of each state . 
Using dynamic programming algorithms such as value iteration, values are guaranteed to converge to the optimal value through Bellman updates (Chapter 4.4, Sutton & Barto, 2018).

Algorithm 1: Bellman value iteration with value target lower bounding
Data: finite MDP p(s′, r | s, g, a), lower bound function f, convergence threshold θ
Result: state value v(s)
1: v(s) ← 0
2: repeat
3:   Δ ← 0
4:   for each state s do
5:     v ← v(s)
6:     v(s) ← max( f, max_a Σ_{s′,r} p(s′, r | s, g, a) [r + γ v(s′)] )
7:     Δ ← max(Δ, |v(s) − v|)
8:   end
9: until Δ < θ

The core of the algorithm is the Bellman update of the value function, B(v):

B(v)(s) := max_a Σ_{s′,r} p(s′, r | s, g, a) [r + γ v(s′)]    (1)

It is well known that the Bellman operator B is a contraction mapping over value functions (Denardo, 1967). That is, for any two value functions v1 and v2, |B(v1) − B(v2)| ≤ γ |v1 − v2| for the discount factor γ ∈ [0, 1). This guarantees that any value function under the algorithm converges to the optimal value.1

2.2 VALUE TARGET LOWER BOUNDING CONVERGENCE THEOREM.

Theorem 1. Suppose the optimal value under the Bellman operator is B∞(v). For any value function f that lower bounds the optimal value, i.e., ∀s, f(s) ≤ B∞(v)(s), if we define the lower bounded Bellman operator as Mf ∘ B(v) := max(B(v), f), then (Mf ∘ B)∞(v) converges to B∞(v).

A few things to note about the proof (see Appendix A.1). First, this only proves convergence, not contraction under the original ||v1 − v2||∞ metric. In the case of the Bellman operator, contraction shows that for all value functions v1, v2, ||B(v1) − B(v2)||∞ ≤ γ ||v1 − v2||∞. For value target lower bounding, there can be counterexamples where Mf ∘ B does not always contract in the original metric space for value functions. Here, convergence relies on the convergence of Bellman value iteration and the existence of the fixed point v∗.
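As a concrete illustration, here is a minimal numpy sketch of Algorithm 1 for a tabular MDP. The array layout (`P[s, a, s2]`, `R[s, a]`) and the function name are our own illustrative choices, not from the paper; the lower bound `f` is applied exactly as in line 6 of Algorithm 1.

```python
import numpy as np

def lb_value_iteration(P, R, f, gamma=0.9, theta=1e-8):
    """Sketch of Algorithm 1: value iteration whose Bellman target is
    clipped from below by a lower bound f (Theorem 1 requires f <= v*).

    P: transitions, P[s, a, s2] = p(s2 | s, a)  (illustrative layout)
    R: expected rewards, R[s, a]
    f: per-state lower bound on the optimal value
    """
    v = np.zeros(P.shape[0])
    while True:
        # line 6: v(s) <- max( f, max_a sum_s' p(s'|s,a) [r + gamma v(s')] )
        v_new = np.maximum(f, (R + gamma * (P @ v)).max(axis=1))
        delta = np.abs(v_new - v).max()  # line 7
        v = v_new
        if delta < theta:                # stopping criterion (Delta < theta)
            return v
```

On a toy MDP, any valid lower bound f ≤ v∗ leaves the fixed point unchanged while never slowing convergence, matching Theorem 1.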
One difficulty caused by this change is that the stopping criterion in Algorithm 1 (Δ < θ) no longer works, as we do not have access to the converged value during learning. This is perhaps not a serious concern in practice, as algorithms are often trained for a fixed number of iterations or time steps. Second, based on the proof, the new algorithm is at least as fast as the original. When the lower bound actually improves the value target, i.e., f(s) > B(v1)(s), there is a chance for the convergence to be faster. Convergence is strictly faster when the lower bound f has an impact on the L∞ distance between the current value and the optimal value, i.e., when it increases the value target for the states where the differences between the value target and the optimal value are the largest. Third, the lower bound function doesn't have to be static during training. As long as there is a single f during each iteration, the convergence property is preserved. Fourth, the theory holds even when the underlying MDP is stochastic. Only the lower bounds based on empirical return introduced below require the MDP to be deterministic.

1See, for example, page 8 for the gist of the proof: https://people.eecs.berkeley.edu/~pabbeel/cs287-fa09/lecture-notes/lecture5-2pp.pdf

3 EXAMPLE LOWER BOUND FUNCTIONS.

We show a few cases where lower bound functions can be readily obtained from the training experience. Future work may investigate alternative lower bounds.

3.1 EPISODIC TASKS.

In episodic tasks, discounted return is only accumulated up to the last step of an episode. In this case, we can wait until an episode ends and compute the future discounted returns of all time steps inside the episode. This discounted return is guaranteed to be a lower bound of the optimal value if the environment is deterministic, i.e., the reward sequence can be repeated using the exact same sequence of actions.
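A minimal sketch of how these empirical lower bounds could be computed once an episode ends (function names are our own, not from the paper). `discounted_returns` gives the per-step lb-DR bound by a single backward pass over the episode's rewards; `goal_distance_return` is the closed-form goal-distance return of equation 2 below, for the 0/−1 reward convention of hindsight relabeling.

```python
def discounted_returns(rewards, gamma):
    """Per-step future discounted return G_t = sum_{k>=t} gamma^(k-t) r_k,
    computed by one backward pass over a finished episode (lb-DR).
    Valid as a lower bound on v* only for deterministic episodic tasks."""
    G, out = 0.0, [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        out[t] = G
    return out

def goal_distance_return(d, gamma):
    """Closed-form return when a hindsight goal is d steps away (lb-GD):
    a -1 reward for each of the d steps gives -(1 - gamma^d)/(1 - gamma)."""
    return -(1.0 - gamma ** d) / (1.0 - gamma)
```

The lb-DR values would be stored alongside each transition in the replay buffer; lb-GD needs no storage at all.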
(The behavior policy need not be deterministic, as long as the policy class contains the deterministic optimal policy.) To make training efficient, we can compute and store such discounted returns in the replay buffer for each time step, and simply read them out during training. We call this variant lb-DR, short for lower bounding with discounted return.

3.1.1 EPISODIC WITH HINDSIGHT RELABELED GOALS.

In goal conditioned tasks, one helpful technique is hindsight goal relabeling (Andrychowicz et al., 2017). It takes a future state that is d time steps away from the current state as the hindsight/relabeled goal for the current state. When the goal is reached, a reward of 0 is given; otherwise a −1 reward is given for each time step. In this case, we know it took d steps to reach the hindsight goal, so the discounted future return is:

R_d = Σ_{i=0}^{d−1} (−1)·γ^i = −(1 − γ^d)/(1 − γ)    (2)

This calculation can be done on the fly as hindsight relabeling happens, requiring no extra space and very little computation. We call this variant lb-GD, short for lower bounding with goal distance based return. Additionally, we can also apply lb-DR and lb-GD together, with discounted return lower bounding (lb-DR) on the original experience and goal distance return lower bounding (lb-GD) on the hindsight experience, giving the lb-DR+GD variant, which was used by Fujita et al. (2020) independently.

3.2 NON-EPISODIC TASKS WITH POSITIVE REWARDS.

When the task is continuing, without an episode end, the discounted return needs to be accumulated all the way to infinity. This makes it difficult to lower bound the value if rewards can be negative. When rewards are always non-negative, one can still use the discounted return of the future n steps to lower bound the value. Chapter 3.3 of Sutton & Barto (2018) has more details on episodic vs. continuing tasks.

4 INTEGRATION INTO RL ALGORITHMS.

4.1 BACKGROUND.
The value target lower bounds can be readily plugged into RL algorithms that regress value to a target, e.g., DQN, DDPG or SAC. In these algorithms, the action value q(s, a) is learned through a squared loss with the target value y. For a batch B of experience {s, a → r, s′}, the loss is:

Lq := Σ_{(s,a,r,s′)∈B} |q(s, a) − y|²    (3)

In one step TD return, y is the one step TD return q̂(s, a, r, s′):

q̂(s, a, r, s′) := r(s, a) + γ q′(s′, µ′(s′))    (4)

Here, q′ and µ′ are the bootstrap value and policy functions, typically following the value and policy functions on a delayed schedule during training. (They are also called "target value" and "target policy", and are very different from the "value target" y in this paper.)

4.2 VALUE TARGET LOWER BOUNDING.

With lower bounding, we replace the value target y with the lower bounded target:

y ← max(f, q̂(s, a, r, s′)) = max(f, r + γ q′(s′, µ′(s′)))    (5)

This is subtly but importantly different from lower bounding the q value directly (Oh et al., 2018; Tang, 2020): q(s, a) ← max(f, q(s, a)), which stays overestimated if q(s, a) initially overestimates. Our target is the same as that of Fujita et al. (2020) (confirmed via personal communication). This way of simply lower bounding the value target does not require any tuning parameter, but one can always interpolate between the two value targets using a mixing weight α:

y ← (1 − α) q̂(s, a) + α max(f, q̂(s, a))    (6)

A small α dampens the effect of the new value target, and may be desirable in practice when assumptions of the theorem can be violated, e.g., for non-deterministic tasks. See Appendix A.2 for an illustrative example of how value target lower bounding works in practice. | This paper proposes a new RL algorithm based on a modified Bellman backup equation.
The main idea is to estimate the value of a state in multiple ways (using a Q function and using Monte Carlo returns) and then to take the maximum over these estimates. The paper shows that, if all estimators are lower bounds on the true value, then the proposed method converges to the optimal policy. Experiments confirm that the proposed method outperforms standard Q learning (i.e., with regular Bellman backups) on some tasks. | SP:7f894264c42d9e9670233250810e71c20d2f7fcf |
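A minimal numpy sketch of the lower-bounded TD target described in the paper above (its equations 5 and 6); the function name and scalar/array arguments are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def lower_bounded_target(r, q_next, f, gamma, alpha=1.0):
    """TD target with value target lower bounding.

    q_hat = r + gamma * q'(s', mu'(s'))            (one-step TD target)
    y     = (1 - alpha) * q_hat + alpha * max(f, q_hat)
    alpha = 1 is plain lower bounding; alpha < 1 dampens it, which may
    help when the determinism assumption is violated.
    """
    q_hat = r + gamma * q_next
    return (1.0 - alpha) * q_hat + alpha * np.maximum(f, q_hat)
```

Note that the max is applied to the target y, not to q(s, a) itself, so an initially overestimated q is still pulled down toward the target.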
Semi-supervised learning of partial differential operators and dynamical flows | 1 INTRODUCTION . The evolution of classical and quantum physical dynamical systems in space and time is generically modeled by non-linear partial differential equations . Such are , for instance , Einstein equations of General Relativity , Maxwell equations of Electromagnetism , Schrödinger equation of Quantum Mechanics and Navier-Stokes ( NS ) equations of fluid flows . These equations , together with appropriate initial and boundary conditions , provide a complete quantitative description of the physical world within their regime of validity . Since these dynamic evolution settings are governed by partial differential operators that are often highly non-linear , it is rare to have analytical solutions for dynamic systems . This is especially true when the system contains a large number of interacting degrees of freedom in the non-linear regime . Consider , as an example , the NS equations , which describe the motion of viscous fluids . In the regime of high Reynolds numbers of the order of one thousand , one observes turbulences , in which all symmetries are broken and all analytical techniques fail . The solution to these ( deterministic ) equations seems almost random and is very sensitive to the initial conditions . Many numerical techniques have been developed for constructing and analysing the solutions to fluid dynamics systems . However , the complexity of these solvers grows quickly as the spacing in the grid that is used for approximating the solution is reduced and the degrees of freedom of the interacting fluid increases . Given the theoretical and practical importance of constructing solutions to these equations , it is natural to ask whether neural networks can learn such evolution equations and construct new solutions . 
The two fundamental questions are : ( i ) The ability to generalize to initial conditions that are different from those presented in the training set , and ( ii ) The ability to generalize to new time points beyond the fixed grid points provided during training . The reason to hope that such tasks can be performed by machine learning is that despite the seemingly random behaviour of , e.g . fluid flows in the turbulent regime , there is an underlying low-entropy structure that can be learnt . Indeed , in diverse cases , neural network-based solvers have been shown to provide comparable results to other numerical methods , while utilizing fewer resources . Our Contributions We present a hyper-network based solver combined with a Fourier Neural Operator architecture . • Our hyper-network architecture treats time and space separately . Utilizing a data set of initial conditions and the corresponding solutions at a labeled fixed time , the network learns a large class of time evolution PDEs . • Our network successfully propagates initial conditions in discrete time steps by implementing the general composition properties of the partial differential operators . • Our solutions improve the learning accuracy at the supervision time-points . • Our solutions are able to interpolate and extrapolate to arbitrary ( unlabelled ) times . • We thoroughly test our method on various time evolution PDEs , including nonlinear fluid flows in one , two and three spatial dimensions . 2 RELATED WORK . Hypernetworks . While conventional networks employ a fixed set of pre-determined parameters , which is independent of the input , the hypernetwork scheme , invented multiple times , and coined by Ha et al . ( 2016 ) , allows the parameters of a neural network to explicitly rely on the input by combining two neural networks . The first neural network , called the hypernetwork , processes the input or part of it and outputs the weights of a second neural network . 
The second network, called the primary network, has a fixed architecture and weights that vary based on the input. Given its input, it returns the final output. This framework has been used successfully in a variety of tasks, ranging from computer vision (Littwin & Wolf, 2019) to continual learning (von Oswald et al., 2019) and language modeling (Suarez, 2017). While it is natural to learn functions with hypernetworks, since the primary network can be seen as a dynamic, input-dependent function, we are not aware of any previous work that applies this scheme for recovering physical operators. Neural network-based PDE solvers. Due to the well-known limitations of traditional PDE solvers on one hand, and in light of new advances made in the field of neural networks on the other, we have lately witnessed very significant progress in the field of neural network-based PDE solvers (Karniadakis et al., 2021). Neural network-based PDE solvers can be roughly divided into two groups according to the resource they utilize for learning: data-driven and model-based. Data-driven solvers train on a large dataset containing initial conditions and final states, all at the same time point T (Zhu & Zabaras, 2018). These solvers predict the solution for new initial conditions at the same T, but cannot interpolate or extrapolate to times that are not increments of T. However, as the PDE itself is not required for such solvers, they can be used in cases where the underlying PDE is unknown. Data-driven approaches often reduce the problem setting of PDE solvers to the well-known problem setting of image-to-image mapping in computer vision. Following this approach, it is customary to use networks such as U-Net (Ronneberger et al., 2015). Model-based solvers, known as Physics Informed Neural Networks (PINNs) (Raissi et al., 2019), harness the differential operator itself for supervision.
This is done by defining a loss , the residual of the PDE . These solvers do not require a training dataset and can provide solutions for arbitrary times . Both approaches learn solutions on a specific discretized grid , which poses a limitation for any practical applications . Recently , a mesh-invariant data-driven direction has been proposed ( Lu et al. , 2019 ; Bhattacharya et al. , 2020 ; Nelsen & Stuart , 2021 ; Li et al. , 2020b ; Patel et al. , 2021 ) . The mesh invariance is obtained by learning operators rather than mappings between initial and final states . Li et al . ( 2020c ) have advanced the mesh-invariant line of work by introducing Fourier Neural Operators ( FNO ) . FNOs utilize both a convolution layer in real space and a Fourier integral layer in the Fourier domain . It has been shown that the FNO solver outperforms previous solvers in a number of important PDEs . FNO has a major limitation : the method ( in the framework of learning from pairs of initial states and final solutions ) can not be used to provide solutions for interpolation and arbitrary extrapolations . Knowing the solutions along the complete time evolution trajectory is highly valuable for theoretical and practical reasons and provides means for learning new dynamical principles . 3 PARTIAL DIFFERENTIAL EQUATIONS . Consider a d-dimensional vector field v ( x , t ) : Td×R→ Rd , where x = ( x1 , . . . , xd ) are periodic spatial coordinates xi ' xi + 2π on the d-dimensional torus , Td , and t is the time coordinate . The vector field evolves dynamically from the initial condition v ( x , t = 0 ) : Td → Rd according to a non-linear PDE of the form , ∂tv ( x , t ) = Lv ( x , t ) , ( 1 ) where L is a differential operator that does not depend explicitly on time , and has an expansion in v and its spatial derivatives . We assume that given a regular bounded initial vector field there is a unique regular solution to equation 1 . 
Since L does not depend explicitly on time, we can formally write the solution of such equations as

v(x, t) = e^{tL} v(x, 0) ≡ Φt v(x, 0).    (2)

Because of dissipation terms, such as the viscosity term in the NS equations, solutions to equation 1 generically break time reversal invariance (t → −t, v → −v). Furthermore, the total energy of the system is non-increasing as a function of time, since we are not injecting energy at t ≠ 0:

∂t ∫ d^d x (v · v)/2 ≤ 0.    (3)

Equation 3 serves as a consistency check on the network solutions. In this work, we consider the following non-linear PDEs:

(i) Burgers equation, describing one-dimensional compressible fluid flows with scalar velocity field v(x, t):
∂t v + v ∇v = ν ∆v,    (4)
where ν is the kinematic viscosity, ∇ the gradient and ∆ the Laplacian.

(ii) Generalized one-dimensional Burgers equation with a parameter q = 2, 3, 4:
∂t v + v^q ∇v = ν ∆v.    (5)

(iii) One-dimensional Chafee–Infante equation with a constant parameter λ, modeling reaction-diffusion dynamics:
∂t v + λ (v^3 − v) = ∆v.    (6)

(iv) Two-dimensional Burgers equation for a two-dimensional vector field v(x, t) = (v1, v2), describing compressible fluid flows in two space dimensions:
∂t v + (v · ∇) v = ν ∆v.    (7)

(v) Two- and three-dimensional incompressible NS equations:
∂t v + (v · ∇) v = −∇p + ν ∆v,  ∇ · v = 0,    (8)
where p is the fluid's pressure, v(x, t) = (v1, ..., vd) and ν is the kinematic viscosity.

4 METHOD.

Typically, a data-driven PDE solver, such as a neural network, evolves unseen velocity fields v(x, t = 0) to a fixed time T,

ΦT v(x, t = 0) = v(x, T),    (9)

by learning from a set of i = 1 ... N initial conditions sampled at t = 0, vi(x, t = 0), and their corresponding time-evolved solutions vi(x, t = T) of equation 1.
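The energy bound of equation 3 is straightforward to monitor numerically. Below is a small sketch (our own illustrative code, not from the paper) that checks the discrete analogue of equation 3 on a sampled trajectory, e.g., as a sanity check on network outputs.

```python
import numpy as np

def energy(v, dx):
    """Discrete analogue of (1/2) * integral of v.v over a uniform grid."""
    return 0.5 * float(np.sum(v * v)) * dx

def energy_nonincreasing(trajectory, dx, tol=1e-9):
    """Check equation 3: total energy must not grow along the trajectory.
    trajectory: array of shape (n_times, n_grid_points)."""
    E = [energy(v, dx) for v in trajectory]
    return all(E[i + 1] <= E[i] + tol for i in range(len(E) - 1))
```

A predicted trajectory that violates this monotonicity is physically inconsistent regardless of its pointwise accuracy.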
We generalize ΦT , to propagate solutions at intermediate times , 0 ≤ t ≤ T , by elevating a standard neural architecture to a hyper-network one . A hyper-network architecture is a composition of two neural networks , a primary network fw ( x ) and a hypernetwork gθ ( t ) with learned parameters θ , such that the parameters w of f are given as the output of g. Unlike common neural architectures , where a single input is mapped to a single output , in this construction , the input t is mapped to a function , fgθ ( t ) , which maps x to its output . Applying this to our case we have , Φtv ( x , 0 ) = fgθ ( t ) ( v ( x , 0 ) ) = v ( x , t ) . ( 10 ) This architecture may be used to learn not only the time-evolved solutions at t = T , but also intermediate solutions , at 0 < t < T , without explicitly providing the network with any intermediate time solution . This task may be accomplished by utilizing general consistency conditions , which apply to any equation of the form equation 1 , Φt1Φt2 = Φt2Φt1 = Φt1+t2 . ( 11 ) Notice that Φ0 = Id follows from equation 11 . | The paper combines the FNO architecture with hypernetworks to learn the solution operator (flow map) of Markovian PDE systems. A hypernetwork whose input is the time domain (\R_+) is trained to output the weights of a FNO which then acts on an initial condition to produce the PDE solution at the time input to the hypernetwork. Training is done by uniformly sampling random times on a bounded input domain and exploiting the Markov property of the flow map, allowing to use as data only the initial condition and the solution at a final time without a need for the entire time series. Numerical experiments showing a modest improvement over the standard FNO are performed on the 1-d Burgers' equation, generalized Burgers' equation, and the Chafee–Infante equation as well as on the 2-d Burgers' equation, and the Navier-Stokes equation. | SP:ade2bc3c672043e09debf5ff560deb6eb9c16c1d |
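A minimal numpy sketch of the hyper-network construction in equation 10 of the paper above: a hypernetwork g maps the time t to the flat weight vector of a small primary network f, which then maps the initial state to the state at time t. All sizes and the affine form of g here are illustrative stand-ins (in the paper the primary network is an FNO).

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 4, 8                      # primary net: D -> H -> D (toy sizes)
N_W = D * H + H * D              # number of primary-net weights
theta = rng.normal(scale=0.1, size=(2, N_W))  # hypernetwork parameters

def g(t):
    """Hypernetwork g_theta: time t -> flat weights w of the primary net.
    Here an affine map of t; in practice g would itself be a network."""
    return np.array([1.0, t]) @ theta

def f(w, v):
    """Primary network with externally supplied weights w."""
    W1 = w[: D * H].reshape(D, H)
    W2 = w[D * H:].reshape(H, D)
    return np.tanh(v @ W1) @ W2

def propagate(v0, t):
    """Phi_t v0 = f_{g_theta(t)}(v0), as in equation 10."""
    return f(g(t), v0)
```

Training would fit theta so that propagate(v0, T) matches the labeled solution at time T, while the composition consistency Φ_{t1} Φ_{t2} = Φ_{t1+t2} (equation 11) supervises intermediate times.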
Semi-supervised learning of partial differential operators and dynamical flows | 1 INTRODUCTION . The evolution of classical and quantum physical dynamical systems in space and time is generically modeled by non-linear partial differential equations . Such are , for instance , Einstein equations of General Relativity , Maxwell equations of Electromagnetism , Schrödinger equation of Quantum Mechanics and Navier-Stokes ( NS ) equations of fluid flows . These equations , together with appropriate initial and boundary conditions , provide a complete quantitative description of the physical world within their regime of validity . Since these dynamic evolution settings are governed by partial differential operators that are often highly non-linear , it is rare to have analytical solutions for dynamic systems . This is especially true when the system contains a large number of interacting degrees of freedom in the non-linear regime . Consider , as an example , the NS equations , which describe the motion of viscous fluids . In the regime of high Reynolds numbers of the order of one thousand , one observes turbulences , in which all symmetries are broken and all analytical techniques fail . The solution to these ( deterministic ) equations seems almost random and is very sensitive to the initial conditions . Many numerical techniques have been developed for constructing and analysing the solutions to fluid dynamics systems . However , the complexity of these solvers grows quickly as the spacing in the grid that is used for approximating the solution is reduced and the degrees of freedom of the interacting fluid increases . Given the theoretical and practical importance of constructing solutions to these equations , it is natural to ask whether neural networks can learn such evolution equations and construct new solutions . 
The two fundamental questions are : ( i ) The ability to generalize to initial conditions that are different from those presented in the training set , and ( ii ) The ability to generalize to new time points beyond the fixed grid points provided during training . The reason to hope that such tasks can be performed by machine learning is that despite the seemingly random behaviour of , e.g . fluid flows in the turbulent regime , there is an underlying low-entropy structure that can be learnt . Indeed , in diverse cases , neural network-based solvers have been shown to provide comparable results to other numerical methods , while utilizing fewer resources . Our Contributions We present a hyper-network based solver combined with a Fourier Neural Operator architecture . • Our hyper-network architecture treats time and space separately . Utilizing a data set of initial conditions and the corresponding solutions at a labeled fixed time , the network learns a large class of time evolution PDEs . • Our network successfully propagates initial conditions in discrete time steps by implementing the general composition properties of the partial differential operators . • Our solutions improve the learning accuracy at the supervision time-points . • Our solutions are able to interpolate and extrapolate to arbitrary ( unlabelled ) times . • We thoroughly test our method on various time evolution PDEs , including nonlinear fluid flows in one , two and three spatial dimensions . 2 RELATED WORK . Hypernetworks . While conventional networks employ a fixed set of pre-determined parameters , which is independent of the input , the hypernetwork scheme , invented multiple times , and coined by Ha et al . ( 2016 ) , allows the parameters of a neural network to explicitly rely on the input by combining two neural networks . The first neural network , called the hypernetwork , processes the input or part of it and outputs the weights of a second neural network . 
The second network, called the primary network, has a fixed architecture and weights that vary based on the input. Given its input, it returns the final output. This framework has been used successfully in a variety of tasks, ranging from computer vision (Littwin & Wolf, 2019) to continual learning (von Oswald et al., 2019) and language modeling (Suarez, 2017). While it is natural to learn functions with hypernetworks, since the primary network can be seen as a dynamic, input-dependent function, we are not aware of any previous work that applies this scheme for recovering physical operators. Neural network-based PDE solvers. Due to the well-known limitations of traditional PDE solvers on one hand, and in light of new advances made in the field of neural networks on the other, we have lately witnessed very significant progress in the field of neural network-based PDE solvers (Karniadakis et al., 2021). Neural network-based PDE solvers can be roughly divided into two groups according to the resource they utilize for learning: data-driven and model-based. Data-driven solvers train on a large dataset containing initial conditions and final states, all at the same time point T (Zhu & Zabaras, 2018). These solvers predict the solution for new initial conditions at the same T, but cannot interpolate or extrapolate to times that are not increments of T. However, as the PDE itself is not required for such solvers, they can be used in cases where the underlying PDE is unknown. Data-driven approaches often reduce the problem setting of PDE solvers to the well-known problem setting of image-to-image mapping in computer vision. Following this approach, it is customary to use networks such as U-Net (Ronneberger et al., 2015). Model-based solvers, known as Physics Informed Neural Networks (PINNs) (Raissi et al., 2019), harness the differential operator itself for supervision.
This is done by defining a loss , the residual of the PDE . These solvers do not require a training dataset and can provide solutions for arbitrary times . Both approaches learn solutions on a specific discretized grid , which poses a limitation for any practical applications . Recently , a mesh-invariant data-driven direction has been proposed ( Lu et al. , 2019 ; Bhattacharya et al. , 2020 ; Nelsen & Stuart , 2021 ; Li et al. , 2020b ; Patel et al. , 2021 ) . The mesh invariance is obtained by learning operators rather than mappings between initial and final states . Li et al . ( 2020c ) have advanced the mesh-invariant line of work by introducing Fourier Neural Operators ( FNO ) . FNOs utilize both a convolution layer in real space and a Fourier integral layer in the Fourier domain . It has been shown that the FNO solver outperforms previous solvers in a number of important PDEs . FNO has a major limitation : the method ( in the framework of learning from pairs of initial states and final solutions ) can not be used to provide solutions for interpolation and arbitrary extrapolations . Knowing the solutions along the complete time evolution trajectory is highly valuable for theoretical and practical reasons and provides means for learning new dynamical principles . 3 PARTIAL DIFFERENTIAL EQUATIONS . Consider a d-dimensional vector field v ( x , t ) : Td×R→ Rd , where x = ( x1 , . . . , xd ) are periodic spatial coordinates xi ' xi + 2π on the d-dimensional torus , Td , and t is the time coordinate . The vector field evolves dynamically from the initial condition v ( x , t = 0 ) : Td → Rd according to a non-linear PDE of the form , ∂tv ( x , t ) = Lv ( x , t ) , ( 1 ) where L is a differential operator that does not depend explicitly on time , and has an expansion in v and its spatial derivatives . We assume that given a regular bounded initial vector field there is a unique regular solution to equation 1 . 
Since L does not depend explicitly on time, we can formally write the solution of such equations as

v(x, t) = e^{tL} v(x, 0) ≡ Φt v(x, 0).    (2)

Because of dissipation terms, such as the viscosity term in the NS equations, solutions to equation 1 generically break time reversal invariance (t → −t, v → −v). Furthermore, the total energy of the system is non-increasing as a function of time, since we are not injecting energy at t ≠ 0:

∂t ∫ d^d x (v · v)/2 ≤ 0.    (3)

Equation 3 serves as a consistency check on the network solutions. In this work, we consider the following non-linear PDEs:

(i) Burgers equation, describing one-dimensional compressible fluid flows with scalar velocity field v(x, t):
∂t v + v ∇v = ν ∆v,    (4)
where ν is the kinematic viscosity, ∇ the gradient and ∆ the Laplacian.

(ii) Generalized one-dimensional Burgers equation with a parameter q = 2, 3, 4:
∂t v + v^q ∇v = ν ∆v.    (5)

(iii) One-dimensional Chafee–Infante equation with a constant parameter λ, modeling reaction-diffusion dynamics:
∂t v + λ (v^3 − v) = ∆v.    (6)

(iv) Two-dimensional Burgers equation for a two-dimensional vector field v(x, t) = (v1, v2), describing compressible fluid flows in two space dimensions:
∂t v + (v · ∇) v = ν ∆v.    (7)

(v) Two- and three-dimensional incompressible NS equations:
∂t v + (v · ∇) v = −∇p + ν ∆v,  ∇ · v = 0,    (8)
where p is the fluid's pressure, v(x, t) = (v1, ..., vd) and ν is the kinematic viscosity.

4 METHOD.

Typically, a data-driven PDE solver, such as a neural network, evolves unseen velocity fields v(x, t = 0) to a fixed time T,

ΦT v(x, t = 0) = v(x, T),    (9)

by learning from a set of i = 1 ... N initial conditions sampled at t = 0, vi(x, t = 0), and their corresponding time-evolved solutions vi(x, t = T) of equation 1.
We generalize ΦT , to propagate solutions at intermediate times , 0 ≤ t ≤ T , by elevating a standard neural architecture to a hyper-network one . A hyper-network architecture is a composition of two neural networks , a primary network fw ( x ) and a hypernetwork gθ ( t ) with learned parameters θ , such that the parameters w of f are given as the output of g. Unlike common neural architectures , where a single input is mapped to a single output , in this construction , the input t is mapped to a function , fgθ ( t ) , which maps x to its output . Applying this to our case we have , Φtv ( x , 0 ) = fgθ ( t ) ( v ( x , 0 ) ) = v ( x , t ) . ( 10 ) This architecture may be used to learn not only the time-evolved solutions at t = T , but also intermediate solutions , at 0 < t < T , without explicitly providing the network with any intermediate time solution . This task may be accomplished by utilizing general consistency conditions , which apply to any equation of the form equation 1 , Φt1Φt2 = Φt2Φt1 = Φt1+t2 . ( 11 ) Notice that Φ0 = Id follows from equation 11 . | The paper presents a model and a loss function for approximation of the solution map of certain partial differential equations. The idea is to use HyperNetworks: the mapping of the initial condition to time t done by model f($\theta$, x), and the parameters are predicted by another neural network. The models utilize the recently proposed Fourier Neural Operator (FNO) for the mapping model, and a fully-connected network for the hypernetwork. Several losses are proposed to ensure consistency of the proposed models, including the composition and reconstruction. The tests are done on a battery of examples, and compared to other neural network baselines the error is smaller. A theorem is proved, but is rather straightforward. | SP:ade2bc3c672043e09debf5ff560deb6eb9c16c1d |
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective | 1 INTRODUCTION. ML models, DNNs in particular, have achieved remarkable success owing to their superior predictive performance. However, they often lack robustness. For example, imperceptible but carefully-crafted input perturbations can fool the decision of a well-trained ML model. These input perturbations are referred to as adversarial perturbations, and the adversarially perturbed (test-time) examples are known as adversarial examples or adversarial attacks (Goodfellow et al., 2015; Carlini & Wagner, 2017; Papernot et al., 2016). Existing studies have shown that it is not difficult to generate adversarial attacks. Numerous attack generation methods have been designed and successfully applied to (i) different use cases from the digital world to the physical world, e.g., image classification (Brown et al., 2017; Li et al., 2019; Xu et al., 2019; Yuan et al., 2021), object detection/tracking (Eykholt et al., 2017; Xu et al., 2020; Sun et al., 2020), and image reconstruction (Antun et al., 2020; Raj et al., 2020; Vasiljević et al., 2021), and (ii) different types of victim models, e.g., white-box models whose details can be accessed by adversaries (Madry et al., 2018; Carlini & Wagner, 2017; Tramer et al., 2020; Croce & Hein, 2020; Wang et al., 2021), and black-box models whose information is not disclosed to adversaries (Papernot et al., 2017; Tu et al., 2019; Ilyas et al., 2018a; Liang et al., 2021). Given the prevalence of adversarial attacks, methods to robustify ML models are now a major research focus. For example, adversarial training (AT) (Madry et al., 2018), which has been regarded as one of the most effective defense methods (Athalye et al., 2018), employs min-max optimization to minimize the worst-case (maximum) training loss induced by adversarial attacks.
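For concreteness, AT's inner maximization is typically approximated with a few projected gradient steps. The sketch below is our own illustration, not from the paper; `loss_grad` is an assumed callable returning ∂loss/∂x for the current model.

```python
import numpy as np

def pgd_attack(x, loss_grad, eps, alpha, steps):
    """L_inf PGD: ascend the loss via signed gradient steps, projecting
    back into the eps-ball around the clean input x after every step."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the ball
    return x_adv
```

AT then minimizes the training loss evaluated at x_adv instead of x, which is the outer minimization of the min-max problem.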
Extended from AT , various empirical defense methods were proposed , ranging from supervised and semi-supervised learning to unsupervised learning ( Madry et al. , 2018 ; Zhang et al. , 2019b ; Shafahi et al. , 2019 ; Zhang et al. , 2019a ; Carmon et al. , 2019 ; Chen et al. , 2020 ; Zhang et al. , 2021 ) . In addition to empirical defense , certified defense is another research focus , which aims to train provably robust ML models and provide certificates of robustness ( Wong & Kolter , 2017 ; Raghunathan et al. , 2018 ; Katz et al. , 2017 ; Salman et al. , 2019 ; 2020 ; 2021 ) . Although exciting progress has been made in adversarial defense , nearly all existing works require a defender to operate over white-box ML models ( assuming non-confidential model architectures and parameters ) . However , the white-box assumption may restrict the defense application in practice . For example , a model owner may refuse to share the model details , since disclosing model information could compromise the owner ’ s privacy , e.g. , model inversion attacks lead to training data leakage ( Fredrikson et al. , 2015 ) . Besides the privacy consideration , white-box defense built upon ( end-to-end ) robust training ( e.g. , AT ) is computationally intensive , and thus is difficult to scale when robustifying multiple models . For example , in the medical domain , there exist a massive number of pre-trained ML models for different diseases built on hundreds of neuroimaging datasets ( Sisodiya et al. , 2020 ) . Thus , robustly retraining all models becomes impractical . Taking the model privacy and the defense efficiency into consideration , we ask : Is it possible to design an adversarial defense over black-box models using only model queries ? Extending adversarial defense to the black-box regime ( which we call ‘ black-box defense ’ ) is highly non-trivial due to the challenge of black-box optimization ( i.e. , learning over black-box models ) .
To tackle this problem , the prior work ( Salman et al. , 2020 ) leveraged surrogate models as approximations of the black-box models , over which defense can be conducted following the white-box setup . Yet , this still requires access to information about the victim model type and its function . In practice , those conditions could be difficult to meet . For example , if the domain knowledge related to medicine or healthcare is lacking ( Qayyum et al. , 2020 ; Finlayson et al. , 2019 ) , then it will be difficult to determine a proper surrogate model of a medical ML system . Even if a black-box model estimate can be obtained using the model inversion technique ( Kumar & Levine , 2019 ) , a significantly large number of model queries are needed even just for tackling an MNIST-level prediction task ( Oh et al. , 2019 ) . Different from ( Salman et al. , 2020 ) , we study an authentic black-box scenario , in which the interaction between defender and model is only based on input-output function queries ( see Fig . 1 ) . To the best of our knowledge , this is the first work to tackle the problem of query-based black-box defense . Contributions . We summarize our contributions below . ( 1 ) ( Formulation-wise ) We formulate the problem of black-box defense and investigate it through the lens of zeroth-order ( ZO ) optimization . Different from existing work , our paper aims to design the least restrictive black-box defense , and our formulation is built upon a query-based black-box setting , which avoids the use of surrogate models . ( 2 ) ( Methodology-wise ) We propose a novel black-box defense approach , ZO AutoEncoder-based Denoised Smoothing ( ZO-AE-DS ) , which is able to tackle the challenge of ZO optimization in high dimensions and convert a pre-trained non-robust ML model into a certifiably robust model using only function queries . ( 3 ) ( Experiment-wise ) We verify the efficacy of our method through an extensive experimental study .
In the task of image classification , the proposed ZO-AE-DS significantly outperforms the ZO baseline built upon ( Salman et al. , 2020 ) . For instance , we can improve the certified robust accuracy of ResNet110 on CIFAR-10 from 19.16 % ( using baseline ) to 54.87 % ( using ZO-AE-DS ) under adversarial perturbations with ℓ2 norm less than 64/255 . We also empirically show that our proposal stays effective even in the task of image reconstruction . 2 RELATED WORK . Empirical defense . An immense number of defense methods have been proposed , aiming to improve model robustness against adversarial attacks . Examples include detecting adversarial attacks ( Guo et al. , 2017 ; Meng & Chen , 2017 ; Gong et al. , 2017 ; Grosse et al. , 2017 ; Metzen et al. , 2017 ) and training robust ML models ( Madry et al. , 2018 ; Zhang et al. , 2019b ; Shafahi et al. , 2019 ; Wong et al. , 2020 ; Zhang et al. , 2019a ; Athalye et al. , 2018 ; Cheng et al. , 2017 ; Wong & Kolter , 2017 ; Salman et al. , 2019 ; Raghunathan et al. , 2018 ; Katz et al. , 2017 ) . In this paper , we focus on advancing the algorithmic foundation of robust training over black-box models . Robust training can be broadly divided into two categories : empirical defense and certified defense . In the former category , the most representative method is AT ( adversarial training ) , which formulates adversarial defense as a two-player game ( between attacker and defender ) ( Madry et al. , 2018 ) . Spurred by AT , empirical defense has developed rapidly . For example , in ( Zhang et al. , 2019b ) , TRADES was proposed to seek the optimal trade-off between accuracy and robustness . In ( Stanforth et al. , 2019 ; Carmon et al. , 2019 ) , unlabeled data and self-training were shown effective to improve adversarial defense in both robustness and generalization . In ( Shafahi et al. , 2019 ; Wong et al. , 2020 ; Zhang et al.
, 2019a ; Andriushchenko & Flammarion , 2020 ) , to improve the scalability of adversarial defense , computationally-light alternatives of AT were developed . Despite the effectiveness of empirical defense against adversarial attacks ( Athalye et al. , 2018 ) , it lacks a theoretical guarantee ( known as a ‘ certificate ’ ) for the achieved robustness . Thus , the problem of certified defense arises . Certified defense . Certified defense seeks to provide a provable robustness guarantee for ML models . One line of research focuses on post-hoc formal verification of a pre-trained ML model . The certified robustness is then given by a ‘ safe ’ input perturbation region , within which any perturbed input will not fool the given model ( Katz et al. , 2017 ; Ehlers , 2017 ; Bunel et al. , 2018 ; Dutta et al. , 2017 ) . Since exact verification is computationally intensive , a series of works ( Raghunathan et al. , 2018 ; Dvijotham et al. , 2018 ; Wong & Kolter , 2017 ; Weng et al. , 2018a ; b ; Wong et al. , 2018 ) proposed ‘ incomplete ’ verification , which utilizes convex relaxation to over-approximate the output space of a predictive model when facing input perturbations . Such a relaxation leads to fast computation in the verification process but only proves a lower bound of the exact robustness guarantee . Besides the post-hoc model verification with respect to each input example , another line of research focuses on in-processing certification-aware training and prediction . For example , randomized smoothing ( RS ) transforms an empirical classifier into a provably robust one by convolving the former with an isotropic Gaussian distribution . It was shown in ( Cohen et al. , 2019 ) that RS can provide formal guarantees for adversarial robustness . Different types of RS-oriented provable defenses have been developed , such as adversarial smoothing ( Salman et al. , 2019 ) , denoised smoothing ( Salman et al. , 2020 ) , smoothed ViT ( Salman et al.
, 2021 ) , and feature smoothing ( Addepalli et al. , 2021 ) . Zeroth-order ( ZO ) optimization for adversarial ML . ZO optimization methods are gradient-free counterparts of first-order ( FO ) optimization methods ( Liu et al. , 2020b ) . They approximate the FO gradients through function-value-based gradient estimates . Thus , ZO optimization is quite useful for solving black-box problems when explicit expressions of their gradients are difficult to compute or infeasible to obtain . In the area of adversarial ML , ZO optimization has become a principled approach to generate adversarial examples from black-box victim ML models ( Chen et al. , 2017 ; Ilyas et al. , 2018a ; b ; Tu et al. , 2019 ; Liu et al. , 2019 ; 2020a ; Huang & Zhang , 2020 ; Cai et al. , 2021 ) . Such ZO optimization-based attack generation methods can be as effective as state-of-the-art white-box attacks , despite only having access to the inputs and outputs of the targeted model . For example , the work ( Tu et al. , 2019 ) leveraged a white-box decoder to map the generated low-dimension perturbations back to the original input dimension . Inspired by ( Tu et al. , 2019 ) , we leverage the autoencoder architecture to tackle the high-dimension challenge of ZO optimization in black-box defense . Despite the widespread application of ZO optimization to black-box attack generation , few works study the problem of black-box defense . | The authors formulate the problem of black-box defense and propose a novel black-box defense approach called Zeroth-Order AutoEncoder-based Denoised Smoothing (ZO-AE-DS). Black-box defense corresponds to situations in which the defended model's information cannot be obtained due to privacy protection in real scenarios.
ZO-AE-DS introduces zeroth-order optimization on the structure of denoised smoothing (DS) to estimate the gradient and uses an autoencoder (AE) to connect the denoiser with the model so that zeroth-order optimization can be conducted in a (low-dimensional) feature embedding space. | SP:57cc0c93dd03b67e5edf378ed41bd492bd6da2b2 |
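The zeroth-order machinery the row above relies on comes down to estimating gradients purely from function queries. Below is a sketch of the standard two-point randomized estimator (a generic textbook form, not necessarily the paper's exact variant), checked on a toy quadratic whose gradient is known.

```python
import numpy as np

# Two-point randomized gradient estimator: the defender can query f(x) but not
# its gradient. The finite difference of function values along a random
# direction u approximates the directional derivative <grad f(x), u>.
# (Generic textbook estimator; not necessarily the paper's exact variant.)
def zo_gradient(f, x, num_queries=2000, mu=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(x.size)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / num_queries

# Sanity check on f(x) = ||x||^2 / 2, whose true gradient is x itself.
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([1.0, -2.0, 0.5])
est = zo_gradient(f, x)
print(np.linalg.norm(est - x))   # small: the estimate is close to the true gradient
```

The estimator's variance grows with the input dimension, which is exactly the high-dimension challenge that motivates the autoencoder component of ZO-AE-DS.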
How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective | 1 INTRODUCTION . ML models , DNNs in particular , have achieved remarkable success owing to their superior predictive performance . However , they often lack robustness . For example , imperceptible but carefully-crafted input perturbations can fool the decision of a well-trained ML model . These input perturbations are referred to as adversarial perturbations , and the adversarially perturbed ( test-time ) examples are known as adversarial examples or adversarial attacks ( Goodfellow et al. , 2015 ; Carlini & Wagner , 2017 ; Papernot et al. , 2016 ) . Existing studies have shown that it is not difficult to generate adversarial attacks . Numerous attack generation methods have been designed and successfully applied to ( i ) different use cases from the digital world to the physical world , e.g. , image classification ( Brown et al. , 2017 ; Li et al. , 2019 ; Xu et al. , 2019 ; Yuan et al. , 2021 ) , object detection/tracking ( Eykholt et al. , 2017 ; Xu et al. , 2020 ; Sun et al. , 2020 ) , and image reconstruction ( Antun et al. , 2020 ; Raj et al. , 2020 ; Vasiljević et al. , 2021 ) , and ( ii ) different types of victim models , e.g. , white-box models whose details can be accessed by adversaries ( Madry et al. , 2018 ; Carlini & Wagner , 2017 ; Tramer et al. , 2020 ; Croce & Hein , 2020 ; Wang et al. , 2021 ) , and black-box models whose information is not disclosed to adversaries ( Papernot et al. , 2017 ; Tu et al. , 2019 ; Ilyas et al. , 2018a ; Liang et al. , 2021 ) . Given the prevalence of adversarial attacks , methods to robustify ML models are now a major focus in research . For example , adversarial training ( AT ) ( Madry et al. , 2018 ) , which has been regarded as one of the most effective defense methods ( Athalye et al. , 2018 ) , employs min-max optimization to minimize the worst-case ( maximum ) training loss induced by adversarial attacks .
Extended from AT , various empirical defense methods were proposed , ranging from supervised and semi-supervised learning to unsupervised learning ( Madry et al. , 2018 ; Zhang et al. , 2019b ; Shafahi et al. , 2019 ; Zhang et al. , 2019a ; Carmon et al. , 2019 ; Chen et al. , 2020 ; Zhang et al. , 2021 ) . In addition to empirical defense , certified defense is another research focus , which aims to train provably robust ML models and provide certificates of robustness ( Wong & Kolter , 2017 ; Raghunathan et al. , 2018 ; Katz et al. , 2017 ; Salman et al. , 2019 ; 2020 ; 2021 ) . Although exciting progress has been made in adversarial defense , nearly all existing works require a defender to operate over white-box ML models ( assuming non-confidential model architectures and parameters ) . However , the white-box assumption may restrict the defense application in practice . For example , a model owner may refuse to share the model details , since disclosing model information could compromise the owner ’ s privacy , e.g. , model inversion attacks lead to training data leakage ( Fredrikson et al. , 2015 ) . Besides the privacy consideration , white-box defense built upon ( end-to-end ) robust training ( e.g. , AT ) is computationally intensive , and thus is difficult to scale when robustifying multiple models . For example , in the medical domain , there exist a massive number of pre-trained ML models for different diseases built on hundreds of neuroimaging datasets ( Sisodiya et al. , 2020 ) . Thus , robustly retraining all models becomes impractical . Taking the model privacy and the defense efficiency into consideration , we ask : Is it possible to design an adversarial defense over black-box models using only model queries ? Extending adversarial defense to the black-box regime ( which we call ‘ black-box defense ’ ) is highly non-trivial due to the challenge of black-box optimization ( i.e. , learning over black-box models ) .
To tackle this problem , the prior work ( Salman et al. , 2020 ) leveraged surrogate models as approximations of the black-box models , over which defense can be conducted following the white-box setup . Yet , this still requires access to information about the victim model type and its function . In practice , those conditions could be difficult to meet . For example , if the domain knowledge related to medicine or healthcare is lacking ( Qayyum et al. , 2020 ; Finlayson et al. , 2019 ) , then it will be difficult to determine a proper surrogate model of a medical ML system . Even if a black-box model estimate can be obtained using the model inversion technique ( Kumar & Levine , 2019 ) , a significantly large number of model queries are needed even just for tackling an MNIST-level prediction task ( Oh et al. , 2019 ) . Different from ( Salman et al. , 2020 ) , we study an authentic black-box scenario , in which the interaction between defender and model is only based on input-output function queries ( see Fig . 1 ) . To the best of our knowledge , this is the first work to tackle the problem of query-based black-box defense . Contributions . We summarize our contributions below . ( 1 ) ( Formulation-wise ) We formulate the problem of black-box defense and investigate it through the lens of zeroth-order ( ZO ) optimization . Different from existing work , our paper aims to design the least restrictive black-box defense , and our formulation is built upon a query-based black-box setting , which avoids the use of surrogate models . ( 2 ) ( Methodology-wise ) We propose a novel black-box defense approach , ZO AutoEncoder-based Denoised Smoothing ( ZO-AE-DS ) , which is able to tackle the challenge of ZO optimization in high dimensions and convert a pre-trained non-robust ML model into a certifiably robust model using only function queries . ( 3 ) ( Experiment-wise ) We verify the efficacy of our method through an extensive experimental study .
In the task of image classification , the proposed ZO-AE-DS significantly outperforms the ZO baseline built upon ( Salman et al. , 2020 ) . For instance , we can improve the certified robust accuracy of ResNet110 on CIFAR-10 from 19.16 % ( using baseline ) to 54.87 % ( using ZO-AE-DS ) under adversarial perturbations with ℓ2 norm less than 64/255 . We also empirically show that our proposal stays effective even in the task of image reconstruction . 2 RELATED WORK . Empirical defense . An immense number of defense methods have been proposed , aiming to improve model robustness against adversarial attacks . Examples include detecting adversarial attacks ( Guo et al. , 2017 ; Meng & Chen , 2017 ; Gong et al. , 2017 ; Grosse et al. , 2017 ; Metzen et al. , 2017 ) and training robust ML models ( Madry et al. , 2018 ; Zhang et al. , 2019b ; Shafahi et al. , 2019 ; Wong et al. , 2020 ; Zhang et al. , 2019a ; Athalye et al. , 2018 ; Cheng et al. , 2017 ; Wong & Kolter , 2017 ; Salman et al. , 2019 ; Raghunathan et al. , 2018 ; Katz et al. , 2017 ) . In this paper , we focus on advancing the algorithmic foundation of robust training over black-box models . Robust training can be broadly divided into two categories : empirical defense and certified defense . In the former category , the most representative method is AT ( adversarial training ) , which formulates adversarial defense as a two-player game ( between attacker and defender ) ( Madry et al. , 2018 ) . Spurred by AT , empirical defense has developed rapidly . For example , in ( Zhang et al. , 2019b ) , TRADES was proposed to seek the optimal trade-off between accuracy and robustness . In ( Stanforth et al. , 2019 ; Carmon et al. , 2019 ) , unlabeled data and self-training were shown effective to improve adversarial defense in both robustness and generalization . In ( Shafahi et al. , 2019 ; Wong et al. , 2020 ; Zhang et al.
, 2019a ; Andriushchenko & Flammarion , 2020 ) , to improve the scalability of adversarial defense , computationally-light alternatives of AT were developed . Despite the effectiveness of empirical defense against adversarial attacks ( Athalye et al. , 2018 ) , it lacks a theoretical guarantee ( known as a ‘ certificate ’ ) for the achieved robustness . Thus , the problem of certified defense arises . Certified defense . Certified defense seeks to provide a provable robustness guarantee for ML models . One line of research focuses on post-hoc formal verification of a pre-trained ML model . The certified robustness is then given by a ‘ safe ’ input perturbation region , within which any perturbed input will not fool the given model ( Katz et al. , 2017 ; Ehlers , 2017 ; Bunel et al. , 2018 ; Dutta et al. , 2017 ) . Since exact verification is computationally intensive , a series of works ( Raghunathan et al. , 2018 ; Dvijotham et al. , 2018 ; Wong & Kolter , 2017 ; Weng et al. , 2018a ; b ; Wong et al. , 2018 ) proposed ‘ incomplete ’ verification , which utilizes convex relaxation to over-approximate the output space of a predictive model when facing input perturbations . Such a relaxation leads to fast computation in the verification process but only proves a lower bound of the exact robustness guarantee . Besides the post-hoc model verification with respect to each input example , another line of research focuses on in-processing certification-aware training and prediction . For example , randomized smoothing ( RS ) transforms an empirical classifier into a provably robust one by convolving the former with an isotropic Gaussian distribution . It was shown in ( Cohen et al. , 2019 ) that RS can provide formal guarantees for adversarial robustness . Different types of RS-oriented provable defenses have been developed , such as adversarial smoothing ( Salman et al. , 2019 ) , denoised smoothing ( Salman et al. , 2020 ) , smoothed ViT ( Salman et al.
, 2021 ) , and feature smoothing ( Addepalli et al. , 2021 ) . Zeroth-order ( ZO ) optimization for adversarial ML . ZO optimization methods are gradient-free counterparts of first-order ( FO ) optimization methods ( Liu et al. , 2020b ) . They approximate the FO gradients through function-value-based gradient estimates . Thus , ZO optimization is quite useful for solving black-box problems when explicit expressions of their gradients are difficult to compute or infeasible to obtain . In the area of adversarial ML , ZO optimization has become a principled approach to generate adversarial examples from black-box victim ML models ( Chen et al. , 2017 ; Ilyas et al. , 2018a ; b ; Tu et al. , 2019 ; Liu et al. , 2019 ; 2020a ; Huang & Zhang , 2020 ; Cai et al. , 2021 ) . Such ZO optimization-based attack generation methods can be as effective as state-of-the-art white-box attacks , despite only having access to the inputs and outputs of the targeted model . For example , the work ( Tu et al. , 2019 ) leveraged a white-box decoder to map the generated low-dimension perturbations back to the original input dimension . Inspired by ( Tu et al. , 2019 ) , we leverage the autoencoder architecture to tackle the high-dimension challenge of ZO optimization in black-box defense . Despite the widespread application of ZO optimization to black-box attack generation , few works study the problem of black-box defense . | This work provides an algorithm to ensure robust training of an ML model with just black-box knowledge of it, i.e., input and output access. The algorithm relies on using Denoised Smoothing with zeroth-order optimization, where the gradients are estimated using random perturbations (finite-differencing). They avoid the computational burden and high variance of these estimates by first training an auto-encoder to reduce the inputs to a low-dimensional subspace.
They show over different architectures and multiple datasets (CIFAR-10, STL-10, image reconstruction over MNIST) that the proposed algorithm performs better than the baseline, which is the Denoised Smoothing algorithm of Salman et al. | SP:57cc0c93dd03b67e5edf378ed41bd492bd6da2b2 |
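The summary above credits the method with running ZO optimization in a low-dimensional embedding rather than the full input space. A hedged sketch of that dimension-reduction idea follows, with a random linear map standing in for the trained decoder and a toy quadratic standing in for the queried black-box model (all names and sizes are illustrative):

```python
import numpy as np

# Estimate gradients in a K-dim latent space instead of the D-dim input space:
# queries go through a decoder, so only K coordinates need ZO estimation.
# The linear "decoder" and quadratic loss below are illustrative stand-ins.
rng = np.random.default_rng(0)
D, K = 3072, 32                                # e.g. 32x32x3 input vs latent dim
decoder = rng.standard_normal((D, K)) / np.sqrt(D)

def black_box_loss(x):                         # placeholder for the queried model
    return 0.5 * np.dot(x, x)

def zo_grad_latent(z, num_queries=1000, mu=1e-3):
    """Two-point ZO estimate of grad_z loss(decoder @ z): K dims, not D."""
    grad = np.zeros(K)
    for _ in range(num_queries):
        u = rng.standard_normal(K)
        diff = (black_box_loss(decoder @ (z + mu * u))
                - black_box_loss(decoder @ (z - mu * u))) / (2 * mu)
        grad += diff * u
    return grad / num_queries

z = rng.standard_normal(K)
g = zo_grad_latent(z)
exact = decoder.T @ (decoder @ z)              # chain rule, for this toy loss
print(g.shape, np.linalg.norm(g - exact) / np.linalg.norm(exact))
```

With the same query budget, the estimator's variance now scales with the latent dimension K rather than the input dimension D, which is the efficiency argument both reviews make for the autoencoder.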
Boosting the Certified Robustness of L-infinity Distance Nets | 1 INTRODUCTION . Modern neural networks , while achieving high accuracy on various tasks , are found to be vulnerable to small , adversarially-chosen perturbations of the inputs ( Szegedy et al. , 2013 ; Biggio et al. , 2013 ) . Given an image x correctly classified by a neural network , there often exists a small adversarial perturbation δ , such that the perturbed image x + δ looks indistinguishable from x , but fools the network into predicting an incorrect class with high confidence . Such vulnerability creates security concerns in many real-world applications . A large body of works has been developed to obtain robust classifiers . One line of works proposed heuristic approaches that are empirically robust to particular attack methods , among which adversarial training is the most successful approach ( Goodfellow et al. , 2014 ; Madry et al. , 2017 ; Zhang et al. , 2019a ) . However , a variety of these heuristics have been subsequently broken by stronger and adaptive attacks ( Carlini & Wagner , 2017 ; Athalye et al. , 2018 ; Uesato et al. , 2018 ; Tramer et al. , 2020 ; Croce & Hein , 2020 ) , and there are no formal guarantees on whether the resulting model is truly robust . This motivates another line of works that seeks certifiably robust classifiers whose prediction is guaranteed to remain the same under all allowed perturbations . Representatives of this field use convex relaxation ( Wong & Kolter , 2018 ; Mirman et al. , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2020b ) or randomized smoothing ( Cohen et al. , 2019 ; Salman et al. , 2019a ; Zhai et al. , 2020 ; Yang et al. , 2020a ) . However , these approaches typically suffer from high computational cost , yet still can not achieve satisfactory results for the commonly used ℓ∞-norm perturbation scenario . Recently , Zhang et al .
( 2021 ) proposed a fundamentally different approach by designing a new network architecture called ℓ∞-distance net , a name coming from its construction , in which the basic neuron is defined as the ℓ∞-distance function . Using the fact that any ℓ∞-distance net is inherently a 1-Lipschitz mapping , one can easily check whether the prediction is certifiably robust for a given data point according to the output margin . The whole procedure only requires a forward pass without any additional computation . The authors further showed that the model family has strong expressive power , e.g. , a large enough ℓ∞-distance net can approximate any 1-Lipschitz function on a bounded domain . Unfortunately , however , the empirical model performance did not fully reflect the theoretical advantages . As shown in Zhang et al . ( 2021 ) , it is necessary to use a conventional multi-layer perceptron ( MLP ) 1 on top of an ℓ∞-distance net backbone to achieve better performance compared to the baseline methods . It makes both the training and the certification procedure complicated . More importantly , it calls into question whether the ℓ∞-distance net is really a better model configuration than conventional architectures in the regime of certified robustness . In this paper , we give an affirmative answer by showing that the ℓ∞-distance net itself suffices for good performance and can be well learned using an improved training strategy . We first mathematically prove that under mild assumptions on the dataset , there exists an ℓ∞-distance net with reasonable size by construction that achieves perfect certified robustness . This result indicates the strong expressive power of ℓ∞-distance nets in robustness certification , and shows a fundamental advantage over conventional networks with typical certification approaches ( Mirman et al. , 2021 ) .
However , it seems to contradict the previous empirical observations , suggesting that the model may fail to find an optimal solution and further motivating us to revisit the optimization process designed in Zhang et al . ( 2021 ) . Due to the non-smoothness of the ℓ∞-distance function , Zhang et al . ( 2021 ) developed several training tricks to overcome the optimization difficulty . A notable trick is called the ℓp-relaxation , in which ℓp-distance neurons are used during optimization to give a smooth approximation of the ℓ∞-distance . However , we find that the relaxation on neurons unexpectedly relaxes the Lipschitz constant of the network to an exponentially large value , making the objective function no longer maximize the robust accuracy and leading to sub-optimal solutions . We develop a novel modification of the objective function to bypass the problem mentioned above . The objective function is a linear combination of a scaled cross-entropy term and a modified clipped hinge term . The cross-entropy loss maximizes the output margin regardless of the model ’ s Lipschitzness and makes optimization effective at the early training stage when p is small . The clipped hinge loss then focuses on robustness for correctly classified samples at the late training phase when p approaches infinity . The switch from cross-entropy loss to clipped hinge loss is reflected in the mixing coefficient , which decays to zero as p grows to infinity throughout the training procedure . Despite its simplicity , our experimental results show significant performance gains on various datasets . In particular , an ℓ∞-distance net backbone can achieve 40.06 % certified robust accuracy on CIFAR-10 ( ε = 8/255 ) . This goes far beyond the previous results , which achieved 33.30 % certified accuracy on CIFAR-10 using the same architecture ( Zhang et al. , 2021 ) . Besides , it surpasses the relaxation-based certification approaches by at least 5 points ( Shi et al. , 2021 ; Lyu et al.
, 2021 ) , establishing a new state-of-the-art result . To summarize , both the theoretical findings and empirical results in this paper demonstrate the merit of ℓ∞-distance nets for certified robustness . Considering the simplicity of the architecture and training strategy used in this paper , we believe there is still much potential for future research on ℓ∞-distance nets , and more generally , the class of Lipschitz architectures . 2 PRELIMINARY . In this section , we briefly introduce the ℓ∞-distance net and its training strategy . An ℓ∞-distance net is constructed using ℓ∞-distance neurons as the basic component . The ℓ∞-distance neuron u takes a vector x as the input and calculates the ℓ∞-norm distance between x and a parameter w , with a bias term b . The neuron can be written as u ( x , { w , b } ) = ‖x − w‖∞ + b . ( 1 ) Based on the neuron definition , a fully connected ℓ∞-distance net can then be constructed . Formally , an L-layer network g takes x^ ( 0 ) = x as the input , and the l-th layer x^ ( l ) is calculated by x_i^ ( l ) = u ( x^ ( l−1 ) , { w^ ( l , i ) , b_i^ ( l ) } ) = ‖x^ ( l−1 ) − w^ ( l , i ) ‖∞ + b_i^ ( l ) , l ∈ [ L ] , i ∈ [ n_l ] . ( 2 ) 1 Without any confusion , in this paper , a conventional neural network model is referred to as a network composed of linear transformations with non-linear activations . Here n_l is the number of neurons in the l-th layer . For K-class classification problems , n_L = K. The network outputs g ( x ) = x^ ( L ) as logits and predicts the class arg max_{i ∈ [ K ]} [ g ( x ) ]_i . An important property of the ℓ∞-distance net is its Lipschitz continuity , as stated below . Definition 2.1 . A mapping f ( z ) : R^m → R^n is called λ-Lipschitz with respect to the ℓp-norm ‖ · ‖_p , if for any z_1 , z_2 , the following holds : ‖f ( z_1 ) − f ( z_2 ) ‖_p ≤ λ ‖z_1 − z_2‖_p . ( 3 ) Proposition 2.2 . The mapping of an ℓ∞-distance layer is 1-Lipschitz with respect to the ℓ∞-norm .
Thus by composition , any ℓ∞-distance net g ( · ) is 1-Lipschitz with respect to the ℓ∞-norm . ℓ∞-distance nets naturally possess certified robustness via the Lipschitz property . In detail , for any data point x with label y , denote the output margin of network g as margin ( x , y ; g ) = [ g ( x ) ]_y − max_{j ≠ y} [ g ( x ) ]_j . ( 4 ) If x is correctly classified by g , then the prediction of a perturbed input x + δ will remain the same as x if ‖δ‖∞ < margin ( x , y ; g ) / 2 . In other words , we can obtain the certified robustness for a given perturbation level ε according to I ( margin ( x , y ; g ) / 2 > ε ) , where I ( · ) is the indicator function . We call this margin-based certification . Given this certification approach , a corresponding training approach can then be developed , where one simply learns a large-margin classifier using standard loss functions , e.g. , hinge loss , without using adversarial training . Therefore the whole training procedure is as efficient as training standard networks with no additional cost . Zhang et al . ( 2021 ) further show that ℓ∞-distance nets are Lipschitz-universal approximators . In detail , a large enough ℓ∞-distance net can approximate any 1-Lipschitz function with respect to the ℓ∞-norm on a bounded domain arbitrarily well . Training ℓ∞-distance nets . One major challenge in training an ℓ∞-distance net is that the ℓ∞-distance operation is highly non-smooth , and the gradients ( i.e. , ∇_x ‖x − w‖∞ and ∇_w ‖x − w‖∞ ) are sparse . To mitigate the problem , Zhang et al . ( 2021 ) used ℓp-distance neurons instead of ℓ∞-distance ones during training , resulting in approximate and non-sparse gradients . Typically p is set to a small value ( e.g. , 8 ) in the beginning and increases throughout training until it reaches a large number ( e.g. , 1000 ) . The authors also designed several other tricks to further address the optimization difficulty .
However , even with the help of these tricks , ℓ∞-distance nets only perform comparably to previous works . The authors thus considered using a hybrid model architecture , in which the ℓ∞-distance net serves as a robust feature extractor , and an additional conventional multi-layer perceptron is used as the prediction head . This architecture achieves the best performance , but both the training and the certification approach become complicated again due to the presence of non-Lipschitz MLP layers . 3 EXPRESSIVE POWER OF ℓ∞-DISTANCE NETS IN ROBUST CLASSIFICATION . In this section , we challenge the conclusion of previous work by proving that simple ℓ∞-distance nets ( without the top MLP ) suffice for achieving perfect certified robustness in classification . Recall that Zhang et al . ( 2021 ) already provide a universal approximation theorem , showing the expressive power of ℓ∞-distance nets to represent Lipschitz functions . However , their result focuses on real-valued function approximation and is not directly helpful for certified robustness in classification . One may ask : Does a certifiably robust ℓ∞-distance net exist given a dataset ? If so , how large does the network need to be ? We will answer these questions and show that one can explicitly construct an ℓ∞-distance net that achieves perfect certified robustness as long as the dataset satisfies the following ( weak ) condition called r-separation ( Yang et al. , 2020b ) . Definition 3.1 . ( r-separation ) Consider a labeled dataset D = { ( x_i , y_i ) } where y_i ∈ [ K ] is the label of x_i . We say D is r-separated with respect to the ℓp-norm if for any pair of samples ( x_i , y_i ) , ( x_j , y_j ) , as long as y_i ≠ y_j , we have ‖x_i − x_j‖_p > 2r . It is easy to see that r-separation is a necessary condition for robustness under ℓp-norm perturbation ε = r. In fact , the condition holds for all commonly used datasets ( e.g.
, MNIST, CIFAR-10): the value of r in each dataset is much greater than the allowed perturbation level, as demonstrated in Yang et al. (2020b) (see Table 1 above). The authors took a further step and showed that there always exists a classifier achieving perfect robust accuracy if the condition holds. We now prove that even if we restrict the classifier to the function class represented by ℓ∞-distance nets, the conclusion still holds: a simple two-layer ℓ∞-distance net with hidden size O(n) can already achieve perfect robustness on r-separated datasets.

Theorem 3.2. Let D be a dataset with n elements satisfying the r-separation condition with respect to the ℓ∞-norm. Then there exists a two-layer ℓ∞-distance net with hidden size n such that, when using margin-based certification, the certified ℓ∞ robust accuracy is 100% on D under perturbation ε = r.

Proof sketch. Consider a two-layer ℓ∞-distance net g defined in Equation (2). Let its parameters be assigned by

w^(1,i) = x_i, b^(1)_i = 0 for i ∈ [n];
w^(2,j)_i = C · I(y_i = j), b^(2)_j = −C for i ∈ [n], j ∈ [K],

where C = 4 max_{i∈[n]} ‖x_i‖∞ is a constant and I(·) is the indicator function. For this assignment, it can be proved that the network outputs

[g(x)]_j = x^(2)_j = − min_{i∈[n], y_i=j} ‖x − x_i‖∞.  (5)

By Equation (5), the network g represents a nearest-neighbor classifier, in that it outputs the negative of the nearest-neighbor distance between the input x and the samples of each class. Therefore, for any data point x = x_i in dataset D, the output margin of g(x) is at least 2r due to the r-separation condition. In other words, g achieves 100% certified robust accuracy on D.

Remark 3.3. The above result can be extended to multi-layer networks. In general, we can prove the existence of such networks with L layers and no more than O(n/L + K + d) hidden neurons in each hidden layer, where d is the input dimension.
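As a concrete check of the proof sketch, the toy script below (our own illustration with a hypothetical 3-point dataset, not code from the paper) builds the two-layer net with the stated weight assignment and confirms Equation (5):

```python
import numpy as np

def linf_layer(x, W, b):
    """The i-th unit outputs ||x - W[i]||_inf + b[i], as in Equation (2)."""
    return np.abs(W - x).max(axis=1) + b

def build_robust_net(X, y, K):
    """Weight assignment from the proof sketch of Theorem 3.2."""
    n = len(X)
    C = 4 * np.abs(X).max()
    W1, b1 = X.astype(float), np.zeros(n)                    # w^(1,i) = x_i, b^(1)_i = 0
    W2 = C * (y[:, None] == np.arange(K)).astype(float).T    # w^(2,j)_i = C * I(y_i = j)
    b2 = -C * np.ones(K)                                     # b^(2)_j = -C
    return (W1, b1), (W2, b2)

# toy r-separated dataset: every cross-class l_inf distance is 1.0, so r = 0.5
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
y = np.array([0, 1, 1])
(W1, b1), (W2, b2) = build_robust_net(X, y, K=2)

g = lambda x: linf_layer(linf_layer(x, W1, b1), W2, b2)
out = g(X[0])
print(out)  # [0., -1.]: the negative nearest-neighbor distance per class (Eq. 5)
```

The margin at X[0] is 0 − (−1) = 1 = 2r, matching the claim that r-separation yields a margin of at least 2r on training points.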
See Appendix A for details of the proof. The significance of Theorem 3.2 is reflected in two aspects. Firstly, our result explicitly shows the strong expressive power of ℓ∞-distance nets in robust classification, which complements the universal approximation theorem in Zhang et al. (2021). Moreover, Theorem 3.2 gives an upper bound of O(n) on the required network size, which is close to practical applications. It is much smaller than the size needed for function approximation (O(1/ε^d) under approximation error ε, proved in Zhang et al. (2021)), which scales exponentially in the input dimension d. Secondly, our result shows that for well-designed architectures, using only the global Lipschitz property is sufficient for robustness certification. This contrasts with the prior view suggesting that leveraging the local Lipschitz constant is necessary (Huster et al., 2018), which typically requires sophisticated calculations (Wong et al., 2018; Zhang et al., 2018; 2020b). More importantly, as a comparison, Mirman et al. (2021) very recently proved that for any conventional network, the commonly used interval bound propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018) intrinsically cannot achieve perfect certified robustness on a simple r-separated dataset containing only three data points (under ε = r). In other words, ℓ∞-distance nets certified using the global Lipschitz property have a fundamental advantage over conventional networks certified using interval bound propagation. | This paper proposed a simple modification of $\ell_\infty$ net training, which boosts the accuracy for certified robustness under $\ell_\infty$ attack. It provides a trainable scale on the output of the network and uses a clipped hinge loss. The paper also proves the expressive ability of $\ell_\infty$ nets for classification problems. | SP:f3620e9c72efa8512c624f0e055aa229b1af949e |
Boosting the Certified Robustness of L-infinity Distance Nets | 1 INTRODUCTION. Modern neural networks, while achieving high accuracy on various tasks, are vulnerable to small, adversarially chosen perturbations of their inputs (Szegedy et al., 2013; Biggio et al., 2013). Given an image x correctly classified by a neural network, there often exists a small adversarial perturbation δ such that the perturbed image x + δ looks indistinguishable from x but fools the network into predicting an incorrect class with high confidence. Such vulnerability creates security concerns in many real-world applications. A large body of work has been developed to obtain robust classifiers. One line of work proposed heuristic approaches that are empirically robust to particular attack methods, among which adversarial training is the most successful (Goodfellow et al., 2014; Madry et al., 2017; Zhang et al., 2019a). However, a variety of these heuristics have subsequently been broken by stronger and adaptive attacks (Carlini & Wagner, 2017; Athalye et al., 2018; Uesato et al., 2018; Tramer et al., 2020; Croce & Hein, 2020), and there are no formal guarantees that the resulting model is truly robust. This motivates another line of work that seeks certifiably robust classifiers whose prediction is guaranteed to remain the same under all allowed perturbations. Representatives of this field use convex relaxation (Wong & Kolter, 2018; Mirman et al., 2018; Gowal et al., 2018; Zhang et al., 2020b) or randomized smoothing (Cohen et al., 2019; Salman et al., 2019a; Zhai et al., 2020; Yang et al., 2020a). However, these approaches typically suffer from high computational cost, yet still cannot achieve satisfactory results in the commonly used ℓ∞-norm perturbation scenario. Recently, Zhang et al.
(2021) proposed a fundamentally different approach by designing a new network architecture called the ℓ∞-distance net, a name coming from its construction in which the basic neuron is defined as the ℓ∞-distance function. Using the fact that any ℓ∞-distance net is inherently a 1-Lipschitz mapping, one can easily check whether the prediction is certifiably robust for a given data point according to the output margin. The whole procedure only requires a forward pass, without any additional computation. The authors further showed that the model family has strong expressive power, e.g., a large enough ℓ∞-distance net can approximate any 1-Lipschitz function on a bounded domain. Unfortunately, however, the empirical model performance did not reflect these theoretical advantages well. As shown in Zhang et al. (2021), it is necessary to use a conventional multi-layer perceptron (MLP)1 on top of an ℓ∞-distance net backbone to achieve better performance than the baseline methods. This makes both the training and the certification procedure complicated. More importantly, it calls into question whether the ℓ∞-distance net is really a better model configuration than conventional architectures in the regime of certified robustness. In this paper, we give an affirmative answer by showing that the ℓ∞-distance net itself suffices for good performance and can be well learned using an improved training strategy. We first mathematically prove that under mild assumptions on the dataset, there exists by construction an ℓ∞-distance net of reasonable size that achieves perfect certified robustness. This result indicates the strong expressive power of ℓ∞-distance nets in robustness certification, and shows a fundamental advantage over conventional networks with typical certification approaches (Mirman et al., 2021).
However, this seems to contradict previous empirical observations, suggesting that the model may fail to find an optimal solution and motivating us to revisit the optimization process designed in Zhang et al. (2021). Due to the non-smoothness of the ℓ∞-distance function, Zhang et al. (2021) developed several training tricks to overcome the optimization difficulty. A notable trick is the ℓp-relaxation, in which ℓp-distance neurons are used during optimization to give a smooth approximation of the ℓ∞-distance. However, we find that this relaxation on neurons unexpectedly relaxes the Lipschitz constant of the network to an exponentially large value, so that the objective function no longer maximizes the robust accuracy, leading to sub-optimal solutions. We develop a novel modification of the objective function to bypass this problem. The objective function is a linear combination of a scaled cross-entropy term and a modified clipped hinge term. The cross-entropy loss maximizes the output margin regardless of the model's Lipschitz constant and makes optimization effective in the early training stage, when p is small. The clipped hinge loss then focuses on robustness for correctly classified samples in the late training phase, when p approaches infinity. The switch from cross-entropy loss to clipped hinge loss is reflected in the mixing coefficient, which decays to zero as p grows to infinity throughout the training procedure. Despite its simplicity, our experimental results show significant performance gains on various datasets. In particular, an ℓ∞-distance net backbone can achieve 40.06% certified robust accuracy on CIFAR-10 (ε = 8/255). This goes far beyond the previous result of 33.30% certified accuracy on CIFAR-10 using the same architecture (Zhang et al., 2021). Moreover, it surpasses relaxation-based certification approaches by at least 5 points (Shi et al., 2021; Lyu et al.
, 2021), establishing a new state-of-the-art result. To summarize, both the theoretical findings and the empirical results in this paper demonstrate the merit of ℓ∞-distance nets for certified robustness. Considering the simplicity of the architecture and training strategy used in this paper, we believe there is still much potential for future research on ℓ∞-distance nets and, more generally, the class of Lipschitz architectures.

2 PRELIMINARY. In this section, we briefly introduce the ℓ∞-distance net and its training strategy. An ℓ∞-distance net is constructed using ℓ∞-distance neurons as the basic component. The ℓ∞-distance neuron u takes a vector x as input and calculates the ℓ∞-norm distance between x and a parameter w, plus a bias term b. The neuron can be written as

u(x, {w, b}) = ‖x − w‖∞ + b.  (1)

Based on this neuron definition, a fully connected ℓ∞-distance net can then be constructed. Formally, an L-layer network g takes x^(0) = x as input, and the lth layer x^(l) is calculated by

x^(l)_i = u(x^(l−1), {w^(l,i), b^(l)_i}) = ‖x^(l−1) − w^(l,i)‖∞ + b^(l)_i,  l ∈ [L], i ∈ [n_l].  (2)

Here n_l is the number of neurons in the lth layer. For K-class classification problems, n_L = K. The network outputs g(x) = x^(L) as logits and predicts the class arg max_{i∈[K]} [g(x)]_i. An important property of the ℓ∞-distance net is its Lipschitz continuity, as stated below.

Definition 2.1. A mapping f(z): R^m → R^n is called λ-Lipschitz with respect to the ℓp-norm ‖·‖_p if for any z_1, z_2, the following holds: ‖f(z_1) − f(z_2)‖_p ≤ λ‖z_1 − z_2‖_p. (3)

Proposition 2.2. The mapping of an ℓ∞-distance layer is 1-Lipschitz with respect to the ℓ∞-norm.

1Without any confusion, in this paper, a conventional neural network model refers to a network composed of linear transformations with non-linear activations.
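Proposition 2.2 is easy to verify numerically. Below is a minimal NumPy sketch of the forward pass in Equation (2), using random (untrained) weights of our own choosing:

```python
import numpy as np

def linf_layer(x, W, b):
    """l_inf-distance layer: unit i outputs ||x - W[i]||_inf + b[i] (Eq. 2)."""
    return np.abs(W - x).max(axis=1) + b

def linf_net(x, params):
    """Stack of l_inf-distance layers; params is a list of (W, b) pairs."""
    for W, b in params:
        x = linf_layer(x, W, b)
    return x

rng = np.random.default_rng(0)
params = [(rng.normal(size=(4, 3)), rng.normal(size=4)),
          (rng.normal(size=(2, 4)), rng.normal(size=2))]
x1, x2 = rng.normal(size=3), rng.normal(size=3)

# 1-Lipschitz w.r.t. l_inf: ||g(x1) - g(x2)||_inf <= ||x1 - x2||_inf,
# which follows per unit from the reverse triangle inequality
lhs = np.abs(linf_net(x1, params) - linf_net(x2, params)).max()
rhs = np.abs(x1 - x2).max()
print(lhs <= rhs)  # True
```

The bias terms cancel in the difference, so they do not affect the Lipschitz constant; the inequality holds for any weights and inputs.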
Thus by composition, any ℓ∞-distance net g(·) is 1-Lipschitz with respect to the ℓ∞-norm. ℓ∞-distance nets naturally possess certified robustness through the Lipschitz property. In detail, for any data point x with label y, denote the output margin of network g as

margin(x, y; g) = [g(x)]_y − max_{j≠y} [g(x)]_j.  (4)

If x is correctly classified by g, then the prediction on a perturbed input x + δ will remain the same as on x whenever ‖δ‖∞ < margin(x, y; g)/2. In other words, we obtain certified robustness at a given perturbation level ε according to I(margin(x, y; g)/2 > ε), where I(·) is the indicator function. We call this margin-based certification. Given this certification approach, a corresponding training approach can then be developed, in which one simply learns a large-margin classifier using standard loss functions, e.g., hinge loss, without adversarial training. The whole training procedure is therefore as efficient as training standard networks, with no additional cost. Zhang et al. (2021) further show that ℓ∞-distance nets are Lipschitz-universal approximators: a large enough ℓ∞-distance net can approximate any function that is 1-Lipschitz with respect to the ℓ∞-norm on a bounded domain arbitrarily well.

Training ℓ∞-distance nets. One major challenge in training ℓ∞-distance nets is that the ℓ∞-distance operation is highly non-smooth, and the gradients (i.e., ∇_x ‖x − w‖∞ and ∇_w ‖x − w‖∞) are sparse. To mitigate the problem, Zhang et al. (2021) used ℓp-distance neurons instead of ℓ∞-distance ones during training, yielding approximate but non-sparse gradients. Typically p is set to a small value (e.g., 8) at the beginning and increased throughout training until it reaches a large number (e.g., 1000). The authors also designed several other tricks to further address the optimization difficulty.
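The margin-based certification in Equation (4) amounts to a few lines of code. The logits and function names below are our own hypothetical illustration:

```python
import numpy as np

def margin(logits, y):
    """Equation (4): [g(x)]_y - max_{j != y} [g(x)]_j."""
    return logits[y] - np.delete(logits, y).max()

def certified(logits, y, eps):
    """For a 1-Lipschitz (w.r.t. l_inf) net, the prediction cannot change
    under ||delta||_inf <= eps if margin/2 > eps."""
    return margin(logits, y) / 2 > eps

logits = np.array([2.0, 0.5, -1.0])   # hypothetical network output, true label y = 0
print(margin(logits, 0))              # 1.5
print(certified(logits, 0, eps=0.5))  # 0.75 > 0.5 -> True
print(certified(logits, 0, eps=0.8))  # 0.75 > 0.8 -> False
```

Note the check costs only a forward pass, which is the efficiency advantage emphasized above over relaxation-based certification.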
However, even with the help of these tricks, ℓ∞-distance nets only perform competitively with previous works. The authors thus considered a hybrid model architecture, in which the ℓ∞-distance net serves as a robust feature extractor and an additional conventional multi-layer perceptron (MLP) is used as the prediction head. This architecture achieves the best performance, but both the training and the certification approach become complicated again due to the presence of the non-Lipschitz MLP layers.

3 EXPRESSIVE POWER OF ℓ∞-DISTANCE NETS IN ROBUST CLASSIFICATION. In this section, we challenge the conclusion of previous work by proving that simple ℓ∞-distance nets (without the top MLP) suffice for achieving perfect certified robustness in classification. Recall that Zhang et al. (2021) already provide a universal approximation theorem showing the expressive power of ℓ∞-distance nets for representing Lipschitz functions. However, their result focuses on real-valued function approximation and is not directly helpful for certified robustness in classification. One may ask: Does a certifiably robust ℓ∞-distance net exist for a given dataset? If so, how large does the network need to be? We answer these questions and show that one can explicitly construct an ℓ∞-distance net that achieves perfect certified robustness as long as the dataset satisfies the following (weak) condition called r-separation (Yang et al., 2020b). Definition 3.1 (r-separation). Consider a labeled dataset D = {(x_i, y_i)} where y_i ∈ [K] is the label of x_i. We say D is r-separated with respect to the ℓp-norm if for any pair of samples (x_i, y_i), (x_j, y_j) with y_i ≠ y_j, we have ‖x_i − x_j‖_p > 2r. It is easy to see that r-separation is a necessary condition for robustness under ℓp-norm perturbation ε = r. In fact, the condition holds for all commonly used datasets (e.g.
, MNIST, CIFAR-10): the value of r in each dataset is much greater than the allowed perturbation level, as demonstrated in Yang et al. (2020b) (see Table 1 above). The authors took a further step and showed that there always exists a classifier achieving perfect robust accuracy if the condition holds. We now prove that even if we restrict the classifier to the function class represented by ℓ∞-distance nets, the conclusion still holds: a simple two-layer ℓ∞-distance net with hidden size O(n) can already achieve perfect robustness on r-separated datasets.

Theorem 3.2. Let D be a dataset with n elements satisfying the r-separation condition with respect to the ℓ∞-norm. Then there exists a two-layer ℓ∞-distance net with hidden size n such that, when using margin-based certification, the certified ℓ∞ robust accuracy is 100% on D under perturbation ε = r.

Proof sketch. Consider a two-layer ℓ∞-distance net g defined in Equation (2). Let its parameters be assigned by

w^(1,i) = x_i, b^(1)_i = 0 for i ∈ [n];
w^(2,j)_i = C · I(y_i = j), b^(2)_j = −C for i ∈ [n], j ∈ [K],

where C = 4 max_{i∈[n]} ‖x_i‖∞ is a constant and I(·) is the indicator function. For this assignment, it can be proved that the network outputs

[g(x)]_j = x^(2)_j = − min_{i∈[n], y_i=j} ‖x − x_i‖∞.  (5)

By Equation (5), the network g represents a nearest-neighbor classifier, in that it outputs the negative of the nearest-neighbor distance between the input x and the samples of each class. Therefore, for any data point x = x_i in dataset D, the output margin of g(x) is at least 2r due to the r-separation condition. In other words, g achieves 100% certified robust accuracy on D.

Remark 3.3. The above result can be extended to multi-layer networks. In general, we can prove the existence of such networks with L layers and no more than O(n/L + K + d) hidden neurons in each hidden layer, where d is the input dimension.
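Definition 3.1 can be checked directly on a small dataset. The helper below (our own naming, brute-force over pairs) returns the largest r for which the condition holds, up to the strict inequality:

```python
import numpy as np

def separation_radius(X, y):
    """Half the minimum l_inf distance between samples of different classes
    (Definition 3.1: D is r-separated iff all cross-class distances exceed 2r)."""
    d_min = np.inf
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if y[i] != y[j]:
                d_min = min(d_min, np.abs(X[i] - X[j]).max())
    return d_min / 2

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 1])
print(separation_radius(X, y))  # both cross-class l_inf distances are 1.0 -> 0.5
```

Only cross-class pairs matter: same-class samples may be arbitrarily close (or identical) without violating r-separation.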
See Appendix A for details of the proof. The significance of Theorem 3.2 is reflected in two aspects. Firstly, our result explicitly shows the strong expressive power of ℓ∞-distance nets in robust classification, which complements the universal approximation theorem in Zhang et al. (2021). Moreover, Theorem 3.2 gives an upper bound of O(n) on the required network size, which is close to practical applications. It is much smaller than the size needed for function approximation (O(1/ε^d) under approximation error ε, proved in Zhang et al. (2021)), which scales exponentially in the input dimension d. Secondly, our result shows that for well-designed architectures, using only the global Lipschitz property is sufficient for robustness certification. This contrasts with the prior view suggesting that leveraging the local Lipschitz constant is necessary (Huster et al., 2018), which typically requires sophisticated calculations (Wong et al., 2018; Zhang et al., 2018; 2020b). More importantly, as a comparison, Mirman et al. (2021) very recently proved that for any conventional network, the commonly used interval bound propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018) intrinsically cannot achieve perfect certified robustness on a simple r-separated dataset containing only three data points (under ε = r). In other words, ℓ∞-distance nets certified using the global Lipschitz property have a fundamental advantage over conventional networks certified using interval bound propagation. | This paper is a follow-up to Zhang et al. (2021). In Zhang et al. (2021), the authors proposed a new network architecture, the l_infty distance net. By construction, the network is 1-Lipschitz w.r.t. the l_infty distance. However, the training procedure therein is problematic. This paper resolves the issue with a new loss design of scaled cross-entropy loss + clipped hinge loss.
Without using an MLP on top of the l_infty distance net backbone, the proposed new training method outperforms the original one in Zhang et al. (2021) and improves over the state-of-the-art by more than 5% for ε = 8/255 and other radii. Theoretically, the paper shows the expressive power of the l_infty distance net for well-separated data. | SP:f3620e9c72efa8512c624f0e055aa229b1af949e |
Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models | 1 INTRODUCTION. Optimizing the efficiency of neural networks is important for real-world applications, as they can only use limited computational resources and often have requirements on response time. There has been considerable work in this direction (Howard et al., 2017; Zhang et al., 2018; Tan & Le, 2019), but it mostly focuses on designing novel network architectures that achieve a favorable speed-accuracy trade-off. Here, we do not present any novel method or architecture design. Instead, we focus on analyzing the accuracy and efficiency of a simple paradigm: committee-based models. We use the term "committee" to refer to model ensembles or cascades, indicating that they are built from multiple independent models. Committee-based models were extensively studied and used before deep learning (Breiman, 1996; Schapire, 1990; Freund & Schapire, 1997; Viola & Jones, 2001). However, when comparing the efficiency of deep models, committee-based models are rarely considered in recent work (Howard et al., 2017; Zhang et al., 2018; Tan & Le, 2019). There is still no systematic understanding of their efficiency in comparison with single models, i.e., models that use only one network. Such an understanding is informative both for researchers pushing the frontier of efficient models and for practitioners selecting model designs in real-world applications. To fill this knowledge gap, we conduct a comprehensive analysis of the efficiency of committee-based models. To highlight the practical benefit of committee-based models, we intentionally choose the simplest possible method, which directly uses off-the-shelf, independently pre-trained models to build ensembles or cascades. (*Work done during an internship at Google. †Work done as part of the Google AI Residency Program.)
We ensemble multiple pre-trained models via a simple average over their predictions (Sec. 3). For cascades, we sequentially apply each model and use a simple heuristic (e.g., the maximum probability in the prediction) to determine when to exit from the cascade (Sec. 4). We show that even this method already outperforms state-of-the-art architectures found by costly neural architecture search (NAS) methods. Note that this method works with off-the-shelf models and does not use specialized techniques. For example, it differs from Boosting (Schapire, 1990), where each new model is conditioned on previous ones, and does not require the weight-generation mechanism of previous efficient ensemble methods (Wen et al., 2020). It also does not require training an early-exit policy (Bolukbasi et al., 2017; Guan et al., 2018) or the specially designed multi-scale architecture (Huang et al., 2018) of previous work on building cascades. To be clear, the contribution of this paper is not the invention of model ensembles and cascades, which have been known for decades, nor a newly proposed method for building them. Instead, it is a thorough evaluation and comparison of committee-based models against commonly used model architectures. Our analysis shows that committee-based models provide a simple complementary paradigm for achieving superior efficiency without tuning the architecture. One can often improve accuracy while reducing inference and training cost by building committees out of existing networks. Our findings generalize to a wide variety of tasks, including image classification, video classification, and semantic segmentation, and hold true for various architecture families: ViT (Dosovitskiy et al., 2021), EfficientNet (Tan & Le, 2019), ResNet (He et al., 2016), MobileNetV2 (Sandler et al., 2018), and X3D (Feichtenhofer, 2020).
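The maximum-probability exit heuristic can be sketched in a few lines. The stand-in "models" below are hypothetical lambdas returning fixed logits, not real networks, and the threshold value is arbitrary:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def cascade_predict(x, models, threshold):
    """Run models from cheapest to most expensive; exit as soon as the
    maximum predicted probability clears the confidence threshold."""
    for i, model in enumerate(models):
        probs = softmax(model(x))
        if probs.max() >= threshold or i == len(models) - 1:
            return int(probs.argmax()), i  # (prediction, index of exit model)

small = lambda x: np.array([2.0, 0.0])   # max prob ~0.88: not confident enough
large = lambda x: np.array([5.0, 0.0])   # max prob ~0.99
pred, exit_at = cascade_predict(None, [small, large], threshold=0.9)
print(pred, exit_at)  # prediction 0, exiting at the larger model (index 1)
```

Easy examples exit at the cheap model, so the expensive model's FLOPs are paid only for the hard tail of the input distribution; that asymmetry is the entire source of the speedups reported later.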
We summarize our findings as follows:
• Ensembles are more cost-effective than a single model in the large-computation regime (Sec. 3). For example, an ensemble of two separately trained EfficientNet-B5 models matches the accuracy of B7, a state-of-the-art ImageNet model, while having almost 50% fewer FLOPs (20.5B vs. 37B).
• Cascades outperform single models in all computation regimes (Sec. 4 & 5). Our cascade matches B7 accuracy while using on average 5.4x fewer FLOPs. Cascades can also achieve a 2.3x speedup over ViT-L-384, a Transformer architecture, while matching its accuracy on ImageNet.
• We further show that (1) the efficiency of cascades is evident in both FLOPs and on-device latency and throughput (Sec. 5.1); (2) cascades can provide a guarantee on worst-case FLOPs (Sec. 5.2); (3) one can build self-cascades from a single model with multiple inference resolutions to achieve a significant speedup (Sec. 6).
• Committee-based models are applicable beyond image classification (Sec. 7) and outperform single models on video classification and semantic segmentation. Our cascade outperforms X3D-XL by 1.2% on Kinetics-600 (Carreira et al., 2018) while using fewer FLOPs.

2 RELATED WORK. Efficient Neural Networks. There has been significant progress in designing efficient neural networks. In early work, most efficient networks, such as MobileNet (Howard et al., 2017; Sandler et al., 2018) and ShuffleNet (Howard et al., 2019), were manually designed. Recent work started to use neural architecture search (NAS) to automatically learn efficient network designs (Zoph et al., 2018; Cao et al., 2019; Tan et al., 2019; Tan & Le, 2019; Chaudhuri et al., 2020). These works mostly focus on improving the efficiency of single models by designing better architectures, while we explore committee-based models without tuning the architecture. Ensembles.
Ensemble learning has been well studied in machine learning, with many seminal works such as Bagging (Breiman, 1996), Boosting (Schapire, 1990), and AdaBoost (Freund & Schapire, 1997). Ensembles of neural networks have been used for many tasks, such as image classification (Szegedy et al., 2015; Huang et al., 2017a), machine translation (Wen et al., 2020), active learning (Beluch et al., 2018), and out-of-distribution robustness (Lakshminarayanan et al., 2017; Fort et al., 2019; Wenzel et al., 2020). But the efficiency of model ensembles has rarely been systematically investigated. Recent work indicated that ensembles can be more efficient than single models for image classification (Kondratyuk et al., 2020; Lobacheva et al., 2020). Our work further substantiates this claim through an analysis of modern architectures on large-scale benchmarks. Cascades. A large family of works has explored using cascades to speed up certain tasks. For example, the seminal work of Viola & Jones (2001) built a cascade of increasingly complex classifiers to speed up face detection. Cascades have also been explored in the context of deep neural networks. Bolukbasi et al. (2017) reduced the average test-time cost by learning a policy that allows easy examples to exit early from a network. A similar idea was explored by Guan et al. (2018). Huang et al. (2018) proposed the specially designed Multi-Scale DenseNet architecture to better incorporate early exits into neural networks. Given a pool of models, Streeter (2018) presented an approximation algorithm to produce a cascade that preserves accuracy while reducing FLOPs, and demonstrated improvement over state-of-the-art NAS-based models on ImageNet.
Different from previous work that primarily focuses on developing new methods to build cascades, we show that even the most straightforward method can already provide a significant speedup, without training an early-exit policy (Bolukbasi et al., 2017; Guan et al., 2018) or designing a specialized multi-scale architecture (Huang et al., 2018). Dynamic Neural Networks. Dynamic neural networks allocate computational resources based on the input example, i.e., spending more computation on hard examples and less on easy ones (Han et al., 2021). For example, Shazeer et al. (2017) trained a gating network to determine which parts of a high-capacity model should be used for each example. Recent work (Wu et al., 2018; Veit & Belongie, 2018; Wang et al., 2018) explored learning a policy to dynamically select layers or blocks to execute in a ResNet based on the input image. Our analysis shows that cascades of pre-trained models are actually a strong baseline for dynamic neural networks.

3 ENSEMBLES ARE ACCURATE, EFFICIENT, AND FAST TO TRAIN. Model ensembles are useful for improving accuracy, but the use of multiple models also introduces extra computational cost. When the total computation is fixed, which gives higher accuracy: single models or ensembles? The answer is important for real-world applications, but this question has rarely been systematically studied on modern architectures and large-scale benchmarks. We investigate this question on ImageNet (Russakovsky et al., 2015) with three architecture families: EfficientNet (Tan & Le, 2019), ResNet (He et al., 2016), and MobileNetV2 (Sandler et al., 2018). Each architecture family contains a series of networks with different levels of accuracy and computational cost. Within each family, we train a pool of models, compute the ensemble of different combinations of models, and compare these ensembles with the single models in the family.
We denote an ensemble of n image classification models by {M_1, . . . , M_n}, where M_i is the ith model. Given an image x, α_i = M_i(x) is a vector of the logits for each class. To ensemble the n models, we compute the mean of the logits1, α_ens = (1/n) Σ_i α_i, and predict the class for image x by applying argmax to α_ens. The total computation of the ensemble is FLOPs_ens = Σ_i FLOPs(M_i), where FLOPs(·) gives the FLOPs of a model. We show the top-1 accuracy on ImageNet and the FLOPs of single models and ensembles in Figure 2. Since there are many possible combinations of models to ensemble, we only show the Pareto-optimal ensembles in the figure. We see that ensembles are more cost-effective than large single models, e.g., EfficientNet-B5/B6/B7 and ResNet-152/200, but in the small-computation regime, single models outperform ensembles. For example, the ensemble of two B5 models matches B7 accuracy while using about 50% fewer FLOPs, whereas ensembles use more FLOPs than MobileNetV2 at similar accuracy. A possible explanation of why model ensembles are more powerful at large computation than at small computation comes from the perspective of the bias-variance trade-off. Large models usually have small bias but large variance, and the variance term dominates the test error; therefore, ensembles are beneficial at large computation, as they reduce the variance in prediction (Breiman, 1996). For small models, the bias term dominates the test error. Ensembles can reduce the variance, but this cannot compensate for the large bias of small models; therefore, ensembles are less powerful at small computation.

1We note that the mean of probabilities is a more general choice, since logits can be arbitrarily scaled. In our experiments, we observe that the two yield similar performance, with the mean of logits being marginally better. The findings in our work hold true regardless of which choice is used.
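The averaging rule above, in code (two hypothetical 3-class models; illustration only):

```python
import numpy as np

def ensemble_predict(logit_list):
    """Average per-model logits (alpha_ens = (1/n) * sum_i alpha_i)
    and apply argmax to the mean."""
    alpha_ens = np.mean(logit_list, axis=0)
    return int(alpha_ens.argmax())

a1 = np.array([2.0, 1.9, -1.0])   # model 1 narrowly prefers class 0
a2 = np.array([1.0, 2.5, -1.0])   # model 2 clearly prefers class 1
print(ensemble_predict([a1, a2]))  # mean logits [1.5, 2.2, -1.0] -> class 1
```

Swapping `np.mean` over logits for a mean over softmax probabilities gives the alternative mentioned in the footnote; both reduce to the same decision rule on this toy example.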
Our analysis indicates that instead of using a single large model, one should use an ensemble of multiple relatively smaller models, which gives similar performance with fewer FLOPs. In practice, model ensembles can also be easily parallelized (e.g., across multiple accelerators), which may provide a further speedup for inference. Moreover, the total training cost of an ensemble is often much lower than that of an equally accurate single model (see the appendix for more details). | The work provides an empirical accuracy-efficiency comparison of model ensembles and cascades of shallow models against single deeper models. The main finding, which supports previous results, is rather interesting: compositions of shallow models tend to provide a better efficiency-accuracy trade-off than single deep models. This finding has been extended here to the ImageNet classification task with three architecture families (ResNet, EfficientNet, and MobileNet), with further examples on video recognition and semantic segmentation. | SP:a0537bc2883ff413f0ffafc44a53a65be9fc7738 |
Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models | 1 INTRODUCTION Optimizing the efficiency of neural networks is important for real-world applications as they can only use limited computational resources and often have requirements on response time . There has been considerable work in this direction ( Howard et al. , 2017 ; Zhang et al. , 2018 ; Tan & Le , 2019 ) , but it mostly focuses on designing novel network architectures that can achieve a favorable speed-accuracy trade-off . Here , we do not present any novel method or architecture design . Instead , we focus on analyzing the accuracy and efficiency of a simple paradigm : committee-based models . We use the term “ committee ” to refer to model ensembles or cascades , which indicates that they are built using multiple independent models . Committee-based models have been extensively studied and used before deep learning ( Breiman , 1996 ; Schapire , 1990 ; Freund & Schapire , 1997 ; Viola & Jones , 2001 ) . However , when comparing the efficiency of deep models , committee-based models are rarely considered in recent work ( Howard et al. , 2017 ; Zhang et al. , 2018 ; Tan & Le , 2019 ) . There is still no systematic understanding of their efficiency in comparison with single models – models that only use one network . Such an understanding is informative both for researchers to push the frontier of efficient models and for practitioners to select model designs in real-world applications . To fill this knowledge gap , we conduct a comprehensive analysis of the efficiency of committee-based models . To highlight the practical benefit of committee-based models , we intentionally choose the simplest possible method , which directly uses off-the-shelf , independently pre-trained models to build ensembles or cascades . ( ∗Work done during an internship at Google . †Work done as part of the Google AI Residency Program . )
We ensemble multiple pre-trained models via a simple average over their predictions ( Sec . 3 ) . For cascades , we sequentially apply each model and use a simple heuristic ( e.g. , maximum probability in the prediction ) to determine when to exit from the cascade ( Sec . 4 ) . We show that even this method already outperforms state-of-the-art architectures found by costly neural architecture search ( NAS ) methods . Note that this method works with off-the-shelf models and does not use specialized techniques . For example , it differs from Boosting ( Schapire , 1990 ) where each new model is conditioned on previous ones , and does not require the weight generation mechanism in previous efficient ensemble methods ( Wen et al. , 2020 ) . This method does not require the training of an early exit policy ( Bolukbasi et al. , 2017 ; Guan et al. , 2018 ) or the specially designed multi-scale architecture ( Huang et al. , 2018 ) in previous work on building cascades . To be clear , the contribution of this paper is not in the invention of model ensembles and cascades , as they have been known for decades , and is not in a new proposed method to build them . Instead , it is in the thorough evaluation and comparison of committee-based models with commonly used model architectures . Our analysis shows that committee-based models provide a simple complementary paradigm to achieve superior efficiency without tuning the architecture . One can often improve accuracy while reducing inference and training cost by building committees out of existing networks . Our findings generalize to a wide variety of tasks , including image classification , video classification , and semantic segmentation , and hold true for various architecture families : ViT ( Dosovitskiy et al. , 2021 ) , EfficientNet ( Tan & Le , 2019 ) , ResNet ( He et al. , 2016 ) , MobileNetV2 ( Sandler et al. , 2018 ) , and X3D ( Feichtenhofer , 2020 ) . 
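The exit heuristic just described (apply the models sequentially, stop as soon as the maximum predicted probability clears a confidence threshold) can be sketched as follows. The threshold values and stand-in models here are illustrative, not the paper's tuned configuration.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def cascade_predict(models, x, thresholds):
    """Run models from cheapest to most expensive; exit early once the
    max softmax probability reaches that stage's confidence threshold."""
    probs = None
    for model, tau in zip(models, thresholds):
        probs = softmax(model(x))
        if np.max(probs) >= tau:   # confident enough: stop here
            break
    return int(np.argmax(probs))

# An unconfident cheap model defers to a confident expensive one.
cheap = lambda x: np.array([0.1, 0.0])    # max prob ~0.52, below 0.9
costly = lambda x: np.array([0.0, 5.0])   # max prob ~0.99
print(cascade_predict([cheap, costly], None, [0.9, 0.0]))  # -> 1
```

On easy inputs the first (cheap) model already exits, so the average FLOPs per example can be far below the cost of always running the largest model.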
We summarize our findings as follows : • Ensembles are more cost-effective than a single model in the large computation regime ( Sec . 3 ) . For example , an ensemble of two separately trained EfficientNet-B5 models matches B7 accuracy , a state-of-the-art ImageNet model , while having almost 50 % fewer FLOPs ( 20.5B vs. 37B ) . • Cascades outperform single models in all computation regimes ( Sec . 4 & 5 ) . Our cascade matches B7 accuracy while using on average 5.4x fewer FLOPs . Cascades can also achieve a 2.3x speedup over ViT-L-384 , a Transformer architecture , while matching its accuracy on ImageNet . • We further show that ( 1 ) the efficiency of cascades is evident in both FLOPs and on-device latency and throughput ( Sec . 5.1 ) ; ( 2 ) cascades can provide a guarantee on worst-case FLOPs ( Sec . 5.2 ) ; ( 3 ) one can build self-cascades using a single model with multiple inference resolutions to achieve a significant speedup ( Sec . 6 ) . • Committee-based models are applicable beyond image classification ( Sec . 7 ) and outperform single models on the tasks of video classification and semantic segmentation . Our cascade outperforms X3D-XL by 1.2 % on Kinetics-600 ( Carreira et al. , 2018 ) while using fewer FLOPs . 2 RELATED WORK . Efficient Neural Networks . There has been significant progress in designing efficient neural networks . In early work , most efficient networks , such as MobileNet ( Howard et al. , 2017 ; Sandler et al. , 2018 ) and ShuffleNet ( Howard et al. , 2019 ) , were manually designed . Recent work started to use neural architecture search ( NAS ) to automatically learn efficient network designs ( Zoph et al. , 2018 ; Cao et al. , 2019 ; Tan et al. , 2019 ; Tan & Le , 2019 ; Chaudhuri et al. , 2020 ) . They mostly focus on improving the efficiency of single models by designing better architectures , while we explore committee-based models without tuning the architecture . Ensembles .
Ensemble learning has been well studied in machine learning and there have been many seminal works , such as Bagging ( Breiman , 1996 ) , Boosting ( Schapire , 1990 ) , and AdaBoost ( Freund & Schapire , 1997 ) . Ensembles of neural networks have been used for many tasks , such as image classification ( Szegedy et al. , 2015 ; Huang et al. , 2017a ) , machine translation ( Wen et al. , 2020 ) , active learning ( Beluch et al. , 2018 ) , and out-of-distribution robustness ( Lakshminarayanan et al. , 2017 ; Fort et al. , 2019 ; Wenzel et al. , 2020 ) . But the efficiency of model ensembles has rarely been systematically investigated . Recent work indicated that ensembles can be more efficient than single models for image classification ( Kondratyuk et al. , 2020 ; Lobacheva et al. , 2020 ) . Our work further substantiates this claim through the analysis of modern architectures on large-scale benchmarks . Cascades . A large family of works have explored using cascades to speed up certain tasks . For example , the seminal work from Viola & Jones ( 2001 ) built a cascade of increasingly complex classifiers to speed up face detection . Cascades have also been explored in the context of deep neural networks . Bolukbasi et al . ( 2017 ) reduced the average test-time cost by learning a policy to allow easy examples to early exit from a network . A similar idea was also explored by Guan et al . ( 2018 ) . Huang et al . ( 2018 ) proposed a specially designed architecture Multi-Scale DenseNet to better incorporate early exits into neural networks . Given a pool of models , Streeter ( 2018 ) presented an approximation algorithm to produce a cascade that can preserve accuracy while reducing FLOPs and demonstrated improvement over state-of-the-art NAS-based models on ImageNet . 
Different from previous work that primarily focuses on developing new methods to build cascades , we show that even the most straightforward method can already provide a significant speedup without training an early exit policy ( Bolukbasi et al. , 2017 ; Guan et al. , 2018 ) or designing a specialized multi-scale architecture ( Huang et al. , 2018 ) . Dynamic Neural Networks . Dynamic neural networks allocate computational resources based on the input example , i.e. , spending more computation on hard examples and less on easy ones ( Han et al. , 2021 ) . For example , Shazeer et al . ( 2017 ) trained a gating network to determine what parts in a high-capacity model should be used for each example . Recent work ( Wu et al. , 2018 ; Veit & Belongie , 2018 ; Wang et al. , 2018 ) explored learning a policy to dynamically select layers or blocks to execute in ResNet based on the input image . Our analysis shows that cascades of pre-trained models are actually a strong baseline for dynamic neural networks . 3 ENSEMBLES ARE ACCURATE , EFFICIENT , AND FAST TO TRAIN . Model ensembles are useful for improving accuracy , but the usage of multiple models also introduces extra computational cost . When the total computation is fixed , which one will give a higher accuracy : single models or ensembles ? The answer is important for real-world applications but this question has rarely been systematically studied on modern architectures and large-scale benchmarks . We investigate this question on ImageNet ( Russakovsky et al. , 2015 ) with three architecture families : EfficientNet ( Tan & Le , 2019 ) , ResNet ( He et al. , 2016 ) , and MobileNetV2 ( Sandler et al. , 2018 ) . Each architecture family contains a series of networks with different levels of accuracy and computational cost . Within each family , we train a pool of models , compute the ensemble of different combinations of models , and compare these ensembles with the single models in the family . 
We denote an ensemble of n image classification models by { M1 , . . . , Mn } , where Mi is the ith model . Given an image x , αi = Mi ( x ) is a vector representing the logits for each class . To ensemble the n models , we compute the mean of logits [ 1 ] α_ens = ( 1/n ) ∑_i α_i and predict the class for image x by applying argmax to α_ens . The total computation of the ensemble is FLOPs_ens = ∑_i FLOPs ( Mi ) , where FLOPs ( · ) gives the FLOPs of a model . We show the top-1 accuracy on ImageNet and FLOPs of single models and ensembles in Figure 2 . Since there are many possible combinations of models to ensemble , we only show the Pareto optimal ensembles in the figure . We see that ensembles are more cost-effective than large single models , e.g. , EfficientNet-B5/B6/B7 and ResNet-152/200 . But in the small computation regime , single models outperform ensembles . For example , the ensemble of 2 B5 models matches B7 accuracy while using about 50 % fewer FLOPs . However , ensembles use more FLOPs than MobileNetV2 when they have a similar accuracy . [ 1 ] We note that the mean of probabilities is a more general choice since logits can be arbitrarily scaled . In our experiments , we observe that they yield similar performance , with the mean of logits being marginally better . The findings in our work hold true no matter which choice is used . A possible explanation of why model ensembles are more powerful at large computation than at small computation comes from the perspective of the bias-variance tradeoff . Large models usually have small bias but large variance , where the variance term dominates the test error . Therefore , ensembles are beneficial at large computation as they can reduce the variance in prediction ( Breiman , 1996 ) . For small models , the bias term dominates the test error . Ensembles can reduce the variance , but this cannot compensate for the fact that the bias of small models is large . Therefore , ensembles are less powerful at small computation .
Our analysis indicates that instead of using a large model , one should use an ensemble of multiple relatively smaller models , which would give similar performance but with fewer FLOPs . In practice , model ensembles can be easily parallelized ( e.g. , using multiple accelerators ) , which may provide further speedup for inference . Moreover , often the total training cost of an ensemble is much lower than that of an equally accurate single model ( see appendix for more details ) . | This paper investigates the effectiveness of model cascades in computation/accuracy tradeoff improvement. A straightforward procedure is used, where all combination-permutations of a handful of models are evaluated in a cascade, and exit thresholds are determined by choosing best computation work within accuracy degradation constraint (or, best accuracy given computation constraint). The resulting cascades perform significantly better than larger single models (more accurate or fewer flops depending on comparison). Most significantly, the paper provides extensive evaluations on the degree of these gains for three model families (EfficientNet, Resnet, and MobileNetV2). | SP:a0537bc2883ff413f0ffafc44a53a65be9fc7738 |
MoReL: Multi-omics Relational Learning | 1 INTRODUCTION . Multi-view learning tries to fully leverage the information from multiple sources ( i.e . different types of omics data in molecular biology ) and represents them in a shared embedding space , which is beneficial for many downstream tasks with a limited number of training samples . In biomedical applications , the shared embedding space also enables better understanding of the underlying biological mechanisms by discovering interactions between different types of molecules , which is our focus in this paper . Existing multi-omics data integration methods are limited in their applicability . First , most of them attempt to derive low-dimensional embeddings of the input samples and are not designed to infer a multi-partite graph that encodes the interactions across views . In unsupervised setting , matrix factorization based methods , such as Bayesian Canonical Correlation Analysis ( BCCA ) ( Klami et al. , 2013 ) and Multi-Omics Factor Analysis ( MOFA ) ( Argelaguet et al. , 2018 ) , can achieve the similar goal of cross-view relational learning but often through two-step procedures , in which the factor loading parameters are used for downstream interaction analyses across views . Second , a very recent relational inference for multi-view data integration , BayRel ( Hajiramezanali et al. , 2020 ) , is built on three strict assumptions , which may limit its practical application , including in multi-omics data integration : 1 ) A graph of dependency between features of each view is available ; 2 ) Input dataset is complete on all views with no missing samples ; 3 ) The samples in different views are well-paired . While the first limitation might be solved by learning a graph using an ad-hoc technique , the last two issues are common in many multi-omics data integration problems . Integrated samples commonly have one or more view with various missing patterns . 
This is mostly due to limitations of experimental designs or compositions from different data platforms . In addition , data might be collected in different laboratories or the sample IDs are not available due to patient identification or security concerns , leading to unpaired datasets . Apart from these , we might not have access to a priori graph structure data in some view ( s ) as the nature of data might not be structured , or we only have incomplete or very noisy prior knowledge . For such multi-omics data , leaving out such a view may lose some complementary information while enforcing graph structures may cause degraded performance . In this work , we propose a new Multi-omics Relational Learning method , MoReL , based on the fused Gromov-Wasserstein ( FGW ) regularization , mitigating the dependency of multi-view learning on the aforementioned two assumptions . The proposed method contains four major contributions : 1 ) MoReL provides a new Bayesian multi-omics relational learning framework with efficient variational inference and is able to exploit non-linear transformations of data by leveraging deep learning models for either unstructured or graph-structured data ; 2 ) MoReL learns a multi-partite graph across different features from multiple views using a FGW-based decoder , facilitating meaningful biological knowledge discovery from integrative multi-omics data analysis while accounting for arbitrarily permutation and/or transformation caused by processing features with different deep functions across the views ; 3 ) MoReL can flexibly integrate both structured and unstructured heterogeneous views in one framework , in which only confident constraints need to be imposed to improve the model performance ; 4 ) MoReL is able to integrate multiple views with unpaired samples and/or arbitrary sample-missing patterns . 2 RELATED WORKS . Optimal transport . 
There have been extensive efforts to utilize Gromov-Wasserstein ( GW ) discrepancy to solve the alignment problems in shape and object matching ( Mémoli , 2009 ; 2011 ) . A similar attempt has been made recently to investigate its potential for more diverse applications , such as aligning vocabulary sets between different languages ( Alvarez-Melis & Jaakkola , 2018 ) , and graph matching ( Chowdhury & Mémoli , 2019 ; Vayer et al. , 2018b ; Xu et al. , 2019b ) . Peyré et al . ( 2016 ) have proposed a fast Sinkhorn projection-based algorithm ( Cuturi , 2013 ) to compute the entropy-regularized GW distance . Following this direction , Xu et al . ( 2019b ) have replaced the entropy regularizer with a Bregman proximal term . To further reduce the computational complexity , the recursive GW distance ( Xu et al. , 2019a ) and the sliced GW distance ( Vayer et al. , 2019 ) have been proposed . In Bunne et al . ( 2019 ) , a pair of generative models are learned for incomparable spaces by defining an adversarial objective function based on the GW discrepancy . It imposes an orthogonal assumption on the transformation between the sample and its latent space . However , it can not incorporate the graph structured data . Similar to our model in this paper , Vayer et al . ( 2018a ) and Xu et al . ( 2020 ) have proposed to impose the fused GW regularization in their objective functions by combining GW and Wasserstein discrepancies . Graph CCA ( gCCA ) . In order to utilize a priori known information about geometry of the samples , gCCA methods ( Chen et al. , 2019 ; 2018 ) have been proposed to construct a dependency graph between samples and directly impose it into a regularizer . Similar to classical CCA , gCCA learns an unstructured shared latent representation . Unlike our MoReL , though , they can neither take advantage of the dependency graph between features , nor explicitly model relational dependency between features across views . 
Therefore , they rely on ad-hoc post-processing procedures as a second step to infer inter-relations . Graph representation learning . Graph neural network architectures have been shown to be effective for link prediction ( Hamilton et al. , 2017 ; Kipf & Welling , 2016 ; Hasanzadeh et al. , 2019 ) as well as matrix completion for recommender systems ( Berg et al. , 2017 ; Monti et al. , 2017 ; Kalofolias et al. , 2014 ; Ma et al. , 2011 ) . The first group of models is dealing with a single graph and is not able to deal with heterogeneous graphs , with multiple types of nodes and edges , and node attributes ( Zhang et al. , 2019 ) . The second group utilizes the known item-item and user-user relationships and their attributes to complete the user-item rating matrix . However , they rely on two strict assumptions : 1 ) the inter-relation matrix is partially observed ; and 2 ) both views have structured information . The proposed MoReL achieves robust multi-view learning without these assumptions , making it more practical in multi-omics data integration . 3 PRELIMINARIES . 3.1 WASSERSTEIN DISTANCE . Wasserstein distance ( WD ) is a measure for comparing probability distributions on the same or different spaces where a meaningful distance across domains can be computed ( Vayer et al. , 2019 ) . Given two distributions Λ and ∆ defined on the corresponding space X and Y , and a transportation cost c : X× Y→ R+ , WD is the solution to the following optimization problem : inf π∈Π E ( x , y ) ∼π [ c ( x , y ) ] = inf π∈Π ∫ c ( x , y ) dπ ( x , y ) , where π ∈ Π ( X×Y ) is the transport map such that its marginals are Λ and ∆ , respectively . Assuming that the probability distributions are discrete , i.e . 
Λ = ∑_{i=1}^n a_i δ_{x_i} and ∆ = ∑_{j=1}^m b_j δ_{y_j} with δ as the Dirac delta function , the WD optimization can be simplified as follows : D_W ( Λ , ∆ ) = min_{T ∈ Π ( a , b )} ∑_{i=1}^n ∑_{j=1}^m T_{i,j} c ( x_i , y_j ) , where T_{i,j} is an element of the transport matrix T whose row-wise and column-wise sums equal [ a_i ]_{i=1}^n and [ b_j ]_{j=1}^m , respectively . 3.2 GROMOV-WASSERSTEIN DISTANCE . Gromov-Wasserstein distance ( GWD ) has been proposed as a natural extension of WD when a meaningful transportation cost between the distributions can not be defined for WD , for example when the two distributions are defined in Euclidean spaces with different dimensions ( Vayer et al. , 2019 ) . Instead of measuring inter-domain distances , GWD measures the distance between pairs of samples in one distribution and compares it to those in the other domain . More specifically , given two distributions Λ and ∆ defined on spaces X and Y , as well as two domain-specific transportation costs c^( X ) and c^( Y ) , GWD is the solution to the following optimization problem : inf_{π ∈ Π} E_{( x , y ) ∼ π , ( x′ , y′ ) ∼ π} [ L ( x , x′ , y , y′ ) ] = inf_{π ∈ Π} ∫∫ L ( x , x′ , y , y′ ) dπ ( x , y ) dπ ( x′ , y′ ) , where L ( x , x′ , y , y′ ) = ‖ c^( X ) ( x , x′ ) − c^( Y ) ( y , y′ ) ‖ and π ∈ Π ( X × Y ) is the transport map . Likewise , this can be derived for discrete distributions Λ = ∑_{i=1}^n a_i δ_{x_i} and ∆ = ∑_{j=1}^m b_j δ_{y_j} , as follows : D_GW ( Λ , ∆ ) = min_{T ∈ Π ( a , b )} ∑_{i,i′=1}^n ∑_{j,j′=1}^m T_{i,j} T_{i′,j′} L ( x_i , x_{i′} , y_j , y_{j′} ) , ( 1 ) where T_{i,j} is an element of the transport matrix T whose row-wise and column-wise sums equal [ a_i ]_{i=1}^n and [ b_j ]_{j=1}^m , respectively . 4 METHOD . 4.1 PROBLEM FORMULATION AND NOTATIONS . We propose a novel hierarchical generative model for multi-omics data integration that incorporates view-specific structure information when it is available .
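For the discrete WD above, an entropy-regularized variant can be computed with Sinkhorn iterations (Cuturi, 2013), which the GW literature cited here builds on. A minimal numpy sketch, assuming the marginals a and b each sum to one and a small regularizer eps (function name and defaults are illustrative, not from the paper):

```python
import numpy as np

def sinkhorn_wd(a, b, C, eps=0.01, n_iter=500):
    """Entropy-regularized discrete Wasserstein distance.
    a: (n,) and b: (m,) marginal weights; C: (n, m) ground-cost matrix.
    Returns the transport cost <T, C> and the plan T in Pi(a, b)."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):           # alternate projections onto the marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]   # row sums ~ a, column sums ~ b
    return float(np.sum(T * C)), T

# Two identical point clouds: the optimal transport cost is 0.
a = b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])  # |x_i - y_j| for x = y = (0, 1)
cost, T = sinkhorn_wd(a, b, C)
```

The same alternating-projection idea underlies the Sinkhorn-based GW solvers (Peyré et al., 2016) mentioned in the related work, where the cost matrix itself is updated between projections.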
Given observations from structured and unstructured views , our model , Multi-omics Relational Learning ( MoReL ) , aims to infer the inter-relations among entities , i.e. , features , across all of the views . More specifically , assume that multiple views , V , of data are given . Without loss of generality , we assume that the structure information , provided as a graph , is available for some of the views V_s ⊂ V , and the remaining views V_u = V \ V_s are unstructured . We note that every structure could be represented as a graph . For example , image and sequential data could be represented over grid and directed path graphs , respectively . We represent the set of graphs for structured views by G_s = { G^( v ) }_{v ∈ V_s} and their adjacency matrices by A_s = { A^( v ) }_{v ∈ V_s} . We also define X_s = { X^( v ) }_{v ∈ V_s} as the set of node attributes for structured views , and X_u = { X^( v ) }_{v ∈ V_u} as the set of data for unstructured views . Moreover , N_v denotes the number of nodes in structured views and the number of features in unstructured views . MoReL infers the interactions among the nodes in G_s and the features in X_u . We represent these inter-relations by a multi-partite graph with ∑_{v ∈ V} N_v nodes and a multi-adjacency tensor A = { A^( v v′ ) }_{v , v′ ∈ V , v ≠ v′} , where A^( v v′ ) is the N_v × N_{v′} bi-adjacency matrix between views v and v′ .
MoReL: Multi-omics Relational Learning | 1 INTRODUCTION . Multi-view learning tries to fully leverage the information from multiple sources ( i.e . different types of omics data in molecular biology ) and represents them in a shared embedding space , which is beneficial for many downstream tasks with a limited number of training samples . In biomedical applications , the shared embedding space also enables better understanding of the underlying biological mechanisms by discovering interactions between different types of molecules , which is our focus in this paper . Existing multi-omics data integration methods are limited in their applicability . First , most of them attempt to derive low-dimensional embeddings of the input samples and are not designed to infer a multi-partite graph that encodes the interactions across views . In unsupervised setting , matrix factorization based methods , such as Bayesian Canonical Correlation Analysis ( BCCA ) ( Klami et al. , 2013 ) and Multi-Omics Factor Analysis ( MOFA ) ( Argelaguet et al. , 2018 ) , can achieve the similar goal of cross-view relational learning but often through two-step procedures , in which the factor loading parameters are used for downstream interaction analyses across views . Second , a very recent relational inference for multi-view data integration , BayRel ( Hajiramezanali et al. , 2020 ) , is built on three strict assumptions , which may limit its practical application , including in multi-omics data integration : 1 ) A graph of dependency between features of each view is available ; 2 ) Input dataset is complete on all views with no missing samples ; 3 ) The samples in different views are well-paired . While the first limitation might be solved by learning a graph using an ad-hoc technique , the last two issues are common in many multi-omics data integration problems . Integrated samples commonly have one or more view with various missing patterns . 
This is mostly due to limitations of experimental designs or compositions from different data platforms . In addition , data might be collected in different laboratories or the sample IDs are not available due to patient identification or security concerns , leading to unpaired datasets . Apart from these , we might not have access to a priori graph structure data in some view ( s ) as the nature of data might not be structured , or we only have incomplete or very noisy prior knowledge . For such multi-omics data , leaving out such a view may lose some complementary information while enforcing graph structures may cause degraded performance . In this work , we propose a new Multi-omics Relational Learning method , MoReL , based on the fused Gromov-Wasserstein ( FGW ) regularization , mitigating the dependency of multi-view learning on the aforementioned two assumptions . The proposed method contains four major contributions : 1 ) MoReL provides a new Bayesian multi-omics relational learning framework with efficient variational inference and is able to exploit non-linear transformations of data by leveraging deep learning models for either unstructured or graph-structured data ; 2 ) MoReL learns a multi-partite graph across different features from multiple views using a FGW-based decoder , facilitating meaningful biological knowledge discovery from integrative multi-omics data analysis while accounting for arbitrarily permutation and/or transformation caused by processing features with different deep functions across the views ; 3 ) MoReL can flexibly integrate both structured and unstructured heterogeneous views in one framework , in which only confident constraints need to be imposed to improve the model performance ; 4 ) MoReL is able to integrate multiple views with unpaired samples and/or arbitrary sample-missing patterns . 2 RELATED WORKS . Optimal transport . 
There have been extensive efforts to utilize Gromov-Wasserstein ( GW ) discrepancy to solve the alignment problems in shape and object matching ( Mémoli , 2009 ; 2011 ) . A similar attempt has been made recently to investigate its potential for more diverse applications , such as aligning vocabulary sets between different languages ( Alvarez-Melis & Jaakkola , 2018 ) , and graph matching ( Chowdhury & Mémoli , 2019 ; Vayer et al. , 2018b ; Xu et al. , 2019b ) . Peyré et al . ( 2016 ) have proposed a fast Sinkhorn projection-based algorithm ( Cuturi , 2013 ) to compute the entropy-regularized GW distance . Following this direction , Xu et al . ( 2019b ) have replaced the entropy regularizer with a Bregman proximal term . To further reduce the computational complexity , the recursive GW distance ( Xu et al. , 2019a ) and the sliced GW distance ( Vayer et al. , 2019 ) have been proposed . In Bunne et al . ( 2019 ) , a pair of generative models are learned for incomparable spaces by defining an adversarial objective function based on the GW discrepancy . It imposes an orthogonal assumption on the transformation between the sample and its latent space . However , it can not incorporate the graph structured data . Similar to our model in this paper , Vayer et al . ( 2018a ) and Xu et al . ( 2020 ) have proposed to impose the fused GW regularization in their objective functions by combining GW and Wasserstein discrepancies . Graph CCA ( gCCA ) . In order to utilize a priori known information about geometry of the samples , gCCA methods ( Chen et al. , 2019 ; 2018 ) have been proposed to construct a dependency graph between samples and directly impose it into a regularizer . Similar to classical CCA , gCCA learns an unstructured shared latent representation . Unlike our MoReL , though , they can neither take advantage of the dependency graph between features , nor explicitly model relational dependency between features across views . 
Therefore , they rely on ad-hoc post-processing procedures as a second step to infer inter-relations . Graph representation learning . Graph neural network architectures have been shown to be effective for link prediction ( Hamilton et al. , 2017 ; Kipf & Welling , 2016 ; Hasanzadeh et al. , 2019 ) as well as matrix completion for recommender systems ( Berg et al. , 2017 ; Monti et al. , 2017 ; Kalofolias et al. , 2014 ; Ma et al. , 2011 ) . The first group of models is dealing with a single graph and is not able to deal with heterogeneous graphs , with multiple types of nodes and edges , and node attributes ( Zhang et al. , 2019 ) . The second group utilizes the known item-item and user-user relationships and their attributes to complete the user-item rating matrix . However , they rely on two strict assumptions : 1 ) the inter-relation matrix is partially observed ; and 2 ) both views have structured information . The proposed MoReL achieves robust multi-view learning without these assumptions , making it more practical in multi-omics data integration . 3 PRELIMINARIES . 3.1 WASSERSTEIN DISTANCE . Wasserstein distance ( WD ) is a measure for comparing probability distributions on the same or different spaces where a meaningful distance across domains can be computed ( Vayer et al. , 2019 ) . Given two distributions Λ and ∆ defined on the corresponding space X and Y , and a transportation cost c : X× Y→ R+ , WD is the solution to the following optimization problem : inf π∈Π E ( x , y ) ∼π [ c ( x , y ) ] = inf π∈Π ∫ c ( x , y ) dπ ( x , y ) , where π ∈ Π ( X×Y ) is the transport map such that its marginals are Λ and ∆ , respectively . Assuming that the probability distributions are discrete , i.e . 
Λ = ∑_{i=1}^n a_i δ_{x_i} and ∆ = ∑_{j=1}^m b_j δ_{y_j} with δ as the Dirac delta function , the WD optimization can be simplified as follows : D_W ( Λ , ∆ ) = min_{T ∈ Π ( a , b )} ∑_{i=1}^n ∑_{j=1}^m T_{i,j} c ( x_i , y_j ) , where T_{i,j} is an element of the transport matrix T whose row-wise and column-wise sums equal [ a_i ]_{i=1}^n and [ b_j ]_{j=1}^m , respectively . 3.2 GROMOV-WASSERSTEIN DISTANCE . Gromov-Wasserstein distance ( GWD ) has been proposed as a natural extension of WD when a meaningful transportation cost between the distributions can not be defined for WD , for example when the two distributions are defined in Euclidean spaces with different dimensions ( Vayer et al. , 2019 ) . Instead of measuring inter-domain distances , GWD measures the distance between pairs of samples in one distribution and compares it to those in the other domain . More specifically , given two distributions Λ and ∆ defined on spaces X and Y , as well as two domain-specific transportation costs c^( X ) and c^( Y ) , GWD is the solution to the following optimization problem : inf_{π ∈ Π} E_{( x , y ) ∼ π , ( x′ , y′ ) ∼ π} [ L ( x , x′ , y , y′ ) ] = inf_{π ∈ Π} ∫∫ L ( x , x′ , y , y′ ) dπ ( x , y ) dπ ( x′ , y′ ) , where L ( x , x′ , y , y′ ) = ‖ c^( X ) ( x , x′ ) − c^( Y ) ( y , y′ ) ‖ and π ∈ Π ( X × Y ) is the transport map . Likewise , this can be derived for discrete distributions Λ = ∑_{i=1}^n a_i δ_{x_i} and ∆ = ∑_{j=1}^m b_j δ_{y_j} , as follows : D_GW ( Λ , ∆ ) = min_{T ∈ Π ( a , b )} ∑_{i,i′=1}^n ∑_{j,j′=1}^m T_{i,j} T_{i′,j′} L ( x_i , x_{i′} , y_j , y_{j′} ) , ( 1 ) where T_{i,j} is an element of the transport matrix T whose row-wise and column-wise sums equal [ a_i ]_{i=1}^n and [ b_j ]_{j=1}^m , respectively . 4 METHOD . 4.1 PROBLEM FORMULATION AND NOTATIONS . We propose a novel hierarchical generative model for multi-omics data integration that incorporates view-specific structure information when it is available .
Given observations from structured and unstructured views, our model, Multi-omics Relational Learning (MoReL), aims to infer the inter-relations among entities, i.e., features, across all of the views. More specifically, assume that multiple views, V, of data are given. Without loss of generality, we assume that the structure information, provided as a graph, is available for some of the views $V_s \subset V$, and the remaining views $V_u = V \setminus V_s$ are unstructured. We note that every structure can be represented as a graph; for example, image and sequential data can be represented over grid and directed-path graphs, respectively. We represent the set of graphs for structured views by $G_s = \{G^{(v)}\}_{v \in V_s}$ and their adjacency matrices by $A_s = \{A^{(v)}\}_{v \in V_s}$. We also define $X_s = \{X^{(v)}\}_{v \in V_s}$ as the set of node attributes for structured views, and $X_u = \{X^{(v)}\}_{v \in V_u}$ as the set of data for unstructured views. Moreover, $N_v$ denotes the number of nodes in structured views and the number of features in unstructured views. MoReL infers the interactions among the nodes in $G_s$ and the features in $X_u$. We represent these inter-relations by a multi-partite graph with $\sum_{v \in V} N_v$ nodes and a multi-adjacency tensor $A = \{A^{(vv')}\}_{v,v' \in V,\, v \neq v'}$, where $A^{(vv')}$ is the $N_v \times N_{v'}$ bi-adjacency matrix between views v and v'. | The authors propose a Bayesian framework to learn relations among multi-omic datasets. The main unique advantage over existing methods is that the proposed method is able to learn without an a priori dependency structure, and it allows a certain degree of missingness and mismatching. Experiments on two biomedical multi-omics data sets partially demonstrate the effectiveness of the proposed method. | SP:2d073cd15d16bdf07f9e934c28192a1b36a27b38
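As a loose illustration of the notation above (the view names, sizes, and random initialization are hypothetical, not MoReL's actual inference procedure), the multi-adjacency tensor A can be stored as one $N_v \times N_{v'}$ bi-adjacency block per pair of distinct views:

```python
import numpy as np

def init_multipartite_adjacency(view_sizes, rng=None):
    """Build the bi-adjacency blocks A^(vv') of a multi-partite graph over
    the given views, here filled with random 0/1 entries as placeholders."""
    rng = np.random.default_rng(rng)
    views = sorted(view_sizes)
    A = {}
    for i, v in enumerate(views):
        for vp in views[i + 1:]:
            block = (rng.random((view_sizes[v], view_sizes[vp])) > 0.5).astype(float)
            A[(v, vp)] = block
            A[(vp, v)] = block.T  # undirected relations: A^(v'v) = A^(vv')^T
    return A

sizes = {"mRNA": 4, "miRNA": 3, "protein": 5}  # hypothetical view sizes N_v
A = init_multipartite_adjacency(sizes, rng=0)
print(A[("mRNA", "miRNA")].shape)  # (4, 3)
```

Note there are no within-view blocks A^(vv): the inter-relation graph is multi-partite by construction.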
Tactics on Refining Decision Boundary for Improving Certification-based Robust Training | In certification-based robust training, existing methods utilize relaxation-based methods to bound the worst-case performance of neural networks under a certain perturbation. However, these certification-based methods treat all examples equally regardless of their vulnerability and true adversarial distribution, limiting the model's potential to achieve optimal verifiable accuracy. In this paper, we propose new methods that introduce a customized weight distribution and automatic tuning of the perturbation schedule. These methods are generally applicable to all certification-based robust training approaches at almost no additional computational cost. Our results show improvement on MNIST with ε = 0.3 and CIFAR with ε = 8/255 for both IBP- and CROWN-IBP-based methods.

1 INTRODUCTION

Deep neural networks (DNNs) have been shown to be highly vulnerable to adversarial attacks: carefully-crafted inputs that are nearly indistinguishable from naturally-occurring data but are misclassified by the network (Goodfellow et al., 2014; Szegedy et al., 2014). There exist many algorithms both for crafting adversarial attacks (Papernot et al., 2016) and for building neural networks that are robust against such attacks. The fast gradient sign method (FGSM) (Goodfellow et al., 2014) was the very first approach to generate strong adversaries. Adversarial training with projected gradient descent (PGD) (Madry et al., 2018) is one of the most successful and widely-used defense methods available. Adversarial training seeks to minimize the worst-case loss under adversarial perturbations within a pre-defined perturbation level, where multi-step PGD is used to estimate the worst-case attack during training. Compared to standard training, the adversarial term introduces risks of over-fitting (Moosavi-Dezfooli, 2021; Rice et al., 2020; Wang et al., 2019) and training instability (Tsipras et al.
, 2019; Zhang et al., 2020b) for adversarial training. There exist many related works on improving model performance through additional regularization (Zhang et al., 2020b; Cisse et al., 2017) and customized training curricula (Zhang et al., 2019; Wang et al., 2020; Cai et al., 2018) in the attack-based scenario. While adversarial training has been shown to be empirically effective against many types of attacks, it cannot be proven that the resulting models are robust against all adversaries. In fact, it has been shown that many defense methods dependent on heuristic techniques, including adversarial training, can be bypassed by stronger adversaries (Athalye et al., 2018). This has motivated a separate branch of research focused on robustness certification/verification: computing provable guarantees on the robustness of neural networks against inputs with arbitrary perturbations within some ℓp norm-bounded ball. There are two main types of verification methods: complete and incomplete verification. The former computes exact robustness bounds using computationally-expensive methods such as mixed-integer programming (MIP) (Tjeng et al., 2019; Bunel et al., 2018), whereas the latter provides looser robustness bounds with different branches of methods such as randomized smoothing (Cohen et al., 2019; Salman et al., 2019; Lecuyer et al., 2019), Lipschitz-based robustness (Trockman & Kolter, 2021; Tsuzuku et al., 2018), and the convex adversarial polytope (Weng et al., 2018; Zhang et al., 2018; Dvijotham et al., 2018b; Gowal et al., 2019), on which this paper mainly focuses. A closely related branch of research, called certified robust training (Dvijotham et al., 2018a; Gowal et al., 2019; Zhang et al., 2020a), aims to train a certifiably robust model.
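The multi-step PGD inner maximization referenced above can be sketched as follows (a toy NumPy version driven by a hand-written loss gradient, not an implementation from any of the cited papers):

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, alpha, steps):
    """Multi-step ell_inf PGD: ascend the loss gradient, projecting the
    perturbation back into the eps-ball around x after every step."""
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))     # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)            # project into the ball
    return x_adv

# Toy example: loss(x') = ||x' - t||^2 has gradient 2(x' - t); PGD pushes
# x' as far from t as the eps-ball around x allows.
t = np.zeros(3)
x = np.ones(3)
x_adv = pgd_attack(x, grad_fn=lambda z: 2 * (z - t), eps=0.3, alpha=0.1, steps=20)
print(np.round(x_adv, 2))  # every coordinate driven to the boundary: [1.3 1.3 1.3]
```

In a real model, `grad_fn` would be the gradient of the training loss with respect to the input, obtained by back-propagation.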
These methods compute verified robustness bounds and incorporate them into the training process, such that the resulting models can then be proven robust using a verification method. Currently, the most efficient certified training method is interval bound propagation (IBP) (Gowal et al., 2019), which requires only two additional forward passes during training. It is important to note that IBP is significantly more efficient than adversarial training, which often requires many PGD steps to achieve high robustness (Madry et al., 2018). In standard certified training, a uniform perturbation ε and loss-function weight are usually used across all training examples; this is not ideal for optimal verified accuracy since the examples are not necessarily equivalent in terms of vulnerability.

1.1 OUR CONTRIBUTIONS

In this paper, we propose two novel methods to improve robustness by refining the certified decision boundary for verifiable adversarial training methods such as IBP and CROWN-IBP. In particular, our algorithms are generally applicable to all verifiable adversarial training methods at almost no additional computational cost. We summarize the key contributions as follows:

1. Zeng et al. (2020) pointed out that the adversarial example distribution deviates from the clean data. We further analyze the importance weight to correct the empirical distribution from a novel perspective and theoretically prove that more weight is needed for the examples closer to the decision boundary when sampling from the clean data distribution.

2. Building upon the previous analysis, we come up with a symmetrical re-weighting function based on the worst-case margin of the correct labels, emphasizing the examples around the decision boundary.

3. For verifiable adversarial perturbation, empirically a slightly larger ε_train achieves optimal evaluated robust accuracy at ε_eval, while this uniform setup is not ideal for examples around the decision boundary.
To address the issue of large perturbation, we develop an auto-tuning algorithm to customize ε_train for each individual example.

2 BACKGROUND

2.1 ADVERSARIAL ATTACKS AND TRAINING

Let $D = \{(x_i, y_i)\}_{i=1}^{n}$ represent the dataset, where $x_i \in X$ and $y_i \in Y = \{0, 1, \dots, C-1\}$. Let $B(x_i, \epsilon)$ denote the set of points in the ℓp-norm ball with radius ε around $x_i$. The objective function is

$$\min_\theta \frac{1}{n} \sum_{i=1}^{n} l(f_\theta(x'_i), y_i), \quad \text{where } x'_i = \arg\max_{x' \in B(x_i, \epsilon)} l(f_\theta(x'), y_i), \qquad (1)$$

where $f_\theta : X \rightarrow \mathbb{R}^C$ is a score function and $l : \mathbb{R}^C \times Y \rightarrow \mathbb{R}$ is the loss function. Adversarial training tackles this min-max problem by alternating between an inner loop, which uses an attack method to estimate the worst-case adversarial example in $B(x_i, \epsilon)$, and an outer loop, which updates the model parameters to minimize the loss function.

2.2 CERTIFIED TRAINING

Robustness verification. Different from attack-based training, certified training provides a guarantee for the worst-case scenario by bounding the neural network outputs. The certified accuracy is a lower bound on the robust accuracy under any attack method; thus improving the certified accuracy helps in understanding the potential of the neural network in defending against adversarial attacks.

Interval bound propagation. Many neural network verification algorithms have been proposed that can be used to compute the worst-case output. IBP (Gowal et al., 2019) utilizes a simple interval-arithmetic approach to propagate bounds through a network. Let $z_{k-1}$ denote the input to layer $z_k$. IBP bounds $z_k$ by computing lower and upper bounds $\underline{z}_k, \overline{z}_k$ such that $\underline{z}_k \leq z_k \leq \overline{z}_k$ holds element-wise. For affine layers represented by $h_k(z_{k-1}) = W z_{k-1} + b$, IBP computes:

$$\underline{z}_k = W \frac{\underline{z}_{k-1} + \overline{z}_{k-1}}{2} - |W| \frac{\overline{z}_{k-1} - \underline{z}_{k-1}}{2} + b \quad \text{and} \quad \overline{z}_k = W \frac{\underline{z}_{k-1} + \overline{z}_{k-1}}{2} + |W| \frac{\overline{z}_{k-1} - \underline{z}_{k-1}}{2} + b.$$
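The affine-layer IBP update above translates directly into code; a minimal NumPy sketch (bounds are propagated in center/radius form):

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate element-wise bounds through z_k = W z_{k-1} + b using the
    center/radius form of the IBP update."""
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius   # |W| absorbs the worst-case signs
    return new_center - new_radius, new_center + new_radius

W = np.array([[1.0, -2.0],
              [0.5,  1.0]])
b = np.array([0.0, 1.0])
l, u = ibp_affine(np.array([-1.0, 0.0]), np.array([1.0, 2.0]), W, b)
print(l, u)  # [-5.  0.5] [1.  3.5]
```

Sanity check against corner enumeration: the first output $z_1 - 2 z_2$ with $z_1 \in [-1,1]$, $z_2 \in [0,2]$ indeed ranges over $[-5, 1]$, matching the propagated bounds. Monotone layers (e.g., ReLU) are handled by applying the activation to the lower and upper bounds directly.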
Propagating the bounds through the network allows us to compute the upper and lower bounds of the last-layer logits, $\underline{z}_K, \overline{z}_K$, and evaluate whether an input x is verifiably robust. The worst-case logit of the true class equals its lower bound, and the logits of the other classes equal their upper bounds:

$$\hat{z}_{K,m} = \begin{cases} \underline{z}_{K,m} & \text{if } m \text{ is the true class} \\ \overline{z}_{K,m} & \text{otherwise} \end{cases} \qquad (2)$$

IBP training uses a hyperparameter schedule on ε (starting from 0 and increasing to ε_train, which is typically set slightly larger than ε_eval) and a mixed loss function that combines the natural and robust cross-entropy losses: $\min_\theta \mathbb{E}_{(x,y)\sim P}[\kappa\, l(z_K, y) + (1-\kappa)\, l(\hat{z}_K, y)]$, where P is the data distribution, l is the cross-entropy loss, κ is a hyperparameter that balances the weight between the natural and robust losses, and $\hat{z}_K$ represents the worst-case logits computed using IBP.

CROWN-IBP. CROWN was introduced by Zhang et al. (2018) and achieves a tight bound by adaptively selecting the linear approximation. Zhang et al. (2020a) proposed CROWN-IBP, which combines an IBP forward bounding pass with a CROWN-style backward bounding pass. CROWN-IBP-trained models have a tighter bound compared with IBP models under the IBP metric, at the cost of computational efficiency from the CROWN backward propagation, and they generally require more epochs for training stability.

2.3 ATTACK-BASED RE-WEIGHTING

Motivated by the idea that all data points are not equally vulnerable to adversarial attack, researchers have proposed methods to re-weight the minimax risk by adding a re-weighting term $\omega(x_i, y_i)$ ahead of each individual example's loss. For instance, Zeng et al. (2020) noted the deviation of the adversarial distribution from the clean examples and assigned weights that monotonically decrease with the examples' confidence margin.
The "confidence margin" is calculated by attack methods such as PGD, and the risk is then re-weighted by a parameterized exponential family:

$$\min_\theta \frac{1}{n} \sum_{i=1}^{n} \omega(x_i, y_i)\, l(f_\theta(x'_i), y_i), \quad \text{s.t.} \quad \omega(x_i, y_i) = \exp\!\big(-\alpha\, \mathrm{margin}(f_\theta, x_i + \delta_i, y_i)\big),$$

where α is a positive hyperparameter and the new risk biases larger weights towards the mis-classified examples. Zhang et al. (2020c) propose GAIRAT (Geometry-Aware Adversarial Training), a method to re-weight adversarial examples based on how close they are to the decision boundary. During the training process, GAIRAT explicitly assigns larger/smaller weights to data points closer to/farther from the decision boundary, respectively:

$$\omega(x_i, y_i) = \big(1 + \tanh\big(\lambda + 5 \times (1 - 2 \times \kappa(x_i, y_i)/K)\big)\big)/2,$$

where λ is a hyperparameter, K is the maximal allowed number of attack iterations, and κ is the least number of iterations that the attack method requires to fool the classifier. Similar to re-weighting, Wang et al. (2021) improve clean-image performance by prioritizing the robustness between the most dissimilar groups.

2.4 CUSTOMIZED ADVERSARIAL TRAINING

In most adversarial training methods, the adversarial attack strength usually follows a pre-defined schedule throughout the training process; for instance, the perturbation ε is a uniform number for all examples and usually increases gradually. Cheng et al. (2020) argued that this assumption may be problematic given that adversarial examples are not equally vulnerable, and proposed an auto-tuning method that assigns an individual ε_i to each data point and increases it if the current attack is successful. Zhang et al. (2020b) proposed friendly adversarial training (FAT), which progressively increases the attack perturbation with early-stopped PGD, alleviating the issues of overly strong adversarial attacks and cross-over mixture.
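The two re-weighting schemes above can be sketched as plain functions (the hyperparameter values in the example are illustrative only):

```python
import numpy as np

def gairat_weight(kappa, K, lam=0.0):
    """GAIRAT-style geometry-aware weight: kappa is the least number of
    attack iterations needed to fool the classifier (out of K allowed)."""
    return (1.0 + np.tanh(lam + 5.0 * (1.0 - 2.0 * kappa / K))) / 2.0

def margin_weight(margin, alpha=1.0):
    """Zeng et al.-style exponential-family weight that decays with the
    confidence margin (misclassified examples, margin < 0, get larger weight)."""
    return np.exp(-alpha * margin)

K = 10
kappas = np.array([0, 5, 10])  # fooled immediately / midway / never fooled
w = gairat_weight(kappas, K)
print(np.round(w, 3))  # examples near the decision boundary get the largest weight
```

Both schemes only rescale per-example losses, so they plug into any training loop by multiplying the loss vector before averaging.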
| This paper proposes a bound-based weighted loss and epsilon auto-tuning to improve the performance of certifiable training. The insights behind the improvements are mainly borrowed from well-developed adversarial training, while they are customized for certifiable training by considering the bound margins provided by bound propagation methods. The experimental results clearly show the improvements from the individual methods and from the combination of the two. | SP:d7b4a400a8376f863b898e6ab363485bb95f0cde
Tactics on Refining Decision Boundary for Improving Certification-based Robust Training | In certification-based robust training, existing methods utilize relaxation-based methods to bound the worst-case performance of neural networks under a certain perturbation. However, these certification-based methods treat all examples equally regardless of their vulnerability and true adversarial distribution, limiting the model's potential to achieve optimal verifiable accuracy. In this paper, we propose new methods that introduce a customized weight distribution and automatic tuning of the perturbation schedule. These methods are generally applicable to all certification-based robust training approaches at almost no additional computational cost. Our results show improvement on MNIST with ε = 0.3 and CIFAR with ε = 8/255 for both IBP- and CROWN-IBP-based methods.

1 INTRODUCTION

Deep neural networks (DNNs) have been shown to be highly vulnerable to adversarial attacks: carefully-crafted inputs that are nearly indistinguishable from naturally-occurring data but are misclassified by the network (Goodfellow et al., 2014; Szegedy et al., 2014). There exist many algorithms both for crafting adversarial attacks (Papernot et al., 2016) and for building neural networks that are robust against such attacks. The fast gradient sign method (FGSM) (Goodfellow et al., 2014) was the very first approach to generate strong adversaries. Adversarial training with projected gradient descent (PGD) (Madry et al., 2018) is one of the most successful and widely-used defense methods available. Adversarial training seeks to minimize the worst-case loss under adversarial perturbations within a pre-defined perturbation level, where multi-step PGD is used to estimate the worst-case attack during training. Compared to standard training, the adversarial term introduces risks of over-fitting (Moosavi-Dezfooli, 2021; Rice et al., 2020; Wang et al., 2019) and training instability (Tsipras et al.
, 2019; Zhang et al., 2020b) for adversarial training. There exist many related works on improving model performance through additional regularization (Zhang et al., 2020b; Cisse et al., 2017) and customized training curricula (Zhang et al., 2019; Wang et al., 2020; Cai et al., 2018) in the attack-based scenario. While adversarial training has been shown to be empirically effective against many types of attacks, it cannot be proven that the resulting models are robust against all adversaries. In fact, it has been shown that many defense methods dependent on heuristic techniques, including adversarial training, can be bypassed by stronger adversaries (Athalye et al., 2018). This has motivated a separate branch of research focused on robustness certification/verification: computing provable guarantees on the robustness of neural networks against inputs with arbitrary perturbations within some ℓp norm-bounded ball. There are two main types of verification methods: complete and incomplete verification. The former computes exact robustness bounds using computationally-expensive methods such as mixed-integer programming (MIP) (Tjeng et al., 2019; Bunel et al., 2018), whereas the latter provides looser robustness bounds with different branches of methods such as randomized smoothing (Cohen et al., 2019; Salman et al., 2019; Lecuyer et al., 2019), Lipschitz-based robustness (Trockman & Kolter, 2021; Tsuzuku et al., 2018), and the convex adversarial polytope (Weng et al., 2018; Zhang et al., 2018; Dvijotham et al., 2018b; Gowal et al., 2019), on which this paper mainly focuses. A closely related branch of research, called certified robust training (Dvijotham et al., 2018a; Gowal et al., 2019; Zhang et al., 2020a), aims to train a certifiably robust model.
These methods compute verified robustness bounds and incorporate them into the training process, such that the resulting models can then be proven robust using a verification method. Currently, the most efficient certified training method is interval bound propagation (IBP) (Gowal et al., 2019), which requires only two additional forward passes during training. It is important to note that IBP is significantly more efficient than adversarial training, which often requires many PGD steps to achieve high robustness (Madry et al., 2018). In standard certified training, a uniform perturbation ε and loss-function weight are usually used across all training examples; this is not ideal for optimal verified accuracy since the examples are not necessarily equivalent in terms of vulnerability.

1.1 OUR CONTRIBUTIONS

In this paper, we propose two novel methods to improve robustness by refining the certified decision boundary for verifiable adversarial training methods such as IBP and CROWN-IBP. In particular, our algorithms are generally applicable to all verifiable adversarial training methods at almost no additional computational cost. We summarize the key contributions as follows:

1. Zeng et al. (2020) pointed out that the adversarial example distribution deviates from the clean data. We further analyze the importance weight to correct the empirical distribution from a novel perspective and theoretically prove that more weight is needed for the examples closer to the decision boundary when sampling from the clean data distribution.

2. Building upon the previous analysis, we come up with a symmetrical re-weighting function based on the worst-case margin of the correct labels, emphasizing the examples around the decision boundary.

3. For verifiable adversarial perturbation, empirically a slightly larger ε_train achieves optimal evaluated robust accuracy at ε_eval, while this uniform setup is not ideal for examples around the decision boundary.
To address the issue of large perturbation, we develop an auto-tuning algorithm to customize ε_train for each individual example.

2 BACKGROUND

2.1 ADVERSARIAL ATTACKS AND TRAINING

Let $D = \{(x_i, y_i)\}_{i=1}^{n}$ represent the dataset, where $x_i \in X$ and $y_i \in Y = \{0, 1, \dots, C-1\}$. Let $B(x_i, \epsilon)$ denote the set of points in the ℓp-norm ball with radius ε around $x_i$. The objective function is

$$\min_\theta \frac{1}{n} \sum_{i=1}^{n} l(f_\theta(x'_i), y_i), \quad \text{where } x'_i = \arg\max_{x' \in B(x_i, \epsilon)} l(f_\theta(x'), y_i), \qquad (1)$$

where $f_\theta : X \rightarrow \mathbb{R}^C$ is a score function and $l : \mathbb{R}^C \times Y \rightarrow \mathbb{R}$ is the loss function. Adversarial training tackles this min-max problem by alternating between an inner loop, which uses an attack method to estimate the worst-case adversarial example in $B(x_i, \epsilon)$, and an outer loop, which updates the model parameters to minimize the loss function.

2.2 CERTIFIED TRAINING

Robustness verification. Different from attack-based training, certified training provides a guarantee for the worst-case scenario by bounding the neural network outputs. The certified accuracy is a lower bound on the robust accuracy under any attack method; thus improving the certified accuracy helps in understanding the potential of the neural network in defending against adversarial attacks.

Interval bound propagation. Many neural network verification algorithms have been proposed that can be used to compute the worst-case output. IBP (Gowal et al., 2019) utilizes a simple interval-arithmetic approach to propagate bounds through a network. Let $z_{k-1}$ denote the input to layer $z_k$. IBP bounds $z_k$ by computing lower and upper bounds $\underline{z}_k, \overline{z}_k$ such that $\underline{z}_k \leq z_k \leq \overline{z}_k$ holds element-wise. For affine layers represented by $h_k(z_{k-1}) = W z_{k-1} + b$, IBP computes:

$$\underline{z}_k = W \frac{\underline{z}_{k-1} + \overline{z}_{k-1}}{2} - |W| \frac{\overline{z}_{k-1} - \underline{z}_{k-1}}{2} + b \quad \text{and} \quad \overline{z}_k = W \frac{\underline{z}_{k-1} + \overline{z}_{k-1}}{2} + |W| \frac{\overline{z}_{k-1} - \underline{z}_{k-1}}{2} + b.$$
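Once bounds reach the last layer, certification reduces to comparing the true class's lower bound against the other classes' upper bounds; a minimal sketch of that check (an assumed interface, not code from the paper):

```python
import numpy as np

def verified_robust(lower, upper, y):
    """Certify an input from its last-layer IBP bounds: build the worst-case
    logits -- true-class lower bound vs. other-class upper bounds -- and
    check that the true class still has the largest logit."""
    z_hat = upper.copy()
    z_hat[y] = lower[y]
    return int(np.argmax(z_hat)) == y

lower = np.array([2.0, -1.0, 0.0])
upper = np.array([4.0, 1.5, 0.5])
print(verified_robust(lower, upper, y=0))  # True: 2.0 beats 1.5 and 0.5
```

The certified accuracy reported by such methods is simply the fraction of test inputs for which this check passes.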
Propagating the bounds through the network allows us to compute the upper and lower bounds of the last-layer logits, $\underline{z}_K, \overline{z}_K$, and evaluate whether an input x is verifiably robust. The worst-case logit of the true class equals its lower bound, and the logits of the other classes equal their upper bounds:

$$\hat{z}_{K,m} = \begin{cases} \underline{z}_{K,m} & \text{if } m \text{ is the true class} \\ \overline{z}_{K,m} & \text{otherwise} \end{cases} \qquad (2)$$

IBP training uses a hyperparameter schedule on ε (starting from 0 and increasing to ε_train, which is typically set slightly larger than ε_eval) and a mixed loss function that combines the natural and robust cross-entropy losses: $\min_\theta \mathbb{E}_{(x,y)\sim P}[\kappa\, l(z_K, y) + (1-\kappa)\, l(\hat{z}_K, y)]$, where P is the data distribution, l is the cross-entropy loss, κ is a hyperparameter that balances the weight between the natural and robust losses, and $\hat{z}_K$ represents the worst-case logits computed using IBP.

CROWN-IBP. CROWN was introduced by Zhang et al. (2018) and achieves a tight bound by adaptively selecting the linear approximation. Zhang et al. (2020a) proposed CROWN-IBP, which combines an IBP forward bounding pass with a CROWN-style backward bounding pass. CROWN-IBP-trained models have a tighter bound compared with IBP models under the IBP metric, at the cost of computational efficiency from the CROWN backward propagation, and they generally require more epochs for training stability.

2.3 ATTACK-BASED RE-WEIGHTING

Motivated by the idea that all data points are not equally vulnerable to adversarial attack, researchers have proposed methods to re-weight the minimax risk by adding a re-weighting term $\omega(x_i, y_i)$ ahead of each individual example's loss. For instance, Zeng et al. (2020) noted the deviation of the adversarial distribution from the clean examples and assigned weights that monotonically decrease with the examples' confidence margin.
The "confidence margin" is calculated by attack methods such as PGD, and the risk is then re-weighted by a parameterized exponential family:

$$\min_\theta \frac{1}{n} \sum_{i=1}^{n} \omega(x_i, y_i)\, l(f_\theta(x'_i), y_i), \quad \text{s.t.} \quad \omega(x_i, y_i) = \exp\!\big(-\alpha\, \mathrm{margin}(f_\theta, x_i + \delta_i, y_i)\big),$$

where α is a positive hyperparameter and the new risk biases larger weights towards the mis-classified examples. Zhang et al. (2020c) propose GAIRAT (Geometry-Aware Adversarial Training), a method to re-weight adversarial examples based on how close they are to the decision boundary. During the training process, GAIRAT explicitly assigns larger/smaller weights to data points closer to/farther from the decision boundary, respectively:

$$\omega(x_i, y_i) = \big(1 + \tanh\big(\lambda + 5 \times (1 - 2 \times \kappa(x_i, y_i)/K)\big)\big)/2,$$

where λ is a hyperparameter, K is the maximal allowed number of attack iterations, and κ is the least number of iterations that the attack method requires to fool the classifier. Similar to re-weighting, Wang et al. (2021) improve clean-image performance by prioritizing the robustness between the most dissimilar groups.

2.4 CUSTOMIZED ADVERSARIAL TRAINING

In most adversarial training methods, the adversarial attack strength usually follows a pre-defined schedule throughout the training process; for instance, the perturbation ε is a uniform number for all examples and usually increases gradually. Cheng et al. (2020) argued that this assumption may be problematic given that adversarial examples are not equally vulnerable, and proposed an auto-tuning method that assigns an individual ε_i to each data point and increases it if the current attack is successful. Zhang et al. (2020b) proposed friendly adversarial training (FAT), which progressively increases the attack perturbation with early-stopped PGD, alleviating the issues of overly strong adversarial attacks and cross-over mixture. | This paper proposes two ideas for improving the performance of certified training.
The first idea is to assign a weight to each input based on its margin to the decision boundary. The second idea is automatic scheduling of the perturbation radius during training. They show that using these two ideas leads to improved certified robustness on the MNIST and CIFAR-10 datasets. | SP:d7b4a400a8376f863b898e6ab363485bb95f0cde
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training | 1 INTRODUCTION

Deep neural networks (DNNs) are notoriously vulnerable to maliciously crafted adversarial attacks. To conquer this fragility, numerous adversarial defense mechanisms have been proposed to establish robust neural networks (Schmidt et al., 2018; Sun et al., 2019; Nakkiran, 2019; Raghunathan et al., 2019; Hu et al., 2019; Chen et al., 2020c; 2021e; Jiang et al., 2020). Among them, adversarial training (AT) based methods (Madry et al., 2017; Zhang et al., 2019) have maintained state-of-the-art robustness. However, the AT training process usually comes with order-of-magnitude higher computational costs than standard training, since multiple attack iterations are needed to construct strong adversarial examples (Madry et al., 2018b). Moreover, AT was recently revealed to incur severe robust generalization gaps (Rice et al., 2020) between its training and testing accuracies, as shown in Figure 1, and to require significantly more training samples (Schmidt et al., 2018) to generalize robustly. * Equal Contribution. In response to those challenges, Schmidt et al. (2018); Lee et al. (2020); Song et al. (2019) investigate the possibility of improving generalization by leveraging advanced data augmentation techniques, which further amplifies the training cost of AT. Recent studies (Rice et al., 2020; Chen et al., 2021e) found that early stopping, or several smoothness/flatness-aware regularizations (Chen et al., 2021e; Stutz et al., 2021; Singla et al., 2021), can bring effective mitigation. In this paper, a new perspective is explored to tackle the above challenges: enforcing appropriate sparsity patterns during AT. The connection between robust generalization and sparsity is mainly inspired by two facts.
On one hand, sparsity can effectively regularize the learning of over-parameterized neural networks, hence potentially benefiting both standard and robust generalization (Balda et al., 2019). As demonstrated in Figure 1, with the increase of sparsity levels, the robust generalization gap is indeed substantially shrunk while the robust overfitting is alleviated. On the other hand, one key design philosophy that facilitates this consideration is the lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019). The LTH advocates the existence of highly sparse and separately trainable subnetworks (a.k.a. winning tickets), which can be trained from the original initialization to match or even surpass the corresponding dense networks' test accuracies. These facts point to a promising direction: utilizing proper sparsity is capable of boosting robust generalization while maintaining competitive standard and robust accuracy. Although sparsity is beneficial, current methods (Frankle & Carbin, 2019; Frankle et al., 2020; Renda et al., 2020) often empirically locate sparse critical subnetworks by iterative magnitude pruning (IMP), which demands excessive computational cost even for standard training due to the iterative train-prune-retrain process. Recently, You et al. (2020) demonstrated that these intriguing subnetworks can be identified at a very early training stage using one-shot pruning, which they term Early Bird (EB) tickets. We show that this phenomenon also exists in the adversarial training scheme. More importantly, we take one leap further to reveal that even in adversarial training, EB tickets can be drawn from a cheap standard-training stage while still achieving solid robustness. In other words, the Early Bird is also a Robust Bird that yields an attractive win-win of efficiency and robustness; we name this finding Robust Bird (RB) tickets.
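One-shot magnitude pruning, as used to draw EB/RB tickets, can be sketched in a few lines (a simplified global-magnitude criterion; the cited papers apply their own structured criteria and schedules):

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """One-shot magnitude pruning: zero out the `sparsity` fraction of
    weights with the smallest absolute value, keeping the rest active."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    # Threshold = k-th smallest magnitude; everything strictly above survives.
    threshold = np.sort(flat)[k - 1] if k > 0 else -np.inf
    return (np.abs(weights) > threshold).astype(float)

w = np.array([[0.1, -0.8], [0.05, 1.2]])
mask = magnitude_prune_mask(w, sparsity=0.5)
print(mask)  # keeps the two largest-magnitude weights (-0.8 and 1.2)
```

In a ticket-style pipeline, this mask is frozen and the surviving weights are (adversarially) trained, with the masked weights held at zero.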
Furthermore, we investigate the role of sparsity in a setting where the sparse connections of subnetworks change on the fly. Specifically, we initialize a subnetwork with random sparse connectivity and then optimize its weights and sparse topologies simultaneously, while sticking to a fixed small parameter budget. This training pipeline, called Flying Bird (FB), is motivated by the latest sparse training approaches (Evci et al., 2020b) and further reduces the robust generalization gap in AT while ensuring low training costs. Moreover, an enhanced algorithm, Flying Bird+, is proposed to dynamically adjust the network capacity (or sparsity) to pursue superior robust generalization, at a small extra cost in training efficiency. Our contributions can be summarized as follows:

• We perform a thorough investigation to reveal that introducing appropriate sparsity into AT is an appealing win-win, specifically: (1) substantially alleviating the robust generalization gap; (2) maintaining comparable or even better standard/robust accuracies; and (3) enhancing AT efficiency by training only compact subnetworks.

• We explore two alternatives for sparse adversarial training: (i) Robust Bird (RB) training, which leverages static sparsity by mining the critical sparse subnetwork at an early training stage using only the cheapest standard training; (ii) Flying Bird (FB) training, which allows for dynamic sparsity and jointly optimizes both the network weights and their sparse connectivity during AT, while sticking to the same sparsity level. We also discuss an FB variant called Flying Bird+ that adaptively adjusts the sparsity level on demand during AT.

• Extensive experiments are conducted on CIFAR-10, CIFAR-100, and Tiny-ImageNet with diverse network architectures.
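A single FB-style connectivity update, in the spirit of the prune-and-grow sparse training approaches cited above (a simplified sketch; the drop/grow criteria and the `update_frac` parameter are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def prune_and_grow(weights, mask, grads, update_frac=0.3):
    """One dynamic-sparsity update: drop the smallest-magnitude active
    weights and regrow the same number of inactive connections with the
    largest gradient magnitude, keeping the parameter budget fixed."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    n_update = max(1, int(update_frac * active.size))
    # Drop: active weights with the smallest |w|
    drop = active[np.argsort(np.abs(weights.ravel()[active]))[:n_update]]
    # Grow: inactive connections with the largest |grad|
    grow = inactive[np.argsort(-np.abs(grads.ravel()[inactive]))[:n_update]]
    new_mask = mask.ravel().copy()
    new_mask[drop] = 0
    new_mask[grow] = 1
    return new_mask.reshape(mask.shape)

w = np.array([0.9, 0.01, 0.5, 0.0, 0.0])
m = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
g = np.array([0.0, 0.0, 0.0, 2.0, 0.1])
print(prune_and_grow(w, m, g))  # drops index 1 (|w|=0.01), grows index 3 (|g|=2.0)
```

Because the drop and grow counts are equal, the number of active parameters (here 3 of 5) is invariant across updates, which is what keeps the training FLOPs budget fixed.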
Specifically, our proposals obtain 80.16% ∼ 87.83% training FLOPs and 80.16% ∼ 87.83% inference FLOPs savings, shrink the robust generalization gap from 28.00% ∼ 63.18% to 4.43% ∼ 34.44%, and boost the robust accuracy by up to 0.60% and the standard accuracy by up to 0.90%, across multiple datasets and architectures. Meanwhile, combining our sparse adversarial training frameworks with existing regularizations establishes new state-of-the-art results.

2 RELATED WORK

Adversarial training and robust generalization/overfitting. Deep neural networks are vulnerable to imperceptible adversarial perturbations. To deal with this drawback, numerous defense approaches have been proposed (Goodfellow et al., 2015; Kurakin et al., 2016; Madry et al., 2018a). Although many methods (Liao et al., 2018; Guo et al., 2018a; Xu et al., 2017; Dziugaite et al., 2016; Dhillon et al., 2018a; Xie et al., 2018; Jiang et al., 2020) were later found to result from obfuscated gradients (Athalye et al., 2018), adversarial training (AT) (Madry et al., 2018a), together with some of its variants (Zhang et al., 2019; Mosbach et al., 2018; Dong et al., 2018), remains one of the most effective yet costly approaches. A pitfall of AT, i.e., poor robust generalization, was spotted recently. Schmidt et al. (2018) showed that AT intrinsically demands a larger sample complexity to identify well-generalizable robust solutions; therefore, data augmentation (Lee et al., 2020; Song et al., 2019) is an effective remedy. Stutz et al. (2021); Singla et al. (2021) related the robust generalization gap to the curvature/flatness of loss landscapes. They introduced weight-perturbing approaches and smooth activation functions to reshape the loss geometry and boost robust generalization ability. Meanwhile, the robust overfitting (Rice et al., 2020) in AT usually happens with, or as a result of, inferior generalization.
Previous studies ( Rice et al. , 2020 ; Chen et al. , 2021e ) demonstrated that conventional regularization-based methods ( e.g. , weight decay and simple data augmentation ) cannot alleviate robust overfitting . Then , numerous advanced algorithms ( Zhang et al. , 2020 ; 2021b ; Zhou et al. , 2021 ; Bunk et al. , 2021 ; Chen et al. , 2021a ; Dong et al. , 2021 ; Zi et al. , 2021 ; Tack et al. , 2021 ; Zhang et al. , 2021a ) arose in the past half year to tackle this overfitting , using data manipulation , smoothed training , and other techniques . Those methods are orthogonal to our proposal , as evidenced in Section 4 . Another group of related literature lies in the field of sparse robust networks ( Guo et al. , 2018b ) . These works either treat model compression as a defense mechanism ( Wang et al. , 2018 ; Gao et al. , 2017 ; Dhillon et al. , 2018b ) or pursue robust and efficient sub-models that can be deployed on resource-limited platforms ( Gui et al. , 2019 ; Ye et al. , 2019 ; Sehwag et al. , 2019 ) . Compared to those inference-focused methods , our goal is fundamentally different : injecting sparsity during training to reduce the robust generalization gap while improving training efficiency . Static pruning and dynamic sparse training . Pruning ( LeCun et al. , 1990 ; Han et al. , 2015a ) serves as a powerful technique to eliminate weight redundancy in over-parameterized DNNs , aiming to obtain storage and computational savings with almost undamaged performance . It can be roughly divided into two categories based on how sparse patterns are generated : ( i ) static pruning , which removes parameters ( Han et al. , 2015a ; LeCun et al. , 1990 ; Han et al. , 2015b ) or substructures ( Liu et al. , 2017 ; Zhou et al. , 2016 ; He et al. , 2017 ) based on optimized importance scores ( Zhang et al. , 2018 ; He et al. , 2017 ) or heuristics such as weight magnitude ( Han et al. , 2015a ) , gradient ( Molchanov et al. , 2019 ) , or Hessian ( LeCun et al.
, 1990 ) statistics . The discarded elements usually do not participate in the next round of training or pruning . Static pruning can be flexibly applied prior to training , such as SNIP ( Lee et al. , 2019 ) , GraSP ( Wang et al. , 2020 ) and SynFlow ( Tanaka et al. , 2020 ) ; during training ( Zhang et al. , 2018 ; He et al. , 2017 ) ; or post training ( Han et al. , 2015a ) , for different trade-offs between training cost and pruned models ’ quality . ( ii ) dynamic sparse training , which updates model parameters and sparse connectivities at the same time , starting from a randomly sparsified subnetwork ( Molchanov et al. , 2017 ) . During training , the removed elements have a chance to be grown back if they potentially benefit predictions . Among the huge family of sparse training methods ( Mocanu et al. , 2016 ; Evci et al. , 2019 ; Mostafa & Wang , 2019 ; Liu et al. , 2021a ; Dettmers & Zettlemoyer , 2019 ; Jayakumar et al. , 2021 ; Raihan & Aamodt , 2020 ) , the recent methods of Evci et al . ( 2020a ) ; Liu et al . ( 2021b ) lead to the state-of-the-art performance . A special case of static pruning , the lottery ticket hypothesis ( LTH ) ( Frankle & Carbin , 2019 ) , demonstrates the existence of sparse subnetworks in DNNs that can be trained in isolation and reach performance comparable to that of their dense counterparts . The LTH indicates the great potential of training a sparse network from scratch without sacrificing expressiveness and has recently drawn much attention from diverse fields ( Chen et al. , 2020b ; a ; 2021g ; f ; d ; c ; b ; 2022 ; Ding et al. , 2022 ; Gan et al. , 2021 ) beyond image recognition ( Zhang et al. , 2021d ; Frankle et al. , 2020 ; Redman et al. , 2021 ) . | Recent studies demonstrate that adversarial training suffers from severe overfitting, besides being very expensive. This paper proposes to handle the two problems jointly, with the tool of sparse training.
The authors show that injecting appropriate forms of sparsity during training can substantially shrink the robust generalization gap and alleviate robust overfitting, while significantly saving training and inference FLOPs. | SP:2d4b408a083d8ccd887b847c98ff1faed9d90d30 |
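The prune-and-regrow mechanism of dynamic sparse training described in the related work, and underlying FB-style pipelines, can be sketched as a single update step. This is a hedged illustration in the spirit of RigL ( Evci et al. , 2020 ) : the magnitude-based drop criterion, gradient-based growth criterion, and function names are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def prune_and_regrow(weights, grads, mask, k):
    """One dynamic-sparse-training update: drop the k active weights with
    the smallest magnitude, then regrow k inactive connections where the
    dense gradient magnitude is largest, keeping the budget fixed."""
    w = weights * mask
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    # Drop: the k smallest-magnitude active weights.
    drop = active[np.argsort(np.abs(w[active]))[:k]]
    # Grow: the k inactive positions with the largest gradient magnitude.
    grow = inactive[np.argsort(-np.abs(grads[inactive]))[:k]]
    new_mask = mask.copy()
    new_mask[drop] = 0
    new_mask[grow] = 1
    # Newly grown (and just-dropped) weights are set to zero.
    new_w = weights.copy()
    new_w[drop] = 0.0
    new_w[grow] = 0.0
    return new_w, new_mask
```

Because exactly k connections are dropped and k are grown, the number of active parameters (and hence the FLOPs budget) is unchanged after every update.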
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training | 1 INTRODUCTION Deep neural networks ( DNNs ) are notoriously vulnerable to maliciously crafted adversarial attacks . To conquer this fragility , numerous adversarial defense mechanisms have been proposed to establish robust neural networks ( Schmidt et al. , 2018 ; Sun et al. , 2019 ; Nakkiran , 2019 ; Raghunathan et al. , 2019 ; Hu et al. , 2019 ; Chen et al. , 2020c ; 2021e ; Jiang et al. , 2020 ) . Among them , adversarial training ( AT ) based methods ( Madry et al. , 2017 ; Zhang et al. , 2019 ) have maintained the state-of-the-art robustness . However , the AT training process usually comes with order-of-magnitude higher computational costs than standard training , since multiple attack iterations are needed to construct strong adversarial examples ( Madry et al. , 2018b ) . Moreover , AT was recently revealed to incur severe robust generalization gaps ( Rice et al. , 2020 ) between its training and testing accuracies , as shown in Figure 1 ; and to require significantly more training samples ( Schmidt et al. , 2018 ) to generalize robustly . * Equal Contribution . In response to those challenges , Schmidt et al . ( 2018 ) ; Lee et al . ( 2020 ) ; Song et al . ( 2019 ) investigate the possibility of improving generalization by leveraging advanced data augmentation techniques , which further amplifies the training cost of AT . Recent studies ( Rice et al. , 2020 ; Chen et al. , 2021e ) found that early stopping , or several smoothness/flatness-aware regularizations ( Chen et al. , 2021e ; Stutz et al. , 2021 ; Singla et al. , 2021 ) , can bring effective mitigation . In this paper , we explore a new perspective to tackle the above challenges by enforcing appropriate sparsity patterns during AT . The connection between robust generalization and sparsity is mainly inspired by two facts .
On one hand , sparsity can effectively regularize the learning of over-parameterized neural networks , hence potentially benefiting both standard and robust generalization ( Balda et al. , 2019 ) . As demonstrated in Figure 1 , as sparsity levels increase , the robust generalization gap is indeed substantially shrunk and robust overfitting is alleviated . On the other hand , a key design philosophy that motivates this direction is the lottery ticket hypothesis ( LTH ) ( Frankle & Carbin , 2019 ) . The LTH advocates the existence of highly sparse and separately trainable subnetworks ( a.k.a . winning tickets ) , which can be trained from the original initialization to match or even surpass the corresponding dense networks ’ test accuracies . These facts point to a promising direction : utilizing proper sparsity can boost robust generalization while maintaining competitive standard and robust accuracy . Although sparsity is beneficial , current methods ( Frankle & Carbin , 2019 ; Frankle et al. , 2020 ; Renda et al. , 2020 ) often locate critical sparse subnetworks empirically via Iterative Magnitude Pruning ( IMP ) , which demands excessive computational cost even for standard training due to its iterative train-prune-retrain process . Recently , You et al . ( 2020 ) demonstrated that these intriguing subnetworks can be identified at the very early training stage using one-shot pruning , which they term Early Bird ( EB ) tickets . We show that this phenomenon also exists in the adversarial training scheme . More importantly , we take one leap further to reveal that even in adversarial training , EB tickets can be drawn from a cheap standard training stage , while still achieving solid robustness . In other words , the Early Bird is also a Robust Bird that yields an attractive win-win of efficiency and robustness ; we name this finding Robust Bird ( RB ) tickets .
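One common way to operationalize "identified at the very early training stage" is to track the distance between pruning masks drawn at consecutive epochs and declare a ticket once the masks stabilize. A hedged sketch of that idea follows; the Hamming distance, window size, and threshold are illustrative choices, not the exact EB/RB criterion.

```python
import numpy as np

def mask_distance(mask_a, mask_b):
    """Normalized Hamming distance between two binary pruning masks."""
    return np.mean(mask_a != mask_b)

def early_bird_found(mask_history, window=5, eps=0.1):
    """Declare an early ticket once the masks drawn over the last `window`
    epochs all lie within `eps` of the most recent one (they stabilized)."""
    if len(mask_history) < window:
        return False
    recent = mask_history[-window:]
    return all(mask_distance(recent[-1], m) < eps for m in recent[:-1])
```

Once the criterion fires, training of the dense model can stop and the stabilized mask is used to extract the sparse subnetwork for (adversarial) training.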
Furthermore , we investigate the role of sparsity in a setting where the sparse connections of subnetworks change on the fly . Specifically , we initialize a subnetwork with random sparse connectivity and then optimize its weights and sparse topologies simultaneously , while sticking to a fixed small parameter budget . This training pipeline , called Flying Bird ( FB ) , is motivated by the latest sparse training approaches ( Evci et al. , 2020b ) to further reduce the robust generalization gap in AT , while ensuring low training costs . Moreover , an enhanced algorithm , i.e. , Flying Bird+ , is proposed to dynamically adjust the network capacity ( or sparsity ) to pursue superior robust generalization , at a small extra cost in training efficiency . Our contributions can be summarized as follows : • We perform a thorough investigation to reveal that introducing appropriate sparsity into AT is an appealing win-win , specifically : ( 1 ) substantially alleviating the robust generalization gap ; ( 2 ) maintaining comparable or even better standard/robust accuracies ; and ( 3 ) enhancing AT efficiency by training only compact subnetworks . • We explore two alternatives for sparse adversarial training : ( i ) Robust Bird ( RB ) training , which leverages static sparsity by mining the critical sparse subnetwork at the early training stage using only the cheapest standard training ; ( ii ) Flying Bird ( FB ) training , which allows for dynamic sparsity and jointly optimizes both network weights and their sparse connectivity during AT , while sticking to the same sparsity level . We also discuss an FB variant called Flying Bird+ that adaptively adjusts the sparsity level on demand during AT . • Extensive experiments are conducted on CIFAR-10 , CIFAR-100 , and Tiny-ImageNet with diverse network architectures .
Specifically , our proposals obtain 80.16 % ∼ 87.83 % training FLOPs and 80.16 % ∼ 87.83 % inference FLOPs savings , shrink the robust generalization gap from 28.00 % ∼ 63.18 % to 4.43 % ∼ 34.44 % , and boost the robust accuracy by up to 0.60 % and the standard accuracy by up to 0.90 % , across multiple datasets and architectures . Meanwhile , combining our sparse adversarial training frameworks with existing regularizations establishes new state-of-the-art results . 2 RELATED WORK . Adversarial training and robust generalization/overfitting . Deep neural networks are vulnerable to imperceptible adversarial perturbations . To deal with this drawback , numerous defense approaches have been proposed ( Goodfellow et al. , 2015 ; Kurakin et al. , 2016 ; Madry et al. , 2018a ) . Although many methods ( Liao et al. , 2018 ; Guo et al. , 2018a ; Xu et al. , 2017 ; Dziugaite et al. , 2016 ; Dhillon et al. , 2018a ; Xie et al. , 2018 ; Jiang et al. , 2020 ) were later found to rely on obfuscated gradients ( Athalye et al. , 2018 ) , adversarial training ( AT ) ( Madry et al. , 2018a ) , together with some of its variants ( Zhang et al. , 2019 ; Mosbach et al. , 2018 ; Dong et al. , 2018 ) , remains one of the most effective yet costly approaches . A pitfall of AT , i.e. , poor robust generalization , was spotted recently . Schmidt et al . ( 2018 ) showed that AT intrinsically demands a larger sample complexity to identify well-generalizable robust solutions ; therefore , data augmentation ( Lee et al. , 2020 ; Song et al. , 2019 ) is an effective remedy . Stutz et al . ( 2021 ) ; Singla et al . ( 2021 ) related the robust generalization gap to the curvature/flatness of loss landscapes ; they introduced weight perturbation approaches and smooth activation functions to reshape the loss geometry and boost robust generalization . Meanwhile , robust overfitting ( Rice et al. , 2020 ) in AT usually happens with , or as a result of , inferior generalization .
Previous studies ( Rice et al. , 2020 ; Chen et al. , 2021e ) demonstrated that conventional regularization-based methods ( e.g. , weight decay and simple data augmentation ) cannot alleviate robust overfitting . Then , numerous advanced algorithms ( Zhang et al. , 2020 ; 2021b ; Zhou et al. , 2021 ; Bunk et al. , 2021 ; Chen et al. , 2021a ; Dong et al. , 2021 ; Zi et al. , 2021 ; Tack et al. , 2021 ; Zhang et al. , 2021a ) arose in the past half year to tackle this overfitting , using data manipulation , smoothed training , and other techniques . Those methods are orthogonal to our proposal , as evidenced in Section 4 . Another group of related literature lies in the field of sparse robust networks ( Guo et al. , 2018b ) . These works either treat model compression as a defense mechanism ( Wang et al. , 2018 ; Gao et al. , 2017 ; Dhillon et al. , 2018b ) or pursue robust and efficient sub-models that can be deployed on resource-limited platforms ( Gui et al. , 2019 ; Ye et al. , 2019 ; Sehwag et al. , 2019 ) . Compared to those inference-focused methods , our goal is fundamentally different : injecting sparsity during training to reduce the robust generalization gap while improving training efficiency . Static pruning and dynamic sparse training . Pruning ( LeCun et al. , 1990 ; Han et al. , 2015a ) serves as a powerful technique to eliminate weight redundancy in over-parameterized DNNs , aiming to obtain storage and computational savings with almost undamaged performance . It can be roughly divided into two categories based on how sparse patterns are generated : ( i ) static pruning , which removes parameters ( Han et al. , 2015a ; LeCun et al. , 1990 ; Han et al. , 2015b ) or substructures ( Liu et al. , 2017 ; Zhou et al. , 2016 ; He et al. , 2017 ) based on optimized importance scores ( Zhang et al. , 2018 ; He et al. , 2017 ) or heuristics such as weight magnitude ( Han et al. , 2015a ) , gradient ( Molchanov et al. , 2019 ) , or Hessian ( LeCun et al.
, 1990 ) statistics . The discarded elements usually do not participate in the next round of training or pruning . Static pruning can be flexibly applied prior to training , such as SNIP ( Lee et al. , 2019 ) , GraSP ( Wang et al. , 2020 ) and SynFlow ( Tanaka et al. , 2020 ) ; during training ( Zhang et al. , 2018 ; He et al. , 2017 ) ; or post training ( Han et al. , 2015a ) , for different trade-offs between training cost and pruned models ’ quality . ( ii ) dynamic sparse training , which updates model parameters and sparse connectivities at the same time , starting from a randomly sparsified subnetwork ( Molchanov et al. , 2017 ) . During training , the removed elements have a chance to be grown back if they potentially benefit predictions . Among the huge family of sparse training methods ( Mocanu et al. , 2016 ; Evci et al. , 2019 ; Mostafa & Wang , 2019 ; Liu et al. , 2021a ; Dettmers & Zettlemoyer , 2019 ; Jayakumar et al. , 2021 ; Raihan & Aamodt , 2020 ) , the recent methods of Evci et al . ( 2020a ) ; Liu et al . ( 2021b ) lead to the state-of-the-art performance . A special case of static pruning , the lottery ticket hypothesis ( LTH ) ( Frankle & Carbin , 2019 ) , demonstrates the existence of sparse subnetworks in DNNs that can be trained in isolation and reach performance comparable to that of their dense counterparts . The LTH indicates the great potential of training a sparse network from scratch without sacrificing expressiveness and has recently drawn much attention from diverse fields ( Chen et al. , 2020b ; a ; 2021g ; f ; d ; c ; b ; 2022 ; Ding et al. , 2022 ; Gan et al. , 2021 ) beyond image recognition ( Zhang et al. , 2021d ; Frankle et al. , 2020 ; Redman et al. , 2021 ) . | This paper deals with the problem of training a neural network so that it generalizes well on data unseen at training time. Namely, the authors address the particular case where a network is trained under an adversarial scheme.
This paper proposes two methods for learning a sparse architecture, called Robust Bird and Flying Bird. These methods aim at identifying sparse subnetworks arising during early training stages, so as to get a pruning mask that eventually yields a sparse architecture (Robust Bird). Flying Bird improves over Robust Bird in the sense that the pruning mask can be dynamically adjusted over time, i.e. pruned parameters may be recovered later on. The authors then experiment with training multiple architectures over different datasets in the experimental section, showing better generalization abilities and lower computational complexity (MACs) for their proposed Robust Bird and Flying Bird methods. The authors conclude that sparsity helps networks to generalize better, and as a byproduct it slashes computational complexity. | SP:2d4b408a083d8ccd887b847c98ff1faed9d90d30 |
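For contrast with the one-shot early-ticket approach, the train-prune-rewind loop of iterative magnitude pruning ( Frankle & Carbin , 2019 ) referenced in the related work can be sketched as follows. This is a hedged skeleton: `train` is a stand-in for a full training run, and the 20 % per-round pruning rate is an illustrative default.

```python
import numpy as np

def imp(init_weights, train, rounds=3, prune_frac=0.2):
    """Iterative Magnitude Pruning skeleton: repeatedly train, prune the
    smallest-magnitude fraction of remaining weights, and rewind the
    survivors to their original initialization."""
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        trained = train(init_weights * mask) * mask
        remaining = np.flatnonzero(mask)
        k = int(prune_frac * remaining.size)
        # Prune the k remaining weights with smallest trained magnitude.
        prune = remaining[np.argsort(np.abs(trained[remaining]))[:k]]
        mask[prune] = 0.0
        # Rewind: surviving weights restart from the original init.
    return init_weights * mask, mask
```

Each round requires a full training run, which is exactly the cost that one-shot early-ticket methods aim to avoid.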
High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize | √T ) with high probability without the knowledge of smoothness and variance . We use a particular version of Freedman ’ s concentration bound for martingale difference sequences [ Kakade & Tewari , 2008 ] , which enables us to achieve the best-known dependence of log ( 1/δ ) on the probability margin δ . We present our analysis in a modular way and obtain a complementary O ( 1/T ) convergence rate in the deterministic setting for the same class of algorithms . To the best of our knowledge , this is the first high probability result for AdaGrad with a truly adaptive scheme , i.e. , completely oblivious to the smoothness and the uniform variance bound , which simultaneously has the best-known dependence on the probability margin δ . 1 INTRODUCTION . Adaptive gradient methods are a staple of machine learning ( ML ) in solving core problems such as min_{x ∈ R^d} f ( x ) := E_{z∼D} [ f ( x ; z ) ] , ( P ) where the objective f ( x ) is possibly non-convex , and D is a probability distribution from which the random vector z is drawn . Problem ( P ) captures , for instance , empirical risk minimization or finite-sum minimization ( Shalev-Shwartz & Ben-David , 2014 ) problems , where z represents the mini-batches and D corresponds to the distribution governing the data or its sampling strategy . In the context of large-scale problems , including streaming data , computing full gradients is extremely costly , if not impossible . Hence , stochastic iterative methods are the main optimizer choice in these scenarios . The so-called adaptive methods such as AdaGrad ( Duchi et al. , 2011 ) , Adam ( Kingma & Ba , 2014 ) and AmsGrad ( Reddi et al. , 2018 ) have witnessed a surge of interest both theoretically and practically due to their off-the-shelf performance .
For instance , adaptive optimization methods are known to show superior performance in various learning tasks such as machine translation ( Zhang et al. , 2020 ; Vaswani et al. , 2017 ) . From a theoretical point of view , the existing literature provides a fairly comprehensive understanding of the expected behaviour of adaptive learning methods . Nevertheless , these results do not capture the behaviour of adaptive methods within a single run or a few runs , which depends on the probabilistic nature of these methods . While there exist high probability analyses of vanilla SGD for non-convex problems ( Ghadimi & Lan , 2013 ) , adaptive methods have received limited attention in this context . Our main goal in this paper is to understand the probabilistic convergence properties of adaptive algorithms , specifically AdaGrad , while focusing on their problem parameter adaptation capabilities in the non-convex setting . In this manuscript , adaptivity refers to the ability of an algorithm to ensure convergence without requiring knowledge of quantities such as the smoothness modulus or the variance of the noise . Studies along this direction largely exist for convex objectives ; for instance , Levy et al . ( 2018 ) shows that AdaGrad can ( implicitly ) exploit smoothness and adapt to the magnitude of noise in the gradients when f ( x ) is convex in ( P ) . This alternative perspective on adaptivity is crucial because most existing analyses , both for classical and adaptive methods , assume access to the smoothness constant , a bound on gradients ( Reddi et al. , 2018 ) , and even the noise variance ( Ghadimi & Lan , 2013 ) . In practice , it is difficult , if not impossible , to compute or even estimate such quantities .
For this purpose , in the setting of ( P ) we study a class of adaptive gradient methods that enable us to handle noisy gradient feedback without requiring knowledge of the objective ’ s smoothness modulus , a bound on gradient norms , or the noise variance . We summarize our contributions as follows : 1 . We provide a modular and simplified high probability analysis for AdaGrad-type adaptive methods . 2 . We present the first optimal high probability convergence result of the original AdaGrad algorithm for non-convex smooth problems . Concretely , ( a ) we analyze a fully adaptive step-size that is oblivious to the Lipschitz constant L and the noise level σ , ( b ) our analysis demonstrates the best-known dependence of log ( 1/δ ) on the probability margin δ . 3 . We present new adaptive versions of AdaGrad extensions that include averaging and momentum primitives , and prove similar high probability bounds for these methods as well . Concretely , we present a general adaptive template and provide complementary convergence results for different step-size regimes which individually recover standard AdaGrad , AdaGrad with averaging , and adaptive RSAG ( Ghadimi & Lan , 2016 ) . In the next section , we will provide a broad overview of related work with an emphasis on recent developments . Section 3 formalizes the problem setting and states our blanket assumptions . Section 4 introduces the building blocks of our proposed proof technique while proving convergence results for AdaGrad . We generalize the convergence results of AdaGrad to a class of adaptive algorithms for nonconvex problems in Section 5 . Finally , we present concluding remarks in the last section . 2 RELATED WORK . Adaptive methods for stochastic optimization As an extended version of the online ( projected ) GD ( Zinkevich , 2003 ) , AdaGrad ( Duchi et al.
, 2011 ) is the pioneering work behind most contemporary adaptive optimization algorithms , such as Adam , AmsGrad and RmsProp ( Tieleman & Hinton , 2012 ) . Simply put , such AdaGrad-type methods compute step-sizes on-the-fly by accumulating gradient information and achieve adaptive regret bounds as a function of gradient history ( see also ( Tran & Phong , 2019 ; Alacaoglu et al. , 2020b ; Luo et al. , 2019 ; Huang et al. , 2019 ) ) . Universality , adaptive methods and acceleration We call an algorithm universal if it achieves optimal convergence rates under different settings , without any modifications . For convex minimization problems , Levy et al . ( 2018 ) showed that AdaGrad attains a rate of O ( 1/T + σ/√T ) by implicitly adapting to smoothness and noise levels in the feedback ; here T is the number of noisy gradient queries and σ is the noise variance . They also proposed an accelerated AdaGrad variant with a scalar step-size . The latter result was extended to compactly constrained problems by Kavis et al . ( 2019 ) , who devised an accelerated Mirror-Prox algorithm . Very recently , Ene et al . ( 2021 ) have further generalized the latter results by designing and analyzing a novel adaptive and accelerated AdaGrad version with per-coordinate step-sizes . Convergence properties of these universal ( and some accelerated ) adaptive algorithms under smooth , non-convex losses are unknown to date . Adaptive methods for nonconvex optimization Following the popularity of neural networks , adaptive methods have attracted massive attention due to their favorable performance in training and their ease of tuning . The literature is quite vast and impossible to cover exhaustively here ; a representative subset includes ( Chen et al. , 2019 ; Zaheer et al. , 2018 ; Li & Orabona , 2019 ; Zou et al. , 2019 ; Defossez et al. , 2020 ; Alacaoglu et al. , 2020a ; Chen et al. , 2020 ) .
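The on-the-fly step-size computation described above reduces, in diagonal AdaGrad ( Duchi et al. , 2011 ) , to a per-coordinate accumulator of squared gradients. A minimal sketch of one step; `eta` and `eps` are illustrative defaults rather than values from any particular analysis:

```python
import numpy as np

def adagrad_step(x, g, acc, eta=0.1, eps=1e-8):
    """One diagonal-AdaGrad step: each coordinate gets its own step-size
    built from that coordinate's accumulated squared gradients."""
    acc = acc + g * g
    x = x - eta * g / (np.sqrt(acc) + eps)
    return x, acc
```

A coordinate that has historically seen large gradients accumulates a large `acc` entry and therefore receives a smaller effective step-size than a rarely-updated coordinate, which is the adaptivity the regret bounds exploit.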
The majority of the theoretical results on adaptive methods for nonconvex problems focus on their in-expectation performance . High probability results Ghadimi & Lan ( 2013 ) are the first to analyze SGD in the non-convex regime and provide tight convergence bounds . Nevertheless , their method requires prior knowledge of the smoothness modulus and noise variance . In the context of adaptive methods , Li & Orabona ( 2020 ) consider delayed AdaGrad ( with a lag-one-behind step-size ) for smooth , non-convex losses under subgaussian noise and prove an O ( σ √ log ( T/δ ) / √ T ) rate . Under similar conditions , Zhou et al . ( 2018 ) prove convergence of order O ( σ^2 log ( 1/δ ) /T + 1/ √ T ) for AdaGrad . However , both works require the knowledge of smoothness to set the step-size . Moreover , Ward et al . ( 2019 ) guarantee that AdaGrad with a scalar step-size converges at an O ( ( 1/δ ) log ( T ) / √ T ) rate with high probability . Although their framework is oblivious to the smoothness constant , their dependence on the probability margin is far from optimal . More recently , under heavy-tailed noise having a bounded pth moment for p ∈ ( 1 , 2 ) , Cutkosky & Mehta ( 2021 ) prove a rate of O ( log ( T/δ ) / T^{ ( p−1 ) / ( 3p−2 ) } ) for clipped normalized SGD with momentum ; nevertheless , their method requires the knowledge of ( a bound on ) the behavior of the heavy tails . 3 SETUP AND PRELIMINARIES . As stated in the introduction , we consider the unconstrained minimization setting min_{x ∈ R^d} f ( x ) := E_{z∼D} [ f ( x ; z ) ] , where the differentiable function f : R^d → R is smooth and ( possibly ) nonconvex . We are interested in finding a first-order ε-stationary point satisfying ‖∇f ( x_t ) ‖^2 ≤ ε , where ‖·‖ denotes the Euclidean norm for the sake of simplicity . As the standard measure of convergence , we will quantify the performance of algorithms with respect to the average gradient norm , ( 1/T ) ∑_{t=1}^{T} ‖∇f ( x_t ) ‖^2 .
It immediately implies convergence in the minimum gradient norm , min_{t ∈ [ T ] } ‖∇f ( x_t ) ‖^2 . Moreover , note that if we are able to bound ( 1/T ) ∑_{t=1}^{T} ‖∇f ( x_t ) ‖^2 , then by outputting a solution x̄_T chosen uniformly at random from the set of query points { x_1 , . . . , x_T } , we ensure that E‖∇f ( x̄_T ) ‖^2 = ( 1/T ) ∑_{t=1}^{T} ‖∇f ( x_t ) ‖^2 is bounded . A function is called G-Lipschitz continuous if it satisfies |f ( x ) − f ( y ) | ≤ G‖x − y‖ , ∀x , y ∈ dom ( f ) , ( 1 ) which immediately implies that ‖∇f ( x ) ‖ ≤ G , ∀x ∈ dom ( f ) . ( 2 ) A differentiable function is called L-smooth if it has an L-Lipschitz gradient , ‖∇f ( x ) − ∇f ( y ) ‖ ≤ L‖x − y‖ , ∀x , y ∈ dom ( ∇f ) . ( 3 ) An equivalent characterization of smoothness is sometimes referred to as the “ descent lemma ” ( Ward et al. , 2019 ; Beck , 2017 ) , i.e. , |f ( x ) − f ( y ) − 〈∇f ( y ) , x − y〉| ≤ ( L/2 ) ‖x − y‖^2 . ( 4 ) Assumptions on the oracle model : We denote stochastic gradients by ∇̃f ( x ) = ∇f ( x ; z ) , for some random vector z drawn from distribution D. Since our template embraces single-call algorithms , we use this shorthand notation for simplicity . An oracle is called ( conditionally ) unbiased if E [ ∇̃f ( x ) | x ] = ∇f ( x ) , ∀x ∈ dom ( ∇f ) . ( 5 ) Gradient estimates generated by a first-order oracle are said to have bounded variance if they satisfy E [ ‖∇̃f ( x ) − ∇f ( x ) ‖^2 | x ] ≤ σ^2 , ∀x ∈ dom ( ∇f ) . ( 6 ) Finally , we assume that the stochastic gradients are bounded almost surely , i.e. , ‖∇̃f ( x ) ‖ ≤ G̃ , ∀x ∈ dom ( ∇f ) . ( 7 ) Remark 1 . The bounded variance assumption ( 6 ) is standard in the analysis of stochastic methods ( Lan , 2020 ) . Similarly , for the analysis of adaptive methods in the nonconvex realm , it is very common to assume bounded stochastic gradients ( see ( Zaheer et al. , 2018 ; Zhou et al. , 2018 ; Chen et al. , 2019 ; Li & Orabona , 2020 ) and references therein ) .
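The descent lemma ( 4 ) can be sanity-checked numerically: for a quadratic f ( x ) = ½ xᵀAx the left-hand side equals ½ ( x − y )ᵀ A ( x − y ) , which is at most ( λ_max / 2 ) ‖x − y‖^2 , so the inequality holds with L equal to the largest eigenvalue of A. A small sketch (the test function is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.0], [0.0, 1.0]])   # f(x) = x^T A x / 2, so L = 2
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
L = 2.0

# Check |f(x) - f(y) - <grad(y), x - y>| <= (L/2) ||x - y||^2 at random points.
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    gap = abs(f(x) - f(y) - grad(y) @ (x - y))
    assert gap <= L / 2 * np.dot(x - y, x - y) + 1e-12
```

Equality is approached when x − y aligns with the top eigenvector of A, which is why L cannot be taken smaller than λ_max.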
| This paper proposed a new analysis for AdaGrad method in smooth and non-convex optimization, to get high probability convergence toward stationary points. Based on some assumptions (Eqs. (2), (6) and (7)), i.e., Lipschitz, bounded variance of gradient estimates, and bounded stochastic gradient, the authors analyzed Algorithm 1 (AdaGrad), which does not use any information of the quantities in the assumptions. They show that (in Theorem 4.2), with high probability, the averaged cumulative gradient norm square is converging to $0$ at order of $\tilde{O}(\log{(1/\delta)}/\sqrt{T})$. The main difficulty is to derive high probability bounds for the two quantities in Eq. (10), i.e., Propositions 4.1 and 4.2. The authors then generalized their analysis to the generic AGD template (Algorithm 2), which recovers AdaGrad, AdaGrad with averaging, adaptive RSAG, and AcceleGrad as special cases by choosing different $\alpha_t$, $\eta_t$ and $\gamma_t$ parameters. Similar high probability convergence results can be obtained for AdaGrad with averaging, and adaptive RSAG, but not for AcceleGrad. | SP:e384abbadce76420670e40b9597cc5511369d422 |
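AdaGrad with a scalar step-size, of the kind analyzed in the paper, divides a base rate by the square root of the accumulated squared gradient norms, so neither the smoothness constant nor the noise level enters the step-size. A hedged toy sketch on a deterministic quadratic follows; `eta` and `b0` are illustrative defaults, not the paper's exact parameterization.

```python
import numpy as np

def adagrad_norm(grad, x0, steps=500, eta=1.0, b0=1e-8):
    """AdaGrad with a scalar step-size: accumulate squared gradient norms
    and divide a base rate by their square root. No smoothness or noise
    knowledge is needed to set the step-size."""
    x, acc = np.asarray(x0, dtype=float), b0
    for _ in range(steps):
        g = grad(x)
        acc += np.dot(g, g)
        x = x - eta / np.sqrt(acc) * g
    return x

# Toy smooth problem f(x) = ||x||^2 / 2, whose gradient is x itself.
x_star = adagrad_norm(lambda x: x, np.array([3.0, -4.0]))
```

On this deterministic problem the accumulator stabilizes and the iterates contract toward the stationary point at the origin, illustrating the adaptivity; the high-probability guarantees in the paper of course concern the noisy-gradient regime.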
High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize | √T ) with high probability without the knowledge of smoothness and variance . We use a particular version of Freedman ’ s concentration bound for martingale difference sequences [ Kakade & Tewari , 2008 ] , which enables us to achieve the best-known dependence of log ( 1/δ ) on the probability margin δ . We present our analysis in a modular way and obtain a complementary O ( 1/T ) convergence rate in the deterministic setting for the same class of algorithms . To the best of our knowledge , this is the first high probability result for AdaGrad with a truly adaptive scheme , i.e. , completely oblivious to the smoothness and the uniform variance bound , which simultaneously has the best-known dependence on the probability margin δ . 1 INTRODUCTION . Adaptive gradient methods are a staple of machine learning ( ML ) in solving core problems such as min_{x ∈ R^d} f ( x ) := E_{z∼D} [ f ( x ; z ) ] , ( P ) where the objective f ( x ) is possibly non-convex , and D is a probability distribution from which the random vector z is drawn . Problem ( P ) captures , for instance , empirical risk minimization or finite-sum minimization ( Shalev-Shwartz & Ben-David , 2014 ) problems , where z represents the mini-batches and D corresponds to the distribution governing the data or its sampling strategy . In the context of large-scale problems , including streaming data , computing full gradients is extremely costly , if not impossible . Hence , stochastic iterative methods are the main optimizer choice in these scenarios . The so-called adaptive methods such as AdaGrad ( Duchi et al. , 2011 ) , Adam ( Kingma & Ba , 2014 ) and AmsGrad ( Reddi et al. , 2018 ) have witnessed a surge of interest both theoretically and practically due to their off-the-shelf performance .
For instance , adaptive optimization methods are known to show superior performance in various learning tasks such as machine translation ( Zhang et al. , 2020 ; Vaswani et al. , 2017 ) . From a theoretical point of view , the existing literature provides a fairly comprehensive understanding of the expected behaviour of adaptive learning methods . Nevertheless , these results do not capture the behaviour of adaptive methods within a single run or a few runs , which depends on the probabilistic nature of these methods . While there exist high probability analyses of vanilla SGD for non-convex problems ( Ghadimi & Lan , 2013 ) , adaptive methods have received limited attention in this context . Our main goal in this paper is to understand the probabilistic convergence properties of adaptive algorithms , specifically AdaGrad , while focusing on their problem parameter adaptation capabilities in the non-convex setting . In this manuscript , adaptivity refers to the ability of an algorithm to ensure convergence without requiring knowledge of quantities such as the smoothness modulus or the variance of the noise . Studies along this direction largely exist for convex objectives ; for instance , Levy et al . ( 2018 ) shows that AdaGrad can ( implicitly ) exploit smoothness and adapt to the magnitude of noise in the gradients when f ( x ) is convex in ( P ) . This alternative perspective on adaptivity is crucial because most existing analyses , both for classical and adaptive methods , assume access to the smoothness constant , a bound on gradients ( Reddi et al. , 2018 ) , and even the noise variance ( Ghadimi & Lan , 2013 ) . In practice , it is difficult , if not impossible , to compute or even estimate such quantities .
For this purpose, in the setting of (P) we study a class of adaptive gradient methods that enable us to handle noisy gradient feedback without requiring knowledge of the objective's smoothness modulus, a bound on gradient norms, or the noise variance. We summarize our contributions as follows: 1. We provide a modular and simplified high probability analysis for AdaGrad-type adaptive methods. 2. We present the first optimal high probability convergence result for the original AdaGrad algorithm on non-convex smooth problems. Concretely, (a) we analyze a fully adaptive step-size that is oblivious to the Lipschitz constant L and the noise level σ; (b) our analysis demonstrates the best known dependence of log(1/δ) on the probability margin δ. 3. We present new adaptive versions of AdaGrad extensions that include averaging and momentum primitives, and prove similar high probability bounds for these methods as well. Concretely, we present a general adaptive template and provide complementary convergence results for different step-size regimes which individually recover standard AdaGrad, AdaGrad with averaging, and adaptive RSAG (Ghadimi & Lan, 2016). In the next section, we provide a broad overview of related work with an emphasis on recent developments. Section 3 formalizes the problem setting and states our blanket assumptions. Section 4 introduces the building blocks of our proposed proof technique while proving convergence results for AdaGrad. We generalize the convergence results of AdaGrad to a class of nonconvex, adaptive algorithms in Section 5. Finally, we present concluding remarks in the last section. 2 RELATED WORK. Adaptive methods for stochastic optimization As an extended version of online (projected) GD (Zinkevich, 2003), AdaGrad (Duchi et al.
, 2011) is the pioneering work behind most contemporary adaptive optimization algorithms, Adam, AmsGrad and RmsProp (Tieleman & Hinton, 2012) to name a few. Simply put, such AdaGrad-type methods compute step-sizes on-the-fly by accumulating gradient information and achieve adaptive regret bounds as a function of the gradient history (see also (Tran & Phong, 2019; Alacaoglu et al., 2020b; Luo et al., 2019; Huang et al., 2019)). Universality, adaptive methods and acceleration We call an algorithm universal if it achieves optimal convergence rates under different settings, without any modifications. For convex minimization problems, Levy et al. (2018) showed that AdaGrad attains a rate of O(1/T + σ/√T) by implicitly adapting to smoothness and noise levels in the feedback; here T is the number of noisy gradient queries and σ is the noise variance. They also proposed an accelerated AdaGrad variant with a scalar step-size. The latter result was extended to compactly constrained problems by (Kavis et al., 2019), by devising an accelerated Mirror-Prox algorithm. Very recently, Ene et al. (2021) have further generalized the latter results by designing and analyzing a novel adaptive and accelerated AdaGrad version with per-coordinate step-sizes. Convergence properties of these universal (and some accelerated) adaptive algorithms under smooth, non-convex losses are unknown to date. Adaptive methods for nonconvex optimization Following the popularity of neural networks, adaptive methods have attracted massive attention due to their favorable performance in training and their ease of tuning. The literature is quite vast and impossible to cover exhaustively here; a representative subset includes (Chen et al., 2019; Zaheer et al., 2018; Li & Orabona, 2019; Zou et al., 2019; Defossez et al., 2020; Alacaoglu et al., 2020a; Chen et al., 2020).
The majority of the theoretical results on adaptive methods for nonconvex problems focus on their in-expectation performance. High probability results Ghadimi & Lan (2013) are the first to analyze SGD in the non-convex regime and provide tight convergence bounds. Nevertheless, their method requires prior knowledge of the smoothness modulus and noise variance. In the context of adaptive methods, Li & Orabona (2020) consider delayed AdaGrad (with a lag-one-behind step-size) for smooth, non-convex losses under subgaussian noise and prove an O(σ√(log(T/δ))/√T) rate. Under similar conditions, Zhou et al. (2018) prove convergence of order O((σ² log(1/δ))/T + 1/√T) for AdaGrad. However, both works require knowledge of the smoothness to set the step-size. Moreover, Ward et al. (2019) guarantee that AdaGrad with a scalar step-size converges at an O((1/δ) log(T)/√T) rate with high probability. Although their framework is oblivious to the smoothness constant, their dependence on the probability margin is far from optimal. More recently, under heavy-tailed noise having bounded pth moment for p ∈ (1, 2), Cutkosky & Mehta (2021) prove a rate of O(log(T/δ)/T^((p−1)/(3p−2))) for clipped normalized SGD with momentum; nevertheless, their method requires knowledge of (a bound on) the behavior of the heavy tails. 3 SETUP AND PRELIMINARIES. As we stated in the introduction, we consider the unconstrained minimization setting min_{x∈R^d} f(x) := E_{z∼D}[f(x; z)], where the differentiable function f : R^d → R is smooth and (possibly) nonconvex. We are interested in finding a first-order ε-stationary point satisfying ‖∇f(x_t)‖² ≤ ε, where ‖·‖ denotes the Euclidean norm for the sake of simplicity. As the standard measure of convergence, we quantify the performance of algorithms with respect to the average gradient norm, (1/T) ∑_{t=1}^T ‖∇f(x_t)‖².
This immediately implies convergence in the minimum gradient norm, min_{t∈[T]} ‖∇f(x_t)‖². Moreover, note that if we are able to bound (1/T) ∑_{t=1}^T ‖∇f(x_t)‖², then by outputting a solution x̄_T chosen uniformly at random from the set of query points {x_1, ..., x_T}, we ensure that E‖∇f(x̄_T)‖² = (1/T) ∑_{t=1}^T ‖∇f(x_t)‖² is bounded. A function is called G-Lipschitz continuous if it satisfies |f(x) − f(y)| ≤ G‖x − y‖, ∀x, y ∈ dom(f), (1) which immediately implies that ‖∇f(x)‖ ≤ G, ∀x ∈ dom(f). (2) A differentiable function is called L-smooth if it has an L-Lipschitz gradient: ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖, ∀x, y ∈ dom(∇f). (3) An equivalent characterization of smoothness is sometimes referred to as the "descent lemma" (Ward et al., 2019; Beck, 2017), i.e., |f(x) − f(y) − 〈∇f(y), x − y〉| ≤ (L/2)‖x − y‖². (4) Assumptions on oracle model: We denote stochastic gradients by ∇̃f(x) = ∇f(x; z), for some random vector z drawn from the distribution D. Since our template embraces single-call algorithms, we use this shorthand notation for simplicity. An oracle is called (conditionally) unbiased if E[∇̃f(x)|x] = ∇f(x), ∀x ∈ dom(∇f). (5) Gradient estimates generated by a first-order oracle are said to have bounded variance if they satisfy E[‖∇̃f(x) − ∇f(x)‖²|x] ≤ σ², ∀x ∈ dom(∇f). (6) Finally, we assume that the stochastic gradients are bounded almost surely, i.e., ‖∇̃f(x)‖ ≤ G̃, ∀x ∈ dom(∇f). (7) Remark 1. The bounded variance assumption (6) is standard in the analysis of stochastic methods (Lan, 2020). Similarly, for the analysis of adaptive methods in the nonconvex realm, it is very common to assume bounded stochastic gradients (see (Zaheer et al., 2018; Zhou et al., 2018; Chen et al., 2019; Li & Orabona, 2020) and references therein).
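As a quick numeric sanity check on the smoothness definitions above, the following sketch verifies the descent lemma for a quadratic, whose smoothness modulus L is the largest eigenvalue of its Hessian (the specific random matrix and tolerance are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T                             # symmetric PSD Hessian of f
L = float(np.linalg.eigvalsh(A).max())  # smoothness modulus of f

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

# Descent lemma: |f(x) - f(y) - <grad f(y), x - y>| <= (L/2) ||x - y||^2.
max_gap = 0.0
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = abs(f(x) - f(y) - grad(y) @ (x - y))
    rhs = 0.5 * L * np.dot(x - y, x - y)
    max_gap = max(max_gap, lhs - rhs)
# For this quadratic, lhs equals 0.5*(x-y)^T A (x-y) <= rhs, so the gap
# should never be positive (up to floating-point rounding).
```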
This paper provides an analysis of an adaptive learning rate scheme in stochastic non-convex settings. Under assumptions of smoothness and bounded stochastic gradients (a light-tailed condition), it is shown that the sum of the gradient norms of the iterates of the non-convex algorithm is small. In order to accomplish this, the analysis must deal with the standard difficulty for adaptive learning rates, which is that the learning rate depends on the current iterate and so makes it harder to apply standard martingale inequalities. The final results include a dependence on the smoothness constant $L$ and variance $\sigma$, which are not explicitly used in the algorithm. | SP:e384abbadce76420670e40b9597cc5511369d422
FrugalMCT: Efficient Online ML API Selection for Multi-Label Classification Tasks | 1 INTRODUCTION. Many machine learning users are starting to adopt machine learning as a service (MLaaS) APIs to obtain high-quality predictions. One of the most common tasks these APIs target is multi-label classification. For example, one can use Google's computer vision API (Goo) to tag an image with a wide range of possible labels for $0.0015, or Microsoft's API (Mic) for $0.0010. Another example is to extract all text strings from an image for $0.005 via iFLYTEK's API (Ifl) or $0.021 via Tencent's API (Ten). In practice, these APIs also provide different performance on different types of input data (e.g., English vs Chinese text). The heterogeneity in APIs' performance and prices makes it hard for users to decide which API, or combination of APIs, to use for their own datasets and budgets. Recent work (Chen et al., 2020) proposed FrugalML, an algorithmic framework that adaptively decides which APIs to call for a data point to optimize accuracy and cost. Their approach learns a fast decision rule for each possible output label that can significantly improve cost-performance over the individual APIs. However, FrugalML requires a large amount of training data and involves solving a non-convex optimization problem with complexity exponential in the number of distinct labels. This prevents it from being used for tasks with a large number of labels, such as multi-label classification. Furthermore, FrugalML ignores correlation between different APIs' predictions, potentially limiting its accuracy. For example, APIs A and B may output {person, car} and {car, bike} separately for an image whose true keywords are {person, car, bike}. FrugalML would select one of the two label sets, but combining them results in the true label set and thus higher accuracy.
Thus , this paper aims to solve these significant limitations and address the question : how do we design efficient ML API selection strategies for multi-label classification tasks to maximize accuracy within a budget ? We propose FrugalMCT , a principled framework that learns the strengths and weaknesses of different combinations of multi-label classification APIs , and efficiently selects the optimal combinations of APIs to call for different data items and budget constraints . As shown in Fig . 1 ( a ) , FrugalMCT directly estimates the accuracy of each API combination on a particular input based on the features and predicted labels of that input . Then it uses a fast service selector based on the estimated accuracy to balance accuracy and budget . For example , we might first call API A on an input . If A returns person and teddy bear and the accuracy predictor gives relatively high estimated accuracy ( Fig . 1 ( c ) ) , then we stop and report { person , teddy bear } as the label set . If A returns person and tennis racket , and we predict that combining it with API B ’ s output gives a much higher accuracy , then we invoke API B and combine their prediction to obtain { person , sports ball , tennis racket } ( Fig . 1 ( d ) ) . Contributions . FrugalMCT is an end-to-end approach that integrates the selection of APIs and the combination of their outputs for individual user queries . It leverages our key finding that current commercial APIs have complementary strengths and weaknesses , and that we can reliably predict which APIs are likely to work well for a new query based on easy-to-generate metadata about its input . FrugalMCT then executes an efficient online algorithm to determine which combination of APIs to call for different user queries . We show that the online algorithm enjoys an accuracy provably close to the offline method as well as a small computational cost . 
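The adaptive call flow just described (call a base API, estimate accuracies, optionally invoke an add-on API, combine the outputs) can be sketched as follows. The function and the toy `accuracy_predictor`, `combine`, and `selector` callables are our own illustrative placeholders, not FrugalMCT's actual trained components:

```python
def frugal_select(x, apis, base, accuracy_predictor, combine, selector):
    """One query through a FrugalMCT-style pipeline:
    base call -> per-API accuracy estimates -> optional add-on call -> combined labels."""
    y_base = apis[base](x)                      # always call the (cheap) base service
    a_hat = [accuracy_predictor(x, y_base, k) for k in range(len(apis))]
    k = selector(a_hat)                         # pick an add-on index (may equal base)
    if k == base:
        return y_base                           # estimated gain too small: stop early
    return combine(y_base, apis[k](x))          # merge base and add-on predictions

# Toy instantiation: two "APIs" returning {label: confidence} dicts.
apis = [lambda x: {"person": 0.8}, lambda x: {"person": 0.7, "car": 0.6}]
pred = frugal_select(
    "img0", apis, base=0,
    accuracy_predictor=lambda x, y, k: 0.5 + 0.2 * k,   # pretend API 1 helps here
    combine=lambda p, q: {**p, **q},
    selector=lambda a: max(range(len(a)), key=a.__getitem__),
)
```

In this toy run the selector picks API 1, so the final prediction merges both label sets rather than trusting the base call alone.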
All components in FrugalMCT are trainable , making it easy to customize for different applications . To our knowledge , FrugalMCT is the first work on how to effectively select and combine multi-label ML APIs . Empirically , FrugalMCT produces substantially better prediction performance than individual APIs and than FrugalML adapted for multi-label tasks ( Fig . 1 ( b ) ) . Extensive experiments with real commercial APIs on several tasks , including multi-label image classifications , scene text recognition , and named entity recognition , show that FrugalMCT typically provides over 60 % ( as high as 98 % ) cost reduction when aiming to match the best commercial API ’ s performance . Also , when targeting the same cost as the best commercial API , FrugalMCT can improve performance up to 8 % . We will release our dataset of 295,212 samples annotated by commercial multi-label APIs as the largest dataset and resource for studying multi-label ML prediction APIs . Related work . MLaaS : With the growing importance and adoption of MLaaS APIs ( Ama ; Ten ; Goo ; IBM ; Mic ) , existing research has largely focused on evaluating individual API for their performance ( Yao et al. , 2017 ) , robustness ( Hosseini et al. , 2017 ) , biases ( Koenecke et al. , 2020 ) and applications ( Buolamwini & Gebru , 2018 ; Hosseini et al. , 2019 ; Reis et al. , 2018 ) . Recent work on FrugalML ( Chen et al. , 2020 ) studies API calling strategies for single label classification . While their approach ’ s computational complexity is exponential in the number of labels , FrugalMCT ’ s complexity does not depend on the number of labels , making it suitable for multi-label prediction APIs . In addition , FrugalML selects only one API per user query , while FrugalMCT considers the combination of multiple APIs ’ output for each input data . This improves the overall accuracy ( as shown in Sec 4 ) , but also creates unique optimization challenges that we solve . 
Ensembles for multi-label classification: Ensemble learning is a natural approach to combining different predictors' outputs. Several ensemble methods have been developed, such as using pruned sets (Read et al., 2008), classifier chains (Read et al., 2011), and random subsets (Tsoumakas & Vlahavas, 2007), with applications in image annotation (Xu et al., 2011), document classification (Chen et al., 2017), and speech categorization (Liu et al., 2019). Moyano et al. (2018) provide a detailed survey of this area. Almost all of these ensemble methods require joint training of the base classifiers, but MLaaS APIs are black boxes to the users. Also, while ensemble methods focus only on improving accuracy, FrugalMCT explicitly considers the cost of each API and enforces a budget constraint. Model cascades: A series of works (Viola & Jones, 2001a;b; Sun et al., 2013; Cai et al., 2015; Wang et al., 2011; Xu et al., 2014; Chen et al., 2018; Kumar et al., 2018; Chen et al., 2018) explores cascades (a sequence of models) to balance the quality and runtime of inference. Model cascades use a single predicted quality score to avoid calling computationally expensive models, but FrugalMCT's strategies utilize both quality scores and predicted label sets to select an expensive add-on service. AutoML for multi-label classification: AutoML (Thornton et al., 2013) automates the customization of ML pipelines, including the selection, combination, and parametrization of the learning algorithms. There is a rich literature of AutoML techniques for standard single-label tasks, and fewer methods for multi-label predictions (Wever et al., 2021) (e.g., genetic algorithms (de Sá et al., 2017) and a neural network-based search scheme (Pakrashi & Namee, 2019)).
Applying AutoML to use multiple ML APIs is underexplored, and FrugalMCT can be viewed as the first AutoML approach designed for automating the selection of multiple multi-label ML APIs. While most AutoML systems exclusively focus on prediction performance, FrugalMCT optimizes accuracy and cost jointly, which is desirable for cost-sensitive API users. Multiple choice knapsack and integer programming: Many resource allocation problems can be modeled as a multiple choice knapsack problem (MCKP) (Pamela H. Vance & Toth, 1993), such as keyword bidding (Zhou & Naroditskiy, 2008) and quality of service control (Lee et al., 1999). While NP-hard (Sinha & Zoltners, 1979), various approximations have been proposed for MCKP, such as branch and bound (Pamela H. Vance & Toth, 1993), convex hull relaxation (Akbar et al., 2006) and bi-objective transformation (Bednarczuk et al., 2018). Inherently an integer linear programming (ILP) problem, MCKP can also be tackled by ILP solvers, motivated by online adwords searching (Devanur & Hayes, 2009), resource allocation (Devanur & Hayes, 2019) and general linear programming (Li et al., 2020). The service selector of FrugalMCT can be viewed as an MCKP with the same item cost vector per item group, which we leverage to obtain a customized fast and online solver. 2 PRELIMINARIES. Notation. We denote matrices and vectors in bold, and scalars, sets, and functions in standard script. Given a matrix A ∈ R^{n×m}, we let A_{i,j} denote its entry at location (i, j). 1(·) represents the indicator function. Multi-label classification tasks. Throughout this paper, we focus on multi-label classification tasks: assigning a label set Y ⊆ Y to any data point x ∈ X. In contrast to basic supervised learning, in multi-label learning each data point is associated with a set of labels instead of a single label. Many MLaaS APIs target such tasks.
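Since each prediction is a label set rather than a single label, per-example performance is naturally a set-similarity score. The text does not fix a metric at this point, so the sketch below uses Jaccard similarity purely as an illustration, applied to the {person, car, bike} example from the introduction:

```python
def jaccard(pred, truth):
    """Set-similarity score in [0, 1]: |intersection| / |union|."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

# API A and B outputs from the introduction's example.
truth = {"person", "car", "bike"}
a, b = {"person", "car"}, {"car", "bike"}
combined = a | b   # the union of both APIs' label sets recovers the truth
```

Under this metric each individual API scores 2/3 on the example, while the combined label set scores 1.0, which is exactly the gain from exploiting API correlations.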
Consider , for example , image tagging , where X is a set of images and Y is the set of all tags . Example label sets could be { person , car } or { bag , train , sky } . MLaaS Market . Consider a MLaaS market consisting of K different ML services for some multilabel task . For a data point x , the kth service returns to the user a set of labels with their quality scores , denoted by Yk ( x ) ⊆ Y × [ 0 , 1 ] . For example , one API for multi-label image classification might produce Yk ( x ) = { ( person , 0.8 ) , ( car , 0.7 ) } , indicating the label person with confidence 0.8 and car with confidence 0.7 . Let the vector c ∈ RK denote the unit cost of all services . For example , ck = 0.01 means that users need to pay $ 0.01 every time they call the kth service . 3 FRUGALMCT FRAMEWORK . In this section , we present FrugalMCT , a framework to adaptively select ML APIs for multi-label classification tasks within a budget . All proofs are left to the appendix . We generalize the scheme in Figure 1 ( a ) to K ML services . As shown in Figure 2 , FrugalMCT contains three main components : an accuracy estimator , a service selector , and a label combiner . Given a data point x , it first calls some base service , denoted by base , which is one of the K APIs , and obtains Ybase ( x ) . Often , base is a cheap or free service ; we discuss how to choose base in Section 3.4 . Next , an accuracy predictor produces a vector â ( x ) ∈ [ 0 , 1 ] K , whose kth value estimates the accuracy of the label set produced by the label combiner using base ’ s and kth API ’ s outputs . The service selector s ( · ) : X 7→ [ K ] then decides if and which add-on service needs to be invoked . Finally , a label combiner generates a label set by combining the predictions from the base and add-on APIs . Take Figure 1 ( d ) as an example . 
The image is first passed to the GitHub model, which produces {(person, 0.46), (tennis racket, 0.18)}, from which the accuracy predictor estimates the accuracy of the label set generated by combining each API's output with the GitHub model's. The service selector then decides to further invoke Everypixel, which gives {(person, 0.46), (sports ball, 0.52)}. Finally, the label combiner uses both APIs' output for the final prediction. FrugalMCT allows users to customize the accuracy predictor and the label combiner, depending on the application. For example, for the image tagging problem, one might use image features (e.g., brightness and contrast) to build the accuracy predictor, while word embeddings can be more useful for named entity recognition. In the following sections, we explain the accuracy predictor, the API selector, and the label combiner in more detail. | Given several multi-label machine learning APIs, this paper studies how to select those APIs under a budget constraint while striving to improve the overall accuracy. The authors first formulate the budgeted API selection problem as an integer linear programming problem, then relax the integer constraint and solve the relaxed problem in the dual. The advantage of such modeling is a fast decision function for online deployment of their API selection system. The experimental results are promising and include several ablation studies. | SP:31412e46449bccbfe0b74080a1c15df64b2363d5
FrugalMCT: Efficient Online ML API Selection for Multi-Label Classification Tasks | 1 INTRODUCTION. Many machine learning users are starting to adopt machine learning as a service (MLaaS) APIs to obtain high-quality predictions. One of the most common tasks these APIs target is multi-label classification. For example, one can use Google's computer vision API (Goo) to tag an image with a wide range of possible labels for $0.0015, or Microsoft's API (Mic) for $0.0010. Another example is to extract all text strings from an image for $0.005 via iFLYTEK's API (Ifl) or $0.021 via Tencent's API (Ten). In practice, these APIs also provide different performance on different types of input data (e.g., English vs Chinese text). The heterogeneity in APIs' performance and prices makes it hard for users to decide which API, or combination of APIs, to use for their own datasets and budgets. Recent work (Chen et al., 2020) proposed FrugalML, an algorithmic framework that adaptively decides which APIs to call for a data point to optimize accuracy and cost. Their approach learns a fast decision rule for each possible output label that can significantly improve cost-performance over the individual APIs. However, FrugalML requires a large amount of training data and involves solving a non-convex optimization problem with complexity exponential in the number of distinct labels. This prevents it from being used for tasks with a large number of labels, such as multi-label classification. Furthermore, FrugalML ignores correlation between different APIs' predictions, potentially limiting its accuracy. For example, APIs A and B may output {person, car} and {car, bike} separately for an image whose true keywords are {person, car, bike}. FrugalML would select one of the two label sets, but combining them results in the true label set and thus higher accuracy.
Thus , this paper aims to solve these significant limitations and address the question : how do we design efficient ML API selection strategies for multi-label classification tasks to maximize accuracy within a budget ? We propose FrugalMCT , a principled framework that learns the strengths and weaknesses of different combinations of multi-label classification APIs , and efficiently selects the optimal combinations of APIs to call for different data items and budget constraints . As shown in Fig . 1 ( a ) , FrugalMCT directly estimates the accuracy of each API combination on a particular input based on the features and predicted labels of that input . Then it uses a fast service selector based on the estimated accuracy to balance accuracy and budget . For example , we might first call API A on an input . If A returns person and teddy bear and the accuracy predictor gives relatively high estimated accuracy ( Fig . 1 ( c ) ) , then we stop and report { person , teddy bear } as the label set . If A returns person and tennis racket , and we predict that combining it with API B ’ s output gives a much higher accuracy , then we invoke API B and combine their prediction to obtain { person , sports ball , tennis racket } ( Fig . 1 ( d ) ) . Contributions . FrugalMCT is an end-to-end approach that integrates the selection of APIs and the combination of their outputs for individual user queries . It leverages our key finding that current commercial APIs have complementary strengths and weaknesses , and that we can reliably predict which APIs are likely to work well for a new query based on easy-to-generate metadata about its input . FrugalMCT then executes an efficient online algorithm to determine which combination of APIs to call for different user queries . We show that the online algorithm enjoys an accuracy provably close to the offline method as well as a small computational cost . 
All components in FrugalMCT are trainable , making it easy to customize for different applications . To our knowledge , FrugalMCT is the first work on how to effectively select and combine multi-label ML APIs . Empirically , FrugalMCT produces substantially better prediction performance than individual APIs and than FrugalML adapted for multi-label tasks ( Fig . 1 ( b ) ) . Extensive experiments with real commercial APIs on several tasks , including multi-label image classifications , scene text recognition , and named entity recognition , show that FrugalMCT typically provides over 60 % ( as high as 98 % ) cost reduction when aiming to match the best commercial API ’ s performance . Also , when targeting the same cost as the best commercial API , FrugalMCT can improve performance up to 8 % . We will release our dataset of 295,212 samples annotated by commercial multi-label APIs as the largest dataset and resource for studying multi-label ML prediction APIs . Related work . MLaaS : With the growing importance and adoption of MLaaS APIs ( Ama ; Ten ; Goo ; IBM ; Mic ) , existing research has largely focused on evaluating individual API for their performance ( Yao et al. , 2017 ) , robustness ( Hosseini et al. , 2017 ) , biases ( Koenecke et al. , 2020 ) and applications ( Buolamwini & Gebru , 2018 ; Hosseini et al. , 2019 ; Reis et al. , 2018 ) . Recent work on FrugalML ( Chen et al. , 2020 ) studies API calling strategies for single label classification . While their approach ’ s computational complexity is exponential in the number of labels , FrugalMCT ’ s complexity does not depend on the number of labels , making it suitable for multi-label prediction APIs . In addition , FrugalML selects only one API per user query , while FrugalMCT considers the combination of multiple APIs ’ output for each input data . This improves the overall accuracy ( as shown in Sec 4 ) , but also creates unique optimization challenges that we solve . 
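To make the combination of multiple APIs' outputs concrete, here is one naive combiner: keep each label's best confidence across the two APIs and drop labels below a threshold. This is purely illustrative (FrugalMCT's combiner is trainable and may differ), and the confidence scores attached to the introduction's example are invented:

```python
def combine_labels(y_base, y_addon, threshold=0.3):
    """Merge two {label: confidence} outputs, keeping each label's max
    confidence and discarding labels below the threshold."""
    merged = dict(y_base)
    for label, conf in y_addon.items():
        merged[label] = max(conf, merged.get(label, 0.0))
    return {label: c for label, c in merged.items() if c >= threshold}

# The introduction's example, with invented confidence scores:
y_a = {"person": 0.8, "car": 0.7}
y_b = {"car": 0.9, "bike": 0.6}
final = combine_labels(y_a, y_b)   # the union recovers {person, car, bike}
```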
Ensembles for multi-label classification: Ensemble learning is a natural approach to combining different predictors' outputs. Several ensemble methods have been developed, such as using pruned sets (Read et al., 2008), classifier chains (Read et al., 2011), and random subsets (Tsoumakas & Vlahavas, 2007), with applications in image annotation (Xu et al., 2011), document classification (Chen et al., 2017), and speech categorization (Liu et al., 2019). Moyano et al. (2018) provide a detailed survey of this area. Almost all of these ensemble methods require joint training of the base classifiers, but MLaaS APIs are black boxes to the users. Also, while ensemble methods focus only on improving accuracy, FrugalMCT explicitly considers the cost of each API and enforces a budget constraint. Model cascades: A series of works (Viola & Jones, 2001a;b; Sun et al., 2013; Cai et al., 2015; Wang et al., 2011; Xu et al., 2014; Chen et al., 2018; Kumar et al., 2018; Chen et al., 2018) explores cascades (a sequence of models) to balance the quality and runtime of inference. Model cascades use a single predicted quality score to avoid calling computationally expensive models, but FrugalMCT's strategies utilize both quality scores and predicted label sets to select an expensive add-on service. AutoML for multi-label classification: AutoML (Thornton et al., 2013) automates the customization of ML pipelines, including the selection, combination, and parametrization of the learning algorithms. There is a rich literature of AutoML techniques for standard single-label tasks, and fewer methods for multi-label predictions (Wever et al., 2021) (e.g., genetic algorithms (de Sá et al., 2017) and a neural network-based search scheme (Pakrashi & Namee, 2019)).
Applying AutoML to use multiple ML APIs is underexplored, and FrugalMCT can be viewed as the first AutoML approach designed for automating the selection of multiple multi-label ML APIs. While most AutoML systems exclusively focus on prediction performance, FrugalMCT optimizes accuracy and cost jointly, which is desirable for cost-sensitive API users. Multiple choice knapsack and integer programming: Many resource allocation problems can be modeled as a multiple choice knapsack problem (MCKP) (Pamela H. Vance & Toth, 1993), such as keyword bidding (Zhou & Naroditskiy, 2008) and quality of service control (Lee et al., 1999). While NP-hard (Sinha & Zoltners, 1979), various approximations have been proposed for MCKP, such as branch and bound (Pamela H. Vance & Toth, 1993), convex hull relaxation (Akbar et al., 2006) and bi-objective transformation (Bednarczuk et al., 2018). Inherently an integer linear programming (ILP) problem, MCKP can also be tackled by ILP solvers, motivated by online adwords searching (Devanur & Hayes, 2009), resource allocation (Devanur & Hayes, 2019) and general linear programming (Li et al., 2020). The service selector of FrugalMCT can be viewed as an MCKP with the same item cost vector per item group, which we leverage to obtain a customized fast and online solver. 2 PRELIMINARIES. Notation. We denote matrices and vectors in bold, and scalars, sets, and functions in standard script. Given a matrix A ∈ R^{n×m}, we let A_{i,j} denote its entry at location (i, j). 1(·) represents the indicator function. Multi-label classification tasks. Throughout this paper, we focus on multi-label classification tasks: assigning a label set Y ⊆ Y to any data point x ∈ X. In contrast to basic supervised learning, in multi-label learning each data point is associated with a set of labels instead of a single label. Many MLaaS APIs target such tasks.
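The MCKP view of the service selector mentioned above (each query is an item group, each API combination an item with an estimated accuracy as value and a shared price vector as cost) admits a simple relaxed solver: bisect a single dual price λ and let every query independently pick the option maximizing accuracy minus λ times cost. This sketch is our own simplification of that idea, not FrugalMCT's actual online solver:

```python
def select_with_budget(acc, cost, budget, iters=50):
    """acc[i][k]: estimated accuracy of option k for query i; cost[k]: its price.
    Bisects the dual price lam so the per-query argmax choices respect the budget."""
    def choices(lam):
        return [max(range(len(cost)), key=lambda k: a[k] - lam * cost[k]) for a in acc]

    lo, hi = 0.0, max(max(a) for a in acc) / min(c for c in cost if c > 0)
    for _ in range(iters):
        lam = (lo + hi) / 2
        spent = sum(cost[k] for k in choices(lam))
        # Spending is non-increasing in lam: keep the feasible side in hi.
        lo, hi = (lo, lam) if spent <= budget else (lam, hi)
    return choices(hi)

# Option 1 is pricey but helps query 0 far more than query 1.
acc = [[0.6, 0.9], [0.5, 0.55]]
cost = [0.0, 1.0]
picks = select_with_budget(acc, cost, budget=1.0)
```

With a budget of 1.0 the solver spends the expensive call on query 0, where the estimated accuracy gain is largest, and keeps the free option for query 1.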
Consider , for example , image tagging , where X is a set of images and Y is the set of all tags . Example label sets could be { person , car } or { bag , train , sky } . MLaaS Market . Consider a MLaaS market consisting of K different ML services for some multilabel task . For a data point x , the kth service returns to the user a set of labels with their quality scores , denoted by Yk ( x ) ⊆ Y × [ 0 , 1 ] . For example , one API for multi-label image classification might produce Yk ( x ) = { ( person , 0.8 ) , ( car , 0.7 ) } , indicating the label person with confidence 0.8 and car with confidence 0.7 . Let the vector c ∈ RK denote the unit cost of all services . For example , ck = 0.01 means that users need to pay $ 0.01 every time they call the kth service . 3 FRUGALMCT FRAMEWORK . In this section , we present FrugalMCT , a framework to adaptively select ML APIs for multi-label classification tasks within a budget . All proofs are left to the appendix . We generalize the scheme in Figure 1 ( a ) to K ML services . As shown in Figure 2 , FrugalMCT contains three main components : an accuracy estimator , a service selector , and a label combiner . Given a data point x , it first calls some base service , denoted by base , which is one of the K APIs , and obtains Ybase ( x ) . Often , base is a cheap or free service ; we discuss how to choose base in Section 3.4 . Next , an accuracy predictor produces a vector â ( x ) ∈ [ 0 , 1 ] K , whose kth value estimates the accuracy of the label set produced by the label combiner using base ’ s and kth API ’ s outputs . The service selector s ( · ) : X 7→ [ K ] then decides if and which add-on service needs to be invoked . Finally , a label combiner generates a label set by combining the predictions from the base and add-on APIs . Take Figure 1 ( d ) as an example . 
The image is first passed to the GitHub model , which produces { ( person , 0.46 ) , ( tennis racket , 0.18 ) } , from which the accuracy predictor predicts the accuracy of the label set generated by combining each API ’ s output with the GitHub model ’ s . The service selector then decides to further invoke Everypixel , which gives { ( person , 0.46 ) , ( sports ball , 0.52 ) } . Finally , the label combiner uses both APIs ’ outputs for the final prediction . FrugalMCT allows users to customize the accuracy predictor and the label combiner , depending on the application . For example , for the image tagging problem , one might use image features ( e.g. , brightness and contrast ) to build the accuracy predictor , while word embeddings can be more useful for named entity recognition . In the following sections , we explain the key designs of the accuracy predictor , the service selector , and the label combiner in more detail . | This paper addresses the practical task of using a combination of ML APIs for multi-label classification. Different from the related work FrugalML, which ignores the correlation between ML APIs, the proposed FrugalMCT selects and combines different ML APIs based on a budget. Sufficient theoretical and empirical analyses are provided to demonstrate the effectiveness of FrugalMCT. | SP:31412e46449bccbfe0b74080a1c15df64b2363d5
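The three-stage flow described above (base call, accuracy prediction, selection, combination) can be sketched as follows. Everything here is a hypothetical stand-in: APIs are modeled as functions returning {label: score} dictionaries, and the combiner simply takes the per-label maximum score with a fixed threshold, whereas the paper's accuracy predictor and combiner are learned components:

```python
# Illustrative FrugalMCT-style flow; the max-score combiner and the 0.3
# threshold are stand-ins, not the paper's actual learned components.
def combine_labels(base_out, addon_out, threshold=0.3):
    """Merge two APIs' {label: score} outputs by per-label max score, then threshold."""
    merged = dict(base_out)
    for label, score in addon_out.items():
        merged[label] = max(merged.get(label, 0.0), score)
    return {l: s for l, s in merged.items() if s >= threshold}

def frugal_predict(x, base_api, addon_apis, acc_predictor, selector, threshold=0.3):
    base_out = base_api(x)                                   # cheap/free base call
    a_hat = [acc_predictor(base_out, k) for k in range(len(addon_apis))]
    k = selector(a_hat)                                      # add-on index, or None
    if k is None:                                            # base output alone suffices
        return {l: s for l, s in base_out.items() if s >= threshold}
    return combine_labels(base_out, addon_apis[k](x), threshold)
```

With the Figure 1 (d) numbers, combining the GitHub output with Everypixel's keeps person (0.46) and sports ball (0.52) while the low-confidence tennis racket (0.18) is filtered out.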
Reinforcement Learning for Adaptive Mesh Refinement | 1 INTRODUCTION . Numerical simulation of PDEs via the finite element method ( FEM ) ( Brenner & Scott , 2007 ) plays an integral role in computational science and engineering ( Reddy & Gartling , 2010 ; Monk et al. , 2003 ) . Given a fixed set of basis functions , the resolution of the finite element mesh determines the trade-off between solution accuracy and computational cost . For complex systems with large variations in local solution characteristics , uniform meshes can be computationally inefficient due to their suboptimal distribution of mesh density , under-resolving regions with complex features such as discontinuities or large gradients and over-resolving regions with smoothly varying solutions . For systems with multi-scale properties in particular , attempting to resolve these features with uniform meshes can be challenging even on the largest supercomputers . To achieve more efficient numerical simulations , adaptive mesh refinement ( AMR ) , a class of methods that dynamically adjust the mesh resolution during a simulation to maintain equidistribution of error , is used to significantly increase accuracy relative to computational cost . Existing methods for AMR share the same iterative process of computing a solution on the current mesh , estimating refinement indicators , marking element ( s ) to refine , and generating a new mesh by refining marked elements ( Bangerth & Rannacher , 2013 ; Červený et al. , 2019 ) . The optimal algorithms for error estimation and marking in many problems , especially evolutionary PDEs , are not known ( Bohn & Feischl , 2021 ) , and deriving them is difficult for complex refinement schemes such as hp-refinement ( Zienkiewicz et al. , 1989 ) . 
As such , the current state-of-the-art is guided largely by heuristic principles that are derived by intuition and expert knowledge ( Zienkiewicz & Zhu , 1992 ) , but choosing the best combination of heuristics is complex and not well understood . We advance the novel notion that adaptive mesh refinement is fundamentally a sequential decision-making problem in which a sequence of greedy decisions based on instantaneous error indicators does not constitute an optimal sequence of decisions for the actual goal of achieving high cumulative or terminal accuracy . In time-dependent problems , for example , an error estimator by itself cannot preemptively refine elements that will encounter complex features in the next time step . This means that the optimality of a refinement decision depends on the accuracy of the future solution and that selecting an element which yields the largest reduction in error at the current time step may not be the optimal decision over the entire simulation . Whether and how optimal AMR strategies can be found by directly optimizing a long-term performance objective are open questions . Given this perspective , we formulate AMR as a Markov decision process ( MDP ) ( Puterman , 2014 ) ( Figure 1 ) and propose a reinforcement learning ( RL ) ( Sutton & Barto , 2018 ) approach that explicitly trains a mesh refinement policy to optimize a performance metric , such as final solution error . In contrast to most , if not all , benchmark problems and complex applications of RL ( Mnih et al. , 2015 ; Brockman et al. , 2016 ; Osband et al. , 2019 ; Berner et al. , 2019 ; Vinyals et al. , 2019 ) , AMR poses a new challenge as the sizes of both the state and the set of available actions depend on the current number of mesh elements , which changes with each refinement action at every MDP time step .
While one may define a fixed and bounded state and action space given a finite refinement budget , doing so is very inefficient , as the policy ’ s input-output dimensions would have to accommodate the full exponentially large space while only subspaces ( of increasing size ) are encountered during simulation . In many practical applications , one would routinely encounter input dimensions on the order of millions or billions of degrees of freedom . This motivates the design of efficient policy architectures that leverage the correspondence between the current mesh state and the valid action set . In this paper , we make the following conceptual , methodological , and experimental contributions : 1 ) We formally define an MDP with effective variable-size state and action spaces for AMR ( Section 3.2 ) ; 2 ) We propose three policy architectures—with differing generality , inductive bias , and capacity for modeling interaction—that operate on such variable-size spaces ( Section 4 ) ; 3 ) As a path toward potentially solving large and complex problems on which RL cannot tractably be trained , we investigate the generalizability of policies trained on small representative features with known analytic solutions and the effectiveness of policies trained using a novel reward formulation that can be applied to problems without known analytic solutions ( Section 5 ) ; 4 ) Our experiments demonstrate for the first time that RL can be competitive with , and sometimes outperform , a greedy refinement strategy based on the widely-used Zienkiewicz-Zhu-type error estimator ; moreover , we show that an RL refinement policy can generalize to higher refinement budgets and larger meshes , transfer effectively from static to time-dependent problems , and can be effectively trained on more complex problems without readily-available ground truth solutions ( Section 6 ) . 2 RELATED WORK .
The formulation of problems in numerical analysis as statistical learning problems can be traced back at least to Poincaré ( Poincaré , 1912 ; Diaconis , 1988 ) . Contemporary works have employed neural networks as powerful function approximators in existing numerical PDE and linear system solvers to achieve faster convergence rates , generalize to different boundary conditions or larger problems , and approximate under-resolved features in coarse-grained simulations ( Hsieh et al. , 2018 ; Luz et al. , 2020 ; Bar-Sinai et al. , 2019 ) . Our work focuses on optimizing a finite element space rather than components of a numerical solver . To the best of our knowledge , no prior work has formulated adaptive mesh refinement as a sequential decision-making problem and proposed a reinforcement learning approach ( Sutton & Barto , 2018 ) . Previous work at the intersection of neural networks and mesh-based simulation trained neural networks to predict mesh densities , sizes , or error fields for use by downstream mesh generators ( Dyck et al. , 1992 ; Chedid & Najjar , 1996 ; Zhang et al. , 2020 ; Pfaff et al. , 2020 ; Chen & Fidkowski , 2020 ) . Brevis et al. ( 2020 ) apply supervised learning to find an optimal parameterized test space without modifying the degrees of freedom . Bohn & Feischl ( 2021 ) show theoretically that the estimation and marking steps of AMR for an elliptic PDE can be represented optimally by a recurrent neural network , but model optimization was left as an open question . Recent studies have leveraged the effectiveness of graph neural networks ( GNN ) ( Sperduti & Starita , 1997 ; Gori et al. , 2005 ; Scarselli et al. , 2008 ) at representing relational structure to predict PDE dynamics on general unstructured and non-uniform meshes ( Alet et al. , 2019 ; Belbute-Peres et al. , 2020 ; Pfaff et al. , 2020 ) .
Previous work on graph generation and formation has employed GNNs as the policy model in an RL context with applications to biological and social network datasets ( You et al. , 2018 ; Trivedi et al. , 2020 ) . Learning a policy for unbounded variable-size state and action spaces is a rare , if not new , problem for RL , which has typically been applied to environments with fixed-size observations and small bounded action spaces in almost all benchmark problems ( Mnih et al. , 2015 ; Brockman et al. , 2016 ; Osband et al. , 2019 ) . While there are notable applications where the available action set varies with state ( Berner et al. , 2019 ; Vinyals et al. , 2019 ) , they do not face the challenge of potentially millions of possible actions that arises in large-scale AMR . The technique of growing action spaces ( Farquhar et al. , 2020 ) maintains a fixed action space size within each episode , whereas both state and action space sizes change at every time step within an episode in AMR . 3 BACKGROUND AND FORMULATION . 3.1 FINITE ELEMENT METHOD . Our mesh adaptation strategy is implemented in a FEM-based framework ( Brenner & Scott , 2007 ) . In FEM , the domain Ω ⊂ R^D is modeled with a mesh that is a union of E nonoverlapping subsets ( elements ) such that Ω := ⋃_k Ω_k , where k ∈ N : k ≤ E . The solution on these elements is represented using polynomials ( basis functions ) , which are used to transform the governing equations into a system of algebraic equations via the weak formulation . AMR is a commonly used approach to improve the trade-off between the solution accuracy , which depends on the shapes and sizes of elements , and the computational cost , which depends on the number of elements . The most ubiquitous method for AMR is h-refinement , whereby elements are split into smaller elements ( refinement ) or multiple elements coalesce to form a single element ( derefinement ) .
In practical applications with unknown true solutions , the conventional AMR approach is to take greedy refinement decisions based on a posteriori error estimators , which rely on the numerical solution and its derived quantities on the current mesh , without regard to long-term optimality . 3.2 AMR AS A MARKOV DECISION PROCESS . We formulate AMR with spatial h-refinement1 as a Markov decision process M := ( O , N_max , A , R , P , γ ) with each component defined as follows . Each episode consists of T RL time steps : for time-dependent PDEs , T spans the entire simulation and there may be multiple underlying PDE evolution steps per RL step ; for static problems , T is an arbitrary number of steps at which RL can act . Consider a time step t when the current mesh has N_t ≤ N_max ∈ N elements . Each element i is associated with an observation o_t^i ∈ O and the global state is s_t := [ o_t^1 , . . . , o_t^{N_t} ] ∈ O^{N_t} . We define O := R^d such that each element ’ s observation is a tensor of shape d := l × w × c that includes the values and refinement depths of a local window centered on itself . For brevity , let S_t denote the current global state space O^{N_t} . We denote an action by a_t ∈ A_t := { 0 , 1 , . . . , N_t } ⊂ A := { 0 , 1 , . . . , N_max } , where 0 means “ do-nothing ” and i ≠ 0 means refine element i . Given the current state and action , the MDP transition P consists of : 1 ) refining the selected element into multiple finer elements ( which increases N_t ) if a refinement budget B is not exceeded and the selected element is not at the maximum refinement depth d_max ; 2 ) stepping the finite element simulation forward in time ( for time-dependent PDEs only ) ; 3 ) computing a solution on the new finite element space .
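A toy environment illustrating the variable-size action set of this MDP (a sketch with hypothetical names; the real environment also steps the FEM simulation and computes solutions, which is omitted here). Action 0 does nothing, action i splits element i in two, subject to the budget B and the depth cap d_max:

```python
# Toy 1-D h-refinement environment mirroring the MDP of Section 3.2:
# action 0 = do nothing, action i = split element i (1-indexed).
class ToyAMREnv:
    def __init__(self, n_init=4, budget=3, d_max=2):
        self.depths = [0] * n_init   # refinement depth of each element
        self.budget = budget         # budget B on the number of refinements
        self.used = 0
        self.d_max = d_max           # maximum refinement depth

    @property
    def n_actions(self):             # |A_t| = N_t + 1, grows as the mesh refines
        return len(self.depths) + 1

    def step(self, action):
        if action != 0:
            i = action - 1
            if self.used < self.budget and self.depths[i] < self.d_max:
                d = self.depths[i]
                # h-refinement: replace element i with two children of depth d + 1
                self.depths[i:i + 1] = [d + 1, d + 1]
                self.used += 1
        return list(self.depths)     # stand-in observation: depth per element
```

Each refinement grows both the state (one more element observation) and the action set by one, exactly the property that motivates the paper's variable-size policy architectures.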
When a true solution is available at training time , the reward at step t is defined as the change in error from the previous step , normalized by the initial error to reduce variation across function classes : r_t := ( ‖e_{t−1}‖_2 − ‖e_t‖_2 ) / ‖e_0‖_2 , ( 1 ) where the error e is computed relative to the true solution . With abuse of notation , we shall use e to indicate the error norm . The ground truth is not needed to deploy a trained policy on test problems . When the true solution is not readily available , as is the case for most non-trivial PDEs , one may run a reference simulation on a highly-resolved mesh to compute equation 1 , but this approach can be prohibitively expensive for training on large-scale simulations . Instead , we propose the use of a surrogate reward r_t := ‖u_{t , refine} − u_{t , no-refine}‖_2 , the normed difference between the estimated solution u with and without executing the chosen refinement action . This surrogate , which is an upper bound on the true reward and effectively acts as an estimate of the error reduction , is only used at training time to minimize computational effort , whereas at test time , the effectiveness of trained policies is evaluated using the error computed with respect to a highly-resolved reference simulation . ( Footnote 1 : Polynomial p-refinement can be formulated in a similar way . r-refinement ( Huang & Russell , 2010 ; Dobrev et al. , 2019 ) can be formulated as an RL problem but is not treated in this work . ) Our objective is to find a stochastic policy π : S_t → ∆ ( A_t ) that maximizes J ( π ) := E_{a_t ∼ π ( · | s_t ) , s_{t+1} ∼ P ( · | a_t , s_t )} [ Σ_{t=1}^T γ^t r_t ] . ( 2 ) Aside from γ ∈ ( 0 , 1 ) , this objective is equivalent to maximizing the total error reduction e_0 − e_final . Although the size of the state vector and the set of valid actions change at each time step due to the varying N_t , this MDP is well-defined since one can define the global state space as the set of all possible O^N , N ≤ N_max , and likewise for the action space .
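Both reward variants follow directly from the definitions above; a minimal sketch (function names are ours, and the arrays stand in for nodal error/solution vectors):

```python
import numpy as np

# Normalized error-reduction reward (Eq. 1) and the surrogate reward used
# when no true solution is available at training time.
def true_reward(e_prev, e_curr, e_init):
    """r_t = (||e_{t-1}||_2 - ||e_t||_2) / ||e_0||_2, errors vs. the true solution."""
    return (np.linalg.norm(e_prev) - np.linalg.norm(e_curr)) / np.linalg.norm(e_init)

def surrogate_reward(u_refine, u_no_refine):
    """r_t = ||u_{t,refine} - u_{t,no-refine}||_2: solution change caused by refining."""
    return np.linalg.norm(np.asarray(u_refine) - np.asarray(u_no_refine))
```

Note that `surrogate_reward` needs only two cheap solves on the current mesh, which is why it avoids the expensive highly-resolved reference simulation during training.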
Hence , the policy is navigating through subspaces of increasing size during an episode . Moreover , the exact 1:1 correspondence between the number of observation components and the number of valid actions calls for designing a dedicated policy architecture for AMR , which we present below in Section 4 . We work with the class of policy optimization methods as they naturally admit stochastic policies that could benefit AMR at test time : a stochastic refinement action could reveal the need for further refinement in a region that appears flat on a coarse mesh . We build on the policy gradient algorithm ( Sutton et al. , 2000 ; Schulman et al. , 2017 ) to train a policy π_θ ( parameterized by θ ) using batches of trajectories { τ_k := { ( s_t , a_t , r_t )^k }_{t=1}^T }_{k=1}^K generated by the current policy . | This paper proposes an application of reinforcement learning to adaptive mesh refinement in large-scale finite element simulations of complex physical systems. The authors formulate the mesh refinement problem as an MDP and propose different policy architectures for scalable application of reinforcement learning. Experimental results demonstrate that the proposed RL approaches outperform existing baselines, and can generalize well to different refinement budgets and larger meshes. | SP:50912255573295ef5e76ec95e6e83b9ee0b3534e
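A minimal REINFORCE-style update compatible with the varying action-set size N_t + 1: one shared weight vector scores every element observation and a learned scalar gives the "do-nothing" logit, so the same parameters apply at any mesh size. This is an illustrative stand-in for the paper's policy architectures, not a reproduction of them:

```python
import numpy as np

# Per-element scoring policy: softmax over [w0, x_1·theta, ..., x_{N_t}·theta].
def policy_probs(theta, w0, obs):               # obs: (N_t, d) array
    logits = np.concatenate(([w0], obs @ theta))
    z = np.exp(logits - logits.max())           # stable softmax
    return z / z.sum()

def reinforce_update(theta, w0, trajectory, lr=0.1, gamma=0.99):
    """trajectory: list of (obs, action, reward); returns updated (theta, w0)."""
    G = 0.0
    for obs, a, r in reversed(trajectory):
        G = r + gamma * G                       # discounted return from step t
        p = policy_probs(theta, w0, obs)
        # grad log pi wrt theta: x_a - E_p[x], with x_0 = 0 for "do nothing"
        feats = np.vstack([np.zeros(obs.shape[1]), obs])
        theta = theta + lr * G * (feats[a] - p @ feats)
        w0 = w0 + lr * G * ((1.0 if a == 0 else 0.0) - p[0])
    return theta, w0
```

Because the softmax is taken over however many elements the mesh currently has, neither the parameters nor the update depend on N_t, which is the essential requirement identified in the text.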
Reinforcement Learning for Adaptive Mesh Refinement | 1 INTRODUCTION . Numerical simulation of PDEs via the finite element method ( FEM ) ( Brenner & Scott , 2007 ) plays an integral role in computational science and engineering ( Reddy & Gartling , 2010 ; Monk et al. , 2003 ) . Given a fixed set of basis functions , the resolution of the finite element mesh determines the trade-off between solution accuracy and computational cost . For complex systems with large variations in local solution characteristics , uniform meshes can be computationally inefficient due to their suboptimal distribution of mesh density , under-resolving regions with complex features such as discontinuities or large gradients and over-resolving regions with smoothly varying solutions . For systems with multi-scale properties in particular , attempting to resolve these features with uniform meshes can be challenging even on the largest supercomputers . To achieve more efficient numerical simulations , adaptive mesh refinement ( AMR ) , a class of methods that dynamically adjust the mesh resolution during a simulation to maintain equidistribution of error , is used to significantly increase accuracy relative to computational cost . Existing methods for AMR share the same iterative process of computing a solution on the current mesh , estimating refinement indicators , marking element ( s ) to refine , and generating a new mesh by refining marked elements ( Bangerth & Rannacher , 2013 ; Červený et al. , 2019 ) . The optimal algorithms for error estimation and marking in many problems , especially evolutionary PDEs , are not known ( Bohn & Feischl , 2021 ) , and deriving them is difficult for complex refinement schemes such as hp-refinement ( Zienkiewicz et al. , 1989 ) . 
As such , the current state-of-the-art is guided largely by heuristic principles that are derived by intuition and expert knowledge ( Zienkiewicz & Zhu , 1992 ) , but choosing the best combination of heuristics is complex and not well understood . We advance the novel notion that adaptive mesh refinement is fundamentally a sequential decision-making problem in which a sequence of greedy decisions based on instantaneous error indicators does not constitute an optimal sequence of decisions for the actual goal of achieving high cumulative or terminal accuracy . In time-dependent problems , for example , an error estimator by itself cannot preemptively refine elements that will encounter complex features in the next time step . This means that the optimality of a refinement decision depends on the accuracy of the future solution and that selecting an element which yields the largest reduction in error at the current time step may not be the optimal decision over the entire simulation . Whether and how optimal AMR strategies can be found by directly optimizing a long-term performance objective are open questions . Given this perspective , we formulate AMR as a Markov decision process ( MDP ) ( Puterman , 2014 ) ( Figure 1 ) and propose a reinforcement learning ( RL ) ( Sutton & Barto , 2018 ) approach that explicitly trains a mesh refinement policy to optimize a performance metric , such as final solution error . In contrast to most , if not all , benchmark problems and complex applications of RL ( Mnih et al. , 2015 ; Brockman et al. , 2016 ; Osband et al. , 2019 ; Berner et al. , 2019 ; Vinyals et al. , 2019 ) , AMR poses a new challenge as the sizes of both the state and the set of available actions depend on the current number of mesh elements , which changes with each refinement action at every MDP time step .
While one may define a fixed and bounded state and action space given a finite refinement budget , doing so is very inefficient , as the policy ’ s input-output dimensions would have to accommodate the full exponentially large space while only subspaces ( of increasing size ) are encountered during simulation . In many practical applications , one would routinely encounter input dimensions on the order of millions or billions of degrees of freedom . This motivates the design of efficient policy architectures that leverage the correspondence between the current mesh state and the valid action set . In this paper , we make the following conceptual , methodological , and experimental contributions : 1 ) We formally define an MDP with effective variable-size state and action spaces for AMR ( Section 3.2 ) ; 2 ) We propose three policy architectures—with differing generality , inductive bias , and capacity for modeling interaction—that operate on such variable-size spaces ( Section 4 ) ; 3 ) As a path toward potentially solving large and complex problems on which RL cannot tractably be trained , we investigate the generalizability of policies trained on small representative features with known analytic solutions and the effectiveness of policies trained using a novel reward formulation that can be applied to problems without known analytic solutions ( Section 5 ) ; 4 ) Our experiments demonstrate for the first time that RL can be competitive with , and sometimes outperform , a greedy refinement strategy based on the widely-used Zienkiewicz-Zhu-type error estimator ; moreover , we show that an RL refinement policy can generalize to higher refinement budgets and larger meshes , transfer effectively from static to time-dependent problems , and can be effectively trained on more complex problems without readily-available ground truth solutions ( Section 6 ) . 2 RELATED WORK .
The formulation of problems in numerical analysis as statistical learning problems can be traced back at least to Poincaré ( Poincaré , 1912 ; Diaconis , 1988 ) . Contemporary works have employed neural networks as powerful function approximators in existing numerical PDE and linear system solvers to achieve faster convergence rates , generalize to different boundary conditions or larger problems , and approximate under-resolved features in coarse-grained simulations ( Hsieh et al. , 2018 ; Luz et al. , 2020 ; Bar-Sinai et al. , 2019 ) . Our work focuses on optimizing a finite element space rather than components of a numerical solver . To the best of our knowledge , no prior work has formulated adaptive mesh refinement as a sequential decision-making problem and proposed a reinforcement learning approach ( Sutton & Barto , 2018 ) . Previous work at the intersection of neural networks and mesh-based simulation trained neural networks to predict mesh densities , sizes , or error fields for use by downstream mesh generators ( Dyck et al. , 1992 ; Chedid & Najjar , 1996 ; Zhang et al. , 2020 ; Pfaff et al. , 2020 ; Chen & Fidkowski , 2020 ) . Brevis et al. ( 2020 ) apply supervised learning to find an optimal parameterized test space without modifying the degrees of freedom . Bohn & Feischl ( 2021 ) show theoretically that the estimation and marking steps of AMR for an elliptic PDE can be represented optimally by a recurrent neural network , but model optimization was left as an open question . Recent studies have leveraged the effectiveness of graph neural networks ( GNN ) ( Sperduti & Starita , 1997 ; Gori et al. , 2005 ; Scarselli et al. , 2008 ) at representing relational structure to predict PDE dynamics on general unstructured and non-uniform meshes ( Alet et al. , 2019 ; Belbute-Peres et al. , 2020 ; Pfaff et al. , 2020 ) .
Previous work on graph generation and formation has employed GNNs as the policy model in an RL context with applications to biological and social network datasets ( You et al. , 2018 ; Trivedi et al. , 2020 ) . Learning a policy for unbounded variable-size state and action spaces is a rare , if not new , problem for RL , which has typically been applied to environments with fixed-size observations and small bounded action spaces in almost all benchmark problems ( Mnih et al. , 2015 ; Brockman et al. , 2016 ; Osband et al. , 2019 ) . While there are notable applications where the available action set varies with state ( Berner et al. , 2019 ; Vinyals et al. , 2019 ) , they do not face the challenge of potentially millions of possible actions that arises in large-scale AMR . The technique of growing action spaces ( Farquhar et al. , 2020 ) maintains a fixed action space size within each episode , whereas both state and action space sizes change at every time step within an episode in AMR . 3 BACKGROUND AND FORMULATION . 3.1 FINITE ELEMENT METHOD . Our mesh adaptation strategy is implemented in a FEM-based framework ( Brenner & Scott , 2007 ) . In FEM , the domain Ω ⊂ R^D is modeled with a mesh that is a union of E nonoverlapping subsets ( elements ) such that Ω := ⋃_k Ω_k , where k ∈ N : k ≤ E . The solution on these elements is represented using polynomials ( basis functions ) , which are used to transform the governing equations into a system of algebraic equations via the weak formulation . AMR is a commonly used approach to improve the trade-off between the solution accuracy , which depends on the shapes and sizes of elements , and the computational cost , which depends on the number of elements . The most ubiquitous method for AMR is h-refinement , whereby elements are split into smaller elements ( refinement ) or multiple elements coalesce to form a single element ( derefinement ) .
In practical applications with unknown true solutions , the conventional AMR approach is to take greedy refinement decisions based on a posteriori error estimators , which rely on the numerical solution and its derived quantities on the current mesh , without regard to long-term optimality . 3.2 AMR AS A MARKOV DECISION PROCESS . We formulate AMR with spatial h-refinement1 as a Markov decision process M := ( O , N_max , A , R , P , γ ) with each component defined as follows . Each episode consists of T RL time steps : for time-dependent PDEs , T spans the entire simulation and there may be multiple underlying PDE evolution steps per RL step ; for static problems , T is an arbitrary number of steps at which RL can act . Consider a time step t when the current mesh has N_t ≤ N_max ∈ N elements . Each element i is associated with an observation o_t^i ∈ O and the global state is s_t := [ o_t^1 , . . . , o_t^{N_t} ] ∈ O^{N_t} . We define O := R^d such that each element ’ s observation is a tensor of shape d := l × w × c that includes the values and refinement depths of a local window centered on itself . For brevity , let S_t denote the current global state space O^{N_t} . We denote an action by a_t ∈ A_t := { 0 , 1 , . . . , N_t } ⊂ A := { 0 , 1 , . . . , N_max } , where 0 means “ do-nothing ” and i ≠ 0 means refine element i . Given the current state and action , the MDP transition P consists of : 1 ) refining the selected element into multiple finer elements ( which increases N_t ) if a refinement budget B is not exceeded and the selected element is not at the maximum refinement depth d_max ; 2 ) stepping the finite element simulation forward in time ( for time-dependent PDEs only ) ; 3 ) computing a solution on the new finite element space .
When a true solution is available at training time , the reward at step t is defined as the change in error from the previous step , normalized by the initial error to reduce variation across function classes : r_t := ( ‖e_{t−1}‖_2 − ‖e_t‖_2 ) / ‖e_0‖_2 , ( 1 ) where the error e is computed relative to the true solution . With abuse of notation , we shall use e to indicate the error norm . The ground truth is not needed to deploy a trained policy on test problems . When the true solution is not readily available , as is the case for most non-trivial PDEs , one may run a reference simulation on a highly-resolved mesh to compute equation 1 , but this approach can be prohibitively expensive for training on large-scale simulations . Instead , we propose the use of a surrogate reward r_t := ‖u_{t , refine} − u_{t , no-refine}‖_2 , the normed difference between the estimated solution u with and without executing the chosen refinement action . This surrogate , which is an upper bound on the true reward and effectively acts as an estimate of the error reduction , is only used at training time to minimize computational effort , whereas at test time , the effectiveness of trained policies is evaluated using the error computed with respect to a highly-resolved reference simulation . ( Footnote 1 : Polynomial p-refinement can be formulated in a similar way . r-refinement ( Huang & Russell , 2010 ; Dobrev et al. , 2019 ) can be formulated as an RL problem but is not treated in this work . ) Our objective is to find a stochastic policy π : S_t → ∆ ( A_t ) that maximizes J ( π ) := E_{a_t ∼ π ( · | s_t ) , s_{t+1} ∼ P ( · | a_t , s_t )} [ Σ_{t=1}^T γ^t r_t ] . ( 2 ) Aside from γ ∈ ( 0 , 1 ) , this objective is equivalent to maximizing the total error reduction e_0 − e_final . Although the size of the state vector and the set of valid actions change at each time step due to the varying N_t , this MDP is well-defined since one can define the global state space as the set of all possible O^N , N ≤ N_max , and likewise for the action space .
Hence , the policy is navigating through subspaces of increasing size during an episode . Moreover , the exact 1:1 correspondence between the number of observation components and the number of valid actions calls for designing a dedicated policy architecture for AMR , which we present below in Section 4 . We work with the class of policy optimization methods as they naturally admit stochastic policies that could benefit AMR at test time : a stochastic refinement action could reveal the need for further refinement in a region that appears flat on a coarse mesh . We build on the policy gradient algorithm ( Sutton et al. , 2000 ; Schulman et al. , 2017 ) to train a policy π_θ ( parameterized by θ ) using batches of trajectories { τ_k := { ( s_t , a_t , r_t )^k }_{t=1}^T }_{k=1}^K generated by the current policy . | For various complicated problems governed by PDEs (e.g. solid/fluid interactions, aerodynamics, elasticity, backscattering, etc.), the computational cost can become prohibitive even for one query, let alone a parametric study. At the same time, mesh refinement is crucial to achieve acceptable accuracy. To mitigate such challenges, one solution is adaptive mesh refinement (AMR), in which the mesh is refined only in the regions that are numerically sensitive to error propagation. For example, for boundary-layer models or shock-boundary-layer interactions, one must capture the dynamics in high-gradient regions of the solution, which are typically in the vicinity of the walls or are part of the solution, while in distant regions of the domain a coarse mesh is sufficient. The authors recognize that the process of AMR, refinement at each step, can be formulated as a Markov decision process (MDP) and hence utilize reinforcement learning (RL) to train refinement policies directly from simulation. But this in turn poses a new challenge: at each step, the dimension of the state (number of elements) and action space may (and should) change.
They propose suitable policy updates to overcome this challenge and come up with three different architectures for the implementation. Three test cases are used for experiments (static, advection, and Burgers), and the authors compare all three architectures with each other as well as with some traditional AMR methods to demonstrate the performance of the proposed method. They also carry out extra tests on the same set of PDEs to show the generalization and out-of-distribution capabilities of the method for both static and transient PDEs. The paper is well-written and two tricks for RL are impressive (using Nmax to take care of the varying dimension of the state/action space, and the use of a surrogate reward for training). However, I have some major reservations that I'll explain below in the Main Review. | SP:50912255573295ef5e76ec95e6e83b9ee0b3534e
Information Gain Propagation: a New Way to Graph Active Learning with Soft Labels | 1 INTRODUCTION . Graph Neural Networks ( GNNs ) have recently achieved remarkable success in various graph-based tasks , ranging from traffic networks , biology , to social networks ( Zhang et al. , 2020 ; Wang et al. , 2019 ; Li et al. , 2019 ; Do et al. , 2019 ) . Despite their effectiveness and popularity , GNNs typically require a large amount of labeled data to achieve satisfactory performance . However , obtaining these labels is a time-consuming , laborious , and costly process . Therefore , how to do this economically and efficiently has attracted great attention both in academia and industry . One of the most popular strategies to tackle this challenge is Active Learning ( AL ) ( Aggarwal et al. , 2014 ) . By combining model training and node selection for labeling , AL significantly reduces labeling cost by selecting the most valuable nodes to label . However , previous AL methods assume the hard label ( namely the exact label , that is , the label specifying the exact class of the node ) can always be provided by the oracle . As illustrated in Fig . 1 , for any selected node , the active learners in the aforementioned methods ask questions like “ which category does it exactly belong to ? ” . Such queries and assumptions exceed the capability of the oracle in many labeling tasks requiring domain knowledge . For example , the task in ogbn-papers100M is to leverage the citation graph to infer the labels of the arXiv papers into 172 arXiv subject areas ( a single-label , 172-class classification problem ) . In this example , a specialist/expert in the subject areas of machine learning is incapable of labeling query instances with subject areas of finance ( such as mathematical finance or computational finance ) , which are outside their domain knowledge . In this paper , we propose a novel active learning paradigm for GNNs in which only soft labels are required .
Two salient features of our paradigm are as follows : First , we propose relaxed queries where the domain expert ( oracle ) only judges the correctness of the predicted labels ( by GNN models ) rather than providing the exact categorization . Specifically , we propose to select the class label with the highest predicted probability and then ask the oracle to judge whether the class label is correct rather than directly asking for the exact class label . In this way , the multi-class classification task is relaxed into a binary-class classification task for the oracle . Under such queries , if the model prediction is incorrect ( we assume the oracle does not make mistakes ) , we annotate the node with the soft label by re-normalizing the model predicted softmax outputs over the remaining classes . [ Figure 1 : An example of our new node labeling strategy . The previous strategy asks the oracle “ Which category does it exactly belong to ? ” and obtains the hard label [ 1 , 0 , 0 ] ; our strategy asks “ Does it belong to the first category ? ” and , on a “ No ” answer , re-normalizes the model prediction [ 0.5 , 0.3 , 0.2 ] into the soft label [ 0 , 0.6 , 0.4 ] . Figure 2 : An example of IGP , where IGP ( ) = H ( 0.7 , 0.16 , 0.14 ) – H ( 1/3 , 1/3 , 1/3 ) . ] As illustrated in Fig.1 , we allow relaxed queries such as “ Does the sample belong to this class ? ” ( e.g. , the first class in Fig . 1 ) , since this query is specific to the domain of experts and hence easier to answer . To the best of our knowledge , no previous AL method can deal with such relaxed queries . Second , the node selection criteria in previous AL methods are specially designed for hard-labeling oracles . As one of the most commonly used AL query strategies , uncertainty sampling queries the labels of the nodes about which the current model is least certain , ensuring the largest entropy reduction on a single node when given the hard label from the oracle .
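The re-normalization step above is easy to make concrete. Below is a minimal sketch (not the authors' code; the function name is mine): starting from the Fig. 1 prediction [0.5, 0.3, 0.2], a "No" answer to the top class zeroes it out and re-normalizes the rest into the soft label [0, 0.6, 0.4].

```python
import numpy as np

def relaxed_query_update(probs, rejected_class):
    """Renormalize a softmax prediction after the oracle answers 'No'
    to the queried class (the relaxed-query soft label)."""
    soft = probs.astype(float).copy()
    soft[rejected_class] = 0.0          # oracle ruled this class out
    return soft / soft.sum()            # renormalize over remaining classes

# Fig. 1 example: model predicts [0.5, 0.3, 0.2]; oracle says "No" to class 0
pred = np.array([0.5, 0.3, 0.2])
soft_label = relaxed_query_update(pred, rejected_class=0)
assert np.allclose(soft_label, [0.0, 0.6, 0.4])
```

The resulting vector is itself a valid distribution, which is what lets the incorrect prediction still act as (soft) supervision.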
However , under our relaxed queries , we prove that uncertainty sampling can not guarantee the largest information gain ( i.e. , entropy reduction ) on a single node . In this paper , we propose a new criterion to explicitly maximize information gain propagation ( IGP ) for active learners with relaxed queries and soft labels on graphs . As shown in Fig . 2 , the labeled nodes ( with color ) can propagate their label information with different influence magnitudes and then reduce the uncertainty of adjacent unlabeled nodes ( without color ) . Considering the influence propagation in GNNs , we select the nodes that maximize the expected information gain in total under the relaxed queries , i.e. , reducing the aggregated uncertainty of a neighborhood region beyond a single node . To the best of our knowledge , this is the first work to propose information gain propagation with relaxed queries and to show that it is highly effective . In summary , the core contributions of this work are the following : First , we provide a new AL paradigm for GNNs with relaxed queries and soft labels . Second , we propose a novel node selection criterion that explicitly maximizes the propagation of information gain under the new AL paradigm on GNNs . Third , experimental results show our paradigm significantly outperforms the compared baselines by a large margin on five open graph datasets . 2 PRELIMINARY . 2.1 PROBLEM FORMULATION . Let $G = (V, E)$ with $|V| = N$ nodes and $|E| = M$ edges be the graph , and let $c$ be the number of classes . $X = \{x_1 , x_2 , \ldots , x_N\}$ is the node feature matrix , and the one-hot vector $y_i \in \mathbb{R}^c$ and $x_i \in \mathbb{R}^d$ are the ground-truth label and node feature for node $v_i \in V$ , respectively . We consider a general AL setting in this work . Suppose the full node set $V$ is partitioned into training set $V_{train}$ , validation set $V_{val}$ , and test set $V_{test}$ , and $V_{train}$ can further be partitioned into the labeled set $V_l$ and unlabeled set $V_u$ , respectively .
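The per-node information gain in Fig. 2 is just an entropy difference. A small sketch using the figure's numbers, under my own sign convention (I write the gain as entropy reduction, H(prior) − H(posterior); the figure writes the same difference in the opposite order):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Fig. 2 numbers: a neighbor's prediction sharpens from the uniform prior
# (1/3, 1/3, 1/3) to (0.7, 0.16, 0.14); the entropy reduction is the gain
# that propagates to that node from the newly labeled neighbor.
gain = entropy([1/3, 1/3, 1/3]) - entropy([0.7, 0.16, 0.14])
assert gain > 0
```

Summing such reductions over a labeled node's neighborhood (weighted by influence) is the aggregated quantity the IGP criterion maximizes.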
Given the labeling budget $B$ and the loss function $\ell$ , the goal of graph-based AL is to select a subset $V_l \subset V_{train}$ to label , so that the model $f$ trained with the supervision of $V_l$ achieves the lowest loss on $V_{test}$ : $\arg\min_{V_l : |V_l| = B} \mathbb{E}_{v_i \in V_{test}} [\ell(y_i , \hat{y}_i)]$ , ( 1 ) where $\hat{y}_i$ is the softmax output of node $v_i$ given by the model $f$ . Different from the previous setting , $B$ is the labeling cost ( i.e. , money ) rather than the size in our new scenario . Since experts can hardly annotate nodes outside their domain , we assume a conventional query costs $c-1$ times as much as our relaxed query for each selected node . Correspondingly , the objective in our new scenario is $\arg\min_{V_l : M(V_l) = B} \mathbb{E}_{v_i \in V_{test}} [\ell(y_i , \hat{y}_i)]$ , ( 2 ) where $M(V_l)$ is the labeling cost for annotating $V_l$ , and we focus on $f$ being GNNs due to their state-of-the-art performance in many semi-supervised node classification tasks . 2.2 GRAPH NEURAL NETWORKS . Unlike images , text , or tabular data , where training data are independently and identically distributed , samples in a graph are connected by edges , and each node in a GNN aggregates the embeddings or features of its neighbors along edges to enhance its own embedding . Let $f(A , X^{(k)} ; W^{(k)})$ be any GNN , where $A$ is the adjacency matrix , $X^{(k)}$ is the feature matrix of the $k$-th layer , and $W^{(k)}$ are the learned weights of the $k$-th layer . Taking the widely used Graph Convolution Network ( GCN ) ( Kipf & Welling , 2017a ) as an example , each GCN layer can be formulated as : $X^{(k+1)} = f(A , X^{(k)} , W^{(k)}) = \delta(\tilde{D}^{-1} \tilde{A} X^{(k)} W^{(k)})$ , ( 3 ) where $X$ ( $= X^{(0)}$ ) is the original node feature matrix , and $X^{(k)}$ and $X^{(k+1)}$ are the embeddings of layers $k$ and $k+1$ , respectively . Besides , $\tilde{D}$ is the diagonal node degree matrix for normalization and $\tilde{A} = A + I_N$ is the adjacency matrix with self-connections , where $I_N$ is the identity matrix .
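Equation 3 can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the nonlinearity δ is instantiated as ReLU, which is a common but here assumed choice:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer (Eq. 3): X' = ReLU(D̃⁻¹ Ã X W), with Ã = A + I_N and
    D̃ the diagonal degree matrix of Ã. ReLU stands in for δ."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                       # add self-connections
    d_inv = 1.0 / A_tilde.sum(axis=1)             # inverse degrees of Ã
    propagated = (A_tilde * d_inv[:, None]) @ X   # row-normalized propagation D̃⁻¹ÃX
    return np.maximum(propagated @ W, 0.0)        # linear transform + ReLU

# Toy path graph with 3 nodes, 2-d features, 2 output units
np.random.seed(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.randn(3, 2)
W = np.random.randn(2, 2)
H = gcn_layer(A, X, W)
assert H.shape == (3, 2) and (H >= 0).all()
```

Note that with self-connections the middle node of the path averages all three feature vectors before the linear transform, which is exactly the "getting more nodes involved" effect discussed next.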
Compared with the DNN layer , which is formulated as $X^{(k+1)} = \delta(X^{(k)} W^{(k)})$ , the feature matrix $X^{(k)}$ is first enhanced with the feature propagation operation $\tilde{D}^{-1} \tilde{A} X^{(k)}$ , which boosts semi-supervised node classification performance by getting more nodes involved in the model training . 2.3 ACTIVE LEARNING . Common AL . AL can improve labeling efficiency by selecting the most valuable samples to label . Considering informativeness , Uncertainty Sampling ( Yang et al. , 2015 ; Zhu et al. , 2008 ) selects the nodes with the most uncertain model predictions . Besides , Query-by-Committee ( Burbidge et al. , 2007 ; Melville & Mooney , 2004 ) trains a committee of models and selects samples according to the extent to which the models disagree . Furthermore , Density-based ( Zhu et al. , 2008 ; Tang et al. , 2002 ) , Clustering-based ( Du et al. , 2015 ; Nguyen & Smeulders , 2004 ) , and Diversity-based ( Wu et al. , 2020 ; Jiang & Gupta , 2021 ) methods can effectively select the most representative samples . All the methods above are proposed for binary or multi-class classification , and some methods ( Qi et al. , 2008 ; Yan & Huang , 2018 ) are proposed for the multi-label classification problem . Besides , considering class imbalance , some methods ( Aggarwal et al. , 2020 ; Attenberg & Provost , 2010 ) are also proposed to tackle this issue with AL . GNN-based AL . Despite the effectiveness of common AL methods , it is unsuitable to directly apply them to GNNs since the characteristic of influence propagation has not been considered . To tackle this issue , both AGE ( Cai et al. , 2017 ) and ANRMAB ( Gao et al. , 2018 ) introduce the density of node embeddings and PageRank centrality into the node selection criterion . Besides , ALG ( Zhang et al. , 2021a ) proposes to maximize the effectiveness of all influenced nodes of GNNs . Recently , Grain ( Zhang et al.
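The entropy-based uncertainty sampling described above can be sketched as follows (illustrative only; the function and variable names are mine, not from any of the cited works):

```python
import numpy as np

def uncertainty_sampling(probs, k=1):
    """Pick the k samples whose softmax predictions have the highest
    entropy — the classic uncertainty criterion."""
    logp = np.log(np.clip(probs, 1e-12, 1.0))
    ent = -(probs * logp).sum(axis=1)          # per-sample predictive entropy
    return np.argsort(-ent)[:k]                # most uncertain first

probs = np.array([[0.90, 0.05, 0.05],   # confident prediction
                  [0.34, 0.33, 0.33],   # near-uniform -> most uncertain
                  [0.60, 0.30, 0.10]])
print(uncertainty_sampling(probs, k=1))  # → [1]
```

The GNN-based criteria surveyed next (AGE, ANRMAB, ALG, Grain) can be read as replacing or augmenting this per-node entropy score with graph-aware terms such as centrality, density, and receptive field.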
, 2021b ) introduces a new diversity influence maximization objective , which takes the diversity of influenced nodes into consideration . Both AGE ( Cai et al. , 2017 ) and ANRMAB ( Gao et al. , 2018 ) are designed for AL on GNNs , and adopt uncertainty , density , and node degree to select nodes . ANRMAB improves AGE by introducing a multi-armed bandit mechanism for adaptive decision-making . Considering the high training time of GNNs , ALG ( Zhang et al. , 2021a ) decouples the GNN model and proposes a new node selection metric that maximizes the effective receptive field . Grain ( Zhang et al. , 2021b ) further generalizes the receptive field to the number of activated nodes in social influence maximization and introduces diversity influence maximization for node selection . However , all these AL methods assume the oracle knows the information of all categories , thereby being able to correctly annotate all the selected nodes . This is impractical if the oracle encounters areas they are not familiar with , especially when the number of categories is large . IGP is the first work to actively label graph data with domain-specific experts as oracles . | The paper proposes an active learning method for GNNs that is based on information gain maximization, where the information gain is obtained by querying a data point and looking at the influence of the queried node on the neighborhood relative to their previous information. They also propose relaxing the oracle answer to a binary confirmation of the most probable label. Experiments are presented where some advantage is shown for the method. | SP:3907616cf8748efca1c63a16cfb9335e1380aec8 |
Information Gain Propagation: a New Way to Graph Active Learning with Soft Labels | 1 INTRODUCTION . Graph Neural Networks ( GNNs ) have recently achieved remarkable success in various graph-based tasks , ranging from traffic networks , biology , to social networks ( Zhang et al. , 2020 ; Wang et al. , 2019 ; Li et al. , 2019 ; Do et al. , 2019 ) . Despite their effectiveness and popularity , GNNs typically require a large amount of labeled data to achieve satisfactory performance . However , obtaining these labels is a time-consuming , laborious , and costly process . Therefore , how to obtain labels economically and efficiently has attracted great attention both in academia and industry . One of the most popular strategies to tackle this challenge is Active Learning ( AL ) ( Aggarwal et al. , 2014 ) . By combining model training and node selection for labeling , AL significantly reduces labeling cost by selecting the most valuable nodes to label . However , previous AL methods assume the hard label ( namely the exact label , that is , the label specifying the exact class of the node ) can always be provided by the oracle . As illustrated in Fig . 1 , for any selected node , the active learners in the aforementioned methods ask questions like “ which category does it exactly belong to ? ” . Such queries and assumptions exceed the capability of the oracle in many labeling tasks requiring domain knowledge . For example , the task in ogbn-papers100M is to leverage the citation graph to infer the labels of the arXiv papers into 172 arXiv subject areas ( a single-label , 172-class classification problem ) . In this example , a specialist/expert in the subject areas of machine learning is incapable of labeling query instances with subject areas of finance ( such as mathematical finance or computational finance ) , which is outside their domain knowledge . In this paper , we propose a novel active learning paradigm for GNNs in which only soft labels are required .
Two salient features of our paradigm are as follows : First , we propose relaxed queries where the domain expert ( oracle ) only judges the correctness of the predicted labels ( by GNN models ) rather than providing the exact categorization . Specifically , we propose to select the class label with the highest predicted probability and then ask the oracle to judge whether the class label is correct rather than directly asking for the exact class label . In this way , the multi-class classification task is relaxed into a binary-class classification task for the oracle . Under such queries , if the model prediction is incorrect ( we assume the oracle does not make mistakes ) , we annotate the node with the soft label by re-normalizing the model predicted softmax outputs over the remaining classes . [ Figure 1 : An example of our new node labeling strategy . The previous strategy asks the oracle “ Which category does it exactly belong to ? ” and obtains the hard label [ 1 , 0 , 0 ] ; our strategy asks “ Does it belong to the first category ? ” and , on a “ No ” answer , re-normalizes the model prediction [ 0.5 , 0.3 , 0.2 ] into the soft label [ 0 , 0.6 , 0.4 ] . Figure 2 : An example of IGP , where IGP ( ) = H ( 0.7 , 0.16 , 0.14 ) – H ( 1/3 , 1/3 , 1/3 ) . ] As illustrated in Fig.1 , we allow relaxed queries such as “ Does the sample belong to this class ? ” ( e.g. , the first class in Fig . 1 ) , since this query is specific to the domain of experts and hence easier to answer . To the best of our knowledge , no previous AL method can deal with such relaxed queries . Second , the node selection criteria in previous AL methods are specially designed for hard-labeling oracles . As one of the most commonly used AL query strategies , uncertainty sampling queries the labels of the nodes about which the current model is least certain , ensuring the largest entropy reduction on a single node when given the hard label from the oracle .
However , under our relaxed queries , we prove that uncertainty sampling can not guarantee the largest information gain ( i.e. , entropy reduction ) on a single node . In this paper , we propose a new criterion to explicitly maximize information gain propagation ( IGP ) for active learners with relaxed queries and soft labels on graphs . As shown in Fig . 2 , the labeled nodes ( with color ) can propagate their label information with different influence magnitudes and then reduce the uncertainty of adjacent unlabeled nodes ( without color ) . Considering the influence propagation in GNNs , we select the nodes that maximize the expected information gain in total under the relaxed queries , i.e. , reducing the aggregated uncertainty of a neighborhood region beyond a single node . To the best of our knowledge , this is the first work to propose information gain propagation with relaxed queries and to show that it is highly effective . In summary , the core contributions of this work are the following : First , we provide a new AL paradigm for GNNs with relaxed queries and soft labels . Second , we propose a novel node selection criterion that explicitly maximizes the propagation of information gain under the new AL paradigm on GNNs . Third , experimental results show our paradigm significantly outperforms the compared baselines by a large margin on five open graph datasets . 2 PRELIMINARY . 2.1 PROBLEM FORMULATION . Let $G = (V, E)$ with $|V| = N$ nodes and $|E| = M$ edges be the graph , and let $c$ be the number of classes . $X = \{x_1 , x_2 , \ldots , x_N\}$ is the node feature matrix , and the one-hot vector $y_i \in \mathbb{R}^c$ and $x_i \in \mathbb{R}^d$ are the ground-truth label and node feature for node $v_i \in V$ , respectively . We consider a general AL setting in this work . Suppose the full node set $V$ is partitioned into training set $V_{train}$ , validation set $V_{val}$ , and test set $V_{test}$ , and $V_{train}$ can further be partitioned into the labeled set $V_l$ and unlabeled set $V_u$ , respectively .
Given the labeling budget $B$ and the loss function $\ell$ , the goal of graph-based AL is to select a subset $V_l \subset V_{train}$ to label , so that the model $f$ trained with the supervision of $V_l$ achieves the lowest loss on $V_{test}$ : $\arg\min_{V_l : |V_l| = B} \mathbb{E}_{v_i \in V_{test}} [\ell(y_i , \hat{y}_i)]$ , ( 1 ) where $\hat{y}_i$ is the softmax output of node $v_i$ given by the model $f$ . Different from the previous setting , $B$ is the labeling cost ( i.e. , money ) rather than the size in our new scenario . Since experts can hardly annotate nodes outside their domain , we assume a conventional query costs $c-1$ times as much as our relaxed query for each selected node . Correspondingly , the objective in our new scenario is $\arg\min_{V_l : M(V_l) = B} \mathbb{E}_{v_i \in V_{test}} [\ell(y_i , \hat{y}_i)]$ , ( 2 ) where $M(V_l)$ is the labeling cost for annotating $V_l$ , and we focus on $f$ being GNNs due to their state-of-the-art performance in many semi-supervised node classification tasks . 2.2 GRAPH NEURAL NETWORKS . Unlike images , text , or tabular data , where training data are independently and identically distributed , samples in a graph are connected by edges , and each node in a GNN aggregates the embeddings or features of its neighbors along edges to enhance its own embedding . Let $f(A , X^{(k)} ; W^{(k)})$ be any GNN , where $A$ is the adjacency matrix , $X^{(k)}$ is the feature matrix of the $k$-th layer , and $W^{(k)}$ are the learned weights of the $k$-th layer . Taking the widely used Graph Convolution Network ( GCN ) ( Kipf & Welling , 2017a ) as an example , each GCN layer can be formulated as : $X^{(k+1)} = f(A , X^{(k)} , W^{(k)}) = \delta(\tilde{D}^{-1} \tilde{A} X^{(k)} W^{(k)})$ , ( 3 ) where $X$ ( $= X^{(0)}$ ) is the original node feature matrix , and $X^{(k)}$ and $X^{(k+1)}$ are the embeddings of layers $k$ and $k+1$ , respectively . Besides , $\tilde{D}$ is the diagonal node degree matrix for normalization and $\tilde{A} = A + I_N$ is the adjacency matrix with self-connections , where $I_N$ is the identity matrix .
Compared with the DNN layer , which is formulated as $X^{(k+1)} = \delta(X^{(k)} W^{(k)})$ , the feature matrix $X^{(k)}$ is first enhanced with the feature propagation operation $\tilde{D}^{-1} \tilde{A} X^{(k)}$ , which boosts semi-supervised node classification performance by getting more nodes involved in the model training . 2.3 ACTIVE LEARNING . Common AL . AL can improve labeling efficiency by selecting the most valuable samples to label . Considering informativeness , Uncertainty Sampling ( Yang et al. , 2015 ; Zhu et al. , 2008 ) selects the nodes with the most uncertain model predictions . Besides , Query-by-Committee ( Burbidge et al. , 2007 ; Melville & Mooney , 2004 ) trains a committee of models and selects samples according to the extent to which the models disagree . Furthermore , Density-based ( Zhu et al. , 2008 ; Tang et al. , 2002 ) , Clustering-based ( Du et al. , 2015 ; Nguyen & Smeulders , 2004 ) , and Diversity-based ( Wu et al. , 2020 ; Jiang & Gupta , 2021 ) methods can effectively select the most representative samples . All the methods above are proposed for binary or multi-class classification , and some methods ( Qi et al. , 2008 ; Yan & Huang , 2018 ) are proposed for the multi-label classification problem . Besides , considering class imbalance , some methods ( Aggarwal et al. , 2020 ; Attenberg & Provost , 2010 ) are also proposed to tackle this issue with AL . GNN-based AL . Despite the effectiveness of common AL methods , it is unsuitable to directly apply them to GNNs since the characteristic of influence propagation has not been considered . To tackle this issue , both AGE ( Cai et al. , 2017 ) and ANRMAB ( Gao et al. , 2018 ) introduce the density of node embeddings and PageRank centrality into the node selection criterion . Besides , ALG ( Zhang et al. , 2021a ) proposes to maximize the effectiveness of all influenced nodes of GNNs . Recently , Grain ( Zhang et al.
, 2021b ) introduces a new diversity influence maximization objective , which takes the diversity of influenced nodes into consideration . Both AGE ( Cai et al. , 2017 ) and ANRMAB ( Gao et al. , 2018 ) are designed for AL on GNNs , and adopt uncertainty , density , and node degree to select nodes . ANRMAB improves AGE by introducing a multi-armed bandit mechanism for adaptive decision-making . Considering the high training time of GNNs , ALG ( Zhang et al. , 2021a ) decouples the GNN model and proposes a new node selection metric that maximizes the effective receptive field . Grain ( Zhang et al. , 2021b ) further generalizes the receptive field to the number of activated nodes in social influence maximization and introduces diversity influence maximization for node selection . However , all these AL methods assume the oracle knows the information of all categories , thereby being able to correctly annotate all the selected nodes . This is impractical if the oracle encounters areas they are not familiar with , especially when the number of categories is large . IGP is the first work to actively label graph data with domain-specific experts as oracles . | The paper proposes a new method for active learning (AL) on graphs. Unlike other AL approaches, the proposed approach provides soft labels via *relaxed queries* to the *domain experts*. Main Contributions: 1). The paper proposes a new innovative approach for graph active learning with soft labels. The key idea is to ask a human whether the model prediction is correct or not (a binary classification task) as opposed to asking them for the correct "hard label" of the node. The incorrect model predictions are also not "thrown away" and are used as indirect supervision by performing a soft-max over the remaining classes. This leads to a new criterion for active learning, called "maximizing information gain propagation", as opposed to maximizing entropy as done by standard AL. 2).
Results on a variety of real-world datasets show the superior performance of the proposed method, achieving higher test accuracy under a given labeling budget. | SP:3907616cf8748efca1c63a16cfb9335e1380aec8 |
Style Equalization: Unsupervised Learning of Controllable Generative Sequence Models | 1 INTRODUCTION . The goal of controllable generative sequence models is to generate sequences containing target content in a target style . With the capability to select speaker voices , multi-speaker text-to-speech models have been successfully adopted in many voice assistants ( Gibiansky et al. , 2017 ; Ping et al. , 2018 ; Hayashi et al. , 2020 ) . Many applications , however , require style controllability beyond selecting speaker voices . For example , to perfectly reconstruct a speech example , we need to replicate not only the speaker ’ s voice characteristics but also all aspects of style of the sample , including but not limited to the prosody , intonation dynamics , background noise , echo , and microphone response that appear in the given sample . To analyze failures or biases of a downstream recognizer , we need a style representation that models the entire style distribution , beyond speaker identity . In these applications , style represents all information ( except the content ) needed to exactly reconstruct a sample , as illustrated in Fig . 1a . Notice that to represent the time-dependent information of a sample , style is itself a sequence that changes over time , instead of a fixed vector . Moreover , even when the same speaker utters the same content , the resulting audios can contain different styles . To capture the large variation , the style representation should be learned in an unsupervised manner from a reference sample , rather than using a few human-annotated attributes . Our goal is to learn a controllable generative sequence model that controls its style with a reference example ( e.g. , an existing audio ) and controls the content with a content sequence ( e.g. , text ) , as shown in Fig . 1b . Our training dataset $X$ is composed of $\{(x^i , c^i)\}_{i=1,\ldots,n}$ , where $x^i = [x^i_1 \ldots x^i_{T_i} \mid x^i_t \in \mathbb{R}^d]$ is the $i$-th sample and $c^i = [c^i_1 \ldots c^i_{N_i} \mid c^i_j \in \mathbb{R}^m]$ is the corresponding content sequence . Note that in general , $x^i$ and $c^i$ have different lengths , i.e. , $T_i \neq N_i$ , and we do not have the alignment between them . For example , in text-to-speech synthesis , $x^i$ is the mel-spectrogram of an audio sample , $c^i$ is the corresponding phonemes of the spoken words , and we do not have the mapping between the phonemes and the mel-spectrogram . We also do not have any style supervision , including speaker or attribute labels , nor any grouping of the data based on style . While the unsupervised setting requires only the essential information ( i.e. , samples and their content ) , it makes learning a controllable generative sequence model very challenging . The main challenge is the mismatch between the inputs used during training and inference . As shown in Fig . 1c , during inference we pair arbitrary content A and reference sample B as inputs . However , due to the lack of ground truth containing content A and in the style of B , during training we pair content A and sample A . In other words , we train the model under the parallel setting where the reference style input contains the input content , but we use the model in the non-parallel setting ( where the reference style contains a different content than the target content ) during inference . Due to the training-inference mismatch , a well-performing model during training may perform poorly during inference . If a generative model learns to utilize the content information in the style example , during inference the generative model will generate wrong content . This phenomenon is called content leakage ( Hu et al. , 2020 ) . In an extreme case , a model can learn to copy the reference sample to the output ; despite its perfect training loss , it is useless because it always generates wrong content in practice .
This paper proposes a simple but effective technique to deal with the training-inference mismatch when we learn controllable auto-regressive models in an unsupervised manner . As shown in Fig . 1d , we train the model under the non-parallel setting , i.e. , we pair arbitrary content A with an arbitrary sample B from the training dataset . Instead of directly using sample B as style ( in which case we have no ground truth ) , we jointly learn a style transformation function , which estimates the style difference between A and B and transforms the style of sample B to the style of A . The generative model then takes content A and the transformation output ( that contains the style of A ) to reconstruct sample A . The proposed method enables us to use sample A as the ground truth while learning in the non-parallel setting—the intended usage during inference . Additionally , our method provides a systematic way to interpolate between the style of two samples by scaling the estimated style difference between two reference samples . We call the method style equalization . Note that for style equalization to work , the style transformation and difference estimator need to be carefully designed , such that no content information from content A can be transferred through sample B . We defer the discussion to Sec . 4 . The proposed method is general and can be applied to different sequence signals . We apply the proposed method on two signal domains , speech and online-handwriting , and evaluate the performance carefully via quantitative evaluation ( by computing content error rates ) and conducting qualitative user studies . Experimental results show that our method outperforms various unsupervised controllable sequence generative models , even when they have additional style supervision like speaker labels . On LibriTTS , style equalization achieves close style replication ( 3.5 real oracle vs.
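To make the idea concrete, here is a deliberately simplified sketch of style equalization in a stand-in vector space. This is not the paper's architecture: real styles are sequences, and the difference estimator is learned jointly with the generator; the additive transformation and all names below are purely illustrative.

```python
import numpy as np

def equalized_style(style_b, delta, alpha=1.0):
    """Transform B's style toward A's style using an estimated style
    difference delta; alpha in [0, 1] interpolates between the two styles
    by scaling the difference (the paper's interpolation mechanism)."""
    return style_b + alpha * delta

# Stand-in style vectors for samples A and B
style_a = np.array([1.0, 0.0, 2.0])
style_b = np.array([0.0, 1.0, 0.0])
delta = style_a - style_b   # in the paper this estimator is learned, not computed

# With the full difference applied, B's style is "equalized" to A's,
# so sample A can serve as ground truth during non-parallel training.
assert np.allclose(equalized_style(style_b, delta, 1.0), style_a)
assert np.allclose(equalized_style(style_b, delta, 0.0), style_b)
```

The key design constraint stated above still applies to any real instantiation: the difference estimator must be built so that no content information from A can pass through it.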
3.5 proposed in style opinion score ) and content reproduction errors ( 6.6 % real oracle vs. 9.5 % proposed ) close to those of real samples . 2 RELATED WORK . Controllable generative sequence models are not new in the literature ; however , the majority of these methods require style supervision , whereas this paper develops an unsupervised-style method . Table 6 provides an overview of the related works . Unsupervised-style sequence models . Unsupervised methods extract style information directly from samples , i.e. , without any style labels or pretrained style embeddings . Existing unsupervised methods train models under the parallel setting , as shown in Fig . 1c . To prevent content leakage , most existing methods introduce a bottleneck on the capacity of the style encoder by representing style as a single ( time-invariant ) vector and limiting its dimension ( Wang et al. , 2018 ; Hsu et al. , 2018 ; Hu et al. , 2020 ; Ma et al. , 2018 ) . Wang et al . ( 2018 ) propose Global Style Token ( GST ) , which represents a style vector as a linear combination of a learned dictionary ( called style tokens ) shared across the dataset . The number of style tokens ( the implicit dimension of the style vector ) is carefully controlled to prevent content leakage . As we will see in Sec . 3 , the bottleneck not only reduces the amount of content information contained in the style vector but also sacrifices style information . Alternative loss formulations have also been proposed to limit the content information contained in the style representation . Hu et al . ( 2020 ) minimize the mutual information between the style vector and the content sequence , but this requires a pretrained content encoder and adversarial learning , which makes training their model difficult . Hsu et al . ( 2018 ) approximate the posterior distribution of the style vector using a mixture of Gaussian distributions with a small number of mixtures . Ma et al .
( 2018 ) utilize a discriminator conditioned on both the generated output and the content ( similar to a content recognizer ) . Akuzawa et al . ( 2018 ) anneal the Kullback-Leibler divergence to control the amount of information contained in style . Henter et al . ( 2018 ) utilize phoneme segmentation ( McAuliffe et al. , 2017 ) to avoid learning the alignment between content c and output x. Priming is a technique introduced to control the style of auto-regressive generative sequence models ( Graves , 2013 ; Aksan et al. , 2018 ) . Since the hidden state of a Recurrent Neural Network ( RNN ) contains all information about the current generation , including style , we can initialize the RNN by pre-rolling the reference sample through the RNN . Utilizing priming requires the content of the reference style . For example , Aksan et al . ( 2018 ) learn a character recognizer and use it during inference . Moreover , since the hidden state contains residual content from the reference example , it often generates unexpected artifacts at the beginning of the sequence , as will be seen in Sec . 5 . Supervised-style sequence methods . Many existing controllable generative models require style supervision , either directly by passing attribute labels as inputs or implicitly by grouping training data with their attribute labels . In the following , we briefly introduce various supervised controllable sequence models . While using style supervision avoids the training-inference mismatch , it limits the style control to a few sparsely-defined attribute classes . For instance , given a speech audio , we can recognize the spoken texts , the accent , or even the speaker , but provided solely with these attribute labels , it is impossible to exactly reconstruct the original speech audio . The sparsely-defined attributes are insufficient to capture the entire style information .
User identifications or their embeddings have been used to learn multi-speaker text-to-speech models ( Jia et al. , 2018 ; Gibiansky et al. , 2017 ; Kameoka et al. , 2020 ; Donahue et al. , 2020 ; Chen et al. , 2021 ; Dhariwal et al. , 2020 ; Valle et al. , 2020 ; Kim et al. , 2020 ; Hayashi et al. , 2020 ; Sun et al. , 2020 ) , voice conversion models ( Qian et al. , 2019 ) and handwriting models ( Kotani et al. , 2020 ; Bhunia et al. , 2021 ; Kang et al. , 2020 ; Davis et al. , 2020 ) . In addition to user identifications , predefined features like pitch , phoneme duration , loudness , and timbre have also been used by existing methods ( Ren et al. , 2020 ; Qian et al. , 2020 ; Dhariwal et al. , 2020 ; Valle et al. , 2020 ) . Instead of using speaker labels as input , Kameoka et al . ( 2018 ) ; Kaneko and Kameoka ( 2018 ) ; Kaneko et al . ( 2019a ; b ) group training samples by their speaker labels and apply adversarial learning to learn voice conversion models that change speaker voices while keeping the content of the input . Image methods . Controllable generative models have also been developed for images ( Härkönen et al. , 2020 ; Esser et al. , 2019 ; Singh et al. , 2019 ; Lample et al. , 2017 ; Karras et al. , 2020 ; Brock et al. , 2019 ; Collins et al. , 2020 ; Shen et al. , 2020 ; Esser et al. , 2020 ; Goetschalckx et al. , 2019 ; Pavllo et al. , 2020 ; Zhang et al. , 2018 ) , which control the object class , pose , lighting , etc. , of an image . Many image style transfer methods have also been developed ( Isola et al. , 2017 ; Zhu et al. , 2017 ; Gatys et al. , 2016 ) . However , there is a fundamental difference between image and sequence problems . In image generative models , we do not need to learn the content-output alignment . The content is usually defined globally as an image class or as pixel labels , e.g. , a segmentation map .
In contrast, our content is given as text, the output is the mel-spectrogram of a waveform, and the content and output have different lengths. To utilize the input content sequence, generative sequence models need to align the content and output sequences and translate text into the output signal modality. This complication exacerbates the training-inference mismatch for sequence methods, since copying the style input is easier than utilizing the input content. | In this paper, the authors argue that the typical training algorithms for controllable sequence generative models suffer from a 'training-inference mismatch'. To address this problem, they introduce a style transformation module called 'style equalization'. This module is designed to enable training using different content and style samples and thereby mitigate the training-inference mismatch problem. To demonstrate the generality of the proposed approach, style equalization is applied to two tasks, TTS and text-to-handwriting synthesis, on three datasets. On both tasks, the models show good results. Controllable sequential generative models have been studied for years. One of the most fundamental problems is how to effectively capture the content information and the style information, respectively. It is a critical yet very challenging research problem, because 'content' and 'style' are entangled in the training samples, and one must carefully design the training objective such that each of these factors can be learned in a controllable way. The idea of learning 'style equalization' is interesting, and it achieves promising results on tasks in different application scenarios, i.e., TTS and text-to-handwriting synthesis. Beyond that, the paper is easy to follow, and the demos shown on the project page qualitatively demonstrate the proposed approach. | SP:fd11668b6b0d122ede44d1dcec6f33e3f4e20e0c |
Style Equalization: Unsupervised Learning of Controllable Generative Sequence Models | 1 INTRODUCTION. The goal of controllable generative sequence models is to generate sequences containing target content in a target style. With the capability to select speaker voices, multi-speaker text-to-speech models have been successfully adopted in many voice assistants (Gibiansky et al., 2017; Ping et al., 2018; Hayashi et al., 2020). Many applications, however, require style controllability beyond selecting speaker voices. For example, to perfectly reconstruct a speech example, we need to replicate not only the speaker's voice characteristics but also all aspects of the style of the sample, including but not limited to the prosody, intonation dynamics, background noise, echo, and microphone response that appear in the given sample. To analyze failures or biases of a downstream recognizer, we need a style representation that models the entire style distribution, beyond speaker identity. In these applications, style represents all information (except the content) needed to exactly reconstruct a sample, as illustrated in Fig. 1a. Notice that to represent the time-dependent information of a sample, style is itself a sequence that changes over time, instead of a fixed vector. Moreover, even when the same speaker utters the same content, the resulting audios can contain different styles. To capture this large variation, the style representation should be learned in an unsupervised manner from a reference sample, rather than from a few human-annotated attributes. Our goal is to learn a controllable generative sequence model that controls its style with a reference example (e.g., an existing audio) and controls the content with a content sequence (e.g., text), as shown in Fig. 1b. Our training dataset X is composed of {(x^i, c^i)}_{i=1,...,n}, where x^i = [x^i_1 ... x^i_{T_i} | x^i_t ∈ R^d] is the i-th sample and c^i = [c^i_1 ... c^i_{N_i} | c^i_j ∈ R^m] is the corresponding content sequence. Note that in general, x^i and c^i have different lengths, i.e., T_i ≠ N_i, and we do not have the alignment between them. For example, in text-to-speech synthesis, x^i is the mel-spectrogram of an audio sample, c^i is the corresponding phonemes of the spoken words, and we do not have the mapping between the phonemes and the mel-spectrogram. We also do not have any style supervision, including speaker or attribute labels, nor any grouping of the data based on style. While the unsupervised setting requires only the essential information (i.e., samples and their content), it makes learning a controllable generative sequence model very challenging. The main challenge is the mismatch between the inputs used during training and inference. As shown in Fig. 1c, during inference we pair arbitrary content A and reference sample B as inputs. However, due to the lack of ground truth containing content A in the style of B, during training we pair content A and sample A. In other words, we train the model under the parallel setting, where the reference style input contains the input content, but we use the model in the non-parallel setting (where the reference style contains different content than the target content) during inference. Due to this training-inference mismatch, a well-performing model during training may perform poorly during inference. If a generative model learns to utilize the content information in the style example, during inference the generative model will generate wrong content. This phenomenon is called content leakage (Hu et al., 2020). In an extreme case, a model can learn to copy the reference sample to the output; despite its perfect training loss, it is useless because it always generates wrong content in practice.
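The parallel vs. non-parallel pairing described above, and why it invites content leakage, can be made concrete with a small sketch. This is an illustrative Python fragment with hypothetical names and toy data, not the authors' implementation:

```python
import random

def make_training_pair(dataset, parallel=True):
    """Illustrative pairing of (content, style reference, target).

    Parallel (standard unsupervised training): the style reference IS the
    target sample, so a degenerate model can simply copy it (content leakage).
    Non-parallel (the inference condition): reference and target differ,
    but then no ground-truth output exists to compute a loss against.
    """
    x_a, c_a = random.choice(dataset)      # (sample, content) pair A
    if parallel:
        return c_a, x_a, x_a               # content A, style ref A, target A
    x_b, _ = random.choice(dataset)        # arbitrary sample B as style ref
    return c_a, x_b, None                  # target unknown: the mismatch

# Hypothetical (sample, content) rows standing in for (audio, text) pairs.
dataset = [("audio_%d" % i, "text_%d" % i) for i in range(4)]
content, ref, target = make_training_pair(dataset, parallel=True)
```

In the parallel branch the reference equals the target, so a model that ignores the content input and echoes the reference attains zero training loss; in the non-parallel branch the target is simply unavailable, which is the gap style equalization is designed to close.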
This paper proposes a simple but effective technique to deal with the training-inference mismatch when learning controllable auto-regressive models in an unsupervised manner. As shown in Fig. 1d, we train the model under the non-parallel setting, i.e., we pair arbitrary content A with an arbitrary sample B from the training dataset. Instead of directly using sample B as the style (in which case we have no ground truth), we jointly learn a style transformation function, which estimates the style difference between A and B and transforms the style of sample B to the style of A. The generative model then takes content A and the transformation output (which contains the style of A) to reconstruct sample A. The proposed method enables us to use sample A as the ground truth while learning in the non-parallel setting, the intended usage during inference. Additionally, our method provides a systematic way to interpolate between the styles of two samples by scaling the estimated style difference between two reference samples. We call the method style equalization. Note that for style equalization to work, the style transformation and the difference estimator need to be carefully designed, such that no content information from content A can be transferred through sample B. We defer the discussion to Sec. 4. The proposed method is general and can be applied to different sequence signals. We apply it to two signal domains, speech and online handwriting, and evaluate the performance carefully via quantitative evaluation (by computing content error rates) and qualitative user studies. Experimental results show that our method outperforms various controllable sequence generative models, even those with additional style supervision such as speaker labels. On LibriTTS, style equalization achieves close style replication (3.5 real oracle vs.
3.5 proposed in style opinion score) and close content reproduction errors (6.6% real oracle vs. 9.5% proposed) to real samples. 2 RELATED WORK. Controllable generative sequence models are not new in the literature; however, the majority of these methods require style supervision, whereas this paper develops an unsupervised-style method. Table 6 provides an overview of the related works. Unsupervised-style sequence models. Unsupervised methods extract style information directly from samples, i.e., without any style labels or pretrained style embeddings. Existing unsupervised methods train models under the parallel setting, as shown in Fig. 1c. To prevent content leakage, most existing methods introduce a bottleneck on the capacity of the style encoder by representing style as a single (time-invariant) vector and limiting its dimension (Wang et al., 2018; Hsu et al., 2018; Hu et al., 2020; Ma et al., 2018). Wang et al. (2018) propose the Global Style Token (GST), which represents a style vector as a linear combination of a learned dictionary (called style tokens) shared across the dataset. The number of style tokens (the implicit dimension of the style vector) is carefully controlled to prevent content leakage. As we will see in Sec. 3, the bottleneck not only reduces the amount of content information contained in the style vector but also sacrifices style information. Alternative loss formulations have also been proposed to limit the content information contained in the style representation. Hu et al. (2020) minimize the mutual information between the style vector and the content sequence, but this requires a pretrained content encoder and adversarial learning, which makes training their model difficult. Hsu et al. (2018) approximate the posterior distribution of the style vector using a mixture of Gaussian distributions with a small number of mixtures. Ma et al.
(2018) utilize a discriminator conditioned on both the generated output and the content (similar to a content recognizer). Akuzawa et al. (2018) anneal the Kullback-Leibler divergence to control the amount of information contained in the style. Henter et al. (2018) utilize phoneme segmentation (McAuliffe et al., 2017) to avoid learning the alignment between the content c and the output x. Priming is a technique introduced to control the style of auto-regressive generative sequence models (Graves, 2013; Aksan et al., 2018). Since the hidden state of a Recurrent Neural Network (RNN) contains all information about the current generation, including style, we can initialize the RNN by pre-rolling the reference sample through it. Utilizing priming requires knowing the content of the reference style sample; for example, Aksan et al. (2018) learn a character recognizer and use it during inference. Moreover, since the hidden state contains residual content from the reference example, priming often generates unexpected artifacts at the beginning of the sequence, as will be seen in Sec. 5. Supervised-style sequence methods. Many existing controllable generative models require style supervision, either directly by passing attribute labels as inputs or implicitly by grouping training data by their attribute labels. In the following, we briefly introduce various supervised controllable sequence models. While using style supervision avoids the training-inference mismatch, it limits style control to a few sparsely-defined attribute classes. For instance, given a speech sample, we can recognize the spoken text, the accent, or even the speaker, but provided solely with these attribute labels, it is impossible to exactly reconstruct the original speech audio. The sparsely-defined attributes are insufficient to capture the entire style information.
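Priming, as discussed above, amounts to rolling the reference sample through the RNN and using the resulting hidden state to initialize generation. A minimal numpy sketch under assumed shapes; the vanilla-RNN cell and random weights are illustrative stand-ins, not the cited models:

```python
import numpy as np

def rnn_step(h, x, W_h, W_x):
    """One step of a vanilla RNN cell: h' = tanh(W_h h + W_x x)."""
    return np.tanh(W_h @ h + W_x @ x)

def prime_hidden_state(reference, W_h, W_x, h_dim):
    """Pre-roll the reference sequence through the RNN; the final hidden
    state carries the reference's style (and, problematically, residual
    content) into generation as the generator's initial state."""
    h = np.zeros(h_dim)
    for frame in reference:
        h = rnn_step(h, frame, W_h, W_x)
    return h

rng = np.random.default_rng(0)
d, h_dim = 4, 8                                # illustrative feature/state sizes
W_h = 0.1 * rng.standard_normal((h_dim, h_dim))
W_x = 0.1 * rng.standard_normal((h_dim, d))
reference = rng.standard_normal((20, d))       # a 20-frame reference sample
h0 = prime_hidden_state(reference, W_h, W_x, h_dim)
```

Because `h0` mixes style with leftover content of the reference, the first generated frames tend to continue the reference, which is exactly the beginning-of-sequence artifact the text describes.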
User identifications or their embeddings have been used to learn multi-speaker text-to-speech models (Jia et al., 2018; Gibiansky et al., 2017; Kameoka et al., 2020; Donahue et al., 2020; Chen et al., 2021; Dhariwal et al., 2020; Valle et al., 2020; Kim et al., 2020; Hayashi et al., 2020; Sun et al., 2020), voice conversion models (Qian et al., 2019), and handwriting models (Kotani et al., 2020; Bhunia et al., 2021; Kang et al., 2020; Davis et al., 2020). In addition to user identifications, predefined features like pitch, phoneme duration, loudness, and timbre have also been used by existing methods (Ren et al., 2020; Qian et al., 2020; Dhariwal et al., 2020; Valle et al., 2020). Instead of using speaker labels as input, Kameoka et al. (2018); Kaneko and Kameoka (2018); Kaneko et al. (2019a;b) group training samples by their speaker labels and apply adversarial learning to learn voice conversion models that change speaker voices while keeping the content of the input. Image methods. Controllable generative models have also been developed for images (Härkönen et al., 2020; Esser et al., 2019; Singh et al., 2019; Lample et al., 2017; Karras et al., 2020; Brock et al., 2019; Collins et al., 2020; Shen et al., 2020; Esser et al., 2020; Goetschalckx et al., 2019; Pavllo et al., 2020; Zhang et al., 2018), which control the object class, pose, lighting, etc., of an image. Many image style transfer methods have also been developed (Isola et al., 2017; Zhu et al., 2017; Gatys et al., 2016). However, there is a fundamental difference between image and sequence problems. In image generative models, we do not need to learn the content-output alignment: the content is usually defined globally as an image class or as pixel labels, e.g., a segmentation map.
In contrast, our content is given as text, the output is the mel-spectrogram of a waveform, and the content and output have different lengths. To utilize the input content sequence, generative sequence models need to align the content and output sequences and translate text into the output signal modality. This complication exacerbates the training-inference mismatch for sequence methods, since copying the style input is easier than utilizing the input content. | - To enhance the quality of style-controlled generation, especially in an unsupervised manner and a non-parallel setting, this paper proposes a "style equalization" mechanism to prevent the content leakage problem. In the style equalization module, the style of a sample is transformed to match the style of the ground truth. The authors assume that content information is time-dependent whereas style can be time-independent, so they employ time-average pooling to learn the global style. The style difference is then added to the input style features. At each time step, a content-attended feature queries and attends to the appropriate style-equalized features via the multi-head attention module. The entire model is optimized to maximize the ELBO. The proposed method is demonstrated on speech synthesis and handwriting synthesis tasks. | SP:fd11668b6b0d122ede44d1dcec6f33e3f4e20e0c |
ANCER: Anisotropic Certification via Sample-wise Volume Maximization | 1 INTRODUCTION. The well-studied fact that Deep Neural Networks (DNNs) are vulnerable to additive imperceptible noise perturbations has led to a growing interest in developing robust classifiers (Goodfellow et al., 2015; Szegedy et al., 2014). A recent promising approach to achieve state-of-the-art provable robustness (i.e., a theoretical bound on the output around every input) at the scale of ImageNet (Deng et al., 2009) is randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). Given an input x and a network f, randomized smoothing constructs g(x) = E_{ε∼D}[f(x + ε)] such that g(x) = g(x + δ) ∀δ ∈ R, where the certification region R is characterized by x, f, and the smoothing distribution D. For instance, Cohen et al. (2019) showed that if D = N(0, σ²I), then R is an ℓ2-ball whose radius is determined by x, f, and σ. Since then, there has been significant progress towards the design of D leading to the largest R for all inputs x. The interplay between R, characterized by ℓ1-, ℓ2- and ℓ∞-balls, and a notion of an optimal distribution D has been previously studied in Yang et al. (2020). Despite this progress, current randomized smoothing approaches provide certification regions that are isotropic in nature, limiting them to certifying smaller, worst-case regions. We provide an intuitive example of this behavior in Figure 1. The isotropic nature of R in prior art is due to the common assumption that the smoothing distribution D is identically distributed across input dimensions (Yang et al., 2020; Kumar et al., 2020; Levine & Feizi, 2021). Moreover, comparisons between various randomized smoothing approaches were limited to methods that produce the same ℓp certificate, with no clear metrics for comparing with other certificates.
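The smoothed classifier g(x) = E_{ε∼N(0,σ²I)}[f(x + ε)] and the Cohen et al. (2019) ℓ2 radius, σ/2 (Φ⁻¹(p_A) − Φ⁻¹(p_B)), can be sketched with a Monte Carlo estimate. This is an illustrative fragment on a toy base classifier, not a certified implementation (a real certificate needs confidence bounds on the estimated probabilities):

```python
import numpy as np
from statistics import NormalDist

def smooth_classify(f, x, sigma=0.5, n_samples=1000, seed=0):
    """Monte Carlo estimate of g(x) = E_{eps ~ N(0, sigma^2 I)}[f(x + eps)]."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = f(x[None, :] + eps)        # (n_samples, K) one-hot predictions
    return votes.mean(axis=0)          # estimated class probabilities of g

def l2_certified_radius(p, sigma):
    """Cohen et al. (2019) radius: sigma/2 * (Phi^-1(p_A) - Phi^-1(p_B))."""
    top2 = np.clip(np.sort(p)[::-1][:2], 1e-6, 1.0 - 1e-6)
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(float(top2[0])) - phi_inv(float(top2[1])))

# Toy base classifier: class 0 if x[0] < 1, else class 1.
def f(batch):
    labels = (batch[:, 0] >= 1.0).astype(int)
    return np.eye(2)[labels]

x = np.array([0.0, 0.0])
p = smooth_classify(f, x, sigma=0.5)        # class 0 dominates under smoothing
radius = l2_certified_radius(p, sigma=0.5)  # certified l2 radius around x
```

Here the decision boundary sits one unit away along the first axis, so the estimated radius is positive but bounded by that worst-case direction, which is exactly the isotropy limitation the paper targets.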
In this paper, we address both concerns and present new state-of-the-art certified accuracy results on both the CIFAR-10 and ImageNet datasets. Our contributions are threefold. (i) We provide a general and simpler analysis compared to prior art (Cohen et al., 2019; Yang et al., 2020) that paves the way for the certification of anisotropic regions characterized by any norm, holding prior art as special cases. We then specialize our result to regions that, for a positive definite A, are ellipsoids, i.e., ‖Aδ‖_2 ≤ c, and generalized cross-polytopes, i.e., ‖Aδ‖_1 ≤ c, generalizing both ℓ2 (Cohen et al., 2019) and ℓ1 (Lecuyer et al., 2019; Yang et al., 2020) certification (Section 4). (ii) We introduce a new evaluation framework to compare methods that certify general (isotropic or anisotropic) regions. We compare two general certificates by defining that a method certifying R1 is superior to another certifying R2 if R1 is a strict superset of R2. Further, we define a standalone quantitative metric as the volume of the certified region, and specialize it for the cases of ellipsoids and generalized cross-polytopes (Section 5). (iii) We propose ANCER, an anisotropic certification method that performs sample-wise (i.e., per sample in the test set) region volume maximization (Section 6), generalizing the data-dependent, memory-based solution from Alfarra et al. (2020). Through experiments on CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009), we show that restricting ANCER's certification region to ℓ1- and ℓ2-balls outperforms state-of-the-art ℓ1 and ℓ2 results from previous works (Yang et al., 2020; Alfarra et al., 2020). Further, we show that the volumes of the certified regions are significantly larger than those of all existing methods, thus setting a new state-of-the-art in certified accuracy.
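The volume metric of contribution (ii) has a closed form for ellipsoids: {δ : ‖Aδ‖_2 ≤ r} with A = diag(1/θ_1, ..., 1/θ_n) has volume r^n · Π_i θ_i · vol(unit n-ball). A hedged numpy sketch; the diagonal parameterization and the numbers are illustrative, not the paper's experiments:

```python
import numpy as np
from math import lgamma, log, pi

def log_ellipsoid_volume(theta, r):
    """log-volume of {delta : ||A delta||_2 <= r} for A = diag(1/theta):
    vol = r^n * prod(theta_i) * vol(unit n-ball),
    with vol(unit n-ball) = pi^(n/2) / Gamma(n/2 + 1).
    Working in log space avoids overflow/underflow in high dimension."""
    n = len(theta)
    log_unit_ball = (n / 2.0) * log(pi) - lgamma(n / 2.0 + 1.0)
    return n * log(r) + float(np.sum(np.log(theta))) + log_unit_ball

n = 3072                                 # e.g. a CIFAR-10 input dimension
iso = log_ellipsoid_volume(np.full(n, 1.0), r=0.5)   # isotropic l2-ball
theta = np.full(n, 1.0)
theta[:100] = 4.0                        # 100 directions certified 4x further
aniso = log_ellipsoid_volume(theta, r=0.5)
# aniso - iso = 100 * log(4): the ellipsoid certifies strictly more volume
```

Because volumes in thousands of dimensions underflow any float, comparing log-volumes (or, equivalently, the geometric mean of the semi-axes) is the practical form of the metric.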
We highlight that while we effectively achieve state-of-the-art performance, it comes at a high cost given the data-dependency requirements. A discussion of the limitations of the solution is presented in Section 6. Notation. We consider a base classifier f : R^n → P(K), where P(K) is the probability simplex over K classes, i.e., f_i ≥ 0 for i ∈ {1, ..., K} and 1^⊤ f = 1. Further, we use (x, y) to denote a sample input x and its corresponding true label y drawn from a test set D_t, and f_y to denote the output of f at the correct class. We use ℓ_p to denote the typically defined ‖·‖_p norm (p ≥ 1), and ℓ^A_p or ‖·‖_{A,p} for p ∈ {1, 2} to denote a composite norm defined with respect to a positive definite matrix A as ‖A^{-1/p} v‖_p. 2 RELATED WORK. Verified Defenses. Since the discovery that DNNs are vulnerable to input perturbations (Goodfellow et al., 2015; Szegedy et al., 2014), a range of methods have been proposed to build classifiers that are verifiably robust (Huang et al., 2017; Gowal et al., 2019; Bunel et al., 2018; Salman et al., 2019b). Despite this progress, these methods do not yet scale to the networks the community is interested in certifying (Tjeng et al., 2019; Weng et al., 2018). Randomized Smoothing. The first works on randomized smoothing used Laplacian (Lecuyer et al., 2019; Li et al., 2019) and Gaussian (Cohen et al., 2019) distributions to obtain ℓ1- and ℓ2-ball certificates, respectively. Several subsequent works improved the performance of smooth classifiers by training the base classifier using adversarial augmentation (Salman et al., 2019a), regularization (Zhai et al., 2019), or general adjustments to training routines (Jeong & Shin, 2020). Recent work derived ℓp-norm certificates for other isotropic smoothing distributions (Yang et al., 2020; Levine & Feizi, 2020; Zhang et al., 2019). Concurrently, Dvijotham et al.
(2020) developed a framework to handle arbitrary smoothing measures in any ℓp-norm; however, the certification process requires significant hyperparameter tuning. Similarly, Mohapatra et al. (2020) introduce larger certificates that require higher-order information, yet do not provide a closed-form solution. This was followed by a complementary data-dependent smoothing approach, where the parameters of the smoothing distribution were optimized per test set sample to maximize the certified radius at each individual input (Alfarra et al., 2020). All prior works considered smoothing with isotropic distributions and hence certified isotropic ℓp-ball regions. In this paper, we extend randomized smoothing to certify anisotropic regions, by pairing it with a generalization of the data-dependent framework (Alfarra et al., 2020) to maximize the certified region at each input point. 3 MOTIVATING ANISOTROPIC CERTIFICATES. Certification approaches aim to find the safe region R, where argmax_i f^i(x) = argmax_i f^i(x + δ) ∀δ ∈ R. Recent randomized smoothing techniques perform this certification by explicitly optimizing the isotropic ℓp certified region around each input (Alfarra et al., 2020), obtaining state-of-the-art performance as a result. Despite this ℓp optimality, we note that any ℓp-norm certificate is worst-case from the perspective of that norm, as it avoids adversary regions by limiting its certificate to the ℓp-closest adversary. This means that it can only enjoy a radius that is at most equal to the distance to the closest decision boundary. However, the decision boundaries of general classifiers are complex, non-linear, and non-radially distributed with respect to a generic input sample (Karimi et al., 2019). This is evidenced by the fact that, within a reasonably small ℓp-ball around an input, there are often only a small set of adversary directions (Tramèr et al., 2017; 2018) (e.g.,
see the decision boundaries in Figure 1). As such, while ℓp-norm certificates are useful to reason about worst-case performance and are simple to obtain given previous works (Cohen et al., 2019; Yang et al., 2020; Lee et al., 2019), they are otherwise uninformative in terms of the shape of decision boundaries, i.e., which regions around the input are safe. To visualize these concepts, we illustrate the decision boundaries of a base classifier f trained on a toy 2-dimensional, radially separable (with respect to the origin) binary classification dataset, and consider two different input test samples (see Figure 1). We compare the optimal isotropic and anisotropic certified regions of different shapes at these points. In Figures 1a and 1b, we compare an isotropic cross-polytope (of the form ‖δ‖_1 ≤ r) with an anisotropic generalized cross-polytope (of the form ‖Aδ‖_1 ≤ r), while in Figures 1c and 1d we compare an isotropic ℓ2-ball (of the form ‖δ‖_2 ≤ r) with an anisotropic ellipsoid (of the form ‖Aδ‖_2 ≤ r). Notice that in Figures 1a and 1c, due to the curvature of the classification boundary (shown in white), the optimal certification region is isotropic in nature, which is evidenced by the similarities of the optimal isotropic and anisotropic certificates. On the other hand, in Figures 1b and 1d, the location of the decision boundary allows for the anisotropic certified regions to be considerably larger than their isotropic counterparts, as they are not as constrained by the closest decision boundary, i.e., the worst-case performance. We note that these differences are further highlighted in higher dimensions, and we study them for a single CIFAR-10 test set sample in Appendix A.1. Further, we also showcase how anisotropic certification allows for further insights into constant prediction (safe) directions in Appendix A.2. 4 ANISOTROPIC CERTIFICATION.
One of the main obstacles in enabling anisotropic certification is the complexity of the analysis required. To alleviate this, we follow a Lipschitz argument first observed by Salman et al. (2019a) and Jordan & Dimakis (2020) and propose a simple and general certification analysis. We start with the following two observations. All proofs are in Appendix B. Proposition 1. Consider a differentiable function g : R^n → R. If sup_x ‖∇g(x)‖_* ≤ L, where ‖·‖_* has a dual norm ‖z‖ = max_x z^⊤ x s.t. ‖x‖_* ≤ 1, then g is L-Lipschitz under the norm ‖·‖_*, that is, |g(x) − g(y)| ≤ L ‖x − y‖. Given the previous proposition, we formalize ‖·‖ certification as follows: Theorem 1. Let g : R^n → R^K with g_i being L-Lipschitz continuous under the norm ‖·‖_* ∀i ∈ {1, ..., K}, and let c_A = argmax_i g_i(x). Then, we have argmax_i g_i(x + δ) = c_A for all δ satisfying: ‖δ‖ ≤ (1/(2L)) (g_{c_A}(x) − max_{c ≠ c_A} g_c(x)). Theorem 1 provides a ‖·‖-norm robustness certificate for any L-Lipschitz classifier g under ‖·‖_*. The certificate is only informative when one can attain a tight, non-trivial estimate of L, ideally sup_x ‖∇g(x)‖_*, which is generally difficult when g is an arbitrary neural network. Framework Recipe. In light of Theorem 1, randomized smoothing can be viewed differently as an instance of Theorem 1 with the favorable property that the constructed smooth classifier g enjoys an analytical form for L = sup_x ‖∇g(x)‖_* by design. As such, to obtain an informative ‖·‖ certificate, one must, for an arbitrary choice of smoothing distribution, compute the analytic Lipschitz constant L under ‖·‖_* for the smooth g. While there can exist a notion of an "optimal" smoothing distribution for a given choice of ‖·‖ certificate, as in part addressed earlier for the isotropic ℓ1, ℓ2 and ℓ∞ certificates (Yang et al., 2020), this is not the focus of this paper.
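Theorem 1's certificate reduces to a one-line computation once the Lipschitz constant L and the class scores of the smooth classifier are in hand. A minimal sketch; the scores and L below are made-up numbers for illustration, not values from the paper:

```python
import numpy as np

def lipschitz_certified_radius(scores, L):
    """Theorem 1 certificate for an L-Lipschitz classifier:
    r = (g_{c_A}(x) - max_{c != c_A} g_c(x)) / (2 L)."""
    order = np.argsort(scores)[::-1]           # classes by decreasing score
    gap = scores[order[0]] - scores[order[1]]  # top-1 vs. runner-up margin
    return gap / (2.0 * L)

scores = np.array([0.8, 0.15, 0.05])  # hypothetical smooth-classifier outputs
r = lipschitz_certified_radius(scores, L=2.0)  # (0.8 - 0.15) / (2 * 2.0)
```

The whole difficulty, as the text notes, is obtaining a tight L for the smoothed classifier; the arithmetic of the certificate itself is trivial.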
The choice of the smoothing distribution in later sections is inspired by previous work for the purpose of granting anisotropic certificates. This recipe complements randomized smoothing works based on Neyman-Pearson's lemma (Cohen et al., 2019) or the Level-Set and Differential Method (Yang et al., 2020). We will deploy this framework recipe to show two specializations for anisotropic certification, namely ellipsoids (Section 4.1) and generalized cross-polytopes (Section 4.2). | In this paper, the authors discuss the extension of $\ell_{p}$-randomized smoothing to anisotropic counterparts. In particular, they consider the extension of $\ell_{2}$-certificates from (hyper)spheres to (hyper)ellipsoids by sampling anisotropic rather than isotropic Gaussian noise, as well as the extension of $\ell_{1}$-certificates to cross-polytopes by sampling scaled uniform noise rather than unscaled noise. Further, the authors discuss how these extended certificates can be compared to their base counterparts to establish superiority (inclusion). The introduced certification algorithm, ANCER, utilizes the idea of data-dependent randomized smoothing to find the anisotropic shape with maximal certification volume. In experimental evaluation on CIFAR-10 and ImageNet, the authors show that the obtained certificates permit higher isotropic certification radii than other methods in the per-sample optimization setting. | SP:b6fef1ef35ccf967c2df9d1704ba38df1af3e879 |
ANCER: Anisotropic Certification via Sample-wise Volume Maximization | 1 INTRODUCTION. The well-studied fact that Deep Neural Networks (DNNs) are vulnerable to additive imperceptible noise perturbations has led to a growing interest in developing robust classifiers (Goodfellow et al., 2015; Szegedy et al., 2014). A recent promising approach to achieve state-of-the-art provable robustness (i.e., a theoretical bound on the output around every input) at the scale of ImageNet (Deng et al., 2009) is randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). Given an input x and a network f, randomized smoothing constructs g(x) = E_{ε∼D}[f(x + ε)] such that g(x) = g(x + δ) ∀δ ∈ R, where the certification region R is characterized by x, f, and the smoothing distribution D. For instance, Cohen et al. (2019) showed that if D = N(0, σ²I), then R is an ℓ2-ball whose radius is determined by x, f, and σ. Since then, there has been significant progress towards the design of D leading to the largest R for all inputs x. The interplay between R, characterized by ℓ1-, ℓ2- and ℓ∞-balls, and a notion of an optimal distribution D has been previously studied in Yang et al. (2020). Despite this progress, current randomized smoothing approaches provide certification regions that are isotropic in nature, limiting them to certifying smaller, worst-case regions. We provide an intuitive example of this behavior in Figure 1. The isotropic nature of R in prior art is due to the common assumption that the smoothing distribution D is identically distributed across input dimensions (Yang et al., 2020; Kumar et al., 2020; Levine & Feizi, 2021). Moreover, comparisons between various randomized smoothing approaches were limited to methods that produce the same ℓp certificate, with no clear metrics for comparing with other certificates.
In this paper, we address both concerns and present new state-of-the-art certified accuracy results on both the CIFAR-10 and ImageNet datasets. Our contributions are threefold. (i) We provide a general and simpler analysis compared to prior art (Cohen et al., 2019; Yang et al., 2020) that paves the way for the certification of anisotropic regions characterized by any norm, holding prior art as special cases. We then specialize our result to regions that, for a positive definite A, are ellipsoids, i.e., ‖Aδ‖_2 ≤ c, and generalized cross-polytopes, i.e., ‖Aδ‖_1 ≤ c, generalizing both ℓ2 (Cohen et al., 2019) and ℓ1 (Lecuyer et al., 2019; Yang et al., 2020) certification (Section 4). (ii) We introduce a new evaluation framework to compare methods that certify general (isotropic or anisotropic) regions. We compare two general certificates by defining that a method certifying R1 is superior to another certifying R2 if R1 is a strict superset of R2. Further, we define a standalone quantitative metric as the volume of the certified region, and specialize it for the cases of ellipsoids and generalized cross-polytopes (Section 5). (iii) We propose ANCER, an anisotropic certification method that performs sample-wise (i.e., per sample in the test set) region volume maximization (Section 6), generalizing the data-dependent, memory-based solution from Alfarra et al. (2020). Through experiments on CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009), we show that restricting ANCER's certification region to ℓ1- and ℓ2-balls outperforms state-of-the-art ℓ1 and ℓ2 results from previous works (Yang et al., 2020; Alfarra et al., 2020). Further, we show that the volumes of the certified regions are significantly larger than those of all existing methods, thus setting a new state-of-the-art in certified accuracy.
We highlight that while we effectively achieve state-of-the-art performance, it comes at a high cost given the data-dependency requirements. A discussion of the limitations of the solution is presented in Section 6. Notation. We consider a base classifier f : R^n → P(K), where P(K) is the probability simplex over K classes, i.e., f_i ≥ 0 for i ∈ {1, ..., K} and 1^⊤ f = 1. Further, we use (x, y) to denote a sample input x and its corresponding true label y drawn from a test set D_t, and f_y to denote the output of f at the correct class. We use ℓ_p to denote the typically defined ‖·‖_p norm (p ≥ 1), and ℓ^A_p or ‖·‖_{A,p} for p ∈ {1, 2} to denote a composite norm defined with respect to a positive definite matrix A as ‖A^{-1/p} v‖_p. 2 RELATED WORK. Verified Defenses. Since the discovery that DNNs are vulnerable to input perturbations (Goodfellow et al., 2015; Szegedy et al., 2014), a range of methods have been proposed to build classifiers that are verifiably robust (Huang et al., 2017; Gowal et al., 2019; Bunel et al., 2018; Salman et al., 2019b). Despite this progress, these methods do not yet scale to the networks the community is interested in certifying (Tjeng et al., 2019; Weng et al., 2018). Randomized Smoothing. The first works on randomized smoothing used Laplacian (Lecuyer et al., 2019; Li et al., 2019) and Gaussian (Cohen et al., 2019) distributions to obtain ℓ1- and ℓ2-ball certificates, respectively. Several subsequent works improved the performance of smooth classifiers by training the base classifier using adversarial augmentation (Salman et al., 2019a), regularization (Zhai et al., 2019), or general adjustments to training routines (Jeong & Shin, 2020). Recent work derived ℓp-norm certificates for other isotropic smoothing distributions (Yang et al., 2020; Levine & Feizi, 2020; Zhang et al., 2019). Concurrently, Dvijotham et al.
( 2020 ) developed a framework to handle arbitrary smoothing measures in any ℓp-norm ; however , the certification process requires significant hyperparameter tuning . Similarly , Mohapatra et al . ( 2020 ) introduce larger certificates that require higher-order information , yet do not provide a closed-form solution . This was followed by a complementary data-dependent smoothing approach , where the parameters of the smoothing distribution were optimized per test set sample to maximize the certified radius at an individual input ( Alfarra et al. , 2020 ) . All prior works considered smoothing with isotropic distributions and hence certified isotropic ℓp-ball regions . In this paper , we extend randomized smoothing to certify anisotropic regions , by pairing it with a generalization of the data-dependent framework ( Alfarra et al. , 2020 ) to maximize the certified region at each input point . 3 MOTIVATING ANISOTROPIC CERTIFICATES . Certification approaches aim to find the safe region R , where arg maxi fi ( x ) = arg maxi fi ( x + δ ) ∀δ ∈ R. Recent randomized smoothing techniques perform this certification by explicitly optimizing the isotropic ℓp certified region around each input ( Alfarra et al. , 2020 ) , obtaining state-of-the-art performance as a result . Despite this ℓp optimality , we note that any ℓp-norm certificate is worst-case from the perspective of that norm , as it avoids adversarial regions by limiting its certificate to the ℓp-closest adversary . This means that it can only enjoy a radius that is at most equal to the distance to the closest decision boundary . However , decision boundaries of general classifiers are complex , non-linear , and non-radially distributed with respect to a generic input sample ( Karimi et al. , 2019 ) . This is evidenced by the fact that , within a reasonably small ℓp-ball around an input , there is often only a small set of adversarial directions ( Tramèr et al. , 2017 ; 2018 ) ( e.g .
see the decision boundaries in Figure 1 ) . As such , while ℓp-norm certificates are useful to reason about worst-case performance and are simple to obtain given previous works ( Cohen et al. , 2019 ; Yang et al. , 2020 ; Lee et al. , 2019 ) , they are otherwise uninformative in terms of the shape of decision boundaries , i.e . which regions around the input are safe . To visualize these concepts , we illustrate the decision boundaries of a base classifier f trained on a toy 2-dimensional , radially separable ( with respect to the origin ) binary classification dataset , and consider two different input test samples ( see Figure 1 ) . We compare the optimal isotropic and anisotropic certified regions of different shapes at these points . In Figures 1a and 1b , we compare an isotropic cross-polytope ( of the form ‖δ‖1 ≤ r ) with an anisotropic generalized cross-polytope ( of the form ‖Aδ‖1 ≤ r ) , while in Figures 1c and 1d we compare an isotropic ℓ2 ball ( of the form ‖δ‖2 ≤ r ) with an anisotropic ellipsoid ( of the form ‖Aδ‖2 ≤ r ) . Notice that in Figures 1a and 1c , due to the curvature of the classification boundary ( shown in white ) , the optimal certification region is isotropic in nature , which is evidenced by the similarity of the optimal isotropic and anisotropic certificates . On the other hand , in Figures 1b and 1d , the location of the decision boundary allows the anisotropic certified regions to be considerably larger than their isotropic counterparts , as they are not as constrained by the closest decision boundary , i.e . the worst-case performance . We note that these differences are further highlighted in higher dimensions , and we study them for a single CIFAR-10 test set sample in Appendix A.1 . Further , we also showcase how anisotropic certification allows for further insights into constant prediction ( safe ) directions in Appendix A.2 . 4 ANISOTROPIC CERTIFICATION .
One of the main obstacles in enabling anisotropic certification is the complexity of the analysis required . To alleviate this , we follow a Lipschitz argument first observed by Salman et al . ( 2019a ) and Jordan & Dimakis ( 2020 ) and propose a simple and general certification analysis . We start with the following two observations . All proofs are in Appendix B . Proposition 1 . Consider a differentiable function g : Rn → R . If supx ‖∇g ( x ) ‖∗ ≤ L , where ‖ · ‖∗ has a dual norm ‖z‖ = maxx zᵀx s.t . ‖x‖∗ ≤ 1 , then g is L-Lipschitz under norm ‖ · ‖∗ , that is , |g ( x ) − g ( y ) | ≤ L ‖x − y‖ . Given the previous proposition , we formalize ‖ · ‖ certification as follows : Theorem 1 . Let g : Rn → RK , gi be L-Lipschitz continuous under norm ‖ · ‖∗ ∀i ∈ { 1 , . . . , K } , and cA = arg maxi gi ( x ) . Then , we have arg maxi gi ( x + δ ) = cA for all δ satisfying : ‖δ‖ ≤ ( 1 / ( 2L ) ) ( gcA ( x ) − maxc≠cA gc ( x ) ) . Theorem 1 provides a ‖ · ‖ norm robustness certificate for any classifier g that is L-Lipschitz under ‖ · ‖∗ . The certificate is only informative when one can attain a tight non-trivial estimate of L , ideally supx ‖∇g ( x ) ‖∗ , which is generally difficult when g is an arbitrary neural network . Framework Recipe . In light of Theorem 1 , randomized smoothing can be viewed as an instance of Theorem 1 with the favorable property that the constructed smooth classifier g enjoys an analytical form for L = supx ‖∇g ( x ) ‖∗ by design . As such , to obtain an informative ‖ · ‖ certificate , one must , for an arbitrary choice of a smoothing distribution , compute the analytic Lipschitz constant L under ‖ · ‖∗ for the smooth g . While there can exist a notion of “ optimal ” smoothing distribution for a given choice of ‖ · ‖ certificate , as in part addressed earlier for the isotropic ℓ1 , ℓ2 and ℓ∞ certificates ( Yang et al. , 2020 ) , this is not the focus of this paper .
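Once L is available, the certificate of Theorem 1 is cheap to evaluate. Below is a hedged numpy sketch; the class scores and the value of L are made-up numbers for illustration, not outputs of any trained smooth classifier.

```python
import numpy as np

def certified_radius(scores, L):
    """Radius from Theorem 1: (g_cA(x) - max_{c != cA} g_c(x)) / (2L)."""
    top_two = np.sort(scores)[::-1][:2]      # top class score and runner-up
    return (top_two[0] - top_two[1]) / (2.0 * L)

g_x = np.array([0.70, 0.20, 0.10])           # hypothetical smooth scores g(x)
r = certified_radius(g_x, L=1.0)             # gap 0.5 over 2L = 2, about 0.25
print(r)
```

Any perturbation δ with ‖δ‖ ≤ r then provably leaves the arg max unchanged; a larger prediction gap or a smaller Lipschitz constant directly enlarges the certified radius.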
The choice of the smoothing distribution in later sections is inspired by previous work for the purpose of granting anisotropic certificates . This recipe complements randomized smoothing works based on Neyman-Pearson ’ s lemma ( Cohen et al. , 2019 ) or the Level-Set and Differential Method ( Yang et al. , 2020 ) . We will deploy this framework recipe to show two specializations for anisotropic certification , namely ellipsoids ( Section 4.1 ) and generalized cross-polytopes ( Section 4.2 ) . | The paper proposes an anisotropic version of randomized smoothing. Evaluation metrics based on the volume of the certified region are proposed, allowing comparisons with the certified regions provided by isotropic randomized smoothing. Experimental results show the usefulness of introducing anisotropic randomized smoothing as it certifies larger regions. | SP:b6fef1ef35ccf967c2df9d1704ba38df1af3e879
Temporal Alignment Prediction for Supervised Representation Learning and Few-Shot Sequence Classification | 1 INTRODUCTION . Distance between sequences plays a crucial role in sequence classification ( Sakoe & Chiba , 1978 ) , retrieval ( Su et al. , 2019 ) , clustering ( Garcı́a-Garcı́a et al. , 2008 ) , etc . Measuring distance between sequences is difficult , since different sequences may have different sampling rates , execution speeds , local distortions , initial states , and elastic warpage . To tackle such temporal variances , existing sequence distances either encode each sequence into a feature vector invariant to temporal variances ( Abid & Zou , 2018 ; Lohit et al. , 2019 ) or employ alignment for temporal correspondence calibration ( Sakoe & Chiba , 1978 ; Su & Hua , 2019 ) . Typical feature-based methods use recurrent neural networks ( RNNs ) ( Ramachandran et al. , 2017 ) to encode sequences and measure the Euclidean distance between corresponding features . The feature of a sequence is fixed when compared with any other sequence . Although such methods are naturally learnable and only perform forward calculations in inference , they require large amounts of sequences to train complex RNNs . Moreover , how the learned features handle temporal variances and what kinds of variances can be handled are not clear . Alignment-based methods determine different optimal alignments for different sequence pairs . This is more intuitive and flexible because temporal variances may be different when comparing different sequences . The inferred alignments clearly indicate how and where the two sequences differ in temporal steps . Most alignment methods solve an optimization problem under pre-defined feasible constraints to infer the optimal alignment . E.g. , DTW ( Sakoe & Chiba , 1978 ) requires dynamic programming and OPW ( Su & Hua , 2019 ) employs fixed-point iterations . Such optimizations are often time-consuming and can not fully utilize GPU . 
Moreover , since inferring the alignment is itself an optimization problem and has its own objective , sequence distance-based end-to-end learning using other objectives becomes intractable . ( ∗Corresponding author : Ji-Rong Wen . ) For instance , learning discriminative temporal representations for elements in sequences often adopts the objective that sequences of different classes are better separated w.r.t . a sequence distance ( Mei et al. , 2014 ; Su & Wu , 2020b ) . Gradients of this overall objective are difficult to pass through alignments since they are latent variables determined by another optimization problem . In this paper , we propose a learnable alignment-based sequence distance , namely Temporal Alignment Prediction ( TAP ) . TAP simulates the optimization-based inference from the input sequence pair to the optimal alignment as a function . The function is modeled by a lightweight convolutional neural network to directly predict the optimal alignment . TAP can be applied in different distance-based machine learning tasks under different settings . As instances , we show the applications of TAP in supervised representation learning and few-shot learning for sequence data . For supervised learning , we employ metric learning losses by using TAP as the distance measure to learn the frame-wise temporal representations . For few-shot learning , we adopt TAP as the distance measure in the metric learning based paradigm to compare the query and support sequence samples . In both cases , owing to the straightforward structure of TAP , the alignment prediction network and the feature extraction or transformation network can be jointly trained in an end-to-end manner , resulting in principled yet straightforward supervised learning and few-shot learning solutions for sequences . We further show the application of TAP in self-supervised alignment learning in the appendix . The main contributions of this paper are three-fold . 1 .
We propose TAP , a learnable and fast sequence distance that only requires straightforward calculations in inference . 2 . We show that TAP conveniently enables end-to-end supervised sequence representation learning with different losses . 3 . We adopt TAP to dynamically align the support and query sequences for few-shot action recognition , so that alignments and temporal representations can be jointly learned with the episode-training strategy . Experiments on seven datasets demonstrate the effectiveness of TAP for both supervised learning and few-shot learning . 2 RELATED WORK . Alignment-based sequence distance . Various variants of DTW have been proposed to speed up the inference ( Salvador & Chan , 2007 ; Al-Naymat et al. , 2009 ) , or adapt to additional or modified constraints ( Ratanamahatana & Keogh , 2004 ) , or tackle sequences from different modalities ( Zhou & Torre , 2009 ; Zhou & De la Torre , 2016 ; Trigeorgis et al. , 2016 ; 2017 ; Cohen et al. , 2021 ) . They rely on non-differentiable dynamic programming for inference . Soft-DTW ( Cuturi & Blondel , 2017 ) optimizes a differentiable loss by using the soft-minimum cost of all feasible alignments as the objective . TCOT ( Su et al. , 2019 ) and OPW ( Su & Hua , 2019 ) view the alignment problem as the optimal transport problem with temporal regularizations . These methods solve an optimization problem to infer the optimal alignment . In contrast , our method directly predicts the alignment using a neural network , does not have its own objective , and only involves differentiable calculations . Data-driven alignment methods . In Abid & Zou ( 2018 ) , Autowarp finds the warping distance that best mimics the distance between the features encoded by an LSTM , which is difficult to integrate into an end-to-end framework because the autoencoder needs to be pre-trained and a betaCV-based optimization is required . It is shown in Tallec & Ollivier ( 2018 ) that RNNs can learn to warp sequences .
In Lohit et al . ( 2019 ) and Weber et al . ( 2019 ) , a fixed warping function is predicted for a given sequence when comparing it with different sequences . This may be viewed as warping all sequences w.r.t . a fixed timing scale , which may be biased toward the training sequences . In contrast , since the relative local variations and correspondences between different sequences are different , our method predicts different alignments for different sequence pairs . Supervised representation learning for sequences . Generally , since alignment-based distances need to solve their own optimization problems to infer alignments in the latent space , the gradients of the supervised metric learning loss can not be directly propagated . Existing methods exploit either an approximate proxy loss ( Su et al. , 2018 ; Lajugie et al. , 2014 ) , or a time-consuming optimization method to iteratively infer alignments ( Mei et al. , 2014 ) , or both ( Su & Wu , 2020a ; b ) . In this paper , we show that existing deep metric learning methods can be readily applied to the proposed TAP distance in an end-to-end manner . Few-shot action recognition . Most existing few-shot action recognition methods follow the metric learning based paradigm . In Ben-Ari et al . ( 2021 ) and Tan & Yang ( 2019 ) , TAEN and FAN encode each action into a representation with a fixed dimension and apply vector-wise metrics . Recent works identify the importance of temporal alignment for tackling non-linear temporal variations . Various alignment-based distances are proposed to compare sequence pairs , such as the temporal attentive relation network ( TARN ) ( Bishay et al. , 2019 ) , a variant of DTW used in OTAM ( Cao et al. , 2020 ) , the permutation-invariant spatial and temporal attention reweighted distance in ARN ( Zhang et al. , 2020 ) , the temporal CrossTransformer for comparing all possible subsequences in TRX ( Perrett et al. , 2021 ) , and the two-stage temporal alignment network ( TTAN ) ( Li et al.
, 2021 ) . Following this spirit , we apply the proposed TAP to perform temporal alignment and measure the distance between support and query sequences without any complicated attention mechanism , yielding a simple yet effective few-shot sequence learning solution . 3 METHODOLOGY . 3.1 TEMPORAL ALIGNMENT PREDICTION . We first revisit the unified formulation of sequence distance in Su & Wu ( 2019 ) . Let X = [ x1 , · · · , xLX ] ∈ Rd×LX and Y = [ y1 , · · · , yLY ] ∈ Rd×LY be two sequences with lengths of LX and LY , respectively . The elements xi , i = 1 , · · · , LX of X and yj , j = 1 , · · · , LY of Y lie in a d-dimensional feature space Rd . Many alignment-based distances between X and Y can be unified as follows : d ( X , Y ) = 〈T ∗ , D〉 , ( 1 ) T ∗ = argmin_{T ∈ Φ} 〈T , D〉 + R ( T ) , ( 2 ) where 〈T , D〉 = tr ( TᵀD ) is the Frobenius dot product . D : = [ e ( xi , yj ) ] ij ∈ RLX×LY is the matrix of pairwise distances between elements in X and Y , and e ( xi , yj ) is a vector-wise distance between two elements xi and yj , for which we use the Euclidean distance in this paper . T is an alignment matrix whose element Tij indicates whether or how likely xi and yj are aligned . Φ is the set of all feasible T with some constraints , which is a subset of RLX×LY . R ( T ) is a regularization term on T . T ∗ is the optimal alignment , which is the solution of the optimization problem in Eq . ( 2 ) . Different sequence distances impose different constraints on the feasible set , have different regularization terms , and use different optimization methods for inference . For instance , DTW optimizes Eq . ( 2 ) with R ( T ) = 0 and the boundary , continuity , and monotonicity constraints via dynamic programming , while OPW optimizes Eq . ( 2 ) with two temporal regularization terms and the coupling constraints via Sinkhorn ’ s matrix scaling algorithm . Solving T ∗ by the optimization Eq .
( 2 ) not only requires a long inference time , but also makes it difficult to apply a loss on the sequence distance Eq . ( 1 ) for learning element representations , because T ∗ is a latent variable that needs to be inferred and its gradient can not be calculated . To avoid solving the optimization problem , we propose a feedforward framework for measuring the distances between sequences , namely Temporal Alignment Prediction ( TAP ) . Fig . 1 ( a ) illustrates the TAP framework . For two sequences X = [ x1 , · · · , xLX ] and Y = [ y1 , · · · , yLY ] , their TAP distance also has the form of Eq . ( 1 ) , i.e. , the Frobenius dot product of D and T ∗ . Different from other sequence distances which infer the alignment with predefined objectives and constraints , TAP uses an alignment prediction neural network f to directly predict the optimal T ∗ = f ( X , Y ) by taking the two sequences as inputs and learns f from data . The alignment prediction network f can be instantiated by different architectures . In this paper , we propose a lightweight convolutional neural network architecture for its simplicity and fast inference speed . Specifically , we reuse the spatial ground matrix D to measure all pairwise distances between elements in X and Y . The relative positions of xi and yj are i/LX and j/LY , respectively . To incorporate the temporal dissimilarities , TAP further calculates all pairwise Euclidean distances between the relative positions of xi and yj , collecting them into a matrix Dt : = [ e ( i/LX , j/LY ) ] ij ∈ RLX×LY . Then , we concatenate D and Dt along the channel dimension to form Ds with a size of LX×LY×2 . We use a CNN g to predict the final alignment matrix from Ds . The CNN uses three convolutional layers , each followed by ReLU . The kernel sizes in the three layers are 5 , 5 , and 3 , respectively , the stride is fixed to 1 , and the padding is set to 2 , 2 , and 1 , respectively , to keep the spatial size .
The numbers of kernels in the three layers are set to 30 , 30 , and 1 , respectively . The learned kernels are expected to capture the local alignment patterns . The output of g is augmented by a residual connection with D , resulting in the similarity matrix S : = − ( D + g ( Ds ) ) ∈ RLX×LY . The attentions of elements in Y on xi are obtained by performing Softmax on the i-th row of S. The attentions on all elements in X form an attention matrix A , which can be obtained by performing Softmax along the 2nd dimension of S. To generate the predicted alignment T ∗ , TAP finally performs a global L1 normalization on A : A = [ exp ( Sij ) / ∑_{k=1}^{LY} exp ( Sik ) ] ij ∈ RLX×LY ; T ∗ = [ Aij / ∑_{i=1}^{LX} ∑_{j=1}^{LY} Aij ] ij ∈ RLX×LY . ( 3 ) T ∗ij indicates the probability of aligning xi and yj . The TAP distance between X and Y is calculated as in Eq . ( 1 ) . The prediction network f of TAP is lightweight since it only contains a few convolutional kernels of g as parameters . Limitations . 1 . The family of alignments for TAP is Φ : = { T ∈ R+^{LX×LY} | T 1LY = 1LX / LX } , where 1L is an L-dimensional vector with all elements equal to 1 . Since strict order-preserving is not guaranteed in Φ , the performance of TAP may be limited when data are strictly ordered . 2 . Tᵀ1LX = 1LY / LY does not necessarily hold , hence TAP is asymmetric and is not a real metric . These limitations make TAP more flexible in turn . 1 . Without the strict order-preserving constraint , TAP can tackle local temporal reorders and generalize to non-sequential ( e.g . spatial , cross-modal ) correspondences . 2 . Asymmetric alignment distinguishes between the source sequence X and the target sequence Y , where all elements in X must be transported with the same mass when aligning to different target sequences . To perform classification or retrieval , we can always set the test or query sequence as the source , which serves as a standard template to be aligned .
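Putting Eq. (1) and Eq. (3) together, TAP's forward pass can be sketched in a few lines of numpy. This is our own simplified illustration, not the authors' code: the CNN g is stubbed with zeros (so S reduces to −D), whereas in the paper g is the three-layer convolutional network learned end-to-end.

```python
import numpy as np

def pairwise_distances(X, Y):
    """D_ij = ||x_i - y_j||_2 for X in R^{d x Lx}, Y in R^{d x Ly}."""
    diff = X[:, :, None] - Y[:, None, :]
    return np.sqrt((diff ** 2).sum(axis=0))

def tap_alignment(D, g_out=None):
    """Eq. (3): row-wise Softmax of S = -(D + g(Ds)), then global L1 norm."""
    if g_out is None:
        g_out = np.zeros_like(D)                  # stand-in for the learned CNN g
    S = -(D + g_out)
    A = np.exp(S - S.max(axis=1, keepdims=True))  # numerically stable softmax
    A /= A.sum(axis=1, keepdims=True)
    return A / A.sum()                            # T*: rows sum to 1/Lx, total 1

def tap_distance(X, Y):
    """Eq. (1): d(X, Y) = <T*, D>, the Frobenius dot product."""
    D = pairwise_distances(X, Y)
    return float(np.sum(tap_alignment(D) * D))

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(4, 6)), rng.normal(size=(4, 9))   # d=4, Lx=6, Ly=9
T = tap_alignment(pairwise_distances(X, Y))
print(np.allclose(T.sum(axis=1), 1.0 / 6))        # True: T 1_Ly = 1_Lx / Lx
print(tap_distance(X, Y) >= 0.0)                  # True: a nonnegative distance
```

Note that even with the stubbed g the predicted T ∗ already lies in the feasible family Φ described above; training g only reshapes the alignment within that family.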
The symmetric distance can be obtained by averaging d ( X , Y ) and d ( Y , X ) . | This paper focuses on temporal sequence alignment, i.e. the task of finding an optimal alignment between sequences of different lengths. This task has been addressed by various traditional methods (e.g. DTW) that involve dynamic programming or other optimisation techniques that cannot be easily embedded in end-to-end learning frameworks. This work proposes a method where the optimal temporal alignment is learnt by a CNN. The input is formed by concatenating two matrices measuring element-wise distances between two sequences A and B. The network is trained to output a matrix whose (i, j)-th element indicates the likelihood of Ai being aligned to Bj. The framework is evaluated with two tasks: supervised representation learning and few-shot action recognition. Tested on several datasets, the method is able to attain competitive performance and fast inference speed. | SP:35fdcb687dd1aff475c0c3cee2b899579de19b46
Temporal Alignment Prediction for Supervised Representation Learning and Few-Shot Sequence Classification | 1 INTRODUCTION . Distance between sequences plays a crucial role in sequence classification ( Sakoe & Chiba , 1978 ) , retrieval ( Su et al. , 2019 ) , clustering ( Garcı́a-Garcı́a et al. , 2008 ) , etc . Measuring distance between sequences is difficult , since different sequences may have different sampling rates , execution speeds , local distortions , initial states , and elastic warpage . To tackle such temporal variances , existing sequence distances either encode each sequence into a feature vector invariant to temporal variances ( Abid & Zou , 2018 ; Lohit et al. , 2019 ) or employ alignment for temporal correspondence calibration ( Sakoe & Chiba , 1978 ; Su & Hua , 2019 ) . Typical feature-based methods use recurrent neural networks ( RNNs ) ( Ramachandran et al. , 2017 ) to encode sequences and measure the Euclidean distance between corresponding features . The feature of a sequence is fixed when compared with any other sequence . Although such methods are naturally learnable and only perform forward calculations in inference , they require large amounts of sequences to train complex RNNs . Moreover , how the learned features handle temporal variances and what kinds of variances can be handled are not clear . Alignment-based methods determine different optimal alignments for different sequence pairs . This is more intuitive and flexible because temporal variances may be different when comparing different sequences . The inferred alignments clearly indicate how and where the two sequences differ in temporal steps . Most alignment methods solve an optimization problem under pre-defined feasible constraints to infer the optimal alignment . E.g. , DTW ( Sakoe & Chiba , 1978 ) requires dynamic programming and OPW ( Su & Hua , 2019 ) employs fixed-point iterations . Such optimizations are often time-consuming and can not fully utilize GPU . 
Moreover , since inferring the alignment is itself an optimization problem and has its own objective , sequence distance-based end-to-end learning using other objectives becomes intractable . ( ∗Corresponding author : Ji-Rong Wen . ) For instance , learning discriminative temporal representations for elements in sequences often adopts the objective that sequences of different classes are better separated w.r.t . a sequence distance ( Mei et al. , 2014 ; Su & Wu , 2020b ) . Gradients of this overall objective are difficult to pass through alignments since they are latent variables determined by another optimization problem . In this paper , we propose a learnable alignment-based sequence distance , namely Temporal Alignment Prediction ( TAP ) . TAP simulates the optimization-based inference from the input sequence pair to the optimal alignment as a function . The function is modeled by a lightweight convolutional neural network to directly predict the optimal alignment . TAP can be applied in different distance-based machine learning tasks under different settings . As instances , we show the applications of TAP in supervised representation learning and few-shot learning for sequence data . For supervised learning , we employ metric learning losses by using TAP as the distance measure to learn the frame-wise temporal representations . For few-shot learning , we adopt TAP as the distance measure in the metric learning based paradigm to compare the query and support sequence samples . In both cases , owing to the straightforward structure of TAP , the alignment prediction network and the feature extraction or transformation network can be jointly trained in an end-to-end manner , resulting in principled yet straightforward supervised learning and few-shot learning solutions for sequences . We further show the application of TAP in self-supervised alignment learning in the appendix . The main contributions of this paper are three-fold . 1 .
We propose TAP , a learnable and fast sequence distance that only requires straightforward calculations in inference . 2 . We show that TAP conveniently enables end-to-end supervised sequence representation learning with different losses . 3 . We adopt TAP to dynamically align the support and query sequences for few-shot action recognition , so that alignments and temporal representations can be jointly learned with the episode-training strategy . Experiments on seven datasets demonstrate the effectiveness of TAP for both supervised learning and few-shot learning . 2 RELATED WORK . Alignment-based sequence distance . Various variants of DTW have been proposed to speed up the inference ( Salvador & Chan , 2007 ; Al-Naymat et al. , 2009 ) , or adapt to additional or modified constraints ( Ratanamahatana & Keogh , 2004 ) , or tackle sequences from different modalities ( Zhou & Torre , 2009 ; Zhou & De la Torre , 2016 ; Trigeorgis et al. , 2016 ; 2017 ; Cohen et al. , 2021 ) . They rely on non-differentiable dynamic programming for inference . Soft-DTW ( Cuturi & Blondel , 2017 ) optimizes a differentiable loss by using the soft-minimum cost of all feasible alignments as the objective . TCOT ( Su et al. , 2019 ) and OPW ( Su & Hua , 2019 ) view the alignment problem as the optimal transport problem with temporal regularizations . These methods solve an optimization problem to infer the optimal alignment . In contrast , our method directly predicts the alignment using a neural network , does not have its own objective , and only involves differentiable calculations . Data-driven alignment methods . In Abid & Zou ( 2018 ) , Autowarp finds the warping distance that best mimics the distance between the features encoded by an LSTM , which is difficult to integrate into an end-to-end framework because the autoencoder needs to be pre-trained and a betaCV-based optimization is required . It is shown in Tallec & Ollivier ( 2018 ) that RNNs can learn to warp sequences .
In Lohit et al . ( 2019 ) and Weber et al . ( 2019 ) , a fixed warping function is predicted for a given sequence when comparing it with different sequences . This may be viewed as warping all sequences w.r.t . a fixed timing scale , which may be biased toward the training sequences . In contrast , since the relative local variations and correspondences between different sequences are different , our method predicts different alignments for different sequence pairs . Supervised representation learning for sequences . Generally , since alignment-based distances need to solve their own optimization problems to infer alignments in the latent space , the gradients of the supervised metric learning loss can not be directly propagated . Existing methods exploit either an approximate proxy loss ( Su et al. , 2018 ; Lajugie et al. , 2014 ) , or a time-consuming optimization method to iteratively infer alignments ( Mei et al. , 2014 ) , or both ( Su & Wu , 2020a ; b ) . In this paper , we show that existing deep metric learning methods can be readily applied to the proposed TAP distance in an end-to-end manner . Few-shot action recognition . Most existing few-shot action recognition methods follow the metric learning based paradigm . In Ben-Ari et al . ( 2021 ) and Tan & Yang ( 2019 ) , TAEN and FAN encode each action into a representation with a fixed dimension and apply vector-wise metrics . Recent works identify the importance of temporal alignment for tackling non-linear temporal variations . Various alignment-based distances are proposed to compare sequence pairs , such as the temporal attentive relation network ( TARN ) ( Bishay et al. , 2019 ) , a variant of DTW used in OTAM ( Cao et al. , 2020 ) , the permutation-invariant spatial and temporal attention reweighted distance in ARN ( Zhang et al. , 2020 ) , the temporal CrossTransformer for comparing all possible subsequences in TRX ( Perrett et al. , 2021 ) , and the two-stage temporal alignment network ( TTAN ) ( Li et al.
, 2021 ) . Following this spirit , we apply the proposed TAP to perform temporal alignment and measure the distance between support and query sequences without any complicated attention mechanism , yielding a simple yet effective few-shot sequence learning solution . 3 METHODOLOGY . 3.1 TEMPORAL ALIGNMENT PREDICTION . We first revisit the unified formulation of sequence distance in Su & Wu ( 2019 ) . Let X = [ x1 , · · · , xLX ] ∈ Rd×LX and Y = [ y1 , · · · , yLY ] ∈ Rd×LY be two sequences with lengths of LX and LY , respectively . The elements xi , i = 1 , · · · , LX of X and yj , j = 1 , · · · , LY of Y lie in a d-dimensional feature space Rd . Many alignment-based distances between X and Y can be unified as follows : d ( X , Y ) = 〈T ∗ , D〉 , ( 1 ) T ∗ = argmin_{T ∈ Φ} 〈T , D〉 + R ( T ) , ( 2 ) where 〈T , D〉 = tr ( TᵀD ) is the Frobenius dot product . D : = [ e ( xi , yj ) ] ij ∈ RLX×LY is the matrix of pairwise distances between elements in X and Y , and e ( xi , yj ) is a vector-wise distance between two elements xi and yj , for which we use the Euclidean distance in this paper . T is an alignment matrix whose element Tij indicates whether or how likely xi and yj are aligned . Φ is the set of all feasible T with some constraints , which is a subset of RLX×LY . R ( T ) is a regularization term on T . T ∗ is the optimal alignment , which is the solution of the optimization problem in Eq . ( 2 ) . Different sequence distances impose different constraints on the feasible set , have different regularization terms , and use different optimization methods for inference . For instance , DTW optimizes Eq . ( 2 ) with R ( T ) = 0 and the boundary , continuity , and monotonicity constraints via dynamic programming , while OPW optimizes Eq . ( 2 ) with two temporal regularization terms and the coupling constraints via Sinkhorn ’ s matrix scaling algorithm . Solving T ∗ by the optimization Eq .
( 2 ) not only requires a long inference time , but also makes it difficult to apply a loss on the sequence distance Eq . ( 1 ) for learning element representations , because T ∗ is a latent variable that needs to be inferred and its gradient can not be calculated . To avoid solving the optimization problem , we propose a feedforward framework for measuring the distances between sequences , namely Temporal Alignment Prediction ( TAP ) . Fig . 1 ( a ) illustrates the TAP framework . For two sequences X = [ x1 , · · · , xLX ] and Y = [ y1 , · · · , yLY ] , their TAP distance also has the form of Eq . ( 1 ) , i.e. , the Frobenius dot product of D and T ∗ . Different from other sequence distances which infer the alignment with predefined objectives and constraints , TAP uses an alignment prediction neural network f to directly predict the optimal T ∗ = f ( X , Y ) by taking the two sequences as inputs and learns f from data . The alignment prediction network f can be instantiated by different architectures . In this paper , we propose a lightweight convolutional neural network architecture for its simplicity and fast inference speed . Specifically , we reuse the spatial ground matrix D to measure all pairwise distances between elements in X and Y . The relative positions of xi and yj are i/LX and j/LY , respectively . To incorporate the temporal dissimilarities , TAP further calculates all pairwise Euclidean distances between the relative positions of xi and yj , collecting them into a matrix Dt : = [ e ( i/LX , j/LY ) ] ij ∈ RLX×LY . Then , we concatenate D and Dt along the channel dimension to form Ds with a size of LX×LY×2 . We use a CNN g to predict the final alignment matrix from Ds . The CNN uses three convolutional layers , each followed by ReLU . The kernel sizes in the three layers are 5 , 5 , and 3 , respectively , the stride is fixed to 1 , and the padding is set to 2 , 2 , and 1 , respectively , to keep the spatial size .
The numbers of kernels in the three layers are set to 30, 30, and 1, respectively. The learned kernels are expected to capture local alignment patterns. The output of g is augmented by a residual connection with D, resulting in the similarity matrix S := −(D + g(D_s)) ∈ R^{L_X×L_Y}. The attentions of elements in Y on x_i are obtained by performing Softmax on the i-th row of S. The attentions on all elements in X form an attention matrix A, which can be obtained by performing Softmax along the 2nd dimension of S. To generate the predicted alignment T*, TAP finally performs a global L1 normalization on A:

A = [ exp(S_ij) / Σ_{k=1}^{L_Y} exp(S_ik) ]_{ij} ∈ R^{L_X×L_Y};  T* = [ A_ij / Σ_{i=1}^{L_X} Σ_{j=1}^{L_Y} A_ij ]_{ij} ∈ R^{L_X×L_Y}. (3)

T*_{ij} indicates the probability of aligning x_i and y_j. The TAP distance between X and Y is calculated as in Eq. (1). The prediction network f of TAP is lightweight since it only contains the few convolutional kernels of g as parameters. Limitations. 1. The family of alignments for TAP is Φ := { T ∈ R_+^{L_X×L_Y} | T 1_{L_Y} = 1_{L_X}/L_X }, where 1_L is an L-dimensional vector of all ones. Since strict order preservation is not guaranteed in Φ, the performance of TAP may be limited when data are strictly ordered. 2. T^T 1_{L_X} = 1_{L_Y}/L_Y does not necessarily hold; hence TAP is asymmetric and is not a true metric. These limitations make TAP more flexible in turn. 1. Without the strict order-preserving constraint, TAP can tackle local temporal reorderings and generalize to non-sequential (e.g., spatial, cross-modal) correspondences. 2. The asymmetric alignment distinguishes the source sequence X from the target sequence Y, where all elements in X must be transported with the same mass when aligning to different target sequences. To perform classification or retrieval, we can always set the test or query sequence as the source, which serves as a standard template to be aligned.
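Eq. (3) itself is straightforward to implement. Below is a minimal NumPy sketch (the names are ours, and in practice S would come from the residual connection S = −(D + g(D_s))): a row-wise softmax followed by a global L1 normalization, so that T* satisfies the constraint T* 1_{L_Y} = 1_{L_X}/L_X stated in the limitations.

```python
import numpy as np

def tap_alignment(S):
    """Eq. (3): row-wise softmax of the similarity matrix S, then a
    global L1 normalization so that T* sums to 1 overall."""
    S = S - S.max(axis=1, keepdims=True)            # numerically stable, same result
    A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)
    return A / A.sum()                              # each row of T* sums to 1 / L_X

def tap_distance(D, S):
    """TAP distance d(X, Y) = <T*, D> from Eq. (1)."""
    return float((tap_alignment(S) * D).sum())
```

Since every operation is a differentiable feedforward step, a loss on `tap_distance` back-propagates to the element representations, which is the point the text makes about avoiding the inference of a latent T*.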
The symmetric distance can be obtained by averaging d(X, Y) and d(Y, X). | One-shot sequence alignment with CNN. Comparisons of sequence data are important for many tasks, including action recognition and retrieval. Previous approaches either generate fixed-size feature vectors (e.g., with an RNN) or temporally align two sequences (e.g., DTW). Instead, the proposed method directly predicts the alignment matrix T between two sequences X and Y with a CNN. This is differentiable, and hence can be applied to many tasks such as supervised sequence classification/retrieval and few-shot classification. Experiments use a variety of sequence classification tasks, such as skeleton-based action recognition and spoken digit audio classification, and show good performance of the proposed method. | SP:35fdcb687dd1aff475c0c3cee2b899579de19b46 |
ZARTS: On Zero-order Optimization for Neural Architecture Search | Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency. It introduces trainable architecture parameters to represent the importance of candidate operations and proposes first/second-order approximations to estimate their gradients, making it possible to solve NAS by a gradient descent algorithm. However, our in-depth empirical results show that the approximation often distorts the loss landscape, leading to a biased objective to optimize and in turn inaccurate gradient estimates for the architecture parameters. This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, that searches without enforcing the above approximation. Specifically, three representative zero-order optimization methods are introduced: RS, MGS, and GLD, among which MGS performs best by balancing accuracy and speed. Moreover, we explore the connections between RS/MGS and the gradient descent algorithm and show that ZARTS can be seen as a robust gradient-free counterpart to DARTS. Extensive experiments on multiple datasets and search spaces show the remarkable performance of our method. In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue. Also, we search on the search space of DARTS to compare with peer methods, and our discovered architecture achieves 97.54% accuracy on CIFAR-10 and 75.7% top-1 accuracy on ImageNet, which is state-of-the-art performance. 1 INTRODUCTION . Despite their success, neural networks are still designed mainly by humans (Simonyan & Zisserman, 2014; He et al., 2016; Howard et al., 2017). It remains open how to automatically discover effective and efficient architectures.
The problem of neural architecture search (NAS) has attracted wide attention; it can be modeled as a bi-level optimization over network architectures and operation weights. One-shot NAS (Bender et al., 2018) is a popular search framework that regards neural architectures as directed acyclic graphs (DAG) and constructs a supernet with all possible connections and operations in the search space. DARTS (Liu et al., 2019) further introduces trainable architecture parameters to represent the importance of candidate operations, which are trained alternately with the network weights by an SGD optimizer. It proposes a first-order approximation to estimate the gradients of the architecture parameters, which is biased and may lead to the severe instability issue shown by (Bi et al., 2019). Other works (Zela et al., 2020b; Chen & Hsieh, 2020) point out that the architecture parameters converge to a sharp local minimum, resulting in the instability issue, and introduce extra regularization terms so that the architecture parameters converge to a flat local minimum. In this paper, we empirically show that the first-order approximation of the optimal network weights sharpens the loss landscape and results in the instability issue of DARTS. It also shifts the global minimum, misleading the training of the architecture parameters. To this end, we discard such approximation and turn to zero-order optimization algorithms, which can run without the requirement that the search loss be differentiable w.r.t. the architecture parameters. Specifically, we introduce a novel NAS scheme named ZARTS, which outperforms DARTS by a large margin and can stably discover efficient architectures on multiple public benchmarks. In a nutshell, this paper sheds light on the frontier of NAS in the following aspects: 1) Establishing a robust zero-order paradigm to solve the bi-level optimization of NAS. Differentiable architecture search has been a well-developed area (Liu et al.
, 2019; Xu et al., 2020b; Wang et al., 2020b), which solves the bi-level optimization of NAS by gradient descent algorithms. However, this paradigm suffers from an instability issue during search, since the biased approximation of the optimal network weights distorts the loss landscape, as shown in Fig. 1(a) and (b). To this end, we propose a flexible zero-order optimization NAS framework to solve the bi-level optimization problem, which is compatible with multiple potential gradient-free algorithms in the literature. 2) Uncovering the connection between zero-order architecture search and DARTS. This work introduces three representative zero-order optimization algorithms without enforcing the unverified differentiability assumption on the search loss w.r.t. the architecture parameters. We reveal the connections between the zero-order algorithms and the gradient descent algorithm, showing that two implementations of ZARTS can be seen as gradient-free counterparts to DARTS that are more stable and robust. 3) Strong empirical performance and robustness. Experiments on four datasets and five search spaces have been conducted to evaluate the performance of our method. Unlike DARTS, which suffers from the severe instability issue shown by (Zela et al., 2020b; Bi et al., 2019), ZARTS can stably discover effective architectures on various benchmarks. In particular, the searched architecture achieves 75.7% top-1 accuracy on ImageNet, outperforming DARTS and most of its variants. 2 RELATED WORK . One-shot Neural Architecture Search . Bender et al. (2018) construct a supernet so that all candidate architectures can be seen as its sub-graphs. DARTS (Liu et al., 2019) introduces architecture parameters to represent the importance of operations in the supernet and updates them by a gradient descent algorithm. Some works (Xu et al., 2020b; Wang et al., 2020b; Dong & Yang, 2019) reduce the memory requirement of DARTS in the search process.
Other works (Zela et al., 2020b; Chen & Hsieh, 2020) point out the instability issue of DARTS, i.e., skip-connections gradually dominate the normal cells, leading to performance collapse during the search stage. Bi-level Optimization for NAS . NAS can be modeled as a bi-level optimization over architecture parameters and network weights. DARTS (Liu et al., 2019) proposes first/second-order approximations to estimate the gradients of the architecture parameters so that they can be trained by gradient descent algorithms. However, we show that such approximation distorts the loss landscape and misleads the training of the architecture parameters. Amended-DARTS (Bi et al., 2019) derives an analytic formula for the gradient w.r.t. the architecture parameters that involves the inverse of the Hessian matrix of the network weights, which is infeasible to compute. In contrast, this work discards the approximation in DARTS and attempts to solve the bi-level optimization with gradient-free algorithms. Zero-order Optimization . Unlike gradient-based optimization methods, which require the objective to be differentiable w.r.t. the parameters, zero-order optimization can train parameters when the gradient of the objective is unavailable or difficult to obtain; it has been widely used in adversarial robustness of neural networks (Chen et al., 2017; Ilyas et al., 2018), meta learning (Song et al., 2020), and transfer learning (Tsai et al., 2020). Liu et al. (2020b) target AutoML and utilize zero-order optimization to discover optimal configurations for ML pipelines. In this work, (to the best of our knowledge) we make the first attempt to apply zero-order optimization to NAS and experiment with multiple algorithms, from vanilla random search (Flaxman et al., 2004) to the more advanced and effective direct search (Golovin et al., 2020), showing its great superiority over gradient-based methods. 3 BI-LEVEL OPTIMIZATION IN DARTS .
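For intuition on the random-search estimators referenced above, a two-point zero-order gradient estimate in the spirit of Flaxman et al. (2004) can be sketched as follows. This is a generic illustration, not the exact RS/MGS/GLD update used by ZARTS; the function name and sample count are our own choices.

```python
import numpy as np

def rs_gradient(loss, alpha, mu=1e-3, n_samples=4000, rng=None):
    """Two-point random-search estimate of the gradient of a black-box loss:
    average of (loss(alpha + mu*u) - loss(alpha)) / mu * u over u ~ N(0, I).
    Only function evaluations are needed, never an analytic gradient."""
    if rng is None:
        rng = np.random.default_rng(0)
    f0 = loss(alpha)
    g = np.zeros_like(alpha, dtype=float)
    for _ in range(n_samples):
        u = rng.standard_normal(alpha.shape)
        g += (loss(alpha + mu * u) - f0) / mu * u
    return g / n_samples
```

On a smooth loss the estimate converges to the true gradient as the smoothing radius `mu` shrinks and the sample count grows, which is what makes such estimators usable as drop-in replacements for analytic gradients in a bi-level search loop.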
Following one-shot NAS (Bender et al., 2018), DARTS constructs a supernet stacked by normal cells and reduction cells. Cells in the supernet are represented by directed acyclic graphs (DAG) with N nodes {x_i}_{i=1}^N, which represent latent feature maps. Each edge e_{i,j} contains multiple candidate operations {o_{i,j}, o ∈ O}, whose importance is represented by architecture parameters α^o_{i,j}. Therefore, NAS can be modeled as a bi-level optimization problem, solved by alternately updating the operation weights ω (parameters within the candidate operations on each edge) and the architecture parameters α:

min_α L_val(ω*(α), α),  s.t. ω*(α) = argmin_ω L_train(ω, α). (1)

3.1 FUNDAMENTAL LIMITATIONS IN THE DARTS FRAMEWORK . By enforcing an unverified (and in fact difficult to verify) assumption that the search loss L_val(ω*(α), α) is differentiable w.r.t. α, DARTS (Liu et al., 2019) proposes a second-order approximation of the optimal weights ω*(α) by applying one step of gradient descent:

ω*(α) ≈ ω*_2nd(α) = ω − ξ ∇_ω L_train(ω, α) = ω′, (2)

where ξ is the learning rate for updating the network weights. Thus the gradient of the loss function w.r.t. α, ∇_α L_val(ω*(α), α), can be computed by the chain rule:

∇_α L_val(ω*(α), α) ≈ ∇_α L_val(ω′, α) − ξ ∇²_{α,ω} L_train(ω, α) ∇_{ω′} L_val(ω′, α).

Nevertheless, the second-order partial derivative is hard to compute, so the authors approximate it with a finite-difference method, as derived in Appendix A.1. To further reduce the computational cost, a first-order approximation is introduced by assuming that ω*(α) is independent of α, as shown in Eq. (3), which is much faster and widely used in many variants of DARTS (Chen et al., 2019; Wang et al., 2020b; Zela et al., 2020b):

ω*(α) ≈ ω*_1st(α) = ω. (3)

The gradient is then simplified as ∇_α L_val(ω*(α), α) ≈ ∇_α L_val(ω, α), which, however, further exacerbates the estimation bias.
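The bias introduced by Eq. (3) can be made concrete on a toy scalar bi-level problem. The quadratic losses below are our own illustrative choice, not the actual supernet objectives: with L_train(ω, α) = (ω − α)² the inner problem has the closed form ω*(α) = α, so the exact architecture gradient is available for comparison against the first- and second-order approximations.

```python
# Toy bi-level problem:
#   L_train(w, a) = (w - a)^2      =>  w*(a) = a (closed form)
#   L_val(w, a)   = (w - 1)^2 + 0.1 * a^2

def true_grad(a):
    """Exact d/da of L_val(w*(a), a), substituting w*(a) = a."""
    return 2 * (a - 1) + 0.2 * a

def first_order_grad(w, a):
    """Eq. (3): treat w*(a) ~ w as independent of a, so only the
    explicit a-dependence of L_val survives."""
    return 0.2 * a

def second_order_grad(w, a, xi=0.5):
    """Eq. (2): one gradient step w' = w - xi * dL_train/dw, then the
    total derivative of L_val(w'(a), a) w.r.t. a (computed analytically here)."""
    w_prime = w - 2 * xi * (w - a)     # dL_train/dw = 2(w - a)
    # dw'/da = 2*xi, so the chain rule adds 2*xi * dL_val/dw'
    return 2 * xi * 2 * (w_prime - 1) + 0.2 * a
```

At w = a = 0 (i.e., with the inner problem already solved), the true gradient is -2.0, but the first-order estimate is 0.0: the implicit term is lost entirely, which is exactly the estimation bias the text describes. The one-step second-order estimate recovers the true value here because the toy inner loss is quadratic.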
Reexamining the definition of ω*(α) in Eq. (1), one would note that it is intractable to derive a closed-form expression for ω*(α), which may even make L_val(ω*(α), α) non-differentiable w.r.t. α. Yet DARTS has to compromise with approximations such as Eq. (2) and Eq. (3) so that differentiability is established and SGD can be applied. However, such a sketchy estimate of the optimal operation weights can distort the loss landscape w.r.t. the architecture parameters and thus mislead the search procedure, as shown in Fig. 1 and analyzed in the next section. 3.2 DISTORTED LANDSCAPE AND INCORRECT OPTIMIZATION PROCESS IN DARTS . Fig. 1 illustrates the loss landscape under perturbations of the architecture parameters α, showing how different approximations of ω* affect the search process. We train a supernet for 50 epochs and randomly select two orthonormal vectors as the directions along which to perturb α. The same pair of perturbation directions is used to draw the landscapes in Fig. 1(a) and (b) for a fair comparison. Fig. 1(a) shows the loss landscape with the first-order approximation in DARTS, ω*_1st(α) = ω, while Fig. 1(b) shows the loss landscape with a more accurate ω*(α), obtained by fine-tuning the network weights ω for 10 iterations for each α. Landscapes (contours) are plotted by evaluating L at grid points ranging from -1 to 1 at an interval of 0.02 in both directions. Global minima are marked with stars on the landscapes, from which we make two observations: 1) The approximation ω*_1st(α) = ω shifts the global minimum and sharpens the landscape, which is a representative characteristic of the instability issue as pointed out by (Zela et al., 2020b). 2) An accurate estimate of ω* leads to a flatter landscape, indicating that the instability issue can be alleviated.
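The landscape protocol described above (two random orthonormal directions, a grid from -1 to 1 at an interval of 0.02) can be sketched as follows. The loss here is a stand-in for L_val and all names are illustrative; a real run would evaluate the supernet at each grid point.

```python
import numpy as np

def landscape_grid(loss, alpha, lo=-1.0, hi=1.0, step=0.02, seed=0):
    """Evaluate loss(alpha + s*u + t*v) on a 2-D grid, where u, v are two
    random orthonormal perturbation directions (QR of a Gaussian matrix)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((alpha.size, 2)))
    u, v = Q[:, 0], Q[:, 1]
    ticks = np.arange(lo, hi + step / 2, step)      # -1.00, -0.98, ..., 1.00
    Z = np.empty((ticks.size, ticks.size))
    for i, s in enumerate(ticks):
        for j, t in enumerate(ticks):
            Z[i, j] = loss(alpha + s * u + t * v)
    return ticks, Z
```

Fixing the random seed is what guarantees "the same pair of perturbation directions" across the two landscapes being compared; the contours of Z are then directly comparable between the first-order and fine-tuned settings.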
Moreover, we display the landscape with the second-order approximation ω*_2nd in Appendix A.2, which is also sharp but slightly flatter than the first-order one. (A “sharp” landscape has denser contours than a “flat” one.) Algorithm 1 : ZARTS : Zero-order Optimization Framework for Architecture Search | This paper presents ZARTS, a zero-order optimization method for DARTS, to search without enforcing the approximation of the network weights. It conducts in-depth analysis on the first/second-order approximation in DARTS, and points out that such approximation leads to bias and instability. Then the work proposes three zero-order optimization methods to solve the issue. | SP:072d9da072c1ef224e3cfb6f67ef7c0e78e456af |
ZARTS: On Zero-order Optimization for Neural Architecture Search | Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency. It introduces trainable architecture parameters to represent the importance of candidate operations and proposes first/second-order approximations to estimate their gradients, making it possible to solve NAS by a gradient descent algorithm. However, our in-depth empirical results show that the approximation often distorts the loss landscape, leading to a biased objective to optimize and in turn inaccurate gradient estimates for the architecture parameters. This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, that searches without enforcing the above approximation. Specifically, three representative zero-order optimization methods are introduced: RS, MGS, and GLD, among which MGS performs best by balancing accuracy and speed. Moreover, we explore the connections between RS/MGS and the gradient descent algorithm and show that ZARTS can be seen as a robust gradient-free counterpart to DARTS. Extensive experiments on multiple datasets and search spaces show the remarkable performance of our method. In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue. Also, we search on the search space of DARTS to compare with peer methods, and our discovered architecture achieves 97.54% accuracy on CIFAR-10 and 75.7% top-1 accuracy on ImageNet, which is state-of-the-art performance. 1 INTRODUCTION . Despite their success, neural networks are still designed mainly by humans (Simonyan & Zisserman, 2014; He et al., 2016; Howard et al., 2017). It remains open how to automatically discover effective and efficient architectures.
The problem of neural architecture search (NAS) has attracted wide attention; it can be modeled as a bi-level optimization over network architectures and operation weights. One-shot NAS (Bender et al., 2018) is a popular search framework that regards neural architectures as directed acyclic graphs (DAG) and constructs a supernet with all possible connections and operations in the search space. DARTS (Liu et al., 2019) further introduces trainable architecture parameters to represent the importance of candidate operations, which are trained alternately with the network weights by an SGD optimizer. It proposes a first-order approximation to estimate the gradients of the architecture parameters, which is biased and may lead to the severe instability issue shown by (Bi et al., 2019). Other works (Zela et al., 2020b; Chen & Hsieh, 2020) point out that the architecture parameters converge to a sharp local minimum, resulting in the instability issue, and introduce extra regularization terms so that the architecture parameters converge to a flat local minimum. In this paper, we empirically show that the first-order approximation of the optimal network weights sharpens the loss landscape and results in the instability issue of DARTS. It also shifts the global minimum, misleading the training of the architecture parameters. To this end, we discard such approximation and turn to zero-order optimization algorithms, which can run without the requirement that the search loss be differentiable w.r.t. the architecture parameters. Specifically, we introduce a novel NAS scheme named ZARTS, which outperforms DARTS by a large margin and can stably discover efficient architectures on multiple public benchmarks. In a nutshell, this paper sheds light on the frontier of NAS in the following aspects: 1) Establishing a robust zero-order paradigm to solve the bi-level optimization of NAS. Differentiable architecture search has been a well-developed area (Liu et al.
, 2019; Xu et al., 2020b; Wang et al., 2020b), which solves the bi-level optimization of NAS by gradient descent algorithms. However, this paradigm suffers from an instability issue during search, since the biased approximation of the optimal network weights distorts the loss landscape, as shown in Fig. 1(a) and (b). To this end, we propose a flexible zero-order optimization NAS framework to solve the bi-level optimization problem, which is compatible with multiple potential gradient-free algorithms in the literature. 2) Uncovering the connection between zero-order architecture search and DARTS. This work introduces three representative zero-order optimization algorithms without enforcing the unverified differentiability assumption on the search loss w.r.t. the architecture parameters. We reveal the connections between the zero-order algorithms and the gradient descent algorithm, showing that two implementations of ZARTS can be seen as gradient-free counterparts to DARTS that are more stable and robust. 3) Strong empirical performance and robustness. Experiments on four datasets and five search spaces have been conducted to evaluate the performance of our method. Unlike DARTS, which suffers from the severe instability issue shown by (Zela et al., 2020b; Bi et al., 2019), ZARTS can stably discover effective architectures on various benchmarks. In particular, the searched architecture achieves 75.7% top-1 accuracy on ImageNet, outperforming DARTS and most of its variants. 2 RELATED WORK . One-shot Neural Architecture Search . Bender et al. (2018) construct a supernet so that all candidate architectures can be seen as its sub-graphs. DARTS (Liu et al., 2019) introduces architecture parameters to represent the importance of operations in the supernet and updates them by a gradient descent algorithm. Some works (Xu et al., 2020b; Wang et al., 2020b; Dong & Yang, 2019) reduce the memory requirement of DARTS in the search process.
Other works (Zela et al., 2020b; Chen & Hsieh, 2020) point out the instability issue of DARTS, i.e., skip-connections gradually dominate the normal cells, leading to performance collapse during the search stage. Bi-level Optimization for NAS . NAS can be modeled as a bi-level optimization over architecture parameters and network weights. DARTS (Liu et al., 2019) proposes first/second-order approximations to estimate the gradients of the architecture parameters so that they can be trained by gradient descent algorithms. However, we show that such approximation distorts the loss landscape and misleads the training of the architecture parameters. Amended-DARTS (Bi et al., 2019) derives an analytic formula for the gradient w.r.t. the architecture parameters that involves the inverse of the Hessian matrix of the network weights, which is infeasible to compute. In contrast, this work discards the approximation in DARTS and attempts to solve the bi-level optimization with gradient-free algorithms. Zero-order Optimization . Unlike gradient-based optimization methods, which require the objective to be differentiable w.r.t. the parameters, zero-order optimization can train parameters when the gradient of the objective is unavailable or difficult to obtain; it has been widely used in adversarial robustness of neural networks (Chen et al., 2017; Ilyas et al., 2018), meta learning (Song et al., 2020), and transfer learning (Tsai et al., 2020). Liu et al. (2020b) target AutoML and utilize zero-order optimization to discover optimal configurations for ML pipelines. In this work, (to the best of our knowledge) we make the first attempt to apply zero-order optimization to NAS and experiment with multiple algorithms, from vanilla random search (Flaxman et al., 2004) to the more advanced and effective direct search (Golovin et al., 2020), showing its great superiority over gradient-based methods. 3 BI-LEVEL OPTIMIZATION IN DARTS .
Following one-shot NAS (Bender et al., 2018), DARTS constructs a supernet stacked by normal cells and reduction cells. Cells in the supernet are represented by directed acyclic graphs (DAG) with N nodes {x_i}_{i=1}^N, which represent latent feature maps. Each edge e_{i,j} contains multiple candidate operations {o_{i,j}, o ∈ O}, whose importance is represented by architecture parameters α^o_{i,j}. Therefore, NAS can be modeled as a bi-level optimization problem, solved by alternately updating the operation weights ω (parameters within the candidate operations on each edge) and the architecture parameters α:

min_α L_val(ω*(α), α),  s.t. ω*(α) = argmin_ω L_train(ω, α). (1)

3.1 FUNDAMENTAL LIMITATIONS IN THE DARTS FRAMEWORK . By enforcing an unverified (and in fact difficult to verify) assumption that the search loss L_val(ω*(α), α) is differentiable w.r.t. α, DARTS (Liu et al., 2019) proposes a second-order approximation of the optimal weights ω*(α) by applying one step of gradient descent:

ω*(α) ≈ ω*_2nd(α) = ω − ξ ∇_ω L_train(ω, α) = ω′, (2)

where ξ is the learning rate for updating the network weights. Thus the gradient of the loss function w.r.t. α, ∇_α L_val(ω*(α), α), can be computed by the chain rule:

∇_α L_val(ω*(α), α) ≈ ∇_α L_val(ω′, α) − ξ ∇²_{α,ω} L_train(ω, α) ∇_{ω′} L_val(ω′, α).

Nevertheless, the second-order partial derivative is hard to compute, so the authors approximate it with a finite-difference method, as derived in Appendix A.1. To further reduce the computational cost, a first-order approximation is introduced by assuming that ω*(α) is independent of α, as shown in Eq. (3), which is much faster and widely used in many variants of DARTS (Chen et al., 2019; Wang et al., 2020b; Zela et al., 2020b):

ω*(α) ≈ ω*_1st(α) = ω. (3)

The gradient is then simplified as ∇_α L_val(ω*(α), α) ≈ ∇_α L_val(ω, α), which, however, further exacerbates the estimation bias.
Reexamining the definition of ω*(α) in Eq. (1), one would note that it is intractable to derive a closed-form expression for ω*(α), which may even make L_val(ω*(α), α) non-differentiable w.r.t. α. Yet DARTS has to compromise with approximations such as Eq. (2) and Eq. (3) so that differentiability is established and SGD can be applied. However, such a sketchy estimate of the optimal operation weights can distort the loss landscape w.r.t. the architecture parameters and thus mislead the search procedure, as shown in Fig. 1 and analyzed in the next section. 3.2 DISTORTED LANDSCAPE AND INCORRECT OPTIMIZATION PROCESS IN DARTS . Fig. 1 illustrates the loss landscape under perturbations of the architecture parameters α, showing how different approximations of ω* affect the search process. We train a supernet for 50 epochs and randomly select two orthonormal vectors as the directions along which to perturb α. The same pair of perturbation directions is used to draw the landscapes in Fig. 1(a) and (b) for a fair comparison. Fig. 1(a) shows the loss landscape with the first-order approximation in DARTS, ω*_1st(α) = ω, while Fig. 1(b) shows the loss landscape with a more accurate ω*(α), obtained by fine-tuning the network weights ω for 10 iterations for each α. Landscapes (contours) are plotted by evaluating L at grid points ranging from -1 to 1 at an interval of 0.02 in both directions. Global minima are marked with stars on the landscapes, from which we make two observations: 1) The approximation ω*_1st(α) = ω shifts the global minimum and sharpens the landscape, which is a representative characteristic of the instability issue as pointed out by (Zela et al., 2020b). 2) An accurate estimate of ω* leads to a flatter landscape, indicating that the instability issue can be alleviated.
Moreover, we display the landscape with the second-order approximation ω*_2nd in Appendix A.2, which is also sharp but slightly flatter than the first-order one. (A “sharp” landscape has denser contours than a “flat” one.) Algorithm 1 : ZARTS : Zero-order Optimization Framework for Architecture Search | This paper (ZARTS) proposes to apply gradient-estimation-based zero-order optimization methods to tackle neural architecture search (NAS). Two major contributions are made by this paper: (1) it is the first to borrow the methods of zero-order optimization to solve the NAS problem; (2) it shows that zero-order optimization methods can largely avoid the training instability of first- or second-order optimization NAS methods like DARTS (which sharpens the loss landscape). Moreover, this paper also provides an explanation of how ZARTS is connected with DARTS. Extensive experiments demonstrate the efficacy of ZARTS. | SP:072d9da072c1ef224e3cfb6f67ef7c0e78e456af |
Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness | 1 INTRODUCTION . Deep neural networks are vulnerable to maliciously crafted adversarial examples (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2015), which aim to induce erroneous model predictions by adding small perturbations to normal inputs. Due to these threats, a multitude of defense strategies have been proposed to improve adversarial robustness (Kurakin et al., 2017b; Madry et al., 2018; Liao et al., 2018; Wong & Kolter, 2018; Zhang et al., 2019b; Dong et al., 2020a; Pang et al., 2020). However, many defenses have later been shown to be ineffective due to incomplete or incorrect robustness evaluations (Athalye et al., 2018a; Uesato et al., 2018; Carlini et al., 2019; Croce & Hein, 2020b; Dong et al., 2020b; Tramer et al., 2020), making it particularly challenging to understand their effects and identify the actual progress of the field. Therefore, developing methods that can evaluate adversarial robustness accurately and reliably is of profound importance. Adversarial attacks are broadly adopted as an indispensable tool for evaluating the adversarial robustness of different models. One of the most prominent attacks is the projected gradient descent (PGD) method (Madry et al., 2018), which generates an adversarial example by performing iterative gradient updates to maximize a classification loss (e.g., the cross-entropy loss) w.r.t. the input. Although recent methods improve upon PGD by introducing different loss functions (Carlini & Wagner, 2017; Gowal et al., 2019; Croce & Hein, 2020b; Sriramanan et al., 2020) and adjusting the step size (Croce & Hein, 2020b), most of them adopt hand-designed optimization algorithms, such as vanilla gradient descent, momentum (Polyak, 1964), and Adam (Kingma & Ba, 2015).
However, it has been shown that these attacks can be sub-optimal (Croce & Hein, 2020b; Tramer et al., 2020), giving rise to an overestimation of adversarial robustness for some defenses. It is thus imperative to develop more effective optimization algorithms for improving adversarial attacks. Nevertheless, designing a generic optimization algorithm in a hand-crafted manner is non-trivial, considering the different defense models with varying network architectures, defense strategies, and training datasets. To overcome this issue and develop stronger adversarial attacks, in this paper, we propose a Model-Agnostic Meta-Attack (MAMA) approach to learn the optimization algorithms in adversarial attacks automatically. In particular, we parameterize the optimizer with a recurrent neural network (RNN) to mimic the behavior of iterative attacks, which outputs an update direction at each time step during adversarial example generation. By learning to solve a class of optimization problems with different data samples and defense models, the RNN optimizer can benefit from the long-term dependencies of iterative attacks and exploit common structures among the optimization problems. To improve and stabilize training, we then propose a prior-guided refinement strategy, which imposes a prior given by the PGD update rule on the outputs of the RNN optimizer, whose parameters are trained via the maximum a posteriori (MAP) estimation framework. Consequently, the RNN optimizer can learn to yield more effective update directions than hand-designed ones. Despite its effectiveness, the learned optimizer may not generalize well to attacking unseen defenses due to potential overfitting to the defenses used for training. Although training a different optimizer for each defense is possible, the training process can be time-consuming and inconvenient in practice when it is utilized to benchmark adversarial robustness.
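A minimal caricature of such a learned optimizer is sketched below: a per-coordinate recurrent cell reads the current gradient and emits an update direction, to which the PGD prior sign(∇) is added as a crude stand-in for the prior-guided refinement. The cell, its scalar weights, the attack loop, and all names are our own illustrative choices, not the paper's architecture or training procedure.

```python
import numpy as np

def rnn_optimizer_step(grad, h, params, step_size=0.01):
    """One step of a toy learned optimizer: a per-coordinate recurrent cell
    maps (gradient, hidden state) to an update direction, refined by adding
    the PGD prior sign(grad). The weights here are fixed illustrative scalars;
    in a learned optimizer they would be trained across attacks."""
    W_g, W_h, W_o = params
    h = np.tanh(W_g * grad + W_h * h)        # recurrent state, one unit per coordinate
    direction = W_o * h + np.sign(grad)      # learned term + PGD prior
    return step_size * direction, h

def attack_with_learned_opt(grad_fn, x0, eps, n_steps=10, params=(1.0, 0.5, 0.1)):
    """Maximize a loss via the learned update rule, projecting each iterate
    back onto the L_inf ball of radius eps around x0."""
    x, h = x0.copy(), np.zeros_like(x0)
    for _ in range(n_steps):
        delta, h = rnn_optimizer_step(grad_fn(x), h, params)
        x = np.clip(x + delta, x0 - eps, x0 + eps)
    return x
```

Because the cell carries a hidden state across iterations, its update at step t can depend on the whole gradient history, which is the long-term-dependency property the text attributes to the RNN optimizer.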
To endow the learned optimizer with better generalization ability, we develop a model-agnostic training algorithm with a gradient-based meta-train and meta-test process to simulate the shift from seen to unseen defenses. Therefore, the learned optimizer can be directly deployed to attack unseen defenses without retraining or fine-tuning. Extensive experiments validate the effectiveness and generalization ability of the learned optimizers for attacking different defense models. We also demonstrate the flexibility of our method by integrating it with various attacks, including those with different loss functions (Carlini & Wagner, 2017; Gowal et al., 2019; Croce & Hein, 2020b) and initializations (Tashiro et al., 2020). Our method consistently improves the attack performance over baselines, while introducing little extra computational cost. By incorporating our method with an orthogonal technique, output diversified initialization (ODI) (Tashiro et al., 2020), we achieve lower robust test accuracy on all 12 defense models that we study than other state-of-the-art attacks, including the MultiTargeted (MT) attack (Gowal et al., 2019) and AutoAttack (Croce & Hein, 2020b), leading to a more reliable evaluation of adversarial robustness. Moreover, MAMA provides a plug-and-play module for adversarial attacks, which may benefit future attacks as new improvements are made in different aspects. 2 PRELIMINARIES AND RELATED WORK . 2.1 ADVERSARIAL ATTACKS . Let f(x): x ∈ R^D → z ∈ R^K denote a classifier, which outputs the logits z := [z_1, ..., z_K] over K classes for an input image x. The prediction of f is C_f(x) = argmax_{i=1,...,K} z_i. Given the true label y of x, adversarial attacks aim to generate an adversarial example x̂ that is misclassified by f (i.e.
, C_f ( x̂ ) ≠ y ) , while the distance between the adversarial example x̂ and the natural one x measured by the ℓ∞ norm is smaller than a threshold ε , i.e. , ‖x̂ − x‖∞ ≤ ε . Although we introduce our approach based on the ℓ∞ norm only , the extension to other ℓ_p norms is straightforward . The adversarial example x̂ can be generated by solving a constrained optimization problem as x̂ = arg max_{x′} L ( f ( x′ ) , y ) , s.t . ‖x′ − x‖∞ ≤ ε , ( 1 ) where L is a loss function on top of the classifier f ( x ) . For example , L could be the cross-entropy loss as L_ce ( f ( x ) , y ) = − log p_y , where p_y = e^{z_y} / ∑_{i=1}^{K} e^{z_i} denotes the predicted probability of class y , or the margin-based CW loss ( Carlini & Wagner , 2017 ) as L_cw ( f ( x ) , y ) = max_{i≠y} z_i − z_y . A lot of gradient-based methods ( Goodfellow et al. , 2015 ; Kurakin et al. , 2017a ; Carlini & Wagner , 2017 ; Madry et al. , 2018 ; Dong et al. , 2018 ) have been proposed to solve this optimization problem . The projected gradient descent ( PGD ) method ( Madry et al. , 2018 ) performs iterative updates as x̂_{t+1} = Π_{B_ε(x)} ( x̂_t + α_t · sign ( ∇_x L ( f ( x̂_t ) , y ) ) ) , ( 2 ) where B_ε ( x ) = { x′ : ‖x′ − x‖∞ ≤ ε } denotes the ℓ∞ ball centered at x with radius ε , Π ( · ) is the projection operation , and α_t is the step size at the t-th attack iteration . PGD initializes x̂_0 randomly by uniform sampling within B_ε ( x ) and adopts a fixed step size α in every iteration . To develop stronger attacks for reliable robustness evaluation , recent improvements upon PGD include adopting different loss functions ( Gowal et al. , 2019 ; Croce & Hein , 2020b ; Sriramanan et al. , 2020 ) , adjusting the step size ( Croce & Hein , 2020b ) , and designing sampling strategies of initialization ( Tashiro et al. , 2020 ) . For the update rules , besides the vanilla gradient descent adopted in PGD , recent attacks ( Dong et al. , 2018 ; Gowal et al.
, 2019 ) suggest using the momentum ( Polyak , 1964 ) and Adam ( Kingma & Ba , 2015 ) optimizers . In contrast , we aim to enhance adversarial attacks by learning optimization algorithms in an automatic way . 2.2 ADVERSARIAL DEFENSES . To build robust models against adversarial attacks , numerous defense strategies have been proposed , but most defenses can be evaded by new adaptive attacks ( Athalye et al. , 2018a ; Tramer et al. , 2020 ) . Among the existing defenses , adversarial training ( AT ) is arguably the most effective defense technique , in which the network is trained on the adversarial examples generated by attacks . Based on the primary AT frameworks like PGD-AT ( Madry et al. , 2018 ) and TRADES ( Zhang et al. , 2019b ) , improvements have been made via ensemble learning ( Tramèr et al. , 2018 ; Pang et al. , 2019 ) , metric learning ( Mao et al. , 2019 ; Pang et al. , 2020 ) , semi-supervised learning ( Alayrac et al. , 2019 ; Carmon et al. , 2019 ; Zhai et al. , 2019 ) , and self-supervised learning ( Chen et al. , 2020a ; Hendrycks et al. , 2019 ; Kim et al. , 2020 ) . Due to the high computational cost of AT , other efforts are devoted to accelerating the training procedure ( Shafahi et al. , 2019 ; Wong et al. , 2020 ; Zhang et al. , 2019a ) . Recent works emphasize the training tricks ( e.g. , weight decay , batch size , etc . ) in AT ( Gowal et al. , 2020 ; Pang et al. , 2021 ) . However , it has been shown that the robust test accuracy of numerous adversarially trained models can be degraded significantly by using stronger attacks ( Croce & Hein , 2020b ; Tramer et al. , 2020 ) , indicating that the reliable evaluation of adversarial robustness remains an imperative yet challenging task . In this paper , we focus on evaluating the adversarial robustness of AT models due to their superior robustness over other defense techniques . 2.3 META-LEARNING .
Meta-learning ( learning to learn ) studies how to learn learning algorithms automatically from a meta perspective , and has shown promise in few-shot learning ( Finn et al. , 2017 ) and learning optimizers ( Andrychowicz et al. , 2016 ) . The latter is called learning to optimize ( L2O ) . The first L2O method leverages a coordinate-wise long short-term memory ( LSTM ) network as the optimizer model , which is trained by gradient descent on a class of optimization problems ( Andrychowicz et al. , 2016 ) . The optimizer can also be trained by reinforcement learning ( Li & Malik , 2017 ) . Bello et al . ( 2017 ) propose to adopt an RNN controller to generate the update rules of optimization algorithms , which can be described as a domain-specific language . L2O is then extended to efficiently optimize black-box functions ( Chen et al. , 2017 ) . Recent methods focus on improving the training stability and generalization of the learned optimizers by designing better optimizer models and training techniques ( Lv et al. , 2017 ; Wichrowska et al. , 2017 ; Chen et al. , 2020b ) . The general idea of meta-learning has been investigated in adversarial attacks . Most related methods learn to generate adversarial examples by using convolutional neural networks ( CNNs ) , which usually take clean images as inputs and return adversarial examples/perturbations ( Baluja & Fischer , 2017 ; Poursaeed et al. , 2018 ) . Differently , our method adopts an RNN model as the attack optimizer to mimic the behavior of iterative attacks . Empirical results validate the superiority of our approach compared to the CNN-based generators . Meta-learning has also been used in black-box attacks ( Du et al. , 2020 ) and AT defenses ( Xiong & Hsieh , 2020 ; Jiang et al. , 2021 ) . It is noteworthy that Xiong & Hsieh ( 2020 ) propose a similar approach to ours that uses an RNN optimizer to learn attacks for AT .
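To make the meta-learning angle concrete, here is a deliberately tiny skeleton of the seen/unseen-defense split. Everything in it is a stand-in assumption: "defenses" are one-dimensional concave objectives, the "optimizer" has a single learnable parameter (its step size), and meta-training is a simple search over that parameter on the meta-train pool rather than the gradient-based meta-training described in the paper. The point is only the structure: tune the optimizer on seen tasks, then deploy it unchanged on held-out ones.

```python
import numpy as np

def attack_value(alpha, c, t, steps=20, eps=0.5):
    """Run a sign-ascent 'attack' with step size alpha on the toy objective
    f(x) = -c * (x - t)^2 inside the box |x| <= eps; return the final
    objective value (higher means a stronger attack)."""
    x = 0.0
    for _ in range(steps):
        grad = -2.0 * c * (x - t)                        # df/dx
        x = float(np.clip(x + alpha * np.sign(grad), -eps, eps))
    return -c * (x - t) ** 2

def avg_value(alpha, tasks):
    """Average attack objective of a step size over a pool of tasks."""
    return float(np.mean([attack_value(alpha, c, t) for c, t in tasks]))

# Hypothetical pool of "defenses": (curvature, optimum location) pairs.
rng = np.random.default_rng(0)
pool = [(rng.uniform(0.5, 2.0), rng.uniform(-0.4, 0.4)) for _ in range(8)]
meta_train, meta_test = pool[:6], pool[6:]               # seen vs. unseen defenses

# Meta-train: pick the optimizer parameter that works best on seen defenses.
candidates = np.linspace(0.005, 0.1, 20)
alpha_star = max(candidates, key=lambda a: avg_value(a, meta_train))

# Meta-test: deploy on unseen defenses without retraining or fine-tuning.
test_score = avg_value(alpha_star, meta_test)
```

In the paper, the analogue of `alpha_star` is the RNN optimizer's parameters, and the meta-train/meta-test split runs over real defense models with gradient-based updates rather than this toy search.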
Although the RNN optimizer is also adopted in our approach , we improve the attacking performance and generalization ability of the RNN optimizer with new techniques , as presented in Sec . 3 . More detailed comparisons between our work and Xiong & Hsieh ( 2020 ) are shown in Appendix A . | This paper proposes an automatic approach for attacking classifiers, by approximating a possible adaptive attack that can take place on a newly-published defense. The methodology (MAMA) applies meta machine learning techniques, by training it on different defenses to let it grasp what might be a good signal to follow when attacking an unknown classifier. The authors then test this strategy and a naive version of it (that is, the one that is only trained on the attacks, and not fine tuned on a test set) against other attacks proposed in literature. Results suggest a modest improvement against defenses hardened with adversarial training. | SP:c44e42aed2bb38106bdc33fd5e7b3dfb4e9c5584 |
Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness | 1 INTRODUCTION . Deep neural networks are vulnerable to maliciously crafted adversarial examples ( Biggio et al. , 2013 ; Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) , which aim to induce erroneous model predictions by adding small perturbations to the normal inputs . Due to these threats , a multitude of defense strategies have been proposed to improve adversarial robustness ( Kurakin et al. , 2017b ; Madry et al. , 2018 ; Liao et al. , 2018 ; Wong & Kolter , 2018 ; Zhang et al. , 2019b ; Dong et al. , 2020a ; Pang et al. , 2020 ) . However , many defenses have later been shown to be ineffective due to incomplete or incorrect robustness evaluations ( Athalye et al. , 2018a ; Uesato et al. , 2018 ; Carlini et al. , 2019 ; Croce & Hein , 2020b ; Dong et al. , 2020b ; Tramer et al. , 2020 ) , making it particularly challenging to understand their effects and identify the actual progress of the field . Therefore , developing methods that can evaluate adversarial robustness accurately and reliably is of profound importance . Adversarial attacks are broadly adopted as an indispensable solution to evaluate the adversarial robustness of different models . One of the most prominent attacks is the projected gradient descent ( PGD ) method ( Madry et al. , 2018 ) , which generates an adversarial example by performing iterative gradient updates to maximize a classification loss ( e.g. , cross-entropy loss ) w.r.t . the input . Although recent methods improve upon PGD by introducing different loss functions ( Carlini & Wagner , 2017 ; Gowal et al. , 2019 ; Croce & Hein , 2020b ; Sriramanan et al. , 2020 ) and adjusting the step size ( Croce & Hein , 2020b ) , most of them adopt hand-designed optimization algorithms , such as vanilla gradient descent , momentum ( Polyak , 1964 ) , and Adam ( Kingma & Ba , 2015 ) .
, C_f ( x̂ ) ≠ y ) , while keeping the perturbation small as ‖x̂ − x‖∞ ≤ ε . | This paper proposes a model-agnostic meta-attack and achieves promising adversarial attack performance compared with state-of-the-art adversarial attack algorithms . The proposed algorithm overcomes many issues , such as the vanishing gradient problem , the training instability problem , and generalization to other defense models . The detailed experiments support the proposed method . | SP:c44e42aed2bb38106bdc33fd5e7b3dfb4e9c5584 |
GradSign: Model Performance Inference with Theoretical Insights | A key challenge in neural architecture search ( NAS ) is quickly inferring the predictive performance of a broad spectrum of networks to discover statistically accurate and computationally efficient ones . We refer to this task as model performance inference ( MPI ) . The current practice for efficient MPI is gradient-based methods that leverage the gradients of a network at initialization to infer its performance . However , existing gradient-based methods rely only on heuristic metrics and lack the necessary theoretical foundations to consolidate their designs . We propose GradSign , an accurate , simple , and flexible metric for model performance inference with theoretical insights . The key idea behind GradSign is a quantity Ψ to analyze the optimization landscape of different networks at the granularity of individual training samples . Theoretically , we show that both the network ’ s training and true population losses are proportionally upper bounded by Ψ under reasonable assumptions . In addition , we design GradSign , an accurate and simple approximation of Ψ using the gradients of a network evaluated at a random initialization state . Evaluation on seven NAS benchmarks across three training datasets shows that GradSign generalizes well to real-world networks and consistently outperforms state-of-the-art gradient-based methods for MPI evaluated by Spearman ’ s ρ and Kendall ’ s Tau . Additionally , we integrate GradSign into four existing NAS algorithms and show that the GradSign-assisted NAS algorithms outperform their vanilla counterparts by improving the accuracies of best-discovered networks by up to 0.3 % , 1.1 % , and 1.0 % on three real-world tasks . 1 INTRODUCTION . As deep learning methods evolve , neural architectures have gotten progressively larger and more sophisticated ( He et al. , 2015 ; Ioffe & Szegedy , 2015 ; Krizhevsky et al. , 2017 ; Devlin et al. 
, 2019 ; Rumelhart et al. , 1986 ; Srivastava et al. , 2014 ; Kingma & Ba , 2017 ) , making it increasingly challenging to manually design model architectures that can achieve state-of-the-art predictive performance . To alleviate this challenge , recent work has proposed several approaches to automatically discover statistically accurate and computationally efficient neural architectures . The most common approach is neural architecture search ( NAS ) , which explores a comprehensive search space of potential network architectures based on compositions of basic network modules to discover architectures outperforming human-designed counterparts ( Liu et al. , 2018a ; Zoph & Le , 2016 ; Pham et al. , 2018 ) . A key challenge in existing automated approaches is quickly assessing the predictive performance of a diverse set of candidate architectures to discover performant ones . We refer to this task as model performance inference ( MPI ) . A straightforward approach for MPI is directly training candidate architectures on a dataset until convergence and recording the achieved training losses and validation accuracies ( Frankle & Carbin , 2018 ; Chen et al. , 2020a ; Liu et al. , 2018a ; Zoph & Le , 2016 ) . Though accurate , this approach is computationally prohibitive and cannot scale to large networks or datasets . The current practice for efficient MPI is gradient-based methods that leverage the gradient information of a network at initialization to infer its predictive performance ( Lee et al. , 2018 ; Wang et al. , 2020 ; Tanaka et al. , 2020 ) . Compared to directly measuring the accuracy of candidate networks on a training dataset , gradient-based methods are computationally more efficient since they only require evaluating a mini-batch of gradients at initialization . However , existing gradient-based methods rely only on heuristic metrics and lack the necessary theoretical insights to consolidate their designs .
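Since MPI methods are judged by how well their initialization-time scores rank architectures by final accuracy (the paper reports Spearman's ρ and Kendall's Tau), a minimal self-contained Kendall's Tau helps fix ideas. This is the simple tau-a variant without tie corrections (SciPy's `scipy.stats.kendalltau` handles ties); the score/accuracy pairs are made up for illustration.

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Tau-a rank correlation: (concordant - discordant) / total pairs.
    Ignores tie corrections, unlike SciPy's tau-b implementation."""
    assert len(xs) == len(ys) and len(xs) >= 2
    conc = disc = 0
    for i, j in combinations(range(len(xs)), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (len(xs) * (len(xs) - 1) / 2)

# Hypothetical MPI scores at initialization vs. trained accuracies.
scores = [0.2, 0.9, 0.4, 0.7]
accs   = [61.0, 93.5, 70.2, 88.1]
print(kendall_tau(scores, accs))  # 1.0: the metric ranks every pair correctly
```

A value of 1 means the metric orders every pair of architectures the same way their trained accuracies do; 0 means no better than chance.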
In this paper , we propose GradSign , a simple yet accurate metric for MPI with theoretical foundations . GradSign is inspired by analyzing the optimization landscapes of individual training samples . GradSign takes as input a mini-batch of sample-wise gradients evaluated at a random initialization point and outputs statistical evidence about a network that highly correlates with its well-trained predictive performance measured by accuracy on the entire dataset . Prior theoretical results ( Allen-Zhu et al. , 2019 ) show that for a sufficiently large neighborhood of a randomly initialized point , the optimization landscape is nearly convex and semi-smooth . To realize its potential for MPI , we generalize these results to sample-wise optimization landscapes and propose a quantity Ψ to measure the density of sample-wise local optima in the convex areas around a random initialization point . Additionally , we prove that both the training and population losses ( i.e. , generalization error ) of a network are proportionally upper bounded by Ψ² under reasonable assumptions . Based on our theoretical results , we design GradSign , an accurate and simple approximation of Ψ , and empirically show that GradSign generalizes to realistic setups that may violate our assumptions . In addition , our method uses only the sample-wise gradient information at a random initialization point to provide statistical evidence to infer Ψ , which makes it efficient to compute and easy to implement . Extensive evaluation on seven NAS benchmarks ( i.e. , NAS-Bench-101 , NAS-Bench-201 , and five design spaces of NDS ) across three datasets ( i.e. , CIFAR-10 , CIFAR-100 , and ImageNet16-120 ) shows that GradSign consistently outperforms existing SOTA gradient-based methods in all circumstances . Furthermore , we integrate GradSign into existing NAS algorithms and show that the GradSign-assisted variants of these NAS algorithms lead to more accurate architectures . Contributions .
This paper makes the following contributions :
• We provide a new perspective that views the overall optimization landscape of a network as a combination of sample-wise optimization landscapes . Based on this insight , we introduce a new quantity Ψ that provides an upper bound on both a network ’ s training and population losses under reasonable assumptions .
• To infer Ψ , we propose GradSign , an accurate and simple estimation of Ψ . GradSign enables fast and efficient MPI using only the gradients of a network at initialization .
• We empirically show that our method generalizes reasonably well to modern network architectures , outperforming other SOTA gradient-based MPI methods . We additionally show that with the help of our method , SOTA NAS algorithms can be further improved .
2 RELATED WORK . Table 1 summarizes existing approaches to inferring the statistical performance of neural architectures . Sample-based methods assess the performance of a neural architecture by training it on a dataset . Though accurate , sample-based methods require a surrogate training procedure to evaluate each architecture . EconNAS ( Zhou et al. , 2020 ) mitigates the cost of training candidate architectures by reducing the number of training epochs , input dataset sizes , resolution of input images , and model sizes . Theory-based methods leverage recent advances in deep learning theory , such as the Neural Tangent Kernel ( Jacot et al. , 2018 ) and Linear Region Analysis ( Serra et al. , 2018 ) , to assess a model ’ s accuracy ( Chen et al. , 2020a ; Mellor et al. , 2021 ; Park et al. , 2020 ) . In particular , NNGP ( Park et al. , 2020 ) infers a model ’ s performance by fitting its kernel regression parameters on a training dataset and evaluating its accuracy on a validation set , which alleviates the burden of training . As another example , Chen et al . ( 2020a ) utilizes the kernel condition number proposed in Xiao et al .
( 2020 ) , which can be theoretically proved to correlate with the training convergence rate and generalization performance . However , this theoretical evidence is only guaranteed for extremely wide networks with a specialized initialization mode . While the linear region analysis used in Mellor et al . ( 2021 ) , Lin et al . ( 2021 ) and Chen et al . ( 2020a ) is easy to implement , such a technique is only applicable to networks with ReLU activations ( Agarap , 2018 ) . Learning-based methods train a separate network ( e.g. , graph neural networks ) to predict a network ’ s accuracy ( Liu et al. , 2018a ; Luo et al. , 2020 ; Dai et al. , 2019 ; Wen et al. , 2020 ; Chen et al. , 2020b ; Siems et al. , 2020 ) . Though these learned models can achieve high accuracies on a specific task , this approach requires constructing a training dataset with sampled architectures for each downstream task . As a result , existing learning-based methods are generally task-specific and computationally expensive . Gradient-based methods infer the statistical performance of a network by leveraging the gradient information at initialization , which can be easily obtained using an automated differentiation tool in today ’ s ML frameworks , such as PyTorch ( Paszke et al. , 2017 ) and TensorFlow ( Abadi et al. , 2016 ) . The weight-wise salience score computed by several pruning-at-initialization methods ( Lee et al. , 2018 ; Wang et al. , 2020 ; Tanaka et al. , 2020 ) can easily be adapted to MPI settings by summing the scores up . Though lacking theoretical foundations , such migrations have been empirically shown to be effective as baselines in recent works ( Abdelfattah et al. , 2021a ; Mellor et al. , 2021 ; Lin et al. , 2021 ) . An alternative stream of work ( Turner et al. , 2019 ; 2021 ; Theis et al. , 2018 ) uses approximated second-order gradients , known as the empirical Fisher Information Matrix ( FIM ) , at a random initialization point to infer the performance of a network .
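As a concrete illustration of the salience-summing adaptation described above, the sketch below computes a SNIP-style score Σ|w · ∂L/∂w| for one mini-batch. To stay self-contained it uses a linear softmax classifier whose cross-entropy gradient is analytic; the actual methods (SNIP, GraSP, SynFlow) use different salience definitions and score full networks via automatic differentiation, so treat the model, gradient, and salience choice here as assumptions.

```python
import numpy as np

def snip_like_score(W, X, Y):
    """Sum over weights of |w * dL/dw| for one mini-batch, where the model
    is a linear softmax classifier and L is the mean cross-entropy loss.

    W: (K, D) weights, X: (B, D) inputs, Y: (B,) integer labels.
    Returns a single scalar usable as an MPI score.
    """
    B = X.shape[0]
    Z = X @ W.T                                   # (B, K) logits
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)             # row-wise softmax
    E = P.copy()
    E[np.arange(B), Y] -= 1.0                     # d(sum of CE)/dZ = P - onehot(Y)
    G = (E.T @ X) / B                             # (K, D) gradient dL/dW (mean loss)
    return np.abs(W * G).sum()                    # scalar salience-sum score
```

In a real MPI pipeline, one such scalar would be computed per candidate architecture at a random initialization, and the candidates ranked by it.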
The empirical FIM ( Martens , 2014 ) is a valid approximation of a model ’ s predictive performance only if the model ’ s parameters are a maximum likelihood estimate ( MLE ) . However , this assumption is invalid at a random initialization point , making FIM-based algorithms inapplicable . The key difference between GradSign and existing gradient-based methods is that our method is based on a fine-grained analysis of sample-wise optimization landscapes rather than heuristic insights . In addition , GradSign also makes the first attempt at MPI by leveraging the optimization landscape properties contained in sample-wise gradient information , while prior gradient-based methods only focus on gradients evaluated in a full-batch fashion . 2.1 NEURAL ARCHITECTURE SEARCH . Recent works ( He et al. , 2021 ; Cai et al. , 2019 ; 2018 ; Tan & Le , 2019 ; Howard et al. , 2019 ) have proposed several algorithms to explore a NAS search space and discover highly accurate networks . RS ( Bergstra & Bengio , 2012 ) is one of the baseline algorithms , generating and evaluating architectures randomly in the search space . REINFORCE ( Williams , 1992 ) moves a step forward by reframing NAS as a reinforcement learning task where accuracy is the reward and architecture generation is the policy action . Given limited computational resources , BOHB ( Falkner et al. , 2018 ) uses Bayesian Optimization ( BO ) to propose candidates while using HyperBand ( HB ) ( Li et al. , 2017 ) to search for resource allocation . REA ( Real et al. , 2019 ) uses a simple yet effective evolutionary search strategy that achieves state-of-the-art performance . GradSign is complementary to and can be combined with existing NAS algorithms . We integrate GradSign into the NAS algorithms mentioned above and show that GradSign can improve the search procedure of NAS algorithms on various real-world tasks . 2.2 OPTIMIZATION LANDSCAPE ANALYSIS .
Inspired by the fact that over-parameterized networks always find a remarkable fit for a training dataset ( Zhang et al. , 2016 ) , optimization landscape analysis has been one of the main focuses in deep learning theory ( Brutzkus & Globerson , 2017 ; Du et al. , 2018 ; Ge et al. , 2017 ; Li & Yuan , 2017 ; Soltanolkotabi , 2017 ; Allen-Zhu et al. , 2019 ) . Even though existing theoretical results for optimization landscape analysis rely on strict assumptions on the landscape ’ s smoothness , convexity , and initialization point , we can leverage theoretical insights to guide the design of GradSign . [ Figure 1 : Illustration of our theoretical insight that denser sample-wise local optima indicate lower training losses . ( a ) An optimization landscape with sparser sample-wise local optima , corresponding to worse J ( θ∗ ) . ( b ) An optimization landscape with denser sample-wise local optima , corresponding to better J ( θ∗ ) . As the distances ( |θ∗1 − θ∗2| , shown in red ) between the local optima across samples reduce , there is a higher probability that the gradients of different samples have the same sign at a random initialization point , shown as the green areas . ] In addition , SGD-based optimizers trained from randomly initialized states hardly encounter non-smoothness or non-convexity in practice for a variety of architectures ( Goodfellow et al. , 2014 ) . Furthermore , Allen-Zhu et al . ( 2019 ) provide theoretical evidence that for a sufficiently large neighborhood of a randomly initialized point , the optimization landscape is nearly convex and semi-smooth .
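Figure 1's intuition — that denser sample-wise local optima make per-sample gradients more likely to share signs at a random initialization point — can be turned into a simple coordinate-wise statistic. The statistic below is our own illustrative proxy for that intuition, not the paper's exact GradSign definition: for each parameter coordinate it measures how lopsided the per-sample gradient signs are, then sums over coordinates.

```python
import numpy as np

def sign_agreement(sample_grads):
    """sample_grads: (N, P) array with one gradient per training sample,
    all evaluated at the same random initialization.

    Returns sum_j | sum_i sign(g_ij) |, which equals N * P when every sample
    agrees on the sign of every coordinate, and shrinks toward 0 as the
    sample-wise optima spread out and the signs start to disagree.
    """
    return np.abs(np.sign(sample_grads).sum(axis=0)).sum()

agree    = np.array([[1.0, -2.0], [0.5, -0.1], [2.0, -3.0]])   # all samples agree
disagree = np.array([[1.0, -2.0], [-0.5, 0.1], [2.0, -3.0]])   # mixed signs
print(sign_agreement(agree), sign_agreement(disagree))  # 6.0 2.0
```

A higher value corresponds to the denser case in Figure 1(b), which the paper's theory links to lower training and population losses.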
Different from existing optimization landscape analyses depending on objectives evaluated across a mini-batch of training samples , we propose a new perspective that decomposes a mini-batch objective into the aggregation of sample-wise optimization landscapes . To the best of our knowledge , our work is the first attempt to inferring models ’ performance by leveraging sample-wise optimization landscapes . | Model performance inference is a key challenge in neural architecture search. This paper introduces GradSign, an accurate, simple, and flexible metric for model performance inference. GradSign approximately analyzes the optimization landscape of different networks at the granularity of individual training samples using the gradients evaluated at a random initialization state. Experimental results show that GradSign can generalize well to real-world networks and outperform state-of-the-art gradient-based methods for model performance inference. | SP:0eb1faf1d1b44c5c80745824ae303ee086eae8a4 |
GradSign: Model Performance Inference with Theoretical Insights

A key challenge in neural architecture search (NAS) is quickly inferring the predictive performance of a broad spectrum of networks to discover statistically accurate and computationally efficient ones. We refer to this task as model performance inference (MPI). The current practice for efficient MPI is gradient-based methods that leverage the gradients of a network at initialization to infer its performance. However, existing gradient-based methods rely only on heuristic metrics and lack the necessary theoretical foundations to consolidate their designs. We propose GradSign, an accurate, simple, and flexible metric for model performance inference with theoretical insights. The key idea behind GradSign is a quantity Ψ that analyzes the optimization landscape of different networks at the granularity of individual training samples. Theoretically, we show that both the network's training and true population losses are proportionally upper bounded by Ψ under reasonable assumptions. In addition, we design GradSign, an accurate and simple approximation of Ψ using the gradients of a network evaluated at a random initialization state. Evaluation on seven NAS benchmarks across three training datasets shows that GradSign generalizes well to real-world networks and consistently outperforms state-of-the-art gradient-based methods for MPI evaluated by Spearman's ρ and Kendall's Tau. Additionally, we integrate GradSign into four existing NAS algorithms and show that the GradSign-assisted NAS algorithms outperform their vanilla counterparts by improving the accuracies of best-discovered networks by up to 0.3%, 1.1%, and 1.0% on three real-world tasks.

1 INTRODUCTION

As deep learning methods evolve, neural architectures have gotten progressively larger and more sophisticated (He et al., 2015; Ioffe & Szegedy, 2015; Krizhevsky et al., 2017; Devlin et al.
, 2019; Rumelhart et al., 1986; Srivastava et al., 2014; Kingma & Ba, 2017), making it increasingly challenging to manually design model architectures that achieve state-of-the-art predictive performance. To alleviate this challenge, recent work has proposed several approaches to automatically discover statistically accurate and computationally efficient neural architectures. The most common approach is neural architecture search (NAS), which explores a comprehensive search space of potential network architectures based on compositions of basic network modules to discover architectures outperforming human-designed counterparts (Liu et al., 2018a; Zoph & Le, 2016; Pham et al., 2018).

A key challenge in existing automated approaches is quickly assessing the predictive performance of a diverse set of candidate architectures to discover performant ones. We refer to this task as model performance inference (MPI). A straightforward approach to MPI is directly training candidate architectures on a dataset until convergence and recording the achieved training losses and validation accuracies (Frankle & Carbin, 2018; Chen et al., 2020a; Liu et al., 2018a; Zoph & Le, 2016). Though accurate, this approach is computationally prohibitive and cannot scale to large networks or datasets. The current practice for efficient MPI is gradient-based methods that leverage the gradient information of a network at initialization to infer its predictive performance (Lee et al., 2018; Wang et al., 2020; Tanaka et al., 2020). Compared to directly measuring the accuracy of candidate networks on a training dataset, gradient-based methods are computationally more efficient since they only require evaluating a mini-batch of gradients at initialization. However, existing gradient-based methods rely only on heuristic metrics and lack the necessary theoretical insights to consolidate their designs.
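The cost trade-off between the two MPI approaches can be sketched as a search pattern: instead of fully training every candidate, a cheap at-initialization proxy pre-ranks candidates and the training budget is spent only on a shortlist. This is an illustrative sketch, not the exact procedure of any cited method; the callables `proxy_score` and `train_and_eval` are hypothetical placeholders.

```python
def proxy_assisted_search(candidates, proxy_score, train_and_eval, top_k=3):
    """Rank all candidate architectures by a cheap at-initialization
    proxy, then fully train and evaluate only the top-k shortlist,
    returning the best shortlisted candidate by true performance."""
    shortlist = sorted(candidates, key=proxy_score, reverse=True)[:top_k]
    return max(shortlist, key=train_and_eval)

# Toy usage: architectures are integers, the proxy is noisy but
# correlated with the true score (the integer itself).
best = proxy_assisted_search(
    candidates=[4, 9, 1, 7, 5],
    proxy_score=lambda a: a + (0.3 if a % 2 == 0 else -0.3),
    train_and_eval=lambda a: a,
    top_k=3,
)
assert best == 9
```

Only `top_k` expensive evaluations are performed instead of one per candidate, which is the efficiency argument for gradient-based MPI.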
In this paper, we propose GradSign, a simple yet accurate metric for MPI with theoretical foundations. GradSign is inspired by analyzing the optimization landscapes of individual training samples. It takes as input a mini-batch of sample-wise gradients evaluated at a random initialization point and outputs statistical evidence for a network that correlates highly with its well-trained predictive performance, measured by accuracy on the entire dataset. Prior theoretical results (Allen-Zhu et al., 2019) show that for a sufficiently large neighborhood of a randomly initialized point, the optimization landscape is nearly convex and semi-smooth. To realize their potential for MPI, we generalize these results to sample-wise optimization landscapes and propose a quantity Ψ that measures the density of sample-wise local optima in the convex areas around a random initialization point. Additionally, we prove that both the training and population losses (i.e., generalization error) of a network are proportionally upper bounded by Ψ under reasonable assumptions. Based on our theoretical results, we design GradSign, an accurate and simple approximation of Ψ, and empirically show that GradSign generalizes to realistic setups that may violate our assumptions. In addition, our method uses only the sample-wise gradient information at a random initialization point to provide statistical evidence for inferring Ψ, which makes it efficient to compute and easy to implement. Extensive evaluation on seven NAS benchmarks (i.e., NAS-Bench-101, NAS-Bench-201, and five design spaces of NDS) across three datasets (i.e., CIFAR-10, CIFAR-100, and ImageNet16-120) shows that GradSign consistently outperforms existing SOTA gradient-based methods in all circumstances. Furthermore, we integrate GradSign into existing NAS algorithms and show that the GradSign-assisted variants of these NAS algorithms lead to more accurate architectures.

Contributions.
This paper makes the following contributions:

• We provide a new perspective that views the overall optimization landscape of a network as a combination of sample-wise optimization landscapes. Based on this insight, we introduce a new quantity Ψ that provides an upper bound on both a network's training and population losses under reasonable assumptions.

• To infer Ψ, we propose GradSign, an accurate and simple estimation of Ψ. GradSign enables fast and efficient MPI using only the gradients of a network at initialization.

• We empirically show that our method generalizes reasonably well to modern network architectures, outperforming other SOTA gradient-based MPI methods. We additionally show that with the help of our method, SOTA NAS algorithms can be further improved.

2 RELATED WORK

Table 1 summarizes existing approaches to inferring the statistical performance of neural architectures. Sample-based methods assess the performance of a neural architecture by training it on a dataset. Though accurate, sample-based methods require a surrogate training procedure to evaluate each architecture. EconNAS (Zhou et al., 2020) mitigates the cost of training candidate architectures by reducing the number of training epochs, input dataset sizes, resolution of input images, and model sizes. Theory-based methods leverage recent advances in deep learning theory, such as the Neural Tangent Kernel (Jacot et al., 2018) and linear region analysis (Serra et al., 2018), to assess a model's accuracy (Chen et al., 2020a; Mellor et al., 2021; Park et al., 2020). In particular, NNGP (Park et al., 2020) infers a model's performance by fitting its kernel regression parameters on a training dataset and evaluating its accuracy on a validation set, which alleviates the burden of training. As another example, Chen et al. (2020a) utilize the kernel condition number proposed in Xiao et al.
(2020), which can be theoretically proven to correlate with the training convergence rate and generalization performance. However, this theoretical evidence is only guaranteed for extremely wide networks with a specialized initialization mode. While the linear region analysis used in Mellor et al. (2021), Lin et al. (2021), and Chen et al. (2020a) is easy to implement, the technique is only applicable to networks with ReLU activations (Agarap, 2018). Learning-based methods train a separate network (e.g., a graph neural network) to predict a network's accuracy (Liu et al., 2018a; Luo et al., 2020; Dai et al., 2019; Wen et al., 2020; Chen et al., 2020b; Siems et al., 2020). Though these learned models can achieve high accuracy on a specific task, this approach requires constructing a training dataset of sampled architectures for each downstream task. As a result, existing learning-based methods are generally task-specific and computationally expensive. Gradient-based methods infer the statistical performance of a network by leveraging its gradient information at initialization, which can be easily obtained using the automatic differentiation tools in today's ML frameworks, such as PyTorch (Paszke et al., 2017) and TensorFlow (Abadi et al., 2016). The weight-wise salience scores computed by several pruning-at-initialization methods (Lee et al., 2018; Wang et al., 2020; Tanaka et al., 2020) can easily be adapted to MPI settings by summing the scores up. Though lacking theoretical foundations, such migrations have been empirically shown to be effective baselines in recent works (Abdelfattah et al., 2021a; Mellor et al., 2021; Lin et al., 2021). An alternative stream of work (Turner et al., 2019; 2021; Theis et al., 2018) uses approximated second-order gradients, known as the empirical Fisher Information Matrix (FIM), at a random initialization point to infer the performance of a network.
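The salience-summing adaptation mentioned above can be made concrete with a small sketch. It uses a SNIP-style per-weight connection sensitivity |θ ⊙ g| (Lee et al., 2018) purely as one illustrative choice of salience; the function name and the flattened-vector interface are ours.

```python
import numpy as np

def salience_mpi_score(weights, grads):
    """Sum a SNIP-style per-weight salience |theta * grad| (evaluated
    at initialization) over all weights, turning a pruning score into
    a single scalar that ranks candidate architectures for MPI."""
    return float(np.sum(np.abs(weights * grads)))

theta = np.array([1.0, -2.0, 3.0])
grad = np.array([1.0, 1.0, -1.0])
print(salience_mpi_score(theta, grad))  # 6.0
```

Other pruning-at-initialization saliences (e.g., GraSP- or SynFlow-style scores) can be substituted without changing the aggregation.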
The empirical FIM (Martens, 2014) is a valid approximation of a model's predictive performance only if the model's parameters are a maximum likelihood estimate (MLE). However, this assumption is invalid at a random initialization point, making FIM-based algorithms inapplicable. The key difference between GradSign and existing gradient-based methods is that our method is based on a fine-grained analysis of sample-wise optimization landscapes rather than heuristic insights. In addition, GradSign provides the first attempt at MPI that leverages the optimization landscape properties contained in sample-wise gradient information, while prior gradient-based methods only consider gradients evaluated in a full-batch fashion.

2.1 NEURAL ARCHITECTURE SEARCH

Recent works (He et al., 2021; Cai et al., 2019; 2018; Tan & Le, 2019; Howard et al., 2019) have proposed several algorithms to explore a NAS search space and discover highly accurate networks. RS (Bergstra & Bengio, 2012) is a baseline algorithm that generates and evaluates architectures randomly in the search space. REINFORCE (Williams, 1992) moves a step forward by reframing NAS as a reinforcement learning task where accuracy is the reward and architecture generation is the policy action. Given limited computational resources, BOHB (Falkner et al., 2018) uses Bayesian Optimization (BO) to propose candidates while using HyperBand (HB) (Li et al., 2017) to allocate search resources. REA (Real et al., 2019) uses a simple yet effective evolutionary search strategy that achieves state-of-the-art performance. GradSign is complementary to and can be combined with existing NAS algorithms. We integrate GradSign into the NAS algorithms mentioned above and show that GradSign can improve the search procedure of NAS algorithms on various real-world tasks.

2.2 OPTIMIZATION LANDSCAPE ANALYSIS
Inspired by the fact that over-parameterized networks always find a remarkable fit to a training dataset (Zhang et al., 2016), optimization landscape analysis has been one of the main focuses of deep learning theory (Brutzkus & Globerson, 2017; Du et al., 2018; Ge et al., 2017; Li & Yuan, 2017; Soltanolkotabi, 2017; Allen-Zhu et al., 2019). Even though existing theoretical results for optimization landscape analysis rely on strict assumptions on the landscape's smoothness, convexity, and initialization point, we can leverage their theoretical insights to guide the design of GradSign.

[Figure 1: Illustration of our theoretical insight that denser sample-wise local optima indicate lower training losses. Panel (a): an optimization landscape with sparser sample-wise local optima, corresponding to worse J(θ∗). Panel (b): an optimization landscape with denser sample-wise local optima, corresponding to better J(θ∗). As the distances (|θ∗1 − θ∗2|, shown in red) between the local optima across samples reduce, there is a higher probability that the gradients of different samples have the same sign at a random initialization point, shown as the green areas.]

In addition, SGD-based optimizers trained from randomly initialized states hardly encounter non-smoothness or non-convexity in practice for a variety of architectures (Goodfellow et al., 2014). Furthermore, Allen-Zhu et al. (2019) provide theoretical evidence that for a sufficiently large neighborhood of a randomly initialized point, the optimization landscape is nearly convex and semi-smooth.
Different from existing optimization landscape analyses that depend on objectives evaluated across a mini-batch of training samples, we propose a new perspective that decomposes a mini-batch objective into an aggregation of sample-wise optimization landscapes. To the best of our knowledge, our work is the first attempt to infer a model's performance by leveraging sample-wise optimization landscapes.
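The sample-wise sign statistic underlying this perspective can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact formula: it scores a network from a mini-batch of sample-wise gradients at a random initialization by how consistently the gradients agree in sign across samples for each parameter, which the analysis above links to denser sample-wise local optima. The function name and the specific aggregation are our own simplification.

```python
import numpy as np

def sign_agreement_score(sample_grads):
    """Score a network from sample-wise gradients at a random init.

    sample_grads has shape [batch, n_params]: one flattened gradient
    per training sample. For each parameter, sum the gradient signs
    across samples; large magnitudes mean the samples "agree", which
    indicates denser sample-wise local optima and hence lower
    achievable losses. (Illustrative simplification.)
    """
    signs = np.sign(sample_grads)             # [batch, n_params]
    return float(np.abs(signs.sum(axis=0)).sum())

# Two toy "networks": one whose per-sample gradients agree in sign
# for every parameter, one whose gradients conflict. The agreeing
# one scores higher.
aligned = np.array([[0.5, -0.2], [0.1, -0.9]])
conflict = np.array([[0.5, -0.2], [-0.1, 0.9]])
assert sign_agreement_score(aligned) > sign_agreement_score(conflict)
```

Note that, unlike full-batch methods, the statistic is destroyed if the gradients are averaged over the batch before the sign is taken: the `conflict` example above has a nonzero mean gradient but a zero agreement score.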
On the approximation properties of recurrent encoder-decoder architectures

1 Introduction

Encoder-decoder is an increasingly popular architecture for sequence-to-sequence modelling problems (Sutskever et al., 2014; Chiu et al., 2018; Venugopalan et al., 2015). The core of this architecture is to first encode the input sequence into a vector using the encoder and then map the vector into the output sequence through the decoder. In particular, such an architecture forms the main component of the transformer network (Vaswani et al., 2017), which has become a powerful method for modelling sequence-to-sequence relationships (Parmar et al., 2018; Beltagy et al., 2020; Li et al., 2019). The encoder-decoder family of structures differs significantly from the direct application of recurrent neural networks (RNNs, Elman (1990)) and their generalisations (Hochreiter & Schmidhuber, 1997; Cho et al., 2014b) for processing sequences. However, both architectures can be considered as modelling mappings between sequences, albeit with different underlying structures. Hence, a natural but unresolved question is: how are these approaches fundamentally different? Answering this question is not only of theoretical importance but also of practical interest. Currently, architectural selection for different time series modelling tasks is predominantly empirical. Thus, it is desirable to develop a concrete mathematical framework to understand the key differences between separate architectures in order to guide practitioners in a principled way.

∗Equal contribution †Corresponding author

In this paper, we investigate the approximation properties of encoder-decoder architectures. Approximation is one of the most basic and important problems in supervised learning. It considers to what extent a model can fit a target.
In particular, we prove a general approximation result in the linear setting, which characterises the types of temporal input-output relationships that can be efficiently approximated by encoder-decoder architectures. These results reveal that such architectures essentially generalise RNNs by lifting the requirement of time-homogeneity (see Remark 3.2) on the target relationships. Hence, they can be used to tackle a broader class of sequence-to-sequence problems. Furthermore, of particular interest is the identification of a "temporal product structure", a precise property of the target temporal relationship that highlights another intrinsic difference between recurrent encoder-decoders and RNNs. Our main contributions can be summarised as follows.

1. We prove a universal approximation result for recurrent encoder-decoder architectures in the linear setting, including approximation rates.

2. We show that, in the considered setting, the recurrent encoder-decoder generalises RNNs and can approximate time-inhomogeneous relationships, and further adapts to additional temporal product structures in the target relationship. This answers precisely how encoder-decoders differ from RNNs, at least in the considered setting.

Organisation. In Section 2, we review related work on encoder-decoder architectures and general approximation theories of sequence modelling. The approximation problem is formulated in Section 3. Our main results, their consequences, and numerical illustrations are presented in Section 4. All proofs and numerical details are included in the appendices.

Notations. For consistency, we adhere to the following notations. Boldfaced letters are reserved for sequences or paths, which can be understood as functions of time. Lower-case letters can denote vectors or scalars. Matrices are denoted by capital letters. For α ∈ N, Cα denotes the space of functions with continuous derivatives up to order α.
2 Related work

We first review some previous works on sequence-to-sequence modelling. The encoder-decoder architecture first appeared in Kalchbrenner & Blunsom (2013), where the input sequence is mapped into a vector using convolutional neural networks (CNNs), and a recurrent structure then maps the vector to the output sequence. With the flexibility of choosing the underlying structure of the encoder and decoder, numerous models based on this architecture have since been proposed. For instance, Cho et al. (2014b) used gated RNNs as both the encoder and decoder, while in later work (Cho et al., 2014a) they proposed a CNN-based decoder. Sutskever et al. (2014) proposed a deep LSTM for both the encoder and decoder. Bahdanau et al. (2015) first introduced the attention mechanism, which was further developed in the well-known transformer networks (Vaswani et al., 2017). However, most research on encoder-decoder architectures has focused on applications; a theoretical understanding is helpful for their further improvement and development. From the theoretical point of view, Ye & Sung (2019) studied several theoretical properties of CNN encoder-decoders, including expressiveness, generalisation capability, and optimisation landscape. Of particular relevance to the current work is expressiveness, which considers the relationships that can be generated by the architecture; however, this is not approximation. Yun et al. (2020) proved the universal approximation property of transformers for certain classes of functions, for example permutation-equivariant functions, but they did not consider the actual dynamical properties of target relationships that affect approximation. Dynamical properties such as memory, smoothness, and low-rank structure are essential, because they can precisely characterise different temporal relationships and affect the approximation capabilities of models.
Assuming the target is generated from a hidden dynamical system is one widely applied approach (Maass et al., 2007; Schäfer & Zimmermann, 2007; Doya, 1993; Funahashi & Nakamura, 1993). In contrast, a functional-based approach was recently introduced, where the target temporal relationships are generated from functionals satisfying specific properties such as linearity, continuity, regularity, and time-homogeneity (Li et al., 2021). In Li et al. (2021), the approximation properties of linear RNN models are studied, and the results therein show that the approximation efficiency is related to the memory structure. In Jiang et al. (2021), similar formulations are applied to investigate convolutional architectures, where the results suggest that targets with certain spectral regularity can be well approximated by dilated CNNs. Under this framework, the target temporal relationships that can be efficiently approximated are characterised by properties such as memory, smoothness, and sparsity. This enables us to make precise mathematical comparisons between different architectures. Our results in this work reveal that encoder-decoders have a special temporal product structure which is intrinsically different from other sequence modelling architectures.

3 Problem formulation

In this section, we precisely define the input space, output space, concept space, and hypothesis space, respectively.

Functional formulation of temporal modelling. First, we define the input and output spaces precisely. A temporal sequence can be viewed as a function of time t. The input space is defined by X = C0((−∞, 0], Rd), the space of continuous functions from (−∞, 0] to Rd vanishing at infinity, where d ∈ N+ is the dimension. Denoting an element of X by x = {xt ∈ Rd : t ≤ 0}, we equip X with the supremum norm ‖x‖X := sup_{t≤0} ‖xt‖∞.
We take the output space to be Y = Cb([0, ∞), R), the space of bounded continuous functions from [0, ∞) to R. We consider real-valued outputs, since each dimension can be handled individually for vector-valued outputs. The mapping between input and output sequences can be formulated as a sequence of functionals, i.e., yt = Ht(x), t ≥ 0. The output yt at time step t depends on the input sequence x. The ground-truth relation between inputs and outputs is formulated by the sequence of functionals H := {Ht : t ≥ 0}.

We provide an example to illustrate the above formulation. Given an input x, let the output y be a smoothed version of x, resulting from convolving x with the Gaussian kernel g(s) = (1/√(2π)) exp(−s²/2). This relation can be formulated as yt = Ht(x) = ∫₀^∞ g(t + s) x₋ₛ ds.

The RNN encoder-decoder model. For the supervised learning problem, our goal is to use a model to learn the target relationship H. First, we define the model. Among all the variants of encoder-decoder architectures, the RNN encoder-decoder introduced in Cho et al. (2014b), where the encoder and decoder are both RNNs, can be considered the simplest and most representative model. We study this particular model because we aim to eliminate other factors and focus only on the encoder-decoder architecture itself. Under our setting, the simplified model of Cho et al. (2014b) with RNNs as both encoder and decoder can be formulated as

    h_s = σ_E(W_E h_{s−1} + U_E x_s + b_E),    v = h_τ,
    g_t = σ_D(W_D g_{t−1} + b_D),    g_0 = v,
    o_t = W_O g_t + b_O,    (1)

where h_s, g_t are hidden states of the encoder and decoder, respectively. The recurrent activation functions are denoted by σ_E and σ_D. Here, τ denotes the terminating time step of the encoder, and v is the summary of the input sequence, called the coding vector. The model prediction is denoted o_t ∈ R. All other symbols are model parameters.
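The forward pass of Eq. (1) can be sketched in a few lines of NumPy. This is a minimal illustration with tanh standing in for both recurrent activations (the equation leaves σ_E, σ_D unspecified); shapes and names follow the equation.

```python
import numpy as np

def rnn_encoder_decoder(x_seq, WE, UE, bE, WD, bD, WO, bO, out_steps):
    """Forward pass of the simplified RNN encoder-decoder (Eq. 1):
    the encoder folds the whole input sequence into its final hidden
    state v = h_tau; v seeds the input-free decoder, which emits one
    output o_t = WO g_t + bO per step."""
    h = np.zeros(WE.shape[0])
    for x in x_seq:                       # encoder recursion
        h = np.tanh(WE @ h + UE @ x + bE)
    g = h                                 # coding vector v = h_tau
    outputs = []
    for _ in range(out_steps):            # decoder runs autonomously
        g = np.tanh(WD @ g + bD)
        outputs.append(WO @ g + bO)
    return np.array(outputs)

# Toy check: 2-d hidden states, 1-d inputs, 4 encoder steps, 3 outputs.
rng = np.random.default_rng(0)
out = rnn_encoder_decoder(
    x_seq=rng.normal(size=(4, 1)),
    WE=0.5 * rng.normal(size=(2, 2)), UE=rng.normal(size=(2, 1)),
    bE=np.zeros(2), WD=0.5 * rng.normal(size=(2, 2)), bD=np.zeros(2),
    WO=rng.normal(size=(1, 2)), bO=np.zeros(1), out_steps=3)
assert out.shape == (3, 1)
```

Note how the decoder loop takes no input: once v is formed, the output sequence is determined entirely by the decoder dynamics, which is the structural feature the continuous-time idealisation below preserves.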
Equation (1) describes the following model dynamics. First, the encoder reads the entire input x and summarises it into a fixed-size coding vector v, which is also the last hidden state of the encoder. Next, the coding vector is passed to the decoder as its initial state, and the decoder produces an output at each time step. Note that the encoder has a terminating time and the decoder has a starting time; this is why we take the input and output as semi-infinite sequences. We study a linear, residual, continuous-time idealisation of the model dynamics (1):

    ḣ_s = W h_s + U x_s,    v = Q h_0,    s ≤ 0,
    ġ_t = V g_t,    g_0 = P v,    o_t = c^T g_t,    t ≥ 0,    (2)

where W ∈ R^{mE×mE}, U ∈ R^{mE×d}, Q ∈ R^{N×mE}, V ∈ R^{mD×mD}, P ∈ R^{mD×N}, and c ∈ R^{mD} are parameters. Here mE and mD denote the widths of the encoder and decoder, respectively. The coding vector v has dimension N, which we control via the linear transformations Q and P. We assume h_{−∞} = 0, the usual choice of initial condition for RNN hidden states. Since our goal is to investigate approximation problems over large time horizons, we restrict to stable RNN encoder-decoders, where

    W ∈ W_{mE} := {W ∈ R^{mE×mE} : the eigenvalues of W have negative real parts},    (3)
    V ∈ V_{mD} := {V ∈ R^{mD×mD} : the eigenvalues of V have negative real parts}.    (4)

The hypothesis space of RNN encoder-decoder models with arbitrary widths and coding vector dimension is defined as Ĥ := ⋃_{mE, mD, N ∈ N+} Ĥ_{mE, mD, N}, where

    Ĥ_{mE, mD, N} := { Ĥ := {Ĥ_t : t ≥ 0} : Ĥ_t(x) = c^T e^{Vt} P ∫₀^∞ Q e^{Ws} U x₋ₛ ds,
        with (W, U, Q, V, P, c) ∈ W_{mE} × R^{mE×d} × R^{N×mE} × V_{mD} × R^{mD×N} × R^{mD} }.    (5)

The widths mE, mD and the coding vector dimension N together control the capacity/complexity of the hypothesis space. Note that the assumptions on the eigenvalues of W and V ensure that the parameterised linear functionals are continuous.
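For small matrices, the functionals in (5) can be evaluated numerically. The sketch below truncates the integral over s to the length of the sampled past input and uses a truncated-Taylor matrix exponential (adequate for small, well-scaled stable matrices); in the scalar stable case it also exhibits the exponential decay of the output in t that stability of V enforces. All names and the discretisation are our own illustration.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via a truncated Taylor series; adequate for
    the small, well-scaled matrices used in this illustration."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def H_hat(t, x_past, dt, W, U, Q, V, P, c):
    """Evaluate H_t(x) = c^T e^{Vt} P * int_0^inf Q e^{Ws} U x_{-s} ds
    from the hypothesis space (5), truncating the integral to the
    sampled past, where x_past[k] approximates x_{-k*dt}."""
    integral = sum(Q @ expm(W * (k * dt)) @ U @ xk * dt
                   for k, xk in enumerate(x_past))
    return float(c @ expm(V * t) @ P @ integral)

# Scalar (1x1) stable instance: with W = V = -1 and unit maps, a
# constant unit input over the past [0, 6] gives approximately
# H_t(x) = e^{-t} * (1 - e^{-6}), decaying exponentially in t.
one = np.ones((1, 1))
x_past = [np.ones(1)] * 600                  # constant input, dt = 0.01
args = (x_past, 0.01, -one, one, one, -one, one, np.ones(1))
h0, h1 = H_hat(0.0, *args), H_hat(1.0, *args)
assert abs(h0 - 1.0) < 0.02
assert abs(h1 - np.exp(-1) * h0) < 1e-6      # exact e^{-t} factorisation
```

The factorisation checked in the last line is the temporal product structure: the dependence on t enters only through the prefactor e^{Vt}, separately from the integral over the past.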
Due to the mathematical form (5), not all functionals can be represented by RNN encoder-decoders. To achieve a good approximation, the target functionals must possess certain structures. We introduce the following definitions to clarify these structures.

Definition 3.1. Let H = {Ht : t ≥ 0} be a sequence of functionals.

1. For any t ≥ 0, the functional Ht is linear and continuous if for any λ1, λ2 ∈ R and x1, x2 ∈ X, we have Ht(λ1 x1 + λ2 x2) = λ1 Ht(x1) + λ2 Ht(x2), and ‖Ht‖ := sup_{x∈X, ‖x‖X ≤ 1} |Ht(x)| < ∞, where ‖Ht‖ denotes the induced functional norm.

2. For any t ≥ 0, the functional Ht is regular if for any sequence {x^(n)}_{n=1}^∞ ⊂ X such that lim_{n→∞} x^(n)_s = 0 for almost every s ≤ 0 (with respect to Lebesgue measure), we have lim_{n→∞} Ht(x^(n)) = 0.

For a sequence of functionals H, we define its norm by ‖H‖ := ∫₀^∞ ‖Ht‖ dt.

Remark 3.1. The definitions of linear and continuous functionals are standard. One can view regular functionals as those not determined by inputs on arbitrarily small time intervals, e.g., an infinitely thin spike (a δ-function).

Given the above definitions, we immediately have the following observation.

Proposition 3.1. Let Ĥ ∈ Ĥ be a sequence of functionals in the RNN encoder-decoder hypothesis space (see (5)). Then for any t ≥ 0, Ĥt ∈ Ĥ is a linear, continuous, and regular functional. Furthermore, ‖Ĥt‖ decays exponentially as a function of t.

The proof is given in Appendix A. This proposition characterises properties of the encoder-decoder hypothesis space. In particular, it differs from the RNN hypothesis space discussed in Li et al. (2021), since the encoder-decoder is not necessarily time-homogeneous.

Remark 3.2. A sequence of functionals H is time-homogeneous if for any t, τ ≥ 0, Ht(x) = H_{t+τ}(x^(τ)), with x^(τ)_s = x_{s−τ} for all s ∈ R. That is, if the input is shifted to the right by τ, the output is also shifted by τ.
Temporal convolution is an example of a time-homogeneous operation (recall the Gaussian convolution discussed in Section 3). An example of a time-inhomogeneous relationship is video captioning: shifts in the sequence of input video frames do not necessarily lead to corresponding shifts in the caption text sequence.

Relation with RNNs. Here, we emphasise the differences between the encoder-decoder hypothesis space and the RNN hypothesis space discussed in Li et al. (2021), where Ĥ^(RNN)_t(x) = ∫₀^∞ c^T e^{W(t+s)} U x₋ₛ ds. A key difference is that the encoder-decoder has a structure involving two temporal parameters t and s, while the RNN depends only on t + s, due to time-homogeneity. Owing to this difference and the fact that Ĥ^(RNN) ⊂ Ĥ, the encoder-decoder hypothesis space (5) is more general, with the extra capability to learn time-inhomogeneous relationships. Furthermore, e^{Vt} and e^{Ws} adapt to a temporal product structure, which is an intrinsic difference between encoder-decoders and other architectures. We will discuss this in detail in the next section.
On the approximation properties of recurrent encoder-decoder architectures | 1 Introduction . Encoder-decoder is an increasingly popular architecture for sequence to sequence modelling problems ( Sutskever et al. , 2014 ; Chiu et al. , 2018 ; Venugopalan et al. , 2015 ) . The core of this architecture is to first encode the input sequence into a vector using the encoder and then map the vector into the output sequence through the decoder . In particular , such architecture forms the main component in the transformer network ( Vaswani et al. , 2017 ) , which has become a powerful method for modelling sequence to sequence relationships ( Parmar et al. , 2018 ; Beltagy et al. , 2020 ; Li et al. , 2019 ) . The encoder-decoder family of structures differ significantly from direct application of recurrent neural networks ( RNNs , Elman ( 1990 ) ) and its generalisations ( Hochreiter & Schmidhuber , 1997 ; Cho et al. , 2014b ) for processing sequences . However , both architectures can be considered as modelling mappings between sequences , albeit with different underlying structures . Hence , a natural but unresolved question is : how are these approaches fundamentally different ? Answering this question is not only of theoretical importance but also of practical interest . Currently , architectural selection for different time series modelling tasks is predominantly empirical . Thus , it is desirable to develop a concrete mathematical framework to understand the key differences between separate architectures in order to guide practitioners in a principled way . ∗Equal contribution †Corresponding author In this paper , we investigate the approximation properties of encoder-decoder architectures . Approximation is one of the most basic and important problems for supervised learning . It considers to what extent the model can fit a target . 
In particular , we prove a general approximation result in the linear setting , which characterises the types of temporal input-output relationships that can be efficiently approximated by encoder-decoder architectures . These results reveal that such architectures essentially generalise RNNs by lifting the requirement of time-homogeneity ( see Remark 3.2 ) in the target relationships . Hence , they can be used to tackle a broader class of sequence to sequence problems . Furthermore , of particular interest is the identification of a “ temporal product structure ” : a precise property of the target temporal relationship that highlights another intrinsic difference between recurrent encoder-decoders and RNNs . Our main contributions can be summarised as follows . 1 . We prove a universal approximation result for recurrent encoder-decoder architectures in the linear setting , including the approximation rates . 2 . We show that in the considered setting , the recurrent encoder-decoder generalises RNNs and can approximate time-inhomogeneous relationships , and further adapts to additional temporal product structures in the target relationship . This answers precisely how encoder-decoders are different from RNNs , at least in the considered setting . Organisation . In Section 2 , we review the related work on encoder-decoder architectures and general approximation theories of sequence modelling . The approximation problem is formulated in Section 3 . Our main results , their consequences and numerical illustrations are presented in Section 4 . All the proofs and numerical details are included in appendices . Notations . For consistency , we adhere to the following notations . Boldfaced letters are reserved for sequences or paths , which can be understood as functions of time . Lower case letters can mean vectors or scalars . Matrices are denoted by capital letters . For α ∈ N , C^α denotes the space of functions with continuous derivatives up to order α .
2 Related work . We first review some previous works on sequence to sequence modelling . The encoder-decoder architecture first appeared in Kalchbrenner & Blunsom ( 2013 ) , where they map the input sequence into a vector using convolutional neural networks ( CNNs ) , and then use a recurrent structure to map the vector to the output sequence . With the flexibility of manipulating the underlying structure of the encoder and decoder , numerous models based on this architecture have since emerged . For instance , Cho et al . ( 2014b ) used gated RNNs as both the encoder and decoder , while in later work ( Cho et al. , 2014a ) , they proposed a CNN-based decoder . In Sutskever et al . ( 2014 ) , they proposed a deep LSTM for both the encoder and decoder . Bahdanau et al . ( 2015 ) first introduced the attention mechanism , which was further developed in the well-known transformer networks ( Vaswani et al. , 2017 ) . However , most of the research on encoder-decoder architectures has focused on applications . A theoretical understanding is helpful for their further improvement and development . From the theoretical point of view , Ye & Sung ( 2019 ) studied several theoretical properties of CNN encoder-decoders , including expressiveness , generalisation capability and optimisation landscape . Of particular relevance to the current work is expressiveness , which considers the relationships that can be generated from the architecture . However , expressiveness is not the same as approximation . Yun et al . ( 2020 ) proved the universal approximation property of transformers for certain classes of functions , for example , permutation equivariant functions , but they did not consider the actual dynamical properties of target relationships that affect approximation . Dynamical properties such as memory , smoothness and low rank structures are essential , because they can precisely characterise different temporal relationships and affect the approximation capabilities of models .
Assuming the target is generated from a hidden dynamical system is one widely applied approach ( Maass et al. , 2007 ; Schäfer & Zimmermann , 2007 ; Doya , 1993 ; Funahashi & Nakamura , 1993 ) . In contrast , a functional-based approach was recently introduced , where the target temporal relationships are generated from functionals satisfying specific properties such as linearity , continuity , regularity and time-homogeneity ( Li et al. , 2021 ) . In Li et al . ( 2021 ) , the approximation properties of linear RNN models are studied , and the results therein show that the approximation efficiency is related to the memory structure . In Jiang et al . ( 2021 ) , similar formulations are applied to investigate convolutional architectures , where the results suggest that targets with certain spectrum regularity can be well approximated by dilated CNNs . Under this framework , the target temporal relationship that can be efficiently approximated is characterised by properties such as memory , smoothness and sparsity . This enables us to make precise mathematical comparisons between different architectures . Our results in this work reveal that encoder-decoders have a special temporal product structure which is intrinsically different from other sequence modelling architectures . 3 Problem formulation . In this section , we precisely define the input space , output space , concept space and hypothesis space , respectively . Functional formulation of temporal modelling . First , we define the input and output spaces precisely . A temporal sequence can be viewed as a function of time t. The input space is defined by X = C_0 ( ( −∞ , 0 ] , R^d ) . This is the space of continuous functions from ( −∞ , 0 ] to R^d vanishing at infinity , where d ∈ N+ is the dimension . Denoting an element of X by x = { x_t ∈ R^d : t ≤ 0 } , we equip X with the supremum norm ‖x‖_X := sup_{t≤0} ‖x_t‖_∞ .
We take the output space as Y = C_b ( [ 0 , ∞ ) , R ) , the space of bounded continuous functions from [ 0 , ∞ ) to R. We consider real-valued outputs , since each dimension can be handled individually for vector-valued outputs . The mapping between input and output sequences can be formulated as a sequence of functionals , i.e . y_t = H_t ( x ) , t ≥ 0 . The output y_t at time step t depends on the input sequence x . The ground truth relation between inputs and outputs is formulated by the sequence of functionals H := { H_t : t ≥ 0 } . We provide an example to illustrate the above formulation . Given an input x , the output y is a smoothed version of x , resulting from convolving x with the Gaussian kernel g ( s ) = (1/√(2π)) exp ( −s²/2 ) . This relation can be formulated as y_t = H_t ( x ) = ∫_0^∞ g ( t+s ) x_{−s} ds . The RNN encoder-decoder model . For the supervised learning problem , our goal is to use a model to learn the target relationship H. First , we define the model . Among all the different variants of encoder-decoder architectures , the RNN encoder-decoder introduced in Cho et al . ( 2014b ) can be considered the simplest and most representative model , where the encoder and decoder are both RNNs . We study this particular model as we try to eliminate other factors and focus only on the encoder-decoder architecture itself . Under our setting , the simplified model of Cho et al . ( 2014b ) with RNNs as both encoder and decoder can be formulated as h_s = σ_E ( W_E h_{s−1} + U_E x_s + b_E ) , v = h_τ , g_t = σ_D ( W_D g_{t−1} + b_D ) , g_0 = v , o_t = W_O g_t + b_O , ( 1 ) where h_t , g_t are hidden states of the encoder and decoder respectively . Recurrent activation functions are denoted by σ_E and σ_D . Here , τ denotes the terminating time step of the encoder , and v is the summary of the input sequence , which is called the coding vector . The model prediction is denoted by o_t ∈ R. All the other notations are model parameters .
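A minimal NumPy sketch of the discrete model ( 1 ) may help fix ideas ; the widths , the sequence length and the random parameters below are illustrative choices , not taken from the paper :

```python
import numpy as np

rng = np.random.default_rng(0)
d, mE, mD = 3, 8, 8                    # input dim, encoder width, decoder width

# Random model parameters (illustrative only).
WE, UE, bE = 0.1 * rng.normal(size=(mE, mE)), rng.normal(size=(mE, d)), np.zeros(mE)
WD, bD = 0.1 * rng.normal(size=(mD, mD)), np.zeros(mD)
WO, bO = rng.normal(size=(1, mD)), 0.0

def encode(xs):
    """h_s = sigma_E(W_E h_{s-1} + U_E x_s + b_E); return v = h_tau."""
    h = np.zeros(mE)
    for x_s in xs:
        h = np.tanh(WE @ h + UE @ x_s + bE)
    return h                           # coding vector v (last hidden state)

def decode(v, T):
    """g_0 = v; g_t = sigma_D(W_D g_{t-1} + b_D); o_t = W_O g_t + b_O."""
    g, outs = v, []
    for _ in range(T):
        g = np.tanh(WD @ g + bD)
        outs.append(float(WO @ g + bO))
    return outs

xs = rng.normal(size=(5, d))           # input sequence, tau = 5
outputs = decode(encode(xs), T=4)      # decoder emits 4 output steps
```

Note how the only bridge between encoder and decoder is the single coding vector v, which is exactly what the approximation analysis later exploits.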
Equation ( 1 ) describes the following model dynamics . First , the encoder reads the entire input x , and then summarises the input into a fixed size coding vector v , which is also the last hidden state of the encoder . Next , the coding vector is passed into the decoder as the initial state , and then the decoder produces an output at each time step . Note that the encoder has a terminating time , and the decoder has a starting time . This is the reason why we take the input and output as semi-infinite sequences . We study a linear , residual and continuous-time idealisation of the model dynamics ( 1 ) : ḣ_s = W h_s + U x_s , v = Q h_0 , s ≤ 0 ; ġ_t = V g_t , g_0 = P v , o_t = c^⊤ g_t , t ≥ 0 , ( 2 ) where W ∈ R^{mE×mE} , U ∈ R^{mE×d} , Q ∈ R^{N×mE} , V ∈ R^{mD×mD} , P ∈ R^{mD×N} and c ∈ R^{mD} are parameters . mE and mD denote the widths of the encoder and decoder , respectively . The coding vector v has dimension N , which we control by applying linear transformations . We assume h_{−∞} = 0 , which is the usual choice for the initial condition of RNN hidden states . Since our goal is to investigate approximation problems over large time horizons , we restrict attention to stable RNN encoder-decoders , where W ∈ W_{mE} := { W ∈ R^{mE×mE} : eigenvalues of W have negative real parts } , ( 3 ) V ∈ V_{mD} := { V ∈ R^{mD×mD} : eigenvalues of V have negative real parts } . ( 4 ) The hypothesis space of RNN encoder-decoder models with arbitrary widths and coding vector dimension is defined as Ĥ := ⋃_{mE , mD , N ∈ N+} Ĥ_{mE , mD , N} , where Ĥ_{mE , mD , N} := { Ĥ := { Ĥ_t : t ≥ 0 } : Ĥ_t ( x ) = c^⊤ e^{Vt} P ∫_0^∞ Q e^{Ws} U x_{−s} ds , with ( W , U , Q , V , P , c ) ∈ W_{mE} × R^{mE×d} × R^{N×mE} × V_{mD} × R^{mD×N} × R^{mD} } . ( 5 ) The widths mE , mD and the coding vector dimension N together control the capacity/complexity of the hypothesis space . Note that the assumptions on the eigenvalues of W and V ensure that the parametrised linear functionals are continuous .
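To see how the ODE model ( 2 ) induces the functional form in ( 5 ) , one can simulate ( 2 ) with forward Euler and compare against direct quadrature of ( 5 ) . The matrices , input path and step sizes below are illustrative choices , a sketch rather than the paper's experiments :

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (fine for these generic A)."""
    lam, E = np.linalg.eig(A)
    return (E @ np.diag(np.exp(lam)) @ np.linalg.inv(E)).real

def trapezoid(y, x):
    """Composite trapezoidal rule along the first axis of y."""
    dx = np.diff(x)
    return np.tensordot(dx, 0.5 * (np.asarray(y)[1:] + np.asarray(y)[:-1]), axes=(0, 0))

# A small stable instance of model (2); all numbers are illustrative.
W = np.array([[-1.0, 0.3], [0.0, -2.0]])   # encoder matrix, eigenvalues -1, -2
V = np.array([[-1.5, 0.0], [0.2, -0.5]])   # decoder matrix, eigenvalues -1.5, -0.5
U = np.array([[1.0], [0.5]])
Q, P = np.eye(2), np.eye(2)
c = np.array([1.0, 0.5])
x = lambda s: np.exp(-0.5 * s)             # input path x_{-s} for s >= 0

# (a) Forward-Euler simulation of (2): encode over s in (-T, 0], then decode.
ds, T, t_eval = 1e-3, 25.0, 1.0
h = np.zeros(2)
for s in np.arange(T, 0.0, -ds):           # h' = W h + U x_s with h(-T) = 0
    h = h + ds * (W @ h + U @ np.atleast_1d(x(s)))
g = P @ (Q @ h)                            # g_0 = P v with v = Q h_0
for _ in range(round(t_eval / ds)):        # g' = V g, integrated up to t_eval
    g = g + ds * (V @ g)
o_sim = float(c @ g)

# (b) Direct quadrature of (5): H_t(x) = c^T e^{Vt} P int_0^inf Q e^{Ws} U x_{-s} ds.
s_grid = np.linspace(0.0, T, 2501)
vals = np.stack([Q @ expm(W * s) @ U @ np.atleast_1d(x(s)) for s in s_grid])
o_quad = float(c @ expm(V * t_eval) @ P @ trapezoid(vals, s_grid))
```

The two routes agree up to discretisation error, illustrating that ( 5 ) is just the closed-form solution of the linear system ( 2 ).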
Due to the mathematical form ( 5 ) , not all functionals can be represented by RNN encoder-decoders . To achieve a good approximation , the target functionals must possess certain structures . We introduce the following definitions to clarify these structures . Definition 3.1 . Let H = { H_t : t ≥ 0 } be a sequence of functionals . 1 . For any t ≥ 0 , the functional H_t is linear and continuous if for any λ_1 , λ_2 ∈ R and x_1 , x_2 ∈ X , we have H_t ( λ_1 x_1 + λ_2 x_2 ) = λ_1 H_t ( x_1 ) + λ_2 H_t ( x_2 ) , and ‖H_t‖ := sup_{x ∈ X , ‖x‖_X ≤ 1} |H_t ( x ) | < ∞ , where ‖H_t‖ denotes the induced functional norm . 2 . For any t ≥ 0 , the functional H_t is regular if for any sequence { x^{(n)} }_{n=1}^∞ ⊂ X such that lim_{n→∞} x^{(n)}_s = 0 for almost every s ≤ 0 ( Lebesgue measure ) , we have lim_{n→∞} H_t ( x^{(n)} ) = 0 . For a sequence of functionals H , we define its norm by ‖H‖ := ∫_0^∞ ‖H_t‖ dt . Remark 3.1 . The definitions of linear and continuous functionals are standard . One can view regular functionals as those not determined by inputs on arbitrarily small time intervals , e.g . an infinitely thin spike ( i.e . δ-functions ) . Given the above definitions , we immediately have the following observation . Proposition 3.1 . Let Ĥ = { Ĥ_t : t ≥ 0 } be a sequence of functionals in the RNN encoder-decoder hypothesis space ( see ( 5 ) ) . Then for any t ≥ 0 , Ĥ_t is a linear , continuous and regular functional . Furthermore , ‖Ĥ_t‖ decays exponentially as a function of t. The proof is found in Appendix A . This proposition characterises properties of the encoder-decoder hypothesis space . In particular , it is different from the RNN hypothesis space discussed in Li et al . ( 2021 ) , since the encoder-decoder is not necessarily time-homogeneous . Remark 3.2 . A sequence of functionals H is time-homogeneous if for any t , τ ≥ 0 , H_t ( x ) = H_{t+τ} ( x^{(τ)} ) , with x^{(τ)}_s = x_{s−τ} for all s ∈ R. That is , if the input is shifted to the right by τ , the output is also shifted by τ .
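The exponential decay of ‖Ĥ_t‖ in Proposition 3.1 can be sanity-checked on a scalar instance of the hypothesis space ( 5 ) , where the closed form is computable by hand ( all widths set to 1 ; the particular numbers are illustrative ) :

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule."""
    dx = np.diff(x)
    return float(np.sum(dx * 0.5 * (y[1:] + y[:-1])))

# Scalar instance of (5): mE = mD = N = d = 1, W = V = -1 (stable),
# U = Q = P = c = 1.  Then H_t(x) = e^{-t} * int_0^inf e^{-s} x_{-s} ds.
s = np.linspace(0.0, 30.0, 3001)           # truncate the infinite horizon
x = np.exp(-0.5 * s)                       # input path x_{-s} = e^{-s/2}
integral = trapezoid(np.exp(-s) * x, s)    # int_0^inf e^{-1.5 s} ds = 2/3

H = lambda t: np.exp(-t) * integral        # hence H_t = (2/3) e^{-t}
vals = [H(t) for t in (0.0, 1.0, 2.0)]
```

The closed form (2/3) e^{-t} matches the quadrature, and the values decay exponentially in t, as the proposition predicts.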
Temporal convolution is an example of a time-homogeneous operation ( recall the Gaussian convolution discussed in Section 3 ) . An example of a time-inhomogeneous relationship is video captioning : shifts in the sequence of input video frames do not necessarily lead to corresponding shifts in the caption text sequence . Relation with RNNs . Here , we emphasise the differences between the encoder-decoder hypothesis space and the RNN hypothesis space discussed in Li et al . ( 2021 ) , where Ĥ^{(RNN)}_t ( x ) = ∫_0^∞ c^⊤ e^{W(t+s)} U x_{−s} ds . A key difference is that the encoder-decoder has a structure involving two temporal parameters t and s , while the RNN only has one depending on t + s , due to time-homogeneity . Owing to this difference and the fact that Ĥ^{(RNN)} ⊂ Ĥ , the encoder-decoder hypothesis space ( 5 ) is more general , with the extra capability to learn time-inhomogeneous relationships . Furthermore , e^{Vt} and e^{Ws} adapt to a temporal product structure , which is an intrinsic difference between encoder-decoders and other architectures . We will discuss this in detail in the next section . | This paper provides theoretical insight into the approximation properties of the RNN encoder-decoder architecture in the linear setting. More specifically, they study a supervised learning problem of temporal modelling where a first RNN encodes a given sequence into a coding vector and a second RNN is responsible for decoding said vector into a target output sequence. The linear setting here refers to a linear and continuous-time idealization in eq 6. Their analysis is summarized as follows: 1. Universal approximation: any linear, continuous, and regular temporal relation can be approximated by an RNN encoder-decoder up to arbitrary accuracy. 2.
Approximation rate for a large coding vector: besides the widths of the encoder and decoder, this error bound depends on the $\alpha$ smoothness of $\mathbf{H}$ and the $\beta$ temporal decay rate of the output of a constant signal under $\mathbf{H}$, where $\mathbf{H}$ is a sequence of linear, continuous, and regular functionals on inputs. Each $\mathbf{H}$ has a unique two-parameter representation $\rho(t,s)$. Target functionals that are smooth and have fast-decaying memory are identified as good targets for this approximation. 3. Approximation rate for a small coding vector: the architecture and the assumptions in the paper give rise to an intrinsic structure of this type of RNN encoder-decoder, called the temporal product structure, which can be deconstructed into encoder and decoder parts. By relating this structure to the rank of temporal relationships, they show that the approximation rate (for a small coding vector) is additionally a function of the rank structure of the target relationship. This analysis enables us to see the coding vector size as a knob to control the number of parameters vs. approximation error. 4. Experiments: they show that the theoretical analysis holds in their experiments. | SP:b33082267afa3cdcb3a8e2a049b1a4bc8f3d9d5d |
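The containment Ĥ^{(RNN)} ⊂ Ĥ discussed in the section above can also be checked numerically : choosing V = W and Q = P = I in ( 5 ) reproduces the RNN functional exactly , since e^{Wt} e^{Ws} = e^{W(t+s)} . A small NumPy sketch with illustrative matrices :

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (fine for this generic A)."""
    lam, E = np.linalg.eig(A)
    return (E @ np.diag(np.exp(lam)) @ np.linalg.inv(E)).real

def trapezoid(y, x):
    """Composite trapezoidal rule along the first axis of y."""
    dx = np.diff(x)
    return np.tensordot(dx, 0.5 * (np.asarray(y)[1:] + np.asarray(y)[:-1]), axes=(0, 0))

W = np.array([[-1.0, 0.4], [0.0, -2.0]])   # stable, eigenvalues -1, -2
U = np.array([[1.0], [0.5]])
c = np.array([1.0, -1.0])
s_grid = np.linspace(0.0, 25.0, 2501)
x = lambda s: np.exp(-0.3 * s)             # input path x_{-s}

def H_rnn(t):
    """RNN functional: int_0^inf c^T e^{W(t+s)} U x_{-s} ds."""
    vals = np.array([c @ expm(W * (t + s)) @ U * x(s) for s in s_grid]).ravel()
    return float(trapezoid(vals, s_grid))

def H_encdec(t):
    """Encoder-decoder (5) with V = W, Q = P = I: c^T e^{Wt} int_0^inf e^{Ws} U x_{-s} ds."""
    vals = np.stack([expm(W * s) @ U @ np.atleast_1d(x(s)) for s in s_grid])
    return float(c @ expm(W * t) @ trapezoid(vals, s_grid))
```

Because the constant matrix c^T e^{Wt} commutes with the quadrature, the two functions agree to machine precision, while a generic choice V ≠ W in ( 5 ) has no single-parameter counterpart.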
Debiasing Pretrained Text Encoders by Paying Attention to Paying Attention | 1 INTRODUCTION . Natural Language Processing ( NLP ) is increasingly penetrating real-world operations such as recruitment ( Hansen et al. , 2015 ) , legal systems ( Dale , 2019 ) , healthcare ( Velupillai et al. , 2018 ) and Web Search ( Nalisnick et al. , 2016 ) . Part of this success is attributed to the underlying embedding layer which encodes sophisticated semantic representations of language ( Camacho-Collados & Pilehvar , 2018 ) . The wide adoption of modern NLP models in critical domains has also attracted more thorough scrutiny . Recent research has uncovered some propensities of NLP models to replicate discriminatory social biases ( Bolukbasi et al. , 2016 ; Caliskan et al. , 2017 ; May et al. , 2019 ) which may cause unintended and undesired model behaviors with respect to social groups . Social bias in NLP is mainly caused by unbalanced mentions of attributes near advantaged groups in training data ( Zhao et al. , 2018a ) . For example , in most existing text corpora , very few cooks are referred to by male pronouns ( e.g . he , him , himself ) ( Zhao et al. , 2017 ) . Accordingly , text encoders or language models trained on such data may use this shortcut to inadvertently disassociate cooks from men , and learn that cooking is a female attribute . Worse , NLP models may even amplify social biases if left unchecked ( Bommasani et al. , 2021 ) . Methods to debias static word embeddings such as Word2vec ( Mikolov et al. , 2013 ) or GloVe ( Pennington et al. , 2014 ) have been applied for gender , race and religion ( Bolukbasi et al. , 2016 ; Zhao et al. , 2018b ; Kaneko & Bollegala , 2019 ; Ravfogel et al. , 2020 ) . However , by the time NLP practitioners started casting more attention to the fairness problem of their models , they had already switched to the more powerful sentence-level transformers in the likes of BERT ( Devlin et al. , 2018 ) , GPT-3 ( Brown et al.
, 2020 ) or T5 ( Raffel et al. , 2020 ) which owe their success to the novel self-attention mechanism ( Vaswani et al. , 2017 ) . This leap in accuracy in several NLP tasks does not extend to fairness since research discovered social stereotypes in modern text encoders ( May et al. , 2019 ; Nadeem et al. , 2020 ; Nangia et al. ) . To date , debiasing them remains comparatively under-explored . Mitigating biases in text encoders is difficult for four reasons : ( 1 ) They are expensive to retrain , so conventional methods based on Counterfactual Data Augmentation ( CDA ) to rebalance group-attribute mentions ( Zhao et al. , 2018a ; Webster et al. , 2020 ) become prohibitive as they generate more training data , and all debiasing attempts might be limited to either finetuning or adapting ( Houlsby et al. , 2019 ; Lauscher et al. , 2021 ) . ( 2 ) Static embeddings encode words whereas text encoders need context1 . Thus , it is not straightforward to use existing debiasing techniques for static embeddings off-the-shelf as it is not clear how to generate context for single words . Previous work tackled this problem by either designing bleached sentence templates ( May et al. , 2019 ; Kurita et al. , 2019 ) where they fill in the blanks with words of interest , or sampling sentences from large corpora where the words are mentioned ( Liang et al. , 2020a ; Cheng et al. , 2020 ) , thus creating context . The former betrays the expressiveness of natural language while the latter suffers from sampling and preprocessing bias ( Liang et al. , 2020a ) . ( 3 ) The input space of text encoders is the set of all possible sentences , so we cannot debias every single input as is done with static embeddings . ( 4 ) Text encoders are larger in capacity and complexity .
This suggests that they can accommodate subtler and more sophisticated forms of stereotype , especially in their attention component , which renders bias imperceptible to existing detection methods as they are not designed to operate on attention . Despite these difficulties , previous works ( Liang et al. , 2020a ; Webster et al. , 2020 ; Liang et al. , 2020b ; Cheng et al. , 2020 ; Kaneko & Bollegala , 2021 ; Lauscher et al. , 2021 ) addressed the problem of reducing bias from modern text encoders with different techniques . However , most of them make strong assumptions about the linearity of bias . Moreover , they operate on the embeddings produced by text encoders , and leave their most important block - attention - largely unrectified . In this paper we explore attention-based debiasing . This approach stems from our observation that attention exhibits a great deal of social bias . We empirically show that this is the case , propose a novel method to reduce stereotypes from attention blocks , and demonstrate that it is effective in mitigating biases from sentence representations as a whole . Given an input sentence , our method compels the text encoder of interest to redistribute its internal attention scores such that each word in the input allocates the same attention to different social groups . Thus , it learns to forget previously encoded preferences , and generate fair representations , free of stereotypical influence . We also keep semantic information loss at a minimum while debiasing by distilling knowledge ( Hinton et al. , 2015 ; Gou et al. , 2021 ) from an unaltered teacher text encoder . In this setting , we encourage the debiased model to copy the original attention from its teacher to minimize semantic offset . Unlike most previous work , which focuses only on gender , we address five bias types in our experiments ( gender , race , religion , age and sexual orientation ) .
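The paper's exact training objective is not reproduced here , but the idea of redistributing attention so that each query token attends equally to paired group tokens , while distilling attention from a frozen teacher , can be sketched as follows . The function names , the `idx_a` / `idx_b` convention and the squared-error loss forms are hypothetical illustrations , not the authors' implementation :

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fairness_loss(attn, idx_a, idx_b):
    """Penalize unequal attention paid to two paired group tokens.

    attn: (heads, seq, seq) row-stochastic attention; idx_a/idx_b are the
    positions of the paired group tokens (e.g. "he"/"she") in the input.
    """
    diff = attn[:, :, idx_a] - attn[:, :, idx_b]   # per-head, per-query gap
    return float(np.mean(diff ** 2))

def distillation_loss(attn_student, attn_teacher):
    """Keep the debiased model's attention close to its frozen teacher's."""
    return float(np.mean((attn_student - attn_teacher) ** 2))

# Toy example: 2 heads, 4 tokens; positions 1 and 2 hold the group pair.
rng = np.random.default_rng(0)
attn = softmax(rng.normal(size=(2, 4, 4)))
fair = attention_fairness_loss(attn, idx_a=1, idx_b=2)

# Perfectly equalized attention incurs zero fairness loss.
uniform = np.full((2, 4, 4), 0.25)
```

A training step would minimise a weighted sum of the two terms, trading off fairness against fidelity to the teacher.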
We conduct likelihood- and inference-based evaluations to measure the intensity of bias in our final debiased models . Experiments demonstrate that the technique we propose effectively reduces bias , and outperforms existing debiasing methods . 2 RELATED WORK . In this section , we discuss related work about debiasing static word embeddings and sentence-level text encoders . Then , we shed some light on work done on the attention mechanism in general . It should be noted that bias at the data level ( Pryzant et al. , 2020 ; Cryan et al. , 2020 ) and in language generation tasks ( Sheng et al. , 2020 ; Sap et al. , 2020 ; Dhamala et al. , 2021 ) are also active and complementary areas of research . However , due to space limitations , they will not be discussed in this paper . 2.1 BIAS IN STATIC WORD EMBEDDINGS . The work of Bolukbasi et al . ( 2016 ) pioneered bias research in NLP by discovering that static word embeddings such as Word2Vec ( Mikolov et al. , 2013 ) or GloVe ( Pennington et al. , 2014 ) encode significant amounts of binary gender bias . They proposed Hard-Debias : a simple method to remove biases by projecting gender-neutral word embeddings onto the subspace orthogonal to a gender direction . Manzini et al . ( 2019 ) extended Hard-Debias to the multiclass setting where they also treat racial and religious stereotypes . In both works , the bias direction is defined by a manually pre-compiled list of stereotyped words . In contrast , Ravfogel et al . ( 2020 ) suggest a data-driven approach to learn bias directions with a linear classifier . Debiasing is then conducted by iteratively projecting word embeddings onto the null space of the classifier ’ s weight matrix . On the other hand , finetuning is the debiasing approach that attracted the widest adoption , either by using an autoencoder ( Kaneko & Bollegala , 2019 ) , an attraction-repulsion mechanism ( Kumar et al. , 2020 ) , or adversarial attacks ( Xie et al.
, 2017 ; Li et al. , 2018 ; Elazar & Goldberg , 2018 ) . [ 1 : A word needs to be in a context ( a sentence or paragraph ) in order to be correctly encoded . ] Unlike these post-processing methods , Zhao et al . ( 2018b ) added a new fairness constraint to the GloVe loss function , and retrained their fair word embeddings from scratch . 2.2 BIAS IN TEXT ENCODERS . Research on biases in sentence representations is dominated by detection rather than correction and mitigation . To date , there are three main approaches to detect stereotypes in text encoders : ( 1 ) representation-based : where vector relationships between different types of inputs are measured . For example , May et al . ( 2019 ) extended the WEAT test ( Caliskan et al. , 2017 ) into sentence vector space ( SEAT ) , and compared the cosine similarity between representations of two sets of targets and two sets of attributes . All sentences in SEAT follow a predefined template . ( 2 ) likelihood-based : These approaches examine how often text encoders prefer stereotypes over anti-stereotypes . Preferences in this case are defined in terms of higher likelihoods as produced by language models using embeddings of the text encoders under study . Two benchmarks are widely used for measuring bias : StereoSet ( Nadeem et al. , 2020 ) and Crows-Pairs ( Nangia et al. ) . Both datasets are organized in pairs or triples of minimally-distant sentences which differ only in the word ( s ) carrying a stereotypical connotation . ( 3 ) inference-based : These methods employ text encoders in downstream NLP tasks ( Blodgett et al. , 2020 ) such as natural language inference ( Dev et al. , 2020 ) , sentiment analysis ( Díaz et al. , 2018 ) or language generation ( Sap et al. , 2020 ; Sheng et al. , 2020 ) . Bias in such settings is measured as the difference in outcome when the models are tested with the same input sentence , differing only in social groups .
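The representation-based tests mentioned above ( WEAT and its sentence-level extension SEAT ) reduce to cosine-similarity arithmetic . A minimal sketch of the standard effect size follows ; the vectors below are toy stand-ins for target and attribute embeddings :

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean cosine to attribute set A minus mean cosine to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT/SEAT-style effect size between target sets X, Y and attributes A, B."""
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy check: X aligns with attribute A, Y with attribute B -> positive bias score.
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([1.0, 0.1])]
Y = [np.array([0.1, 1.0])]
effect = weat_effect_size(X, Y, A, B)
```

An unbiased encoder would place both target sets symmetrically with respect to the attribute sets, driving the effect size toward zero.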
Bias mitigation approaches are mostly inspired by debiasing static embeddings . In projection-based methods , Liang et al . ( 2020a ) contextualize words into sentences by sampling them from existing corpora before applying Hard-Debias . Kaneko & Bollegala ( 2021 ) minimize the projection of sentence representations onto a learned bias subspace , while Qian et al . ( 2019 ) ; Bordia & Bowman ( 2019 ) ; Liang et al . ( 2020b ) add bias-reduction objectives to their loss functions . Another line of research uses CDA ( Webster et al. , 2020 ) to balance gender correlations in training data , while Lauscher et al . ( 2021 ) use adapters to reduce the large training time that CDA incurs . Finally , Cheng et al . ( 2020 ) use contrastive learning , and add a fair filter that minimizes mutual information between stereotypes and anti-stereotypes . In our work , rather than extending approaches from static embeddings , we focus on the self-attention mechanism which is characteristic of many text encoders , and show that fair attention leads to fair representations . 2.3 ATTENTION ANALYSIS IN TEXT ENCODERS . Clark et al . ( 2019 ) analyzed BERT ’ s attention heads and found that some of them correspond remarkably well to linguistic patterns of coreference and syntax without additional training . Michel et al . ( 2019 ) observe that not all attention heads within a model are made equal . They also propose a pruning algorithm to reduce the energy footprint of these models by eliminating the least important heads without much degradation of the overall performance . Given the convenient interpretability of attention , it has also been used in a myriad of visualization works ( Vig , 2019 ; Hoover et al. , 2020 ; Tenney et al. , 2020 ; Bastings & Filippova , 2020 ) in an attempt to dissect and explain the inner functioning of text encoders . Most attention studies in text encoders are designed for analysis purposes .
In contrast , we are the first to leverage the attention mechanism in order to make text encoders fairer and less stereotyped . | This paper proposes a new debiasing method for contextualized word embeddings, specifically for attention-based text encoders. At a very high level, the proposed method tries to calibrate the attention scores of words from different groups, e.g. to reduce gender bias, the method forces the model (text encoder) to allocate the same attention to the words "man" and "woman". Experimentally, the paper also demonstrates relatively good results on both likelihood-based evaluation (StereoSet and Crows-Pairs) and inference-based evaluation (NLI). | SP:e7c0d655d20b3a6de09dd2ea2d150b149ed60845 |
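For reference , the Hard-Debias neutralisation step discussed in Section 2.1 can be sketched in a few lines of NumPy : remove the component of each ( gender-neutral ) embedding along the bias direction . The embeddings and direction below are random stand-ins ; in Bolukbasi et al . ( 2016 ) the direction is estimated from definitional pairs such as he/she :

```python
import numpy as np

def hard_debias(E, bias_dir):
    """Project embeddings onto the orthogonal complement of a bias direction.

    E: (n, d) embeddings of gender-neutral words; bias_dir: (d,) direction.
    """
    b = bias_dir / np.linalg.norm(bias_dir)
    return E - np.outer(E @ b, b)      # subtract each row's component along b

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))            # toy embeddings
b = rng.normal(size=8)                 # stand-in for a learned gender direction
E_db = hard_debias(E, b)
```

After the projection, every debiased embedding has zero component along the bias direction, so its cosine similarity with that direction vanishes.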
Debiasing Pretrained Text Encoders by Paying Attention to Paying Attention | 1 INTRODUCTION . Natural Language Processing ( NLP ) is increasingly penetrating real-world operations such as recruitment ( Hansen et al. , 2015 ) , legal systems ( Dale , 2019 ) , healthcare ( Velupillai et al. , 2018 ) and Web Search ( Nalisnick et al. , 2016 ) . Part of this success is attributed to the underlying embedding layer which encodes sophisticated semantic representations of language ( Camacho-Collados & Pilehvar , 2018 ) . The wide adoption of modern NLP models in critical domains has also inflicted a more thorough scrutiny . Recent research has uncovered some propensities of NLP models to replicate discriminatory social biases ( Bolukbasi et al. , 2016 ; Caliskan et al. , 2017 ; May et al. , 2019 ) which may cause unintended and undesired model behaviors with respect to social groups . Social bias in NLP is mainly caused by unbalanced mentions of attributes near advantaged groups in training data ( Zhao et al. , 2018a ) . For example , in most existing text corpora , very few cooks are referred to by male pronouns ( e.g . he , him , himself ) ( Zhao et al. , 2017 ) . Accordingly , text encoders or language models trained on such data may use this shortcut to inadvertently disassociate cooks from men , and learn that cooking is a female attribute . Worse , NLP models may even amplify social biases if left unchecked ( Bommasani et al. , 2021 ) . Methods to debias static word embeddings such as Word2vec ( Mikolov et al. , 2013 ) or GloVe ( Pennington et al. , 2014 ) have been applied for gender , race and religion ( Bolukbasi et al. , 2016 ; Zhao et al. , 2018b ; Kaneko & Bollegala , 2019 ; Ravfogel et al. , 2020 ) . However , by the time NLP practitioners started casting more attention to the fairness problem of their models , they had already switched to the more powerful sentence-level transformers in the likes of BERT ( Devlin et al. , 2018 ) , GPT3 ( Brown et al. 
, 2020 ) or T5 ( Raffel et al. , 2020 ) which owe their success to the novel self-attention mechanism ( Vaswani et al. , 2017 ) . This leap in accuracy in several NLP tasks does not extend to fairness since research discovered social stereotypes in modern text encoders ( May et al. , 2019 ; Nadeem et al. , 2020 ; Nangia et al. ) . To date , debiasing them remains comparatively under-explored . Mitigating biases in text encoders is difficult for four reasons : ( 1 ) They are expensive to retrain , so conventional methods based on Counterfactual Data Augmentation ( CDA ) to rebalance groupattribute mentions ( Zhao et al. , 2018a ; Webster et al. , 2020 ) become prohibitive as they generate more training data , and all debiasing attempts might be limited to either finetuning or adapting ( Houlsby et al. , 2019 ; Lauscher et al. , 2021 ) . ( 2 ) Static embeddings encode words whereas text encoders need context1 . Thus , it is not straightforward to use existing debiasing techniques for static embeddings off-the-shelf as it is not clear how to generate context for single words . Previous work tackled this problem by either designing bleached sentence templates ( May et al. , 2019 ; Kurita et al. , 2019 ) where they fill in the blanks with words of interest , or sampling sentences from large corpora where the words are mentioned ( Liang et al. , 2020a ; Cheng et al. , 2020 ) , thus creating context . The former betrays the expressiveness of natural language while the latter suffers from sampling and preprocessing bias ( Liang et al. , 2020a ) . ( 3 ) The input space of text encoders is the set of all possible sentences , so we can not debias every single input as it is done with static embeddings . ( 4 ) Text encoders are larger in capacity and complexity . 
This suggests that they can accommodate subtler and more sophisticated forms of stereotype , especially in their attention component , which renders bias imperceptible to existing detection methods as they are not designed to operate on attention . Despite these difficulties , previous works ( Liang et al. , 2020a ; Webster et al. , 2020 ; Liang et al. , 2020b ; Cheng et al. , 2020 ; Kaneko & Bollegala , 2021 ; Lauscher et al. , 2021 ) addressed the problem of reducing bias from modern text encoders with different techniques . However , most of them make strong assumptions about the linearity of bias . Moreover , they operate on the embeddings produced by text encoders , and leave their most important block - attention - largely unrectified . In this paper we explore attention-based debiasing . This approach stems from our observation that attention exhibits a great deal of social biases . We empirically show that this is the case , propose a novel method to reduce stereotypes from attention blocks , and demonstrate that it is effective in mitigating biases from sentence representations as a whole . Given an input sentence , our method compels the text encoder of interest to redistribute its internal attention scores such that each word in the input allocates the same attention for different social groups . Thus , it learns to forget previously encoded preferences , and generate fair representations , free of stereotypical influence . We also keep semantic information loss at a minimum while debiasing by distilling knowledge ( Hinton et al. , 2015 ; Gou et al. , 2021 ) from an unaltered teacher text encoder . In this setting , we encourage the debiased model to copy the original attention from its teacher to minimize semantic offset . Unlike most previous work which focus only on gender , we address five bias types in our experiments ( gender , race , religion , age and sexual orientation ) . 
We conduct likelihood- and inference-based evaluations to measure the intensity of bias in our final debiased models . Experiments demonstrate that the technique we propose effectively reduces bias , and outperforms existing debiasing methods . 2 RELATED WORK . In this section , we discuss related work about debiasing static word embeddings and sentence-level text encoders . Then , we shed some light on work done on the attention mechanism in general . It should be noted that bias at data level ( Pryzant et al. , 2020 ; Cryan et al. , 2020 ) and in language generation tasks ( Sheng et al. , 2020 ; Sap et al. , 2020 ; Dhamala et al. , 2021 ) are also active and complementary areas of research . However , due to space limitations , they will not be discussed in this paper . 2.1 BIAS IN STATIC WORD EMBEDDINGS . The work of Bolukbasi et al . ( 2016 ) pioneered bias research in NLP by discovering that static word embeddings such as Word2Vec ( Mikolov et al. , 2013 ) or GloVe ( Pennington et al. , 2014 ) encode significant amounts of binary gender bias . They proposed Hard-Debias : a simple method to remove biases by projecting gender-neutral word embeddings onto a gender-free direction . Manzini et al . ( 2019 ) extended Hard-Debias to the multiclass setting where they also treat racial and religious stereotypes . In both works , the bias direction is defined by a manually pre-compiled list of stereotyped words . In contrast , Ravfogel et al . ( 2020 ) suggest a data-driven approach to learn bias directions with a linear classifier . Debiasing is then conducted by iteratively projecting word embeddings on the null space of the classifier ’ s matrix . On the other hand , finetuning is the debiasing approach that attracted the widest adoption , either by using an autoencoder ( Kaneko & Bollegala , 2019 ) , attraction-repulsion mechanism ( Kumar et al. , 2020 ) , or adversarial attacks ( Xie et al. 
, 2017 ; Li et al. , 2018 ; Elazar & Goldberg , 2018 ) . ( Footnote 1 : A word needs to be in a context ( sentence or paragraph ) in order to be correctly encoded . ) Unlike these post-processing methods , Zhao et al . ( 2018b ) added a new fairness constraint to the GloVe loss function , and retrained their fair word embeddings from scratch . 2.2 BIAS IN TEXT ENCODERS . Research on biases in sentence representations is dominated by detection rather than correction and mitigation . To date , there are three main approaches to detect stereotypes in text encoders : ( 1 ) representation-based : where vector relationships between different types of inputs are measured . For example , May et al . ( 2019 ) extended the WEAT test ( Caliskan et al. , 2017 ) into sentence vector space ( SEAT ) , and compared the cosine similarity between representations of two sets of targets and two sets of attributes . All sentences in SEAT follow a predefined template . ( 2 ) likelihood-based : These approaches examine how often text encoders prefer stereotypes over anti-stereotypes . Preferences in this case are defined in terms of higher likelihoods as produced by language models using embeddings of the text encoders under study . Two benchmarks are widely used for measuring bias : StereoSet ( Nadeem et al. , 2020 ) and Crows-Pairs ( Nangia et al. ) . Both datasets are organized in pairs or triples of minimally-distant sentences which differ only in the word ( s ) carrying a stereotypical connotation . ( 3 ) inference-based : These methods employ text encoders in downstream NLP tasks ( Blodgett et al. , 2020 ) such as natural language inference ( Dev et al. , 2020 ) , sentiment analysis ( Díaz et al. , 2018 ) or language generation ( Sap et al. , 2020 ; Sheng et al. , 2020 ) . Bias in such settings is declared as the difference in outcome when the models are tested with the same input sentence , differing only in social groups .
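Several of the mitigation methods discussed next build on the Hard-Debias projection from Section 2.1. Its core step can be sketched in a few lines; this is a simplified, single-direction illustration (the function name is ours, and real pipelines estimate the direction via PCA over several definitional pairs):

```python
import numpy as np

def hard_debias(embeddings, bias_direction):
    """Neutralize word vectors by removing their component along a bias direction.

    embeddings: (n, d) array of word vectors to debias.
    bias_direction: (d,) vector, e.g. v_he - v_she for binary gender.
    """
    g = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each vector's projection onto the (unit) bias direction.
    return embeddings - np.outer(embeddings @ g, g)
```

After this step every debiased vector is orthogonal to the bias direction, so similarity comparisons no longer carry a component along it.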
Bias mitigation approaches are mostly inspired by debiasing static embeddings . In projection-based methods , Liang et al . ( 2020a ) contextualize words into sentences by sampling them from existing corpora before applying Hard-Debias . Kaneko & Bollegala ( 2021 ) minimize the projection of sentence representations on a learned bias subspace , while Qian et al . ( 2019 ) ; Bordia & Bowman ( 2019 ) ; Liang et al . ( 2020b ) add bias-reduction objectives to their loss functions . Another line of research uses CDA ( Webster et al. , 2020 ) to balance gender correlations in training data , while Lauscher et al . ( 2021 ) use adapters to reduce the large training time that CDA incurs . Finally , Cheng et al . ( 2020 ) use contrastive learning , and add a fair filter that minimizes mutual information between stereotypes and anti-stereotypes . In our work , rather than extending approaches from static embeddings , we focus on the self-attention mechanism which is characteristic of many text encoders , and show that fair attention leads to fair representations . 2.3 ATTENTION ANALYSIS IN TEXT ENCODERS . Clark et al . ( 2019 ) analyzed BERT ’ s attention heads and found that some of them correspond remarkably well to linguistic patterns of coreference and syntax without additional training . Michel et al . ( 2019 ) observed that not all attention heads within a model are made equal . They also proposed a pruning algorithm to reduce the energy footprint of these models by eliminating the least important heads without much degradation of the overall performance . Given the convenient interpretability of attention , it has also been used in a myriad of visualization works ( Vig , 2019 ; Hoover et al. , 2020 ; Tenney et al. , 2020 ; Bastings & Filippova , 2020 ) in an attempt to dissect and explain the inner functioning of text encoders . Most attention studies in text encoders are designed for analysis purposes .
In contrast , we are the first to leverage the attention mechanism in order to make text encoders fairer and less stereotyped . | This paper addresses potential biases introduced by attention models by re-weighting the attention weights. First, the paper provides a few examples that demonstrate that attention weights correlate with social stereotypes (e.g., doctor attending to he and nurse attending to she). Then, it proposes to reduce this type of bias by "calibrating" the attention weights. To do so, each sample is augmented with pairs of words corresponding to the groups for which the model is intended to mitigate bias. The augmented samples are used in training, and, thus, each token in the input will attend to tokens in the augmented portion of the sample. The weights are changed such that the weights attending to the augmented, group-relevant tokens are similar. This type of re-weighting can cause some of the semantic meaning encoded in the network to change. To prevent this change, the attention weights corresponding to the tokens from the original sample are forced to follow the weights in an unaltered model that is trained in parallel. Last, the paper also suggests utilizing negative sampling: using random words for augmentation and forcing all attention weights to follow the ones of the teacher. For evaluation, the paper shows the performance of several transformer-based models. The evaluation of the method includes three different benchmarks: two benchmarks that measure bias intrinsically (Crows-Pairs and StereoSet) and an NLI task designed for measuring bias. To make sure that the semantic strengths of the model are not lost, the models are also evaluated using the GLUE tasks. The model obtains competitive performance on the GLUE tasks, while reducing bias compared to the original models and a couple of related works. | SP:e7c0d655d20b3a6de09dd2ea2d150b149ed60845
Information Bottleneck: Exact Analysis of (Quantized) Neural Networks | 1 INTRODUCTION . Improving our theoretical understanding of why over-parameterized deep neural networks generalize well is arguably one of the main problems in current machine learning research ( Poggio et al. , 2020 ) . Tishby & Zaslavsky ( 2015 ) suggested analyzing deep neural networks based on their Information Bottleneck ( IB ) concept , which is built on measurements of mutual information ( MI ) between the activations of hidden layers and the input and target ( Tishby et al. , 1999 ) ; for an overview see Geiger ( 2020 ) . Shwartz-Ziv & Tishby ( 2017 ) empirically studied the IB principle applied to neural networks and made several qualitative observations about the training process ; especially , they observed a fitting phase and a compression phase . The latter information-theoretic compression is conjectured to be a reason for good generalization performance and has widely been considered in the literature ( Abrol & Tanner , 2020 ; Balda et al. , 2018 ; 2019 ; Chelombiev et al. , 2019 ; Cheng et al. , 2019 ; Darlow & Storkey , 2020 ; Elad et al. , 2019 ; Fang et al. , 2018 ; Gabrié et al. , 2019 ; Goldfeld et al. , 2019 ; Jónsson et al. , 2020 ; Kirsch et al. , 2020 ; Tang Nguyen & Choi , 2019 ; Noshad et al. , 2019 ; Schiemer & Ye , 2020 ; Shwartz-Ziv & Alemi , 2020 ; Wickstrøm et al. , 2019 ; Yu et al. , 2020 ) . The work and conclusions by Shwartz-Ziv & Tishby ( 2017 ) received a lot of critique , with the generality of their claims being doubted ; especially Saxe et al . ( 2018 ) argued that the results by Shwartz-Ziv & Tishby do not generalize to networks using a different activation function . Their critique was again refuted by the original authors with counter-claims about incorrect estimation of the MI , highlighting an issue with the approximation of MI in both studies . Our goal is to verify the claims by Shwartz-Ziv & Tishby and the critique by Saxe et al .
in a setting where the MI can be computed exactly . These studies consider neural networks as theoretical entities working with infinite precision , which makes computation of the information theoretic quantities problematic ( for a detailed discussion we refer to Geiger , 2020 ; see also Section 3 ) . Assuming continuous input distributions , a deterministic network using any of the standard activation functions ( e.g. , RELU , TANH ) can be shown to have infinite MI ( Amjad & Geiger , 2019 ) . If an empirical input distribution defined by a data set D is considered ( as is the case in many of the previous studies ) , then randomly-initialized deterministic neural networks with invertible activation functions will most likely result in trivial measurements of MI in the sense that the MI is finite but always maximal , that is , equal to log |D| ( Goldfeld et al. , 2019 ; Amjad & Geiger , 2019 ) . In order to obtain non-trivial measurements of MI , real-valued activations are usually discretized by binning the values , throwing away information in the process . The resulting estimated MI can be shown to be highly dependent on this binning ; we refer to Geiger ( 2020 ) for a detailed discussion . Instead of approximating the MI in this fashion , we take advantage of the fact that modern computers – and thus neural networks – are discrete in the sense that a floating point value can typically take at most 2^32 different values . Because 32-bit precision networks may still be too precise to observe compression ( i.e. , information loss ) , we apply quantization to the neural network system to an extent that we can compute informative quantities ; that is , we amplify the effect of the information loss due to the discrete computations in the neural network . One may argue that we just moved the place where the discretization is applied .
This is true , but leads to a fundamental difference : previous studies applying the discretization post-hoc rely on the in general false assumption that the binned MI approximates the continuous MI well – and thus introduce measurement errors , which may occlude certain phenomena and/or lead to artifactual observations . In contrast , our computations reflect the true information flow in a network during training . Our study confirms that estimation of MI by binning may lead to strong artifacts in IB analyses and shows that : • Both fitting and compression phases occur in the output SOFTMAX layer . • For the hidden layers , the fitting phase occurs for both TANH and RELU activations . • When using TANH in the hidden layers , compression is only observed in the last hidden layer . • When using RELU , we did not observe compression in the hidden layers . • Even when applying low precision quantization , more complex networks with many neurons in each layer are observed to be too expressive to exhibit compression , as no information is lost . • Our setting rules out the MI approximation as the reason for these different IB dynamics . The next section introduces the IB concept with a focus on its application to neural networks including the critique and controversy as well as related work . Section 3 discusses issues relating to the estimation of MI , and the idea behind our contribution . Section 4 presents our experiments , results and discussion before we conclude in Section 5 . 2 THE INFORMATION BOTTLENECK . Preliminaries . Given a continuous random variable ( r.v . ) X with density function p ( x ) and support X , the continuous entropy H ( X ) of X is a measure of the uncertainty associated with X and is given by H ( X ) = − ∫ X p ( x ) log p ( x ) dx . Given two r.v.s X and Y with density functions p ( x ) and p ( y ) and supports X and Y , the mutual information I ( X ; Y ) of X and Y is a measure of the mutual “ knowledge ” between the two variables .
The symmetric I ( X ; Y ) is given by I ( X ; Y ) = ∫ Y ∫ X p ( x , y ) log [ p ( x , y ) / ( p ( x ) p ( y ) ) ] dx dy . In many cases it is impossible to compute the continuous entropy and MI for continuous r.v.s exactly , due to limited samples or computational limits , or because it may not be finite ( Geiger , 2020 ) . Instead , we often estimate the quantities by their discrete counterparts . When X is a discrete r.v. , we consider the Shannon entropy H ( X ) = − ∑ P ( x ) log P ( x ) . Correspondingly , the mutual information I ( X ; Y ) of two discrete r.v.s X , Y , is given by I ( X ; Y ) = ∑ x , y P ( x , y ) log [ P ( x , y ) / ( P ( x ) P ( y ) ) ] . We have the following useful identity for both the continuous and discrete MI : I ( X ; Y ) = H ( X ) − H ( X|Y ) , ( 1 ) where H ( X|Y ) is the conditional entropy of X given Y . IB Definition . The IB method was proposed by Tishby et al . ( 1999 ) . It is an information theoretic framework for extracting relevant components of an input r.v . X with respect to an output r.v . Y . These relevant components are found by “ squeezing ” the information from X through a bottleneck , in the form of an r.v . T . In other words , T is a compression of X . The idea generalizes rate distortion theory , in which we wish to compress X , obtaining T , such that I ( X ; T ) is minimized subject to a constraint on the expected distortion d ( x , t ) wrt . the joint distribution p ( x , t ) ( Tishby et al. , 1999 ) . In the IB framework , the distortion measure d is replaced by the negative loss in MI between T and the output Y , I ( T ; Y ) . Both IB and rate distortion are lossy compression schemes.1 The data processing inequality ( DPI ) I ( Y ; X ) ≥ I ( Y ; T ) holds for the IB ; that is , the bottleneck r.v . cannot contain more information about the label than the input . One drawback of the information bottleneck method is the dependence on the joint distribution , p ( x , y ) , which is generally not known . Shamir et al .
( 2010 ) addressed this issue and showed that the MI , the main ingredient in the method , can be estimated reliably with fewer samples than required for estimating the true joint distribution . As common in the IB literature , whenever we discuss the MI computed on a finite data set D , we assume that p ( x , y ) corresponds to the empirical distribution defined by D , which is true for the experiments in Section 4.1 . In practice , the assumption has to be relaxed to the data being drawn i.i.d . from p ( x , y ) . However , any uncertainty resulting from the finite sample estimation in the latter case is not considered in our discussions . IB In Deep Learning . Tishby & Zaslavsky ( 2015 ) applied the IB concept to neural networks . They view the layers of a deep neural network ( DNN ) as consecutive compressions of the input . They consider the Markov chain Y → X → T1 → T2 → ... → TL = Ŷ , where Ti denotes the i ’ th hidden layer of the L-layer network and TL = Ŷ denotes the output of the network . Again , the bottleneck must satisfy the DPI : I ( Y ; X ) ≥ I ( Y ; T1 ) ≥ I ( Y ; T2 ) ≥ ... ≥ I ( Y ; Ŷ ) , ( 2 ) I ( X ; X ) ≥ I ( X ; T1 ) ≥ I ( X ; T2 ) ≥ ... ≥ I ( X ; Ŷ ) . ( 3 ) Estimating the MI of continuous variables is difficult ( Alemi et al. , 2017 ) , as evident from the many different methods proposed ( Kraskov et al. , 2004 ; Kolchinsky & Tracey , 2017 ; Noshad et al. , 2019 ) . In the discrete case , I ( X ; T ) and I ( T ; Y ) can be computed as I ( X ; T ) = H ( T ) − H ( T |X ) = H ( T ) , ( 4 ) I ( T ; Y ) = I ( Y ; T ) = H ( T ) − H ( T |Y ) , ( 5 ) following from ( 1 ) and using in ( 4 ) the assumption that T is a deterministic function of X . However , for deterministic neural networks the continuous entropies may not be finite ( Goldfeld et al. , 2019 ; Saxe et al. , 2018 ; Amjad & Geiger , 2019 ) . Shwartz-Ziv & Tishby ( 2017 ) estimate the MI via ( 4 ) and ( 5 ) by discretizing T and then computing the discrete entropy . 
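This binning-based estimation of the information-plane coordinates can be sketched as follows. The helper names are ours, and this is a simplified illustration of the procedure (uniform binning into m bins over a bounded interval, then Eqs. (4) and (5) on the empirical distribution), not the original code:

```python
import numpy as np

def bin_layer(T, bl=-1.0, bu=1.0, m=30):
    """Map each activation in [bl, bu] to the index of one of m uniform bins."""
    idx = np.floor((T - bl) / (bu - bl) * m).astype(int)
    return np.clip(idx, 0, m - 1)

def discrete_entropy(rows):
    """Entropy in bits of the empirical distribution over the rows of a 2-D array."""
    _, counts = np.unique(rows, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_plane_point(T, y, bl=-1.0, bu=1.0, m=30):
    """Estimate ( I(X;T') , I(T';Y) ) for one layer, with T' = B(T).

    T: (n, d) activations of the layer on the n samples; y: (n,) labels.
    Under the empirical distribution, T' is a deterministic function of X,
    so I(X;T') = H(T'), and I(T';Y) = H(T') - H(T'|Y).
    """
    Tb = bin_layer(np.asarray(T), bl, bu, m)
    h_t = discrete_entropy(Tb)
    # H(T'|Y): entropy of T' within each label group, weighted by P(Y).
    h_t_given_y = sum((y == c).mean() * discrete_entropy(Tb[y == c])
                      for c in np.unique(y))
    return h_t, h_t - h_t_given_y
```

Plotting the returned pair per layer and per epoch yields the information-plane trajectories discussed in the text; the strong dependence of the estimate on m and on the bounds is exactly the issue the paper sidesteps via quantization.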
They trained a network ( shown in Figure 1a ) on a balanced synthetic data set consisting of 12-bit binary inputs and binary labels . The network was trained for a fixed number of epochs while training and test error were observed . For every epoch and every layer T , the discretization is done by binning of T : Given upper and lower bounds bu , bl , and m ∈ N , we let B : R → [ m ] denote the binning operation that maps x ∈ [ bl , bu ] to the index of the corresponding bin from the set of m uniformly distributed bins in [ bl , bu ] . Overloading the notation , we apply B directly to a vector in R^d in order to obtain the resulting vector in [ m ]^d of bin indices . Using the discretized T ′ = B ( T ) , I ( X ; T ′ ) and I ( T ′ ; Y ) are then computed directly by ( 4 ) and ( 5 ) , using estimates of P ( T ′ ) , P ( Y ) and P ( T ′|Y ) over all samples D of X . Shwartz-Ziv & Tishby used the TANH activation function for the hidden layers , with bl = −1 , bu = 1 ( bl = 0 for the output SOFTMAX layer ) and m = 30 bins . The estimated I ( X ; T ) and I ( T ; Y ) are plotted in the information plane , providing a visual representation of the information flow in the network during training ( see example in Figure 1b ) . ( Footnote 1 : For IB , finding the optimal representation T can be formulated as the minimization of the Lagrangian I ( X ; T ) − βI ( T ; Y ) subject to the Markov chain Y → X → T and β ∈ R+ ( Tishby & Zaslavsky , 2015 ) . ) Based on the obtained results , Shwartz-Ziv & Tishby ( 2017 ) make several observations ; one notable observation is the occurrence of two phases : an empirical risk minimization phase and a compression phase . The first phase , also referred to as the fitting phase , is characterized by increasing I ( T ; Y ) related to a decreasing loss . The subsequent compression phase is characterized by decreasing I ( X ; T ) , and it has been argued that this compression leads to better generalization . Critique and Controversy .
The work by Tishby & Zaslavsky ( 2015 ) and Shwartz-Ziv & Tishby ( 2017 ) has jump-started an increasing interest in the IB method for deep learning , with several papers investigating or extending their contributions ; see the review by Geiger ( 2020 ) . However , as mentioned , their work also received criticism . In particular , the compression phase as a general phenomenon has been called into question . Saxe et al . ( 2018 ) published a paper refuting several of the claims made by Shwartz-Ziv & Tishby ( 2017 ) . They based their criticism on a replication of the experiment done by Shwartz-Ziv & Tishby ( 2017 ) where they replaced the bounded TANH activation function with the unbounded RELU activation function . When discretizing , they used the maximum activation observed across all epochs for the upper binning bound bu . The article claims that the two phases observed by Shwartz-Ziv & Tishby occurred because the activations computed using TANH saturate close to the boundaries −1 and 1 . The claim is supported by experiments using RELU activations and m = 100 bins , in which the two phases are not observed . The critique paper by Saxe et al . was published at ICLR 2018 , but had already started a discussion during the review process the previous year , when Shwartz-Ziv & Tishby defended their paper in the online discussion forum OpenReview.net ( https://openreview.net/ ) , posting a response titled “ Data falsifying the claims of this ICLR submission all together ” ( Saxe et al. , 2017 ) . The response specifically states that “ The authors don ’ t know how to estimate mutual information correctly ” , referring to Saxe et al . ( 2018 ) , and goes on to provide an example with a network using RELU activations , which does indeed exhibit the two phases . In response , Saxe et al . performed further experiments using different estimators for MI : a state-of-the-art non-parametric KDE approach ( Kolchinsky & Tracey , 2017 ) and a k-NN based estimator ( Kraskov et al. , 2004 ) .
The authors still did not observe the two phases claimed by Shwartz-Ziv & Tishby . Following the discussion on OpenReview.net , several other papers have also commented on the controversy surrounding the information bottleneck . Noshad et al . ( 2019 ) presented a new MI estimator EDGE , based on dependency graphs , and tested it on the specific counter example using RELU activations as suggested by Saxe et al . ( 2018 ) , and they observed the two phases . Table I in the review by Geiger ( 2020 ) provides a nice overview of empirical IB studies and whether the compression phase was observed ( Darlow & Storkey , 2020 ; Jónsson et al. , 2020 ; Kirsch et al. , 2020 ; Noshad et al. , 2019 ; Raj et al. , 2020 ; Shwartz-Ziv & Tishby , 2017 ) or not ( Abrol & Tanner , 2020 ; Balda et al. , 2018 ; 2019 ; Tang Nguyen & Choi , 2019 ; Shwartz-Ziv & Alemi , 2020 ; Yu et al. , 2020 ) or the results were mixed ( Chelombiev et al. , 2019 ; Cheng et al. , 2019 ; Elad et al. , 2019 ; Fang et al. , 2018 ; Gabrié et al. , 2019 ; Goldfeld et al. , 2019 ; Saxe et al. , 2018 ; Schiemer & Ye , 2020 ; Wickstrøm et al. , 2019 ) . In conclusion , an important part of the controversy surrounding the IB hinges on the estimation of the information-theoretic quantities – this issue has to be solved before researching the information flow . Related Work . The effect of estimating MI by binning has been investigated before ; we again refer to Geiger ( 2020 ) for a good overview and discussion . Shwartz-Ziv & Alemi ( 2020 ) consider infinite ensembles of infinitely-wide networks , which renders MI computation feasible , but do not observe a compression phase . Chelombiev et al . ( 2019 ) apply adaptive binning , which , while less prone to issues caused by having the “ wrong ” number of bins , is still an estimation and thus also suffers from the same problems . Goldfeld et al .
( 2019 ) explore IB analysis by use of stochastic neural networks , which allow them to show that the compression phase in these noisy networks occurs due to clustering of the hidden representations . While theoretically interesting , the stochastic neural networks are still qualitatively different from deterministic ones and thus not directly applicable in practice . Raj et al . ( 2020 ) conducted an IB analysis of binary networks , where both the activations and weights can only be ±1 , which allows for exact computation of MI . The binary networks are significantly different from the networks used in the original studies , whereas applying the IB analysis to quantized versions of networks from these studies allows for a more direct comparison . At first glance , the work by Raj et al . ( 2020 ) could be viewed as taking our approach to the extreme . However , the computations and training dynamics of the binary networks are qualitatively different from our study and the original IB work . For example , the binary networks require modifications for gradient estimation ( e.g. , Raj et al . consider straight-through-estimator , approximate sign , and swish sign ) .
These findings have later been disputed (particularly due to the way mutual information is estimated), which has led to tens of papers investigating the phenomenon with mixed conclusions. The main idea of this paper is to eliminate the estimation problem by training neural networks with quantized activations, where the discrete mutual information can be computed exactly (in the limit of infinite samples). The original claims, controversy, and follow-up works are introduced and discussed in great detail. Original experiments are repeated with a very similar protocol to facilitate comparability of results. Additional experiments on MNIST are conducted, and ablations are performed. Results show the two distinct phases for some activation functions and some layers, but not for other activation functions. **Main contributions** 1) Thorough discussion of the history of the Information-plane analysis, the main claims and previous findings and the follow-up papers it has spawned. Significance: The background and related work is well researched and well presented, which is very helpful for readers who have not followed this line of research closely. The only downside is that there is a fairly recent review (which is cited in the paper) which somewhat limits significance. 2) Quantized-activation training to avoid estimation errors when computing mutual information terms. Significance: the idea is very sensible in principle. Unfortunately, as some of the ablations show, the qualitative results can depend strongly on the quantization bit-width chosen. This is unfortunate since the main goal was to eliminate the influence of binning in naive estimation of the mutual information. Nonetheless, the results are exact and reliable for training quantized-activation neural networks - what is not clear is how to choose the quantization. 3) Reproduction of original experiments of Shwartz-Ziv & Tishby and Saxe et al.
By sticking as closely as possible to the original training protocols, the new results are as comparable as possible (which does not mean that it is guaranteed that quantized-activation training dynamics are similar to “non-quantized” dynamics analyzed via binning - but empirically this seems to hold to a large degree). The paper also shows interesting ablations and additional experiments on MNIST. Significance: The most important experiments to run are included in the paper and important ablations are shown. The significance of the results could be improved by showing larger scale experiments on different kinds of networks (CNNs, ResNets, Transformers, …). | SP:5626c1fc910929420a1453636be1da17572c3872 |
Information Bottleneck: Exact Analysis of (Quantized) Neural Networks | 1 INTRODUCTION . Improving our theoretical understanding of why over-parameterized deep neural networks generalize well is arguably one of the main problems in current machine learning research ( Poggio et al. , 2020 ) . Tishby & Zaslavsky ( 2015 ) suggested analyzing deep neural networks based on their Information Bottleneck ( IB ) concept , which is built on measurements of mutual information ( MI ) between the activations of hidden layers and the input and target ( Tishby et al. , 1999 ) ; for an overview see Geiger ( 2020 ) . Shwartz-Ziv & Tishby ( 2017 ) empirically studied the IB principle applied to neural networks and made several qualitative observations about the training process ; especially , they observed a fitting phase and a compression phase . The latter information-theoretic compression is conjectured to be a reason for good generalization performance and has widely been considered in the literature ( Abrol & Tanner , 2020 ; Balda et al. , 2018 ; 2019 ; Chelombiev et al. , 2019 ; Cheng et al. , 2019 ; Darlow & Storkey , 2020 ; Elad et al. , 2019 ; Fang et al. , 2018 ; Gabrié et al. , 2019 ; Goldfeld et al. , 2019 ; Jónsson et al. , 2020 ; Kirsch et al. , 2020 ; Tang Nguyen & Choi , 2019 ; Noshad et al. , 2019 ; Schiemer & Ye , 2020 ; Shwartz-Ziv & Alemi , 2020 ; Wickstrøm et al. , 2019 ; Yu et al. , 2020 ) . The work and conclusions by Shwartz-Ziv & Tishby ( 2017 ) received a lot of critique , with the generality of their claims being doubted ; especially Saxe et al . ( 2018 ) argued that the results by Shwartz-Ziv & Tishby do not generalize to networks using a different activation function . Their critique was again refuted by the original authors with counter-claims about incorrect estimation of the MI , highlighting an issue with the approximation of MI in both studies . Our goal is to verify the claims by Shwartz-Ziv & Tishby and the critique by Saxe et al .
in a setting where the MI can be computed exactly . These studies consider neural networks as theoretical entities working with infinite precision , which makes computation of the information theoretic quantities problematic ( for a detailed discussion we refer to Geiger , 2020 ; see also Section 3 ) . Assuming continuous input distributions , a deterministic network using any of the standard activation functions ( e.g. , RELU , TANH ) can be shown to have infinite MI ( Amjad & Geiger , 2019 ) . If an empirical input distribution defined by a data set D is considered ( as is the case in many of the previous studies ) , then randomly-initialized deterministic neural networks with invertible activation functions will most likely result in trivial measurements of MI in the sense that the MI is finite but always maximal , that is , equal to log |D| ( Goldfeld et al. , 2019 ; Amjad & Geiger , 2019 ) . In order to obtain non-trivial measurements of MI , real-valued activations are usually discretized by binning the values , throwing away information in the process . The resulting estimated MI can be shown to be highly dependent on this binning ; we refer to Geiger ( 2020 ) for a detailed discussion . Instead of approximating the MI in this fashion , we take advantage of the fact that modern computers – and thus neural networks – are discrete in the sense that a floating point value can typically take at most 2^32 different values . Because 32-bit precision networks may still be too precise to observe compression ( i.e. , information loss ) , we apply quantization to the neural network system to an extent that we can compute informative quantities ; that is , we amplify the effect of the information loss due to the discrete computations in the neural network . One may argue that we just moved the place where the discretization is applied .
This is true , but leads to a fundamental difference : previous studies applying the discretization post-hoc rely on the in general false assumption that the binned MI approximates the continuous MI well – and thus introduce measurement errors , which may occlude certain phenomena and/or lead to artifactual observations . In contrast , our computations reflect the true information flow in a network during training . Our study confirms that estimation of MI by binning may lead to strong artifacts in IB analyses and shows that : • Both fitting and compression phases occur in the output SOFTMAX layer . • For the hidden layers , the fitting phase occurs for both TANH and RELU activations . • When using TANH in the hidden layers , compression is only observed in the last hidden layer . • When using RELU , we did not observe compression in the hidden layers . • Even when applying low precision quantization , more complex networks with many neurons in each layer are observed to be too expressive to exhibit compression , as no information is lost . • Our setting rules out the MI approximation as the reason for these different IB dynamics . The next section introduces the IB concept with a focus on its application to neural networks including the critique and controversy as well as related work . Section 3 discusses issues relating to the estimation of MI , and the idea behind our contribution . Section 4 presents our experiments , results and discussion before we conclude in Section 5 . 2 THE INFORMATION BOTTLENECK . Preliminaries . Given a continuous random variable ( r.v . ) X with density function p ( x ) and support X , the continuous entropy H ( X ) of X is a measure of the uncertainty associated with X and is given by H ( X ) = − ∫ X p ( x ) log p ( x ) dx . Given two r.v.s X and Y with density functions p ( x ) and p ( y ) and supports X and Y , the mutual information I ( X ; Y ) of X and Y is a measure of the mutual “ knowledge ” between the two variables .
The symmetric I(X;Y) is given by I(X;Y) = ∫_Y ∫_X p(x,y) log [ p(x,y) / (p(x) p(y)) ] dx dy. In many cases it is impossible to compute the continuous entropy and MI for continuous r.v.s exactly, due to limited samples or computational limits, or because they may not be finite (Geiger, 2020). Instead, we often estimate the quantities by their discrete counterparts. When X is a discrete r.v., we consider the Shannon entropy H(X) = −∑_x P(x) log P(x). Correspondingly, the mutual information I(X;Y) of two discrete r.v.s X, Y is given by I(X;Y) = ∑_{x,y} P(x,y) log [ P(x,y) / (P(x) P(y)) ]. We have the following useful identity for both the continuous and discrete MI: I(X;Y) = H(X) − H(X|Y), (1) where H(X|Y) is the conditional entropy of X given Y. IB Definition. The IB method was proposed by Tishby et al. (1999). It is an information-theoretic framework for extracting relevant components of an input r.v. X with respect to an output r.v. Y. These relevant components are found by "squeezing" the information from X through a bottleneck, in the form of an r.v. T. In other words, T is a compression of X. The idea generalizes rate-distortion theory, in which we wish to compress X, obtaining T, such that I(X;T) is minimized subject to a constraint on the expected distortion d(x,t) wrt. the joint distribution p(x,t) (Tishby et al., 1999). In the IB framework, the distortion measure d is replaced by the negative loss in MI between T and the output Y, I(T;Y). Both IB and rate distortion are lossy compression schemes.1 The data processing inequality (DPI) I(Y;X) ≥ I(Y;T) holds for the IB; that is, the bottleneck r.v. cannot contain more information about the label than the input. One drawback of the information bottleneck method is the dependence on the joint distribution, p(x,y), which is generally not known. Shamir et al.
(2010) addressed this issue and showed that the MI, the main ingredient of the method, can be estimated reliably with fewer samples than required for estimating the true joint distribution. As is common in the IB literature, whenever we discuss the MI computed on a finite data set D, we assume that p(x,y) corresponds to the empirical distribution defined by D, which is true for the experiments in Section 4.1. In practice, the assumption has to be relaxed to the data being drawn i.i.d. from p(x,y). However, any uncertainty resulting from the finite-sample estimation in the latter case is not considered in our discussions. IB In Deep Learning. Tishby & Zaslavsky (2015) applied the IB concept to neural networks. They view the layers of a deep neural network (DNN) as consecutive compressions of the input. They consider the Markov chain Y → X → T1 → T2 → ... → TL = Ŷ, where Ti denotes the i-th hidden layer of the L-layer network and TL = Ŷ denotes the output of the network. Again, the bottleneck must satisfy the DPI: I(Y;X) ≥ I(Y;T1) ≥ I(Y;T2) ≥ ... ≥ I(Y;Ŷ), (2) I(X;X) ≥ I(X;T1) ≥ I(X;T2) ≥ ... ≥ I(X;Ŷ). (3) Estimating the MI of continuous variables is difficult (Alemi et al., 2017), as is evident from the many different methods proposed (Kraskov et al., 2004; Kolchinsky & Tracey, 2017; Noshad et al., 2019). In the discrete case, I(X;T) and I(T;Y) can be computed as I(X;T) = H(T) − H(T|X) = H(T), (4) I(T;Y) = I(Y;T) = H(T) − H(T|Y), (5) following from (1) and using in (4) the assumption that T is a deterministic function of X. However, for deterministic neural networks the continuous entropies may not be finite (Goldfeld et al., 2019; Saxe et al., 2018; Amjad & Geiger, 2019). Shwartz-Ziv & Tishby (2017) estimate the MI via (4) and (5) by discretizing T and then computing the discrete entropy.
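For illustration, a minimal sketch (our own toy code, not the original implementation; names are ours) of the binning-based estimator of (4) and (5): activations are clipped and binned, each sample's binned activation vector is treated as one discrete symbol, and the discrete entropies are computed from empirical frequencies.

```python
import numpy as np
from collections import Counter

def discrete_entropy(labels):
    """Shannon entropy (in bits) of the empirical distribution of `labels`."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def binned_mi_estimates(T, Y, lo=-1.0, hi=1.0, m=30):
    """Estimate I(X;T') and I(T';Y) by binning activations T (one row per sample).

    Following (4) and (5): for deterministic T, I(X;T') = H(T') and
    I(T';Y) = H(T') - H(T'|Y), with T' the binned representation.
    """
    bins = np.linspace(lo, hi, m + 1)
    # bin-index vector per sample, hashed to a single discrete symbol
    Tq = np.digitize(np.clip(T, lo, hi), bins[1:-1])
    symbols = [tuple(row) for row in Tq]
    h_t = discrete_entropy(symbols)
    # conditional entropy H(T'|Y) = sum_y P(y) H(T'|Y=y)
    h_t_given_y = 0.0
    for y in np.unique(Y):
        mask = (Y == y)
        h_t_given_y += mask.mean() * discrete_entropy(
            [s for s, keep in zip(symbols, mask) if keep])
    return h_t, h_t - h_t_given_y   # I(X;T'), I(T';Y)
```

Note that I(X;T') here equals H(T') because T is a deterministic function of X, so H(T'|X) = 0.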
They trained a network (shown in Figure 1a) on a balanced synthetic data set consisting of 12-bit binary inputs and binary labels. The network was trained for a fixed number of epochs while training and test error were observed. For every epoch and every layer T, the discretization is done by binning of T: given upper and lower bounds bu, bl, and m ∈ N, we let B : R → [m] denote the binning operation that maps x ∈ [bl, bu] to the index of the corresponding bin from the set of m uniformly distributed bins in [bl, bu]. Overloading the notation, we apply B directly to a vector in R^d in order to obtain the resulting vector in [m]^d of bin indices. Using the discretized T′ = B(T), I(X;T′) and I(T′;Y) are then computed directly by (4) and (5), using estimates of P(T′), P(Y) and P(T′|Y) over all samples D of X. Shwartz-Ziv & Tishby used the TANH activation function for the hidden layers, with bl = −1, bu = 1 (bl = 0 for the output SOFTMAX layer) and m = 30 bins. The estimated I(X;T) and I(T;Y) are plotted in the information plane, providing a visual representation of the information flow in the network during training (see example in Figure 1b). (Footnote 1: For IB, finding the optimal representation T can be formulated as the minimization of the Lagrangian I(X;T) − βI(T;Y) subject to the Markov chain Y → X → T and β ∈ R+ (Tishby & Zaslavsky, 2015).) Based on the obtained results, Shwartz-Ziv & Tishby (2017) make several observations; one notable observation is the occurrence of two phases: an empirical risk minimization phase and a compression phase. The first phase, also referred to as the fitting phase, is characterized by increasing I(T;Y) related to a decreasing loss. The subsequent compression phase is characterized by decreasing I(X;T), and it has been argued that this compression leads to better generalization. Critique and Controversy.
The work by Tishby & Zaslavsky (2015) and Shwartz-Ziv & Tishby (2017) has jump-started an increasing interest in the IB method for deep learning, with several papers investigating or extending their contributions; see the review by Geiger (2020). However, as mentioned, their work also received criticism. In particular, the compression phase as a general phenomenon has been called into question. Saxe et al. (2018) published a paper refuting several of the claims made by Shwartz-Ziv & Tishby (2017). They based their criticism on a replication of the experiment done by Shwartz-Ziv & Tishby (2017) in which they replaced the bounded TANH activation function with the unbounded RELU activation function. When discretizing, they used the maximum activation observed across all epochs for the upper binning bound bu. The article claims that the two phases observed by Shwartz-Ziv & Tishby occurred because the activations computed using TANH saturate close to the boundaries −1 and 1. The claim is supported by experiments using RELU activations and m = 100 bins, in which the two phases are not observed. The critique paper by Saxe et al. was published at ICLR 2018, but started a discussion already during the review process the previous year, when Shwartz-Ziv & Tishby defended their paper in the online discussion forum OpenReview.net, posting a response titled "Data falsifying the claims of this ICLR submission all together" (Saxe et al., 2017). The response specifically states that "The authors don't know how to estimate mutual information correctly", referring to Saxe et al. (2018), and goes on to provide an example with a network using RELU activations, which does indeed exhibit the two phases. In response, Saxe et al. performed further experiments using different estimators for MI: a state-of-the-art non-parametric KDE approach (Kolchinsky & Tracey, 2017) and a k-NN based estimator (Kraskov et al., 2004).
The authors still did not observe the two phases claimed by Shwartz-Ziv & Tishby. Following the discussion on OpenReview.net (https://openreview.net/), several other papers have also commented on the controversy surrounding the information bottleneck. Noshad et al. (2019) presented a new MI estimator, EDGE, based on dependency graphs, and tested it on the specific counterexample using RELU activations suggested by Saxe et al. (2018); they observed the two phases. Table I in the review by Geiger (2020) provides a good overview of empirical IB studies and whether the compression phase was observed (Darlow & Storkey, 2020; Jónsson et al., 2020; Kirsch et al., 2020; Noshad et al., 2019; Raj et al., 2020; Shwartz-Ziv & Tishby, 2017), not observed (Abrol & Tanner, 2020; Balda et al., 2018; 2019; Tang Nguyen & Choi, 2019; Shwartz-Ziv & Alemi, 2020; Yu et al., 2020), or the results were mixed (Chelombiev et al., 2019; Cheng et al., 2019; Elad et al., 2019; Fang et al., 2018; Gabrié et al., 2019; Goldfeld et al., 2019; Saxe et al., 2018; Schiemer & Ye, 2020; Wickstrøm et al., 2019). In conclusion, an important part of the controversy surrounding the IB hinges on the estimation of the information-theoretic quantities; this issue has to be resolved before the information flow can be studied. Related Work. The effect of estimating MI by binning has been investigated before; we again refer to Geiger (2020) for a good overview and discussion. Shwartz-Ziv & Alemi (2020) consider infinite ensembles of infinitely-wide networks, which renders MI computation feasible, but do not observe a compression phase. Chelombiev et al. (2019) apply adaptive binning, which, while less prone to issues caused by having the "wrong" number of bins, is still an estimate and thus suffers from the same problems. Goldfeld et al.
(2019) explore IB analysis using stochastic neural networks, which allows them to show that the compression phase in these noisy networks occurs due to clustering of the hidden representations. While theoretically interesting, stochastic neural networks are still qualitatively different from deterministic ones and thus not directly applicable in practice. Raj et al. (2020) conducted an IB analysis of binary networks, where both the activations and weights can only be ±1, which allows for exact computation of MI. These binary networks are significantly different from the networks used in the original studies, whereas applying the IB analysis to quantized versions of networks from those studies allows for a more direct comparison. At first glance, the work by Raj et al. (2020) could be viewed as taking our approach to the extreme. However, the computations and training dynamics of binary networks are qualitatively different from those in our study and the original IB work. For example, binary networks require modifications for gradient estimation (e.g., Raj et al. consider the straight-through estimator, approximate sign, and swish sign). | This paper considers the important problem of mutual information estimation in neural networks, a problem at the root of a debate on the usefulness of the information-bottleneck approach for the analysis of information flow in neural networks. There exist many approximation schemes for estimating the inter-layer mutual information, but they may lead to different conclusions due to their sensitivity to discretization schemes. The authors propose instead to study discretized neural nets, trained with a simple learning procedure that takes the discretization into account and thus does not need post-training discretization. In this case the mutual information can be computed exactly, and the data quantify the true information flow during training.
| SP:5626c1fc910929420a1453636be1da17572c3872 |
A Simple Approach to Adversarial Robustness in Few-shot Image Classification | 1 INTRODUCTION. Few-shot learning presents the challenge of generalizing to unseen tasks with limited data. The problem is aimed at learning quickly from few examples of data, which is generally considered the hallmark of human intelligence. This is an important practical problem due to the scarce availability of fully annotated data in the real world. Researchers have shown that such a setting can be considered for various real-world computer vision tasks such as image classification (Finn et al., 2017; Chen et al., 2019), object detection (Wang et al., 2020), image segmentation (Rakelly et al., 2018), face recognition (Guo et al., 2020) and medical analysis (Maicas et al., 2018). As a result, it is of paramount importance that such safety-critical systems are reliable and robust to changes in input. Specifically, in this work we consider robustness to adversarial examples: carefully crafted perturbations, computed using gradients, that fool the classifier when added to inputs. The most common method of improving robustness is adversarial training (Goodfellow et al., 2015), which involves training on adversarial examples generated by an adversary of choice. Traditional adversarially robust methods (Madry et al., 2018; Goodfellow et al., 2015; Szegedy et al., 2013) consider a data-rich setting where many examples are available per category. It has also been shown that adversarial generalization possibly requires significantly more data (Schmidt et al., 2018). This becomes challenging in a scenario where the end-user has access to a limited amount of annotated data but is interested in building a robust few-shot classifier. Such a setting is more practical, and it is important to develop methods which can work with minimal effort in the pre-deployment stage.
We show from our experiments that a simple approach of finetuning the network on clean data, starting from an adversarially robust base model, can lead to significant improvement in robustness with minimal resources. Previous works on improving robustness for few-shot classifiers focus mainly on meta-learning approaches, in which the base model is trained on adversarial examples of episodic data. We show that standard adversarial training on a large dataset is sufficient for learning a robust classifier. This makes our method simple and scalable, making the process of training robust classifiers straightforward and also creating directions to explore methods from the robustness literature for the few-shot setting. It is important to understand the problem of few-shot learning in order to develop its robust counterparts. The goal in few-shot learning is to learn transferable knowledge for generalization to tasks with limited data. Approaches have generally been partitioned into metric-learning (Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018), optimization-based (Finn et al., 2017; Ravi & Larochelle, 2016) and hallucination-based methods (Hariharan & Girshick, 2017; Yang et al., 2021; Antoniou et al., 2017; Wang et al., 2018). The most common work among optimization-based methods is MAML (Finn et al., 2017), which aims at learning a network initialization, via a bi-level optimization procedure, that when finetuned on limited data is able to generalize to the new task. Recent works have shown that meta-learning methods can be extended to include adversarial robustness as well. (Goldblum et al., 2019; Wang et al., 2021) perform adversarial training on top of meta-learners to improve robustness significantly. However, adversarial training on its own is expensive, and combining it with meta-learning makes the problem computationally intensive. (Wang et al.
, 2021) showed that there is a trade-off between training robust meta-learners and performance, both in terms of standard and robust accuracy, motivating the need for a simpler approach. Another interesting line of work, focused on improving few-shot learning, shows that a simple method which involves training on large-scale data and finetuning the model on the few-shot dataset can match or even outperform meta-learning methods (Chen et al., 2019; Dhillon et al., 2019). The intuition is that the model sees examples from all categories and can get a general sense of semantics, rather than seeing only episodic data. Such a setting also makes it easier to train on large-scale data, which can lead to further improvements as shown in (Dhillon et al., 2019). Our results show that considering a simple setting can be beneficial for adversarial robustness as well. We consider the few-shot setting and show that adversarial training along with a simple nearest-centroid-based classifier can outperform previous methods in terms of robustness. Such a setting is practically relevant, since the adversarial training on large data needs to be done just once and robustness for few-shot classes can be achieved without creating adversarial examples for the specific task. We believe it also becomes easier to incorporate new approaches to robustness, such as verifiably robust classifiers (Gowal et al., 2018; Cohen et al., 2019), and can bring together robust methods for both large and limited dataset settings. In the following sections, we describe our method and discuss relevant related work. We present experimental findings and implementation details in the subsequent sections, followed by conclusion and directions for future work. 2 METHOD. Here we introduce notation and provide a description of our method. Our first objective is to learn a feature extractor fθb and linear classifier Cωb using the abundantly-labeled base dataset Xb.
At the next stage, when an N-way K-shot few-shot task is sampled from the novel dataset Xn, we use only the feature extractor fθb and learn a new linear classifier Cωn such that it can generalize to unseen examples from the novel categories. We show that a simple approach of training a robust base model and then adapting it to novel categories can outperform previous approaches. We divide our approach into two stages: (1) Robust Base training and (2) Novel training. 2.1 ROBUST BASE TRAINING. Given a base dataset Xb with a large number of annotated examples per category, we perform adversarial training using an iterative adversary such as PGD (Madry et al., 2018). Specifically, we solve the min-max objective θ∗ = min_θ E_{(x,y)∈Xb} [ max_{||δ||_p < ε} L(θ, x + δ, y) ] (1) Here, L(θ, x, y) represents the training objective, which is commonly cross-entropy, and θ = (θb, ωb) represents the combination of the feature extractor and base classifier parameters. There are different methods for optimizing the inner maximization in Equation 1. For all our experiments we use the Projected Gradient Descent (PGD) algorithm, an iterative algorithm presented in (Madry et al., 2018), with p = ∞, which corresponds to finding a perturbation δ within an ε-bounded hypercube around x that maximizes the objective. Once we find the perturbation, the perturbed input is added to the training set and the parameters are tuned. This method is called adversarial training and is the most widely used method to improve robustness to adversarial examples. Note that adversarial training, which is a computationally expensive procedure, needs to be performed just once, using the base dataset. In the next stage, we use only clean examples, which makes it practical and easy to train robust few-shot classifiers for novel tasks. Weight averaging: Weight averaging (WA) has been shown to be a simple way to improve generalization (Izmailov et al., 2018; Garipov et al.
, 2018) in deep networks, as it approximates ensembling in a temporal fashion and can find flatter optima in the loss surface. This method has been used in adversarial training (Gowal et al., 2020; Chen et al.) for improving robustness in the standard classification task. Since we are interested in using the base parameters at the next stage, we perform weight averaging only for the feature extractor parameters θb and show that this can be used in the few-shot setting. Similar to (Gowal et al., 2020), we keep a separate copy of the weights and at every iteration perform the exponential moving average update θb′ ← τθb′ + (1 − τ)θb, using θb′ during evaluation. We set τ = 0.999 in all our experiments. 2.2 NOVEL TRAINING. During this stage, we consider the N-way, K-shot setting as the novel task and adapt our learnt feature extractor fθb using classifier Cωn. For all our experiments, we found the best results when the weights of the feature extractor fθb are frozen and not optimized during novel training. Intuitively, this can be understood as not wanting the parameters of the feature extractor to be biased towards the few-shot examples. And since we are interested in learning only from clean data during novel training, there can also be an effect of forgetting the robustness learnt at the base stage. This was observed in (Goldblum et al., 2019), where only the final layer was trained and the rest of the parameters were frozen. During the novel training stage, we use only clean examples and not adversarial examples, making the process straightforward. Linear classifier: The simplest possible baseline is to learn a linear model on top of the frozen feature extractor using the few-shot examples of novel categories. As shown in our experiments, this simple baseline on its own achieves reasonable performance compared to previous approaches.
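As a concrete illustration of the two base-training ingredients described above, here is a sketch of the PGD inner maximization of Equation 1 and the exponential-moving-average weight update. The toy binary logistic model (with an analytic input gradient) and all names are ours, not the deep network or code used in the paper.

```python
import numpy as np

def pgd_linf(x, y, w, eps=0.1, alpha=0.02, steps=10):
    """l_inf PGD attack on a toy binary logistic loss
    L(x) = log(1 + exp(-y * w.x)), a stand-in for the inner
    maximization of Equation (1)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        margin = y * np.dot(w, x + delta)
        grad = -y * w / (1.0 + np.exp(margin))   # dL/dx for this loss
        # gradient-sign step, projected back onto the eps-hypercube
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta

def ema_update(theta_avg, theta, tau=0.999):
    """Exponential moving average of parameters (weight averaging)."""
    return tau * theta_avg + (1.0 - tau) * theta
```

For a real network the input gradient would come from autodiff; the projection and sign-step structure are the same.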
This baseline also suggests that a robust base classifier corresponds to a robust novel classifier, and that a simple approach such as ours is enough to achieve robustness for few-shot classifiers. However, as observed in previous works (Wang et al., 2021; Goldblum et al., 2019) and in our experiments, this approach alone is not sufficient to achieve improved robustness. Interestingly, we achieve results much closer to the state of the art with this simple baseline than previous works did. One challenge associated with using few-shot data is that the model can become biased towards the specific samples and may not capture the true class distribution. Hence there is a need for better-calibrated classifiers. Background on Distribution Calibration (DC) (Yang et al., 2021): Recent work (Yang et al., 2021) has shown that the standard accuracy of few-shot classifiers can be improved by using Distribution Calibration. They present a free-lunch hallucination-based method where the feature distributions of the novel categories are calibrated using the base dataset, exploiting the similarity between the base and novel datasets. The mean and covariance of each novel category are calibrated using the statistics of the base data. They use these statistics to hallucinate, or sample, many points from a Gaussian distribution, and learn a logistic regression classifier. This simple method was shown to improve standard accuracy significantly under various settings. Mean Calibration and Nearest Centroid (NC): The DC method can be computationally expensive due to the calculation of the covariance matrix, which can be of O(N·D^2) complexity, where D is the dimensionality of the feature space and N is the number of data points in the base dataset. The covariance matrix is also expensive to store in memory. Moreover, sampling from a multivariate Gaussian with non-diagonal covariance is also expensive and can be of the order of at least O(D^2.3) (Bishop, 2006).
These issues can reduce the applicability of the approach for certain architectures. Another drawback of using the hallucinated features is that the model can become biased towards the clean features and learn a non-robust final classifier. Note that since these additional data points are generated in the feature space and not the image space, it is not possible to create adversarial versions of these features and perform adversarial training using the large set of features. This poses the problem of improving performance during novel training without sacrificing robustness. To overcome these problems, we present a simple method where we rely only on the calibrated mean and classify a query sample using a non-parametric nearest-centroid algorithm. We find the nearest base-category centers to each novel training sample and then average them along with the novel training sample to obtain the new mean, or centroid, for the novel category, similar to (Yang et al., 2021). We do not consider the covariance matrix in our method, and we find this approximation works equally well in our experiments. More formally: µj = (1/(m+1)) (zj + Σ_{i∈Sj} µ_{bi}) (2) where µj is the centroid for the novel category j, Sj is the set of m base-category centers that are closest to zj, and µ_{bi} is the mean of base category i in the feature space. In the K-shot setting, we calculate a centroid for each sample and average them to get one centroid per category. At inference time, we simply find the nearest centroid to the query point and assign its label: ŷ = y_j with j = argmax_j µ̃_j^T z̃ (3) where ã = a / ||a||_2 is the ℓ2-normalized version of the vector a and ŷ is the predicted category. Note that ℓ2 normalization is performed for both the query point and the centroids. Since we consider the normalized versions of the vectors, the Euclidean distance reduces to the form in Equation 3, similar to recent works (Grill et al., 2020).
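A minimal numpy sketch (ours; the toy data and function names are illustrative, not the paper's code) of the mean-calibrated nearest-centroid classifier of Equations 2 and 3, together with a square-root power transform of the kind used as preprocessing:

```python
import numpy as np

def tukey(z, lam=0.5):
    """Power transform (lam=0.5 -> element-wise square root);
    assumes non-negative features."""
    return np.power(z, lam)

def calibrated_centroids(support, labels, base_means, m=2):
    """Equation (2): average each support feature with its m nearest
    base-class means, then average the per-sample centroids per class."""
    centroids = {}
    for z, y in zip(support, labels):
        d = np.linalg.norm(base_means - z, axis=1)
        nearest = base_means[np.argsort(d)[:m]]
        mu = (z + nearest.sum(axis=0)) / (m + 1)
        centroids.setdefault(y, []).append(mu)
    return {y: np.mean(v, axis=0) for y, v in centroids.items()}

def predict(query, centroids):
    """Equation (3): normalized (cosine) nearest-centroid rule."""
    qn = query / np.linalg.norm(query)
    best, best_sim = None, -np.inf
    for y, mu in centroids.items():
        sim = np.dot(mu / np.linalg.norm(mu), qn)
        if sim > best_sim:
            best, best_sim = y, sim
    return best
```

Inference stores only one prototype per class, which is the memory advantage over a nearest-neighbor classifier.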
Our inference can be considered similar to the Linear Classifier method, except that our centers (µj) are estimated by simple averaging rather than learnt using SGD. Note that, similar to (Yang et al., 2021), as a preprocessing step we transform the embeddings by taking the square root of each dimension so that their distribution gets closer to a Gaussian form. This can be seen as "Tukey's Ladder of Powers transformation" (Tukey et al., 1977) with λ = 0.5. At inference time, the Nearest Centroid (NC) method requires less memory and computation than a Nearest Neighbor classifier, since we only have to store and compare with one prototype per class rather than the entire training set. | This paper aims to address the problem of adversarial attacks in few-shot image classification. The work is motivated by the challenging scenario in which a significant amount of data is needed to train an adversarially robust classifier, yet little data is available in the few-shot setting. This work proposes and demonstrates a simple approach for a robust few-shot classifier. A model is first adversarially trained on the base classes to produce a robust base model. The feature extractor of the robust base model is then frozen. With the frozen feature extractor, two different methods are used for training a novel/few-shot classifier: (1) training a linear classifier, and (2) computing a category centroid for each novel class and performing nearest-centroid classification using the centroids. The experiments demonstrate that this simple baseline can outperform prior baselines on 3 datasets. The ablation study includes 1-shot vs. 5-shot and different adversarial training methods. However, the core contribution of this paper is weakened due to the lack of theoretical analysis. | SP:b27d0bb34999cb1d197f68cca7e4c01c433ed6e8 |
A Simple Approach to Adversarial Robustness in Few-shot Image Classification | 1 INTRODUCTION . Few-shot learning presents the challenge of generalizing to unseen tasks with limited data . The problem is aimed at learning quickly from few examples of data , which is generally considered the hallmark of human intelligence . This is an important practical problem due to the scarce availability of fully annotated data in the real world . Researchers have shown that such a setting can be considered for various real world computer vision tasks such as image classification ( Finn et al. , 2017 ; Chen et al. , 2019 ) , object detection ( Wang et al. , 2020 ) , image segmentation ( Rakelly et al. , 2018 ) , facerecognition ( Guo et al. , 2020 ) and medical analysis ( Maicas et al. , 2018 ) . As a result , it is of paramount importance that such safety-critical systems are reliable and robust to changes in input . Specifically in this work , we consider robustness to adversarial examples - carefully crafted perturbations using gradients that when added to inputs , fool the classifier . The most common method of improving robustness is by adversarial training ( Goodfellow et al. , 2015 ) which involves training on adversarial examples using adversary of choice . Traditional adversarially robust methods ( Madry et al. , 2018 ; Goodfellow et al. , 2015 ; Szegedy et al. , 2013 ) consider a data-rich setting where many examples are available per category . It has also been shown that adversarial generalization possibly requires significantly more data ( Schmidt et al. , 2018 ) . This becomes challenging in a scenario where the end-user has access to limited amount of annotated data but is interested in building a robust few-shot classifier . Such a setting is more practical and it is important to develop methods which can work with minimal effort in the pre-deployment stage . 
We show from our experiments that a simple approach of finetuning the network on clean data from an adversarially robust base model can lead to significant improvement in robustness with minimal resources . Previous works on improving robustness for few-shot classifiers focus mainly on meta-learning approaches . Here the base model is trained on adversarial examples of episodic data . We show that standard adversarial training on a large dataset is sufficient for learning a robust classifier . This makes our method simple and scalable , making the process of training robust classifiers straightforward and also creating directions to explore methods from robustness literature for few-shot setting . It is important to understand the problem of few-shot learning in order to develop their robust counterparts . The goal in few-shot learning is to learn transferable knowledge for generalization to tasks with limited data . These have generally been partitioned into metric learning ( Snell et al. , 2017 ; Vinyals et al. , 2016 ; Sung et al. , 2018 ) , optimization-based ( Finn et al. , 2017 ; Ravi & Larochelle , 2016 ) and hallucination based methods ( Hariharan & Girshick , 2017 ; Yang et al. , 2021 ; Antoniou et al. , 2017 ; Wang et al. , 2018 ) . The most common work among optimization based methods is MAML ( Finn et al. , 2017 ) which aims at learning a network initialization using a bi-level optimization procedure , that when finetuned on limited data is able to generalize to the new task . Recent works have shown that meta-learning methods can be extended to include adversarial robustness as well . ( Goldblum et al. , 2019 ; Wang et al. , 2021 ) perform adversarial training on top of meta-learners to improve robustness significantly . However , adversarial training on its own is expensive and combining with meta-learning makes the problem computationally intensive . ( Wang et al. 
, 2021 ) showed that there exists a compromise between training robust meta-learners and performance , both in terms of standard and robust accuracy , motivating the need for a simpler approach . Another interesting line of work , focused on improving few-shot learning , shows that a simple method which involves training on large scale data and finetuning the model on the few-shot dataset can match or even outperform meta-learning methods ( Chen et al. , 2019 ; Dhillon et al. , 2019 ) . The intuition is that the model sees examples from all categories and can get a general sense of semantics rather than seeing only episodic data . Such a setting also makes it easier to train on large scale data that can lead to further improvements as shown in ( Dhillon et al. , 2019 ) . Our results show considering a simple setting can be beneficial for adversarial robustness as well . We consider the few-shot setting and show that adversarial training along with simple nearest centroid based classifier can outperform previous methods in terms of robustness . Such a setting is practically relevant , since the adversarial training on large data needs to be done just once and robustness for few-shot classes can be achieved without creating adversarial examples for the specific task . We believe it also becomes easier to incorporate new approaches to robustness , such as verifiably robust classifiers ( Gowal et al. , 2018 ; Cohen et al. , 2019 ) and can bring together robust methods for both large and limited dataset settings . In the following sections , we describe our method and discuss relevant related work . We present experimental findings and implementation details in the subsequent sections followed by conclusion and directions for future work . 2 METHOD . Here we introduce notation and provide a description of our method . Our first objective is to learn a feature extractor fθb and linear classifier Cωb using the abundantly-labeled base dataset Xb . 
At the next stage, when an N-way K-shot few-shot task is sampled from the novel dataset X_n, we use only the feature extractor f_θb and learn a new linear classifier C_ωn such that it can generalize to unseen examples from the novel categories. We show that a simple approach of training a robust base model and then adapting it to novel categories can outperform previous approaches. We divide our approach into two stages: (1) Robust Base training and (2) Novel training. 2.1 ROBUST BASE TRAINING . Given a base dataset X_b with a large number of annotated examples per category, we perform adversarial training using an iterative adversary such as PGD (Madry et al., 2018). Specifically, we solve the min-max objective

θ* = argmin_θ E_{(x,y)∈X_b} [ max_{||δ||_p ≤ ε} L(θ, x + δ, y) ]   (1)

Here, L(θ, x, y) represents the training objective, which is commonly cross-entropy, and θ = (θ_b, ω_b) represents the combination of the feature extractor and base classifier parameters. There are different methods for optimizing the inner maximization in Equation 1. For all our experiments we use the Projected Gradient Descent (PGD) algorithm, an iterative algorithm presented in (Madry et al., 2018), with p = ∞, which corresponds to finding a perturbation δ within an ε-bounded hypercube around x that maximizes the objective. Once we find the perturbation, the perturbed input is added to the training set and the parameters are tuned. This method is called adversarial training and is the most widely used method for improving robustness to adversarial examples. Note that adversarial training, which is a computationally expensive procedure, needs to be performed just once, using the base dataset. In the next stage, we use only clean examples, which makes it practical and easy to train robust few-shot classifiers for novel tasks. Weight averaging: Weight averaging (WA) has been shown to be a simple way to improve generalization (Izmailov et al., 2018; Garipov et al., 2018) in deep networks, as it approximates ensembling in a temporal fashion and can find flatter optima on the loss surface. This method has been used in adversarial training (Gowal et al., 2020; Chen et al.) to improve robustness in the standard classification task. Since we are interested in using the base parameters at the next stage, we perform weight averaging only for the feature extractor parameters θ_b and show that this can be used in the few-shot setting. Similar to (Gowal et al., 2020), we keep a separate copy of the weights and at every iteration apply the exponential moving average update θ'_b ← τ θ'_b + (1 − τ) θ_b, using θ'_b during evaluation. We set τ = 0.999 in all our experiments. 2.2 NOVEL TRAINING . During this stage, we consider the N-way, K-shot problem as the novel task and adapt our learnt feature extractor f_θb using classifier C_ωn. For all our experiments, we found the best results when the weights of the feature extractor f_θb are frozen and not optimized during novel training. Intuitively, this can be understood as not wanting the parameters of the feature extractor to be biased towards the few-shot examples. Moreover, since we learn only from clean data during novel training, finetuning the extractor could also cause it to forget the robustness learnt at the base stage. This was observed in (Goldblum et al., 2019), where only the final layer was trained and the rest of the parameters were frozen. During the novel training stage, we use only clean examples and not adversarial examples, making the process straightforward. Linear classifier: The simplest possible baseline is to learn a linear model on top of the frozen feature extractor using the few-shot examples of the novel categories. As shown in our experiments, this simple baseline on its own achieves reasonable performance compared to previous approaches.
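The base-training stage described above, the inner PGD maximization of Equation 1 followed by a parameter update and an EMA weight average, can be sketched compactly. The sketch below is a hedged toy, not the paper's implementation: the deep feature extractor is replaced by a linear (logistic-regression) model so that the input gradient needed by PGD has a closed form, and all hyperparameter values (eps, alpha, steps, tau) are illustrative rather than taken from the paper.

```python
import numpy as np

def pgd_attack(w, x, y, eps, alpha, steps):
    """Inner maximization of Eq. (1): L-infinity PGD on a logistic loss.
    The input gradient is closed-form here because the 'network' is linear."""
    delta = np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-((x + delta) @ w)))   # sigmoid(logits)
        grad_x = np.outer(p - y, w)                    # d(cross-entropy)/d(input), per sample
        delta = np.clip(delta + alpha * np.sign(grad_x), -eps, eps)
    return x + delta

def adv_train(x, y, eps=0.1, alpha=0.02, steps=5, lr=0.1, epochs=200, tau=0.99):
    """Adversarial training with a separate EMA copy of the parameters (weight averaging)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=x.shape[1])
    w_ema = w.copy()
    for _ in range(epochs):
        x_adv = pgd_attack(w, x, y, eps, alpha, steps)  # perturbed inputs for this epoch
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
        w -= lr * (x_adv.T @ (p - y)) / len(y)          # SGD step on the robust loss
        w_ema = tau * w_ema + (1.0 - tau) * w           # EMA update: w' <- tau*w' + (1-tau)*w
    return w, w_ema
```

In the real method the same structure applies, except the gradient with respect to the input is obtained by backpropagation through the network and w_ema is maintained only for the feature extractor parameters.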
This baseline also suggests that a robust base classifier corresponds to a robust novel classifier, and that a simple approach such as ours is enough to achieve robustness for few-shot classifiers. However, as observed in previous works (Wang et al., 2021; Goldblum et al., 2019) and in our experiments, this approach alone is not sufficient to achieve the best robustness. Interestingly, with this simple baseline we achieve results much closer to the state of the art than previous works did. One challenge associated with using few-shot data is that the model can become biased towards the specific samples and may not capture the true class distribution. Hence there is a need for better-calibrated classifiers. Background on Distribution Calibration (DC) (Yang et al., 2021): Recent work (Yang et al., 2021) has shown that the standard accuracy of few-shot classifiers can be improved by using Distribution Calibration. They present a free-lunch hallucination-based method in which the feature distributions of the novel categories are calibrated using the base dataset, exploiting the similarity between the base and novel datasets. The mean and covariance of each novel category are calibrated using the statistics of the base data. These statistics are used to hallucinate, or sample, many points from a Gaussian distribution, on which a logistic regression classifier is learnt. This simple method was shown to improve standard accuracy significantly under various settings. Mean Calibration and Nearest Centroid (NC): The DC method can be computationally expensive due to the calculation of the covariance matrix, which has O(N·D²) complexity, where D is the dimensionality of the feature space and N is the number of data points in the base dataset. The covariance matrix is also expensive to store in memory. Moreover, sampling from a multivariate Gaussian with a non-diagonal covariance is also expensive, on the order of at least O(D^2.3) (Bishop, 2006). These costs can reduce the applicability of the approach for certain architectures. Another downside of using the hallucinated features is that the model can become biased towards the clean features and learn a non-robust final classifier. Note that since these additional data points are generated in the feature space and not the image space, it is not possible to create adversarial versions of these features and perform adversarial training on the large set of features. This poses the problem of improving performance during novel training without sacrificing robustness. To overcome these problems, we present a simple method in which we rely only on the calibrated mean and classify a query sample using a non-parametric nearest-centroid algorithm. We find the nearest base-category centers to each novel training sample and then average them along with the novel training sample to obtain the new mean, or centroid, for the novel category, similar to (Yang et al., 2021). We do not consider the covariance matrix in our method, and we find that this approximation works equally well in our experiments. More formally:

µ_j = (1 / (m + 1)) ( z_j + Σ_{i∈S_j} µ_{b_i} )   (2)

where µ_j is the center for the novel category j, S_j is the set of m base-category centers that are closest to z_j, and µ_{b_i} is the mean of base category i in the feature space. In the K-shot setting, we calculate a centroid for each sample and average them to get one centroid per category. At inference time, we simply find the nearest center to the query point and assign its label:

ŷ = y_{j*},   j* = argmax_j µ̃_j^T z̃   (3)

where ã = a / ||a||₂ is the ℓ2-normalized version of a vector a and ŷ is the predicted category. Note that ℓ2 normalization is performed for both the query point and the centroids. Since we consider the normalized versions of the vectors, Euclidean distance reduces to the form in Equation 3, similar to recent works (Grill et al., 2020).
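Equations 2 and 3 translate directly into code. The NumPy sketch below is an illustration under assumed data layouts (a dict mapping each novel label to its support embeddings, and a matrix of base-category means), not the authors' implementation; any embedding preprocessing, such as the square-root transform, is omitted here.

```python
import numpy as np

def calibrated_centroids(support, base_means, m=2):
    """Eq. (2): average each support embedding with its m nearest base-class means;
    for K-shot, the per-shot centroids are averaged into one centroid per class.
    support: {label: (K, D) array of embeddings}; base_means: (B, D) array."""
    centroids = {}
    for label, shots in support.items():
        per_shot = []
        for z in shots:
            dist = np.linalg.norm(base_means - z, axis=1)
            nearest = base_means[np.argsort(dist)[:m]]        # the set S_j
            per_shot.append((z + nearest.sum(axis=0)) / (m + 1))
        centroids[label] = np.mean(per_shot, axis=0)
    return centroids

def nc_predict(query, centroids):
    """Eq. (3): nearest-centroid rule on L2-normalized vectors (cosine similarity)."""
    labels = list(centroids)
    M = np.stack([centroids[l] for l in labels])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)          # normalize centroids
    q = query / np.linalg.norm(query)                         # normalize query
    return labels[int(np.argmax(M @ q))]
```

Because both sides are normalized, maximizing the inner product is equivalent to minimizing Euclidean distance, which is why Equation 3 can be evaluated as a single matrix-vector product.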
Our inference can be considered similar to the Linear Classifier method, except that our centers (µ_j) are estimated by simple averaging rather than learnt using SGD. Note that, similar to (Yang et al., 2021), as a preprocessing step we transform the embeddings by taking the square root of each dimension so that their distribution gets closer to Gaussian form. This can be seen as "Tukey's Ladder of Powers transformation" (Tukey et al., 1977) with λ = 0.5. At inference time, the Nearest Centroid (NC) method requires less memory and computation than a Nearest Neighbor classifier, since we only have to store and compare against one prototype per class rather than the entire training set. | This paper 1) proposes a simple transfer learning approach to train adversarially robust few-shot classifiers for few-shot image classification, and 2) presents a method for the novel classification task based on calibrating the centroid of each few-shot category towards the base classes. Results show good performance on three benchmarks, i.e., the Mini-ImageNet, CIFAR-FS, and CUB datasets. My main concern with this work is that the improved performance may result mainly from the pretraining stage (base training) using the base dataset $X_b$. Normally, pretraining can improve the performance on small tasks. | SP:b27d0bb34999cb1d197f68cca7e4c01c433ed6e8 |
NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search | 1 INTRODUCTION . Neural architecture search (NAS) aims to automate the design of deep neural networks, ensuring performance on par with hand-crafted architectures while reducing the human labor devoted to tedious architecture tuning (Elsken et al., 2019). With the growing number of application areas of ML, and thus of use-cases for automating it, NAS has experienced an intense amount of study, with significant progress in search space design (Zoph et al., 2018; Liu et al., 2019b; Cai et al., 2019), search efficiency (Pham et al., 2018), and search algorithms (Xu et al., 2020; Li et al., 2021a; White et al., 2021). While the use of NAS techniques may be especially impactful in under-explored or under-resourced domains where less expert help is available, the field has largely been dominated by methods designed for and evaluated on benchmarks in computer vision (Liu et al., 2019b; Ying et al., 2019; Dong & Yang, 2020). There have been a few recent efforts to diversify these benchmarks to settings such as vision-based transfer learning (Duan et al., 2021) and speech and language processing (Mehrotra et al., 2021; Klyuchnikov et al., 2020); however, evaluating NAS methods on such well-studied tasks using traditional CNN search spaces does not give a good indication of their utility on more far-afield applications, which have often necessitated the design of custom neural operations (Cohen et al., 2018; Li et al., 2021b). We make progress towards studying NAS on more diverse tasks by introducing a suite of benchmark datasets drawn from various data domains that we collectively call NAS-Perf-360. This benchmark consists of an organized setup of ten suitable datasets that (a) can be evaluated in a unified way using existing NAS approaches and (b) represent diverse application domains, dataset sizes, problem dimensionalities, and learning objectives. (In this work, a NAS method refers to a combined search space and algorithm pair, not the algorithm alone.) We also include standard image classification evaluations as a baseline point of comparison, as many new methods continue to be designed for such tasks. Following our construction of this suite of tasks, we demonstrate both the usefulness of and need for NAS-Perf-360 by using it to investigate whether modern NAS is useful to practitioners faced with diverse tasks, i.e., whether its success in computer vision is indicative of strong performance on the much broader set of problems to which NAS can conceivably be applied. To address this question, we start with the fact that a common first approach when applying deep learning to a new domain is to try an off-the-shelf CNN; in our case, this will be the Wide ResNet (WRN) (Zagoruyko & Komodakis, 2016). We then consider the scenario of two practitioners: one with only the resources to train one WRN using the default settings, and another with enough to tune the WRN using an off-the-shelf hyperparameter optimizer (Li et al., 2018). Both are faced with a decision: should they use these fixed-architecture baselines or try out the best NAS has to offer? Overall, our empirical investigation suggests the following: 1. The less-constrained practitioner will usually do better using NAS (a 20% relative improvement over WRN on the median task) but risks catastrophic results on specific non-vision applications. 2. The robustness of NAS in the constrained case may be worse: the practitioner is likely better off simply using the off-the-shelf WRN, as its median rank across NAS-Perf-360's ten tasks is the same as that of our candidate NAS method. These results are obtained via experiments using two well-studied modern search spaces: the cell-based DARTS space (Liu et al., 2019b) and the efficiency-focused DenseNAS space (Fang et al., 2020). Each space is paired with a search method known to find well-performing architectures on ImageNet, specifically the state-of-the-art GAEA PC-DARTS (Li et al., 2021a) for the former and the original weight-sharing algorithm used by DenseNAS for the latter. Note that our assessment includes a more holistic comparison using performance profiles (cf. Figure 1) to reinforce these ranking-based comparisons, which are useful but can miss a lack of robustness or exaggerate minor differences between methods. The initial experimental results enabled by NAS-Perf-360 suggest that the robustness of modern search methods to diverse tasks beyond image classification is mixed at best. At the same time, our set of tasks can serve as a crucial tool for investigating and rectifying this issue, and it is thus important for moving towards a truly automated pipeline containing NAS. In particular, NAS-Perf-360 will facilitate such progress via a diverse array of tasks for validating NAS methods that are not only challenging, real-life problem settings but also computationally accessible to academic researchers with limited budgets. We demonstrate this potential via further studies on the comparative importance of search spaces vs. search algorithms and on the usefulness of more-customized approaches, specifically by studying a random search (RS) baseline over the DenseNAS space as well as two domain-specific methods: Auto-DeepLab (Auto-DL) (Liu et al., 2019a) for dense prediction and AMBER (Zhang et al., 2021b) for prediction from 1D data. Among other insights, these experiments provide evidence that a more robust NAS may require better search spaces with a wider variety of operations. The associated datasets and experiment code will remain open-source and accessible at a temporarily anonymized repository: https://anonymous.4open.science/r/NAS-Bench-360-26D1. Reproducibility of all experiments is ensured by open-sourcing all relevant code for the end-to-end procedure, with Docker containers and random seeds provided. 2 RELATED WORK . Benchmarks have been critical to the development of NAS in recent years. This includes standard evaluation datasets and protocols, of which the most popular are the CIFAR-10 and ImageNet routines used by DARTS (Liu et al., 2019b). Another important type of benchmark is tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019), NAS-Bench-201 (Dong & Yang, 2020), and NAS-Bench-1Shot1 (Zela et al., 2020); these benchmarks exhaustively evaluate all architectures in their search spaces, which is made computationally feasible by defining simple searched cells. Consequently, they are less expressive than the DARTS cell (Liu et al., 2019b), often regarded as the most powerful search space in the cell-based regime. Notably, our benchmark is not a tabular benchmark, i.e., we do not evaluate every architecture from a fixed search space; instead, the focus is on the organization of a suite of tasks for assessing both NAS algorithms and search spaces, which would necessarily be restricted by fixing a search space for a tabular benchmark. Pre-computing on an expansive search space such as DARTS, with 10^18 possible architectures, is computationally intractable. Architectures found on smaller search spaces are most likely suboptimal: the vanilla WRN outperforms all networks in the NAS-Bench-201 search space on CIFAR-100.
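The performance profiles used for the holistic comparison mentioned earlier (cf. Figure 1) can be computed from a simple method-by-task error table. The sketch below is a generic construction in the style of standard performance profiles (fraction of tasks on which a method is within a factor τ of the best method), not the paper's exact evaluation code; the error table and τ grid are hypothetical.

```python
import numpy as np

def performance_profile(errors, taus):
    """Per-method robustness curves from an (n_methods, n_tasks) error table.
    rho[s, i] = fraction of tasks where method s's error is within a factor
    taus[i] of the best method's error on that task (errors must be > 0)."""
    errors = np.asarray(errors, dtype=float)
    ratios = errors / errors.min(axis=0)      # r >= 1, with 1 = best on the task
    taus = np.asarray(taus, dtype=float)
    # Broadcast: (methods, tasks, 1) vs (1, 1, taus) -> average over tasks
    return (ratios[:, :, None] <= taus[None, None, :]).mean(axis=1)
```

A method whose curve rises quickly to 1 is both accurate (high value at τ = 1) and robust (no task on which it is far from the best), which is exactly the failure mode a median-rank comparison can hide.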
While NAS methods and benchmarks have generally been focused on computer vision, recent work such as AutoML-Zero (Real et al., 2020) and XD-operations (Roberts et al., 2021) has started moving towards a more generically applicable set of tools for AutoML. However, even the recent benchmarks that do go beyond the most popular vision datasets have continued to focus on well-studied tasks, including vision-based transfer learning (Duan et al., 2021), speech recognition (Mehrotra et al., 2021), and natural language processing (Klyuchnikov et al., 2020). We aim to go beyond such areas to evaluate the potential of NAS to automate the application of ML in truly under-explored domains. One analogous work to ours in the field of meta-learning is the Meta-Dataset benchmark of few-shot tasks (Triantafillou et al., 2020), which similarly aimed to establish a wide-ranging set of evaluations for that field. Reflecting our inclusion of diverse tasks, we title our benchmark NAS-Perf-360, evoking a 360-degree camera that covers all possible directions. 3 NAS-PERF-360 : A SUITE OF DIVERSE AND PRACTICAL TASKS . In this section, we introduce the NAS setting targeted by our benchmark, our motivation for organizing a new set of diverse tasks as a NAS evaluation suite, and our task-selection methodology. We report evaluations of specific algorithms on this new benchmark in the next section. 3.1 NEURAL ARCHITECTURE SEARCH : PROBLEM FORMULATION AND BASELINES . For completeness and clarity, we first formally discuss the architecture search problem itself, starting with the extended hypothesis class formulation of Li et al. (2021a). Here the goal is to use a dataset of points x ∈ X to find parameters w ∈ W and a ∈ A of a parameterized function f_{w,a} : X → R_{≥0} that minimize the expectation E_{x∼D} f_{w,a}(x) for some test distribution D over X; here X is the input space, W is the space of model weights, and A is the set of architectures. For generality, we do not require the training points to be drawn from D, to allow for domain adaptation, as is the case for one of our tasks, and we do not require the loss to be supervised. Note also that the goal here does not depend on computational or memory efficiency, which we do not focus on in our evaluations; our only restriction is that the entire pipeline can be run on an NVIDIA V100 GPU. Notably, this formulation makes no distinction between the model weights w and the architecture a, treating both as parameters of a larger model. Indeed, the goal of NAS may be seen as similar to model design, except that now we include the design of an (often discrete) architecture space A such that it is easy for a search algorithm to find an architecture a ∈ A and model weights w ∈ W whose test loss E_D f_{w,a} is low. This can be done in a one-shot manner (simultaneously optimizing a and w) or using the standard approach of first finding an architecture a and then keeping it fixed while training the model weights w using a pre-specified algorithm such as stochastic gradient descent (SGD). This formulation also includes non-NAS methods by allowing the architecture search space to be a singleton. When the sole architecture is a standard and common network such as WRN (Zagoruyko & Komodakis, 2016), this yields a natural baseline in which an algorithm searches for training hyperparameters rather than architectures. On the other hand, any architecture space A allows for non-one-shot methods that search for architectures, such as random search, which repeatedly samples architectures and evaluates them via partial training. We adopt this simple method as our random baseline. For our empirical investigation, we compare the performance of state-of-the-art NAS approaches against that of these two baselines. 3.2 TASK SELECTION : MOTIVATION AND METHODOLOGY . Curating a diverse, practical set of tasks for the study of NAS is our primary motivation behind this work.
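As a concrete illustration of the random baseline defined in Section 3.1, the sample-and-evaluate loop reduces to a few lines. Everything in this sketch is a hypothetical stand-in: the toy search space and the proxy `evaluate` function are illustrative only, whereas in real usage `evaluate` would partially train the sampled architecture on the target task.

```python
import random

def random_search(sample_arch, evaluate, n_trials, seed=0):
    """Non-one-shot random search: sample architectures independently,
    score each with a (cheap, partial-training) evaluation, keep the best."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_arch(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Toy illustration: choose one of two ops for each of three cell positions;
# the "evaluation" is a stand-in proxy score, not real (partial) training.
OPS = ["conv", "skip"]
best_arch, best_score = random_search(
    sample_arch=lambda rng: tuple(rng.choice(OPS) for _ in range(3)),
    evaluate=lambda arch: sum(op == "conv" for op in arch),
    n_trials=200,
)
```

Because each trial is independent, this baseline parallelizes trivially, which is part of why it remains a strong reference point for weight-sharing methods.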
We observe that past NAS benchmarks have focused on creating larger search spaces and more sophisticated search methods for neural networks. However, the utility of these search spaces and methods is evaluated only on canonical computer vision datasets. Whether these new methods can improve upon simple baselines on a broader range of problems remains an open question. This calls for the introduction of new datasets, lest NAS research overfit to the biases of CIFAR-10 and ImageNet. By identifying these possible biases, future directions in NAS research can be better primed to suit the needs of practitioners and to increase the deployment of NAS. Summarized in Table 1, NAS-Perf-360 consists of problems that are conducive to processing by convolutional neural networks, which encompasses a trove of applications involving spatial and temporal data, spanning single and multiple dimensions. Most current NAS methods are not implemented to search for other types of architectures that process tabular or graph data; we have therefore set this scope for our investigation. During the selection of tasks, diversity is our primary consideration. We define the following axes of diversity to govern our task-filtering process: the first is problem dimensionality, including both 2D (matrix inputs) and 1D (sequence inputs); the second is dataset size, for which our selection spans the scale from 1,000 to 1,000,000 examples; the third is problem type, divisible into tasks requiring a single prediction (point prediction) and multiple predictions (dense prediction); fourth and finally, diversity is achieved by selecting tasks with various learning objectives from applications of deep learning where introducing NAS could improve upon the performance of hand-crafted neural networks.
In lieu of providing raw data, we perform data pre-processing locally and store the processed data in a public Amazon Web Services S3 bucket, with download links available on our website. Our data treatment largely follows the procedures defined by the researchers who provided the data. This enhances reproducibility by ensuring the uniformity of input data across different pipelines. Specific pre-processing and augmentation steps are described below. CIFAR-100: Standard image classification. As a starting point of comparison to existing benchmarks, we include the CIFAR-100 task (Krizhevsky, 2009), which contains RGB images from natural settings to be classified into 100 fine-grained categories. CIFAR-100 is preferred over CIFAR-10 because it is more challenging and suffered less from over-fitting in previous research. Spherical: Classifying spherically projected CIFAR-100 images. To test NAS methods applied to natural-image-like data, we consider the task of classifying spherical projections of the CIFAR-100 images, which we call Spherical. Beyond its scientific interest, spherical image data arises in various applications, such as omnidirectional vision in robotics and weather modeling in meteorology, as sensors usually produce distorted image signals in real-life settings. To create Spherical CIFAR, we project the planar signals of the CIFAR images onto the northern hemisphere and add a random rotation to produce spherical signals for each channel, following the procedure specified in Cohen et al. (2018). The resulting images are 60×60 pixels with RGB channels. NinaPro: Classifying electromyography signals. NinaPro moves away from the image domain to classify hand gestures indicated by electromyography (EMG) signals. For this, we use a subset of the NinaPro DB5 dataset (Atzori et al., 2012), in which two Myo armbands collect EMG signals from 10 test individuals holding 18 different hand gestures to be classified. These armbands capture data from muscle movement, collected by electrodes in the form of wave signals. Each wave signal is then sampled using a wavelength and frequency prescribed in Côté-Allard et al. (2019) to produce 2D signals. FSD50K: Labeling sound events. FSD50K (Fonseca et al., 2020) is derived from the larger Freesound dataset (Fonseca et al., 2017), with 51,000 clips totaling more than 100 hours of sound. These clips are manually labeled across 200 classes from the AudioSet ontology (Gemmeke et al., 2017), and each clip can receive multiple labels. Unlike TIMIT (Garofolo, 1993), FSD50K does not focus exclusively on sounds of spoken language but includes sound events from physical sources and production mechanisms. Mean average precision (mAP) is used to evaluate classification results. Darcy Flow: Solving partial differential equations (PDEs). Our first regression task, Darcy Flow, focuses on learning a map from the initial conditions of a PDE to its solution at a later timestep. This application aims to replace traditional solvers with learned neural networks, which can output a result in a single forward pass. The input is a 2D grid specifying the initial conditions of a fluid, and the output is a 2D grid specifying the fluid state at a later time, with the ground truth computed by a traditional solver. We report the mean squared error (MSE, or ℓ2 loss). PSICOV: Protein distance prediction. PSICOV studies the use of neural networks in the protein folding prediction pipeline, which has recently received significant attention due to the success of methods like AlphaFold (Jumper et al., 2020). While the dataset and method they use are too large-scale for our purposes, we consider a smaller set of protein structures to tackle the specific problem of inter-residue distance prediction outlined in Adhikari (2020b). Large-scale 2D features are extracted from protein sequences, resulting in input feature maps with a massive number of channels. Correspondingly, the labels are pairwise-distance matrices with the same spatial dimensions. The evaluation metric is mean absolute error (MAE, or ℓ1) computed on distances below 8 Å, referred to as MAE8. Cosmic: Identifying cosmic ray contamination. Images from space-based facilities are prone to corruption by charged particles collectively referred to as "cosmic rays." Cosmic rays should be identified and masked before the images are used for further analysis (Zhang & Bloom, 2020). The Cosmic task uses imaging data of local resolved galaxies collected by the Hubble Space Telescope. Inputs and outputs are same-size 2D matrices, with the output predicting whether each pixel in the input is an artifact of cosmic rays. We report the false-negative rate (FNR) of the identification results. ECG: Detecting heart disease. Electrocardiograms (ECGs) are frequently used in medicine to diagnose sinus rhythm irregularities. The ECG task is based on the 2017 PhysioNet Challenge (Clifford et al., 2017), with 9- to 60-second ECG recordings sampled at 300 Hz and labeled with four classes: normal, disease, other, or noisy rhythms. Recordings are processed using a fixed sliding window of 1,000 ms with a stride of 500 ms. We report the F1-score according to the challenge's guidelines. Satellite: Satellite image time series analysis. Satellite image time series (SITS) are becoming more widely available in earth monitoring applications. Our dataset comes from Formosat-2 satellite images acquired over Toulouse, France (Petitjean et al., 2012). Available in multiple channels, SITS track land cover changes over several years, with each pixel in an image representing a geographical region. The goal of the Satellite task is to generate land cover maps for geo-surveying. Specifically, the series of values of a pixel in a given color channel constitutes a time series to be classified into 46 land cover types. DeepSEA: Predicting functional effects from genetic sequences. Predicting the chromatin effects of genetic sequence alterations is a significant challenge in understanding genetic diseases. DeepSEA (Zhou & Troyanskaya, 2015) provides a compendium of genomic profiles from the Encyclopedia of DNA Elements (ENCODE) project (Consortium et al., 2004) to train a predictive model estimating the behavior of chromatin proteins, divided into 919 categories. Due to computation constraints, we subsample 36 of these categories as per Zhang et al. (2021a) and further take 5% of the training data. We report the area under the receiver operating characteristic curve (AUROC), following previous work. | This paper proposes a new benchmark for NAS methods, called NAS-Bench-360. Unlike existing benchmark datasets for NAS, the proposed benchmark contains ten diverse tasks drawn from various fields of research. The paper tests several standard NAS methods on the proposed benchmark and confirms that there are large gaps among the ten tasks and NAS methods. | SP:5893f3dba5c2341a1e9dad1002d7ac226417c026 |
NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search | 1 INTRODUCTION . Neural architecture search ( NAS ) aims to automate the design of deep neural networks , ensuring performance on par with hand-crafted architectures while reducing human labor devoted to tedious architecture tuning ( Elsken et al. , 2019 ) . With the growing number of application areas of ML , and thus of use-cases for automating it , NAS has experienced an intense amount of study , with significant progress in search space design ( Zoph et al. , 2018 ; Liu et al. , 2019b ; Cai et al. , 2019 ) , search efficiency ( Pham et al. , 2018 ) , and search algorithms ( Xu et al. , 2020 ; Li et al. , 2021a ; White et al. , 2021 ) . While the use of NAS techniques may be especially impactful in under-explored or under-resourced domains where less expert help is available , the field has largely been dominated by methods designed for and evaluated on benchmarks in computer vision ( Liu et al. , 2019b ; Ying et al. , 2019 ; Dong & Yang , 2020 ) . There have been a few recent efforts to diversify these benchmarks to settings such as vision-based transfer learning ( Duan et al. , 2021 ) and speech and language processing Mehrotra et al . ( 2021 ) ; Klyuchnikov et al . ( 2020 ) ; however , evaluating NAS methods on such wellstudied tasks using traditional CNN search spaces does not give a good indication of their utility on more far-afield applications , which have often necessitated the design of custom neural operations ( Cohen et al. , 2018 ; Li et al. , 2021b ) . We make progress towards studying NAS on more diverse tasks by introducing a suite of benchmark datasets drawn from various data domains that we collectively call NAS-Perf-360 . 
This benchmark consists of an organized setup of ten suitable datasets that ( a ) can be evaluated in a unified way using existing NAS approaches and ( b ) represent diverse application domains , dataset sizes , problem 1In this work , NAS method refers to a combined search space and algorithm pair , not the algorithm alone . dimensionalities , and learning objectives . We also include standard image classification evaluations as a baseline point of comparison , as many new methods continue to be designed for such tasks . Following our construction of this suite of tasks , we demonstrate both the usefulness of and need for NAS-Perf-360 by using it to investigate whether modern NAS is useful to practitioners faced with diverse tasks , i.e. , whether its success in computer vision is indicative of strong performance on the much broader set of problems to which NAS can conceivably be applied . To address this question , we start with the fact that a common first approach when applying deep learning to a new domain is to try an off-the-shelf CNN ; in our case , this will be the Wide ResNet ( WRN ) ( Zagoruyko & Komodakis , 2016 ) . We then consider the scenario of two practitioners : one with only the resources to train one WRN using the default settings and another that has enough to tune WRN using an off-the-shelf hyperparameter optimizer ( Li et al. , 2018 ) . Both are faced with a decision : should they use these fixed-architecture baselines or try out the best NAS has to offer ? Overall , our empirical investigation suggests the following : 1 . The less-constrained practitioner might usually do better using NAS—20 % relative improvement over WRN on the median task—but risks catastrophic results on specific non-vision applications . 2 . 
The robustness of NAS in the constrained case may be worse : the practitioner is likely better-off simply using the simple off-the-shelf WRN , as its median rank across NAS-Perf-360 ’ s ten tasks is the same as that of our candidate NAS method . These results are obtained via experiments using two well-studied modern search spaces : the cellbased DARTS space ( Liu et al. , 2019b ) and the efficiency-focused DenseNAS space ( Fang et al. , 2020 ) . Each space is paired with a search method known to find well-performing architectures on ImageNet , specifically the state-of-the-art GAEA PC-DARTS ( Li et al. , 2021a ) for the former and the original weight-sharing algorithm used by DenseNAS for the latter . Note that our assessment includes a more holistic comparison using performance profiles ( c.f . Figure 1 ) to reinforce these ranking-based comparisons , which are useful but can miss a lack of robustness or exaggerate minor differences between methods . The initial experimental results enabled by NAS-Perf-360 suggest that the robustness of modern search methods to diverse tasks beyond image classification is mixed at best . At the same time , our set of tasks can serve as a crucial tool for investigating and rectifying this issue , and it is thus important for moving towards a truly automated pipeline containing NAS . In particular , NAS-Perf-360 will facilitate such progress via a diverse array of tasks for validating NAS methods that are not only challenging , real-life problem settings but also computationally accessible for academic researchers with limited budgets . We demonstrate this potential via further studies on the comparative importance of search spaces v. search algorithms and the usefulness of more-customized approaches , specifically by studying a random search ( RS ) baseline over the DenseNAS space as well as two domain-specific methods : Auto-DeepLab ( Auto-DL ) ( Liu et al. , 2019a ) for dense prediction and AMBER ( Zhang et al. 
, 2021b) for prediction from 1D data. Among other insights, these experiments provide evidence that a more robust NAS may require better search spaces with a wider variety of operations. The associated datasets and experiment code will remain open-source and accessible at a temporarily anonymized repository: https://anonymous.4open.science/r/NAS-Bench-360-26D1. Reproducibility of all experiments is ensured by open-sourcing all relevant code for the end-to-end procedure, with Docker containers and random seeds provided. 2 RELATED WORK. Benchmarks have been critical to the development of NAS in recent years. This includes standard evaluation datasets and protocols, of which the most popular are the CIFAR-10 and ImageNet routines used by DARTS (Liu et al., 2019b). Another important type of benchmark has been tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019), NAS-Bench-201 (Dong & Yang, 2020), and NAS-Bench-1Shot1 (Zela et al., 2020); these benchmarks exhaustively evaluate all architectures in their search spaces, which is made computationally feasible by defining simple searched cells. Consequently, they are less expressive than the DARTS cell (Liu et al., 2019b), often regarded as the most powerful search space in the cell-based regime. Notably, our benchmark is not a tabular benchmark, i.e., we do not evaluate every architecture from a fixed search space; instead, the focus is on the organization of a suite of tasks for assessing both NAS algorithms and search spaces, which would necessarily be restricted by fixing a search space for a tabular benchmark. Pre-computing on an expansive search space such as DARTS, with 10^18 possible architectures, is computationally intractable. Architectures found on smaller search spaces are most likely suboptimal: the vanilla WRN outperforms all networks in the NAS-Bench-201 search space on CIFAR-100.
While NAS methods and benchmarks have generally been focused on computer vision, recent work such as AutoML-Zero (Real et al., 2020) and XD-operations (Roberts et al., 2021) has started moving towards a more generically applicable set of tools for AutoML. However, even more recent benchmarks that do go beyond the most popular vision datasets have continued to focus on well-studied tasks, including vision-based transfer learning (Duan et al., 2021), speech recognition (Mehrotra et al., 2021), and natural language processing (Klyuchnikov et al., 2020). We aim to go beyond such areas to evaluate the potential of NAS to automate the application of ML in truly under-explored domains. One analogous work to ours in the field of meta-learning is the Meta-Dataset benchmark of few-shot tasks (Triantafillou et al., 2020), which similarly aimed to establish a wide-ranging set of evaluations for that field. Given our inclusion of diverse tasks, we title our benchmark NAS-Perf-360 to evoke the idea of a 360-degree camera that covers all possible directions. 3 NAS-PERF-360: A SUITE OF DIVERSE AND PRACTICAL TASKS. In this section, we introduce the NAS setting targeted by our benchmark, our motivation for organizing a new set of diverse tasks as a NAS evaluation suite, and our task-selection methodology. We report evaluations of specific algorithms on this new benchmark in the next section. 3.1 NEURAL ARCHITECTURE SEARCH: PROBLEM FORMULATION AND BASELINES. For completeness and clarity, we first formally discuss the architecture search problem itself, starting with the extended hypothesis class formulation of Li et al. (2021a). Here the goal is to use a dataset of points x ∈ X to find parameters w ∈ W and a ∈ A of a parameterized function f_{w,a} : X → R≥0 that minimize the expectation E_{x∼D} f_{w,a}(x) for some test distribution D over X; here X is the input space, W is the space of model weights, and A is the set of architectures.
For generality, we do not require the training points to be drawn from D, to allow for domain adaptation (as is the case for one of our tasks), and we do not require the loss to be supervised. Note also that the goal here does not depend on computational or memory efficiency, which we do not focus on in our evaluations; our only restriction is that the entire pipeline can be run on an NVIDIA V100 GPU. Notably, this formulation makes no distinction between the model weights w and architectures a, treating both as parameters of a larger model. Indeed, the goal of NAS may be seen as similar to model design, except that we now include the design of an (often discrete) architecture space A such that it is easy for a search algorithm to find an architecture a ∈ A and model weights w ∈ W whose test loss E_D f_{w,a} is low. This can be done in a one-shot manner, simultaneously optimizing a and w, or using the standard approach of first finding an architecture a and then keeping it fixed while training model weights w using a pre-specified algorithm such as stochastic gradient descent (SGD). This formulation also includes non-NAS methods by allowing the architecture search space to be a singleton. When the sole architecture is a standard and common network such as WRN (Zagoruyko & Komodakis, 2016), this yields a natural baseline with an algorithm searching for training hyperparameters, not architectures. On the other hand, any architecture space A allows for non-one-shot methods that search for architectures, such as random search, which repeatedly samples architectures and evaluates them via partial training. We adopt this simple method as our random baseline. For our empirical investigation, we compare the performance of state-of-the-art NAS approaches against that of these two baselines. 3.2 TASK SELECTION: MOTIVATION AND METHODOLOGY. Curating a diverse, practical set of tasks for the study of NAS is our primary motivation behind this work.
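As a brief aside, the random baseline described in Section 3.1 can be sketched in a few lines. Here `sample_architecture` and `evaluate` (partial training followed by validation scoring, lower is better) are hypothetical stand-ins, not functions from the benchmark codebase:

```python
import random

def random_search(sample_architecture, evaluate, n_trials=10, seed=0):
    """Non-one-shot NAS baseline: sample architectures i.i.d. from the
    search space A, score each with a cheap partial-training evaluation,
    and return the best one found (lower score = better)."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)  # e.g. validation loss after partial training
        if score < best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Because the search space is only accessed through `sample_architecture`, the same loop applies unchanged to DARTS-style cells, DenseNAS blocks, or a singleton space containing only WRN.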
We observe that past NAS benchmarks focused on creating larger search spaces and more sophisticated search methods for neural networks . However , the utility of these search spaces and methods are only evaluated on canonical computer vision datasets . On a broader range of problems , whether these new methods can improve upon simple baselines remains an open question . This calls for the introduction of new datasets lest NAS research overfits to the biases of CIFAR-10 and ImageNet . By identifying these possible biases , future directions in NAS research can be better primed to suit the needs of practitioners and to increase the deployment of NAS . Summarized in Table 1 , NAS-Perf-360 consists of problems that are conducive to processing by convolutional neural networks , which includes a trove of applications associated with spatial and temporal data , spanning single and multiple dimensions . Most current NAS methods are not implemented to search for other types of architectures to process tabular data and graph data . Therefore , we have set this scope for our investigation . During the selection of tasks , diversity is our primary consideration . We define the following axes of diversity to govern our task-filtering process : the first is problem dimensionality , including both 2D with matrix inputs and 1D with sequence inputs ; the second is dataset size , for which our selection spans the scale from 1,000 to 1,000,000 ; the third is problem type , divisible into tasks requiring a singular prediction ( point prediction ) and multiple predictions ( dense prediction ) ; fourth and finally , diversity is achieved through selecting tasks from various learning objectives from applications of deep learning , where introducing NAS could improve upon the performance of handcrafted neural networks . 
In lieu of providing raw data, we perform data pre-processing locally and store the processed data in a public Amazon Web Services S3 bucket, with download links available on our website. Our data treatment largely follows the procedures defined by the researchers who provided the data. This enhances reproducibility by ensuring the uniformity of input data across different pipelines. Specific pre-processing and augmentation steps are described below. CIFAR-100: Standard image classification. As a starting point of comparison to existing benchmarks, we include the CIFAR-100 task (Krizhevsky, 2009), which contains RGB images from natural settings to be classified into 100 fine-grained categories. CIFAR-100 is preferred over CIFAR-10 because it is more challenging and has suffered less from over-fitting in previous research. Spherical: Classifying spherically projected CIFAR-100 images. To test NAS methods applied to natural-image-like data, we consider the task of classifying spherical projections of the CIFAR-100 images, which we call Spherical. Beyond scientific interest, spherical image data also arises in various applications, such as omnidirectional vision in robotics and weather modeling in meteorology, since sensors often produce distorted image signals in real-life settings. To create Spherical CIFAR, we project the planar signals of the CIFAR images onto the northern hemisphere and add a random rotation to produce spherical signals for each channel, following the procedure specified in Cohen et al. (2018). The resulting images are 60×60 pixels with RGB channels. NinaPro: Classifying electromyography signals. NinaPro moves away from the image domain to classify hand gestures indicated by electromyography (EMG) signals. For this, we use a subset of the NinaPro DB5 dataset (Atzori et al., 2012) in which two Myo armbands collect EMG signals from 10 test individuals holding 18 different hand gestures to be classified.
These armbands leverage data from muscle movement , which is collected using electrodes in the form of wave signals . Each wave signal is then sampled using a wavelength and frequency prescribed in Côté-Allard et al . ( 2019 ) to produce 2D signals . FSD50K : Labeling sound events FSD50K ( Fonseca et al. , 2020 ) is derived from the larger Freesound dataset ( Fonseca et al. , 2017 ) of Youtube videos with 51,000 clips totaling more than 100 hours of sound . These clips are manually labeled and equally distributed in 200 classes from the AudioSet ontology ( Gemmeke et al. , 2017 ) . Each clip could receive multiple labels . Unlike TIMIT ( Garofolo , 1993 ) , FSD50K does not focus exclusively on sounds of spoken language but includes sound events from physical sources and production mechanisms . The mean average precision ( mAP ) is used to evaluate classification results . Darcy Flow : Solving partial differential equations ( PDEs ) Our first regression task , Darcy Flow , focuses on learning a map from the initial conditions of a PDE to the solution at a later timestep . This application aims to replace traditional solvers with learned neural networks , which can output a result in a single forward pass . The input is a 2d grid specifying the initial conditions of a fluid , and the output is a 2d grid specifying the fluid state at a later time , with the ground truth being the result computed by a traditional solver . We report the mean square error ( MSE or ` 2 ) . PSICOV : Protein distance prediction PSICOV studies the use of neural networks in the protein folding prediction pipeline , which has recently received significant attention to the success of methods like AlphaFold ( Jumper et al. , 2020 ) . While the dataset and method they use are too large-scale for our purposes , we consider a smaller set of protein structures to tackle the specific problem of inter-residual distance predictions outlined in Adhikari ( 2020b ) . 
2D large-scale features are extracted from protein sequences , resulting in input feature maps with a massive number of channels . Correspondingly , the labels are pairwise-distance matrices with the same spatial dimension . The evaluation metric is mean absolute error ( MAE or ` 1 ) computed on distances below 8 Å , referred to as MAE8 . Cosmic : Identifying cosmic ray contamination Images from space-based facilities are prone to corruption by charged particles collectively referred to as `` cosmic rays . '' Cosmic rays on images should be identified and masked before the images are used for further analysis ( Zhang & Bloom , 2020 ) . The Cosmic task uses imaging data of local resolved galaxies collected from the Hubble Space Telescope . Inputs and outputs are same-size 2D matrices , with the output predicting whether each pixel in the input is an artifact of cosmic rays . We report the false-negative rate ( FNR ) of identification results . ECG : Detecting heart disease Electrocardiograms are frequently used in medicine to diagnose sinus rhythm irregularities . The ECG task is based on the 2017 PhysioNet Challenge ( Clifford et al. , 2017 ) , with 9 to 60-second ECG recordings sampled at 300 Hz and labeled using four classes : normal , disease , other , or noisy rhythms . Recordings are processed using a fixed sliding window of 1,000 ms and stride of 500 ms. We report the F1-score according to the challenge ’ s guidelines . Satellite : Satellite image time series analysis Satellite image time series ( SITS ) are becoming more widely available in earth monitoring applications . Our dataset comes from Formosat-2 satellite images acquired over Toulouse , France ( Petitjean et al. , 2012 ) . Available in multiple channels , SITS track the land cover changes over several years as each pixel in the image represents a geographical region . The goal of the Satellite task is to generate land cover maps for geo-surveying . 
Specifically , a series of pixels in a given color channel constitute a time series to be classified into 46 land cover types . DeepSEA : Predicting functional effects from genetic sequences Predicting chromatin effects of genetic sequence alterations is a significant challenge in the field to understand genetic diseases . DeepSEA ( Zhou & Troyanskaya , 2015 ) , provides a compendium of genomic profiles from the Encyclopedia of DNA Elements ( ENCODE ) project ( Consortium et al. , 2004 ) to train a predictive model estimating the behavior of chromatin proteins , divided into 919 categories . Due to computation constraints , we subsample 36 of these categories as per Zhang et al . ( 2021a ) and further take 5 % of the training data for prediction . We report the area under the receiver operating characteristic ( AUROC ) following the previous work . | This paper proposes a benchmark to test the performance of NAS algorithms and search spaces on a diverse set of tasks. The benchmark consists of 10 different datasets across different modalities. On these tasks, a variety of NAS algorithm as well as search spaces are allowed a fixed amount of compute resources (in terms of GPU hours) to explore and train, and reach the final performance. The search space are not limited to architecture topologies, but also hyper-parameters. On this new benchmark, authors find that existing SoTA NAS methods may not generalize to different tasks, especially with low compute budgets. | SP:5893f3dba5c2341a1e9dad1002d7ac226417c026 |
Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning? | 1 INTRODUCTION Machine learning and , specifically , deep learning models show state-of-the-art results in various fields such as computer vision , natural language processing , and signal processing ( e.g. , Carion et al . ( 2020 ) ; Devlin et al . ( 2019 ) ; Balevi & Andrews ( 2021 ) ) . Training these models requires data , which in some problems , e.g. , healthcare , finance , can include private information that should not be made public . Unfortunately , it has been shown ( Fredrikson et al . ( 2015 ) ; Carlini et al . ( 2021 ) ) that private information from the training data can sometimes be extracted from the trained model . One common approach to handle this issue is Differential Privacy ( DP ) . Differential Privacy is a framework that ensures that the distribution of training output would be the same , even if we switch one of the training participants , thus ensuring privacy . As privacy is usually obtained by adding random noise , it is natural to investigate whether Bayesian inference , which uses a distribution over models , can give private predictions . Previous works have shown that sampling from the posterior is differentially private under certain mild conditions ( Wang et al . ( 2015 ) ; Foulds et al . ( 2016 ) ; Dimitrakakis et al . ( 2017 ) ) . The main disadvantage of this method is that sampling from the posterior is generally hard . The posterior usually does not have a closedform solution , and iterative methods such as Markov Chain Monte Carlo ( MCMC ) are needed . While theoretical bounds on the convergence of MCMC methods for non-convex problems exist ( Ma et al. , 2019 ) , they usually require an infeasible number of steps to guarantee convergence in practice . Stochastic Gradient Langevin Dynamics ( SGLD ) is a popular MCMC algorithm used to approximately sample from an unnormalized distribution ( Welling & Teh , 2011 ) . 
The privacy guarantees of this specific sampling algorithm are interesting as it not only returns a sample from the posterior , which can be private , but the process itself of stochastic gradient descent with Gaussian noise mirrors the common Gaussian mechanism in DP . Previous work Wang et al . ( 2015 ) gives two disjoint privacy analyses : The first is for approximate sampling from the Bayesian posterior , which is only relevant when the SGLD almost converges . The second uses the standard DP analysis utilizing the Gaussian mechanism and the Advanced Composition theorem ( Dwork & Roth , 2014 ) , which only applies for a limited number of steps and is not connected to Bayesian sampling . From these two lines of research , differential privacy bounds for SGLD are provided for its initial steps or when close to convergence . Neither of these cases is suitable for deep learning and many other problems , as one would limit the model ’ s accuracy , and the other is unattainable in a reasonable time . Consequently , the privacy properties of SGLD in the interim region , between these two private sections , are of high importance . One could speculate that since the initial steps of the algorithm are private , and it converges to the posterior that is also private , then sampling at the interim region will be private as well . If so , SGLD could be considered a solution for training differentially private deep neural networks . Unfortunately , as we will show , this is not the case . Our Contributions : This work provides a counter-example , based on a Bayesian linear regression problem , showing that approximate sampling using SGLD might result in an unbounded loss of privacy in the interim regime . Moreover , this loss of privacy can even occur under strong conditions - when sampling from the posterior is as private as desired , and the problem is complex - even stronger conditions than what we can assume for most Deep Neural Network problems . 
This implies that special care should be taken when using SGLD for private predictions, especially for problems where it is infeasible to guarantee convergence. 2 RELATED WORK. Several previous works investigate the connection between Bayesian inference and differential privacy (Wang et al. (2015); Foulds et al. (2016); Zhang et al. (2016); Dimitrakakis et al. (2017); Geumlek et al. (2017); Ganesh & Talwar (2020)). None of these papers provides guarantees on the differential privacy of SGLD in the interim regime. The closest work to ours is Wang et al. (2015), which specifically investigates stochastic MCMC algorithms such as SGLD. As mentioned, its analysis only covers the initial phase and the regime where approximate convergence is achieved. Since many of the privacy bounds require sampling from the posterior, using SGLD for this purpose requires non-asymptotic convergence bounds. Dalalyan (2014) provided non-asymptotic bounds on the error of approximating a target smooth and log-concave distribution by Langevin Monte Carlo. Cheng & Bartlett (2018) studied non-asymptotic bounds on the error of approximating a target density p∗ where log p∗ is smooth and strongly convex. For the non-convex setting, Raginsky et al. (2017) showed non-asymptotic bounds on the 2-Wasserstein distance between SGLD and the invariant distribution solving an Itô stochastic differential equation. However, to provide (ε, δ) differential privacy, an algorithm should produce output distributions on neighbouring databases that are O(δ)-close; Total Variation distance (for details about Total Variation see Tsybakov (2008)) is a more suitable metric for working with differential privacy. Ma et al. (2019) examined a target distribution p∗ which is strongly log-concave outside of a region of radius R, and where −ln p∗ is L-Lipschitz.
They provided a bound on the number of steps needed for the Total Variation distance between the distribution at the last step and p∗ to be smaller than ε. This bound is proportional to O(e^{32LR²} d²), where d is the model dimension. This result suggests that even a little non-convexity will render running until close to convergence impractical. A conclusion from this work is that basing the differential privacy of SGLD on proximity to the posterior is impractical for non-convex settings. 3 BACKGROUND. 3.1 DIFFERENTIAL PRIVACY. Differential Privacy (Dwork et al. (2006b;a); Dwork (2011); Dwork & Roth (2014)) is a definition and a framework that enables performing data analysis on a database while reducing the risk of exposing personal data contained in the database. An algorithm is differentially private if a single record change in its database does not change its output distribution by much. Definition 1. Approximate Differential Privacy: A randomized algorithm M : D → Range(M) is (ε, δ)-differentially private if ∀S ⊆ Range(M) and {∀D, D̂ ∈ D : ‖D − D̂‖ ≤ 1} eq. 1 holds. D, D̂ are called neighboring databases, and while the metric can change per application, Hamming distance is typically used. Pr[M(D) ∈ S] ≤ exp(ε) Pr[M(D̂) ∈ S] + δ (1). Mironov (2017) suggested Rényi Differential Privacy (Definition 3), a relaxation of differential privacy, and a way to translate RDP guarantees into approximate differential privacy guarantees. Definition 2. Rényi Divergence (Rényi, 1961): For two probability distributions Z and Q over R, the Rényi divergence of order ν > 1 is D_ν(Z‖Q) ≜ (1/(ν−1)) log E_{x∼Q}[(Z(x)/Q(x))^ν]. Definition 3. (ν, ε)-RDP: A randomized mechanism f : D → R is said to have ε-Rényi differential privacy of order ν, or (ν, ε)-RDP in short, if for any adjacent databases D, D̂ ∈ D eq. 2 holds, where D_ν is the Rényi divergence of order ν.
D_ν(f(D)‖f(D̂)) ≤ ε (2). Lemma 3.1 (Mironov (2017), Proposition 3). If f is (ν, ε)-RDP, it also satisfies (ε + log(1/δ)/(ν−1), δ) Differential Privacy for any 0 < δ < 1. 3.2 STOCHASTIC GRADIENT LANGEVIN DYNAMICS. Stochastic Gradient Langevin Dynamics (SGLD) is an MCMC method that is commonly used for Bayesian inference (Welling & Teh, 2011). The update step of SGLD is shown in eq. 3, where θ_j is the parameter vector at step j, η_j is the step size at step j, p(θ_j) is the prior distribution, p(y_i|θ_j) is the likelihood of sample y_i given the model parameterized by θ_j, b is the batch size, and n is the database size. SGLD can be seen as Stochastic Gradient Descent with Gaussian noise, where the variance of the noise is calibrated to the step size. θ_{j+1} = θ_j + (η_j/2)[∇_{θ_j} ln p(θ_j) + (n/b) ∑_{i=1}^{b} ∇_{θ_j} ln p(y_{i_j}|θ_j)] + √η_j · ξ_j, with i_j ∼ uniform{1, ..., n} and ξ_j ∼ N(0, 1) (3). A common practice in deep learning is to use cyclic Stochastic Gradient Descent. This flavour of SGD first randomly shuffles the database samples and then cyclically uses the samples in this order. For optimization, there is empirical evidence that it works as well as or better than SGD with reshuffling, and it has been conjectured that it converges at a faster rate (Yun et al. (2021)). Cyclic-SGLD is the analog of cyclic-SGD for SGLD, where the difference is the use of the SGLD step instead of the SGD step. For simplicity, we consider cyclic-SGLD in this work. 4 METHOD. Our goal is to prove that even when the posterior is as private as desired, sampling using SGLD for T steps can be as non-private as desired. This requires analysing the distribution of SGLD after T steps, which is hard in the general case. However, we show that we can get the desired behaviour by looking at a simple Bayesian linear regression problem where every distribution is a Gaussian with a closed-form expression. Our result is summarized in Theorem 1. Theorem 1.
∀ δ < 0.5, ε, ε′, there exists a domain and a Bayesian inference problem where a single sample from the posterior distribution is (ε, δ) differentially private, but there is a number of steps T for which performing approximate sampling by running SGLD for T steps is not (ε′, δ) differentially private. As ε′ can be as big as desired and ε as small as desired, a corollary of Theorem 1 is that we can always find a problem for which the posterior is (ε, δ) differentially private, but there will be a step at which SGLD results in an unbounded loss of privacy. Therefore, SGLD alone cannot provide any privacy guarantees in the interim regime, even if the posterior is private. To prove our theorem, we consider a Bayesian regression problem for a linear model with Gaussian noise, as defined in eq. 4, on the domain D defined in eq. 5: y = θx + ξ; ξ ∼ N(0, β^{−1}); θ ∼ N(0, α^{−1}); log p(y|x, θ) = −β(y − θx)²/2 − (1/2) log(2π/β) (4). D(n, γ1, x_h, x_l, c) = {(x_i, y_i) : |y_i/x_i − c| ≤ n^{γ1}; x_i, y_i, c, γ1 ∈ R > 0; x_l ≤ x_i ≤ x_h}_{i=1}^{n}. We assume that x_h²β > 3 and that γ1 < 1/2 (5). n, c, x_l, x_h, γ1 are parameters of the problem (c, x_l, x_h, and γ1 are used, together with the database size n, to bound the database samples to a chosen region). For every ε, ε′ and δ, we will show there exist parameters n, c, x_l, x_h, γ1 that have the privacy properties required to prove Theorem 1. The restrictions on the dataset simplify the proof but are a bit unnatural, as they assume we approximately know c, the parameter we are trying to estimate. Later, in subsection 4.3, we show that they can be replaced with a Propose-Test-Release phase. We will refer to the problem of Bayesian linear regression for the model described in eq. 4 on domain D as the Bayesian linear regression problem on domain D.
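For concreteness, the SGLD update of eq. 3 can be sketched as follows for a scalar parameter; the gradient callables and the `rng` object passed in are illustrative assumptions, not part of the paper's artifacts:

```python
import math

def sgld_step(theta, grad_log_prior, grad_log_lik, batch, n, eta, rng):
    """One SGLD step (eq. 3): a half-step along the minibatch estimate of
    the log-posterior gradient, plus Gaussian noise with variance eta."""
    b = len(batch)
    # n/b rescales the minibatch log-likelihood gradient to the full dataset.
    grad = grad_log_prior(theta) + (n / b) * sum(grad_log_lik(y, theta) for y in batch)
    return theta + 0.5 * eta * grad + math.sqrt(eta) * rng.gauss(0.0, 1.0)
```

Cyclic-SGLD, as used here, simply feeds `batch` from a fixed random permutation of the data rather than resampling `i_j` uniformly at each step.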
This problem has a closed-form solution for both the posterior distribution and the distribution at each SGLD step, thus enabling us to get tight bounds on the differential privacy in each case. The heart of our proof is showing that for n big enough, sampling from the posterior is (ε, δ) differentially private with ε ∼ O(c²/n³), while for SGLD there exists a step at which releasing a sample is not (ε′, δ) differentially private for ε′ = Ω(c²/n²). Therefore, by considering instances of the problem where c ∼ O(n^{3/2}√ε) and n is big enough, sampling from the posterior will be (ε, δ) differentially private, while there will be an SGLD step at which releasing a sample is not (ε′, δ) differentially private for ε′ = Ω(nε). We note that the bounds contain a dependency on δ, but since we use a fixed and equal δ for both the posterior and SGLD privacy analyses, we omit it from the bounds for simplicity. Figure 1 depicts an indicative value of the distance between the distributions of samples from two SGLD processes running on adjacent databases for the Bayesian linear regression problem. As we will later show, the SGLD distribution on one of these databases is a Gaussian, while on the other it is a mixture of n Gaussians. We plot (1/n) ∑_i (µ_t − µ_t^i)²/(σ_t^i)², where µ_t is the mean of the single Gaussian at timestep t, µ_t^i is the mean of the i'th Gaussian component at timestep t, and (σ_t^i)² its variance. We can see that even though the distributions are close at the initial iterations and at convergence (which implies differential privacy in those regions), in the interim region they are significantly apart, which implies a lack of differential privacy. 4.1 POSTERIOR SAMPLING PRIVACY. To prove Theorem 1, we first need to show that ∀δ < 0.5, ε, there exists a domain and a Bayesian inference problem where a single sample from the posterior distribution is (ε, δ) differentially private.
In order to do so, this section considers the differential privacy guarantees provided by one sample from the posterior for the Bayesian linear regression problem on domain D. We begin by using a well-known result for the closed-form solution of the posterior distribution of a Bayesian linear regression problem (see Bishop (2006) for further details). Using the parameters of our problem, we get Lemma 4.1. Lemma 4.1. The posterior distribution for the Bayesian linear regression problem on domain D is p(θ|D) = N(θ; µ, σ²); µ = β ∑_{i=1}^n x_i y_i / (α + β ∑_{i=1}^n x_i²); σ² = 1/(α + β ∑_{i=1}^n x_i²) (6). Using the posterior distribution, one can calculate the Rényi divergence between every two neighbouring databases, thus getting an expression for the Rényi differential privacy, as shown in Lemma 4.2. Lemma 4.2. For a Bayesian linear regression problem on domain D, such that n > max{1 + 10(x_h²/x_l²)(ν/β), 1 + ν x_h²/x_l²}, one sample from the posterior is (ν, ε₁)-Rényi differentially private, with ε₁ ∼ O(c²/n³) for c >> n^{1+γ1}, where ε₁ = x_h²/(2(n−1)x_l²) + (1/(2(ν−1))) · νx_h²/((n−1)x_l² − νx_h²) + 2νβ · x_h⁴/((9/10)n^{1−2γ1}x_l²) + 2νβ · (x_h²β)(x_h²α + x_h⁴β)/((9/10)(x_l²β)²) · (c + n^{γ1})/n^{2−γ1} + (ν/2) · (x_h²α + x_h⁴β)²/((9/10)x_l⁶β) · (c + n^{γ1})²/n³. We can show that for c >> n^{1+γ1}, each of the terms is bounded by O(c²/n³). The first and second terms are bounded by O(1/n). The third term is bounded by O(n^{2γ1−1}); noticing that n^{2γ1−1} = n^{2(1+γ1)}/n³ < c²/n³, we get that the third term is bounded by O(c²/n³). As c >> n^{γ1}, the fourth term is bounded by O(cn^{γ1}/n²), and since cn^{γ1}/n² = cn^{1+γ1}/n³ < c²/n³, the term is bounded by O(c²/n³). Lastly, since c >> n^{1+γ1}, the last term is bounded by O(c²/n³). For the full proof, see subsection A.1 in the appendix. Translating the Rényi differential privacy guarantees into approximate differential privacy terms can be done according to Lemma 3.1, which gives Lemma 4.3.
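As a sanity check, the closed-form posterior of Lemma 4.1 is straightforward to compute; this sketch assumes the scalar model of eq. 4:

```python
def blr_posterior(xs, ys, alpha, beta):
    """Posterior N(mu, sigma^2) over theta for y = theta*x + noise, with
    prior theta ~ N(0, 1/alpha) and noise ~ N(0, 1/beta) (Lemma 4.1, eq. 6)."""
    precision = alpha + beta * sum(x * x for x in xs)
    mu = beta * sum(x * y for x, y in zip(xs, ys)) / precision
    return mu, 1.0 / precision
```

Note that the posterior variance 1/(α + β∑x_i²) shrinks as n grows; this concentration is what later makes the Gaussian mixture components in Section 4.2 so peaked that they can be told apart.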
Lemma 4.3. Under the conditions of Lemma 4.2, one sample from the posterior is $(\epsilon_1 + \frac{\ln(1/\delta)}{\nu-1},\ \delta)$-differentially private. By choosing $\nu$ such that $\frac{\ln(1/\delta)}{\nu-1} < \frac{\epsilon}{2}$, and then choosing $n$ big enough such that $\epsilon_1 < \frac{\epsilon}{2}$, we get that the posterior is $(\epsilon,\delta)$-differentially private.

4.2 STOCHASTIC GRADIENT LANGEVIN DYNAMICS PRIVACY

To complete the proof of Theorem 1, we need to show that even if one sample from the posterior is $(\epsilon,\delta)$-differentially private for a Bayesian linear regression problem on domain $\mathcal{D}$, this provides no guarantee on the privacy of SGLD for that problem. To this end, this section first considers the loss in privacy when using SGLD for the Bayesian linear regression problem on domain $\mathcal{D}$, and then, together with the results of section 4.1, proves Theorem 1. In order to show that SGLD is not differentially private after the initial steps and before convergence, it is enough to find two neighbouring databases for which the loss in privacy in those steps is as large as desired. We define neighbouring databases $D_1$ and $D_2$ in eq. 7 and consider the Bayesian linear regression problem on $D_1$ and $D_2$. We set the learning rate to $\eta = \frac{2}{(\alpha + n x_h^2\beta)^2}$.

$$D_1 = \{x_i, y_i : x_i = x_h,\ y_i = c\cdot x_h\}_{i=1}^{n}\qquad D_2 = \{x_i, y_i : x_i = x_h,\ y_i = c\cdot x_h\}_{i=1}^{n-1}\cup\left\{\tfrac{x_h}{2},\ c\cdot\tfrac{x_h}{2}\right\} \tag{7}$$

To tightly analyze the differential privacy loss when approximately sampling via SGLD at each step, we need a closed-form solution for the distribution at each step. For database $D_1$, the solution is a Normal distribution. For database $D_2$, different shufflings of the samples produce different Gaussian distributions, therefore giving a mixture of Gaussians. We look at cyclic-SGLD with a batch size of 1 and mark by $\theta_j$, $\hat\theta_j$ the samples on the $j$'th SGLD step when using databases $D_1$ and $D_2$ accordingly. Since the samples of $D_1$ are all equal, the update step of the cyclic-SGLD is the same for every step (with different noise generated at each step).
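As an informal illustration (not part of the proof), the mean of the cyclic-SGLD iterate for $D_1$ and $D_2$ can be simulated directly: the injected Gaussian noise is zero-mean, so the mean follows a deterministic recursion. The concrete values of $n$, $c$, $\alpha$, $\beta$ below are arbitrary choices for the example.

```python
def sgld_mean(data, alpha, beta, eta, epochs):
    """Mean of the cyclic-SGLD iterate (batch size 1). The Gaussian noise is
    zero-mean, so mu evolves as
    mu <- mu * (1 - eta/2 * (alpha + n*x^2*beta)) + eta/2 * n * x * y * beta."""
    mu, n = 0.0, len(data)            # prior mean is 0
    for _ in range(epochs):
        for x, y in data:             # one cyclic pass over the database
            mu = (mu * (1 - 0.5 * eta * (alpha + n * x * x * beta))
                  + 0.5 * eta * n * x * y * beta)
    return mu

n, c, xh, alpha, beta = 50, 10.0, 1.0, 1.0, 1.0
eta = 2.0 / (alpha + n * xh ** 2 * beta) ** 2
D1 = [(xh, c * xh)] * n
D2 = [(xh, c * xh)] * (n - 1) + [(xh / 2, c * xh / 2)]  # the r-order with r = n
# The D2 chain "drags behind": its mean stays strictly below that of D1.
gap = sgld_mean(D1, alpha, beta, eta, 200) - sgld_mean(D2, alpha, beta, eta, 200)
```

With these values the gap is small but strictly positive, matching the intuition that the two chains separate once the Gaussians become sufficiently peaked.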
This update step consists only of multiplication by a scalar, addition of a scalar, and addition of Gaussian noise; therefore, together with a conjugate prior, it results in a Normal distribution for $\theta_j$: $\mathcal{N}(\theta_j;\mu_j,\sigma_j^2)$. For $D_2$, only one sample differs from the rest. We mark by $r$ the index at which this sample is used in the cyclic-SGLD and call this order the $r$-order. Note that there are only $n$ different values for $r$ and, as such, effectively only $n$ different sample orders. Since every order of samples is chosen with the same probability, $r$ is distributed uniformly in $\{1,\dots,n\}$. We mark by $\hat\theta_j^r$ the sample on the $j$'th SGLD step when using the $r$-order. Since, for a given order, $\hat\theta_j^r$ is formed by a series of multiplications by a scalar, additions of a scalar, and additions of Gaussian noise, and since the prior is also Gaussian, $\hat\theta_j^r$ is distributed normally, $\mathcal{N}(\hat\theta_j^r;\hat\mu_j^r,(\hat\sigma_j^r)^2)$. As $r$ is distributed uniformly, the distribution mass of $\hat\theta_j$ is distributed evenly among all $\hat\theta_j^r$, resulting in a mixture of Gaussians. Intuitively, each Gaussian component, $\hat\theta_j$ as well as $\theta_j$, will move towards the similar posterior Gaussian. However, at each epoch, $\hat\theta_j$ will drag a bit behind, because in one batch one gradient is smaller. While this gap can be quite small, for large $n$ the Gaussians are very peaked, with very small standard deviations; thus, they separate enough that we can easily distinguish between the two distributions. According to the approximate differential privacy definition (Definition 1), it is enough to find one set $S$ such that $p(\theta_j \in S) > e^{\epsilon}\, p(\hat\theta_j \in S) + \delta$ to show that releasing $\theta_j$ is not $(\epsilon,\delta)$-private. We choose $S = \{s \mid s > \mu_j\}$ at some step $j$ that we will define later on. It is clear from symmetry that $p(\theta_j > \mu_j) = 1/2$, and by using the Chernoff bound we can bound $p(\hat\theta_j > \mu_j)$.

Lemma 4.4. $p(\hat\theta_j > \mu_j) \le \frac{1}{n}\sum_{r=1}^{n}\exp\left(-\frac{(\mu_j - \hat\mu_j^r)^2}{2(\hat\sigma_j^r)^2}\right)$.
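The Gaussian tail inequality behind Lemma 4.4 is the standard Chernoff bound $p(X > t) \le \exp(-(t-\mu)^2/(2\sigma^2))$ for $X \sim \mathcal{N}(\mu,\sigma^2)$ and $t > \mu$. A quick numerical check, at a few arbitrary test points:

```python
import math
from statistics import NormalDist

mu, sigma = 0.0, 1.0
for t in (0.5, 1.0, 2.0, 4.0):
    tail = 1.0 - NormalDist(mu, sigma).cdf(t)            # exact P(X > t)
    bound = math.exp(-(t - mu) ** 2 / (2 * sigma ** 2))  # Chernoff bound
    assert tail <= bound
```

The bound is loose for small deviations but decays at the right Gaussian rate, which is all that the separation argument above needs.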
Using Lemma 4.4, we can upper bound the mass of $\hat\theta_j$ in $S$, and thus lower bound the difference between the distribution masses of $\theta_j$ and $\hat\theta_j$ in $S$ for some step $j$. To use Lemma 4.4, we first need to lower bound $\frac{(\mu_j - \hat\mu_j^r)^2}{(\hat\sigma_j^r)^2}$ for a certain step. This is done in Lemma 4.5.

Lemma 4.5. $\exists k \in \mathbb{Z}_{>0}$ such that $\frac{(\mu_{(k+1)n} - \hat\mu_{(k+1)n}^r)^2}{(\hat\sigma_{(k+1)n}^r)^2} = \Omega(\frac{c^2}{n^2})$, for $n$ big enough.

To prove Lemma 4.5, we first find closed-form solutions for the distributions of $\hat\theta_{(k+1)n}^r$ and $\theta_{(k+1)n}$ (Lemma A.1). Using the closed-form solutions, we find a lower bound on $(\mu_{(k+1)n} - \hat\mu_{(k+1)n}^r)^2$ as a function of $k$, which applies for all $k$ (Lemma A.2). To upper bound $(\hat\sigma_{(k+1)n}^r)^2$, we find an approximation to the epoch in which the effects of the data and the prior on the variance are approximately equal, marked $\dot k$. We choose the step at which we will consider the privacy loss as $(\lceil\dot k\rceil + 1)n$ and show that $(\hat\sigma_{(\lceil\dot k\rceil+1)n}^r)^2$ is upper bounded at this step (Lemma A.4). Using the lower bound on the difference in means and the upper bound on the variance, Lemma 4.5 is proved. By using the lower bound from Lemma 4.5 in Lemma 4.4, we get Lemma 4.6.

Lemma 4.6. For the Bayesian linear regression problem over database $D_1$, with $n$ big enough, $\exists T \in \mathbb{Z}_{>0}$ such that approximate sampling by running SGLD for $T$ steps will not be $(\epsilon,\delta)$-private for $\epsilon < \Omega(\frac{c^2}{n^2})$, $\delta < 0.5$.

From Lemma 4.3, we see that sampling from the posterior is $(\epsilon,\delta)$-differentially private for $\epsilon = O(\frac{c^2}{n^3})$. From Lemma 4.6, we see that for SGLD there exists a step at which releasing a sample will not be $(\epsilon',\delta)$-differentially private for $\epsilon' = \Omega(\frac{c^2}{n^2})$. Therefore, considering instances of the problem where $c = O(n^{\frac{3}{2}}\sqrt{\epsilon})$, sampling from the posterior will be $(\epsilon,\delta)$-differentially private; however, there will be an SGLD step at which releasing a sample will not be $(\epsilon',\delta)$-differentially private for $\epsilon' = \Omega(n\epsilon)$.
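The separation between the two rates can be made concrete with a back-of-the-envelope computation (constants are dropped; the exponents are those appearing above):

```python
import math

eps = 0.5
rows = []
for n in (10, 100, 1000):
    c = n ** 1.5 * math.sqrt(eps)      # choose c = n^(3/2) * sqrt(eps)
    posterior_eps = c ** 2 / n ** 3    # O(eps): stays flat in n
    sgld_eps = c ** 2 / n ** 2         # Omega(n * eps): grows linearly in n
    rows.append((n, posterior_eps, sgld_eps))
```

For this choice of $c$, the posterior's privacy loss stays at $\epsilon$ for every $n$, while the interim SGLD loss is $n\epsilon$ and can be made arbitrarily large.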
Since we can choose $n$ to be as big as desired, we can make the lower bound on $\epsilon'$ as big as we desire. This completes the proof of Theorem 1.

4.3 PROPOSE TEST SAMPLE

Our analysis of the posterior and SGLD is done on a restricted domain $\mathcal{D}$ as defined in eq. 5. These restrictions on the dataset simplify the proof but are a bit unnatural, as they assume we approximately know $c$, the parameter we are trying to estimate. This section shows that these restrictions can be replaced with a Propose-Test-Release phase (Dwork & Lei, 2009) and common practices in deep learning. When training a statistical model, it is common to first preprocess the data by forcing it into a bounded region and removing outliers. After the data is cleaned, the training process is performed. This is especially important in DP, as outliers can significantly increase the algorithm's sensitivity to a single data point and thus hamper privacy. Informally, Algorithm 1 starts by clipping the input to the accepted range. It then estimates a weighted average of the ratio $\frac{y_i}{x_i}$ (line 12) and throws away outliers that deviate too much from it. The actual implementation of this notion is a bit more involved because of the requirement to do so privately. Once the database is cleaned, Algorithm 1 privately verifies that the number of samples is big enough, so that the sensitivity of $p(\theta|W)$ to a single change in the database is small, thereby making sampling from $p(\theta|W)$ $(\epsilon,\delta)$-differentially private. This method is an instance of Propose-Test-Release: we first propose a bound on the sensitivity, then test whether the database satisfies this bound, and finally release the result if so. We define $n_{\min}$ in eq. 26 in the appendix to be the minimum size of $W$ for which the algorithm will sample from $p(\theta|W)$ with high probability. We will show later on that this limit ensures that sampling from $p(\theta|W)$ is $(\epsilon,\delta)$-differentially private.
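The following is a minimal, non-rigorous sketch of the clean-then-test-then-sample pipeline in the spirit of Algorithm 1. The Laplace scale for the noisy ratio estimate and the parameter values (`rho1`, `rho2`, `n_min`) are illustrative simplifications, not the calibrated quantities from the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def propose_test_sample(x, y, eps, delta, xl, xh, alpha, beta,
                        rho1=1.2, rho2=0.25, n_min=50):
    # Clean-up phase: clip the inputs into the accepted range.
    x = np.clip(np.asarray(x, dtype=float), xl, xh)
    y = np.maximum(np.asarray(y, dtype=float), 0.0)
    # Test phase: noisy, pessimistic counts of the database size.
    n1 = len(x) - math.log(1 / (2 * delta)) / eps + rng.laplace(0, 1 / eps)
    keep = y / x <= n1 ** rho1                      # drop gross outliers
    x, y = x[keep], y[keep]
    n2 = len(x) - math.log(1 / (2 * delta)) / eps + rng.laplace(0, 1 / eps)
    if n2 <= 1:
        return None
    m = np.sum(x * y) / np.sum(x * x)               # weighted ratio estimate
    m_noisy = m + rng.laplace(0, 1 / (eps * n2))    # simplified Laplace scale
    keep = np.abs(y / x - m_noisy) <= n2 ** rho2    # drop remaining outliers
    x, y = x[keep], y[keep]
    nw = len(x) - math.log(1 / (2 * delta)) / eps + rng.laplace(0, 1 / eps)
    if nw < n_min:
        return None
    # Release phase: one sample from the posterior p(theta | W) (Lemma 4.1).
    prec = alpha + beta * np.sum(x * x)
    return rng.normal(beta * np.sum(x * y) / prec, 1 / math.sqrt(prec))

# A well-behaved database on the line y = 3x passes the test, and the released
# posterior sample lands near the true slope 3.
xs = rng.uniform(0.5, 1.5, size=2000)
sample = propose_test_sample(xs, 3 * xs, eps=1.0, delta=1e-4,
                             xl=0.5, xh=1.5, alpha=1.0, beta=2.0)
```

Subtracting $\frac{1}{\epsilon}\ln\frac{1}{2\delta}$ before adding Laplace noise makes each noisy count an underestimate with probability at least $1-\delta$, which is what lets the test fail safe (return null) rather than leak.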
We define $p(\theta|W)$ to be the posterior for the Bayesian linear regression problem over database $W$. From Lemma 4.1, it follows that $p(\theta|W)$ has the form

$$p(\theta|W) = \mathcal{N}(\theta;\mu,\sigma^2);\qquad \mu = \frac{\beta\sum_{(x_i,y_i)\in W} x_i y_i}{\alpha + \beta\sum_{(x_i,y_i)\in W} x_i^2};\qquad \sigma^2 = \frac{1}{\alpha + \beta\sum_{(x_i,y_i)\in W} x_i^2}.$$

Claim 4.1. Algorithm 1 is $(5\epsilon, 2\delta)$-differentially private.

By Claim C.9, steps 6–13 are $(3\epsilon,\delta)$-differentially private. By Corollary C.3, steps 14–19 are $(2\epsilon,\delta)$-differentially private for given $\breve m$ and $n_2$. Therefore, by the sequential composition theorem, the composition is $(5\epsilon, 2\delta)$-differentially private. The claim is proved by noticing that if steps 6–19 are private with respect to the updated database (after step 5), then they are also private for the original database.

Claim 4.2. When replacing line 19 with sampling via SGLD with step size $\eta = \frac{1}{(\alpha + n_1 x_h^2\beta)^2}$, then $\exists T(n_1): \mathbb{Z}_{>0}\to\mathbb{Z}_{>0}$ such that the updated algorithm is not $(\epsilon,\delta)$-differentially private for any $\epsilon \in \mathbb{R}_{>0}$, $\delta < \frac{1}{6}$, if run for $T(n_1)$ steps.

Algorithm 1 Propose Test Sample
Input: $D = \{x_i, y_i\}_{i=1}^{n_1}$
Parameters: $\epsilon$, $\delta < 0.5$, $x_l > 0$, $x_h > x_l$, $\alpha > 0$, $\beta \ge \frac{3}{x_h^2}$, $\rho_1 \in (1, \frac{3}{2})$, $\rho_2 \in (0, \frac{1}{2})$, $\gamma_1 \in (\rho_2, \frac{1}{2})$
1: for $i = 1, 2, \dots, n_1$ do
2:   $x_i \leftarrow \max\{x_i, x_l\}$
3:   $x_i \leftarrow \min\{x_i, x_h\}$
4:   $y_i \leftarrow \max\{y_i, 0\}$
5: end for
6: $\breve n_1 \leftarrow n_1 - \frac{1}{\epsilon}\log\frac{1}{2\delta} + \mathrm{Lap}(\frac{1}{\epsilon})$
7: $V = \{x_i, y_i \mid \frac{y_i}{x_i} \le \breve n_1^{\rho_1}\}$
8: $n_2 \leftarrow |V| - \frac{1}{\epsilon}\log\frac{1}{2\delta} + \mathrm{Lap}(\frac{1}{\epsilon})$
9: if $n_2 \le 1$ then
10:   return null
11: end if
12: $m \leftarrow \frac{\sum_{(x_i,y_i)\in V} x_i y_i}{\sum_{(x_i,y_i)\in V} x_i^2}$
13: $\breve m \leftarrow m + \mathrm{Lap}\left(\frac{1}{\epsilon}\,\breve n_1^{\rho_1}\,\frac{2(n_2-1)x_h^2 x_l^2 + x_h^4}{n_2(n_2-1)x_l^4}\right)$
14: $W \leftarrow \{(x_i, y_i) : |\frac{y_i}{x_i} - \breve m| \le n_2^{\rho_2}\}$
15: $n_W \leftarrow |W| - \frac{1}{\epsilon}\log\frac{1}{2\delta} + \mathrm{Lap}(\frac{1}{\epsilon})$
16: if $n_W < n_{\min}$ then
17:   return null
18: end if
19: return sample from $p(\theta|W)$

Proof sketch (see appendix for the full proof). We first note that by choosing $1 + \rho_2 > \rho_1$, the sensitivity of $\breve m$ grows slower than the bound on the distance $|\frac{y_i}{x_i} - \breve m|$.
Therefore, for $n_1$ big enough, samples for which $\frac{y_i}{x_i} = m$ will be included in $W$ with high probability. Consequently, databases $D_3, D_4 \in \mathcal{D}$ will reach, with high probability, step 19, which by our previous analysis of SGLD (see subsection 4.2) will cause an unbounded loss in privacy.

$$\rho_1 > \rho_3 > 1\qquad D_3 = \{x_i, y_i : x_i = x_h,\ y_i = n_1^{\rho_3}\cdot x_h\}_{i=1}^{n_1}\qquad D_4 = \{x_i, y_i : x_i = x_h,\ y_i = n_1^{\rho_3}\cdot x_h\}_{i=1}^{n_1-1}\cup\left\{\tfrac{x_h}{2},\ n_1^{\rho_3}\cdot\tfrac{x_h}{2}\right\} \tag{8}$$

5 WASSERSTEIN DISTANCE AND DIFFERENTIAL PRIVACY

As we have shown in Theorem 1, one cannot give any DP guarantees for SGLD in the interim region. That means that to get private samples using SGLD, one must either limit the number of iterations, thus utilizing the Gaussian mechanism, or run until approximate convergence. It is therefore of interest to obtain non-asymptotic convergence bounds for SGLD, so that we can guarantee privacy after a known number of steps. Several works have previously given non-asymptotic bounds; however, some of them do so in the 2-Wasserstein metric (Raginsky et al. (2017); Cheng et al. (2018)). This is unfortunate, as the 2-Wasserstein metric is unsuitable for differential privacy: it is easy to create two distributions with a 2-Wasserstein distance as small as desired but with disjoint supports. It is, however, interesting to ask whether combining bounds on the 2-Wasserstein metric with Lipschitz continuous probability densities allows us to obtain privacy guarantees. The intuition why this should be enough is simple: if $p, q$ are two distributions with small 2-Wasserstein distance, then there is (under mild conditions) a mapping $f: \mathcal{X}\to\mathcal{X}$ such that the pushforward satisfies $f_\sharp p = q$ (i.e., for each measurable set $S$, $q(S) = p(f^{-1}(S))$) and $\mathbb{E}_p[\|x - f(x)\|^2] < \epsilon^2$. One can assume that $p(x) \approx q(f(x))$ and $q(x) \approx q(f(x))$, as $x \approx f(x)$ with high probability.
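The incompatibility between the 2-Wasserstein metric and differential privacy claimed above is easy to verify: two point masses at $0$ and at an arbitrarily small gap have $W_2$ equal to the gap, yet disjoint supports, so the set $S=\{0\}$ has mass 1 under one and 0 under the other, violating $(\epsilon,\delta)$-DP for every finite $\epsilon$ and $\delta < 1$. A discrete check:

```python
import numpy as np

gap = 1e-6
p = np.zeros(1000)           # samples from a point mass at 0
q = np.full(1000, gap)       # samples from a point mass at `gap`
# In one dimension the optimal coupling matches sorted samples, so the
# empirical 2-Wasserstein distance is the root-mean-square displacement.
w2 = np.sqrt(np.mean((np.sort(p) - np.sort(q)) ** 2))
mass_p_at_zero = np.mean(p == 0.0)   # 1.0
mass_q_at_zero = np.mean(q == 0.0)   # 0.0
```

Shrinking `gap` makes $W_2$ arbitrarily small while the distinguishing set $S=\{0\}$ keeps total mass under $p$ and none under $q$.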
Unfortunately, this intuition does not hold exactly, as the map $f$ can change the density considerably and still be a pushforward, by changing the volume. For example, if we assume $f$ is smooth and bijective, the standard change-of-variables formula gives $p(x) = q(f(x))\,|\det(J_f)|$, so $p(x) \approx q(f(x))$ only if $|\det(J_f)| \approx 1$. This issue becomes more severe as the dimensionality increases. For completeness, we share our results connecting $p(x)$ to $q(x)$ when $W_2(p,q)$ is small and both densities are $L$-Lipschitz continuous. The bound scales poorly with dimension, and as such is ill-suited for SGLD on deep networks, but it can still be useful for Bayesian sampling in low-dimensional problems. For a distribution $p$, we define the density $p_\lambda(x)$ as the average of $p(x)$ on a ball of radius $\lambda$ centered around $x$: $p_\lambda(x) = \frac{1}{\mathrm{vol}_d(\lambda)}\int_{B_\lambda^d(0)} p(x+z)\,dz$, where $B_\lambda^d(x)$ is the ball in $\mathbb{R}^d$ of radius $\lambda$ centered around $x$, and $\mathrm{vol}_d(\lambda)$ is its volume.

Claim 5.1. For an $L$-Lipschitz continuous density $p$ we have $|p(x) - p_\lambda(x)| \le \lambda L$.

Theorem 2. Let $P, Q$ be absolutely continuous w.r.t. the Lebesgue measure in $\mathbb{R}^d$, with finite second moment and $L$-Lipschitz continuous densities $p, q$. If $W_2(p,q) < \epsilon^2$, then we have

$$p_\lambda(x) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)}\, q_\lambda(x) + \left(\frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)} - 1\right) 2\lambda L + \frac{\epsilon}{\mathrm{vol}_d(\lambda-\epsilon)}. \tag{9}$$

The proof is an extension of the proof of Theorem 2.1 in Walker (2004) to dimensions larger than 1; the detailed proof is in the supplementary material. It is easy to see that, as $\frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)} = (1 + \frac{\epsilon}{\lambda-\epsilon})^d$, the bound's usefulness quickly diminishes with dimensionality, as it requires extremely small $\epsilon$ to give non-vacuous results. It can, however, still give useful results in low-dimensional problems.

6 CONCLUSION

As shown in this work, while SGLD has interesting connections to privacy and some guarantees, caution is required if one wishes to use it to obtain private predictions.
This is especially important for models such as deep neural networks, where it is infeasible to guarantee convergence.

REFERENCES

Eren Balevi and Jeffrey G. Andrews. Wideband channel estimation with a generative adversarial network. IEEE Transactions on Wireless Communications, 20(5):3049–3060, 2021. doi: 10.1109/TWC.2020.3047100.

Christopher Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer-Verlag New York, 2006.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision – ECCV 2020, pp. 213–229. Springer International Publishing, 2020. ISBN 978-3-030-58452-8.

Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, D. Song, Ú. Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In USENIX Security Symposium, 2021.

X. Cheng and P. Bartlett. Convergence of Langevin MCMC in KL-divergence. In ALT, 2018.

Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, and Michael I. Jordan. Underdamped Langevin MCMC: A non-asymptotic analysis. In Conference On Learning Theory (COLT), 2018.

Arnak Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79, December 2014. doi: 10.1111/rssb.12183.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp.
4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Christos Dimitrakakis, Blaine Nelson, Zuhe Zhang, Aikaterini Mitrokotsa, and Benjamin I. P. Rubinstein. Differential privacy for Bayesian inference through posterior sampling. Journal of Machine Learning Research, 18(11):1–39, 2017. URL http://jmlr.org/papers/v18/15-257.html.

Cynthia Dwork. A firm foundation for private data analysis. Commun. ACM, 54(1):86–95, January 2011. ISSN 0001-0782. doi: 10.1145/1866739.1866758. URL https://doi.org/10.1145/1866739.1866758.

Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, STOC '09, pp. 371–380, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605585062. doi: 10.1145/1536414.1536466. URL https://doi.org/10.1145/1536414.1536466.

Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3–4):211–407, August 2014. ISSN 1551-305X. doi: 10.1561/0400000042. URL https://doi.org/10.1561/0400000042.

Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology - EUROCRYPT 2006, 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques, volume 4004 of Lecture Notes in Computer Science, pp. 486–503. Springer, 2006a. doi: 10.1007/11761679_29. URL https://iacr.org/archive/eurocrypt2006/40040493/40040493.pdf.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Shai Halevi and Tal Rabin (eds.), Theory of Cryptography, pp. 265–284, Berlin, Heidelberg, 2006b.
Springer Berlin Heidelberg. ISBN 978-3-540-32732-5.

James R. Foulds, Joseph Geumlek, Max Welling, and Kamalika Chaudhuri. On the theory and practice of privacy-preserving Bayesian data analysis. In Uncertainty in Artificial Intelligence (UAI), 2016.

Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS '15, pp. 1322–1333, New York, NY, USA, 2015. Association for Computing Machinery. ISBN 9781450338325. doi: 10.1145/2810103.2813677. URL https://doi.org/10.1145/2810103.2813677.

Arun Ganesh and Kunal Talwar. Faster differentially private samplers via Rényi divergence analysis of discretized Langevin MCMC. ArXiv, abs/2010.14658, 2020.

Joseph Geumlek, Shuang Song, and Kamalika Chaudhuri. Renyi differential privacy mechanisms for posterior sampling. In Advances in Neural Information Processing Systems (NeurIPS), 2017.

Alison L. Gibbs and Francis Edward Su. On choosing and bounding probability metrics. International Statistical Review, 70(3):419–435, 2002.

M. Gil, F. Alajaji, and T. Linder. Rényi divergence measures for commonly used univariate continuous distributions. Information Sciences, 249:124–131, 2013. ISSN 0020-0255. doi: https://doi.org/10.1016/j.ins.2013.06.018. URL https://www.sciencedirect.com/science/article/pii/S0020025513004441.

Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, and Michael I. Jordan. Sampling can be faster than optimization. Proceedings of the National Academy of Sciences, 116(42):20881–20885, 2019. ISSN 0027-8424. doi: 10.1073/pnas.1820003116. URL https://www.pnas.org/content/116/42/20881.

Ilya Mironov. Renyi differential privacy. CoRR, abs/1702.07476, 2017. URL http://arxiv.org/abs/1702.07476.

Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky.
Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. In Satyen Kale and Ohad Shamir (eds.), Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research, pp. 1674–1703. PMLR, 07–10 Jul 2017. URL https://proceedings.mlr.press/v65/raginsky17a.html.

Alfréd Rényi. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, pp. 547–561. University of California Press, 1961.

Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387790519.

Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2019.

Stephen Walker. New approaches to Bayesian consistency. The Annals of Statistics, 32, 2004.

Yu-Xiang Wang, Stephen Fienberg, and Alex Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2493–2502, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/wangg15.html.

M. Welling and Y. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.

Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Open problem: Can single-shuffle SGD be better than reshuffling SGD and GD? In Conference on Learning Theory (COLT), 2021.

Zuhe Zhang, Benjamin I. P. Rubinstein, and Christos Dimitrakakis. On the differential privacy of Bayesian inference. In AAAI Conference on Artificial Intelligence, 2016.

A SGLD AND POSTERIOR PRIVACY

Proof of Theorem 1.
Define $\frac{1}{2} > \gamma_1 > 0$; $\frac{3}{2} > \gamma_2 > 1 + \gamma_1$; $x_l = \frac{x_h}{2}$, and

$$\nu_1 = \frac{2}{\epsilon}\ln\left(\frac{1}{\delta}\right) + 1$$

$$n_1 = \max\left\{\frac{1}{2\alpha x_h^2\beta} - \frac{1}{x_h^2\beta},\ \frac{\alpha}{x_h^2\beta},\ \frac{\alpha}{x_h^2\beta}\left(e^{\frac{2}{x_h^2\beta}} - 2\right) + \frac{1}{2x_h^2\beta}\right\}$$

$$n_2 = \max\left\{1 + \frac{x_h^2}{x_l^2}\frac{8}{\epsilon},\ \ 1 + \nu_1\frac{x_h^2}{x_l^2}\left(1 + \frac{8}{\epsilon(\nu_1-1)}\right),\ \ \left(\frac{16\nu_1\beta x_h^4}{\frac{9}{10}\epsilon x_l^2}\right)^{\frac{1}{1-2\gamma_1}},\ \ \left(\frac{16\nu_1\beta}{\epsilon}\,\frac{(x_h^2\beta)(x_h^2\alpha + x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\left(1 + \frac{1}{(1 + 10\frac{x_h^2}{x_l^2}\frac{\nu_1}{\epsilon\beta})^{\gamma_2-\gamma_1}}\right)\right)^{\frac{1}{2-\gamma_1-\gamma_2}},\ \ \left(\frac{4\nu_1}{\epsilon}\,\frac{(x_h^2\alpha + x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\left(1 + \frac{1}{(1 + 10\frac{x_h^2}{x_l^2}\frac{\nu_1}{\epsilon\beta})^{\gamma_2-\gamma_1}}\right)\right)^{\frac{2}{3-2\gamma_2}}\right\}$$

$$n_3 = \max\left\{1 + 10\frac{x_h^2}{x_l^2}\frac{\nu_1}{\epsilon\beta},\ 1 + \nu_1\frac{x_h^2}{x_l^2}\right\}$$

$$n_p = \max\left\{n_1,\ n_2,\ n_3,\ \left((\epsilon' - \ln(0.5-\delta))\,e^{\frac{2}{x_h^2\beta}}\left(\frac{32 x_h^2\beta}{3}\right)^2\frac{2v_1}{\alpha}\right)^{\frac{1}{2(\gamma_2-1)}}\right\}$$

$$v_1 = \max\left\{6,\ 1 + 2e^{\frac{1}{x_h^2\beta}}\right\}\qquad c_p = n_p^{\gamma_2}.$$

We consider the Bayesian linear regression problem over database $D_1$ (defined in eq. 7) with $n = n_p$ and $c = c_p$. Since $n_p > n_1$, the problem satisfies the constraints of Lemma A.6. Consequently, there exists a step at which one approximate sample from the posterior using SGLD is not $(\epsilon'',\delta)$-private for all $\epsilon''$ such that $\epsilon'' \le e^{-\frac{2}{x_h^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32 x_h^2\beta}\right)^2\left(\frac{c_p}{n_p}\right)^2 + \ln(0.5-\delta)$. From eq. 10, the choice of $n_p$ guarantees that $\epsilon' \le e^{-\frac{2}{x_h^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32 x_h^2\beta}\right)^2\left(\frac{c_p}{n_p}\right)^2 + \ln(0.5-\delta)$; therefore, approximate sampling from the posterior using SGLD is not $(\epsilon',\delta)$-differentially private. Since $n_p > n_2$ and $n_p > n_3$, the problem satisfies the constraints of Claim D.30; therefore, one sample from the posterior is $(\epsilon,\delta)$-differentially private.

$$e^{-\frac{2}{x_h^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32 x_h^2\beta}\right)^2\left(\frac{c_p}{n_p}\right)^2 + \ln(0.5-\delta) \ge \epsilon'$$
$$\left(\frac{c_p}{n_p}\right)^2 \ge (\epsilon' - \ln(0.5-\delta))\,e^{\frac{2}{x_h^2\beta}}\left(\frac{32 x_h^2\beta}{3}\right)^2\frac{2v_1}{\alpha}$$
$$n_p^{2(\gamma_2-1)} \ge (\epsilon' - \ln(0.5-\delta))\,e^{\frac{2}{x_h^2\beta}}\left(\frac{32 x_h^2\beta}{3}\right)^2\frac{2v_1}{\alpha}$$
$$n_p \ge \left((\epsilon' - \ln(0.5-\delta))\,e^{\frac{2}{x_h^2\beta}}\left(\frac{32 x_h^2\beta}{3}\right)^2\frac{2v_1}{\alpha}\right)^{\frac{1}{2(\gamma_2-1)}} \tag{10}$$

A.1 POSTERIOR SAMPLING PRIVACY

Proof of Lemma 4.1. Eq. 11 is a known result for the Bayesian inference problem for a linear model with Gaussian noise with known precision parameter ($\beta$) and a conjugate prior (Bishop (2006), eqs. 3.49–3.51, for details).
By choosing the basis function to be $\phi(x) = x$, working in one dimension, and choosing $m_0 = 0$, $S_0 = \alpha^{-1}$, we get the linear model defined in eq. 4 and the matching posterior described in Lemma 4.1.

$$p(w|t) = \mathcal{N}(w; m_N, S_N);\qquad m_N = S_N(S_0^{-1}m_0 + \beta\Phi^T t);\qquad S_N^{-1} = S_0^{-1} + \beta\Phi^T\Phi \tag{11}$$

Proof of Lemma 4.2. By Definition 3, for a single sample from the posterior to be $(\nu,\epsilon')$-RDP, the Rényi divergence of order $\nu$ between any two adjacent databases needs to be bounded. We consider two adjacent databases $D, \hat D \in \mathcal{D}$ and, w.l.o.g., define that they differ in the last sample (where it is also allowed to be $(0,0)$ for one of them, which saves us the need to also consider a neighbouring database of size smaller by 1). To ease the already complex and detailed calculations, we use the definitions in eq. 12.

$$D = \{x_i, y_i\}_{i=1}^{n-1}\cup\{x_n, y_n\},\quad \hat D = \{x_i, y_i\}_{i=1}^{n-1}\cup\{\hat x_n, \hat y_n\},\qquad z = \sum_{i=1}^{n-1} x_i^2,\quad q = \sum_{i=1}^{n-1} y_i x_i \tag{12}$$

According to Lemma 4.1 and the definitions in eq. 12, the posterior distributions are

$$p(\theta|D) = \mathcal{N}(\theta;\mu,\sigma^2);\qquad \mu = \frac{\beta(q + x_n y_n)}{\alpha + (z + x_n^2)\beta};\qquad \sigma^2 = \frac{1}{\alpha + (z + x_n^2)\beta}$$
$$p(\theta|\hat D) = \mathcal{N}(\theta;\hat\mu,\hat\sigma^2);\qquad \hat\mu = \frac{\beta(q + \hat x_n\hat y_n)}{\alpha + (z + \hat x_n^2)\beta};\qquad \hat\sigma^2 = \frac{1}{\alpha + (z + \hat x_n^2)\beta}. \tag{13}$$

By Gil et al. (2013), the Rényi divergence of order $\nu$, $D_\nu(f_1\|f_2)$, between univariate normal distributions $f_1, f_2$ with means $\mu_1, \mu_2$ and variances $\sigma_1^2, \sigma_2^2$ respectively, is

$$D_\nu(f_1\|f_2) = \ln\frac{\sigma_2}{\sigma_1} + \frac{1}{2(\nu-1)}\ln\frac{\sigma_2^2}{(\sigma^2_{f_1,f_2})^*_\nu} + \frac{1}{2}\frac{\nu(\mu_1-\mu_2)^2}{(\sigma^2_{f_1,f_2})^*_\nu},\qquad (\sigma^2_{f_1,f_2})^*_\nu = \nu\sigma_2^2 + (1-\nu)\sigma_1^2 > 0.$$

Therefore, for $p(\theta|D)$ and $p(\theta|\hat D)$, the Rényi divergence of order $\nu$ is as shown in eq. 14, where we omit the subscript from $(\sigma^2)^*_\nu$, since it is clear from context to which distributions it applies.

$$D_\nu(p(\theta|D)\|p(\theta|\hat D)) = \ln\frac{\hat\sigma}{\sigma} + \frac{1}{2(\nu-1)}\ln\frac{\hat\sigma^2}{(\sigma^2)^*_\nu} + \frac{1}{2}\frac{\nu(\mu-\hat\mu)^2}{(\sigma^2)^*_\nu},\qquad (\sigma^2)^*_\nu = \nu\hat\sigma^2 + (1-\nu)\sigma^2 \tag{14}$$

According to Claim D.25, $(\sigma^2)^*_\nu > 0$.
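As a minimal numerical sketch (not part of the proof), the closed-form Rényi divergence between two univariate Gaussians can be checked directly; in particular, the $\nu \to 1$ limit should recover the KL divergence.

```python
import math

def renyi_gauss(mu1, var1, mu2, var2, nu):
    """Renyi divergence of order nu between N(mu1, var1) and N(mu2, var2);
    defined whenever var_star = nu*var2 + (1 - nu)*var1 > 0."""
    var_star = nu * var2 + (1 - nu) * var1
    assert var_star > 0
    return (0.5 * math.log(var2 / var1)
            + math.log(var2 / var_star) / (2 * (nu - 1))
            + nu * (mu1 - mu2) ** 2 / (2 * var_star))

kl = 0.5 * (math.log(2.0) + (1.0 + 1.0) / 2.0 - 1.0)  # KL(N(0,1) || N(1,2))
near_kl = renyi_gauss(0.0, 1.0, 1.0, 2.0, 1.0001)     # nu close to 1
```

For equal variances the divergence reduces to $\frac{\nu(\mu_1-\mu_2)^2}{2\sigma^2}$, which is the form exploited when bounding the mean term of eq. 14.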
Therefore, the value $D_\nu(p(\theta|D)\|p(\theta|\hat D))$ exists. In order to prove Rényi differential privacy, each of the terms of $D_\nu(p(\theta|D)\|p(\theta|\hat D))$ is bounded separately. The bounds on the terms are proved in Claims D.26, D.27, and D.28.

Proof of Lemma 4.3. By Lemma 4.2, sampling from the posterior is $(\nu,\epsilon_1)$-RDP; therefore, by Lemma 3.1, sampling from the posterior is also $(\epsilon_1 + \frac{\ln(1/\delta)}{\nu-1},\ \delta)$-differentially private.

A.2 STOCHASTIC GRADIENT LANGEVIN DYNAMICS PRIVACY

Proof of Lemma 4.4.

$$p(\hat\theta_j > \mu_j \mid D_2) = \sum_{r=1}^{n} p(\hat\theta_j^r > \mu_j \mid D_2)\, p(\hat\theta_j = \hat\theta_j^r \mid D_2) = \sum_{r=1}^{n} p(\hat\theta_j^r - \hat\mu_j^r > \mu_j - \hat\mu_j^r \mid D_2)\, p(\hat\theta_j = \hat\theta_j^r \mid D_2)$$
$$= \frac{1}{n}\sum_{r=1}^{n} p(\hat\theta_j^r - \hat\mu_j^r > \mu_j - \hat\mu_j^r \mid D_2) \le \frac{1}{n}\sum_{r=1}^{n}\exp\left(-\frac{(\mu_j - \hat\mu_j^r)^2}{2(\hat\sigma_j^r)^2}\right),$$

where the inequality holds due to the Chernoff bound (for further details, see Wainwright (2019)).

Proof of Lemma 4.5. By Lemma A.5, for $n > \max\{\frac{\alpha}{x_h^2\beta},\ \frac{\alpha}{x_h^2\beta}(e^{\frac{2}{x_h^2\beta}} - 2) + \frac{1}{2x_h^2\beta},\ \frac{1}{2\alpha x_h^2\beta} - \frac{1}{x_h^2\beta}\}$, eq. 15 holds for some $\dot k \in \mathbb{R}_{>0}$. This lower bound is of order $\frac{c^2}{n^2}$, therefore proving Lemma 4.5.

$$\frac{(\mu_{(\lceil\dot k\rceil+1)n} - \hat\mu^r_{(\lceil\dot k\rceil+1)n})^2}{(\hat\sigma^r_{(\lceil\dot k\rceil+1)n})^2} \ge e^{-\frac{2}{x_h^2\beta}}\frac{\alpha}{v_1}\left(\frac{3}{32 x_h^2\beta}\right)^2\left(\frac{c}{n}\right)^2,\qquad v_1 = \max\left\{6,\ 1 + 2e^{\frac{1}{x_h^2\beta}}\right\} \tag{15}$$

Proof of Lemma 4.6. By Lemma A.6, for $n > \max\{\frac{\alpha}{x_h^2\beta},\ \frac{\alpha}{x_h^2\beta}(e^{\frac{2}{x_h^2\beta}} - 2) + \frac{1}{2x_h^2\beta},\ \frac{1}{2\alpha x_h^2\beta} - \frac{1}{x_h^2\beta}\}$, there exists $T \in \mathbb{Z}_{>0}$ (marked in Lemma A.6 as $\lceil\dot k\rceil$) such that running SGLD for the Bayesian linear regression problem over $D_1$ for $T$ steps will not be $(\epsilon,\delta)$-differentially private for $\epsilon < \epsilon'$, as defined in eq. 16, and $\delta < 0.5$. Since $\epsilon'$ is of order $\frac{c^2}{n^2}$, this proves the lemma.

$$\epsilon' = e^{-\frac{2}{x_h^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32 x_h^2\beta}\right)^2\left(\frac{c}{n}\right)^2 + \ln(0.5-\delta),\qquad v_1 = \max\left\{6,\ 1 + 2e^{\frac{1}{x_h^2\beta}}\right\} \tag{16}$$

A.3 STOCHASTIC GRADIENT LANGEVIN DYNAMICS DETAILED ANALYSIS

In order to ease the analysis of the SGLD process, the markings in eq. 17 are used. $x_h, \alpha, \beta, c, n$ are as defined for the Bayesian linear regression problem, and $\eta$ is defined in subsection 4.2.
$$\lambda = 1 - \frac{\eta}{2}(\alpha + n x_h^2\beta),\quad \hat\lambda = 1 - \frac{\eta}{2}\left(\alpha + n\left(\tfrac{x_h}{2}\right)^2\beta\right),\quad \rho = \frac{\eta}{2}n c x_h^2\beta,\quad \hat\rho = \frac{\eta}{2}n c\left(\tfrac{x_h}{2}\right)^2\beta \tag{17}$$

Lemma A.1. The forms of $\hat\theta^r_{(k+1)n}$ are

$$\hat\theta^1_{(k+1)n} = \theta_0\hat\lambda^{k+1}\lambda^{(n-1)(k+1)} + \sum_{j=0}^{k}(\hat\lambda\lambda^{n-1})^j\left[\hat\rho\lambda^{n-1} + \rho\sum_{i=0}^{n-2}\lambda^i + \sqrt{\eta}\sum_{i=0}^{n-1}\lambda^i\xi_i\right]$$

$$\hat\theta^{r>1}_{(k+1)n} = \theta_0(\hat\lambda\lambda^{n-1})^{k+1} + \left(\sum_{i=1}^{r-1}(\rho + \sqrt{\eta}\xi)\hat\lambda\lambda^{n-i-1} + (\hat\rho + \sqrt{\eta}\xi)\lambda^{n-r} + \sum_{j=r+1}^{n}(\rho + \sqrt{\eta}\xi)\lambda^{n-j}\right)\sum_{l=0}^{k}(\hat\lambda\lambda^{n-1})^l.$$

Proof of Lemma A.1. Welling & Teh (2011) define the SGLD update rule as in eq. 3. This rule can be applied to the Bayesian linear regression problem over databases $D_1, D_2$ as follows:

$$p(\theta_j) = \mathcal{N}(\theta_j; 0, \alpha^{-1}) \Rightarrow \ln p(\theta_j) = \ln\left(\tfrac{1}{\sqrt{2\pi\alpha^{-1}}}\right) - \tfrac{1}{2}\theta_j^2\alpha \Rightarrow \nabla_{\theta_j}\ln p(\theta_j) = -\theta_j\alpha$$
$$p(y_i|\theta_j) = \mathcal{N}(y_i; \theta_j x_i, \beta^{-1}) \Rightarrow \ln p(y_i|\theta_j) = \ln\left(\tfrac{1}{\sqrt{2\pi\beta^{-1}}}\right) - \tfrac{1}{2}(y_i - \theta_j x_i)^2\beta \Rightarrow \nabla_{\theta_j}\ln p(y_i|\theta_j) = (y_i - \theta_j x_i)x_i\beta$$
$$\Rightarrow \theta_{j+1} = \theta_j + \frac{\eta}{2}\left[-\theta_j\alpha + n(y_i - \theta_j x_i)x_i\beta\right] + \sqrt{\eta}\xi_j = \theta_j\left[1 - \frac{\eta}{2}(\alpha + n x_i^2\beta)\right] + \frac{\eta}{2}n y_i x_i\beta + \sqrt{\eta}\xi_j = \theta_j\left[1 - \frac{\eta}{2}(\alpha + n x_i^2\beta)\right] + \frac{\eta}{2}n c x_i^2\beta + \sqrt{\eta}\xi_j.$$

By using standard tools for solving first-order non-homogeneous recurrence relations with variable coefficients, the value of $\hat\theta^1_n$ can be found:

$$\hat\theta^1_n = \hat\lambda\lambda^{n-1}\left(\theta_0 + \frac{\hat\rho + \sqrt{\eta}\xi}{\hat\lambda} + \sum_{i=2}^{n}\frac{\rho + \sqrt{\eta}\xi}{\hat\lambda\lambda^{i-1}}\right) = \theta_0\hat\lambda\lambda^{n-1} + (\hat\rho + \sqrt{\eta}\xi)\lambda^{n-1} + (\rho + \sqrt{\eta}\xi)\sum_{i=2}^{n}\lambda^{n-i} = \theta_0\hat\lambda\lambda^{n-1} + (\hat\rho + \sqrt{\eta}\xi)\lambda^{n-1} + (\rho + \sqrt{\eta}\xi)\sum_{i=0}^{n-2}\lambda^i = \theta_0\hat\lambda\lambda^{n-1} + \hat\rho\lambda^{n-1} + \rho\sum_{i=0}^{n-2}\lambda^i + \sqrt{\eta}\xi\sum_{i=0}^{n-1}\lambda^i.$$

Now, by defining a new series $\hat\theta^1_{(k+1)n} = c_1\hat\theta^1_{kn} + c_2$ and using the tools for solving first-order non-homogeneous recurrence relations with constant coefficients, the value of $\hat\theta^1_{kn}$ can be found:

$$\hat\theta^1_{kn} = c_1^k\left(\frac{\hat\theta^1_n}{c_1} + \sum_{i=2}^{k}\frac{c_2}{c_1^i}\right) = \hat\theta^1_n c_1^{k-1} + \sum_{i=2}^{k}c_2 c_1^{k-i} = \hat\theta^1_n c_1^{k-1} + c_2\sum_{i=0}^{k-2}c_1^i = (\theta_0 c_1 + c_2)c_1^{k-1} + c_2\sum_{i=0}^{k-2}c_1^i = \theta_0(\hat\lambda\lambda^{n-1})^k + \left(\hat\rho\lambda^{n-1} + \rho\sum_{i=0}^{n-2}\lambda^i + \sqrt{\eta}\xi\sum_{i=0}^{n-1}\lambda^i\right)\sum_{j=0}^{k-1}(\hat\lambda\lambda^{n-1})^j.$$

The proof for $\hat\theta^r_{kn}$ is done in a similar manner.

Corollary A.1.
$\hat\theta^r_{(k+1)n} \sim \mathcal{N}(\hat\theta^r_{(k+1)n};\ \hat\mu^r_{(k+1)n},\ (\hat\sigma^r_{(k+1)n})^2)$.

Lemma A.2. $\mu_{kn+n} - \hat\mu^r_{kn+n} \ge \lambda^{n-1}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{k(n-1)}(\hat\lambda^{k+1} - \lambda^{k+1})$.

Proof of Lemma A.2. The proof of this lemma is separated into two cases, $r = 1$ and $r > 1$. For $r = 1$, it is easy to derive eq. 18 from Lemma A.1, using $\mathbb{E}[\theta_0] = 0$ and $\mathbb{E}[\xi] = 0$:

$$\hat\mu^1_{(k+1)n} = \rho\sum_{i=0}^{n-2}\lambda^i\sum_{j=0}^{k}(\hat\lambda\lambda^{n-1})^j + \hat\rho\lambda^{n-1}\sum_{j=0}^{k}(\hat\lambda\lambda^{n-1})^j,\qquad \mu_{kn+n} = \rho\sum_{i=0}^{n-2}\lambda^i\sum_{j=0}^{k}\lambda^{jn} + \rho\lambda^{n-1}\sum_{j=0}^{k}\lambda^{jn}. \tag{18}$$

We use the sum of a geometric sequence to get

$$\hat\mu^1_{(k+1)n} = \rho\sum_{i=0}^{n-2}\lambda^i\sum_{j=0}^{k}(\hat\lambda\lambda^{n-1})^j + \hat\rho\lambda^{n-1}\sum_{j=0}^{k}(\hat\lambda\lambda^{n-1})^j = \left(\rho\frac{1-\lambda^{n-1}}{1-\lambda} + \hat\rho\lambda^{n-1}\right)\frac{1-(\hat\lambda\lambda^{n-1})^{k+1}}{1-\hat\lambda\lambda^{n-1}}.$$

Therefore, the difference between the means can be lower bounded:

$$\mu_{kn+n} - \hat\mu^1_{kn+n} = \frac{1-\lambda^{(k+1)n}}{1-\lambda^n}\left[\rho\frac{1-\lambda^{n-1}}{1-\lambda} + \rho\lambda^{n-1}\right] - \frac{1-(\hat\lambda\lambda^{n-1})^{k+1}}{1-\hat\lambda\lambda^{n-1}}\left[\rho\frac{1-\lambda^{n-1}}{1-\lambda} + \hat\rho\lambda^{n-1}\right]$$
$$\stackrel{*}{=} \frac{1-\lambda^{(k+1)n}}{1-\lambda^n}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}(1-\lambda^n) - \frac{1-(\hat\lambda\lambda^{n-1})^{k+1}}{1-\hat\lambda\lambda^{n-1}}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left(1-\lambda^{n-1}\left(\tfrac{1}{4}\lambda + \tfrac{3}{4}\right)\right)$$
$$= \frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left[(1-\lambda^{(k+1)n}) - \frac{1-(\hat\lambda\lambda^{n-1})^{k+1}}{1-\hat\lambda\lambda^{n-1}}\left(1-\lambda^{n-1}\left(\tfrac{1}{4}\lambda + \tfrac{3}{4}\right)\right)\right]$$
$$\stackrel{**}{=} \frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\cdot\frac{\lambda^{n-1}\frac{3}{4}\frac{\eta}{2}\alpha\left(1-\hat\lambda^{k+1}\lambda^{(k+1)(n-1)}\right) + \lambda^{(k+1)(n-1)}(\hat\lambda^{k+1}-\lambda^{k+1})(1-\lambda^{n-1}\hat\lambda)}{1-\lambda^{n-1}\hat\lambda}$$
$$\ge \frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{(k+1)(n-1)}(\hat\lambda^{k+1}-\lambda^{k+1}) = \lambda^{n-1}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{k(n-1)}(\hat\lambda^{k+1}-\lambda^{k+1}),$$

where equality $*$ holds by Claims D.1, D.2, and D.3, equality $**$ holds by Claim D.5, and the inequality holds because $\lambda < \hat\lambda < 1$. This proves Lemma A.2 for $r = 1$. For the case of $r > 1$, from Lemma A.1 it is easy to see that

$$\hat\theta^{r>1}_{(k+1)n} = \left[\left[\theta_0\lambda^{r-1} + \rho\sum_{i=0}^{r-2}\lambda^i + \sqrt{\eta}\sum_{i=0}^{r-2}\lambda^i\xi_i\right]\hat\lambda^k\lambda^{k(n-1)} + \sum_{j=0}^{k-1}(\hat\lambda\lambda^{n-1})^j\left[\hat\rho\lambda^{n-1} + \rho\sum_{i=0}^{n-2}\lambda^i + \sqrt{\eta}\sum_{i=0}^{n-1}\lambda^i\xi_i\right]\right]\hat\lambda\lambda^{n-r} + \hat\rho\lambda^{n-r} + \rho\sum_{j=0}^{n-r-1}\lambda^j + \sqrt{\eta}\sum_{j=0}^{n-r}\xi\lambda^j.$$
Therefore, $\hat\mu^{r>1}_{(k+1)n}$ follows

$$\hat\mu^{r>1}_{(k+1)n} = \left[\left[\rho\sum_{i=0}^{r-2}\lambda^i\right]\hat\lambda^k\lambda^{k(n-1)} + \sum_{j=0}^{k-1}(\hat\lambda\lambda^{n-1})^j\left[\hat\rho\lambda^{n-1} + \rho\sum_{i=0}^{n-2}\lambda^i\right]\right]\hat\lambda\lambda^{n-r} + \hat\rho\lambda^{n-r} + \rho\sum_{j=0}^{n-r-1}\lambda^j.$$

Consequently, the difference in means for $r > 1$ can be lower bounded:

$$\mu_{kn+n} - \hat\mu^r_{kn+n} = \lambda^{n-r}\left[\lambda\rho\lambda^k\lambda^{k(n-1)}\sum_{i=0}^{r-2}\lambda^i + \lambda\sum_{j=0}^{k-1}(\lambda\lambda^{n-1})^j\left(\rho\lambda^{n-1} + \rho\sum_{i=0}^{n-2}\lambda^i\right) - \hat\lambda\rho\hat\lambda^k\lambda^{k(n-1)}\sum_{i=0}^{r-2}\lambda^i - \hat\lambda\sum_{j=0}^{k-1}(\hat\lambda\lambda^{n-1})^j\left(\hat\rho\lambda^{n-1} + \rho\sum_{i=0}^{n-2}\lambda^i\right)\right] + \lambda^{n-r}(\rho - \hat\rho)$$
$$= \lambda^{n-r}\lambda^{k(n-1)}\rho(\lambda^{k+1} - \hat\lambda^{k+1})\sum_{i=0}^{r-2}\lambda^i + \lambda^{n-r}\sum_{j=0}^{k-1}\lambda^{(n-1)j}\left[\lambda^{n-1}(\rho\lambda^{j+1} - \hat\rho\hat\lambda^{j+1}) + (\lambda^{j+1} - \hat\lambda^{j+1})\rho\sum_{i=0}^{n-2}\lambda^i\right] + \lambda^{n-r}(\rho - \hat\rho)$$
$$\stackrel{*}{=} \lambda^{n-r}\lambda^{k(n-1)}\rho(\lambda^{k+1} - \hat\lambda^{k+1})\frac{1-\lambda^{r-1}}{1-\lambda} + \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left[\lambda(1-\lambda^{kn}) - \hat\lambda\frac{1-(\lambda^{n-1}\hat\lambda)^k}{1-\lambda^{n-1}\hat\lambda}\left(1-\lambda^n\left(\tfrac{3}{4}\lambda^{-1} + \tfrac{1}{4}\right)\right)\right] + \lambda^{n-r}(\rho - \hat\rho)$$
$$\stackrel{**}{=} \lambda^{n-r}\lambda^{k(n-1)}\rho(\lambda^{k+1} - \hat\lambda^{k+1})\frac{1-\lambda^{r-1}}{1-\lambda} + \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left[(\lambda - \hat\lambda) + \frac{\lambda^{n-1}\left[\frac{3}{4}\frac{\eta}{2}\alpha(1-\hat\lambda^k\lambda^{k(n-1)})\right]}{1-\hat\lambda\lambda^{n-1}} + \lambda^{k(n-1)}(\hat\lambda^{k+1} - \lambda^{k+1})\right] + \lambda^{n-r}(\rho - \hat\rho)$$
$$\stackrel{***}{=} \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{k(n-1)}(\lambda^{k+1} - \hat\lambda^{k+1})(1-\lambda^{r-1}) + \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left[(\lambda - \hat\lambda) + \frac{\lambda^{n-1}\left[\frac{3}{4}\frac{\eta}{2}\alpha(1-\hat\lambda^k\lambda^{k(n-1)})\right]}{1-\hat\lambda\lambda^{n-1}} + \lambda^{k(n-1)}(\hat\lambda^{k+1} - \lambda^{k+1})\right] + \lambda^{n-r}(\rho - \hat\rho)$$
$$= \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{k(n-1)}(\lambda^{k+1} - \hat\lambda^{k+1})\left[1-\lambda^{r-1} - 1\right] + \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left(\lambda - \hat\lambda + \frac{\lambda^{n-1}\left[\frac{3}{4}\frac{\eta}{2}\alpha(1-\hat\lambda^k\lambda^{k(n-1)})\right]}{1-\hat\lambda\lambda^{n-1}}\right) + \lambda^{n-r}(\rho - \hat\rho)$$
$$= \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{k(n-1)}(\hat\lambda^{k+1} - \lambda^{k+1})\lambda^{r-1} + \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left(\lambda - \hat\lambda + \frac{\lambda^{n-1}\left[\frac{3}{4}\frac{\eta}{2}\alpha(1-\hat\lambda^k\lambda^{k(n-1)})\right]}{1-\hat\lambda\lambda^{n-1}}\right) + \lambda^{n-r}(\rho - \hat\rho)$$
$$= \lambda^{n-1}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{k(n-1)}(\hat\lambda^{k+1} - \lambda^{k+1}) + \lambda^{n-r}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\left(\lambda - \hat\lambda + \frac{\lambda^{n-1}\left[\frac{3}{4}\frac{\eta}{2}\alpha(1-\hat\lambda^k\lambda^{k(n-1)})\right]}{1-\hat\lambda\lambda^{n-1}}\right) + \lambda^{n-r}(\rho - \hat\rho)$$
$$\stackrel{****}{>} \lambda^{n-1}\frac{n c x_h^2\beta}{\alpha + n x_h^2\beta}\lambda^{k(n-1)}(\hat\lambda^{k+1} - \lambda^{k+1}),$$

where equality $*$ holds by Claims D.6 and D.7, equality $**$ holds by Claim D.10, equality $***$ holds by Claim D.1, and inequality $****$ holds by Claim D.11 and $\hat\lambda > \lambda$.

Lemma A.3. For $x_h^2\beta > 3$, $n > \frac{1}{2\alpha x_h^2\beta} - \frac{1}{x_h^2\beta}$, $\exists\dot k \in \mathbb{R}_{+}$ such that the upper bounds defined in eq.
19 hold for all 0 < k ≤ k̇ : ( σ1 ( k+1 ) n ) 2 ≤ 2 ( λ̂λn−1 ) 2 1 α ( λ̂λn−1 ) 2k ( σr > 1 ( k+1 ) n ) 2 ≤ 6 ( λ̂λn−r ) 2 1 α ( λ̂λn−1 ) 2k . ( 19 ) Proof Lemma A.3 . The proof will be separated into two cases , r = 1 and r > 1 . ( σ̂1kn+n ) 2 can be easily computed from lemma A.1 using the fact that both the noise and prior are distributed normally . A first general upper bound on ( σ̂1kn+n ) 2 is found at eq . 20 . ( σ1kn+n ) 2 = 1 α ( λ̂λ ( n−1 ) ) 2 ( k+1 ) + η n−1∑ i=0 λ2i k∑ j=0 ( λ̂2λ2 ( n−1 ) ) j = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η n−1∑ i=0 λ2i k∑ j=0 ( λ̂λ ( n−1 ) ) 2j ( λ̂λ ( n−1 ) ) 2 ( k+1 ) ] = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η n−1∑ i=0 λ2i k∑ j=0 ( λ̂λ ( n−1 ) ) 2 ( j− ( k+1 ) ) ] ≤ ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η n−1∑ i=0 λ2i k∑ j=0 λ2n ( j− ( k+1 ) ) ] = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η ( k+1 ) n∑ i=1 λ−2i ] = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + ηλ−2 1− λ−2 ( k+1 ) n 1− λ−2 ] ( 20 ) where the inequality holds because λ < λ̂ By claim D.12 , this upper bound can be further bounded for k̇ ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 such that eq . 21 will hold , therefore proving the bound for r = 1 . 
( λ̂λ ( n−1 ) ) 2 ( k̇+1 ) [ 1 α + ηλ−2 1− λ−2 ( k̇+1 ) n 1− λ−2 ] ≤ 2 ( λ̂λ ( n−1 ) ) 2 ( k̇+1 ) 1 α ( 21 ) For r > 1 , ( σ̂r > 1kn+n ) 2 can be bounded as following ( σr > 1kn+n ) 2 = ( λ̂λn−r ) 2η [ ( λ̂kλk ( n−1 ) ) 2 r−2∑ i=0 λ2i + k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i + 1 α ( λ̂λn−1 ) 2k ( λ̂λn−1 ) 2 ≤∗ ( λ̂λn−r ) 2η [ ( λ̂kλk ( n−1 ) ) 2 r−2∑ i=0 λ2i + k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i + 1 α ( λ̂λn−1 ) 2k ( λ̂λn−r ) 2 = ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k + η ( λ̂λn−1 ) 2k r−2∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i ≤∗∗ ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k + η ( λ̂λn−1 ) 2k n−1∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i ≤∗∗∗ 2 ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k̇ + η ( λ̂λn−1 ) 2k n−1∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] ( 22 ) where inequality * follows from λ < 1 and r > 1 , inequality * * follows from r ≤ n , and inequality * * * holds from claim D.15 . For k ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 , this bound can be further developed 2 ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k̇ + η ( λ̂λn−1 ) 2k n−1∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] ≤ 6 ( λ̂λn−r ) 2 1 α ( λ̂λn−1 ) 2k̇ . ( 23 ) The inequality holds from claims D.12 , D.14 , which provide the bound for r > 1 . All that is left is to prove that 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 > 0 , which is done in Claim D.22 . Lemma A.4 . Mark k̇ = 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 , for the conditions of Lemma A.3 ( σ1dk̇en+n ) 2 ≤ ( 1 + 2e 1 x2β ) ( λ̂λn−1 ) 2 1 α ( λ̂λ ( n−1 ) ) 2dk̇e ( σr > 1dk̇en+n ) 2 ≤ 6 ( λ̂λn−r ) 2 1 α ( λ̂λn−1 ) 2dk̇e . Proof Lemma A.4 . This proof will be separated into two cases , for r > 1 and for r = 1 . For r > 1 , the bound found in eq . 22 , has no dependence on the choice of k , therefore holds also for dk̇e . This bound was , in turn , developed for k̇ at eq . 23 using three claims . If these claims also hold for dk̇e , then the bound in eq . 
23 also holds for dk̇e , and the lemma is proved for r > 1 . Claims D.14 , D.12 hold for all k ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) , and since dk̇e ≤ k̇ + 1 = 1 2n logλ ( 1 1+ 1αη ( 1−λ2 ) ) , they holds for dk̇e . Claim D.15 was proved for all k , hence also holds for dk̇e . For r = 1 , the bound found at eq . 20 is applicable for all k , hence ( σ1 ( dk̇e+1 ) n ) 2 ≤ ( λ̂λn−1 ) 2 ( dk̇e+1 ) [ 1 α + ηλ−2 1− λ−2 ( dk̇e+1 ) n 1− λ−2 ] ≤ ( λ̂λn−1 ) 2 ( dk̇e+1 ) 1 α ( 1 + 2e 1 x2β ) where the last inequality holds from claim D.17 . Lemma A.5 . For k̇ defined in Lemma A.4 , the conditions of Lemma A.3 , and n > max { α x2hβ , α x2hβ ( e 2 x2 h β − 2 ) + 1 2x2hβ } ( µdk̇en+n − µ̂rdk̇en+n ) 2 ( σrdk̇en+n ) 2 ≥ e − 2 x2 h β α v1 ( 3 32x2hβ ) 2 ( c n ) 2 v1 = max { 6 , 1 + 2e 1 x2 h β } . Proof Lemma A.5 . ( µdk̇en+n − µ̂rdk̇en+n ) 2 σrdk̇en+n ≥ ( λn−1 ncx2hβ α+nx2hβ λdk̇e ( n−1 ) ( λ̂dk̇e+1 − λdk̇e+1 ) ) 2 v1 ( λ̂λn−r ) 2 1 α ( λ̂λ n−1 ) 2dk̇e = λ2dk̇e ( n−1 ) ( λn−1 ncx2hβ α+nx2hβ ( λ̂dk̇e+1 − λdk̇e+1 ) ) 2 v1 ( λ̂λn−r ) 2 1 α ( λ̂λ n−1 ) 2dk̇e = ( λn−1 ncx2hβ α+nx2hβ ( λ̂dk̇e+1 − λdk̇e+1 ) ) 2 v1 ( λ̂λn−r ) 2 1 α λ̂ 2dk̇e = αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( λ̂dk̇e+1 − λdk̇e+1 ) 2 λ̂2dk̇e+1 = αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 1− λ dk̇e+1 λ̂dk̇e+1 ) 2 ≥ αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 1− ( 1− 3 4nx 2β ( α+ nx2hβ ) 2 − ( α+ 14nx2β ) ) ) 2 = αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 3 4nx 2 hβ ( α+ nx2hβ ) 2 − ( α+ 14nx2β ) ) 2 ≥ αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 3 4nx 2β ( α+ nx2hβ ) 2 ) 2 ≥ αλ 2 ( r−1 ) v1 ( ncx2β 2nx2hβ ) 2 ( 3 4nx 2β ( 2nx2β ) 2 ) 2 = αλ2 ( r−1 ) v1 ( c 2 ) 2 ( 3 4 4nx2hβ ) 2 = αλ2 ( r−1 ) v1 ( 3 32x2hβ ) 2 ( c n ) 2 ≥ αλ2 ( n−1 ) v1 ( 3 32x2hβ ) 2 ( c n ) 2 ≥ e − 2 x2 h β α v1 ( 3 32x2hβ ) 2 ( c n ) 2 where first inequality holds from A.3 , A.4 and the definition of v1 , the second inequality follows claim D.17 and claim D.22 , fourth inequality holds under the assumption of nx2hβ > α ⇐⇒ n > α x2hβ , and last inequality holds from claim 
D.19 . Lemma A.6 . For the Bayesian linear regression problem over database D1 , the conditions of Lemma A.5 , and k̇ , as defined in Lemma A.4 , approximate sampling , by running SGLD for ( dk̇e+1 ) n steps , will not be ( , δ ) differentially private for δ < 0.5 , < e − 2 x2β α 2v1 ( 3 32x2hβ ) 2 ( c n ) 2 + ln ( 0.5− δ ) v1 = max { 6 , 1 + 2e 1 x2 h β } . Proof Lemma A.6 . According to definition 1 , it is enough that there is one group , S , such that p ( θ ( dk̇e+1 ) n ∈ S|D1 ) > e p ( θ̂ ( dk̇e+1 ) n ∈ S|D2 ) + δ , to show that releasing θ ( dk̇e+1 ) n is not ( , δ ) private . Consider S = { s|s > µ ( dk̇e+1 ) n } . From claim D.23 and since θ ( dk̇e+1 ) n ∼ N ( θ ( dk̇e+1 ) n ; µ ( dk̇e+1 ) n , σ 2 ( dk̇e+1 ) n ) , eq . 24 holds . The conditions for the right term to be smaller than 0 ( thus making the approximate sampling not ( , δ ) private ) are found in eq . 25 , therefore proving the lemma . e p ( θ̂ ( dk̇e+1 ) n ∈ S|D2 ) + δ − p ( θ̂ ( dk̇e+1 ) n ∈ S|D1 ) ≤ e e −e − 2 x2β α 2v1 ( 3 32x2 h β ) 2 ( cn ) 2 + δ − 0.5 ( 24 ) e −e − 2 x2β α 2v1 ( 3 32x2 h β ) 2 ( cn ) 2 + δ − 0.5 < 0 e −e − 2 x2β α 2v1 ( 3 32x2 h β ) 2 ( cn ) 2 < 0.5− δ − e− 2 x2β α 2v1 ( 3 32x2hβ ) 2 ( c n ) 2 < ln ( 0.5− δ ) < e − 2 x2β α 2v1 ( 3 32x2hβ ) 2 ( c n ) 2 + ln ( 0.5− δ ) ( 25 ) B PROPOSE TEST SAMPLE SUPPLEMENTARY nmin which is used in algorithm 4.3 is defined as following ν = 2 ln ( 1δ ) + 1 nb1 = max { 1 + x2h x2l 8 , 1 + ν x2h x2l ( 1 + 8 ( ν − 1 ) ) , ( 16νβx4h 9 10 x 2 l ) 1 1−2γ1 , ( 32νβ ( ( x2hβ ) ( x 2 hα+ x 4 hβ ) 9 10 ( x 2 l β ) 2 ) m̆ ) 1 2−γ1 , ( 32νβ ( ( x2hβ ) ( x 2 hα+ x 4 hβ ) 9 10 ( x 2 l β ) 2 ) ) 1 2−2γ1 , ( 8ν ( ( x2hα+ x 4 hβ ) 2 9 10x 6 l β ) m̆ ) 2 3 , ( 8ν ( ( x2hα+ x 4 hβ ) 2 9 10x 6 l β ) ) 2 3−2γ1 } nb2 = max { 1 + x2h x2l 10ν β , 1 + ν x2h x2l } nmin = max { nb1 , nb2 , n1 ρ2 γ1 } . ( 26 ) C PROPOSE TEST SAMPLE PRIVACY Proof Claim 4.2 . We set the algorithm parameters in eq . 27 , and matching databasesD3 , D4 defined in eq . 8 . 
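The failure test at the core of Lemma A.6 above reduces to a statement about two Gaussians: with S = {s > µ}, the probability of S is exactly 0.5 under D1, while under D2 it is a Gaussian upper tail; once e^ε·P(S|D2) + δ < 0.5, no (ε, δ) guarantee can hold. A minimal numeric sketch, with illustrative means and variances (not the ones derived in the lemma):

```python
# Witnessing an (eps, delta)-DP violation for two Gaussian output
# distributions, as in the argument of Lemma A.6.
import math

def gaussian_sf(x, mu, sigma):
    """P(X > x) for X ~ N(mu, sigma^2)."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

mu, sigma = 0.0, 0.1           # sampling distribution under D1 (illustrative)
mu_hat, sigma_hat = -1.0, 0.1  # sampling distribution under D2 (illustrative)
eps, delta = 1.0, 0.1

p1 = gaussian_sf(mu, mu, sigma)          # exactly 0.5 by symmetry
p2 = gaussian_sf(mu, mu_hat, sigma_hat)  # tiny: mu lies 10 sigma above mu_hat
assert p1 > math.exp(eps) * p2 + delta   # the (eps, delta)-DP condition fails on S
```

The lemma's quantitative content is precisely how large (µ − µ̂)²/σ̂² must be, relative to ε and δ, for this inequality to hold.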
We note that we only define a lower bound over n1 , which will be updated later on . ρ3 = 1.15 ; ρ2 = 0.45 ; ρ1 = 1.25 , γ1 = 0.49 ; xl = xh/2 n1 > max { 2 1 log 1 2δ , 4 1 log 1 2δ , 210ρ1 , 29+10ρ2 } β = 3 ; xh = 1 α = 1 ( 27 ) Mark the return value of the algorithm as r , the event of the algorithm running on database D3 and W = D3 as AD3 , the event of the algorithm running on database D4 and W = D4 as AD4 , and S = { s|s > µi } , where µi is the mean of the sample distribution at the SGLD i ’ th step given database D3 ( Similarly to as defined in subsection 4.2 ) . We will show that ∀ ∈ R > 0 , δ < 16 , ∃n1 such that eq . 28 holds . P ( r ∈ S|D3 ) > e P ( r ∈ S|D4 ) + δ ( 28 ) We first show that P ( r ∈ S ∧AcD3 |D3 ) = 0 P ( r ∈ S ∧AcD4 |D4 ) = 0 ( 29 ) Notice that the algorithm can return result in S only if it reached step 19 . Consider an event where the algorithm reached step 19 and AcD3 . From A c D3 , ∃ ( xi , yi ) ∈ D3 such that | yixi − m̆| ≥ n ρ2 2 . However , since ∀ ( xi , yi ) ∈ D3 : yixi = n ρ3 1 then ∀ ( xi , yi ) ∈ D3 : | yi xi − m̆| > nρ22 and therefore |W | = 0 . Under the assumption that sample from p ( θ| { } ) returns null then in this case the algorithm also returns null and therefore P ( r ∈ S ∧AcD3 |D3 ) = 0 . Same arguments hold for D4 . Following eq . 29 , to prove eq . 28 it is enough to prove eq . 30 . P ( r ∈ S|D3 , A3 ) P ( A3|D3 ) ≥∗ P ( r ∈ S|D3 , A3 ) − 5δ > ∗∗ e P ( r ∈ S|D4 , A4 ) + δ ≥ e P ( r ∈ S|D4 , A4 ) P ( A4|D4 ) + δ = e P ( r ∈ S ∧A4|D4 ) + δ ( 30 ) From claim C.1 ∃nbound1 such that ∀n1 > nbound1 inequality * holds . From Lemma 4.6 , for n1 big enough ∃T ∈ Z > 0 such that eq . 31 hold ( Where 6δ < 0.5 according to the claim conditions ) . Therefore , ∃k , nbound2 ∈ R > 0 such that ∀n1 > nbound2 : ′ > kn 2 ( 1−ρ3 ) 1 and eq . 31 hold . As ρ3 > 1 , by choosing n1 > max { nbound2 , ( k ) 1 2 ( ρ3−1 ) } get that ′ > . 
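Claims C.1 through C.5 below all lean on the same Laplace tail identity: for l ~ Lap(1/ε), P(l ≤ −(1/ε)ln(1/(2δ))) = δ, and hence P(|l| ≤ (1/ε)ln(1/(2δ))) = 1 − 2δ. A short numeric check (the scale 1/ε matches the noise added to the count releases, as we read the algorithm):

```python
# The Laplace tail identity used repeatedly in Claims C.1-C.5.
import math

def laplace_cdf(x, scale):
    """CDF of a zero-mean Laplace distribution with the given scale."""
    if x < 0:
        return 0.5 * math.exp(x / scale)
    return 1.0 - 0.5 * math.exp(-x / scale)

eps, delta = 0.5, 0.05
t = (1 / eps) * math.log(1 / (2 * delta))  # the threshold appearing in the proofs

lower_tail = laplace_cdf(-t, 1 / eps)                         # P(l <= -t) = delta
central = laplace_cdf(t, 1 / eps) - laplace_cdf(-t, 1 / eps)  # P(|l| <= t) = 1 - 2*delta

assert math.isclose(lower_tail, delta)
assert math.isclose(central, 1 - 2 * delta)
```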
Consequently , by choosing n1 > max { nbound1 , nbound2 , ( k ) 1 2 ( ρ3−1 ) } , inequalities * and * * hold , and the claim is proved . ′ = Ω ( n 2 ( ρ3−1 ) 1 ) P ( r ∈ S|D3 , A3 ) > e ′ P ( r ∈ S|D4 , A4 ) + 6δ ( 31 ) Claim C.1 . ∃nbound1 ∈ Z > 0 such that the probability for algorithm 4.3 to reach step 19 with W = D3 ( marked event A ) is greater or equal to 1− 5δ for all n1 > nbound1 . Proof . Mark the event of nW > nmin ∧ m̆ ∈ [ m − nρ22 , m + n ρ2 2 ] ∧ n 1+ρ2−0.1 2 > n̆ ρ1 1 ∧ n̆1 ≤ n1 ∧ V = D as B . Since P ( A|D3 , B ) = 1 it follows that P ( A|D3 ) ≥ P ( A ∧ B|D3 ) = P ( A|B , D3 ) P ( B|D3 ) = P ( B|D3 ) . Therefore if ∃nlb such that ∀n1 > nlb : P ( B|D3 ) ≥ 1 − 5δ the claim is proved . P ( B|D3 ) = P ( m̆ ∈ [ m− nρ22 , m+ n ρ2 2 ] ∧ nW > nmin|D3 , V = D , n 1+ρ2−0.1 2 > n̆ ρ1 1 , n̆1 ≤ n1 ) · P ( n1+ρ2−0.12 > n̆ ρ1 1 |V = D , n̆1 ≤ n1 , D3 ) P ( V = D , n̆1 ≤ n1|D3 ) ≥ P ( m̆ ∈ [ m− nρ22 , m+ n ρ2 2 ] ∧ nW > nmin|D3 , V = D , n 1+ρ2−0.1 2 > n̆ ρ1 1 , n̆1 ≤ n1 ) − 3δ = P ( nW > nmin|D3 , V = D , n1+ρ2−0.12 > n̆ ρ1 1 , n̆1 ≤ n1 , m̆ ∈ [ m− n ρ2 2 , m+ n ρ2 2 ] ) P ( m̆ ∈ [ m− nρ22 , m+ n ρ2 2 ] |D3 , V = D , n 1+ρ2−0.1 2 > n̆ ρ1 1 , n̆1 ≤ n1 ) − 3δ ≥ P ( nW > nmin|D3 , V = D , n1+ρ2−0.12 > n̆ ρ1 1 , n̆1 ≤ n1 , m̆ ∈ [ m− n ρ2 2 , m+ n ρ2 2 ] ) − 4δ ≥ 1− 5δ ( 32 ) By corollary C.1 and claim C.3 for n1 big enough first inequality holds . By claim C.4 for n1 big enough second inequality holds , and by claim C.5 for n1 big enough third inequality holds . Therefore for n1 big enough eq . 32 holds and the claim is proved . Claim C.2 . For n1 > max { 12 −10ρ1 , 4 1 log 1 2δ } P ( n̆ρ11 ≥ n ρ3 1 ∧ n̆1 ≤ n1|D3 ) ≥ 1− 2δ . Proof . 
Mark the noise added at step 6 as l1 P ( n̆ρ11 ≥ n ρ3 1 ∧ n̆1 ≤ n1|D3 ) = P ( ( n1 + l1 − 1 log 1 2δ ) ρ1 ≥ nρ31 ∧ n1 + l1 − 1 log 1 2δ ≤ n1|D3 ) = P ( ( n1 + l1 − 1 log 1 2δ ) ρ1 ≥ nρ31 ∧ l1 ≤ 1 log 1 2δ |D3 ) = P ( ( n1 + l1 − 1 log 1 2δ ) ρ1 ≥ nρ31 ∧ |l1| ≤ 1 log 1 2δ |D3 ) + P ( ( n1 + l1 − 1 log 1 2δ ) ρ1 ≥ nρ31 ∧ l1 ≤ − 1 log 1 2δ |D3 ) ≥ P ( ( n1 + l1 − 1 log 1 2δ ) ρ1 ≥ nρ31 ∧ |l1| ≤ 1 log 1 2δ |D3 ) = P ( ( n1 + l1 − 1 log 1 2δ ) ρ1 ≥ nρ31 ||l1| ≤ 1 log 1 2δ , D3 ) P ( |l1| ≤ 1 log 1 2δ |D3 ) ≥ P ( ( n1 − 2 1 log 1 2δ ) ρ1 ≥ nρ31 |D3 ) − 2δ = 1− 2δ Where last inequality holds from following equation P ( |l1| ≤ 1 log 1 2δ ) = 1− 2P ( l1 ≤ − 1 log 1 2δ ) = 1− exp ( − 1 log 1 2δ 1 ) = 1− 2δ Last equality holds since n1 > 12 −10ρ1 ⇒ n−11 < ( 12 ) 10ρ1 ⇒ n−0.11 < ( 12 ) ρ1 and therefore ( n1 − 2 1 log 1 2δ ) ρ1 > ( 12n1 ) ρ1 > nρ1−0.11 = n ρ3 1 Corollary C.1 . ∀n1 > max { 1 2 −10ρ1 , 4 1 log 1 2δ } : P ( V = D3 ∧ n̆1 ≤ n1 ) ≥ 1− 2δ . Claim C.3 . For n1 > max { 12 − ( 9+10ρ2 ) , 4 1 log 1 2δ } P ( n1+ρ2−0.12 > n̆ ρ1 1 |D3 , V = D3 , n̆1 ≤ n1 ) ≥ 1− δ . Proof . P ( n0.9+ρ22 > n̆ ρ1 1 |D3 , V = D3 , n̆1 ≤ n1 ) ≥ P ( n 0.9+ρ2 2 > n ρ1 1 |D3 , V = D3 ) ≥ P ( ( n1 − 2 1 log 1 2δ ) 0.9+ρ2 > nρ11 |D3 ) p ( n2 ≥ |V | − 2 1 log 1 2δ ) ≥ P ( ( n1 − 2 1 log 1 2δ ) 0.9+ρ2 > nρ11 |D3 ) − δ = 1− δ where the second inequality holds since P ( Lap ( 1 ) < − 1 log 1 2δ ) < δ , and last equality holds since ( n1 − 2 1 log 1 2δ ) 0.9+ρ2 > ( 12n1 ) 0.9+ρ2 > n1+ρ2−0.21 = n ρ1 1 Claim C.4 . ∃nlb1 ∈ Z > 0 such that ∀n1 > nlb1 : P ( m̆ ∈ [ m− n ρ2 2 , m+ n ρ2 2 ] |D3 , n 0.9+ρ2 2 > n̆ ρ1 1 ) ≥ 1− δ . Proof . 
Mark the noise added at step 13 as l1 P ( m̆ ∈ [ m− nρ22 , m+ n ρ2 2 ] |D3 , n 0.9+ρ2 2 > n̆ ρ1 1 ) = P ( l1 ∈ [ −nρ22 , n ρ2 2 ] |D3 , n 0.9+ρ2 2 > n̆ ρ1 1 ) ≥ 1− 2 ( 1 2 exp ( −nρ22 1 1 n̆ ρ1 1 2 ( n2−1 ) x2hx 2 l+x 4 h n2 ( n2−1 ) x4l ) = 1− exp ( − n 1+ρ2 2 ( n2 − 1 ) x4l n̆ρ11 ( 2 ( n2 − 1 ) x2hx2l + x4h ) ) Since n0.9+ρ22 > n̆ ρ1 1 then n̆ ρ1 1 = o ( n 1+ρ2 2 ) and therefore for n1 big enough the exponent is smaller than δ . Claim C.5 . ∃nlb2 ∈ Z > 0 such that ∀n1 > nlb2 : p ( nW > nmin||V | = D3 ∧ n 0.9+ρ2 2 > n̆ ρ1 1 ∧ m̆ ∈ [ m− n ρ2 2 , m+ n ρ2 2 ] , D3 ) ≥ 1− δ . Proof . For abbreviation mark eventB asB = |V | = D3∧n0.9+ρ22 > n̆ ρ1 1 ∧m̆ ∈ [ m−n ρ2 2 , m+n ρ2 2 ] . Mark the Laplace noise used in step 8 as l1 and the Laplace noise used in step 15 as l2 . P ( nW > nmin|B , D3 ) = P ( n1 − 1 log 1 2δ + l2 > nmin|B , D3 ) > P ( n1 − 1 log 1 2δ − 1 log 1 δ > nmin|B , D3 ) P ( l2 > − 1 log 1 δ ) + P ( n1 − 1 log 1 2δ + l2 > nmin ∧ l2 < − 1 log 1 δ |B , D3 ) > P ( n1 − 2 log 1 2δ − 1 log 1 δ > nmin|B , D3 ) ( 1− δ 2 ) > P ( n1 − 2 log 1 2δ − 1 log 1 δ > nmin|B , D3 ) − δ 2 = P ( n1 − 2 log 1 2δ − 1 log 1 δ > nmin|l1 < 1 log 1 δ , B , D3 ) P ( l1 < 1 log 1 δ |B , D3 ) + P ( n1 − 2 log 1 2δ − 1 log 1 δ > nmin ∧ l1 ≥ 1 log 1 δ |B , D3 ) − δ 2 ≥ P ( n1 − 2 log 1 2δ − 1 log 1 δ > nmin|l1 < 1 log 1 δ , B , D3 ) P ( l1 < 1 log 1 δ |B , D3 ) − δ 2 ≥ P ( n1 − 2 log 1 2δ − 1 log 1 δ > nmin|l1 < 1 log 1 δ , B , D3 ) − δ From B it holds that |m− m̆| < nρ22 and therefore m̆ < m+ n ρ2 2 , and for the case of l1 < 1 log 1 δ it holds that n2 < n1 − 1 log 1 2δ + 1 log 1 δ < n1 + 1 log 1 δ . Therefore m̆ ≤ m+ ( n1 + 1 log 1 δ ) ρ2 . As nmin = O ( max { m̆ 2 3 , n ρ2 γ1 1 } ) then for the case of l1 < 1 log 1 δ and B , it holds that nmin = O ( max { ( m + nρ21 ) 2 3 , n ρ2 γ1 1 } ) = O ( max { n 2ρ3 3 1 , n ρ2 γ1 1 } ) < o ( n1 ) , therefore ∃nlb2 such that ∀n1 > nlb2 : n1 − 2 log 1 2δ − 1 log 1 δ > nmin . 
Consequently , ∀n1 > nlb2 : P ( n1 − 2 log 1 2δ − 1 log 1 δ > nmin|l1 < 1 log 1 δ , B , D3 ) = 1 . Definition 4 . A randomized function f ( X , y ) : χn1 ×Rn2 → R , is ( , δ ) -differentially private with respect to X if ∀S ⊆ R , and ∀X , X̂ ∈ χn : ‖X − X̂‖ ≤ 1 , eq . 33 holds . P ( f ( X , y ) ∈ S ) ≤ exp ( ) P ( f ( X̂ , y ) ∈ S ) + δ ( 33 ) Claim C.6 . Calculating n̆1 , n2 is ( 2 , 0 ) differentially private . Proof . Since n1 can differ by up to 1 for neighbouring databases , calculating n̆1 is protected via the Laplace mechanism . Since for a given n̆1 the value |V | can change by up to 1 for two neighbouring databases then calculating n2 is ( , 0 ) by the Laplace mechanism . Consequently from sequential composition theorem the sequential composition is ( 2 , 0 ) differentially private . Claim C.7 . P ( n2 ≤ |V ||D , n̆1 ) = 1− δ . Proof . Mark l ∼ Lap ( 1 ) , P ( n2 ≤ |V ||D , n̆1 ) = P ( |V | − 1 log 1 2δ + l ≤ |V ||D , n̆1 ) = P ( l ≤ 1 log 1 2δ |D , n̆1 ) = 1− 1 2 exp ( − 1 log 1 2δ 1 ) = 1− δ Claim C.8 . Calculating m̆ is ( , 0 ) differentially private with respect to D for given n̆1 , n2 and n2 < |V | . Proof . Mark by D̂ a neighbouring database to D , and V̂ as V induced by this database . If V = V̂ then the claim follows trivially . In case the V ’ s differ , assume w.l.o.g that |V | ≥ |V̂ | , and that if |V | = |V̂ | then they differ in their last sample . Define q = ∑ ( xi , yi ) ∈V/ { x|V | , y|V | } xiyi , z =∑ ( xi , yi ) ∈V/ { x|V | , y|V | } x 2 i . 
| q + x|V |y|V | z + x2|V | − q + x̂|V |ŷ|V | z + x̂2|V | | = | qx̂2|V | + x|V |y|V |x̂ 2 |V | + x|V |y|V |z − qx 2 |V | − x̂|V |ŷ|V |x 2 |V | − x̂|V |ŷ|V |z ( z + x2|V | ) ( z + x̂ 2 |V | ) | ≤ qx2h + n̆ ρ1 1 x 2 hz + n̆ ρ1 1 x 4 h ( z + x2l ) z ≤ n̆ρ11 2zx2h + x 4 h ( z + x2l ) z = n̆ρ11 ( 2x2h z + xl + x4h ( z + x2l ) z ) ≤ n̆ρ11 ( 2x2h |V |x2l + x4h |V | ( |V | − 1 ) x4l ) ≤ n̆ρ11 ( 2x2h n2x2l + x4h n2 ( n2 − 1 ) x4l ) = n̆ρ11 2 ( n2 − 1 ) x2hx2l + x4h n2 ( n2 − 1 ) x4l therefore by the Laplace mechanism calculating m̆ is ( , 0 ) differentially private . Claim C.9 . Steps 6-13 are ( 3 , δ ) differentially private . Proof . Mark D̂ as a neighbouring database , P ( m̆ ∈ S|D ) =∫ r1 , r2∈R > 0×R > 0 P ( m̆ ∈ S|D , n̆1 = r1 , n2 = r2 ) p ( n̆1 = r1 , n2 = r2|D ) dr1dr2 =∫ r1 , r2∈R > 0× [ 1 , |V | ] P ( m̆ ∈ S|D , n̆1 = r1 , n2 = r2 ) p ( n̆1 = r1 , n2 = r2|D ) dr1dr2+∫ r1 , r2∈R > 0× ( |V | , ∞ ] P ( m̆ ∈ S|D , n̆1 = r1 , n2 = r2 ) p ( n̆1 = r1 , n2 = r2|D ) dr1dr2 ≤∗∫ r1 , r2∈R > 0× [ 1 , |V | ] P ( m̆ ∈ S|D , n̆1 = r1 , n2 = r2 ) p ( n̆1 = r1 , n2 = r2|D ) dr1dr2 + δ ≤∗∗∫ r1 , r2∈R > 0× [ 1 , |V | ] e2 P ( m̆ ∈ S|D̂ , n̆1 = r1 , n2 = r2 ) p ( n̆1 = r1 , n2 = r2|D̂ ) dr1dr2 + δ ≤∫ r1 , r2∈R > 0×R > 0 e2 P ( m̆ ∈ S|D̂ , n̆1 = r1 , n2 = r2 ) p ( n̆1 = r1 , n2 = r2|D̂ ) dr1dr2 + δ = e2 P ( m̆ ∈ S|D̂ ) + δ where inequality * follows claim C.7 and inequality * * follows claims C.8 and C.6 . Claim C.10 . Steps 14-19 are ( , δ ) differentially private with respect to D for |W | < nmin and given n2 , m̆ . Proof . Mark l ∼ Lap ( 1 ) , and D̂ as a neighbouring database . Eq . 34 proves the claim . P ( S|D , |W | < nmin , m̆ , n2 ) = P ( S ∩ { null } |D , |W | < nmin , m̆ , n2 ) + P ( S ∩ { null } c|D , |W | < nmin , m̆ , n2 ) ≤ e P ( S ∩ { null } |D̂ , |W | < nmin , m̆ , n2 ) + δ ≤ e P ( S|D̂ , |W | < d , m̆ , n2 ) + δ ( 34 ) where first inequality is true from eq . 35 and the Laplace mechanism for nW . 
P ( null|D , |W | < nmin , m̆ , n2 ) = P ( nW < nmin + 1 log ( 1 2δ ) |D , |W | < nmin , m̆ , n2 ) ≥ P ( l < 1 log ( 1 2δ ) ) ≥ 1− δ ( 35 ) Claim C.11 . Step 19 is ( , δ ) differentially private with respect to D for |W | ≥ nmin and given n2 , m̆ . Proof . For a given n2 , m̆ and a neighbouring database , the groupW can change by up to one sample . Mark n = |W | and c = m̆ . From eq . 36 , it follows that W ∈ D , as defined in eq . 5. n ≥ n ρ2 γ1 2 ⇒ n 1 2 > nγ1 ≥ nρ22 ( 36 ) As W ∈ D , n ≥ nb1 , and n ≥ nb2 , the problem of sampling from p ( θ|W ) for |W | ≥ nmin holds the constraints of claim D.29 . Therefore one sample from p ( θ|W ) is ( , δ ) differentially private . Claim C.12 . Steps 14-18 are ( , 0 ) differentially private with respect to D for |W | > nmin and given m̆ , n2 . Proof . Only data released is nW , and since the sensitivity of |W | given m̆ , n2 is 1 , then the Laplace mechanism ensures ( , 0 ) differential privacy . Corollary C.2 . Steps 14-19 are ( 2 , δ ) differentially private with respect to D for |W | > nmin and given m̆ , n2 . Corollary C.3 . Steps 14-19 are ( 2 , δ ) differentially private with respect to D given m̆ , n2 . D AUXILIARY CLAIMS This subsection contains simple claims used to simplify the reading of the proofs . Claims described in this subsection uses the marking defined in eq . 17 . Claim D.1 . ρ 11−λ = ncx2hβ α+nx2hβ . Proof Claim D.1 . ρ 1 1− λ = η 2 ncx2hβ 1 1− ( 1− η2 ( α+ n ( xh 2 ) 2β ) ) = ncx2hβ 1 α+ nx2hβ = ncx2hβ α+ nx2hβ Claim D.2 . ρ 1−λ n−1 1−λ + ρλ n−1 = ncx2hβ α+nx2hβ ( 1− λn ) . Proof Claim D.2 . ρ 1− λn−1 1− λ + ρλn−1 = ρ ( 1− λn−1 + λn−1 − λn 1− λ ) = ρ ( 1− λn 1− λ ) = ncx2hβ α+ nx2hβ ( 1− λn ) where the last equality holds from Claim D.1 Claim D.3 . ρ ( 1−λ n−1 1−λ ) + ρ̂λ n−1 = ncx2hβ α+nx2hβ ( 1− λn ( 34λ −1 + 14 ) ) . Proof Claim D.3 . 
ρ ( 1− λn−1 1− λ ) + ρ̂λn−1 = ρ ( 1− λn−1 1− λ ) + ρ 1 4 λn−1 = ρ ( 1− 34λ n−1 − 14λ n 1− λ ) = ρ ( 1− λn ( 34λ −1 + 14 ) 1− λ ) = ncx2hβ α+ nx2hβ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) where the last equality holds from Claim D.1 . Claim D.4 . 14λ+ 3 4 − λ̂ = 3 4 η 2α . Proof Claim D.4 . 1 4 λ+ 3 4 − λ̂ = 1 4 ( 1− η 2 ( α+ nx2β ) ) + 3 4 − ( 1− η 2 ( α+ 1 4 nx2β ) ) = η 2 [ α+ 1 4 nx2β − 1 4 ( α+ nx2β ) ] = 3 4 η 2 α Claim D.5 . ( 1− λkn ) ( 1− λ̂λn−1 ) − ( 1− ( λ̂λ ( n−1 ) ) ) k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = λn−1 3 4 η 2 α ( 1− λ̂kλk ( n−1 ) ) + λk ( n−1 ) ( λ̂k − λk ) ( 1− λn−1λ̂ ) . Proof Claim D.5 . ( 1− λkn ) ( 1− λ̂λn−1 ) − ( 1− ( λ̂λ ( n−1 ) ) ) k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λkn ( λ̂λn−1 − 1 ) + ( λ̂λn−1 ) k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λk ( n−1 ) ( λk ( λ̂λn−1 − 1 ) + λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λk ( n−1 ) ( λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) − λk ( 1− λ̂λn−1 ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λk ( n−1 ) ( λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) − λk ( 1− λn−1λ̂ ) ) =∗ λn−1 η 2 3 4 α+ λk ( n−1 ) ( λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) − λk ( 1− λn−1λ̂ ) ) =∗ λn−1 η 2 3 4 α+ λk ( n−1 ) ( λ̂k ( 1− λn−1 ( λ̂+ 3 4 η 2 α ) − λk ( 1− λn−1λ̂ ) ) = λn−1 η 2 3 4 α− λ̂kλn−1 3 4 η 2 α+ λk ( n−1 ) ( λ̂k ( 1− λn−1λ̂ ) − λk ( 1− λn−1λ̂ ) ) = λn−1 3 4 η 2 α ( 1− λ̂kλk ( n−1 ) ) + λk ( n−1 ) ( λ̂k − λk ) ( 1− λn−1λ̂ ) where equality * holds from claim D.4 Claim D.6 . λ ∑k−1 j=0 λ ( n−1 ) jλj [ λn−1ρ+ ρ ∑n−2 i=0 λ i ] = λ ( 1− λkn ) ncx 2 hβ α+nx2hβ . Proof Claim D.6 . λ k−1∑ j=0 λ ( n−1 ) jλj [ λn−1ρ+ ρ n−2∑ i=0 λi ] = ρλ kn−1∑ i=0 λi = ρλ 1− λkn 1− λ =∗ λ ncx2hβ α+ nx2hβ ( 1− λkn ) Where equality * follows from claim D.1 . Claim D.7 . λ̂ ∑k−1 j=0 λ ( n−1 ) j λ̂j [ λn−1ρ̂+ ρ ∑n−2 i=0 λ i ] = λ̂ 1− ( λ n−1λ̂ ) k 1−λn−1λ̂ ncx2hβ α+nx2hβ ( 1−λn ( 34λ −1 + 14 ) ) . Proof Claim D.7 . 
λ̂ k−1∑ j=0 λ ( n−1 ) j λ̂j [ λn−1ρ̂+ ρ n−2∑ i=0 λi ] = λ̂ 1− ( λn−1λ̂ ) k 1− λn−1λ̂ [ λn−1ρ̂+ ρ 1− λn−1 1− λ ] =∗ λ̂ 1− ( λn−1λ̂ ) k 1− λn−1λ̂ ncx2hβ α+ nx2hβ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) Where equality * follows from claims D.1 , D.3 . Claim D.8 . λλk − λkλnλ̂− λ̂λ̂k + λ̂λ̂kλn ( 34λ −1 + 14 ) = ( 1− λ̂λ n−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 34 η 2α ) . Proof Claim D.8 . λλk − λkλnλ̂− λ̂λ̂k + λ̂λ̂kλn ( 3 4 λ−1 + 1 4 ) = λk+1 ( 1− λ̂λn−1 ) − λ̂k+1 ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) =∗ λk+1 ( 1− λ̂λn−1 ) − λ̂k+1 ( 1− λn−1 ( λ̂+ 3 4 η 2 α ) ) = ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) where equality * holds from claim D.4 . Claim D.9 . λ ( 1− λkn ) ( 1− λn−1λ̂ ) − λ̂ ( 1− ( λn−1λ̂ ) k ) ( 1− λn ( 34λ −1 + 14 ) ) = ( λ− λ̂ ) ( 1− λ̂λn−1 ) + λn−1λ̂ [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] + λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λ̂k+1 − λk+1 ) ] . Proof Claim D.9 . λ ( 1− λkn ) ( 1− λn−1λ̂ ) − λ̂ ( 1− ( λn−1λ̂ ) k ) ( 1− λn ( 3 4 λ−1 + 1 4 ) ) = λ− λ̂− λnλ̂ ( 1− ( 3 4 λ−1 + 1 4 ) ) − λk ( n−1 ) [ λλk − λkλnλ̂− λ̂λ̂k + λ̂λ̂kλn ( 3 4 λ−1 + 1 4 ) ] =∗ λ− λ̂− λnλ̂ ( 1− ( 3 4 λ−1 + 1 4 ) ) − λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) ] = λ− λ̂− λn−1λ̂ ( λ− ( 3 4 + 1 4 λ ) ) − λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) ] =∗∗ λ− λ̂− λn−1λ̂ ( λ− ( λ̂+ 3 4 η 2 α ) ) − λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) ] = ( λ− λ̂ ) ( 1− λ̂λn−1 ) + λn−1λ̂ [ 3 4 η 2 α ( 1− λ̂kλk ( n−1 ) ) ] + λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λ̂k+1 − λk+1 ) ] Where equality * follows from claim D.8 and equality * * follows from claim D.4 . Claim D.10 . λ ( 1− λkn ) − λ̂1− ( λ n−1λ̂ ) k 1− λn−1λ̂ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) = ( λ− λ̂ ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + λk ( n−1 ) ( λ̂k+1 − λk+1 ) . Proof Claim D.10 . 
λ ( 1− λkn ) − λ̂1− ( λ n−1λ̂ ) k 1− λn−1λ̂ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) = [ M5.d ] ( λ− λ̂ ) ( 1− λ̂λn−1 ) + λn−1λ̂ [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] + λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λ̂k+1 − λk+1 ) ] ( 1− λ̂λn−1 ) = ( λ− λ̂ ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + λk ( n−1 ) ( λ̂k+1 − λk+1 ) Claim D.11 . ncx 2 hβ α+nx2hβ ( λ− λ̂+ λ n−1 [ 34 η 2α ( 1−λ̂ kλk ( n−1 ) ) ] 1−λ̂λn−1 ) + ( ρ− ρ̂ ) > 0 . Proof Claim D.11 . ncx2hβ α+ nx2hβ ( λ− λ̂+ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + ( ρ− ρ̂ ) = ncx2hβ α+ nx2hβ ( λ− λ̂+ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + η 2 ncx2hβ ( 1− 1 4 ) = ncx2hβ α+ nx2hβ ( 1− η 2 ( α+ nx2β ) − ( 1− η 2 ( α+ 1 4 nx2β ) ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + 3 4 η 2 ncx2hβ = ncx2hβ α+ nx2hβ ( −3 4 η 2 ( nx2β ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + 3 4 η 2 ncx2hβ = ncx2hβ α+ nx2hβ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + ncx2hβ [ 3 4 η 2 − 3 4 η 2 nx2β α+ nx2β ] = ncx2hβ α+ nx2hβ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + ncx2hβ 3 4 η 2 [ 1− nx 2β α+ nx2β ] > 0 where the last inequality holds because λ , λ̂ < 1 and α > 0 Claim D.12 . 1α > λ −2η 1−λ −2 ( k+1 ) n 1−λ−2 is true for k ≤ 1 2n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 . Proof Claim D.12 . 1 α ≥ λ−2η 1− λ −2k̇n 1− λ−2 ⇐⇒ λ2 1 α 1 η ( 1− λ−2 ) ≤ 1− λ−2k̇n ⇐⇒ λ2 1 α 1 η ( λ−2 − 1 ) ≥ λ−2k̇n − 1 ⇐⇒ 1 + λ2 1 α 1 η ( λ−2 − 1 ) ≥ λ−2k̇n ⇐⇒ − k̇ ≥ 1 2n logλ ( 1 + 1 αη ( 1− λ2 ) ) ⇐⇒ k̇ ≤ 1 2n logλ ( 1 1 + 1αη ( 1− λ2 ) ) Claim D.13 . 1 α ( λ̂λ ( n−1 ) ) 2k̇ > η ∑n−1 i=0 λ 2i ∑k̇−1 j=0 ( λ̂ 2λ2 ( n−1 ) ) j is true for k̇ ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) . Proof Claim D.13 . First note that the inequality can also be written as 1 α > η ∑n−1 i=0 λ 2i ∑k−1 j=0 ( λ̂λ ( n−1 ) ) 2 ( j−k ) . Secondly , the right hand term of the inequality could be upper bound as in eq . 37 . 
Therefore for the claim ’ s inequality to holds it is enough that 1α ≥ ηλ −2 1−λ−2nk 1−λ−2 , which proved by claim D.12 to be true for k̇ ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) η n−1∑ i=0 λ2i k−1∑ j=0 ( λ̂λ ( n−1 ) ) 2 ( j−k ) = η n−1∑ i=0 λ2i k−1∑ j=0 1 ( λ̂λ ( n−1 ) ) 2 ( k−j ) < k > j η n−1∑ i=0 λ2i k−1∑ j=0 1 ( λλ ( n−1 ) ) 2 ( k−j ) = η n−1∑ i=0 λ2i k−1∑ j=0 1 λ2n ( k−j ) = η n−1∑ i=0 k−1∑ j=0 1 λ2 [ nk−nj−i ] =r=nj+i η nk−1∑ r=0 1 λ2 [ nk−r ] =r′=nk−r,1 < r′ < nk η nk∑ r′=1 1 λ2 [ r′ ] = η nk∑ i=1 λ−2i = η λ−2 − λ−2 ( nk+1 ) 1− λ−2 = ηλ−2 1− λ−2nk 1− λ−2 ( 37 ) Claim D.14 . 1α ( λ̂λ n−1 ) 2k̇ ≥ η ( λ̂λn−1 ) 2k̇ ∑n−1 i=0 λ 2i is true for k̇ ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) . Proof Claim D.14 . eq . 38 holds because λ , λ̂ < 1 . By multiplying both sides with ∑n−1 i=0 λ 2i get eq . 39 . Then noticing that the right term equals to the right term of claim D.13 , and hence smaller than the left term of the claim , the claim is proved . ( λ̂λn−1 ) 2k < 1 < k−1∑ i=0 ( λ̂λn−1 ) 2j ( 38 ) η ( λ̂λn−1 ) 2k̇ n−1∑ i=0 λ2i < η k̇−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ( 39 ) Claim D.15 . The inequality ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] > η n−r∑ i=0 λ2i holds for x2hβ > 3 , n > 1 2αx2hβ − 1 x2hβ . Proof Claim D.15 . Left hand side can be lower bounded according to eq . 40 , while right hand side can be upper bounded according to eq . 41 . Therefore it ’ s enough to show that λ2n [ 1αλ 2kn + η 1−λ 2kn 1−λ2 ] > η 1−λ2n 1−λ2 , which according to eq . 42 is equivalent to showing that ( 2nx2hβ−1 ) 1αλ 2 ( k+1 ) n+2 ( 2λ2n−1 ) > 0 . Since n > 1 2αx2hβ − 1 x2hβ claim D.19 applies and therefore λ2n ≥ e − 2 x2 h β . Consequently it ’ s enough to show that ( 2nx2hβ−1 ) 1αλ 2 ( k+1 ) n+2 ( 2e − 2 x2 h β −1 ) > 0 , which is true for x2hβ > 3 by claim D.16 . 
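Claim D.12's threshold on k can be sanity-checked numerically. The sketch again assumes eq. 17's definition λ = 1 − (η/2)(α + n·x_h²·β); the parameter values are illustrative, chosen so that λ ∈ (0, 1) and the threshold is positive.

```python
# Numeric check of Claim D.12: for every integer k up to the stated
# threshold, 1/alpha dominates lam^-2 * eta * (1 - lam^(-2(k+1)n)) / (1 - lam^-2).
import math

alpha, beta, eta, n, xh = 1.0, 3.0, 0.001, 20, 1.0
lam = 1 - (eta / 2) * (alpha + n * xh**2 * beta)  # lambda, eq. 17 (assumed)

# Threshold from the claim: (1/2n) * log_lam(1 / (1 + (1/(alpha*eta))*(1-lam^2))) - 1.
k_dot = (1 / (2 * n)) * math.log(1 / (1 + (1 / (alpha * eta)) * (1 - lam**2)), lam) - 1

for k in range(int(k_dot) + 1):
    rhs = lam**-2 * eta * (1 - lam**(-2 * (k + 1) * n)) / (1 - lam**-2)
    assert 1 / alpha > rhs
```

At the real-valued boundary k = k̇ the two sides coincide, which is exactly how the threshold is derived in the proof.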
( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] > ( λ̂λn−1 ) 2 [ 1 α ( λ̂λn−1 ) 2k + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] > λ2n [ 1 α λ2kn + η k−1∑ j=0 λ2jn n−1∑ i=0 λ2i ] = λ2n [ 1 α λ2kn + η 1− λ2kn 1− λ2 ] ( 40 ) First inequality holds because λ < 1 and r > 1 , and second inequality holds because λ < λ̂ . η n−r∑ i=0 λ2i < η n−1∑ i=0 λ2i = η 1− λ2n 1− λ2 ( 41 ) Inequality holds because λ < λ̂ and r > 1. λ2n [ 1 α λ2kn + η 1− λ2kn 1− λ2 ] > η 1− λ2n 1− λ2 λ2n ( 1− λ2 ) 1 α λ2kn + ηλ2n ( 1− λ2kn ) > η ( 1− λ2n ) ( 1− λ2 ) 1 α λ2 ( k+1 ) n + η ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( α+ nx2hβ ) 2 ( 1− λ2 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( α+ nx2hβ ) 2 ( 1− ( 1− 1 α+ nx2hβ ) 2 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( 2 ( α+ nx2hβ ) − 1 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 2λ2 ( k+1 ) n + ( 2nx2β − 1 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( 2nx2hβ − 1 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − 1 ) > 0 ( 42 ) Claim D.16 . For x2β > 3 the inequality ( 2e− 2 x2β − 1 ) > 0 holds . Proof Claim D.16 . It ’ s easy to see that the inequality holds only if x2β ≥ −2 ln 12 . Since −2 ln 12 < 3 claim is proved . Claim D.17 . For k̇ as defined in lemma A.4 , and the conditions of claim D.19 1 α ( e 2 x2 h β + α ( e 2 x2 h β − 1 ) ( α+ nx2β ) + 18 ) > λ−2η 1− λ−2 ( dk̇e+1 ) n 1− λ−2 . Proof Claim D.17 . η 1− λ−2 ( dk̇e+1 ) n λ2 − 1 ≤ ηλ −2 ( k̇+2 ) n − 1 1− λ2 = η λ −2 ( 12n logλ ( 1 1+ 1 αη ( 1−λ2 ) ) −1+2 ) n − 1 1− λ2 = η λ − logλ ( 11+ 1 αη ( 1−λ2 ) ) λ−2n − 1 1− λ2 = η [ 1 + 1αη ( 1− λ 2 ) ] λ−2n − 1 1− λ2 = η ( 1− λ2 ) λ−2n 1αη 1− λ2 + η λ−2n − 1 1− λ2 = 1 α λ−2n + ( λ−2n − 1 ) ( α+ nx2β ) + 18 ≤ e 2 x2β 1 α + 1 α α ( e 2 x2β − 1 ) ( α+ nx2β ) + 18 = 1 α [ e 2 x2β + α ( e 2 x2β − 1 ) ( α+ nx2β ) + 18 ] where the fourth equality holds from eq . 43 and the second inequality holds from D.19 . 
η λ2 − 1 = η 1 ( 1− η2 ( α+ nx2β ) ) 2 − 1 = η 1 η ( α+ nx2β ) + ( η2 ( α+ nx 2β ) ) 2 = 1 ( α+ nx2β ) + η4 ( α+ nx 2β ) 2 = 1 ( α+ nx2β ) + 18 ( 43 ) Claim D.18 . ∀k > 0 : 1− ( λ λ̂ ) k ≥ 3 4nx 2β ( α+ nx2β ) 2 − ( α+ 14nx2β ) . Proof Claim D.18 . 1− ( λ λ̂ ) k ≥ 1− λ λ̂ = 1− 1− 1α+nx2β 1− α+ 1 4nx 2β ( α+nx2β ) 2 = 1− ( α+ nx 2β ) 2 − ( α+ nx2β ) ( α+ nx2β ) 2 − ( α+ 14nx2β ) = 1− α 2 + 2nx2αβ + ( nx2β ) 2 − α− nx2β α2 + 2nx2αβ + ( nx2β ) 2 − α− 14nx2β = 3 4nx 2β ( α+ nx2β ) 2 − ( α+ 14nx2β ) Where first inequality holds because λ < λ̂ . Claim D.19 . For the conditions of claim D.21 , ( 1− 1 α+ nx2β ) 2n ≥ e− 2 x2β . Proof Claim D.19 . The proof is easily deduced from claims D.20 and D.21 Claim D.20 . lim n→∞ ( 1− 1 α+ nx2β ) 2n = e − 2 x2β . Proof Lemma D.20 . From eq . 44 , it is enough to find limn→∞ ln ( 1− 1 α+nx2β ) 1 2n . Since limn→∞ ln ( 1− 1 α+nx2β ) 1 2n = 00 , and both the numerator and denominator are differentiable around∞ , the use of L ’ Hôpital ’ s rule is possible as shown in eq . 45 . This proves the claim . ( 1− 1 α+ nx2β ) 2n = e ln [ ( 1− 1 α+nx2β ) 2n ] = e 2n ln ( 1− 1 α+nx2β ) = e ln ( 1− 1 α+nx2β ) 1 2n ( 44 ) lim n→∞ d dn ln ( 1− 1 α+nx2β ) d dn 1 2n = lim x2β ( α+nx2β−1 ) ( α+nx2β ) − 12n2 = − lim 2n 2x2β ( nx2β ) 2 = − 2 x2β ( 45 ) Claim D.21 . ∀n > 1 2αx2β − 1 x2β : d dn ( 1− 1 α+ nx2β ) 2n < 0 . Proof claim D.21 . First , a simplified term for the derivative is found at eq . 46. d dn ( 1− 1 α+ nx2β ) 2n = d dn e 2n ln ( 1− 1 α+nx2β ) = ( 1− 1 α+ nx2β ) 2n [ 2 ln ( 1− 1 α+ nx2β ) + 2n 1 1− 1α+nx2β · x 2β ( α+ nx2β ) 2 ] = ( 1− 1 α+ nx2β ) 2n [ 2 ln ( 1− 1 α+ nx2β ) + 2nx2β ( α+ nx2β − 1 ) ( α+ nx2β ) ] ( 46 ) A lower bound for the ln term can be found using Taylor ’ s theorem as shown in eq .47 , where 0 ≤ ξ ≤ 1α+nx2β . 
ln ( 1− 1 α+ nx2β ) = − 1 α+ nx2β − 1 2 1 ( 1− ξ ) 2 ( 1 α+ nx2β ) 2 ≤ − 1 α+ nx2β − 1 2 ( 1 α+ nx2β ) 2 ( 47 ) From equations 46 and 47 it is enough to find the terms for which nx 2β ( α+nx2β−1 ) ( α+nx2β ) < 1 α+nx2β + 1 2 1 ( α+nx2β ) 2 holds . A simplified version of this inequality is found at ( 48 ) , and it can be easily seen that for α > 12 ( 1 nx2β + 1 ) ⇐⇒ n > 1 2αx2β − 1 x2β this inequality holds . nx2β ( α+ nx2β − 1 ) ( α+ nx2β ) < 1 α+ nx2β + 1 2 1 ( α+ nx2β ) 2 ⇐⇒ 0 < 2α2 + 2nx2βα− 2α− 2nx2β + α+ nx2β − 1 ⇐⇒ 0 < nx2β ( 2α− 1 ) + α ( 2α− 1 ) − 1 ( 48 ) Claim D.22 . For n > α x2hβ ( e 2 x2 h β − 2 ) + 1 2x2hβ and the conditions of claim D.19 , k̇ , as defined in lemma A.4 , is positive . Proof Claim D.22 . The claim ’ s inequality is simplified at eq . 49 k̇ > 0 1 2n logλ ( 1 1 + 1αη ( 1− λ2 ) ) − 1 > 0 logλ ( 1 1 + 1αη ( 1− λ2 ) ) > 2n ln ( 1 1+ 1αη ( 1−λ2 ) ) lnλ > 2n ln ( 1 1 + 1αη ( 1− λ2 ) ) < 2n lnλ ln ( 1 1 + 1αη ( 1− λ2 ) ) < lnλ2n 1 1 + 1αη ( 1− λ2 ) ) < λ2n λ−2n < 1 + 1 αη ( 1− λ2 ) λ−2n − 1 < 1 αη ( 1− λ2 ) ( 49 ) By claim D.19 λ−2n−1 < e 2 x2 h β −1 , therefore it is enough to find terms for e 2 x2 h β −1 < 1αη ( 1−λ 2 ) , which is done at eq . 50 , which proves the claim . e 2 x2 h β − 1 < 1 αη ( 1− λ2 ) αη ( e 2 x2 h β − 1 ) < ( 1− λ2 ) αη ( e 2 x2 h β − 1 ) < 1− ( 1− η 2 ( α+ nx2hβ ) ) 2 α ( e 2 x2 h β − 1 ) < ( α+ nx2hβ ) − η 4 ( α+ nx2β ) 2 α ( e 2 x2 h β − 1 ) < ( α+ nx2hβ ) − 1 2 α ( e 2 x2 h β − 2 ) + 1 2 < nx2hβ α x2hβ ( e 2 x2 h β − 2 ) + 1 2x2hβ < n ( 50 ) Claim D.23 . For k̇ as defined in lemma A.4 , and the conditions of lemma A.5 p ( θ̂ ( dk̇e+1 ) n > µ ( dk̇e+1 ) n|D̂ ) ≤ e −e − 2 x2β α 2v1 ( 3 32x2β ) 2 ( cn ) 2 . Proof claim D.23 . 
$$p\left(\hat\theta_{(\lceil\dot k\rceil+1)n} > \mu_{(\lceil\dot k\rceil+1)n}\,\middle|\,\hat D\right) \le \frac{1}{n}\sum_{r=1}^n \exp\left(-\frac{\left(\mu_{(\lceil\dot k\rceil+1)n}-\hat\mu^r_{(\lceil\dot k\rceil+1)n}\right)^2}{2\left(\hat\sigma^r_{(\lceil\dot k\rceil+1)n}\right)^2}\right) \le \frac{1}{n}\sum_{r=1}^n \exp\left(-e^{-\frac{2}{x^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32x^2\beta}\right)^2\left(\frac{c}{n}\right)^2\right) = \exp\left(-e^{-\frac{2}{x^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32x^2\beta}\right)^2\left(\frac{c}{n}\right)^2\right),$$
where the first inequality holds due to Lemma 4.4 and the second inequality holds due to Lemma A.5.

Claim D.24. For $n > 1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}$, the inequality $\frac{1}{10}(\alpha+(z+x_n^2)\beta) > \nu(\hat x_n^2-x_n^2)$ holds.

Proof of Claim D.24. Notice that $\frac{1}{10}(\alpha+(z+x_n^2)\beta) > \frac{1}{10}z\beta > \frac{1}{10}(n-1)x_l^2\beta$ and $\nu x_h^2 \ge \nu(\hat x_n^2-x_n^2)$. Therefore a sufficient condition is that $\frac{1}{10}(n-1)x_l^2\beta > \nu x_h^2$, which is equivalent to $n > 1+\frac{x_h^2}{x_l^2}\frac{10\nu}{\beta}$.

Claim D.25. For $(\sigma^2)^*_\nu$ as defined in eq. 14, $(\sigma^2)^*_\nu > 0$.

Proof of Claim D.25.
$$(\sigma^2)^*_\nu = \nu\hat\sigma^2+(1-\nu)\sigma^2 = \frac{\nu}{\alpha+(z+\hat x_n^2)\beta}+\frac{1-\nu}{\alpha+(z+x_n^2)\beta} = \frac{\nu(\alpha+(z+x_n^2)\beta)+(1-\nu)(\alpha+(z+\hat x_n^2)\beta)}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)} = \frac{\alpha+(z+\hat x_n^2)\beta+\nu\beta(x_n^2-\hat x_n^2)}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)} \tag{51}$$
Therefore, a sufficient condition is that $\alpha+(z+\hat x_n^2)\beta+\nu\beta(x_n^2-\hat x_n^2) > 0$. Since the condition of Lemma 4.2 dictates $n > 1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}$, Claim D.24 holds, which satisfies this condition.

Claim D.26. For the Bayesian linear regression problem on domain $D$, and $\sigma, \hat\sigma$ defined in eq. 14, $\ln\frac{\sigma}{\hat\sigma} \le \frac{x_h^2}{2(n-1)x_l^2}$.

Proof of Claim D.26. Consider $c_1 = \frac{x_h^2}{(n-1)x_l^2}$:
$$c_1 = \frac{x_h^2}{(n-1)x_l^2} > \frac{\hat x_n^2-x_n^2}{z+x_n^2} > \frac{\hat x_n^2\beta-x_n^2\beta}{\alpha+(z+x_n^2)\beta} = \frac{\alpha+(z+\hat x_n^2)\beta}{\alpha+(z+x_n^2)\beta}-1, \tag{52}$$
where eq. 52 holds trivially for $\hat x_n \le x_n$; therefore it is assumed that $\hat x_n > x_n$. From eq.
52, by Taylor's theorem, for some $0 \le \zeta \le c_1$ the following inequality holds:
$$e^{c_1} = 1+c_1+\frac{e^\zeta}{2}c_1^2 > 1+c_1 > \frac{\alpha+(z+\hat x_n^2)\beta}{\alpha+(z+x_n^2)\beta}.$$
Consequently, because the natural logarithm is monotonically increasing, the following also holds:
$$\frac{1}{2}c_1 > \frac{1}{2}\ln\frac{\alpha+(z+\hat x_n^2)\beta}{\alpha+(z+x_n^2)\beta} = \ln\frac{\sigma}{\hat\sigma}.$$
Therefore $\ln\frac{\sigma}{\hat\sigma} < \frac{1}{2}\frac{x_h^2}{(n-1)x_l^2}$.

Claim D.27. For the Bayesian linear regression problem on domain $D$, the conditions of Lemma 4.2, and $(\sigma^2)^*_\nu, \hat\sigma$ defined in eq. 14,
$$\frac{1}{2(\nu-1)}\ln\frac{\hat\sigma^2}{(\sigma^2)^*_\nu} \le \frac{1}{2(\nu-1)}\cdot\frac{\nu x_h^2}{(n-1)x_l^2-\nu x_h^2}.$$
Proof of Claim D.27. Consider $c_1 = \frac{\nu x_h^2}{(n-1)x_l^2-\nu x_h^2}$:
$$c_1 = \frac{\nu x_h^2}{(n-1)x_l^2-\nu x_h^2} \ge^* \frac{\nu\beta x_h^2}{\alpha+(n-1)x_l^2\beta-\nu\beta x_h^2} \ge^* \frac{\nu\beta(\hat x_n^2-x_n^2)}{\alpha+(z+x_n^2)\beta-\nu\beta(\hat x_n^2-x_n^2)} \ge \frac{(\nu-1)\beta(\hat x_n^2-x_n^2)}{\alpha+(z+\hat x_n^2)\beta-\nu\beta(\hat x_n^2-x_n^2)} = \frac{\hat\sigma^2}{(\sigma^2)^*_\nu}-1,$$
where the inequalities marked * hold under the assumption that $n > 1+\nu\frac{x_h^2}{x_l^2}$, and the last equality holds from eq. 51. Therefore, by using Taylor's theorem, for some $0 \le \zeta \le c_1$ the following inequality holds:
$$e^{c_1} = 1+c_1+\frac{e^\zeta}{2}c_1^2 > 1+c_1 \ge \frac{\hat\sigma^2}{(\sigma^2)^*_\nu}.$$
From this inequality, and because the natural logarithm is monotonically increasing, $\ln\frac{\hat\sigma^2}{(\sigma^2)^*_\nu} \le c_1$; therefore
$$\frac{1}{2(\nu-1)}\ln\frac{\hat\sigma^2}{(\sigma^2)^*_\nu} \le \frac{1}{2(\nu-1)}c_1 = \frac{1}{2(\nu-1)}\cdot\frac{\nu x_h^2}{(n-1)x_l^2-\nu x_h^2}.$$

Claim D.28. For the Bayesian linear regression problem on domain $D$, the definitions of eq. 14, and the conditions of Lemma 4.2, the value $\frac{\nu}{2}\frac{(\mu-\hat\mu)^2}{(\sigma^2)^*_\nu}$ is bounded by
$$2\nu\beta\frac{x_h^4}{\frac{9}{10}n^{1-2\gamma_1}x_l^2} + 2\nu\beta\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\cdot\frac{c+n^{\gamma_1}}{n^{2-\gamma_1}} + \frac{\nu}{2}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\cdot\frac{(c+n^{\gamma_1})^2}{n^3}.$$
Proof of Claim D.28.
First, $|\mu-\hat\mu|$ is bounded:
$$|\mu-\hat\mu| = \beta\left|\frac{q+x_ny_n}{\alpha+(z+x_n^2)\beta}-\frac{q+\hat x_n\hat y_n}{\alpha+(z+\hat x_n^2)\beta}\right| = \beta\left|\frac{(q+x_ny_n)(\alpha+(z+\hat x_n^2)\beta)-(q+\hat x_n\hat y_n)(\alpha+(z+x_n^2)\beta)}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)}\right|$$
$$= \beta\left|\frac{q\hat x_n^2\beta+x_ny_n\alpha+x_ny_nz\beta+x_ny_n\hat x_n^2\beta-qx_n^2\beta-\hat x_n\hat y_n\alpha-\hat x_n\hat y_nz\beta-\hat x_n\hat y_nx_n^2\beta}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)}\right|$$
$$= \beta\left|\frac{\hat x_n^2z\left(\frac{q}{z}-\frac{\hat y_n}{\hat x_n}\right)\beta-x_n^2z\left(\frac{q}{z}-\frac{y_n}{x_n}\right)\beta+\alpha(x_ny_n-\hat x_n\hat y_n)+x_n\hat x_n\beta(y_n\hat x_n-\hat y_nx_n)}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)}\right|$$
$$< \beta\,\frac{x_h^2z(2n^{\gamma_1})\beta+\alpha x_h^2(c+n^{\gamma_1})+x_h^4\beta(c+n^{\gamma_1})}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)} = \beta\,\frac{2x_h^2\beta zn^{\gamma_1}+(x_h^2\alpha+x_h^4\beta)(c+n^{\gamma_1})}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)}.$$
Therefore,
$$\frac{\nu}{2}\frac{(\mu-\hat\mu)^2}{(\sigma^2)^*_\nu} \le \frac{\nu}{2}\beta^2\left(\frac{2x_h^2\beta zn^{\gamma_1}+(x_h^2\alpha+x_h^4\beta)(c+n^{\gamma_1})}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)}\right)^2\cdot\left(\frac{\alpha+(z+\hat x_n^2)\beta+\nu\beta(x_n^2-\hat x_n^2)}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)}\right)^{-1}$$
$$= \frac{\nu}{2}\beta^2\,\frac{\left(2x_h^2\beta zn^{\gamma_1}+(x_h^2\alpha+x_h^4\beta)(c+n^{\gamma_1})\right)^2}{(\alpha+(z+x_n^2)\beta)(\alpha+(z+\hat x_n^2)\beta)\left(\alpha+(z+\hat x_n^2)\beta+\nu\beta(x_n^2-\hat x_n^2)\right)}$$
$$\le^* \frac{\nu}{2}\beta^2\,\frac{(2x_h^2\beta)^2z^2n^{2\gamma_1}+2(2x_h^2\beta)(x_h^2\alpha+x_h^4\beta)zn^{\gamma_1}(c+n^{\gamma_1})+(x_h^2\alpha+x_h^4\beta)^2(c+n^{\gamma_1})^2}{\frac{9}{10}(\alpha+(z+x_n^2)\beta)^2(\alpha+(z+\hat x_n^2)\beta)}$$
$$\le^{**} \frac{\nu}{2}\beta^2\,\frac{(2x_h^2\beta)^2n^{2\gamma_1}}{\frac{9}{10}nx_l^2\beta^3}+\frac{\nu}{2}\beta^2\,\frac{(4x_h^2\beta)(x_h^2\alpha+x_h^4\beta)n^{\gamma_1}(c+n^{\gamma_1})}{\frac{9}{10}(nx_l^2)^2\beta^3}+\frac{\nu}{2}\beta^2\,\frac{(x_h^2\alpha+x_h^4\beta)^2(c+n^{\gamma_1})^2}{\frac{9}{10}(nx_l^2\beta)^3}$$
$$= 2\nu\beta\frac{x_h^4}{\frac{9}{10}n^{1-2\gamma_1}x_l^2}+2\nu\beta\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\cdot\frac{c+n^{\gamma_1}}{n^{2-\gamma_1}}+\frac{\nu}{2}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\cdot\frac{(c+n^{\gamma_1})^2}{n^3}.$$
Inequality * is true because the conditions of Lemma 4.2 dictate that $n > 1+\frac{x_h^2}{x_l^2}\frac{10\nu}{\beta}$, and according to
Claim D.24 this promises that $\frac{1}{10}(\alpha+(z+x_n^2)\beta) > \nu(\hat x_n^2-x_n^2)$. Inequality ** follows from $n \gg 1 \Rightarrow (n-1)x_l \approx nx_l$.

Claim D.29. For the conditions and definitions of Lemma 4.3, one sample from the posterior is (ε, δ) differentially private for the following terms on $n$ and $\nu$:
$$\nu = 1+\frac{2\ln(\frac{1}{\delta})}{\varepsilon},$$
$$n \ge \max\left\{1+\frac{x_h^2}{x_l^2}\frac{8}{\varepsilon},\ 1+\nu\frac{x_h^2}{x_l^2}\left(1+\frac{8}{\varepsilon(\nu-1)}\right),\ \left(\frac{16\nu\beta x_h^4}{\varepsilon\frac{9}{10}x_l^2}\right)^{\frac{1}{1-2\gamma_1}},\ \left(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1})\right)^{\frac{1}{2-\gamma_1}},\ \left(\frac{4\nu}{\varepsilon}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c+n^{\gamma_1})\right)^{\frac{2}{3}}\right\}$$
Proof of Claim D.29. By Lemma 4.3, one sample from the posterior is $(\varepsilon_1+\frac{\ln(1/\delta)}{\nu-1},\delta)$ differentially private. For each of the six terms of $\varepsilon_1+\frac{\ln(1/\delta)}{\nu-1}$, a lower bound on $n$ and $\nu$ is found at equations 53, 54, 55, 56, 57, 58 such that the sum of the terms is upper bounded by ε. These bounds match the claim's guarantee over $n$ and $\nu$, therefore proving the claim.

For the term $\frac{\ln(1/\delta)}{\nu-1}$:
$$\frac{\ln(\frac{1}{\delta})}{\nu-1} = \frac{\varepsilon}{2} \iff \frac{2\ln(\frac{1}{\delta})}{\varepsilon}+1 = \nu \tag{53}$$
For the term $\frac{x_h^2}{2(n-1)x_l^2}$:
$$\frac{x_h^2}{2(n-1)x_l^2} \le \frac{\varepsilon}{16} \iff n \ge 1+\frac{x_h^2}{x_l^2}\frac{8}{\varepsilon} \tag{54}$$
For the term $\frac{1}{2(\nu-1)}\frac{\nu x_h^2}{(n-1)x_l^2-\nu x_h^2}$:
$$\frac{1}{2(\nu-1)}\frac{\nu x_h^2}{(n-1)x_l^2-\nu x_h^2} \le \frac{\varepsilon}{16} \iff \frac{1}{2(\nu-1)}\frac{16\nu x_h^2}{\varepsilon} \le (n-1)x_l^2-\nu x_h^2 \iff n \ge 1+\frac{1}{2(\nu-1)}\frac{16\nu x_h^2}{\varepsilon x_l^2}+\nu\frac{x_h^2}{x_l^2} = 1+\nu\frac{x_h^2}{x_l^2}\left(1+\frac{8}{\varepsilon(\nu-1)}\right) \tag{55}$$
For the term $2\nu\beta\frac{x_h^4}{\frac{9}{10}n^{1-2\gamma_1}x_l^2}$:
$$2\nu\beta\frac{x_h^4}{\frac{9}{10}n^{1-2\gamma_1}x_l^2} \le \frac{\varepsilon}{8} \iff \frac{16\nu\beta x_h^4}{\varepsilon\frac{9}{10}x_l^2} \le n^{1-2\gamma_1} \iff n \ge \left(\frac{16\nu\beta x_h^4}{\varepsilon\frac{9}{10}x_l^2}\right)^{\frac{1}{1-2\gamma_1}} \tag{56}$$
For the term $2\nu\beta\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\cdot\frac{c+n^{\gamma_1}}{n^{2-\gamma_1}}$:
$$2\nu\beta\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\cdot\frac{c+n^{\gamma_1}}{n^{2-\gamma_1}} \le \frac{\varepsilon}{8} \iff n^{2-\gamma_1} \ge \frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1}) \impliedby n \ge \left(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1})\right)^{\frac{1}{2-\gamma_1}} \tag{57}$$
For the term $\frac{\nu}{2}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\cdot\frac{(c+n^{\gamma_1})^2}{n^3}$:
$$\frac{\nu}{2}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\cdot\frac{(c+n^{\gamma_1})^2}{n^3} \le \frac{\varepsilon}{8} \iff n^3 \ge \frac{4\nu}{\varepsilon}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c+n^{\gamma_1})^2 \impliedby n \ge \left(\frac{4\nu}{\varepsilon}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c+n^{\gamma_1})\right)^{\frac{2}{3}} \tag{58}$$
Claim D.30. For $c = n^{\gamma_2}$, $\gamma_1 < \gamma_2 < \frac{3}{2}$, and the conditions and definitions of Lemma 4.3, one sample from the posterior is (ε, δ) differentially private for the following terms on $n$ and $\nu$:
$$\nu = \frac{2\ln(\frac{1}{\delta})}{\varepsilon}+1,$$
$$n \ge \max\left\{1+\frac{x_h^2}{x_l^2}\frac{8}{\varepsilon},\ 1+\nu\frac{x_h^2}{x_l^2}\left(1+\frac{8}{\varepsilon(\nu-1)}\right),\ \left(\frac{16\nu\beta x_h^4}{\varepsilon\frac{9}{10}x_l^2}\right)^{\frac{1}{1-2\gamma_1}},\ \left(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\left(1+\frac{1}{\big(1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}\big)^{\gamma_2-\gamma_1}}\right)\right)^{\frac{1}{2-\gamma_1-\gamma_2}},\ \left(\frac{4\nu}{\varepsilon}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\left(1+\frac{1}{\big(1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}\big)^{\gamma_2-\gamma_1}}\right)\right)^{\frac{2}{3-2\gamma_2}}\right\}$$
Proof of Claim D.30. Claim D.29 provides general lower bounds on $n$ for (ε, δ) differential privacy. When $c = n^{\gamma_2}$, $\gamma_2 > \gamma_1$, these bounds can be simplified. For the condition $n \ge \big(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1})\big)^{\frac{1}{2-\gamma_1}}$:
$$\left(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1})\right)^{\frac{1}{2-\gamma_1}} = \left(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\,n^{\gamma_2}\left(1+\frac{1}{n^{\gamma_2-\gamma_1}}\right)\right)^{\frac{1}{2-\gamma_1}} \le \left(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\,n^{\gamma_2}\left(1+\frac{1}{\big(1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}\big)^{\gamma_2-\gamma_1}}\right)\right)^{\frac{1}{2-\gamma_1}},$$
where the inequality holds since Lemma 4.3 dictates that $n \ge 1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}$. Consequently it is enough that
$$n > \left(\frac{16\nu\beta}{\varepsilon}\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\left(1+\frac{1}{\big(1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}\big)^{\gamma_2-\gamma_1}}\right)\right)^{\frac{1}{2-\gamma_1-\gamma_2}}.$$
Following the same considerations for the condition $n \ge \big(\frac{4\nu}{\varepsilon}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c+n^{\gamma_1})\big)^{\frac{2}{3}}$, it is enough that
$$n > \left(\frac{4\nu}{\varepsilon}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\left(1+\frac{1}{\big(1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}\big)^{\gamma_2-\gamma_1}}\right)\right)^{\frac{2}{3-2\gamma_2}}.$$

E WASSERSTEIN DISTANCE PROOF

Claim E.1. If $p, q$ are distributions with 2-Wasserstein distance $W_2(p,q) = \varepsilon^2$, then we have $p(B_r(x)) \le q(B_{r+\varepsilon}(x))+\varepsilon$.

Proof. This is a special case of the claim that $d_P^2 \le d_W$ from Gibbs & Su (2002). Picking an optimal coupling and using the Markov inequality, we get
$$P(d(x,y) > \varepsilon) \le \frac{1}{\varepsilon}E[d(x,y)] \le \varepsilon.$$
As $\{(\tilde x,\tilde y): \tilde x \in B_r(x)\} \subset \{(\tilde x,\tilde y): \tilde y \in B_{r+\varepsilon}(x)\} \cup \{(\tilde x,\tilde y): d(\tilde x,\tilde y) > \varepsilon\}$, we get $p(B_r(x)) \le q(B_{r+\varepsilon}(x))+\varepsilon$ (a special case of Strassen's theorem).

Claim E.2. Let $p, q$ be continuous distributions on $\mathbb{R}^d$ with Wasserstein distance $W_2(p,q) < \varepsilon^2$, and let $p_\lambda, q_\lambda$ be their convolutions with the uniform distribution on $B_\lambda(0)$. We assume both density functions are $L$-Lipschitz continuous. For $\lambda > \varepsilon$ we have
$$p_\lambda(x) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\varepsilon)}q_\lambda(x)+\frac{\varepsilon}{\mathrm{vol}_d(\lambda-\varepsilon)}+2\left(\frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\varepsilon)}-1\right)\lambda L. \tag{59}$$
Proof. We have $P(B_\lambda(x)) = P(B_{\lambda-\varepsilon}(x))+P(A(x;\lambda-\varepsilon,\lambda))$, where $A(x;r_1,r_2)$ is the annulus around $x$ between radii $r_1$ and $r_2$. From continuity there exists $z \in B_\lambda(x)$ such that $p(z) = \frac{P(B_\lambda(x))}{\mathrm{vol}_d(\lambda)}$, where $\mathrm{vol}_d(r)$ is the volume of a ball of radius $r$ in $\mathbb{R}^d$. From Lipschitz continuity we have
$$P(A(x;\lambda-\varepsilon,\lambda)) \le (\mathrm{vol}_d(\lambda)-\mathrm{vol}_d(\lambda-\varepsilon))(p(z)+2\lambda L) = \left(1-\frac{\mathrm{vol}_d(\lambda-\varepsilon)}{\mathrm{vol}_d(\lambda)}\right)P(B_\lambda(x))+\Delta,$$
where $\Delta = (\mathrm{vol}_d(\lambda)-\mathrm{vol}_d(\lambda-\varepsilon))\,2\lambda L$. From this, we get
$$P(B_{\lambda-\varepsilon}(x)) \ge \frac{\mathrm{vol}_d(\lambda-\varepsilon)}{\mathrm{vol}_d(\lambda)}P(B_\lambda(x))-\Delta. \tag{60}$$
Combining this with Claim E.1, we get
$$P(B_\lambda(x)) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\varepsilon)}\left(P(B_{\lambda-\varepsilon}(x))+\Delta\right) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\varepsilon)}\left(Q(B_\lambda(x))+\Delta+\varepsilon\right). \tag{61}$$
We divide by $\mathrm{vol}_d(\lambda)$ to get the densities $p_\lambda, q_\lambda$:
$$p_\lambda(x) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\varepsilon)}q_\lambda(x)+\frac{\Delta+\varepsilon}{\mathrm{vol}_d(\lambda-\varepsilon)}. \tag{62}$$
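Claim E.1 can be sanity-checked numerically in one dimension, where for two Gaussians with equal variance the 2-Wasserstein distance is exactly the distance between their means. The sketch below (all parameter values are illustrative assumptions, not taken from the paper) sets $W_2 = \varepsilon^2$ by shifting the mean of a unit Gaussian, and verifies $p(B_r(x)) \le q(B_{r+\varepsilon}(x))+\varepsilon$ on a grid of centers and radii.

```python
import math

def ball_mass(mean, r, x):
    # mass that N(mean, 1) puts on the interval [x - r, x + r]
    phi = lambda t: 0.5 * (1.0 + math.erf((t - mean) / math.sqrt(2.0)))
    return phi(x + r) - phi(x - r)

eps = 0.5
w2 = eps ** 2  # W2 between N(0, 1) and N(w2, 1) is exactly w2 = eps^2
worst = float("-inf")
for xi in range(-40, 41):
    for ri in range(1, 40):
        x, r = xi / 10.0, ri / 10.0
        # Claim E.1: p(B_r(x)) <= q(B_{r+eps}(x)) + eps, i.e. the gap below is <= 0
        gap = ball_mass(0.0, r, x) - (ball_mass(w2, r + eps, x) + eps)
        worst = max(worst, gap)
```

The check passes with room to spare here, since growing the ball radius by ε more than compensates for the ε² shift of the mean.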
Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning?

1 INTRODUCTION

Machine learning and, specifically, deep learning models show state-of-the-art results in various fields such as computer vision, natural language processing, and signal processing (e.g., Carion et al. (2020); Devlin et al. (2019); Balevi & Andrews (2021)). Training these models requires data, which in some problems, e.g., healthcare and finance, can include private information that should not be made public. Unfortunately, it has been shown (Fredrikson et al. (2015); Carlini et al. (2021)) that private information from the training data can sometimes be extracted from the trained model. One common approach to handle this issue is Differential Privacy (DP). Differential Privacy is a framework that ensures that the distribution of the training output would be essentially the same even if we switch one of the training participants, thus ensuring privacy. As privacy is usually obtained by adding random noise, it is natural to investigate whether Bayesian inference, which uses a distribution over models, can give private predictions. Previous works have shown that sampling from the posterior is differentially private under certain mild conditions (Wang et al. (2015); Foulds et al. (2016); Dimitrakakis et al. (2017)). The main disadvantage of this method is that sampling from the posterior is generally hard. The posterior usually does not have a closed-form solution, and iterative methods such as Markov Chain Monte Carlo (MCMC) are needed. While theoretical bounds on the convergence of MCMC methods for non-convex problems exist (Ma et al., 2019), they usually require an infeasible number of steps to guarantee convergence in practice. Stochastic Gradient Langevin Dynamics (SGLD) is a popular MCMC algorithm used to approximately sample from an unnormalized distribution (Welling & Teh, 2011).
The privacy guarantees of this specific sampling algorithm are interesting because it not only returns a sample from the posterior, which can be private, but the process itself, stochastic gradient descent with Gaussian noise, mirrors the common Gaussian mechanism in DP. Previous work (Wang et al. (2015)) gives two disjoint privacy analyses. The first is for approximate sampling from the Bayesian posterior, which is only relevant when the SGLD has almost converged. The second uses the standard DP analysis utilizing the Gaussian mechanism and the Advanced Composition theorem (Dwork & Roth, 2014), which only applies for a limited number of steps and is not connected to Bayesian sampling. From these two lines of research, differential privacy bounds for SGLD are provided for its initial steps or when it is close to convergence. Neither of these cases is suitable for deep learning and many other problems, as one would limit the model's accuracy, and the other is unattainable in a reasonable time. Consequently, the privacy properties of SGLD in the interim region, between these two private sections, are of high importance. One could speculate that since the initial steps of the algorithm are private, and it converges to the posterior, which is also private, then sampling in the interim region would be private as well. If so, SGLD could be considered a solution for training differentially private deep neural networks. Unfortunately, as we will show, this is not the case. Our Contributions: This work provides a counter-example, based on a Bayesian linear regression problem, showing that approximate sampling using SGLD might result in an unbounded loss of privacy in the interim regime. Moreover, this loss of privacy can occur even under strong conditions (when sampling from the posterior is as private as desired), conditions even stronger than what we can assume for most Deep Neural Network problems.
This implies that special care should be taken when using SGLD for private predictions, especially for problems where it is infeasible to guarantee convergence.

2 RELATED WORK

Several previous works investigate the connection between Bayesian inference and differential privacy (Wang et al. (2015); Foulds et al. (2016); Zhang et al. (2016); Dimitrakakis et al. (2017); Geumlek et al. (2017); Ganesh & Talwar (2020)). None of these papers provides guarantees on the differential privacy of SGLD in the interim regime. The closest work to ours is Wang et al. (2015), which specifically investigates stochastic MCMC algorithms such as SGLD. As mentioned, its analysis only covers the initial phase and the phase in which approximate convergence is achieved. As many of the privacy bounds require sampling from the posterior, if SGLD is to be used, it requires non-asymptotic convergence bounds. Dalalyan (2014) provided non-asymptotic bounds on the error of approximating a target smooth and log-concave distribution by Langevin Monte Carlo. Cheng & Bartlett (2018) studied non-asymptotic bounds on the error of approximating a target density p* where log p* is smooth and strongly convex. For the non-convex setting, Raginsky et al. (2017) showed non-asymptotic bounds on the 2-Wasserstein distance between SGLD and the invariant distribution solving an Itô stochastic differential equation. However, to provide (ε, δ) differential privacy, an algorithm should produce distributions that are O(δ) close for neighbouring databases. Total Variation (for details about Total Variation see Tsybakov (2008)) is a more suitable distance for working with differential privacy. Ma et al. (2019) examined a target distribution p*, which is strongly log-concave outside of a region of radius R, and where −ln p* is L-Lipschitz.
They provided a bound on the number of steps needed for the Total Variation distance between the distribution at the last step and p* to be smaller than ε. This bound is proportional to $O\big(e^{32LR^2}\frac{d}{\varepsilon^2}\big)$, where d is the model dimension. This result suggests that even a little non-convexity will render running until close to convergence impractical. A conclusion from this work is that basing the differential privacy of SGLD on proximity to the posterior is impractical for non-convex settings.

3 BACKGROUND

3.1 DIFFERENTIAL PRIVACY

Differential Privacy (Dwork et al. (2006b;a); Dwork (2011); Dwork & Roth (2014)) is a definition and a framework that enables performing data analysis on a database while reducing the risk of exposing the personal data of its participants. An algorithm is differentially private if it does not change its output distribution by much due to a single record change in its database.

Definition 1. Approximate Differential Privacy: A randomized algorithm M : D → Range(M) is (ε, δ)-differentially private if ∀S ⊆ Range(M) and ∀D, D̂ ∈ D with ‖D − D̂‖ ≤ 1, eq. 1 holds. D, D̂ are called neighboring databases, and while the metric can change per application, Hamming distance is typically used.
$$\Pr[M(D) \in S] \le \exp(\varepsilon)\Pr[M(\hat D) \in S]+\delta \tag{1}$$
Mironov (2017) suggested Rényi Differential Privacy (Definition 3), a relaxation of differential privacy, and a way to translate RDP guarantees into approximate differential privacy guarantees.

Definition 2. Rényi Divergence (Rényi, 1961): For two probability distributions Z and Q over R, the Rényi divergence of order ν > 1 is
$$D_\nu(Z\|Q) \triangleq \frac{1}{\nu-1}\log E_{x\sim Q}\left[\left(\frac{Z(x)}{Q(x)}\right)^\nu\right].$$
Definition 3. (ν, ε)-RDP: A randomized mechanism f : D → R is said to have ε-Rényi differential privacy of order ν, or (ν, ε)-RDP in short, if for any adjacent databases D, D̂ ∈ D eq. 2 holds, where D_ν is the Rényi divergence of order ν.
$$D_\nu(f(D)\|f(\hat D)) \le \varepsilon \tag{2}$$
Lemma 3.1. (Mironov (2017), Proposition 3). If f is (ν, ε)-RDP, it also satisfies $(\varepsilon+\frac{\log(1/\delta)}{\nu-1},\delta)$ Differential Privacy for any 0 < δ < 1.

3.2 STOCHASTIC GRADIENT LANGEVIN DYNAMICS

Stochastic Gradient Langevin Dynamics (SGLD) is an MCMC method that is commonly used for Bayesian inference (Welling & Teh, 2011). The update step of SGLD is shown in eq. 3, where θ_j is the parameter vector at step j, η_j is the step size at step j, p(θ_j) is the prior distribution, p(y_i|θ_j) is the likelihood of sample y_i given the model parameterized by θ_j, b is the batch size, and n is the database size. SGLD can be seen as Stochastic Gradient Descent with Gaussian noise, where the variance of the noise is calibrated to the step size.
$$\theta_{j+1} = \theta_j+\frac{\eta_j}{2}\left[\nabla_{\theta_j}\ln p(\theta_j)+\frac{n}{b}\sum_{i=1}^b \nabla_{\theta_j}\ln p(y_{i_j}|\theta_j)\right]+\sqrt{\eta_j}\,\xi_j,\qquad i_j \sim \mathrm{uniform}\{1,\dots,n\},\quad \xi_j \sim N(0,1) \tag{3}$$
A common practice in deep learning is to use cyclic Stochastic Gradient Descent. This flavour of SGD first randomly shuffles the database samples and then cyclically uses the samples in this order. For optimization, there is empirical evidence that it works as well as or better than SGD with reshuffling, and it was conjectured that it converges at a faster rate (Yun et al. (2021)). Cyclic-SGLD is the analog of cyclic-SGD for SGLD, where the difference is the use of the SGLD step instead of the SGD step. For simplicity, we will consider cyclic-SGLD in this work.

4 METHOD

Our goal is to prove that even when the posterior is as private as desired, sampling using SGLD for T steps can be as non-private as desired. This requires analysing the distribution of SGLD after T steps, which is hard in the general case. However, we show that we can get the desired behaviour when looking at a simple Bayesian linear regression problem where everything is a Gaussian with closed-form expressions. Our result is summarized in Theorem 1.

Theorem 1.
For all δ < 0.5, ε, and ε′, there exists a domain and a Bayesian inference problem where a single sample from the posterior distribution is (ε, δ) differentially private, but there is a number T for which performing approximate sampling by running SGLD for T steps is not (ε′, δ) differentially private.

As ε′ can be as big as desired, and ε can be as small as desired, a corollary of Theorem 1 is that we can always find a problem for which the posterior is (ε, δ) differentially private, but there will be a step in which SGLD will result in an unbounded loss of privacy. Therefore, SGLD alone cannot provide any privacy guarantees in the interim regime, even if the posterior is private. To prove our theorem, we consider a Bayesian regression problem for a linear model with Gaussian noise, as defined in eq. 4, on the domain D defined in eq. 5:
$$y = \theta x+\xi,\qquad \xi \sim N(0,\beta^{-1}),\qquad \theta \sim N(0,\alpha^{-1}),\qquad \log p(y|x,\theta) = -\beta(y-\theta x)^2/2-\frac{1}{2}\log(2\pi/\beta) \tag{4}$$
$$D(n,\gamma_1,x_h,x_l,c) = \left\{x_i,y_i\ \middle|\ \left|\frac{y_i}{x_i}-c\right| \le n^{\gamma_1};\ x_i,y_i,c,\gamma_1 \in \mathbb{R}_{>0};\ x_l \le x_i \le x_h\right\}_{i=1}^n.\quad \text{We assume that } x_h^2\beta > 3 \text{ and that } \gamma_1 < \tfrac{1}{2}. \tag{5}$$
n, c, x_l, x_h, γ₁ are parameters of the problem (c, x_l, x_h, and γ₁ are used, together with the database size n, to bound the database samples to a chosen region). For every ε, ε′ and δ, we will show there exist parameters n, c, x_l, x_h, γ₁ that have the privacy properties required to prove Theorem 1. The restrictions on the dataset simplify the proof but are a bit unnatural, as they assume we approximately know c, the parameter we are trying to estimate. Later we show, in subsection 4.3, that they can be replaced with a Propose-Test-Release phase. We will refer to the problem of Bayesian linear regression for the model described in eq. 4 on domain D as the Bayesian linear regression problem on domain D.
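To make eq. 3 and the model of eq. 4 concrete, the sketch below implements one SGLD step for the linear model and runs it on a database in the style of D₁ from section 4.2, in which every sample equals (x_h, c·x_h). All numeric parameter values and the step count are illustrative assumptions; with the conjugate Gaussian prior, the chain drifts toward the posterior mean, which here is close to c.

```python
import math
import random

def sgld_step(theta, data, alpha, beta, eta, rng):
    # eq. 3 with batch size b = 1: prior gradient + n * likelihood gradient + Gaussian noise
    n = len(data)
    x, y = data[rng.randrange(n)]
    grad = -alpha * theta + n * beta * (y - theta * x) * x
    return theta + 0.5 * eta * grad + math.sqrt(eta) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
alpha, beta, xh, c, n = 1.0, 4.0, 1.0, 2.0, 200  # illustrative parameters
data = [(xh, c * xh)] * n                        # D1-style database: all samples equal
eta = 2.0 / (alpha + n * xh ** 2 * beta) ** 2    # the learning rate used in section 4.2
theta = 0.0
for _ in range(10000):
    theta = sgld_step(theta, data, alpha, beta, eta, rng)
```

After enough steps, `theta` fluctuates around the posterior mean nβc/(α + nβ) ≈ c with a small stationary standard deviation.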
This problem has a closed-form solution for both the posterior distribution and the distribution at each SGLD step, thus enabling us to get tight bounds on the differential privacy in each case. The heart of our proof is showing that for n big enough, sampling from the posterior is (ε, δ) differentially private with ε ∼ O(c²/n³), while for SGLD there exists a step in which releasing a sample will not be (ε′, δ) differentially private for ε′ = Ω(c²/n²). Therefore, by considering instances of the problem where c ∼ O(n^{3/2}√ε) and n is big enough, sampling from the posterior will be (ε, δ) differentially private, while there will be an SGLD step in which releasing a sample will not be (ε′, δ) differentially private for ε′ = Ω(nε). We note that the bounds contain a dependency on δ, but since we are using a fixed and equal δ for both the posterior and the SGLD privacy analysis, we omit it from the bounds for simplicity. Figure 1 depicts an indicative value of the distance between the distributions of samples from two SGLD processes running on adjacent databases for the Bayesian linear regression problem. As we will later show, SGLD on one of these examples is a Gaussian while on the other it is a mixture of n Gaussians. We plot $\frac{1}{n}\sum_i \frac{(\mu_t-\mu_t^i)^2}{(\sigma_t^i)^2}$, where µ_t is the mean of the single Gaussian at timestep t, µ_t^i is the mean of the i'th Gaussian component at timestep t, and (σ_t^i)² is its variance. We can see that even though the distributions are close at the initial iterations and at convergence (which implies differential privacy in those areas), in the interim region they are significantly apart, which implies a lack of differential privacy.

4.1 POSTERIOR SAMPLING PRIVACY

To prove Theorem 1, we first need to show that for all δ < 0.5 and ε, there exists a domain and a Bayesian inference problem where a single sample from the posterior distribution is (ε, δ) differentially private.
In order to do so, this section considers the differential privacy guarantees provided by one sample from the posterior for the Bayesian linear regression problem on domain D. We begin by using a well-known result for the closed-form solution of the posterior distribution of a Bayesian linear regression problem (see Bishop (2006) for further details). By using the parameters of our problem, we get Lemma 4.1.

Lemma 4.1. The posterior distribution for the Bayesian linear regression problem on domain D is
$$p(\theta|D) = N(\theta;\mu,\sigma^2);\qquad \mu = \frac{\sum_{i=1}^n x_iy_i\beta}{\alpha+\sum_{i=1}^n x_i^2\beta};\qquad \sigma^2 = \frac{1}{\alpha+\sum_{i=1}^n x_i^2\beta}. \tag{6}$$
Using the posterior distribution, one can calculate the Rényi divergence between every two neighbouring databases, thus getting an expression for the Rényi differential privacy, as shown in Lemma 4.2.

Lemma 4.2. For a Bayesian linear regression problem on domain D, such that $n > \max\{1+10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta},\ 1+\nu\frac{x_h^2}{x_l^2}\}$, one sample from the posterior is (ν, ε₁)-Rényi differentially private, where ε₁ ∼ O(c²/n³) for c ≫ n^{1+γ₁} and
$$\varepsilon_1 = \frac{x_h^2}{2(n-1)x_l^2}+\frac{1}{2(\nu-1)}\cdot\frac{\nu x_h^2}{(n-1)x_l^2-\nu x_h^2}+2\nu\beta\frac{x_h^4}{\frac{9}{10}n^{1-2\gamma_1}x_l^2}+2\nu\beta\cdot\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\cdot\frac{c+n^{\gamma_1}}{n^{2-\gamma_1}}+\frac{\nu}{2}\cdot\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\cdot\frac{(c+n^{\gamma_1})^2}{n^3}.$$
We can show that for c ≫ n^{1+γ₁}, each of the terms is bounded by O(c²/n³). The first and second terms are bounded by O(1/n). The third term is bounded by O(n^{2γ₁−1}); noticing that $n^{2\gamma_1-1} = \frac{n^{2(1+\gamma_1)}}{n^3} < \frac{c^2}{n^3}$, we get that the third term is bounded by O(c²/n³). As c ≫ n^{γ₁}, the fourth term is bounded by O(cn^{γ₁}/n²), and since $\frac{cn^{\gamma_1}}{n^2} = \frac{cn^{1+\gamma_1}}{n^3} < \frac{c^2}{n^3}$, the term is bounded by O(c²/n³). Lastly, since c ≫ n^{1+γ₁}, the last term is bounded by O(c²/n³). For the full proof, see subsection A.1 in the appendix. Translating the Rényi differential privacy guarantees into approximate differential privacy terms can be done according to Lemma 3.1, which gives Lemma 4.3.
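The closed-form posterior of Lemma 4.1 makes the Rényi divergence between neighbouring databases directly computable. The sketch below computes the posterior and evaluates the standard closed-form Rényi divergence between two univariate Gaussians (valid when the interpolated variance is positive); the databases and parameter values are illustrative assumptions. As Lemma 4.2 predicts, the divergence shrinks as n grows.

```python
import math

def posterior(data, alpha, beta):
    # Lemma 4.1: mu = beta*sum(x*y)/(alpha + beta*sum(x^2)), var = 1/(alpha + beta*sum(x^2))
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, y in data)
    return beta * sxy / (alpha + beta * sxx), 1.0 / (alpha + beta * sxx)

def renyi_gauss(nu, mu1, v1, mu2, v2):
    # closed-form Renyi divergence of order nu between N(mu1, v1) and N(mu2, v2)
    v_star = nu * v2 + (1 - nu) * v1
    assert v_star > 0, "divergence is finite only for positive interpolated variance"
    return (0.5 * math.log(v2 / v1)
            + math.log(v2 / v_star) / (2 * (nu - 1))
            + nu * (mu1 - mu2) ** 2 / (2 * v_star))

def neighbour_divergence(n, alpha=1.0, beta=1.0, nu=2.0):
    d1 = [(1.0, 2.0)] * n            # n copies of (x, y) = (1, 2)
    d2 = d1[:-1] + [(1.0, 2.5)]      # neighbouring database: one sample changed
    return renyi_gauss(nu, *posterior(d1, alpha, beta), *posterior(d2, alpha, beta))
```

For example, `neighbour_divergence(1000)` is roughly an order of magnitude smaller than `neighbour_divergence(100)`, matching the decay of ε₁ with n.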
Lemma 4.3. With the conditions of Lemma 4.2, one sample from the posterior is $(\varepsilon_1+\frac{\ln(1/\delta)}{\nu-1},\delta)$ differentially private.

By choosing ν such that $\frac{\ln(1/\delta)}{\nu-1} < \frac{\varepsilon}{2}$ and then choosing n big enough such that $\varepsilon_1 < \frac{\varepsilon}{2}$, we get that the posterior is (ε, δ) differentially private.

4.2 STOCHASTIC GRADIENT LANGEVIN DYNAMICS PRIVACY

To complete the proof of Theorem 1, we need to show that even if one sample from the posterior is (ε, δ) differentially private for a Bayesian linear regression problem on domain D, this does not provide any guarantees on the privacy of SGLD for that problem. In order to do so, this section first considers the loss in privacy when using SGLD for the Bayesian linear regression problem on domain D, and then, together with the results of section 4.1, proves Theorem 1. In order to show that SGLD is not differentially private after the initial steps and before convergence, it is enough to find two neighbouring databases for which the loss in privacy is as big as desired in those steps. We define the neighbouring databases D₁ and D₂ in eq. 7 and consider the Bayesian linear regression problem on D₁ and D₂. We set the learning rate to be $\eta = \frac{2}{(\alpha+nx_h^2\beta)^2}$.
$$D_1 = \{x_i,y_i : x_i = x_h,\ y_i = c\cdot x_h\}_{i=1}^n\qquad D_2 = \{x_i,y_i : x_i = x_h,\ y_i = c\cdot x_h\}_{i=1}^{n-1} \cup \left\{\frac{x_h}{2},\ c\cdot\frac{x_h}{2}\right\} \tag{7}$$
To tightly analyze the differential privacy loss when approximately sampling via SGLD at each step, we need a closed-form solution for the distribution at each step. For database D₁, the solution is a Normal distribution. For database D₂, different shufflings of the samples produce different Gaussian distributions, therefore giving a mixture of Gaussians. We look at cyclic-SGLD with a batch size of 1 and denote by θ_j, θ̂_j the samples at the j'th SGLD step when using databases D₁ and D₂ accordingly. Since the samples of D₁ are all equal, the update step of cyclic-SGLD is the same for every step (with different noise generated for each step).
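Because cyclic-SGLD on D₁, and on any fixed r-order of D₂, is a linear-Gaussian recursion, the mean and variance at every step can be tracked exactly, with no sampling. The sketch below (illustrative parameter values, with the differing sample of D₂ placed last in each epoch) reproduces the qualitative behaviour of Figure 1: the normalized gap between the two processes is small at initialization, peaks in the interim region, and nearly vanishes at convergence.

```python
def step(mu, var, x, y, n, alpha, beta, eta):
    # one exact SGLD step (batch size 1): theta' = a*theta + b + sqrt(eta)*noise
    a = 1.0 - 0.5 * eta * (alpha + n * beta * x * x)
    b = 0.5 * eta * n * beta * x * y
    return a * mu + b, a * a * var + eta

alpha, beta, xh, c, n = 1.0, 4.0, 1.0, 5.0, 50   # illustrative parameters
eta = 2.0 / (alpha + n * xh ** 2 * beta) ** 2
mu1, v1 = 0.0, 1.0 / alpha                       # both chains start at the prior
mu2, v2 = 0.0, 1.0 / alpha
gaps = []
for epoch in range(400):
    for j in range(n):
        mu1, v1 = step(mu1, v1, xh, c * xh, n, alpha, beta, eta)
        if j < n - 1:
            mu2, v2 = step(mu2, v2, xh, c * xh, n, alpha, beta, eta)
        else:  # D2's single differing sample (x_h/2, c*x_h/2), used last in each epoch
            mu2, v2 = step(mu2, v2, xh / 2, c * xh / 2, n, alpha, beta, eta)
    gaps.append((mu1 - mu2) ** 2 / v2)
```

The peak of `gaps` occurs at an interior epoch and dwarfs both the initial and the final values, which is exactly the interim-regime separation the proof exploits.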
This update step consists only of multiplication by a scalar, addition of a scalar, and addition of Gaussian noise; therefore, together with a conjugate prior, it results in a Normal distribution for θ_j: N(θ_j; µ_j, σ_j²). For D₂, there is only one sample different from the rest. We denote by r the index at which this sample is used in the cyclic-SGLD and call this order the r-order. Note that there are only n different values for r and, as such, effectively only n different sample orders. Since every order of samples is chosen with the same probability, r is distributed uniformly in {1, …, n}. We denote by θ̂_j^r the sample at the j'th SGLD step when using the r-order. Since, for a given order, θ̂_j^r is formed by a series of multiplications by a scalar, additions of a scalar, and additions of Gaussian noise, and since the prior is also Gaussian, θ̂_j^r is distributed Normally: N(θ̂_j^r; µ̂_j^r, (σ̂_j^r)²). As r is distributed uniformly, the distribution mass of θ̂_j is distributed evenly between all the θ̂_j^r, resulting in a mixture of Gaussians. Intuitively, what will happen is that each Gaussian component, θ̂_j as well as θ_j, will move towards the similar posterior Gaussian. However, at each epoch, θ̂_j will drag a bit behind, because in one batch one gradient is smaller. While this gap can be quite small, for large n the Gaussians are very peaked, with very small standard deviations; thus, they separate enough that we can easily distinguish between the two distributions. According to the approximate differential privacy definition (Definition 1), it is enough to find one set S such that $p(\theta_j \in S) > e^{\varepsilon}p(\hat\theta_j \in S)+\delta$ to show that releasing θ_j is not (ε, δ) private. We choose S = {s | s > µ_j} at some step j that we will define later on. It is clear from symmetry that p(θ_j > µ_j) = 1/2, and by using a Chernoff bound we can bound p(θ̂_j > µ_j).

Lemma 4.4. $p(\hat\theta_j > \mu_j) \le \frac{1}{n}\sum_{r=1}^n \exp\left(-\frac{(\mu_j-\hat\mu_j^r)^2}{2(\hat\sigma_j^r)^2}\right)$.
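Lemma 4.4 is a union-style Chernoff bound: each mixture component N(µ̂ʳ, (σ̂ʳ)²) contributes the standard Gaussian tail bound P(X > t) ≤ exp(−(t − µ̂ʳ)²/(2(σ̂ʳ)²)), valid for t ≥ µ̂ʳ. The sketch below compares the exact mixture tail (via erfc) with this bound on illustrative, hypothetical components.

```python
import math

def exact_tail(components, t):
    # exact P(theta_hat > t) for an equal-weight mixture of N(m, s^2) components
    return sum(0.5 * math.erfc((t - m) / (s * math.sqrt(2.0))) for m, s in components) / len(components)

def chernoff_bound(components, t):
    # Lemma 4.4-style bound: average of per-component Gaussian Chernoff tail bounds (needs t >= m)
    return sum(math.exp(-(t - m) ** 2 / (2 * s * s)) for m, s in components) / len(components)

components = [(0.0, 1.0), (0.5, 0.7), (-0.2, 1.3)]  # illustrative (mean, std) pairs
t = 3.0
exact = exact_tail(components, t)
bound = chernoff_bound(components, t)
```

The bound is loose by a polynomial factor but captures the exponential decay, which is all the lower-bound argument needs.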
Using Lemma 4.4, we can upper bound the mass of θ̂_j in S, and thus lower bound the difference between the distribution masses of θ_j and θ̂_j in S at some step j. To use Lemma 4.4, we first need to lower bound $\frac{(\mu_j-\hat\mu_j^r)^2}{(\hat\sigma_j^r)^2}$ at a certain step. This is done in Lemma 4.5.

Lemma 4.5. $\exists k \in \mathbb{Z}_{>0}$ such that $\frac{(\mu_{(k+1)n}-\hat\mu^r_{(k+1)n})^2}{(\hat\sigma^r_{(k+1)n})^2} = \Omega\big(\frac{c^2}{n^2}\big)$, for n big enough.

To prove Lemma 4.5, we first find closed-form solutions for the distributions of θ̂^r_{(k+1)n} and θ_{(k+1)n} (Lemma A.1). Using the closed-form solutions, we find a lower bound on (µ_{(k+1)n} − µ̂^r_{(k+1)n})² as a function of k, which applies for all k (Lemma A.2). To upper bound (σ̂^r_{(k+1)n})², we find an approximation of the epoch at which the data and prior effects on the variance are approximately equal, denoted k̇. We choose the step at which we will consider the privacy loss as (⌈k̇⌉+1)n and show that (σ̂^r_{(⌈k̇⌉+1)n})² is upper bounded at this step (Lemma A.4). Using the lower bound on the difference in means and the upper bound on the variance, Lemma 4.5 is proved. By using the lower bound from Lemma 4.5 in Lemma 4.4, we get Lemma 4.6.

Lemma 4.6. For the Bayesian linear regression problem over database D₁, such that n is big enough, ∃T ∈ Z_{>0} such that approximate sampling by running SGLD for T steps will not be (ε, δ) private for ε < Ω(c²/n²), δ < 0.5.

From Lemma 4.3, we see that sampling from the posterior is (ε, δ) differentially private for ε = O(c²/n³). From Lemma 4.6, we see that for SGLD, there exists a step in which releasing a sample will not be (ε′, δ) differentially private for ε′ = Ω(c²/n²). Therefore, considering instances of the problem where c = O(n^{3/2}√ε), sampling from the posterior will be (ε, δ) differentially private. However, there will be an SGLD step in which releasing a sample will not be (ε′, δ) differentially private for ε′ = Ω(nε).
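The scaling argument above can be made concrete with a toy computation (the constants are purely illustrative): setting c = n^{3/2}√ε makes the posterior rate c²/n³ stay fixed at ε while the SGLD lower-bound rate c²/n² grows linearly in n.

```python
eps = 0.1
for n in [10, 100, 1000]:
    c = n ** 1.5 * eps ** 0.5
    posterior_rate = c ** 2 / n ** 3  # stays at eps
    sgld_rate = c ** 2 / n ** 2       # grows like n * eps
    assert abs(posterior_rate - eps) < 1e-9
    assert abs(sgld_rate - n * eps) < 1e-6
```

So the same choice of c keeps the posterior guarantee fixed while the interim SGLD privacy loss becomes arbitrarily large with n.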
Since we can choose n as big as desired, we can make the lower bound on ε′ as big as we desire. This completes the proof of Theorem 1.

4.3 PROPOSE TEST SAMPLE

Our analysis of the posterior and SGLD is done on a restricted domain D as defined in eq. 5. These restrictions on the dataset simplify the proof but are a bit unnatural, as they assume we approximately know c, the parameter we are trying to estimate. This section shows that these restrictions can be replaced with a Propose-Test-Release phase (Dwork & Lei, 2009) and common practices in deep learning. When training a statistical model, it is common to first preprocess the data by forcing it into a bounded region and removing outliers. After the data is cleaned, the training process is performed. This is especially important in DP, as outliers can significantly increase the algorithm's sensitivity to a single data point and thus hamper privacy. Informally, Algorithm 1 starts by clipping the input to the accepted range. It then estimates a weighted average of the ratios y_i/x_i (line 12) and throws away outliers that deviate too much from it. The actual implementation of this notion is a bit more complicated because of the requirement to do so privately. Once the database is cleaned, Algorithm 1 privately verifies that the number of samples is big enough, so that the sensitivity of p(θ|W) to a single change in the database is small, therefore making sampling from p(θ|W) (ε, δ) differentially private. This method is known as Propose-Test-Release: we first propose a bound on the sensitivity, then test whether the database satisfies this bound, and finally release the result if so. We define n_min in eq. 26 in the appendix as the minimum size of W for which the algorithm will sample from p(θ|W) with high probability. We will show later on that this limit ensures that sampling from p(θ|W) is (ε, δ) differentially private.
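The private "test" steps of this pipeline reduce to releasing pessimistic noisy counts. The sketch below (hypothetical parameter values; `laplace` and `noisy_count_lower_bound` are our illustrative helpers, not the paper's notation) mimics lines 6, 8 and 15 of Algorithm 1: subtract (1/ε)log(1/(2δ)) from the true count and add Lap(1/ε) noise, so that with probability at least 1 − δ the released value does not exceed the true count and can safely be compared against a threshold such as n_min.

```python
import math
import random

def laplace(scale, rng):
    """Sample from a centered Laplace distribution via inverse-CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noisy_count_lower_bound(true_count, eps, delta, rng):
    """Pessimistic private count, in the style of lines 6/8/15 of Algorithm 1.
    P(released > true_count) = 0.5 * exp(-log(1/(2*delta))) = delta."""
    return true_count - math.log(1.0 / (2.0 * delta)) / eps + laplace(1.0 / eps, rng)

rng = random.Random(0)
eps, delta, n_min = 0.5, 0.05, 900  # hypothetical parameters
n_w = 1000                          # hypothetical size of the cleaned database W
checks = [noisy_count_lower_bound(n_w, eps, delta, rng) for _ in range(2000)]
overshoot = sum(c > n_w for c in checks) / len(checks)
print(overshoot)  # empirically close to delta = 0.05
# The pessimistic shift is only (1/eps) * log(1/(2*delta)) ~ 4.6, so the
# test n_W >= n_min still passes essentially always when |W| = 1000 >> 900.
```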
We define p(θ|W) to be the posterior of the Bayesian linear regression problem over database W. From Lemma 4.1, it follows that p(θ|W) has the form

p(θ|W) = N(θ; µ, σ²);  µ = (∑_{(x_i,y_i)∈W} x_i y_i β) / (α + ∑_{(x_i,y_i)∈W} x_i² β);  σ² = 1 / (α + ∑_{(x_i,y_i)∈W} x_i² β).

Claim 4.1. Algorithm 1 is (5ε, 2δ) differentially private.

By Claim C.9, steps 6-13 are (3ε, δ) differentially private. By Corollary C.3, steps 14-19 are (2ε, δ) differentially private for given m̆ and n₂. Therefore, by the sequential composition theorem, the composition is (5ε, 2δ) differentially private. The claim is proved by noticing that if steps 6-19 are private with respect to the updated database (after step 5), then they are also private for the original database.

Claim 4.2. When replacing line 19 with sampling via SGLD with step size η = 1/(α + n₁x_h²β)², then ∃T(n₁): Z_{>0} → Z_{>0} such that the updated algorithm is not (ε, δ) differentially private ∀ε ∈ R_{>0}, δ < 1/6, if run for T(n₁) steps.

Algorithm 1 Propose Test Sample
Input: D = {x_i, y_i}_{i=1}^{n₁}
Parameters: ε, δ < 0.5, x_l > 0, x_h > x_l, α > 0, β ≥ 3x_h², ρ₁ ∈ (1, 3/2), ρ₂ ∈ (0, 1/2), γ₁ ∈ (ρ₂, 1/2)
1: for i = 1, 2, . . . , N do
2:   x_i ← max{x_i, x_l}
3:   x_i ← min{x_i, x_h}
4:   y_i ← max{y_i, 0}
5: end for
6: n̆₁ ← n₁ − (1/ε) log(1/(2δ)) + Lap(1/ε)
7: V = {x_i, y_i | y_i/x_i ≤ n̆₁^{ρ₁}}
8: n₂ ← |V| − (1/ε) log(1/(2δ)) + Lap(1/ε)
9: if n₂ ≤ 1 then
10:   return null
11: end if
12: m ← (∑_{(x_i,y_i)∈V} x_i y_i) / (∑_{(x_i,y_i)∈V} x_i²)
13: m̆ ← m + Lap( (1/ε) n̆₁^{ρ₁} (2(n₂−1)x_h²x_l² + x_h⁴) / (n₂(n₂−1)x_l⁴) )
14: W ← {(x_i, y_i): |y_i/x_i − m̆| ≤ n₂^{ρ₂}}
15: n_W ← |W| − (1/ε) log(1/(2δ)) + Lap(1/ε)
16: if n_W < n_min then
17:   return null
18: end if
19: return sample from p(θ|W)

Proof sketch (see appendix for full proof). We first note that, by choosing 1 + ρ₂ > ρ₁, the sensitivity of m̆ grows slower than the bound on the distance |y_i/x_i − m̆|.
Therefore, for n₁ big enough, samples for which y_i/x_i = m will be included in W with high probability. Consequently, databases D₃, D₄ ∈ D will, with high probability, reach step 19, which by our previous analysis of SGLD (see subsection 4.2) will cause an unbounded loss in privacy.

ρ₁ > ρ₃ > 1
D₃ = {x_i, y_i : x_i = x_h, y_i = n₁^{ρ₃}·x_h}_{i=1}^{n₁}
D₄ = {x_i, y_i : x_i = x_h, y_i = n₁^{ρ₃}·x_h}_{i=1}^{n₁−1} ∪ {(x_h/2, n₁^{ρ₃}·x_h/2)}   (8)

5 WASSERSTEIN DISTANCE AND DIFFERENTIAL PRIVACY

As we have shown in Theorem 1, one cannot give any DP guarantees for SGLD in the interim region. That means that to get private samples using SGLD, one must either limit the number of iterations, thus utilizing the Gaussian mechanism, or run until approximate convergence. Therefore, it is of interest to get non-asymptotic convergence bounds for SGLD so that we can guarantee privacy after a known number of steps. Previously, several works have given non-asymptotic bounds; however, some of those do so for the 2-Wasserstein metric (Raginsky et al. (2017); Cheng et al. (2018)). This is unfortunate, as the 2-Wasserstein metric is unsuitable for differential privacy: it is easy to create two distributions with 2-Wasserstein distance as small as desired but with disjoint supports. It is, however, interesting to ask whether combining bounds on the 2-Wasserstein metric with Lipschitz continuous probability densities allows us to get privacy guarantees. The intuition for why this should be enough is simple: if p, q are two distributions with small 2-Wasserstein distance, then there is (under mild conditions) a mapping f : X → X such that the pushforward satisfies f♯p = q (i.e., for each measurable set S, q(S) = p(f⁻¹(S))) and E_p[‖x − f(x)‖²] < ε². One can assume that p(x) ≈ q(f(x)) and q(x) ≈ q(f(x)), as x ≈ f(x) with high probability.
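The claim that W₂-closeness gives no DP guarantee is easy to instantiate. In the sketch below (an illustrative construction, not from the paper), P and Q are uniform over two interleaved grids at distance eps: transporting each atom to its shifted copy shows W₂(P, Q) ≤ eps, yet the support of P is a set S with P(S) = 1 and Q(S) = 0, so the DP inequality P(S) ≤ e^{ε_dp} Q(S) + δ fails for every finite ε_dp whenever δ < 1.

```python
import math

# P = uniform on even multiples of eps, Q = the same grid shifted by eps.
eps = 1e-6
p_atoms = [2 * eps * k for k in range(10)]
q_atoms = [2 * eps * k + eps for k in range(10)]

# Cost of the coupling that moves atom k of P to atom k of Q; this is an
# upper bound on W2(P, Q), and every atom moves by exactly eps.
w2 = math.sqrt(sum((p - q) ** 2 for p, q in zip(p_atoms, q_atoms)) / len(p_atoms))
print(w2)  # ~ eps: arbitrarily small by shrinking eps

S = set(p_atoms)                                       # the support of P
p_mass = sum(a in S for a in p_atoms) / len(p_atoms)   # P(S) = 1.0
q_mass = sum(a in S for a in q_atoms) / len(q_atoms)   # Q(S) = 0.0
for eps_dp in (0.1, 1.0, 100.0):
    # The (eps_dp, delta) DP bound is violated even at delta = 0.49.
    assert p_mass > math.exp(eps_dp) * q_mass + 0.49
```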
Unfortunately, this intuition does not hold exactly, as the map f can change the density considerably and still be a pushforward, by changing the volume. For example, if we assume f is smooth and bijective, we get the standard change-of-variables formula p(x) = q(f(x)) |det(J_f)|, so p(x) ≈ q(f(x)) only if |det(J_f)| ≈ 1. This issue becomes more severe as the dimensionality increases. For completeness, we share our results connecting p(x) to q(x) when W₂(p, q) is small and both densities are L-Lipschitz continuous. This bound scales poorly with dimension and as such is ill-suited for SGLD on deep networks, but it can still be useful for Bayesian sampling in low-dimensional problems. For a distribution p, we define the density p_λ(x) as the average of p(x) on a ball of radius λ centered around x: p_λ(x) = (1/vol_d(λ)) ∫_{B_λ^d(0)} p(x + z) dz, where B_λ^d(x) is the ball in R^d of radius λ centered around x, and vol_d(λ) is its volume.

Claim 5.1. For an L-Lipschitz continuous density p we have |p(x) − p_λ(x)| ≤ λL.

Theorem 2. Let P, Q be absolutely continuous w.r.t. the Lebesgue measure in R^d, with finite second moment and L-Lipschitz continuous densities p, q. If W₂(p, q) < ε², then

p_λ(x) ≤ (vol_d(λ)/vol_d(λ−ε)) q_λ(x) + (vol_d(λ)/vol_d(λ−ε) − 1) 2λL + ε/vol_d(λ−ε).   (9)

The proof is an extension of the proof of Theorem 2.1 in Walker (2004) to dimensions larger than 1. The detailed proof is in the supplementary material. It is easy to see that since vol_d(λ)/vol_d(λ−ε) = (1 + ε/(λ−ε))^d, the bound's usefulness quickly diminishes with dimensionality, as it requires an extremely small ε to give non-vacuous results. This, however, can still give useful results in low-dimensional problems.

6 CONCLUSION

As shown in this work, while SGLD has interesting connections to privacy and some guarantees, caution is required if one wishes to use it to get private predictions.
This is especially important for models such as deep neural networks, where it is infeasible to guarantee convergence.

REFERENCES

Eren Balevi and Jeffrey G. Andrews. Wideband channel estimation with a generative adversarial network. IEEE Transactions on Wireless Communications, 20(5):3049–3060, 2021. doi: 10.1109/TWC.2020.3047100.

Christopher Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer-Verlag New York, 2006.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), Computer Vision – ECCV 2020, pp. 213–229. Springer International Publishing, 2020. ISBN 978-3-030-58452-8.

Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, D. Song, Ú. Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In USENIX Security Symposium, 2021.

X. Cheng and P. Bartlett. Convergence of Langevin MCMC in KL-divergence. In ALT, 2018.

Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, and Michael I. Jordan. Underdamped Langevin MCMC: A non-asymptotic analysis. In Conference On Learning Theory (COLT), 2018.

Arnak Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79, December 2014. doi: 10.1111/rssb.12183.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Christos Dimitrakakis, Blaine Nelson, Zuhe Zhang, Aikaterini Mitrokotsa, and Benjamin I. P. Rubinstein. Differential privacy for Bayesian inference through posterior sampling. Journal of Machine Learning Research, 18(11):1–39, 2017. URL http://jmlr.org/papers/v18/15-257.html.

Cynthia Dwork. A firm foundation for private data analysis. Commun. ACM, 54(1):86–95, January 2011. ISSN 0001-0782. doi: 10.1145/1866739.1866758. URL https://doi.org/10.1145/1866739.1866758.

Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, STOC '09, pp. 371–380, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605585062. doi: 10.1145/1536414.1536466. URL https://doi.org/10.1145/1536414.1536466.

Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3–4):211–407, August 2014. ISSN 1551-305X. doi: 10.1561/0400000042. URL https://doi.org/10.1561/0400000042.

Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology - EUROCRYPT 2006, 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques, volume 4004 of Lecture Notes in Computer Science, pp. 486–503. Springer, 2006a. doi: 10.1007/11761679_29. URL https://iacr.org/archive/eurocrypt2006/40040493/40040493.pdf.

Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Shai Halevi and Tal Rabin (eds.), Theory of Cryptography, pp. 265–284, Berlin, Heidelberg, 2006b. Springer Berlin Heidelberg. ISBN 978-3-540-32732-5.

James R. Foulds, Joseph Geumlek, Max Welling, and Kamalika Chaudhuri. On the theory and practice of privacy-preserving Bayesian data analysis. In Uncertainty in Artificial Intelligence (UAI), 2016.

Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS '15, pp. 1322–1333, New York, NY, USA, 2015. Association for Computing Machinery. ISBN 9781450338325. doi: 10.1145/2810103.2813677. URL https://doi.org/10.1145/2810103.2813677.

Arun Ganesh and Kunal Talwar. Faster differentially private samplers via Rényi divergence analysis of discretized Langevin MCMC. ArXiv, abs/2010.14658, 2020.

Joseph Geumlek, Shuang Song, and Kamalika Chaudhuri. Renyi differential privacy mechanisms for posterior sampling. In Advances in Neural Information Processing Systems (NeurIPS), 2017.

Alison L. Gibbs and Francis Edward Su. On choosing and bounding probability metrics. International Statistical Review, 70(3):419–435, 2002.

M. Gil, F. Alajaji, and T. Linder. Rényi divergence measures for commonly used univariate continuous distributions. Information Sciences, 249:124–131, 2013. ISSN 0020-0255. doi: 10.1016/j.ins.2013.06.018. URL https://www.sciencedirect.com/science/article/pii/S0020025513004441.

Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, and Michael I. Jordan. Sampling can be faster than optimization. Proceedings of the National Academy of Sciences, 116(42):20881–20885, 2019. ISSN 0027-8424. doi: 10.1073/pnas.1820003116. URL https://www.pnas.org/content/116/42/20881.

Ilya Mironov. Renyi differential privacy. CoRR, abs/1702.07476, 2017. URL http://arxiv.org/abs/1702.07476.

Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. In Satyen Kale and Ohad Shamir (eds.), Proceedings of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research, pp. 1674–1703. PMLR, 07–10 Jul 2017. URL https://proceedings.mlr.press/v65/raginsky17a.html.

Alfréd Rényi. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, pp. 547–561. University of California Press, 1961.

Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387790519.

Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2019.

Stephen Walker. New approaches to Bayesian consistency. The Annals of Statistics, 32, 2004.

Yu-Xiang Wang, Stephen Fienberg, and Alex Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2493–2502, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/wangg15.html.

M. Welling and Y. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.

Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Open problem: Can single-shuffle SGD be better than reshuffling SGD and GD? In Conference on Learning Theory (COLT), 2021.

Zuhe Zhang, Benjamin I. P. Rubinstein, and Christos Dimitrakakis. On the differential privacy of Bayesian inference. In AAAI Conference on Artificial Intelligence, 2016.

A SGLD AND POSTERIOR PRIVACY

Proof Theorem 1.
Define 1 2 > γ1 > 0 ; 3 2 > γ2 > 1 + γ1 ; xl = xh 2 ν1 = 2 ln ( 1δ ) + 1 n1 = max { 1 2αx2hβ − 1 x2hβ , α x2hβ , α x2hβ ( e 2 x2 h β − 2 ) + 1 2x2β } n2 = max { 1 + x2h x2l 8 , 1 + ν x2h x2l ( 1 + 8 ( ν − 1 ) ) , ( 16νβx4h 9 10 x 2 l ) 1 1−2γ1 , ( 16νβ ( ( x2hβ ) ( x 2 hα+ x 4 hβ ) 9 10 ( x 2 l β ) 2 ) ( 1 + 1 ( 1 + 10 x2h x2l ν β ) γ2−γ1 ) ) 1 2−γ1−γ2 , ( 4ν ( ( x2hα+ x 4 hβ ) 2 9 10x 6 l β ) ( 1 + 1 ( 1 + 10 x2h x2l ν β ) γ2−γ1 ) ) 2 3−2γ2 } n3 = max { 1 + 10 x2h x2l ν1 β , 1 + ν x2h x2l } np = max { n1 , n2 , n3 , ( ( ′ − ln ( 0.5− δ ) ) e 2 x2β ( 32x2hβ 3 ) 2 2v1 α ) 1 2 ( γ2−1 ) } v1 = max { 6 , 1 + 2e 1 x2 h β } cp = n γ2 p . We consider the Bayesian linear regression problem over databaseD1 ( defined in eq . 7 ) with n = np and c = cp . Since np > n1 , the problem holds the constraints of lemma A.6 . Consequently , there exists a step for which one approximate sample from the posterior using SGLD is not ( ′′ , δ ) private for all ′′ such that ′′ ≤ e− 2 x2β α 2v1 ( 3 32x2hβ ) 2 ( cp np ) 2 + ln ( 0.5 − δ ) . From eq . 10 , the choice of np promises that ′ ≤ e− 2 x2β α 2v1 ( 3 32x2hβ ) 2 ( cp np ) 2 + ln ( 0.5 − δ ) ; Therefore approximate sampling from the posterior using SGLD is not ( ′ , δ ) differentially private . Since np > n2 and np > n3 , the problem holds the constraints of Claim D.30 . Therefore one sample from the posterior is ( , δ ) differentially private . e − 2 x2 h β α 2v1 ( 3 32x2hβ ) 2 ( cp np ) 2 + ln ( 0.5− δ ) ≥ ′ ( cp np ) 2 ≥ ( ′ − ln ( 0.5− δ ) ) e 2 x2 h β ( 32x2hβ 3 ) 2 2v1 α n2 ( γ2−1 ) p ≥ ( ′ − ln ( 0.5− δ ) ) e 2 x2 h β ( 32x2hβ 3 ) 2 2v1 α np ≥ ( ( ′ − ln ( 0.5− δ ) ) e 2 x2 h β ( 32x2hβ 3 ) 2 2v1 α ) 1 2 ( γ2−1 ) ( 10 ) A.1 POSTERIOR SAMPLING PRIVACY Proof Lemma 4.1. eq . 11 is a known result for the Bayesian inference problem for a linear model with Gaussian noise with known precision parameter ( β ) and a conjugate prior ( Bishop ( 2006 ) - 3.49-4.51. for details ) . 
By choosing the basis function to be φ(x) = x, working in one dimension, and choosing m₀ = 0, S₀ = α⁻¹, we get the linear model defined in eq. 4 and the matching posterior described in Lemma 4.1.

p(w|t) = N(w; m_N, S_N);  m_N = S_N(S₀⁻¹ m₀ + β Φᵀ t);  S_N⁻¹ = S₀⁻¹ + β ΦᵀΦ   (11)

Proof Lemma 4.2. By Definition 3, for a single sample from the posterior to be (ν, ε′)-RDP, the Rényi divergence of order ν between any adjacent databases needs to be bounded. We consider two adjacent databases D, D̂ ∈ D and, w.l.o.g., define that they differ in the last sample (where it is also allowed to be (0, 0) for one of them, which saves us the need to also consider a neighbouring database of size smaller by 1). To ease the already complex and detailed calculations, we use the definitions in eq. 12.

D = {x_i, y_i}_{i=1}^{n−1} ∪ {x_n, y_n},  D̂ = {x_i, y_i}_{i=1}^{n−1} ∪ {x̂_n, ŷ_n}
z = ∑_{i=1}^{n−1} x_i²,  q = ∑_{i=1}^{n−1} y_i x_i   (12)

According to Lemma 4.1 and the definitions in eq. 12, the posterior distributions are

p(θ|D) = N(θ; µ, σ²);  µ = β(q + x_n y_n)/(α + (z + x_n²)β);  σ² = 1/(α + (z + x_n²)β)
p(θ|D̂) = N(θ; µ̂, σ̂²);  µ̂ = β(q + x̂_n ŷ_n)/(α + (z + x̂_n²)β);  σ̂² = 1/(α + (z + x̂_n²)β).   (13)

By Gil et al. (2013), the Rényi divergence of order ν, D_ν(f₁||f₂), for f₁, f₂ univariate normal distributions with means µ₁, µ₂ and variances σ₁², σ₂² respectively, is

D_ν(f₁||f₂) = ln(σ₁/σ₂) + (1/(2(ν−1))) ln( σ₂²/(σ²_{f₁,f₂})*_ν ) + ν(µ₁ − µ₂)²/(2(σ²_{f₁,f₂})*_ν),
(σ²_{f₁,f₂})*_ν = νσ₂² + (1−ν)σ₁² > 0.

Therefore, for p(θ|D) and p(θ|D̂), the Rényi divergence of order ν is as shown in eq. 14, where we omit the subscript of (σ²)*_ν since it is clear from context to which distributions it applies.

D_ν(p(θ|D)||p(θ|D̂)) = ln(σ/σ̂) + (1/(2(ν−1))) ln( σ̂²/(σ²)*_ν ) + ν(µ − µ̂)²/(2(σ²)*_ν),
(σ²)*_ν = νσ̂² + (1−ν)σ²   (14)

According to Claim D.25, (σ²)*_ν > 0.
Therefore the value D_ν(p(θ|D), p(θ|D̂)) exists. In order to prove Rényi differential privacy, each of the terms of D_ν(p(θ|D), p(θ|D̂)) is bounded separately. The bounds on each of the terms are proved in Claims D.26, D.27, and D.28.

Proof Lemma 4.3. By Lemma 4.2, sampling from the posterior is (ν, ε₁)-RDP; therefore, by Lemma 3.1, sampling from the posterior is also (ε₁ + ln(1/δ)/(ν−1), δ) differentially private.

A.2 STOCHASTIC GRADIENT LANGEVIN DYNAMICS PRIVACY

Proof Lemma 4.4.

p(θ̂_j > µ_j|D₂) = ∑_{r=1}^{n} p(θ̂_j^r > µ_j|D₂) p(θ̂_j = θ̂_j^r|D₂)
= ∑_{r=1}^{n} p(θ̂_j^r − µ̂_j^r > µ_j − µ̂_j^r|D₂) p(θ̂_j = θ̂_j^r|D₂)
= (1/n) ∑_{r=1}^{n} p(θ̂_j^r − µ̂_j^r > µ_j − µ̂_j^r|D₂)
≤ (1/n) ∑_{r=1}^{n} exp( −(µ_j − µ̂_j^r)²/(2(σ̂_j^r)²) ),

where the inequality holds due to the Chernoff bound (for further details, see Wainwright (2019)).

Proof Lemma 4.5. By Lemma A.5, for n > max{ α/(x_h²β), (α/(x_h²β))(e^{2/(x_h²β)} − 2) + 1/(2x_h²β), 1/(2αx_h²β) − 1/(x_h²β) }, eq. 15 holds for some k̇ ∈ R_{>0}. This lower bound is a constant times c²/n², therefore proving Lemma 4.5.

(µ_{(⌈k̇⌉+1)n} − µ̂_{(⌈k̇⌉+1)n}^r)² / (σ̂_{(⌈k̇⌉+1)n}^r)² ≥ e^{−2/(x_h²β)} (α/v₁) (3/(32x_h²β))² (c/n)²
v₁ = max{ 6, 1 + 2e^{1/(x_h²β)} }   (15)

Proof Lemma 4.6. By Lemma A.6, for n > max{ α/(x_h²β), (α/(x_h²β))(e^{2/(x_h²β)} − 2) + 1/(2x_h²β), 1/(2αx_h²β) − 1/(x_h²β) }, there exists T ∈ Z_{>0} (marked in Lemma A.6 as ⌈k̇⌉) such that running SGLD for the Bayesian linear regression problem over D₁ for T steps will not be (ε, δ) differentially private for ε < ε′, as defined in eq. 16, and δ < 0.5. Since ε′ grows as a constant times c²/n², this proves the lemma.

ε′ = e^{−2/(x_h²β)} (α/(2v₁)) (3/(32x_h²β))² (c/n)² + ln(0.5 − δ)
v₁ = max{ 6, 1 + 2e^{1/(x_h²β)} }   (16)

A.3 STOCHASTIC GRADIENT LANGEVIN DYNAMICS DETAILED ANALYSIS

In order to ease the analysis of the SGLD process, the markings in eq. 17 are used. x_h, α, β, c, n are as defined for the Bayesian linear regression problem, and η is defined in subsection 4.2.
λ = [ 1− η 2 ( α+ nx2hβ ) ] , λ̂ = [ 1− η 2 ( α+ n ( xh 2 ) 2β ) ] , ρ = η 2 ncx2hβ , ρ̂ = η 2 nc ( xh 2 ) 2β ( 17 ) Lemma A.1 . The forms of θ̂r ( k+1 ) n are θ̂1 ( k+1 ) n = θ0λ̂ k+1λ ( n−1 ) ( k+1 ) + k∑ j=0 ( λ̂λn−1 ) j [ ρ̂λn−1 + ρ n−2∑ i=0 λi + √ η n−1∑ i=0 λiξi ] θ̂r > 1 ( k+1 ) n = θ0 ( λ̂λ n−1 ) k+1+ ( r−1∑ i=1 ( ρ+ √ ηξ ) λ̂λn−i−1 + ( ρ̂+ √ ηξ ) λn−r + n∑ j=r+1 ( ρ+ √ ξη ) λn−j ) k∑ l=0 ( λ̂λn−1 ) l. Proof Lemma A.1 . Welling & Teh ( 2011 ) define the SGLD update rule as in eq . 3 . This rule can be applied to the Bayesian linear regression problem over databases D1 , D2 as following p ( θj ) = N ( θj ; 0 , α−1 ) ⇒ ln p ( θj ) = ln ( 1√ 2πα−1 ) − 1 2 θ2jα⇒ ∇θjp ( θj ) = −θjα p ( yi|θj ) = N ( yj ; θjxi , β−1 ) ⇒ ln p ( yi|θj ) = ln ( 1√ 2πβ−1 ) − 1 2 ( yi − θjxi ) 2β ⇒ ∇θjp ( yi|θj ) = ( yi − θjxi ) xiβ ⇒ θj+1 = θj + η 2 [ −θjα+ n ( yi − θjxj ) xiβ ] + √ ηjξi = θj [ 1− η 2 ( α+ nx2jβ ) ] + η 2 nyixiβ + √ ηξj = θj [ 1− η 2 ( α+ nx2jβ ) ] + η 2 ncx2iβ + √ ηξj . By using standard tools for solving first-order non-homogeneous recurrence relations with variable coefficients , the value of θ̂1n can be found . θ̂1n = λ̂λ n−1 ( θ0λ̂+ ρ̂+ √ ηξ λ̂ + n∑ i=2 ρ+ √ ηξ λ̂λi−1 ) = θ0λ̂λ n−1 + ( ρ̂+ √ ηξ ) λn−1 + ( ρ+ √ ηξ ) n∑ i=2 λn−1− ( i−1 ) = θ0λ̂λ n−1 + ( ρ̂+ √ ηξ ) λn−1 + ( ρ+ √ ηξ ) n∑ i=2 λn−1− ( i−1 ) = θ0λ̂λ n−1 + ( ρ̂+ √ ηξ ) λn−1 + ( ρ+ √ ηξ ) n−2∑ i=0 λi = θ0λ̂λ n−1 + ρ̂λn−1 + ρ n−2∑ i=0 λi + √ ηξ n−1∑ i=0 λi . Now by defining a new series - θ̂1 ( k+1 ) n = c1θ̂ 1 kn + c2 , and using the tools for solving first order non-homogeneous recurrence relations with constant coefficients , the value of θ̂1kn can be found θ̂1kn = c k 1 ( θ̂1n c1 + k∑ i=2 c2 ci1 ) = θ1nc k−1 1 + k∑ i=2 c2c k−i 1 = θ1nc k−1 1 + c2 k−2∑ i=0 ci1 = ( θ0c1 + c2 ) c k−1 1 + c2 k−2∑ i=0 ci1 = θ0 ( λ̂λ n−1 ) k + ( ρ̂λn−1 + ρ n−2∑ i=0 λi + √ ηξ n−1∑ i=0 λi ) k−1∑ j=0 ( λ̂λn−1 ) j . The proof for θ̂rkn is done in similar manner . Corollary A.1 . 
θ̂r ( k+1 ) n ∼ N ( θ̂ r ( k+1 ) n ; µ̂ r ( k+1 ) n , ( σ̂ r ( k+1 ) n ) 2 ) . Lemma A.2 . µkn+n − µ̂rkn+n ≥ λn−1 ncx2hβ α+nx2hβ λk ( n−1 ) ( λ̂k+1 − λk+1 ) . Proof Lemma A.2 . The proof of this lemma is separated into two cases , for r = 1 and for r > 1 . For r = 1 , it is easy to derive eq . 18 from lemma A.1 , using E [ θ0 ] = 0 and E [ ξ ] = 0. µ̂1 ( k+1 ) n = ρ n−2∑ i=0 λi k∑ j=0 ( λ̂λ ( n−1 ) ) j + ρ̂λn−1 k∑ j=0 ( λ̂λ ( n−1 ) ) j µkn+n = ρ n−2∑ i=0 λi k∑ j=0 λjn + ρλn−1 k∑ r=0 λrn ( 18 ) We use the sum of a geometric sequence to get µ̂1 ( k+1 ) n = ρ n−2∑ i=0 λi k∑ j=0 ( λ̂λ ( n−1 ) ) j + ρ̂λn−1 k∑ j=0 ( λ̂λ ( n−1 ) ) j = ( ρ ( 1− λn−1 1− λ ) + ρ̂λn−1 ) 1− ( λ̂λn−1 ) k+1 1− λ̂λn−1 . Therefore the difference between the means can be lower bounded : µkn+n − µ̂1kn+n = 1− λ ( k+1 ) n 1− λn [ ρ ( 1− λn−1 1− λ ) + ρλn−1 ] − 1− ( λ̂λ ( n−1 ) ) k+1 1− λ̂λn−1 [ ρ ( 1− λn−1 1− λ ) + ρ̂λn−1 ] =∗ 1− λ ( k+1 ) n 1− λn ncx2β α+ nx2hβ ( 1− λn ) − 1− ( λ̂λ ( n−1 ) ) k+1 1− λ̂λn−1 ncx2β α+ nx2hβ ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = ( 1− λ ( k+1 ) n ) ncx 2β α+ nx2hβ − 1− ( λ̂λ ( n−1 ) ) k+1 1− λ̂λn−1 ncx2β α+ nx2hβ ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = ncx2β α+ nx2hβ [ ( 1− λ ( k+1 ) n ) − 1− ( λ̂λ ( n−1 ) ) k+1 1− λ̂λn−1 ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) ] =∗∗ ncx2β α+ nx2hβ ( λn−1 34 η2α ( 1− λ̂k+1λ ( k+1 ) ( n−1 ) ) + λ ( k+1 ) ( n−1 ) ( λ̂k+1 − λk+1 ) ( 1− λn−1λ̂ ) 1− λn−1λ̂ ) ≥ ncx2β α+ nx2hβ ( λ ( k+1 ) ( n−1 ) ( λ̂k+1 − λk+1 ) ( 1− λn−1λ̂ ) 1− λn−1λ̂ ) = ncx2β α+ nx2hβ λ ( k+1 ) ( n−1 ) ( λ̂k+1 − λk+1 ) = λn−1 ncx2β α+ nx2hβ λk ( n−1 ) ( λ̂k+1 − λk+1 ) where equality * holds from claims D.1 , D.2 , D.3 , equality * * holds from claim D.5 , and the inequality holds because λ < λ̂ < 1 . This proves Lemma A.2 for r = 1 . For the case of r > 1 , from Lemma A.1 it is easy to see θ̂r > 1 ( k+1 ) n = [ [ θ0λ r−1 + ρ r−2∑ i=0 λi + √ η r−2∑ i=0 λiξi ] λ̂ kλk ( n−1 ) + k−1∑ j=0 ( λ̂λn−1 ) j [ ρ̂λn−1 + ρ n−2∑ i=0 λi + √ η n−1∑ i=0 λiξi ] ] λ̂λn−r+ ρ̂λn−r + ρ n−r−1∑ j=0 λj + √ η n−r∑ j=0 ξλj . 
Therefore µ̂r > 1kn+n follows µ̂r > 1 ( k+1 ) n = [ [ ρ r−2∑ i=0 λi ] λ̂kλk ( n−1 ) + k−1∑ j=0 ( λ̂λn−1 ) j [ ρ̂λn−1 + ρ n−2∑ i=0 λi ] ] λ̂λn−r + ρ̂λn−r + ρ n−r−1∑ j=0 λj . Consequently the difference in means for r > 1 can be lower bounded : µkn+n − µ̂rkn+n = λn−r [ λρλkλk ( n−1 ) r−2∑ i=0 λi + λ k−1∑ j=0 ( λλn−1 ) j ( ρλn−1 + ρ n−2∑ i=0 λi ) − λ̂ρλ̂kλk ( n−1 ) r−2∑ i=0 λi − λ̂ k−1∑ j=0 ( λ̂λn−1 ) j ( ρ̂λn−1 + ρ n−2∑ i=0 λi ) ] + λn−r ( ρ− ρ̂ ) = λn−rλk ( n−1 ) ρ ( λk+1 − λ̂k+1 ) r−2∑ i=0 λi + λn−r k−1∑ j=0 λ ( n−1 ) j [ λn−1 ( ρλj+1− ρ̂λ̂j+1 ) + ( λj+1 − λ̂j+1 ) ρ n−2∑ i=0 λi ] + λn−r ( ρ− ρ̂ ) =∗ λn−rλk ( n−1 ) ρ ( λk+1 − λ̂k+1 ) 1− λ r−1 1− λ + λn−r ncx2hβ α+ nx2hβ [ λ ( 1− λkn ) − λ̂ 1− ( λn−1λ̂ ) k 1− λn−1λ̂ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) ] + λn−r ( ρ− ρ̂ ) =∗∗ λn−rλk ( n−1 ) ρ ( λk+1 − λ̂k+1 ) 1− λ r−1 1− λ + λn−r ncx2hβ α+ nx2hβ [ ( λ− λ̂ ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + λk ( n−1 ) ( λ̂k+1 − λk+1 ) ] + λn−r ( ρ− ρ̂ ) =∗∗∗ λn−r ncx2hβ α+ nx2hβ λk ( n−1 ) ( λk+1 − λ̂k+1 ) ( 1− λr−1 ) + λn−r ncx 2 hβ α+ nx2hβ [ ( λ− λ̂ ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + λk ( n−1 ) ( λ̂k+1 − λk+1 ) ] + λn−r ( ρ− ρ̂ ) = λn−r ncx2hβ α+ nx2hβ λk ( n−1 ) ( λk+1 − λ̂k+1 ) [ 1− λr−1 − 1 ] + λn−r ncx2hβ α+ nx2hβ ( λ− λ̂+ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + λn−r ( ρ− ρ̂ ) = λn−r ncx2hβ α+ nx2hβ λk ( n−1 ) ( λ̂k+1 − λk+1 ) λr−1+ λn−r ncx2hβ α+ nx2hβ ( λ− λ̂+ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + λn−r ( ρ− ρ̂ ) = λn−1 ncx2hβ α+ nx2hβ λk ( n−1 ) ( λ̂k+1 − λk+1 ) + λn−r ncx2hβ α+ nx2hβ ( λ− λ̂+ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + λn−r ( ρ− ρ̂ ) > ∗∗∗∗ λn−1 ncx2hβ α+ nx2hβ λk ( n−1 ) ( λ̂k+1 − λk+1 ) where equality * holds from claims D.6 and D.7 , equality * * holds from claim D.10 , equality * * * holds from claim D.1 , and equality * * * * holds from claim D.11 and λ̂ > λ. Lemma A.3 . For x2hβ > 3 , n > 1 2αx2hβ − 1 x2hβ , ∃k̇ ∈ R+ such that upper bounds defined in eq . 
19 hold for all 0 < k ≤ k̇ : ( σ1 ( k+1 ) n ) 2 ≤ 2 ( λ̂λn−1 ) 2 1 α ( λ̂λn−1 ) 2k ( σr > 1 ( k+1 ) n ) 2 ≤ 6 ( λ̂λn−r ) 2 1 α ( λ̂λn−1 ) 2k . ( 19 ) Proof Lemma A.3 . The proof will be separated into two cases , r = 1 and r > 1 . ( σ̂1kn+n ) 2 can be easily computed from lemma A.1 using the fact that both the noise and prior are distributed normally . A first general upper bound on ( σ̂1kn+n ) 2 is found at eq . 20 . ( σ1kn+n ) 2 = 1 α ( λ̂λ ( n−1 ) ) 2 ( k+1 ) + η n−1∑ i=0 λ2i k∑ j=0 ( λ̂2λ2 ( n−1 ) ) j = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η n−1∑ i=0 λ2i k∑ j=0 ( λ̂λ ( n−1 ) ) 2j ( λ̂λ ( n−1 ) ) 2 ( k+1 ) ] = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η n−1∑ i=0 λ2i k∑ j=0 ( λ̂λ ( n−1 ) ) 2 ( j− ( k+1 ) ) ] ≤ ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η n−1∑ i=0 λ2i k∑ j=0 λ2n ( j− ( k+1 ) ) ] = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + η ( k+1 ) n∑ i=1 λ−2i ] = ( λ̂λ ( n−1 ) ) 2 ( k+1 ) [ 1 α + ηλ−2 1− λ−2 ( k+1 ) n 1− λ−2 ] ( 20 ) where the inequality holds because λ < λ̂ By claim D.12 , this upper bound can be further bounded for k̇ ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 such that eq . 21 will hold , therefore proving the bound for r = 1 . 
( λ̂λ ( n−1 ) ) 2 ( k̇+1 ) [ 1 α + ηλ−2 1− λ−2 ( k̇+1 ) n 1− λ−2 ] ≤ 2 ( λ̂λ ( n−1 ) ) 2 ( k̇+1 ) 1 α ( 21 ) For r > 1 , ( σ̂r > 1kn+n ) 2 can be bounded as following ( σr > 1kn+n ) 2 = ( λ̂λn−r ) 2η [ ( λ̂kλk ( n−1 ) ) 2 r−2∑ i=0 λ2i + k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i + 1 α ( λ̂λn−1 ) 2k ( λ̂λn−1 ) 2 ≤∗ ( λ̂λn−r ) 2η [ ( λ̂kλk ( n−1 ) ) 2 r−2∑ i=0 λ2i + k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i + 1 α ( λ̂λn−1 ) 2k ( λ̂λn−r ) 2 = ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k + η ( λ̂λn−1 ) 2k r−2∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i ≤∗∗ ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k + η ( λ̂λn−1 ) 2k n−1∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] + η n−r∑ i=0 λ2i ≤∗∗∗ 2 ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k̇ + η ( λ̂λn−1 ) 2k n−1∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] ( 22 ) where inequality * follows from λ < 1 and r > 1 , inequality * * follows from r ≤ n , and inequality * * * holds from claim D.15 . For k ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 , this bound can be further developed 2 ( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k̇ + η ( λ̂λn−1 ) 2k n−1∑ i=0 λ2i + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] ≤ 6 ( λ̂λn−r ) 2 1 α ( λ̂λn−1 ) 2k̇ . ( 23 ) The inequality holds from claims D.12 , D.14 , which provide the bound for r > 1 . All that is left is to prove that 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 > 0 , which is done in Claim D.22 . Lemma A.4 . Mark k̇ = 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 , for the conditions of Lemma A.3 ( σ1dk̇en+n ) 2 ≤ ( 1 + 2e 1 x2β ) ( λ̂λn−1 ) 2 1 α ( λ̂λ ( n−1 ) ) 2dk̇e ( σr > 1dk̇en+n ) 2 ≤ 6 ( λ̂λn−r ) 2 1 α ( λ̂λn−1 ) 2dk̇e . Proof Lemma A.4 . This proof will be separated into two cases , for r > 1 and for r = 1 . For r > 1 , the bound found in eq . 22 , has no dependence on the choice of k , therefore holds also for dk̇e . This bound was , in turn , developed for k̇ at eq . 23 using three claims . If these claims also hold for dk̇e , then the bound in eq . 
23 also holds for dk̇e , and the lemma is proved for r > 1 . Claims D.14 , D.12 hold for all k ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) , and since dk̇e ≤ k̇ + 1 = 1 2n logλ ( 1 1+ 1αη ( 1−λ2 ) ) , they holds for dk̇e . Claim D.15 was proved for all k , hence also holds for dk̇e . For r = 1 , the bound found at eq . 20 is applicable for all k , hence ( σ1 ( dk̇e+1 ) n ) 2 ≤ ( λ̂λn−1 ) 2 ( dk̇e+1 ) [ 1 α + ηλ−2 1− λ−2 ( dk̇e+1 ) n 1− λ−2 ] ≤ ( λ̂λn−1 ) 2 ( dk̇e+1 ) 1 α ( 1 + 2e 1 x2β ) where the last inequality holds from claim D.17 . Lemma A.5 . For k̇ defined in Lemma A.4 , the conditions of Lemma A.3 , and n > max { α x2hβ , α x2hβ ( e 2 x2 h β − 2 ) + 1 2x2hβ } ( µdk̇en+n − µ̂rdk̇en+n ) 2 ( σrdk̇en+n ) 2 ≥ e − 2 x2 h β α v1 ( 3 32x2hβ ) 2 ( c n ) 2 v1 = max { 6 , 1 + 2e 1 x2 h β } . Proof Lemma A.5 . ( µdk̇en+n − µ̂rdk̇en+n ) 2 σrdk̇en+n ≥ ( λn−1 ncx2hβ α+nx2hβ λdk̇e ( n−1 ) ( λ̂dk̇e+1 − λdk̇e+1 ) ) 2 v1 ( λ̂λn−r ) 2 1 α ( λ̂λ n−1 ) 2dk̇e = λ2dk̇e ( n−1 ) ( λn−1 ncx2hβ α+nx2hβ ( λ̂dk̇e+1 − λdk̇e+1 ) ) 2 v1 ( λ̂λn−r ) 2 1 α ( λ̂λ n−1 ) 2dk̇e = ( λn−1 ncx2hβ α+nx2hβ ( λ̂dk̇e+1 − λdk̇e+1 ) ) 2 v1 ( λ̂λn−r ) 2 1 α λ̂ 2dk̇e = αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( λ̂dk̇e+1 − λdk̇e+1 ) 2 λ̂2dk̇e+1 = αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 1− λ dk̇e+1 λ̂dk̇e+1 ) 2 ≥ αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 1− ( 1− 3 4nx 2β ( α+ nx2hβ ) 2 − ( α+ 14nx2β ) ) ) 2 = αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 3 4nx 2 hβ ( α+ nx2hβ ) 2 − ( α+ 14nx2β ) ) 2 ≥ αλ2 ( r−1 ) v1 ( ncx2hβ α+ nx2hβ ) 2 ( 3 4nx 2β ( α+ nx2hβ ) 2 ) 2 ≥ αλ 2 ( r−1 ) v1 ( ncx2β 2nx2hβ ) 2 ( 3 4nx 2β ( 2nx2β ) 2 ) 2 = αλ2 ( r−1 ) v1 ( c 2 ) 2 ( 3 4 4nx2hβ ) 2 = αλ2 ( r−1 ) v1 ( 3 32x2hβ ) 2 ( c n ) 2 ≥ αλ2 ( n−1 ) v1 ( 3 32x2hβ ) 2 ( c n ) 2 ≥ e − 2 x2 h β α v1 ( 3 32x2hβ ) 2 ( c n ) 2 where first inequality holds from A.3 , A.4 and the definition of v1 , the second inequality follows claim D.17 and claim D.22 , fourth inequality holds under the assumption of nx2hβ > α ⇐⇒ n > α x2hβ , and last inequality holds from claim 
D.19.

Lemma A.6. For the Bayesian linear regression problem over database $D_1$, the conditions of Lemma A.5, and $\dot{k}$ as defined in Lemma A.4, approximate sampling by running SGLD for $(\lceil\dot{k}\rceil+1)n$ steps will not be $(\epsilon, \delta)$ differentially private for $\delta < 0.5$ and
$$\epsilon < e^{-\frac{2}{x_h^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32x_h^2\beta}\right)^2\left(\frac{c}{n}\right)^2 + \ln(0.5-\delta), \qquad v_1 = \max\left\{6,\ 1+2e^{\frac{1}{x_h^2\beta}}\right\}.$$

Proof Lemma A.6. According to definition 1, to show that releasing $\theta_{(\lceil\dot{k}\rceil+1)n}$ is not $(\epsilon,\delta)$ private it is enough to exhibit one set $S$ such that $p(\theta_{(\lceil\dot{k}\rceil+1)n} \in S \mid D_1) > e^{\epsilon}\, p(\hat{\theta}_{(\lceil\dot{k}\rceil+1)n} \in S \mid D_2) + \delta$. Consider $S = \{s \mid s > \mu_{(\lceil\dot{k}\rceil+1)n}\}$. From claim D.23, and since $\theta_{(\lceil\dot{k}\rceil+1)n} \sim N(\mu_{(\lceil\dot{k}\rceil+1)n}, \sigma^2_{(\lceil\dot{k}\rceil+1)n})$, eq. 24 holds. Writing $K = e^{-\frac{2}{x_h^2\beta}}\frac{\alpha}{2v_1}\left(\frac{3}{32x_h^2\beta}\right)^2\left(\frac{c}{n}\right)^2$ for brevity, the conditions for the right-hand term of eq. 24 to be smaller than 0 (thus making the approximate sampling not $(\epsilon,\delta)$ private) are derived in eq. 25, therefore proving the lemma.
$$e^{\epsilon}\, p(\hat{\theta}_{(\lceil\dot{k}\rceil+1)n} \in S \mid D_2) + \delta - p(\theta_{(\lceil\dot{k}\rceil+1)n} \in S \mid D_1) \le e^{\epsilon} e^{-K} + \delta - 0.5 \quad (24)$$
$$e^{\epsilon} e^{-K} + \delta - 0.5 < 0 \iff e^{\epsilon} e^{-K} < 0.5 - \delta \iff \epsilon - K < \ln(0.5 - \delta) \iff \epsilon < K + \ln(0.5 - \delta) \quad (25)$$

B PROPOSE TEST SAMPLE SUPPLEMENTARY

$n_{\min}$, which is used in algorithm 4.3, is defined as follows:
$$\nu = \frac{2}{\epsilon}\ln\left(\frac{1}{\delta}\right) + 1$$
$$n_{b1} = \max\left\{1 + \frac{x_h^2}{x_l^2}\frac{8}{\epsilon},\ 1 + \nu\frac{x_h^2}{x_l^2}\left(1 + \frac{8(\nu-1)}{\epsilon}\right),\ \left(\frac{16\nu\beta x_h^4}{\frac{9}{10}\epsilon x_l^2}\right)^{\frac{1}{1-2\gamma_1}},\ \left(32\nu\beta\left(\frac{(x_h^2\beta)(x_h^2\alpha + x_h^4\beta)}{\frac{9}{10}\epsilon(x_l^2\beta)^2}\right)\breve{m}\right)^{\frac{1}{2-\gamma_1}},\ \left(32\nu\beta\left(\frac{(x_h^2\beta)(x_h^2\alpha + x_h^4\beta)}{\frac{9}{10}\epsilon(x_l^2\beta)^2}\right)\right)^{\frac{1}{2-2\gamma_1}},\ \left(8\nu\left(\frac{(x_h^2\alpha + x_h^4\beta)^2}{\frac{9}{10}\epsilon x_l^6\beta}\right)\breve{m}\right)^{\frac{2}{3}},\ \left(8\nu\left(\frac{(x_h^2\alpha + x_h^4\beta)^2}{\frac{9}{10}\epsilon x_l^6\beta}\right)\right)^{\frac{2}{3-2\gamma_1}}\right\}$$
$$n_{b2} = \max\left\{1 + \frac{x_h^2}{x_l^2}\frac{10\nu}{\beta},\ 1 + \nu\frac{x_h^2}{x_l^2}\right\}$$
$$n_{\min} = \max\left\{n_{b1},\ n_{b2},\ n_1^{\frac{\rho_2}{\gamma_1}}\right\}. \quad (26)$$

C PROPOSE TEST SAMPLE PRIVACY

Proof Claim 4.2. We set the algorithm parameters as in eq. 27, with the matching databases $D_3, D_4$ defined in eq. 8.
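The single-set witness argument of Lemma A.6 above — exhibiting one set $S$ on which the Gaussian output distributions under the two databases violate the $(\epsilon, \delta)$ inequality — can be illustrated numerically. A minimal sketch, assuming illustrative means, standard deviation, and privacy parameters (these are not the values fixed by the lemma or by eq. 27):

```python
import math

def gauss_tail(mean, std, t):
    """P(X > t) for X ~ N(mean, std^2), via the complementary error function."""
    return 0.5 * math.erfc((t - mean) / (std * math.sqrt(2.0)))

# Illustrative posteriors under two neighbouring databases: under D1 the
# sampler concentrates around mu1, under D2 around mu2 (assumed values).
mu1, mu2, sigma = 1.0, 0.0, 0.25
eps, delta = 0.1, 0.05

# The witness set S = {s | s > mu1}, mirroring S = {s | s > mu_((k+1)n)}.
p1 = gauss_tail(mu1, sigma, mu1)  # = 0.5: mu1 is the median under D1
p2 = gauss_tail(mu2, sigma, mu1)  # Gaussian tail four stds out under D2

# (eps, delta)-DP would force p1 <= exp(eps) * p2 + delta; the witness
# shows the inequality fails once the means separate enough.
violation = p1 - (math.exp(eps) * p2 + delta)
print(p1, p2, violation > 0)
```

Here the separation $(\mu_1 - \mu_2)/\sigma = 4$ plays the role of the mean gap that Lemma A.5 establishes after $(\lceil\dot{k}\rceil+1)n$ SGLD steps.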
We note that we only define a lower bound over $n_1$; it will be updated later on.
$$\rho_3 = 1.15;\quad \rho_2 = 0.45;\quad \rho_1 = 1.25;\quad \gamma_1 = 0.49;\quad x_l = x_h/2$$
$$n_1 > \max\left\{\frac{2}{\epsilon}\log\frac{1}{2\delta},\ \frac{4}{\epsilon}\log\frac{1}{2\delta},\ 2^{10\rho_1},\ 2^{9+10\rho_2}\right\}$$
$$\beta = 3;\quad x_h = 1;\quad \alpha = 1 \quad (27)$$
Mark the return value of the algorithm as $r$, the event of the algorithm running on database $D_3$ with $W = D_3$ as $A_{D_3}$, the event of the algorithm running on database $D_4$ with $W = D_4$ as $A_{D_4}$, and $S = \{s \mid s > \mu_i\}$, where $\mu_i$ is the mean of the sample distribution at the $i$'th SGLD step given database $D_3$ (as defined in subsection 4.2). We will show that $\forall \epsilon \in \mathbb{R}_{>0}$ and $\delta < \frac{1}{6}$ there exists $n_1$ such that eq. 28 holds.
$$P(r \in S \mid D_3) > e^{\epsilon} P(r \in S \mid D_4) + \delta \quad (28)$$
We first show that
$$P(r \in S \wedge A^c_{D_3} \mid D_3) = 0, \qquad P(r \in S \wedge A^c_{D_4} \mid D_4) = 0. \quad (29)$$
Notice that the algorithm can return a result in $S$ only if it reached step 19. Consider an event where the algorithm reached step 19 and $A^c_{D_3}$ holds. From $A^c_{D_3}$, $\exists (x_i, y_i) \in D_3$ such that $|\frac{y_i}{x_i} - \breve{m}| \ge \frac{n^{\rho_2}}{2}$. However, since $\forall (x_i, y_i) \in D_3: \frac{y_i}{x_i} = n_1^{\rho_3}$, it follows that $\forall (x_i, y_i) \in D_3: |\frac{y_i}{x_i} - \breve{m}| \ge \frac{n^{\rho_2}}{2}$, and therefore $|W| = 0$. Under the assumption that sampling from $p(\theta \mid \{\})$ returns null, in this case the algorithm also returns null, and therefore $P(r \in S \wedge A^c_{D_3} \mid D_3) = 0$. The same arguments hold for $D_4$. Following eq. 29, to prove eq. 28 it is enough to prove eq. 30.
$$P(r \in S \mid D_3, A_3)\, P(A_3 \mid D_3) \ge_{*} P(r \in S \mid D_3, A_3) - 5\delta >_{**} e^{\epsilon} P(r \in S \mid D_4, A_4) + \delta \ge e^{\epsilon} P(r \in S \mid D_4, A_4)\, P(A_4 \mid D_4) + \delta = e^{\epsilon} P(r \in S \wedge A_4 \mid D_4) + \delta \quad (30)$$
From claim C.1, $\exists n_{bound1}$ such that $\forall n_1 > n_{bound1}$ inequality * holds. From Lemma 4.6, for $n_1$ big enough $\exists T \in \mathbb{Z}_{>0}$ such that eq. 31 holds (where $6\delta < 0.5$ according to the claim conditions). Therefore, $\exists k, n_{bound2} \in \mathbb{R}_{>0}$ such that $\forall n_1 > n_{bound2}$: $\epsilon' > k n_1^{2(\rho_3-1)}$ and eq. 31 holds. As $\rho_3 > 1$, by choosing $n_1 > \max\{n_{bound2}, (\frac{\epsilon}{k})^{\frac{1}{2(\rho_3-1)}}\}$ we get that $\epsilon' > \epsilon$.
Consequently, by choosing $n_1 > \max\{n_{bound1}, n_{bound2}, (\frac{\epsilon}{k})^{\frac{1}{2(\rho_3-1)}}\}$, inequalities * and ** hold, and the claim is proved.
$$\epsilon' = \Omega(n_1^{2(\rho_3-1)}), \qquad P(r \in S \mid D_3, A_3) > e^{\epsilon'} P(r \in S \mid D_4, A_4) + 6\delta \quad (31)$$

Claim C.1. $\exists n_{bound1} \in \mathbb{Z}_{>0}$ such that the probability of algorithm 4.3 reaching step 19 with $W = D_3$ (mark this event $A$) is greater than or equal to $1 - 5\delta$ for all $n_1 > n_{bound1}$.

Proof. Mark the event $n_W > n_{\min} \wedge \breve{m} \in [m - \frac{n^{\rho_2}}{2}, m + \frac{n^{\rho_2}}{2}] \wedge n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1} \wedge \breve{n}_1 \le n_1 \wedge V = D$ as $B$. Since $P(A \mid D_3, B) = 1$, it follows that $P(A \mid D_3) \ge P(A \wedge B \mid D_3) = P(A \mid B, D_3)\, P(B \mid D_3) = P(B \mid D_3)$. Therefore, if $\exists n_{lb}$ such that $\forall n_1 > n_{lb}: P(B \mid D_3) \ge 1 - 5\delta$, the claim is proved.
$$\begin{aligned} P(B \mid D_3) &= P\left(\breve{m} \in [m - \tfrac{n^{\rho_2}}{2}, m + \tfrac{n^{\rho_2}}{2}] \wedge n_W > n_{\min} \,\middle|\, D_3, V = D, n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1}, \breve{n}_1 \le n_1\right) \\ &\quad \cdot P\left(n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1} \,\middle|\, V = D, \breve{n}_1 \le n_1, D_3\right) P\left(V = D, \breve{n}_1 \le n_1 \,\middle|\, D_3\right) \\ &\ge P\left(\breve{m} \in [m - \tfrac{n^{\rho_2}}{2}, m + \tfrac{n^{\rho_2}}{2}] \wedge n_W > n_{\min} \,\middle|\, D_3, V = D, n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1}, \breve{n}_1 \le n_1\right) - 3\delta \\ &= P\left(n_W > n_{\min} \,\middle|\, D_3, V = D, n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1}, \breve{n}_1 \le n_1, \breve{m} \in [m - \tfrac{n^{\rho_2}}{2}, m + \tfrac{n^{\rho_2}}{2}]\right) \\ &\quad \cdot P\left(\breve{m} \in [m - \tfrac{n^{\rho_2}}{2}, m + \tfrac{n^{\rho_2}}{2}] \,\middle|\, D_3, V = D, n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1}, \breve{n}_1 \le n_1\right) - 3\delta \\ &\ge P\left(n_W > n_{\min} \,\middle|\, D_3, V = D, n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1}, \breve{n}_1 \le n_1, \breve{m} \in [m - \tfrac{n^{\rho_2}}{2}, m + \tfrac{n^{\rho_2}}{2}]\right) - 4\delta \\ &\ge 1 - 5\delta \end{aligned} \quad (32)$$
By corollary C.1 and claim C.3, the first inequality holds for $n_1$ big enough. By claim C.4 the second inequality holds for $n_1$ big enough, and by claim C.5 the third inequality holds for $n_1$ big enough. Therefore, for $n_1$ big enough eq. 32 holds and the claim is proved.

Claim C.2. For $n_1 > \max\{2^{10\rho_1}, \frac{4}{\epsilon}\log\frac{1}{2\delta}\}$: $P(\breve{n}_1^{\rho_1} \ge n_1^{\rho_3} \wedge \breve{n}_1 \le n_1 \mid D_3) \ge 1 - 2\delta$.

Proof.
Mark the noise added at step 6 as $l_1$.
$$\begin{aligned} P(\breve{n}_1^{\rho_1} \ge n_1^{\rho_3} \wedge \breve{n}_1 \le n_1 \mid D_3) &= P\left((n_1 + l_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta})^{\rho_1} \ge n_1^{\rho_3} \wedge n_1 + l_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \le n_1 \,\middle|\, D_3\right) \\ &= P\left((n_1 + l_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta})^{\rho_1} \ge n_1^{\rho_3} \wedge l_1 \le \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \,\middle|\, D_3\right) \\ &= P\left((n_1 + l_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta})^{\rho_1} \ge n_1^{\rho_3} \wedge |l_1| \le \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \,\middle|\, D_3\right) + P\left((n_1 + l_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta})^{\rho_1} \ge n_1^{\rho_3} \wedge l_1 \le -\tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \,\middle|\, D_3\right) \\ &\ge P\left((n_1 + l_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta})^{\rho_1} \ge n_1^{\rho_3} \wedge |l_1| \le \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \,\middle|\, D_3\right) \\ &= P\left((n_1 + l_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta})^{\rho_1} \ge n_1^{\rho_3} \,\middle|\, |l_1| \le \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta}, D_3\right) P\left(|l_1| \le \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \,\middle|\, D_3\right) \\ &\ge P\left((n_1 - \tfrac{2}{\epsilon}\log\tfrac{1}{2\delta})^{\rho_1} \ge n_1^{\rho_3} \,\middle|\, D_3\right) - 2\delta = 1 - 2\delta \end{aligned}$$
where the last inequality holds from the following equation:
$$P\left(|l_1| \le \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta}\right) = 1 - 2P\left(l_1 \le -\tfrac{1}{\epsilon}\log\tfrac{1}{2\delta}\right) = 1 - \exp\left(-\epsilon\cdot\tfrac{1}{\epsilon}\log\tfrac{1}{2\delta}\right) = 1 - 2\delta$$
The last equality holds since $n_1 > 2^{10\rho_1} \Rightarrow n_1^{-1} < (\frac{1}{2})^{10\rho_1} \Rightarrow n_1^{-0.1} < (\frac{1}{2})^{\rho_1}$, and therefore
$$\left(n_1 - \tfrac{2}{\epsilon}\log\tfrac{1}{2\delta}\right)^{\rho_1} > \left(\tfrac{1}{2}n_1\right)^{\rho_1} > n_1^{\rho_1 - 0.1} = n_1^{\rho_3}.$$

Corollary C.1. $\forall n_1 > \max\{2^{10\rho_1}, \frac{4}{\epsilon}\log\frac{1}{2\delta}\}$: $P(V = D_3 \wedge \breve{n}_1 \le n_1) \ge 1 - 2\delta$.

Claim C.3. For $n_1 > \max\{2^{9+10\rho_2}, \frac{4}{\epsilon}\log\frac{1}{2\delta}\}$: $P(n_2^{1+\rho_2-0.1} > \breve{n}_1^{\rho_1} \mid D_3, V = D_3, \breve{n}_1 \le n_1) \ge 1 - \delta$.

Proof.
$$\begin{aligned} P(n_2^{0.9+\rho_2} > \breve{n}_1^{\rho_1} \mid D_3, V = D_3, \breve{n}_1 \le n_1) &\ge P(n_2^{0.9+\rho_2} > n_1^{\rho_1} \mid D_3, V = D_3) \\ &\ge P\left((n_1 - \tfrac{2}{\epsilon}\log\tfrac{1}{2\delta})^{0.9+\rho_2} > n_1^{\rho_1} \,\middle|\, D_3\right) P\left(n_2 \ge |V| - \tfrac{2}{\epsilon}\log\tfrac{1}{2\delta}\right) \\ &\ge P\left((n_1 - \tfrac{2}{\epsilon}\log\tfrac{1}{2\delta})^{0.9+\rho_2} > n_1^{\rho_1} \,\middle|\, D_3\right) - \delta = 1 - \delta \end{aligned}$$
where the second inequality holds since $P(Lap(\frac{1}{\epsilon}) < -\frac{1}{\epsilon}\log\frac{1}{2\delta}) < \delta$, and the last equality holds since
$$\left(n_1 - \tfrac{2}{\epsilon}\log\tfrac{1}{2\delta}\right)^{0.9+\rho_2} > \left(\tfrac{1}{2}n_1\right)^{0.9+\rho_2} > n_1^{1+\rho_2-0.2} = n_1^{\rho_1}.$$

Claim C.4. $\exists n_{lb1} \in \mathbb{Z}_{>0}$ such that $\forall n_1 > n_{lb1}$: $P(\breve{m} \in [m - \frac{n^{\rho_2}}{2}, m + \frac{n^{\rho_2}}{2}] \mid D_3, n_2^{0.9+\rho_2} > \breve{n}_1^{\rho_1}) \ge 1 - \delta$.

Proof.
Mark the noise added at step 13 as $l_1$.
$$\begin{aligned} P\left(\breve{m} \in [m - \tfrac{n^{\rho_2}}{2}, m + \tfrac{n^{\rho_2}}{2}] \,\middle|\, D_3, n_2^{0.9+\rho_2} > \breve{n}_1^{\rho_1}\right) &= P\left(l_1 \in [-\tfrac{n^{\rho_2}}{2}, \tfrac{n^{\rho_2}}{2}] \,\middle|\, D_3, n_2^{0.9+\rho_2} > \breve{n}_1^{\rho_1}\right) \\ &\ge 1 - 2\left(\tfrac{1}{2}\exp\left(-\tfrac{n^{\rho_2}}{2}\cdot\epsilon\cdot\frac{n_2(n_2-1)x_l^4}{\breve{n}_1^{\rho_1}\left(2(n_2-1)x_h^2 x_l^2 + x_h^4\right)}\right)\right) \\ &= 1 - \exp\left(-\frac{\epsilon\, n_2^{1+\rho_2}(n_2-1)x_l^4}{2\,\breve{n}_1^{\rho_1}\left(2(n_2-1)x_h^2 x_l^2 + x_h^4\right)}\right) \end{aligned}$$
Since $n_2^{0.9+\rho_2} > \breve{n}_1^{\rho_1}$, we have $\breve{n}_1^{\rho_1} = o(n_2^{1+\rho_2})$, and therefore for $n_1$ big enough the exponential term is smaller than $\delta$.

Claim C.5. $\exists n_{lb2} \in \mathbb{Z}_{>0}$ such that $\forall n_1 > n_{lb2}$: $P\left(n_W > n_{\min} \,\middle|\, V = D_3 \wedge n_2^{0.9+\rho_2} > \breve{n}_1^{\rho_1} \wedge \breve{m} \in [m - \frac{n^{\rho_2}}{2}, m + \frac{n^{\rho_2}}{2}], D_3\right) \ge 1 - \delta$.

Proof. For abbreviation, mark the event $B = V = D_3 \wedge n_2^{0.9+\rho_2} > \breve{n}_1^{\rho_1} \wedge \breve{m} \in [m - \frac{n^{\rho_2}}{2}, m + \frac{n^{\rho_2}}{2}]$. Mark the Laplace noise used in step 8 as $l_1$ and the Laplace noise used in step 15 as $l_2$, and write $E$ for the event $n_1 - \frac{2}{\epsilon}\log\frac{1}{2\delta} - \frac{1}{\epsilon}\log\frac{1}{\delta} > n_{\min}$.
$$\begin{aligned} P(n_W > n_{\min} \mid B, D_3) &= P\left(n_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} + l_2 > n_{\min} \,\middle|\, B, D_3\right) \\ &> P\left(n_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} - \tfrac{1}{\epsilon}\log\tfrac{1}{\delta} > n_{\min} \,\middle|\, B, D_3\right) P\left(l_2 > -\tfrac{1}{\epsilon}\log\tfrac{1}{\delta}\right) + P\left(n_1 - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} + l_2 > n_{\min} \wedge l_2 < -\tfrac{1}{\epsilon}\log\tfrac{1}{\delta} \,\middle|\, B, D_3\right) \\ &> P(E \mid B, D_3)\left(1 - \tfrac{\delta}{2}\right) > P(E \mid B, D_3) - \tfrac{\delta}{2} \\ &= P\left(E \,\middle|\, l_1 < \tfrac{1}{\epsilon}\log\tfrac{1}{\delta}, B, D_3\right) P\left(l_1 < \tfrac{1}{\epsilon}\log\tfrac{1}{\delta} \,\middle|\, B, D_3\right) + P\left(E \wedge l_1 \ge \tfrac{1}{\epsilon}\log\tfrac{1}{\delta} \,\middle|\, B, D_3\right) - \tfrac{\delta}{2} \\ &\ge P\left(E \,\middle|\, l_1 < \tfrac{1}{\epsilon}\log\tfrac{1}{\delta}, B, D_3\right) P\left(l_1 < \tfrac{1}{\epsilon}\log\tfrac{1}{\delta} \,\middle|\, B, D_3\right) - \tfrac{\delta}{2} \\ &\ge P\left(E \,\middle|\, l_1 < \tfrac{1}{\epsilon}\log\tfrac{1}{\delta}, B, D_3\right) - \delta \end{aligned}$$
From $B$ it holds that $|m - \breve{m}| < \frac{n^{\rho_2}}{2}$ and therefore $\breve{m} < m + \frac{n^{\rho_2}}{2}$, and in the case $l_1 < \frac{1}{\epsilon}\log\frac{1}{\delta}$ it holds that $n_2 < n_1 - \frac{1}{\epsilon}\log\frac{1}{2\delta} + \frac{1}{\epsilon}\log\frac{1}{\delta} < n_1 + \frac{1}{\epsilon}\log\frac{1}{\delta}$. Therefore $\breve{m} \le m + (n_1 + \frac{1}{\epsilon}\log\frac{1}{\delta})^{\rho_2}$. As $n_{\min} = O(\max\{\breve{m}^{\frac{2}{3}}, n_1^{\frac{\rho_2}{\gamma_1}}\})$, in the case $l_1 < \frac{1}{\epsilon}\log\frac{1}{\delta}$ and under $B$ it holds that $n_{\min} = O(\max\{(m + n_1^{\rho_2})^{\frac{2}{3}}, n_1^{\frac{\rho_2}{\gamma_1}}\}) = O(\max\{n_1^{\frac{2\rho_3}{3}}, n_1^{\frac{\rho_2}{\gamma_1}}\}) = o(n_1)$; therefore $\exists n_{lb2}$ such that $\forall n_1 > n_{lb2}$: $n_1 - \frac{2}{\epsilon}\log\frac{1}{2\delta} - \frac{1}{\epsilon}\log\frac{1}{\delta} > n_{\min}$.
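The Laplace tail identities invoked repeatedly in the claims above (and in Claim C.7 below) all reduce to $P(\mathrm{Lap}(\frac{1}{\epsilon}) \le -\frac{1}{\epsilon}\log\frac{1}{2\delta}) = \delta$. A minimal numerical sketch with illustrative $\epsilon, \delta$ (not values prescribed by the proof):

```python
import math

def lap_lower_tail(b, t):
    """P(Lap(0, b) <= -t) = 0.5 * exp(-t / b), for t >= 0."""
    return 0.5 * math.exp(-t / b)

eps, delta = 0.5, 0.01          # illustrative privacy parameters
b = 1.0 / eps                   # Laplace scale used by the mechanism
t = (1.0 / eps) * math.log(1.0 / (2.0 * delta))

# Identities used above:
#   P(|Lap(1/eps)| <= t) = 1 - 2*delta   (as in Claim C.2)
#   P( Lap(1/eps)  <= t) = 1 - delta     (as in Claim C.7)
p_abs = 1.0 - 2.0 * lap_lower_tail(b, t)
p_one = 1.0 - lap_lower_tail(b, t)
print(p_abs, p_one)  # approximately 0.98 and 0.99
```

Note the scale $b = 1/\epsilon$ corresponds to noising a count of sensitivity 1, which is exactly how $\breve{n}_1$, $n_2$, and $n_W$ are released.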
Consequently, $\forall n_1 > n_{lb2}$: $P\left(n_1 - \frac{2}{\epsilon}\log\frac{1}{2\delta} - \frac{1}{\epsilon}\log\frac{1}{\delta} > n_{\min} \,\middle|\, l_1 < \frac{1}{\epsilon}\log\frac{1}{\delta}, B, D_3\right) = 1$.

Definition 4. A randomized function $f(X, y): \chi^{n_1} \times \mathbb{R}^{n_2} \to \mathbb{R}$ is $(\epsilon, \delta)$-differentially private with respect to $X$ if $\forall S \subseteq \mathbb{R}$ and $\forall X, \hat{X} \in \chi^{n_1}$ with $\|X - \hat{X}\| \le 1$, eq. 33 holds.
$$P(f(X, y) \in S) \le \exp(\epsilon)\, P(f(\hat{X}, y) \in S) + \delta \quad (33)$$

Claim C.6. Calculating $\breve{n}_1, n_2$ is $(2\epsilon, 0)$ differentially private.

Proof. Since $n_1$ can differ by at most 1 between neighbouring databases, calculating $\breve{n}_1$ is protected via the Laplace mechanism. Since, for a given $\breve{n}_1$, the value $|V|$ can change by at most 1 between two neighbouring databases, calculating $n_2$ is $(\epsilon, 0)$ by the Laplace mechanism. Consequently, by the sequential composition theorem, the composition is $(2\epsilon, 0)$ differentially private.

Claim C.7. $P(n_2 \le |V| \mid D, \breve{n}_1) = 1 - \delta$.

Proof. Mark $l \sim Lap(\frac{1}{\epsilon})$,
$$P(n_2 \le |V| \mid D, \breve{n}_1) = P\left(|V| - \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} + l \le |V| \,\middle|\, D, \breve{n}_1\right) = P\left(l \le \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \,\middle|\, D, \breve{n}_1\right) = 1 - \tfrac{1}{2}\exp\left(-\epsilon\cdot\tfrac{1}{\epsilon}\log\tfrac{1}{2\delta}\right) = 1 - \delta$$

Claim C.8. Calculating $\breve{m}$ is $(\epsilon, 0)$ differentially private with respect to $D$ for given $\breve{n}_1, n_2$ and $n_2 < |V|$.

Proof. Mark by $\hat{D}$ a neighbouring database to $D$, and by $\hat{V}$ the $V$ induced by this database. If $V = \hat{V}$ the claim follows trivially. In case the $V$'s differ, assume w.l.o.g. that $|V| \ge |\hat{V}|$, and that if $|V| = |\hat{V}|$ then they differ in their last sample. Define $q = \sum_{(x_i, y_i) \in V \setminus \{x_{|V|}, y_{|V|}\}} x_i y_i$ and $z = \sum_{(x_i, y_i) \in V \setminus \{x_{|V|}, y_{|V|}\}} x_i^2$.
$$\begin{aligned} \left|\frac{q + x_{|V|}y_{|V|}}{z + x_{|V|}^2} - \frac{q + \hat{x}_{|V|}\hat{y}_{|V|}}{z + \hat{x}_{|V|}^2}\right| &= \left|\frac{q\hat{x}_{|V|}^2 + x_{|V|}y_{|V|}\hat{x}_{|V|}^2 + x_{|V|}y_{|V|}z - qx_{|V|}^2 - \hat{x}_{|V|}\hat{y}_{|V|}x_{|V|}^2 - \hat{x}_{|V|}\hat{y}_{|V|}z}{(z + x_{|V|}^2)(z + \hat{x}_{|V|}^2)}\right| \\ &\le \frac{qx_h^2 + \breve{n}_1^{\rho_1}x_h^2 z + \breve{n}_1^{\rho_1}x_h^4}{(z + x_l^2)z} \le \breve{n}_1^{\rho_1}\frac{2zx_h^2 + x_h^4}{(z + x_l^2)z} = \breve{n}_1^{\rho_1}\left(\frac{2x_h^2}{z + x_l^2} + \frac{x_h^4}{(z + x_l^2)z}\right) \\ &\le \breve{n}_1^{\rho_1}\left(\frac{2x_h^2}{|V|x_l^2} + \frac{x_h^4}{|V|(|V|-1)x_l^4}\right) \le \breve{n}_1^{\rho_1}\left(\frac{2x_h^2}{n_2 x_l^2} + \frac{x_h^4}{n_2(n_2-1)x_l^4}\right) = \breve{n}_1^{\rho_1}\,\frac{2(n_2-1)x_h^2 x_l^2 + x_h^4}{n_2(n_2-1)x_l^4} \end{aligned}$$
Therefore, by the Laplace mechanism, calculating $\breve{m}$ is $(\epsilon, 0)$ differentially private.

Claim C.9. Steps 6-13 are $(3\epsilon, \delta)$ differentially private.

Proof. Mark $\hat{D}$ as a neighbouring database.
$$\begin{aligned} P(\breve{m} \in S \mid D) &= \int_{(r_1, r_2) \in \mathbb{R}_{>0}\times\mathbb{R}_{>0}} P(\breve{m} \in S \mid D, \breve{n}_1 = r_1, n_2 = r_2)\, p(\breve{n}_1 = r_1, n_2 = r_2 \mid D)\, dr_1 dr_2 \\ &= \int_{\mathbb{R}_{>0}\times[1, |V|]} P(\breve{m} \in S \mid D, \breve{n}_1 = r_1, n_2 = r_2)\, p(\breve{n}_1 = r_1, n_2 = r_2 \mid D)\, dr_1 dr_2 + \int_{\mathbb{R}_{>0}\times(|V|, \infty]} P(\breve{m} \in S \mid D, \breve{n}_1 = r_1, n_2 = r_2)\, p(\breve{n}_1 = r_1, n_2 = r_2 \mid D)\, dr_1 dr_2 \\ &\le_{*} \int_{\mathbb{R}_{>0}\times[1, |V|]} P(\breve{m} \in S \mid D, \breve{n}_1 = r_1, n_2 = r_2)\, p(\breve{n}_1 = r_1, n_2 = r_2 \mid D)\, dr_1 dr_2 + \delta \\ &\le_{**} \int_{\mathbb{R}_{>0}\times[1, |V|]} e^{3\epsilon} P(\breve{m} \in S \mid \hat{D}, \breve{n}_1 = r_1, n_2 = r_2)\, p(\breve{n}_1 = r_1, n_2 = r_2 \mid \hat{D})\, dr_1 dr_2 + \delta \\ &\le \int_{\mathbb{R}_{>0}\times\mathbb{R}_{>0}} e^{3\epsilon} P(\breve{m} \in S \mid \hat{D}, \breve{n}_1 = r_1, n_2 = r_2)\, p(\breve{n}_1 = r_1, n_2 = r_2 \mid \hat{D})\, dr_1 dr_2 + \delta = e^{3\epsilon} P(\breve{m} \in S \mid \hat{D}) + \delta \end{aligned}$$
where inequality * follows claim C.7 and inequality ** follows claims C.8 and C.6.

Claim C.10. Steps 14-19 are $(\epsilon, \delta)$ differentially private with respect to $D$ for $|W| < n_{\min}$ and given $n_2, \breve{m}$.

Proof. Mark $l \sim Lap(\frac{1}{\epsilon})$, and mark $\hat{D}$ as a neighbouring database. Eq. 34 proves the claim.
$$P(S \mid D, |W| < n_{\min}, \breve{m}, n_2) = P(S \cap \{null\} \mid D, |W| < n_{\min}, \breve{m}, n_2) + P(S \cap \{null\}^c \mid D, |W| < n_{\min}, \breve{m}, n_2) \le e^{\epsilon} P(S \cap \{null\} \mid \hat{D}, |W| < n_{\min}, \breve{m}, n_2) + \delta \le e^{\epsilon} P(S \mid \hat{D}, |W| < n_{\min}, \breve{m}, n_2) + \delta \quad (34)$$
where the first inequality is true from eq. 35 and the Laplace mechanism for $n_W$.
$$P(null \mid D, |W| < n_{\min}, \breve{m}, n_2) = P\left(n_W < n_{\min} + \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta} \,\middle|\, D, |W| < n_{\min}, \breve{m}, n_2\right) \ge P\left(l < \tfrac{1}{\epsilon}\log\tfrac{1}{2\delta}\right) \ge 1 - \delta \quad (35)$$

Claim C.11. Step 19 is $(\epsilon, \delta)$ differentially private with respect to $D$ for $|W| \ge n_{\min}$ and given $n_2, \breve{m}$.

Proof. For given $n_2, \breve{m}$ and a neighbouring database, the group $W$ can change by at most one sample. Mark $n = |W|$ and $c = \breve{m}$. From eq. 36, it follows that $W \in \mathcal{D}$, as defined in eq. 5.
$$n \ge n_2^{\frac{\rho_2}{\gamma_1}} \Rightarrow n^{\frac{1}{2}} > n^{\gamma_1} \ge n_2^{\rho_2} \quad (36)$$
As $W \in \mathcal{D}$, $n \ge n_{b1}$, and $n \ge n_{b2}$, the problem of sampling from $p(\theta \mid W)$ for $|W| \ge n_{\min}$ satisfies the constraints of claim D.29. Therefore, one sample from $p(\theta \mid W)$ is $(\epsilon, \delta)$ differentially private.

Claim C.12. Steps 14-18 are $(\epsilon, 0)$ differentially private with respect to $D$ for $|W| > n_{\min}$ and given $\breve{m}, n_2$.

Proof. The only data released is $n_W$, and since the sensitivity of $|W|$ given $\breve{m}, n_2$ is 1, the Laplace mechanism ensures $(\epsilon, 0)$ differential privacy.

Corollary C.2. Steps 14-19 are $(2\epsilon, \delta)$ differentially private with respect to $D$ for $|W| > n_{\min}$ and given $\breve{m}, n_2$.

Corollary C.3. Steps 14-19 are $(2\epsilon, \delta)$ differentially private with respect to $D$ given $\breve{m}, n_2$.

D AUXILIARY CLAIMS

This subsection contains simple claims used to simplify the reading of the proofs. The claims in this subsection use the notation defined in eq. 17.

Claim D.1. $\frac{\rho}{1-\lambda} = \frac{ncx_h^2\beta}{\alpha + nx_h^2\beta}$.

Proof Claim D.1.
$$\rho\,\frac{1}{1-\lambda} = \frac{\eta}{2}ncx_h^2\beta\;\frac{1}{1 - \left(1 - \frac{\eta}{2}(\alpha + nx_h^2\beta)\right)} = ncx_h^2\beta\,\frac{1}{\alpha + nx_h^2\beta} = \frac{ncx_h^2\beta}{\alpha + nx_h^2\beta}$$

Claim D.2. $\rho\frac{1-\lambda^{n-1}}{1-\lambda} + \rho\lambda^{n-1} = \frac{ncx_h^2\beta}{\alpha + nx_h^2\beta}(1 - \lambda^n)$.

Proof Claim D.2.
$$\rho\,\frac{1-\lambda^{n-1}}{1-\lambda} + \rho\lambda^{n-1} = \rho\left(\frac{1 - \lambda^{n-1} + \lambda^{n-1} - \lambda^n}{1-\lambda}\right) = \rho\left(\frac{1-\lambda^n}{1-\lambda}\right) = \frac{ncx_h^2\beta}{\alpha + nx_h^2\beta}(1 - \lambda^n)$$
where the last equality holds from Claim D.1.

Claim D.3. $\rho\left(\frac{1-\lambda^{n-1}}{1-\lambda}\right) + \hat{\rho}\lambda^{n-1} = \frac{ncx_h^2\beta}{\alpha + nx_h^2\beta}\left(1 - \lambda^n\left(\frac{3}{4}\lambda^{-1} + \frac{1}{4}\right)\right)$.

Proof Claim D.3.
ρ ( 1− λn−1 1− λ ) + ρ̂λn−1 = ρ ( 1− λn−1 1− λ ) + ρ 1 4 λn−1 = ρ ( 1− 34λ n−1 − 14λ n 1− λ ) = ρ ( 1− λn ( 34λ −1 + 14 ) 1− λ ) = ncx2hβ α+ nx2hβ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) where the last equality holds from Claim D.1 . Claim D.4 . 14λ+ 3 4 − λ̂ = 3 4 η 2α . Proof Claim D.4 . 1 4 λ+ 3 4 − λ̂ = 1 4 ( 1− η 2 ( α+ nx2β ) ) + 3 4 − ( 1− η 2 ( α+ 1 4 nx2β ) ) = η 2 [ α+ 1 4 nx2β − 1 4 ( α+ nx2β ) ] = 3 4 η 2 α Claim D.5 . ( 1− λkn ) ( 1− λ̂λn−1 ) − ( 1− ( λ̂λ ( n−1 ) ) ) k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = λn−1 3 4 η 2 α ( 1− λ̂kλk ( n−1 ) ) + λk ( n−1 ) ( λ̂k − λk ) ( 1− λn−1λ̂ ) . Proof Claim D.5 . ( 1− λkn ) ( 1− λ̂λn−1 ) − ( 1− ( λ̂λ ( n−1 ) ) ) k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λkn ( λ̂λn−1 − 1 ) + ( λ̂λn−1 ) k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λk ( n−1 ) ( λk ( λ̂λn−1 − 1 ) + λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λk ( n−1 ) ( λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) − λk ( 1− λ̂λn−1 ) ) = λn−1 ( 1 4 λ+ 3 4 − λ̂ ) + λk ( n−1 ) ( λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) − λk ( 1− λn−1λ̂ ) ) =∗ λn−1 η 2 3 4 α+ λk ( n−1 ) ( λ̂k ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) − λk ( 1− λn−1λ̂ ) ) =∗ λn−1 η 2 3 4 α+ λk ( n−1 ) ( λ̂k ( 1− λn−1 ( λ̂+ 3 4 η 2 α ) − λk ( 1− λn−1λ̂ ) ) = λn−1 η 2 3 4 α− λ̂kλn−1 3 4 η 2 α+ λk ( n−1 ) ( λ̂k ( 1− λn−1λ̂ ) − λk ( 1− λn−1λ̂ ) ) = λn−1 3 4 η 2 α ( 1− λ̂kλk ( n−1 ) ) + λk ( n−1 ) ( λ̂k − λk ) ( 1− λn−1λ̂ ) where equality * holds from claim D.4 Claim D.6 . λ ∑k−1 j=0 λ ( n−1 ) jλj [ λn−1ρ+ ρ ∑n−2 i=0 λ i ] = λ ( 1− λkn ) ncx 2 hβ α+nx2hβ . Proof Claim D.6 . λ k−1∑ j=0 λ ( n−1 ) jλj [ λn−1ρ+ ρ n−2∑ i=0 λi ] = ρλ kn−1∑ i=0 λi = ρλ 1− λkn 1− λ =∗ λ ncx2hβ α+ nx2hβ ( 1− λkn ) Where equality * follows from claim D.1 . Claim D.7 . λ̂ ∑k−1 j=0 λ ( n−1 ) j λ̂j [ λn−1ρ̂+ ρ ∑n−2 i=0 λ i ] = λ̂ 1− ( λ n−1λ̂ ) k 1−λn−1λ̂ ncx2hβ α+nx2hβ ( 1−λn ( 34λ −1 + 14 ) ) . Proof Claim D.7 . 
λ̂ k−1∑ j=0 λ ( n−1 ) j λ̂j [ λn−1ρ̂+ ρ n−2∑ i=0 λi ] = λ̂ 1− ( λn−1λ̂ ) k 1− λn−1λ̂ [ λn−1ρ̂+ ρ 1− λn−1 1− λ ] =∗ λ̂ 1− ( λn−1λ̂ ) k 1− λn−1λ̂ ncx2hβ α+ nx2hβ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) Where equality * follows from claims D.1 , D.3 . Claim D.8 . λλk − λkλnλ̂− λ̂λ̂k + λ̂λ̂kλn ( 34λ −1 + 14 ) = ( 1− λ̂λ n−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 34 η 2α ) . Proof Claim D.8 . λλk − λkλnλ̂− λ̂λ̂k + λ̂λ̂kλn ( 3 4 λ−1 + 1 4 ) = λk+1 ( 1− λ̂λn−1 ) − λ̂k+1 ( 1− λn−1 ( 1 4 λ+ 3 4 ) ) =∗ λk+1 ( 1− λ̂λn−1 ) − λ̂k+1 ( 1− λn−1 ( λ̂+ 3 4 η 2 α ) ) = ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) where equality * holds from claim D.4 . Claim D.9 . λ ( 1− λkn ) ( 1− λn−1λ̂ ) − λ̂ ( 1− ( λn−1λ̂ ) k ) ( 1− λn ( 34λ −1 + 14 ) ) = ( λ− λ̂ ) ( 1− λ̂λn−1 ) + λn−1λ̂ [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] + λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λ̂k+1 − λk+1 ) ] . Proof Claim D.9 . λ ( 1− λkn ) ( 1− λn−1λ̂ ) − λ̂ ( 1− ( λn−1λ̂ ) k ) ( 1− λn ( 3 4 λ−1 + 1 4 ) ) = λ− λ̂− λnλ̂ ( 1− ( 3 4 λ−1 + 1 4 ) ) − λk ( n−1 ) [ λλk − λkλnλ̂− λ̂λ̂k + λ̂λ̂kλn ( 3 4 λ−1 + 1 4 ) ] =∗ λ− λ̂− λnλ̂ ( 1− ( 3 4 λ−1 + 1 4 ) ) − λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) ] = λ− λ̂− λn−1λ̂ ( λ− ( 3 4 + 1 4 λ ) ) − λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) ] =∗∗ λ− λ̂− λn−1λ̂ ( λ− ( λ̂+ 3 4 η 2 α ) ) − λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λk+1 − λ̂k+1 ) + λ̂k+1λn−1 ( 3 4 η 2 α ) ] = ( λ− λ̂ ) ( 1− λ̂λn−1 ) + λn−1λ̂ [ 3 4 η 2 α ( 1− λ̂kλk ( n−1 ) ) ] + λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λ̂k+1 − λk+1 ) ] Where equality * follows from claim D.8 and equality * * follows from claim D.4 . Claim D.10 . λ ( 1− λkn ) − λ̂1− ( λ n−1λ̂ ) k 1− λn−1λ̂ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) = ( λ− λ̂ ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + λk ( n−1 ) ( λ̂k+1 − λk+1 ) . Proof Claim D.10 . 
λ ( 1− λkn ) − λ̂1− ( λ n−1λ̂ ) k 1− λn−1λ̂ ( 1− λn ( 3 4 λ−1 + 1 4 ) ) = [ M5.d ] ( λ− λ̂ ) ( 1− λ̂λn−1 ) + λn−1λ̂ [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] + λk ( n−1 ) [ ( 1− λ̂λn−1 ) ( λ̂k+1 − λk+1 ) ] ( 1− λ̂λn−1 ) = ( λ− λ̂ ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + λk ( n−1 ) ( λ̂k+1 − λk+1 ) Claim D.11 . ncx 2 hβ α+nx2hβ ( λ− λ̂+ λ n−1 [ 34 η 2α ( 1−λ̂ kλk ( n−1 ) ) ] 1−λ̂λn−1 ) + ( ρ− ρ̂ ) > 0 . Proof Claim D.11 . ncx2hβ α+ nx2hβ ( λ− λ̂+ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + ( ρ− ρ̂ ) = ncx2hβ α+ nx2hβ ( λ− λ̂+ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + η 2 ncx2hβ ( 1− 1 4 ) = ncx2hβ α+ nx2hβ ( 1− η 2 ( α+ nx2β ) − ( 1− η 2 ( α+ 1 4 nx2β ) ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + 3 4 η 2 ncx2hβ = ncx2hβ α+ nx2hβ ( −3 4 η 2 ( nx2β ) + λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 ) + 3 4 η 2 ncx2hβ = ncx2hβ α+ nx2hβ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + ncx2hβ [ 3 4 η 2 − 3 4 η 2 nx2β α+ nx2β ] = ncx2hβ α+ nx2hβ λn−1 [ 34 η 2α ( 1− λ̂ kλk ( n−1 ) ) ] 1− λ̂λn−1 + ncx2hβ 3 4 η 2 [ 1− nx 2β α+ nx2β ] > 0 where the last inequality holds because λ , λ̂ < 1 and α > 0 Claim D.12 . 1α > λ −2η 1−λ −2 ( k+1 ) n 1−λ−2 is true for k ≤ 1 2n logλ ( 1 1+ 1αη ( 1−λ2 ) ) − 1 . Proof Claim D.12 . 1 α ≥ λ−2η 1− λ −2k̇n 1− λ−2 ⇐⇒ λ2 1 α 1 η ( 1− λ−2 ) ≤ 1− λ−2k̇n ⇐⇒ λ2 1 α 1 η ( λ−2 − 1 ) ≥ λ−2k̇n − 1 ⇐⇒ 1 + λ2 1 α 1 η ( λ−2 − 1 ) ≥ λ−2k̇n ⇐⇒ − k̇ ≥ 1 2n logλ ( 1 + 1 αη ( 1− λ2 ) ) ⇐⇒ k̇ ≤ 1 2n logλ ( 1 1 + 1αη ( 1− λ2 ) ) Claim D.13 . 1 α ( λ̂λ ( n−1 ) ) 2k̇ > η ∑n−1 i=0 λ 2i ∑k̇−1 j=0 ( λ̂ 2λ2 ( n−1 ) ) j is true for k̇ ≤ 12n logλ ( 1 1+ 1αη ( 1−λ2 ) ) . Proof Claim D.13 . First note that the inequality can also be written as 1 α > η ∑n−1 i=0 λ 2i ∑k−1 j=0 ( λ̂λ ( n−1 ) ) 2 ( j−k ) . Secondly , the right hand term of the inequality could be upper bound as in eq . 37 . 
Therefore, for the claim's inequality to hold it is enough that $\frac{1}{\alpha} \ge \eta\lambda^{-2}\frac{1-\lambda^{-2nk}}{1-\lambda^{-2}}$, which is proved by claim D.12 to be true for $\dot{k} \le \frac{1}{2n}\log_{\lambda}\left(\frac{1}{1 + \frac{1}{\alpha\eta}(1-\lambda^2)}\right)$.
$$\begin{aligned} \eta\sum_{i=0}^{n-1}\lambda^{2i}\sum_{j=0}^{k-1}(\hat{\lambda}\lambda^{(n-1)})^{2(j-k)} &= \eta\sum_{i=0}^{n-1}\lambda^{2i}\sum_{j=0}^{k-1}\frac{1}{(\hat{\lambda}\lambda^{(n-1)})^{2(k-j)}} <_{k > j} \eta\sum_{i=0}^{n-1}\lambda^{2i}\sum_{j=0}^{k-1}\frac{1}{(\lambda\lambda^{(n-1)})^{2(k-j)}} \\ &= \eta\sum_{i=0}^{n-1}\lambda^{2i}\sum_{j=0}^{k-1}\frac{1}{\lambda^{2n(k-j)}} = \eta\sum_{i=0}^{n-1}\sum_{j=0}^{k-1}\frac{1}{\lambda^{2[nk - nj - i]}} \\ &=_{r = nj + i} \eta\sum_{r=0}^{nk-1}\frac{1}{\lambda^{2[nk - r]}} =_{r' = nk - r,\ 1 \le r' \le nk} \eta\sum_{r'=1}^{nk}\frac{1}{\lambda^{2r'}} \\ &= \eta\sum_{i=1}^{nk}\lambda^{-2i} = \eta\,\frac{\lambda^{-2} - \lambda^{-2(nk+1)}}{1 - \lambda^{-2}} = \eta\lambda^{-2}\,\frac{1 - \lambda^{-2nk}}{1 - \lambda^{-2}} \end{aligned} \quad (37)$$

Claim D.14. $\frac{1}{\alpha}(\hat{\lambda}\lambda^{n-1})^{2\dot{k}} \ge \eta(\hat{\lambda}\lambda^{n-1})^{2\dot{k}}\sum_{i=0}^{n-1}\lambda^{2i}$ is true for $\dot{k} \le \frac{1}{2n}\log_{\lambda}\left(\frac{1}{1 + \frac{1}{\alpha\eta}(1-\lambda^2)}\right)$.

Proof Claim D.14. Eq. 38 holds because $\lambda, \hat{\lambda} < 1$. Multiplying both sides by $\sum_{i=0}^{n-1}\lambda^{2i}$ gives eq. 39. Then, noticing that the right term of eq. 39 equals the right term of claim D.13, and is hence smaller than the left term of that claim, the claim is proved.
$$(\hat{\lambda}\lambda^{n-1})^{2k} < 1 < \sum_{j=0}^{k-1}(\hat{\lambda}\lambda^{n-1})^{2j} \quad (38)$$
$$\eta(\hat{\lambda}\lambda^{n-1})^{2\dot{k}}\sum_{i=0}^{n-1}\lambda^{2i} < \eta\sum_{j=0}^{\dot{k}-1}(\hat{\lambda}\lambda^{n-1})^{2j}\sum_{i=0}^{n-1}\lambda^{2i} \quad (39)$$

Claim D.15. The inequality
$$(\hat{\lambda}\lambda^{n-r})^2\left[\frac{1}{\alpha}(\hat{\lambda}\lambda^{n-1})^{2k} + \eta\sum_{j=0}^{k-1}(\hat{\lambda}\lambda^{n-1})^{2j}\sum_{i=0}^{n-1}\lambda^{2i}\right] > \eta\sum_{i=0}^{n-r}\lambda^{2i}$$
holds for $x_h^2\beta > 3$ and $n > \frac{1}{(2\alpha-1)x_h^2\beta}$.

Proof Claim D.15. The left-hand side can be lower bounded according to eq. 40, while the right-hand side can be upper bounded according to eq. 41. Therefore it is enough to show that $\lambda^{2n}\left[\frac{1}{\alpha}\lambda^{2kn} + \eta\frac{1-\lambda^{2kn}}{1-\lambda^2}\right] > \eta\frac{1-\lambda^{2n}}{1-\lambda^2}$, which according to eq. 42 is equivalent to showing that $(2nx_h^2\beta - 1)\frac{1}{\alpha}\lambda^{2(k+1)n} + 2(2\lambda^{2n} - 1) > 0$. Since $n > \frac{1}{(2\alpha-1)x_h^2\beta}$, claim D.19 applies and therefore $\lambda^{2n} \ge e^{-\frac{2}{x_h^2\beta}}$. Consequently, it is enough to show that $(2nx_h^2\beta - 1)\frac{1}{\alpha}\lambda^{2(k+1)n} + 2(2e^{-\frac{2}{x_h^2\beta}} - 1) > 0$, which is true for $x_h^2\beta > 3$ by claim D.16.
( λ̂λn−r ) 2 [ 1 α ( λ̂λn−1 ) 2k + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] > ( λ̂λn−1 ) 2 [ 1 α ( λ̂λn−1 ) 2k + η k−1∑ j=0 ( λ̂λn−1 ) 2j n−1∑ i=0 λ2i ] > λ2n [ 1 α λ2kn + η k−1∑ j=0 λ2jn n−1∑ i=0 λ2i ] = λ2n [ 1 α λ2kn + η 1− λ2kn 1− λ2 ] ( 40 ) First inequality holds because λ < 1 and r > 1 , and second inequality holds because λ < λ̂ . η n−r∑ i=0 λ2i < η n−1∑ i=0 λ2i = η 1− λ2n 1− λ2 ( 41 ) Inequality holds because λ < λ̂ and r > 1. λ2n [ 1 α λ2kn + η 1− λ2kn 1− λ2 ] > η 1− λ2n 1− λ2 λ2n ( 1− λ2 ) 1 α λ2kn + ηλ2n ( 1− λ2kn ) > η ( 1− λ2n ) ( 1− λ2 ) 1 α λ2 ( k+1 ) n + η ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( α+ nx2hβ ) 2 ( 1− λ2 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( α+ nx2hβ ) 2 ( 1− ( 1− 1 α+ nx2hβ ) 2 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( 2 ( α+ nx2hβ ) − 1 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 2λ2 ( k+1 ) n + ( 2nx2β − 1 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − λ2 ( k+1 ) n − 1 ) > 0 ( 2nx2hβ − 1 ) 1 α λ2 ( k+1 ) n + 2 ( 2λ2n − 1 ) > 0 ( 42 ) Claim D.16 . For x2β > 3 the inequality ( 2e− 2 x2β − 1 ) > 0 holds . Proof Claim D.16 . It ’ s easy to see that the inequality holds only if x2β ≥ −2 ln 12 . Since −2 ln 12 < 3 claim is proved . Claim D.17 . For k̇ as defined in lemma A.4 , and the conditions of claim D.19 1 α ( e 2 x2 h β + α ( e 2 x2 h β − 1 ) ( α+ nx2β ) + 18 ) > λ−2η 1− λ−2 ( dk̇e+1 ) n 1− λ−2 . Proof Claim D.17 . η 1− λ−2 ( dk̇e+1 ) n λ2 − 1 ≤ ηλ −2 ( k̇+2 ) n − 1 1− λ2 = η λ −2 ( 12n logλ ( 1 1+ 1 αη ( 1−λ2 ) ) −1+2 ) n − 1 1− λ2 = η λ − logλ ( 11+ 1 αη ( 1−λ2 ) ) λ−2n − 1 1− λ2 = η [ 1 + 1αη ( 1− λ 2 ) ] λ−2n − 1 1− λ2 = η ( 1− λ2 ) λ−2n 1αη 1− λ2 + η λ−2n − 1 1− λ2 = 1 α λ−2n + ( λ−2n − 1 ) ( α+ nx2β ) + 18 ≤ e 2 x2β 1 α + 1 α α ( e 2 x2β − 1 ) ( α+ nx2β ) + 18 = 1 α [ e 2 x2β + α ( e 2 x2β − 1 ) ( α+ nx2β ) + 18 ] where the fourth equality holds from eq . 43 and the second inequality holds from D.19 . 
η λ2 − 1 = η 1 ( 1− η2 ( α+ nx2β ) ) 2 − 1 = η 1 η ( α+ nx2β ) + ( η2 ( α+ nx 2β ) ) 2 = 1 ( α+ nx2β ) + η4 ( α+ nx 2β ) 2 = 1 ( α+ nx2β ) + 18 ( 43 ) Claim D.18 . ∀k > 0 : 1− ( λ λ̂ ) k ≥ 3 4nx 2β ( α+ nx2β ) 2 − ( α+ 14nx2β ) . Proof Claim D.18 . 1− ( λ λ̂ ) k ≥ 1− λ λ̂ = 1− 1− 1α+nx2β 1− α+ 1 4nx 2β ( α+nx2β ) 2 = 1− ( α+ nx 2β ) 2 − ( α+ nx2β ) ( α+ nx2β ) 2 − ( α+ 14nx2β ) = 1− α 2 + 2nx2αβ + ( nx2β ) 2 − α− nx2β α2 + 2nx2αβ + ( nx2β ) 2 − α− 14nx2β = 3 4nx 2β ( α+ nx2β ) 2 − ( α+ 14nx2β ) Where first inequality holds because λ < λ̂ . Claim D.19 . For the conditions of claim D.21 , ( 1− 1 α+ nx2β ) 2n ≥ e− 2 x2β . Proof Claim D.19 . The proof is easily deduced from claims D.20 and D.21 Claim D.20 . lim n→∞ ( 1− 1 α+ nx2β ) 2n = e − 2 x2β . Proof Lemma D.20 . From eq . 44 , it is enough to find limn→∞ ln ( 1− 1 α+nx2β ) 1 2n . Since limn→∞ ln ( 1− 1 α+nx2β ) 1 2n = 00 , and both the numerator and denominator are differentiable around∞ , the use of L ’ Hôpital ’ s rule is possible as shown in eq . 45 . This proves the claim . ( 1− 1 α+ nx2β ) 2n = e ln [ ( 1− 1 α+nx2β ) 2n ] = e 2n ln ( 1− 1 α+nx2β ) = e ln ( 1− 1 α+nx2β ) 1 2n ( 44 ) lim n→∞ d dn ln ( 1− 1 α+nx2β ) d dn 1 2n = lim x2β ( α+nx2β−1 ) ( α+nx2β ) − 12n2 = − lim 2n 2x2β ( nx2β ) 2 = − 2 x2β ( 45 ) Claim D.21 . ∀n > 1 2αx2β − 1 x2β : d dn ( 1− 1 α+ nx2β ) 2n < 0 . Proof claim D.21 . First , a simplified term for the derivative is found at eq . 46. d dn ( 1− 1 α+ nx2β ) 2n = d dn e 2n ln ( 1− 1 α+nx2β ) = ( 1− 1 α+ nx2β ) 2n [ 2 ln ( 1− 1 α+ nx2β ) + 2n 1 1− 1α+nx2β · x 2β ( α+ nx2β ) 2 ] = ( 1− 1 α+ nx2β ) 2n [ 2 ln ( 1− 1 α+ nx2β ) + 2nx2β ( α+ nx2β − 1 ) ( α+ nx2β ) ] ( 46 ) A lower bound for the ln term can be found using Taylor ’ s theorem as shown in eq .47 , where 0 ≤ ξ ≤ 1α+nx2β . 
ln ( 1− 1 α+ nx2β ) = − 1 α+ nx2β − 1 2 1 ( 1− ξ ) 2 ( 1 α+ nx2β ) 2 ≤ − 1 α+ nx2β − 1 2 ( 1 α+ nx2β ) 2 ( 47 ) From equations 46 and 47 it is enough to find the terms for which nx 2β ( α+nx2β−1 ) ( α+nx2β ) < 1 α+nx2β + 1 2 1 ( α+nx2β ) 2 holds . A simplified version of this inequality is found at ( 48 ) , and it can be easily seen that for α > 12 ( 1 nx2β + 1 ) ⇐⇒ n > 1 2αx2β − 1 x2β this inequality holds . nx2β ( α+ nx2β − 1 ) ( α+ nx2β ) < 1 α+ nx2β + 1 2 1 ( α+ nx2β ) 2 ⇐⇒ 0 < 2α2 + 2nx2βα− 2α− 2nx2β + α+ nx2β − 1 ⇐⇒ 0 < nx2β ( 2α− 1 ) + α ( 2α− 1 ) − 1 ( 48 ) Claim D.22 . For n > α x2hβ ( e 2 x2 h β − 2 ) + 1 2x2hβ and the conditions of claim D.19 , k̇ , as defined in lemma A.4 , is positive . Proof Claim D.22 . The claim ’ s inequality is simplified at eq . 49 k̇ > 0 1 2n logλ ( 1 1 + 1αη ( 1− λ2 ) ) − 1 > 0 logλ ( 1 1 + 1αη ( 1− λ2 ) ) > 2n ln ( 1 1+ 1αη ( 1−λ2 ) ) lnλ > 2n ln ( 1 1 + 1αη ( 1− λ2 ) ) < 2n lnλ ln ( 1 1 + 1αη ( 1− λ2 ) ) < lnλ2n 1 1 + 1αη ( 1− λ2 ) ) < λ2n λ−2n < 1 + 1 αη ( 1− λ2 ) λ−2n − 1 < 1 αη ( 1− λ2 ) ( 49 ) By claim D.19 λ−2n−1 < e 2 x2 h β −1 , therefore it is enough to find terms for e 2 x2 h β −1 < 1αη ( 1−λ 2 ) , which is done at eq . 50 , which proves the claim . e 2 x2 h β − 1 < 1 αη ( 1− λ2 ) αη ( e 2 x2 h β − 1 ) < ( 1− λ2 ) αη ( e 2 x2 h β − 1 ) < 1− ( 1− η 2 ( α+ nx2hβ ) ) 2 α ( e 2 x2 h β − 1 ) < ( α+ nx2hβ ) − η 4 ( α+ nx2β ) 2 α ( e 2 x2 h β − 1 ) < ( α+ nx2hβ ) − 1 2 α ( e 2 x2 h β − 2 ) + 1 2 < nx2hβ α x2hβ ( e 2 x2 h β − 2 ) + 1 2x2hβ < n ( 50 ) Claim D.23 . For k̇ as defined in lemma A.4 , and the conditions of lemma A.5 p ( θ̂ ( dk̇e+1 ) n > µ ( dk̇e+1 ) n|D̂ ) ≤ e −e − 2 x2β α 2v1 ( 3 32x2β ) 2 ( cn ) 2 . Proof claim D.23 . 
p ( θ̂ ( dk̇e+1 ) n > µ ( dk̇e+1 ) n|D̂ ) ≤ 1 n n∑ r=1 exp ( − ( µ ( dk̇e+1 ) n − µ̂r ( dk̇e+1 ) n ) 2 2 ( σr ( dk̇e+1 ) n ) 2 ) ≤ 1 n n∑ r=1 exp ( −e− 2 x2β α 2v1 ( 3 32x2β ) 2 ( c n ) 2 ) = exp ( −e− 2 x2β α 2v1 ( 3 32x2β ) 2 ( c n ) 2 ) Where the first inequality holds due to lemma 4.4 and second inequality holds due to lemma A.5 . Claim D.24 . for n > 1 + 10x 2 h x2l ν β , the inequality 1 10 ( α+ ( z + x 2 n ) β ) > ν ( x̂ 2 n − x2n ) holds . Proof Claim D.24 . Notice that 110 ( α+ ( z+x 2 n ) β ) > 1 10zβ > 1 10 ( n−1 ) x 2 l β and νx 2 h > ν ( x̂ 2 n−x2n ) , Therefore a sufficient condition will be that 110 ( n − 1 ) x 2 l β > νx 2 h , which is equivalent to n > 1 + x2h x2l 10ν β . Claim D.25 . For the ( σ2 ) ∗ν as defined in eq . 14 ( σ2 ) ∗ν > 0 . Proof Claim D.25 . ( σ2 ) ∗ν = νσ 2 + ( 1− ν ) σ̂2 = ν α+ ( z + x2n ) β + 1− ν α+ ( z + x̂2n ) β = ν ( α+ ( z + x̂2n ) β ) + ( 1− ν ) ( α+ ( z + x2n ) β ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) = α+ ( z + x2n ) β + ν ( x 2 n − x̂2n ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) ( 51 ) Therefore , a sufficient condition is that α + ( z + x2n ) β + ν ( x 2 n − x̂2n ) > 0 . Since the condition of Lemma 4.2 dictates n > 1 + 10x 2 h x2l ν β then claim D.24 holds , which satisfy this condition . Claim D.26 . For the Bayesian linear regression problem on domain D , and σ , σ̂ defined in eq . 14 ln σ σ̂ ≤ x 2 h 2 ( n− 1 ) x2l . Proof Claim D.26 . Consider c1 = x2h ( n−1 ) x2l , c1 = x2h ( n− 1 ) x2l > x̂2n − x2n z + x2n > x̂2nβ − x2nβ α+ ( z + x2n ) β = α+ ( z + x̂2n ) β α+ ( z + x2n ) β − 1 ( 52 ) Where eq . 52 holds trivially for x̂n ≤ xn , therefore it is assumed that x̂n > xn . From eq . 
52 , by Taylor theorem and 0 ≤ ζ ≤ c1 following inequality holds ec1 = 1 + c1 + eζ 2 ( c1 ) 2 > 1 + c1 > α+ ( z + x̂2n ) β α+ ( z + x2n ) β Consequently , because the natural logarithm is monotonically increasing the following equation also holds 1 2 c1 > 1 2 ln α+ ( z + x̂n ) β α+ ( z + xn ) β = ln σ σ̂ Therefore ln σσ̂ < 1 2 x2h ( n−1 ) x2l Claim D.27 . For the Bayesian linear regression problem on domain D , the conditions of Lemma 4.2 and ( σ2 ) ∗ν , σ̂ defined in eq . 14 1 2 ( ν − 1 ) ln σ̂ 2 ( σ2 ) ∗ν ≤ 1 2 ( ν − 1 ) νx 2 h 2 ( ( n− 1 ) x2l − νx2h ) . Proof Claim D.27 . consider c1 = νx2h ( ( n−1 ) x2l−νx 2 h ) , c1 = νx2h ( n− 1 ) x2l − νx2h ≥∗ νβx 2 h α+ ( n− 1 ) x2l β − νβx2h ≥∗ νβx̂2n α+ ( z + x2n ) β − νβx2n ≥ νβ ( x̂ 2 n − x2n ) α+ ( z + x2n ) β − νβ ( x2n − x̂2n ) = α+ ( z + x2n ) β α+ ( z + x2n ) β + νβ ( x 2 n − x̂2n ) − 1 = 1 α+ ( z + x̂2n ) β · ( α+ ( z + x 2 n ) β ) ( α+ ( z + x̂ 2 n ) β ) α+ ( z + x2n ) β + νβ ( x 2 n − x̂2n ) − 1 = σ̂2 ( σ2 ) ∗ν − 1 Where inequalities * holds under assumption that n > 1 + ν x 2 h x2l , and last equality holds from eq . 51 . Therefore , by using Taylor theorem and 0 ≤ ζ ≤ c1 following inequality holds ec1 = 1 + c1 + eζ 2 ( c1 ) 2 > 1 + c1 ≥ σ̂2 ( σ2 ) ∗ν From this inequality , and because the natural logarithm is monotonically increasing ln σ̂ 2 ( σ2 ) ∗ν ≤ c1 , therefore 1 2 ( ν − 1 ) ln σ̂ 2 ( σ2 ) ∗ν ≤ 1 2 ( ν − 1 ) c1 = 1 2 ( ν − 1 ) νx 2 h ( ( n− 1 ) x2l − νx2h ) . Claim D.28 . For the Bayesian linear regression problem on domain D , the definitions of eq . 14 , and the conditions of Lemma 4.2 , the value ν2 ( µ−µ̂ ) 2 ( σ2 ) ∗ν is bounded by 2νβ ( x4h 9 10n 1−2γ1x2l ) + 2νβ ( ( x2hβ ) ( x 2 hα+ x 4 hβ ) 9 10 ( x 2 l β ) 2 ) ( c+ nγ1 ) n2−γ1 + ν 2 ( ( x2hα+ x 4 hβ ) 2 9 10x 6 l β ) ( c+ nγ1 ) 2 n3 . Proof Claim D.28 . 
First bound |µ− µ̂| , |µ− µ̂| = β| q + xnyn α+ ( z + x2n ) β − q + x̂nŷn α+ ( z + x̂2n ) β | = | ( q + xnyn ) ( α+ ( z + x̂ 2 n ) β ) − ( q + x̂nŷn ) ( α+ ( z + x2n ) β ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) | = β|qx̂ 2 nβ + xnynα+ xnynzβ + xnynx̂ 2 nβ − qx2nβ − x̂nŷnα− x̂nŷnzβ − x̂nŷnx2nβ ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) | = β| x̂2nz ( q z − ŷn x̂n ) β − x2nz ( q z − yn xn ) β + α ( xnyn − x̂nŷn ) + xnx̂nβ ( ynx̂n − ŷnxn ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) | < β| x̂ 2 hz ( 2n γ1 ) β + αx2h ( c+ n γ1 ) + x4hβ ( c+ n γ1 ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) | = β|2x̂ 2 hβzn γ1 + ( x2hα+ x 4 hβ ) ( c+ n γ1 ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) | Therefore , ν 2 ( µ− µ̂ ) 2 ( σ2 ) ∗ν ≤ ν 2 β2 ( 2x̂2hβzn γ1 + ( x2hα+ x 4 hβ ) ( c+ n γ1 ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) ) 2 · ( α+ ( z + x 2 n ) β + ν ( x 2 n − x̂2n ) ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) ) −1 = ν 2 β2 ( 2x̂2hβzn γ1 + ( x2hα+ x 4 hβ ) ( c+ n γ1 ) ) 2 ( α+ ( z + x2n ) β ) ( α+ ( z + x̂ 2 n ) β ) ( α+ ( z + x 2 n ) β + ν ( x 2 n − x̂2n ) ) ≤∗ ν 2 β2 ( 2x2hβzn γ1 + ( x2hα+ x 4 hβ ) ( c+ n γ1 ) ) 2 9 10 ( α+ ( z + x 2 n ) β ) ( α+ ( z + x̂ 2 n ) β ) ( α+ ( z + x 2 n ) β ) = ν 2 β2 ( ( 2x2hβ ) 2z2n2γ1 + 2 ( 2x2hβ ) ( x 2 hα+ x 4 hβ ) zn γ1 ( c+ nγ1 ) + ( x2hα+ x 4 hβ ) 2 ( c+ nγ1 ) 2 9 10 ( α+ ( z + x 2 n ) β ) 2 ( α+ ( z + x̂2n ) β ) ) ≤ ν 2 β2 ( ( 2x2hβ ) 2z2n2γ1 + ( 4x2hβ ) ( x 2 hα+ x 4 hβ ) zn γ1 ( c+ nγ1 ) + ( x2hα+ x 4 hβ ) 2 ( c+ nγ1 ) 2 9 10 ( ( z + x 2 n ) β ) 2 ( ( z + x̂2n ) β ) ) ≤∗∗ ν 2 β2 ( ( 2x2hβ ) 2n2γ1 9 10nx 2 l β 3 ) + ν 2 β2 ( ( 4x2hβ ) ( x 2 hα+ x 4 hβ ) n γ1 ( c+ nγ1 ) 9 10 ( nx 2 l ) 2β3 ) + ν 2 β2 ( ( x2hα+ x 4 hβ ) 2 ( c+ nγ1 ) 2 9 10 ( nx 2 l β ) 3 ) = 2νβ ( x4h 9 10n 1−2γ1x2l ) + 2νβ ( ( x2hβ ) ( x 2 hα+ x 4 hβ ) 9 10 ( x 2 l β ) 2 ) ( c+ nγ1 ) n2−γ1 + ν 2 ( ( x2hα+ x 4 hβ ) 2 9 10x 6 l β ) ( c+ nγ1 ) 2 n3 Inequality * is true because Lemma 4.2 conditions dictates that n > 1 + x 2 h x2l 10ν β , and according to 
claim D.24 this promises that $\frac{1}{10}(\alpha + (z + x_n^2)\beta) > \nu(\hat{x}_n^2 - x_n^2)$. Inequality $(**)$ follows from $n \gg 1 \Rightarrow (n-1)x_l \approx n x_l$.

Claim D.29. For the conditions and definitions of Lemma 4.3, one sample from the posterior is $(\epsilon, \delta)$ differentially private for the following values of $n$ and $\nu$:
$$\nu = 1 + \frac{2\ln(1/\delta)}{\epsilon},$$
$$n \ge \max\left\{1 + \frac{x_h^2}{x_l^2}\frac{8}{\epsilon},\; 1 + \nu\frac{x_h^2}{x_l^2}\Big(1 + \frac{8(\nu-1)}{\epsilon}\Big),\; \Big(\frac{16\nu\beta x_h^4}{\frac{9}{10}x_l^2\epsilon}\Big)^{\frac{1}{1-2\gamma_1}},\; \Big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha + x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c + n^{\gamma_1})\Big)^{\frac{1}{2-\gamma_1}},\; \Big(\frac{4\nu}{\epsilon}\frac{(x_h^2\alpha + x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c + n^{\gamma_1})\Big)^{\frac{2}{3}}\right\}.$$

Proof of Claim D.29. By Lemma 4.3, one sample from the posterior is $\big(\epsilon_1 + \frac{\ln(1/\delta)}{\nu-1}, \delta\big)$ differentially private. For each of the 6 terms of $\epsilon_1 + \frac{\ln(1/\delta)}{\nu-1}$, a lower bound on $n$ and $\nu$ is found at equations 53, 54, 55, 56, 57, 58 such that the sum of the terms is upper bounded by $\epsilon$. These bounds match the claim's guarantee over $n$ and $\nu$, therefore proving the claim.

For the term $\frac{\ln(1/\delta)}{\nu-1}$:
$$\frac{\ln(1/\delta)}{\nu - 1} = \frac{\epsilon}{2} \iff \frac{2\ln(1/\delta)}{\epsilon} + 1 = \nu. \qquad (53)$$

For the term $\frac{x_h^2}{2(n-1)x_l^2}$:
$$\frac{x_h^2}{2(n-1)x_l^2} \le \frac{\epsilon}{16} \;\Longleftarrow\; n \ge 1 + \frac{x_h^2}{x_l^2}\frac{8}{\epsilon}. \qquad (54)$$

For the term $\frac{1}{2}(\nu-1)\frac{\nu x_h^2}{(n-1)x_l^2 - \nu x_h^2}$:
$$\frac{1}{2}(\nu-1)\frac{\nu x_h^2}{(n-1)x_l^2 - \nu x_h^2} \le \frac{\epsilon}{16} \;\Longleftarrow\; \frac{1}{2}(\nu-1)\frac{16}{\epsilon}\nu x_h^2 \le (n-1)x_l^2 - \nu x_h^2 \;\Longleftarrow\; n \ge 1 + \frac{1}{2}(\nu-1)\frac{16}{\epsilon}\nu\frac{x_h^2}{x_l^2} + \nu\frac{x_h^2}{x_l^2} = 1 + \nu\frac{x_h^2}{x_l^2}\Big(1 + \frac{8(\nu-1)}{\epsilon}\Big). \qquad (55)$$

For the term $2\nu\beta\frac{x_h^4}{\frac{9}{10}n^{1-2\gamma_1}x_l^2}$:
$$2\nu\beta\frac{x_h^4}{\frac{9}{10}n^{1-2\gamma_1}x_l^2} \le \frac{\epsilon}{8} \;\Longleftarrow\; \frac{16\nu\beta x_h^4}{\frac{9}{10}x_l^2\epsilon} \le n^{1-2\gamma_1} \;\Longleftarrow\; n \ge \Big(\frac{16\nu\beta x_h^4}{\frac{9}{10}x_l^2\epsilon}\Big)^{\frac{1}{1-2\gamma_1}}. \qquad (56)$$

For the term $2\nu\beta\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\frac{c+n^{\gamma_1}}{n^{2-\gamma_1}}$:
$$2\nu\beta\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\frac{c+n^{\gamma_1}}{n^{2-\gamma_1}} \le \frac{\epsilon}{8} \;\Longleftarrow\; n^{2-\gamma_1} \ge \frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1}) \;\Longleftarrow\; n \ge \Big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1})\Big)^{\frac{1}{2-\gamma_1}}. \qquad (57)$$

For the term $\frac{\nu}{2}\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\frac{(c+n^{\gamma_1})^2}{n^3}$:
$$\frac{\nu}{2}\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\frac{(c+n^{\gamma_1})^2}{n^3} \le \frac{\epsilon}{8} \;\Longleftarrow\; n^3 \ge \frac{4\nu}{\epsilon}\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c+n^{\gamma_1})^2 \;\Longleftarrow\; n \ge \Big(\frac{4\nu}{\epsilon}\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c+n^{\gamma_1})\Big)^{\frac{2}{3}}. \qquad (58)$$

Claim D.30. For $c = n^{\gamma_2}$, $\gamma_1 < \gamma_2 < \frac{3}{2}$, and the conditions and definitions of Lemma 4.3, one sample from the posterior is $(\epsilon, \delta)$ differentially private for the following values of $n$ and $\nu$:
$$\nu = \frac{2\ln(1/\delta)}{\epsilon} + 1,$$
$$n \ge \max\left\{1 + \frac{x_h^2}{x_l^2}\frac{8}{\epsilon},\; 1 + \nu\frac{x_h^2}{x_l^2}\Big(1 + \frac{8(\nu-1)}{\epsilon}\Big),\; \Big(\frac{16\nu\beta x_h^4}{\frac{9}{10}x_l^2\epsilon}\Big)^{\frac{1}{1-2\gamma_1}},\; \Big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\Big(1 + \frac{1}{(1 + 10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta})^{\gamma_2-\gamma_1}}\Big)\Big)^{\frac{1}{2-\gamma_1-\gamma_2}},\; \Big(\frac{4\nu}{\epsilon}\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\Big(1 + \frac{1}{(1 + 10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta})^{\gamma_2-\gamma_1}}\Big)\Big)^{\frac{2}{3-2\gamma_2}}\right\}.$$

Proof of Claim D.30. Claim D.29 provides general lower bounds on $n$ for $(\epsilon, \delta)$ differential privacy. When $c = n^{\gamma_2}$, $\gamma_2 > \gamma_1$, these bounds can be simplified. For the condition $n \ge \big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1})\big)^{\frac{1}{2-\gamma_1}}$:
$$\Big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}(c+n^{\gamma_1})\Big)^{\frac{1}{2-\gamma_1}} = \Big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\,n^{\gamma_2}\Big(1 + \frac{1}{n^{\gamma_2-\gamma_1}}\Big)\Big)^{\frac{1}{2-\gamma_1}} \le \Big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\,n^{\gamma_2}\Big(1 + \frac{1}{(1 + 10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta})^{\gamma_2-\gamma_1}}\Big)\Big)^{\frac{1}{2-\gamma_1}},$$
where the inequality holds since Lemma 4.3 dictates that $n \ge 1 + 10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta}$. Consequently it is enough that
$$n \ge \Big(\frac{16\nu\beta}{\epsilon}\frac{(x_h^2\beta)(x_h^2\alpha+x_h^4\beta)}{\frac{9}{10}(x_l^2\beta)^2}\Big(1 + \frac{1}{(1 + 10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta})^{\gamma_2-\gamma_1}}\Big)\Big)^{\frac{1}{2-\gamma_1-\gamma_2}}.$$
Following the same considerations for the condition $n \ge \big(\frac{4\nu}{\epsilon}\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}(c+n^{\gamma_1})\big)^{\frac{2}{3}}$, it is enough that
$$n \ge \Big(\frac{4\nu}{\epsilon}\frac{(x_h^2\alpha+x_h^4\beta)^2}{\frac{9}{10}x_l^6\beta}\Big(1 + \frac{1}{(1 + 10\frac{x_h^2}{x_l^2}\frac{\nu}{\beta})^{\gamma_2-\gamma_1}}\Big)\Big)^{\frac{2}{3-2\gamma_2}}.$$

E WASSERSTEIN DISTANCE PROOF

Claim E.1. If $p$, $q$ are distributions with 2-Wasserstein distance $W_2(p, q) = \epsilon^2$, then we have $p(B_r(x)) \le q(B_{r+\epsilon}(x)) + \epsilon$.

Proof. This follows from the claim that $d_P^2 \le d_W$ from Gibbs & Su (2002). Picking an optimal coupling and using Markov's inequality we get $P(d(x, y) > \epsilon) \le \frac{1}{\epsilon}\,\mathbb{E}[d(x, y)] = \epsilon$.
As $\{(\tilde{x}, \tilde{y}) : \tilde{x} \in B_r(x)\} \subset \{(\tilde{x}, \tilde{y}) : \tilde{y} \in B_{r+\epsilon}(y)\} \cup \{(\tilde{x}, \tilde{y}) : d(\tilde{x}, \tilde{y}) > \epsilon\}$, we get $p(B_r(x)) \le q(B_{r+\epsilon}(x)) + \epsilon$ (a special case of Strassen's theorem).

Claim E.2. Let $p$, $q$ be continuous distributions on $\mathbb{R}^d$ with Wasserstein distance $W_2(p, q) < \epsilon^2$, and let $p_\delta$, $q_\delta$ be their convolutions with the uniform distribution on $B_\delta(0)$. We assume both density functions are $L$-Lipschitz continuous. For $\lambda > \epsilon$ we have
$$p_\lambda(x) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)}\,q_\lambda(x) + \frac{\epsilon}{\mathrm{vol}_d(\lambda-\epsilon)} + 2\Big(\frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)} - 1\Big)\lambda L. \qquad (59)$$

Proof. We have $P(B_\lambda(x)) = P(B_{\lambda-\epsilon}(x)) + P(A(x; \lambda-\epsilon, \lambda))$, where $A(x; r_1, r_2)$ is the annulus around $x$ between radii $r_1$ and $r_2$. From continuity there exists $z \in B_\lambda(x)$ such that $p(z) = \frac{P(B_\lambda(x))}{\mathrm{vol}_d(\lambda)}$, where $\mathrm{vol}_d(r)$ is the volume of a ball of radius $r$ in $\mathbb{R}^d$. From Lipschitz continuity we have
$$P(A(x; \lambda-\epsilon, \lambda)) \le (\mathrm{vol}_d(\lambda) - \mathrm{vol}_d(\lambda-\epsilon))(p(z) + 2\lambda L) = \Big(1 - \frac{\mathrm{vol}_d(\lambda-\epsilon)}{\mathrm{vol}_d(\lambda)}\Big)P(B_\lambda(x)) + \Delta,$$
where $\Delta = (\mathrm{vol}_d(\lambda) - \mathrm{vol}_d(\lambda-\epsilon))\,2\lambda L$. From this, we get
$$P(B_{\lambda-\epsilon}(x)) \ge \frac{\mathrm{vol}_d(\lambda-\epsilon)}{\mathrm{vol}_d(\lambda)}P(B_\lambda(x)) - \Delta. \qquad (60)$$
Combining this with Claim E.1, we get
$$P(B_\lambda(x)) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)}\big(P(B_{\lambda-\epsilon}(x)) + \Delta\big) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)}\big(Q(B_\lambda(x)) + \Delta + \epsilon\big). \qquad (61)$$
We divide by $\mathrm{vol}_d(\lambda)$ to get the densities $p_\lambda$, $q_\lambda$:
$$p_\lambda(x) \le \frac{\mathrm{vol}_d(\lambda)}{\mathrm{vol}_d(\lambda-\epsilon)}\,q_\lambda(x) + \frac{\Delta + \epsilon}{\mathrm{vol}_d(\lambda-\epsilon)}. \qquad (62)$$ | This paper studies the privacy guarantee of Bayesian learning using Stochastic Gradient Langevin Dynamics (SGLD). Since the SGLD updates are stochastic, it is often thought the solution can be suitable for preserving the privacy of the data used to train the algorithm. Using a counter-example, this paper shows that it is not necessarily correct to assume so. | SP:e11a3ee0dce61bc8647a345d8947b9d36e2323f8
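Claim E.1's conclusion can be sanity-checked numerically on a concrete family of distributions. The sketch below is an illustration added here (not part of the paper): for two unit-variance 1-D Gaussians, the Wasserstein distance between them equals the distance `mu` between their means, so with `eps = sqrt(mu)` the claim predicts `p(B_r(x)) <= q(B_{r+eps}(x)) + eps` for every centre and radius. The function names are invented for the example.

```python
from math import erf, sqrt

# Check p(B_r(x)) <= q(B_{r+eps}(x)) + eps for p = N(0,1), q = N(mu,1),
# whose Wasserstein distance is mu, i.e. eps = sqrt(mu) in Claim E.1.

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ball_mass(mean, x, r):
    """Mass that N(mean, 1) puts on the ball [x - r, x + r]."""
    return Phi(x + r - mean) - Phi(x - r - mean)

mu = 0.25
eps = sqrt(mu)  # W(p, q) = mu = eps**2

# Sweep a grid of centres x in [-5, 5] and radii r in (0, 5].
ok = all(
    ball_mass(0.0, x / 4, r / 4) <= ball_mass(mu, x / 4, r / 4 + eps) + eps
    for x in range(-20, 21)
    for r in range(1, 21)
)
assert ok
```

This only exercises the claim on one easy family (here the enlarged $q$-ball already dominates even without the additive $\epsilon$); it is a sanity check, not a substitute for the proof.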
Better state exploration using action sequence equivalence | 1 INTRODUCTION . Despite the rapidly improving performance of Reinforcement Learning ( RL ) agents on a variety of tasks ( Mnih et al. , 2015 ; Silver et al. , 2016 ) , they remain largely sample-inefficient learners compared to humans ( Toromanoff et al. , 2019 ) . Contributing to this is the vast amount of prior knowledge humans bring to the table before their first interaction with a new task , including an understanding of physics , semantics , and affordances ( Dubey et al. , 2018 ) . The considerable quantity of data necessary to train agents is becoming more problematic as RL is applied to ever more challenging and complex tasks . Much research aims at tackling this issue , for example through transfer learning ( Rusu et al. , 2016 ) , meta learning , and hierarchical learning , where agents are encouraged to use what they learn in one environment to solve a new task more quickly . Other approaches attempt to use the structure of Markov Decision Processes ( MDP ) to accelerate learning without resorting to pretraining . Mahajan & Tulabandhula ( 2017 ) and Biza & Jr. ( 2019 ) learn simpler representations of MDPs that exhibit symmetrical structure , while van der Pol et al . ( 2020 ) show that environment invariances can be hard-coded into equivariant neural networks . A fundamental challenge standing in the way of improved sample efficiency is exploration . We consider a situation where the exact transition function of a Markov Decision Process is unknown , but some knowledge of its local dynamics is available under the form of a prior expectation that given sequences of actions have identical results . This way of encoding prior knowledge is sufficiently flexible to describe many useful environment structures , particularly when actions correspond to agent movement . 
For example, in a gridworld (called RotationGrid hereafter) where the agent can move forward (↑) and rotate 90° to the left (x) or to the right (y), the latter two actions are the inverse of each other, in that performing one undoes the effect of the other. During exploration, to encourage the visitation of not yet seen states, it is natural to simply ban sequences of actions that revert to previously visited states, following the reasoning of Tabu search (Glover, 1986). We observe further that yy and xx both lead to the same state (represented as state 4 in Figure 1). If actions were uniformly sampled, the chances of visiting this state would be much higher than any of the others. Based on these observations, we introduce a new method taking advantage of Equivalent Action SEquences for Exploration (EASEE), an overview of which can be found in Figure 1. EASEE looks ahead several steps and calculates action sampling probabilities to explore new states as uniformly as possible, conditionally on the action sequence equivalences given to it. It constructs a partial MDP which corresponds to a local representation of the true MDP around the current state. We then formulate the problem of determining the best distribution over action sequences as a linearly constrained convex optimization problem. Solving this optimization problem is computationally inexpensive and can be done once and for all before learning begins, providing a principled and tractable exploration policy that takes into account environment structure. This policy can easily be injected into existing reinforcement learning algorithms as a substitute for ε-greedy exploration. Our contribution is threefold. First, we formally introduce the notion of equivalent action sequences, a novel type of structure in Markov Decision Processes.
Then, we show that priors on this type of structure can easily be exploited during offline exploration by solving a convex optimization problem. Finally, we provide experimental insights and show that incorporating EASEE into a DQN (Mnih et al., 2015) improves agent performance in several environments with various structures.

Overview. We assume that we have sets of equivalent action sequences for the environment. Equivalent action sequences are sequences that lead to the same state. These sequences are used to build a DAG that models where the agent will end up after any sequence of actions of length d. Because some sequences are equivalent, several parent nodes may share a child node. A naive exploration scheme like ε-greedy would waste resources by over-exploring such child nodes. Instead, we leverage this information using the DAG constructed above; our method executes an exploratory action that maximizes the entropy of the future visited states.

2 RELATED WORK

Improved Exploration. The problem of ensuring that agents see sufficiently diverse states has received a lot of attention from the RL community. Many methods rely on intrinsic rewards (Schmidhuber, 1991; Chentanez et al., 2005; Şimşek & Barto, 2006; Lopes et al., 2012; Bellemare et al., 2016; Ostrovski et al., 2017; Pathak et al., 2017) to entice agents to unseen or misunderstood areas. In the tabular setting, these take the form of count-based exploration bonuses which guide the agent toward poorly visited states (e.g. Strehl & Littman (2008)). Scaling this method requires the use of function approximators (Burda et al., 2019; Badia et al., 2020; Flet-Berliac et al., 2021). Unlike EASEE, these methods necessitate the computation of non-stationary and vanishing novelty estimates, which require careful tuning to balance learning stability and exploration incentives.
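The overview above (enumerate depth-d action sequences, merge equivalent ones into shared DAG nodes, and reweight sampling toward uniform coverage of distinct outcomes) can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: EASEE solves a linearly constrained convex program, while the sketch uses simple inverse-multiplicity weighting, which achieves the uniform-outcome goal when all sequences have the same length. `ACTIONS`, `RULES`, `canonical`, and `sequence_weights` are names invented for the example, and the rewriting search assumes no length-increasing rule is needed to merge two sequences.

```python
from itertools import product

ACTIONS = ["F", "L", "R"]  # forward, rotate left, rotate right
# Prior knowledge, RotationGrid-style: LR and RL undo themselves,
# and two lefts reach the same state as two rights.
RULES = [("LR", ""), ("RL", ""), ("LL", "RR")]

def canonical(seq):
    """Smallest representative of seq's equivalence class, found by
    exhaustively applying non-expanding rewrites in both directions."""
    seen, frontier = {seq}, [seq]
    while frontier:
        s = frontier.pop()
        for lhs, rhs in RULES + [(r, l) for l, r in RULES]:
            if not lhs:  # skip length-increasing rewrites from the empty string
                continue
            start = 0
            while True:
                i = s.find(lhs, start)
                if i == -1:
                    break
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
                start = i + 1
    return min(seen, key=lambda t: (len(t), t))

def sequence_weights(depth):
    """Sampling probability per length-`depth` sequence so that each
    equivalence class (distinct outcome) is reached uniformly."""
    seqs = ["".join(p) for p in product(ACTIONS, repeat=depth)]
    classes = {}
    for s in seqs:
        classes.setdefault(canonical(s), []).append(s)
    k = len(classes)
    return {s: 1.0 / (k * len(members))
            for members in classes.values() for s in members}

weights = sequence_weights(2)
# LL/RR share one class and LR/RL another, so each of those four
# sequences gets half the mass of a lone sequence such as FF.
assert abs(sum(weights.values()) - 1.0) < 1e-9
assert weights["LL"] < weights["FF"]
```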
Moreover , because these bonuses are learned , and do not allow for the use of prior structure knowledge , they constitute an orthogonal approach to ours . In Gupta et al . ( 2018 ) exploration strategies are learned from prior experience . Unlike EASEE this requires meta-training over a distribution of tasks . Redundancies in Trajectories The idea that different trajectories can overlap and induce redundancies in state visitation is used in Leurent & Maillard ( 2020 ) and Czech et al . ( 2020 ) in the case of Monte-Carlo tree search . However , they require a generative model , and propose a new Bellman operator to update node values according to newly uncovered transitions rather than modifying exploration . Closer to our work , Caselles-Dupré et al . ( 2020 ) study structure in action sequences , but restrict themselves to commutative properties . Grinsztajn et al . ( 2021 ) quantifies the probability of cycling back to a previously visited state , motivated by the analysis of reversible actions . Tabu search ( Glover , 1986 ) is a meta-heuristic which uses knowledge of the past to escape local optima . It is popular for combinatorial optimization ( Hertz & Werra , 2005 ) . Like our approach , it relies on a local structure : actions which are known to cancel out recent moves are deemed tabu , and are forbidden for a short period of time . This prevents cycling around already found solutions , and thus encourages exploration . In Abramson & Wechsler ( 2003 ) , tabu search is combined with reinforcement learning , using action priors . However , their method can not make use of more complex action-sequence structure . Maximum State-Visitation Entropy Our goal to explore as uniformly as possible every nearby state can be seen as a local version of the Maximum State-Visitation Entropy problem ( MSVE ) ( de Farias & Van Roy , 2003 ; Hazan et al. , 2019 ; Lee et al. , 2019 ; Guo et al. , 2021 ) . 
MSVE formulates exploration as a policy optimization problem whose solution maximizes the entropy of the distribution of visited states. Although some of these works (Hazan et al., 2019; Lee et al., 2019; Guo et al., 2021) can make use of priors about state similarities, they learn a global policy and cannot exploit structure in action sequences.

Action Space Structure. The idea of exploiting structure in action spaces is not new. Large discrete action spaces may be embedded in continuous action spaces either by leveraging prior information (Dulac-Arnold et al., 2016) or learning representations (Chandak et al., 2019). Tavakoli et al. (2018) manage high-dimensional action spaces by assuming a degree of independence between each dimension. Farquhar et al. (2020) introduce a curriculum of progressively growing action spaces to accelerate learning. These methods aim to improve the generalization of policies to unseen actions in large action spaces rather than enhancing exploration. Leveraging previous trajectories to extract prior knowledge, Tennenholtz & Mannor (2019) provide an understanding of actions through their context in demonstrations.

3 FORMALISM

3.1 EQUIVALENCE OVER ACTION SEQUENCES

We consider a Markov Decision Process (MDP) defined as a 5-tuple $M = (S, A, T, R, \gamma)$, with $S$ the set of states, $A$ the action set, $T$ the transition function, $R$ the reward function, and $\gamma$ the discount factor. The set of actions is assumed to be finite, $|A| < \infty$. We restrict ourselves to deterministic MDPs. A possible extension to MDPs with stochastic dynamics is discussed in Appendix A.6. In the following, the notations are borrowed from formal language theory. Sequences of actions are analogous to strings over the set of symbols $A$ (possible actions). The set of all possible sequences of actions is denoted $A^\star = \bigcup_{k=0}^{\infty} A^k$, where $A^k$ is the set of all sequences of length $k$ and $A^0$ contains as its single element the empty sequence $\Lambda$.
We use $.$ for the concatenation operator, such that for $v_1 \in A^{h_1}$, $v_2 \in A^{h_2}$, $v_1.v_2 \in A^{h_1+h_2}$. The transition function $T : S \times A \to S$ gives the next state $s'$ when action $a$ is taken in state $s$: $T(s, a) = s'$. We recursively extend this operator to action sequences, $T : S \times A^\star \to S$, such that $\forall s \in S$, $\forall a \in A$, $\forall w \in A^\star$:
$$T(s, \Lambda) = s, \qquad T(s, w.a) = T(T(s, w), a).$$
Intuitively, this operator gives the new state of the MDP after a sequence of actions is performed from state $s$.

Definition 1 (Equivalent sequences). We say that two action sequences $a_1 \ldots a_n$ and $a'_1 \ldots a'_m \in A^\star$ are equivalent at state $s \in S$ if
$$T(s, a_1 \ldots a_n) = T(s, a'_1 \ldots a'_m). \qquad (1)$$
Two sequences of actions are equivalent over $M$ if they are equivalent at state $s$ for all $s$ in $S$. This is written:
$$a_1 \ldots a_n \sim_M a'_1 \ldots a'_m. \qquad (2)$$
This means that we consider two sequences of actions to be equivalent when following one or the other will always lead to the same state. When the considered MDP $M$ is unambiguous, we simplify the notation by writing $\sim$ instead of $\sim_M$. We argue that some priors about the environments can be easily encoded as a small set of action sequence equivalences. For example, we may know that going left then right is the same thing as going right then left, that rotating two times to the left is the same thing as rotating two times to the right, or that opening a door twice is the same thing as opening the door once. All these priors can be encoded as a set of equivalences:

Definition 2 (Equivalence set). Given an MDP $M$ and several equivalent sequence pairs $v_1 \sim w_1, v_2 \sim w_2, \ldots, v_n \sim w_n$, we say that $\Omega = \{\{v_1, w_1\}, \{v_2, w_2\}, \ldots, \{v_n, w_n\}\}$ is an equivalence set over $M$. Formally, $\Omega$ is a set of pairs of elements of $A^\star$, such that $\Omega \subset (A^\star)^2$. By abuse of notation, we write $v \sim w \in \Omega$ if $\{v, w\} \in \Omega$. Intuitively, it is clear that action sequence equivalences can be combined to form new, longer equivalences.
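Definition 1 can be checked mechanically on a small deterministic MDP. The sketch below is an invented toy example (not from the paper): it implements the recursive extension $T(s, w)$ of the transition function to sequences, then tests equivalence at one state and over all states. The toy MDP abstracts position away and keeps only the agent's heading, so forward is a no-op here.

```python
# A deterministic tabular MDP given as: transition[state][action] -> next state.

def run(transition, s, seq):
    """T(s, w): apply the sequence left to right,
    i.e. T(s, Λ) = s and T(s, w.a) = T(T(s, w), a)."""
    for a in seq:
        s = transition[s][a]
    return s

def equivalent_at(transition, s, v, w):
    """v ~ w at state s (eq. 1)."""
    return run(transition, s, v) == run(transition, s, w)

def equivalent_over(transition, v, w):
    """v ~_M w: equivalent at every state (eq. 2)."""
    return all(equivalent_at(transition, s, v, w) for s in transition)

# Toy 4-heading "rotation" MDP: states are headings 0..3; L and R
# rotate, F keeps the heading (position is abstracted away).
T = {h: {"L": (h - 1) % 4, "R": (h + 1) % 4, "F": h} for h in range(4)}

assert equivalent_over(T, "LR", "")    # left then right undoes itself
assert equivalent_over(T, "LL", "RR")  # two lefts = two rights
assert not equivalent_over(T, "L", "R")
```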
For example, knowing that going left then right is the same thing as going right then left, we can deduce that going two times left then two times right is the same thing as going two times right then two times left. In the same fashion, if opening a door twice produces the same effect as opening it once, opening the door three times does the same. We formalize these notions in what follows. First, we note that equivalent sequences can be concatenated.

Proposition 1. If we have two pairs of equivalent sequences over $M$, i.e. $w_1, w_2, w_3, w_4 \in A^\star$ such that
$$w_1 \sim w_2, \qquad w_3 \sim w_4,$$
then the concatenations of the sequences are also equivalent:
$$w_1 \cdot w_3 \sim w_2 \cdot w_4.$$
The proof is given in Appendix A.1.

We are now going to define formally the fact that the equivalence of two sequences can be deduced from an equivalence set $\Omega$. We first consider the previous example where an action $a$ has the effect of opening a door, such that $a.a \sim a$. We can then write $a.a.a \sim (a.a).a \sim (a).a \sim a.a \sim a$ by applying the equivalence $a.a \sim a$ twice and rearranging the parentheses. More generally and intuitively, the equivalence of two action sequences $v$ and $w$ can be deduced from $\Omega$, which we denote $v \sim_\Omega w$, if $v$ can be changed into $w$ iteratively, chaining equivalences of $\Omega$. More formally, we write $v \sim^1_\Omega w$ if $v$ can be changed into $w$ in one step, meaning:
$$\exists u_1, u_2, v_1, w_1 \in A^\star \text{ such that } v = u_1.v_1.u_2, \quad w = u_1.w_1.u_2, \quad v_1 \sim w_1 \in \Omega. \qquad (3)$$
For $n \ge 2$, we say that $v$ can be changed into $w$ in $n$ steps if there is a sequence $v_1, \ldots, v_n \in A^\star$ such that $v \sim^1_\Omega v_1 \sim^1_\Omega \cdots \sim^1_\Omega v_n = w$. Finally, we say that $v \sim_\Omega w$ if there is $n \in \mathbb{N}$ such that $v$ can be changed into $w$ in $n$ steps. The relation $\sim_\Omega$ is thus a formal way of extending equivalences from a fixed equivalence set $\Omega$, and at first glance not connected with $\sim$, which deals with the equivalences of the MDP dynamics. We now show a connection between the two notions. Theorem 1.
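The one-step relation of eq. (3) and its n-step closure can be illustrated with a small string-rewriting search. This is a hypothetical sketch (the names and the door example's encoding are invented here): to keep the search finite it caps intermediate sequences at the length of the longer input, so it may miss equivalences whose derivations must pass through longer intermediates.

```python
from collections import deque

# One pair in Omega: "opening a door twice = opening it once" (a.a ~ a).
OMEGA = [("aa", "a")]

def one_step(seq):
    """All w with seq ~1_Omega w: replace one occurrence of either side
    of a pair in Omega with the other side (eq. 3)."""
    out = set()
    for l, r in OMEGA + [(r, l) for l, r in OMEGA]:
        for i in range(len(seq) - len(l) + 1):
            if seq[i:i + len(l)] == l:
                out.add(seq[:i] + r + seq[i + len(l):])
    return out

def equivalent(v, w):
    """v ~_Omega w, checked by BFS over one-step rewrites with a
    length cap (sound but possibly incomplete)."""
    cap = max(len(v), len(w))
    seen, queue = {v}, deque([v])
    while queue:
        s = queue.popleft()
        if s == w:
            return True
        for t in one_step(s):
            if len(t) <= cap and t not in seen:
                seen.add(t)
                queue.append(t)
    return False

assert equivalent("aaa", "a")    # aaa ~ aa ~ a, as in the text
assert not equivalent("ab", "ba")
```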
Given an equivalence set $\Omega$, $\sim_\Omega$ is an equivalence relation. Furthermore, for $v, w \in A^\star$, $v \sim_\Omega w \Rightarrow v \sim w$. The proof is given in Appendix A.2. Given this relation between $\sim$ and $\sim_\Omega$, we will simplify the notation in what follows by writing $\sim$ instead of $\sim_\Omega$ when the equivalence set considered is unambiguous. As $\sim_\Omega$ is an equivalence relation, it provides a partition over action sequences: two action sequences in the same set lead to the same final state from any given state. | In the context of reinforcement learning, the authors propose an exploration strategy based on environment-specific prior knowledge of action equivalence. An example of such equivalence is rotating 180° twice in a grid world, as the agent comes back to the same original state: the action sequence forms an identity in this case. The proposed exploration, instead of picking a random action with probability ε/(number of actions), builds an abstracted lookahead tree of specified depth that merges equivalent action sequences and adjusts the distribution of ε over them. The authors demonstrate that i) this increases the number of unique state visitations in grid-world examples, and ii) DQN with their exploration strategy achieves a higher reward faster than the baseline. | SP:1f35871b8ec295dd84e991fdc57a45024bc07607
Better state exploration using action sequence equivalence | 1 INTRODUCTION . Despite the rapidly improving performance of Reinforcement Learning ( RL ) agents on a variety of tasks ( Mnih et al. , 2015 ; Silver et al. , 2016 ) , they remain largely sample-inefficient learners compared to humans ( Toromanoff et al. , 2019 ) . Contributing to this is the vast amount of prior knowledge humans bring to the table before their first interaction with a new task , including an understanding of physics , semantics , and affordances ( Dubey et al. , 2018 ) . The considerable quantity of data necessary to train agents is becoming more problematic as RL is applied to ever more challenging and complex tasks . Much research aims at tackling this issue , for example through transfer learning ( Rusu et al. , 2016 ) , meta learning , and hierarchical learning , where agents are encouraged to use what they learn in one environment to solve a new task more quickly . Other approaches attempt to use the structure of Markov Decision Processes ( MDP ) to accelerate learning without resorting to pretraining . Mahajan & Tulabandhula ( 2017 ) and Biza & Jr. ( 2019 ) learn simpler representations of MDPs that exhibit symmetrical structure , while van der Pol et al . ( 2020 ) show that environment invariances can be hard-coded into equivariant neural networks . A fundamental challenge standing in the way of improved sample efficiency is exploration . We consider a situation where the exact transition function of a Markov Decision Process is unknown , but some knowledge of its local dynamics is available under the form of a prior expectation that given sequences of actions have identical results . This way of encoding prior knowledge is sufficiently flexible to describe many useful environment structures , particularly when actions correspond to agent movement . 
For example, in a gridworld (called RotationGrid hereafter) where the agent can move forward (↑) and rotate 90° to the left (x) or to the right (y), the latter two actions are the inverse of each other, in that performing one undoes the effect of the other. During exploration, to encourage the visitation of not yet seen states, it is natural to simply ban sequences of actions that revert to previously visited states, following the reasoning of Tabu search (Glover, 1986). We observe further that yy and xx both lead to the same state (represented as state 4 in Figure 1). If actions were uniformly sampled, the chances of visiting this state would be much higher than any of the others. Based on these observations, we introduce a new method taking advantage of Equivalent Action SEquences for Exploration (EASEE), an overview of which can be found in Figure 1. EASEE looks ahead several steps and calculates action sampling probabilities to explore new states as uniformly as possible, conditionally on the action sequence equivalences given to it. It constructs a partial MDP which corresponds to a local representation of the true MDP around the current state. We then formulate the problem of determining the best distribution over action sequences as a linearly constrained convex optimization problem. Solving this optimization problem is computationally inexpensive and can be done once and for all before learning begins, providing a principled and tractable exploration policy that takes into account environment structure. This policy can easily be injected into existing reinforcement learning algorithms as a substitute for ε-greedy exploration. Our contribution is threefold. First, we formally introduce the notion of equivalent action sequences, a novel type of structure in Markov Decision Processes.
Then, we show that priors on this type of structure can easily be exploited during offline exploration by solving a convex optimization problem. Finally, we provide experimental insights and show that incorporating EASEE into a DQN (Mnih et al., 2015) improves agent performance in several environments with various structures.

Overview. We assume that we have sets of equivalent action sequences for the environment. Equivalent action sequences are sequences that lead to the same state. These sequences are used to build a DAG that models where the agent will end up after any sequence of actions of length d. Because some sequences are equivalent, several parent nodes may share a child node. A naive exploration scheme like ε-greedy would waste resources by over-exploring such child nodes. Instead, we leverage this information using the DAG constructed above; our method executes an exploratory action that maximizes the entropy of the future visited states.

2 RELATED WORK

Improved Exploration. The problem of ensuring that agents see sufficiently diverse states has received a lot of attention from the RL community. Many methods rely on intrinsic rewards (Schmidhuber, 1991; Chentanez et al., 2005; Şimşek & Barto, 2006; Lopes et al., 2012; Bellemare et al., 2016; Ostrovski et al., 2017; Pathak et al., 2017) to entice agents to unseen or misunderstood areas. In the tabular setting, these take the form of count-based exploration bonuses which guide the agent toward poorly visited states (e.g. Strehl & Littman (2008)). Scaling this method requires the use of function approximators (Burda et al., 2019; Badia et al., 2020; Flet-Berliac et al., 2021). Unlike EASEE, these methods necessitate the computation of non-stationary and vanishing novelty estimates, which require careful tuning to balance learning stability and exploration incentives.
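The overview above (enumerate depth-d action sequences, merge equivalent ones into shared DAG nodes, and reweight sampling toward uniform coverage of distinct outcomes) can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: EASEE solves a linearly constrained convex program, while the sketch uses simple inverse-multiplicity weighting, which achieves the uniform-outcome goal when all sequences have the same length. `ACTIONS`, `RULES`, `canonical`, and `sequence_weights` are names invented for the example, and the rewriting search assumes no length-increasing rule is needed to merge two sequences.

```python
from itertools import product

ACTIONS = ["F", "L", "R"]  # forward, rotate left, rotate right
# Prior knowledge, RotationGrid-style: LR and RL undo themselves,
# and two lefts reach the same state as two rights.
RULES = [("LR", ""), ("RL", ""), ("LL", "RR")]

def canonical(seq):
    """Smallest representative of seq's equivalence class, found by
    exhaustively applying non-expanding rewrites in both directions."""
    seen, frontier = {seq}, [seq]
    while frontier:
        s = frontier.pop()
        for lhs, rhs in RULES + [(r, l) for l, r in RULES]:
            if not lhs:  # skip length-increasing rewrites from the empty string
                continue
            start = 0
            while True:
                i = s.find(lhs, start)
                if i == -1:
                    break
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
                start = i + 1
    return min(seen, key=lambda t: (len(t), t))

def sequence_weights(depth):
    """Sampling probability per length-`depth` sequence so that each
    equivalence class (distinct outcome) is reached uniformly."""
    seqs = ["".join(p) for p in product(ACTIONS, repeat=depth)]
    classes = {}
    for s in seqs:
        classes.setdefault(canonical(s), []).append(s)
    k = len(classes)
    return {s: 1.0 / (k * len(members))
            for members in classes.values() for s in members}

weights = sequence_weights(2)
# LL/RR share one class and LR/RL another, so each of those four
# sequences gets half the mass of a lone sequence such as FF.
assert abs(sum(weights.values()) - 1.0) < 1e-9
assert weights["LL"] < weights["FF"]
```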
Moreover , because these bonuses are learned , and do not allow for the use of prior structure knowledge , they constitute an orthogonal approach to ours . In Gupta et al . ( 2018 ) exploration strategies are learned from prior experience . Unlike EASEE this requires meta-training over a distribution of tasks . Redundancies in Trajectories The idea that different trajectories can overlap and induce redundancies in state visitation is used in Leurent & Maillard ( 2020 ) and Czech et al . ( 2020 ) in the case of Monte-Carlo tree search . However , they require a generative model , and propose a new Bellman operator to update node values according to newly uncovered transitions rather than modifying exploration . Closer to our work , Caselles-Dupré et al . ( 2020 ) study structure in action sequences , but restrict themselves to commutative properties . Grinsztajn et al . ( 2021 ) quantifies the probability of cycling back to a previously visited state , motivated by the analysis of reversible actions . Tabu search ( Glover , 1986 ) is a meta-heuristic which uses knowledge of the past to escape local optima . It is popular for combinatorial optimization ( Hertz & Werra , 2005 ) . Like our approach , it relies on a local structure : actions which are known to cancel out recent moves are deemed tabu , and are forbidden for a short period of time . This prevents cycling around already found solutions , and thus encourages exploration . In Abramson & Wechsler ( 2003 ) , tabu search is combined with reinforcement learning , using action priors . However , their method can not make use of more complex action-sequence structure . Maximum State-Visitation Entropy Our goal to explore as uniformly as possible every nearby state can be seen as a local version of the Maximum State-Visitation Entropy problem ( MSVE ) ( de Farias & Van Roy , 2003 ; Hazan et al. , 2019 ; Lee et al. , 2019 ; Guo et al. , 2021 ) . 
MSVE formulates exploration as a policy optimization problem whose solution maximizes the entropy of the distribution of visited states. Although some of these works (Hazan et al., 2019; Lee et al., 2019; Guo et al., 2021) can make use of priors about state similarities, they learn a global policy and cannot exploit structure in action sequences.

Action Space Structure. The idea of exploiting structure in action spaces is not new. Large discrete action spaces may be embedded in continuous action spaces either by leveraging prior information (Dulac-Arnold et al., 2016) or learning representations (Chandak et al., 2019). Tavakoli et al. (2018) manage high-dimensional action spaces by assuming a degree of independence between each dimension. Farquhar et al. (2020) introduce a curriculum of progressively growing action spaces to accelerate learning. These methods aim to improve the generalization of policies to unseen actions in large action spaces rather than enhancing exploration. Leveraging previous trajectories to extract prior knowledge, Tennenholtz & Mannor (2019) provide an understanding of actions through their context in demonstrations.

3 FORMALISM

3.1 EQUIVALENCE OVER ACTION SEQUENCES

We consider a Markov Decision Process (MDP) defined as a 5-tuple $M = (S, A, T, R, \gamma)$, with $S$ the set of states, $A$ the action set, $T$ the transition function, $R$ the reward function, and $\gamma$ the discount factor. The set of actions is assumed to be finite, $|A| < \infty$. We restrict ourselves to deterministic MDPs. A possible extension to MDPs with stochastic dynamics is discussed in Appendix A.6. In the following, the notations are borrowed from formal language theory. Sequences of actions are analogous to strings over the set of symbols $A$ (possible actions). The set of all possible sequences of actions is denoted $A^\star = \bigcup_{k=0}^{\infty} A^k$, where $A^k$ is the set of all sequences of length $k$ and $A^0$ contains as its single element the empty sequence $\Lambda$.
We use $.$ for the concatenation operator, such that for $v_1 \in A^{h_1}$, $v_2 \in A^{h_2}$, $v_1.v_2 \in A^{h_1+h_2}$. The transition function $T : S \times A \to S$ gives the next state $s'$ when action $a$ is taken in state $s$: $T(s, a) = s'$. We recursively extend this operator to action sequences, $T : S \times A^\star \to S$, such that $\forall s \in S$, $\forall a \in A$, $\forall w \in A^\star$:
$$T(s, \Lambda) = s, \qquad T(s, w.a) = T(T(s, w), a).$$
Intuitively, this operator gives the new state of the MDP after a sequence of actions is performed from state $s$.

Definition 1 (Equivalent sequences). We say that two action sequences $a_1 \ldots a_n$ and $a'_1 \ldots a'_m \in A^\star$ are equivalent at state $s \in S$ if
$$T(s, a_1 \ldots a_n) = T(s, a'_1 \ldots a'_m). \qquad (1)$$
Two sequences of actions are equivalent over $M$ if they are equivalent at state $s$ for all $s$ in $S$. This is written:
$$a_1 \ldots a_n \sim_M a'_1 \ldots a'_m. \qquad (2)$$
This means that we consider two sequences of actions to be equivalent when following one or the other will always lead to the same state. When the considered MDP $M$ is unambiguous, we simplify the notation by writing $\sim$ instead of $\sim_M$. We argue that some priors about the environments can be easily encoded as a small set of action sequence equivalences. For example, we may know that going left then right is the same thing as going right then left, that rotating two times to the left is the same thing as rotating two times to the right, or that opening a door twice is the same thing as opening the door once. All these priors can be encoded as a set of equivalences:

Definition 2 (Equivalence set). Given an MDP $M$ and several equivalent sequence pairs $v_1 \sim w_1, v_2 \sim w_2, \ldots, v_n \sim w_n$, we say that $\Omega = \{\{v_1, w_1\}, \{v_2, w_2\}, \ldots, \{v_n, w_n\}\}$ is an equivalence set over $M$. Formally, $\Omega$ is a set of pairs of elements of $A^\star$, such that $\Omega \subset (A^\star)^2$. By abuse of notation, we write $v \sim w \in \Omega$ if $\{v, w\} \in \Omega$. Intuitively, it is clear that action sequence equivalences can be combined to form new, longer equivalences.
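Definition 1 can be checked mechanically on a small deterministic MDP. The sketch below is an invented toy example (not from the paper): it implements the recursive extension $T(s, w)$ of the transition function to sequences, then tests equivalence at one state and over all states. The toy MDP abstracts position away and keeps only the agent's heading, so forward is a no-op here.

```python
# A deterministic tabular MDP given as: transition[state][action] -> next state.

def run(transition, s, seq):
    """T(s, w): apply the sequence left to right,
    i.e. T(s, Λ) = s and T(s, w.a) = T(T(s, w), a)."""
    for a in seq:
        s = transition[s][a]
    return s

def equivalent_at(transition, s, v, w):
    """v ~ w at state s (eq. 1)."""
    return run(transition, s, v) == run(transition, s, w)

def equivalent_over(transition, v, w):
    """v ~_M w: equivalent at every state (eq. 2)."""
    return all(equivalent_at(transition, s, v, w) for s in transition)

# Toy 4-heading "rotation" MDP: states are headings 0..3; L and R
# rotate, F keeps the heading (position is abstracted away).
T = {h: {"L": (h - 1) % 4, "R": (h + 1) % 4, "F": h} for h in range(4)}

assert equivalent_over(T, "LR", "")    # left then right undoes itself
assert equivalent_over(T, "LL", "RR")  # two lefts = two rights
assert not equivalent_over(T, "L", "R")
```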
For example, knowing that going left then right is the same thing as going right then left, we can deduce that going left twice then right twice is the same thing as going right twice then left twice. In the same fashion, if opening a door twice produces the same effect as opening it once, then opening the door three times does as well. We formalize these notions in what follows. First, we note that equivalent sequences can be concatenated.

Proposition 1. If we have two pairs of equivalent sequences over M, i.e., w1, w2, w3, w4 ∈ A* such that w1 ∼ w2 and w3 ∼ w4, then the concatenations of the sequences are also equivalent: w1.w3 ∼ w2.w4.

The proof is given in Appendix A.1. We now formally define the fact that the equivalence of two sequences can be deduced from an equivalence set Ω. First consider the previous example where an action a has the effect of opening a door, so that a.a ∼ a. We can then write a.a.a ∼ (a.a).a ∼ (a).a ∼ a.a ∼ a by applying the equivalence a.a ∼ a twice and rearranging the parentheses. More generally and intuitively, the equivalence of two action sequences v and w can be deduced from Ω, which we denote v ∼Ω w, if v can be changed into w iteratively by chaining equivalences of Ω. More formally, we write v ∼1Ω w if v can be changed into w in one step, meaning:

∃ u1, u2, v1, w1 ∈ A* such that v = u1.v1.u2, w = u1.w1.u2, and v1 ∼ w1 ∈ Ω   (3)

For n ≥ 2, we say that v can be changed into w in n steps if there is a sequence v1, ..., vn ∈ A* such that v ∼1Ω v1 ∼1Ω ··· ∼1Ω vn = w. Finally, we say that v ∼Ω w if there is an n ∈ N such that v can be changed into w in n steps. The relation ∼Ω is thus a formal way of extending equivalences from a fixed equivalence set Ω, and at first glance it is not connected with ∼, which deals with the equivalences of the MDP dynamics. We now show a connection between the two notions. Theorem 1.
Given an equivalence set Ω, ∼Ω is an equivalence relation. Furthermore, for v, w ∈ A*, v ∼Ω w ⇒ v ∼ w. The proof is given in Appendix A.2. Given this relation between ∼ and ∼Ω, we will simplify the notation in what follows by writing ∼ instead of ∼Ω when the equivalence set considered is unambiguous. As ∼Ω is an equivalence relation, it provides a partition over action sequences: two action sequences in the same set lead to the same final state from any given state. | The paper proposes a method that, from a simple encoding of sequences of actions that have equivalent outcomes in an MDP, allows computing a local policy for high-quality local exploration (it replaces the random action of $\varepsilon$-greedy with an action that maximizes the entropy of future visited states). Full algorithmic details about how to do that are given in the paper, and experiments on a few environments show that the method is promising. The Freeway experiment is particularly interesting, as it shows that a (small) benefit can be obtained from the method even on an Atari game, with minimal domain knowledge (only a few equivalent action sequences are encoded), and using a slightly modified DQN algorithm. | SP:1f35871b8ec295dd84e991fdc57a45024bc07607
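The rewriting relation ∼Ω described in the formalism above can be sketched as a breadth-first search over one-step rewrites. Representing action sequences as strings of single-character actions and bounding the search depth are illustrative assumptions:

```python
# Sketch of the rewriting relation ∼Ω: v ∼Ω w if v can be changed into w
# by iteratively replacing a factor v1 with w1 for some {v1, w1} ∈ Ω.
from collections import deque

def one_step_rewrites(v, omega):
    """All w with v ∼1Ω w: w = u1.w1.u2 where v = u1.v1.u2 and v1 ∼ w1 ∈ Ω.
    Pairs in Ω are unordered, so both rewrite directions are tried."""
    for pair in omega:
        for v1, w1 in (tuple(pair), tuple(reversed(tuple(pair)))):
            start = 0
            while (i := v.find(v1, start)) != -1:
                yield v[:i] + w1 + v[i + len(v1):]
                start = i + 1

def deducible(v, w, omega, max_steps=10):
    """Breadth-first search for a chain v ∼1Ω v1 ∼1Ω ... ∼1Ω vn = w."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, n = frontier.popleft()
        if u == w:
            return True
        if n < max_steps:
            for x in one_step_rewrites(u, omega):
                if x not in seen:
                    seen.add(x)
                    frontier.append((x, n + 1))
    return False

# The door example from the text: opening twice equals opening once, a.a ∼ a.
omega = [("aa", "a")]
print(deducible("aaa", "a", omega))   # True: aaa ∼ aa ∼ a
```

Note that deciding v ∼Ω w in general is a word problem and can be undecidable; the depth bound makes this sketch a sound but incomplete check.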
Personalized Heterogeneous Federated Learning with Gradient Similarity | 1 INTRODUCTION. With the popularity of smartphones, personal computers, and other devices, the data stored on them has increased dramatically. These data are related to each other but exist independently on different devices. Also, their owners are often not willing to share their private data, which prompted Federated Learning (FL) (McMahan et al., 2017). Conventional FL is a distributed machine learning framework in which multiple clients train a global model together under a central server's coordination without sharing their local data. Its common process is as follows: (1) at each global iteration, the central server broadcasts its current global model to the clients; (2) with the help of the global model, each client updates its local model, trained on its local data, and sends it back to the central server; (3) the central server aggregates the local models to get a new global model. The FL process repeats these three steps until convergence. Under the premise of privacy protection, it can obtain a shared global model with higher generality. Nonetheless, the wide application of FL is still facing the following two major challenges: • Statistical heterogeneity. As the user preferences of each device may be different, the data distribution on each device may also be inconsistent. Consequently, the data as a whole across all the clients may be unbalanced and non-IID. This may cause undesirable performance for the FL participants. • Systems heterogeneity. Due to differences in users' hardware and network bandwidth, conventional FL models are prone to generate stragglers, whose models are easily discarded after the server's time-out. Firstly, to address the statistical heterogeneity, the personalization of FL is widely studied.
The majority of the available personalized FL (PFL) methods focus on updating one shared global model's parameters via client fine-tuning. FedAvg (McMahan et al., 2017) is treated by (Jiang et al., 2019a) as a client's meta-learning-like process that fine-tunes one shared global model with higher accuracy. FedMeta (Chen et al., 2018), also a personalized FL method based on meta-learning, generates a local, personalized model for each client using a common global model. Different from the aforementioned methods, we propose a Subclass Personalized Federated Learning (SPFL) algorithm, which utilizes the Softmax Normalized Gradient Similarity (SNGS) to generate global models, each specific to one client, for parameter fine-tuning. Secondly, to tackle the systems heterogeneity, the server of an asynchronous FL algorithm named FedAsync (Xie et al., 2019) can immediately perform aggregation after receiving a local model, reducing the waiting time for stragglers. In TWAFL (Chen et al., 2019), a time-weighted aggregation strategy is used by the server to handle stragglers according to the staleness of the received model parameters. Although these asynchronous FL algorithms can overcome some difficulties encountered in synchronous aggregation, such as waiting for responses from clients, they are still inefficient, as they directly aggregate the received local models on the server without taking into consideration the iteration gap of stragglers' local models at each global communication round. Therefore, we propose the Leap Gradient Approximation (LGA) algorithm, which uses SNGS to predict the local models of the stragglers in each global communication round. These local models estimated by the server are aggregated with the non-stragglers' local models on the server to attain one shared global model.
Further, to address real scenarios where systems heterogeneity and statistical heterogeneity often co-exist, personalization is also employed in our asynchronous FL. Before each individual personalized model is dispatched from the server to a client, it is computed using the shared global model and the SNGS value between the server and that client. We refer to this personalized asynchronous FL approach as the Personalized Leap Gradient Approximation (PLGA) algorithm. Contribution. The main contributions of this paper towards personalized heterogeneous FL can be summarized as follows: (1) we propose the SPFL algorithm for both IID and non-IID data in personalized synchronous FL, where the server uses the SNGS to weight the relationships between clients and delivers a personalized global model to each client; (2) we propose the PLGA algorithm for personalized asynchronous FL, in which the server also applies the SNGS to weight the relationship between each client and itself, and uses the first-order Taylor expansion of the gradient to approximate the models of the delayed clients; (3) the stage strategy of ResNet is applied to improve the performance of both the SPFL and PLGA algorithms. 2 RELATED WORK. In general, personalization in FL can be achieved in a variety of ways. The work (Hard et al., 2018) shows that adding user-related features into the local inference process to get a personalized model may be better than using the global one. Completely decentralized algorithms that personalize each client's model by smoothing similar tasks or model parameters between similar data distributions are studied in (Bellet et al., 2018; Vanhaesebrouck et al., 2017). Algorithms have also been proposed to achieve personalization by jointly learning the model on a similarity graph structure (Lalitha et al., 2019; He et al., 2021). FedFOMO (Zhang et al., 2020) is another approach to PFL, where each client only federates with other relevant clients for model aggregation.
The pFedHN (Shamsian et al., 2021) algorithm uses a hypernetwork to generate a different personalized model for each client. Furthermore, PFL can also be studied through its relationship with other machine learning paradigms. The close relationship between FL and meta-learning is analyzed by (Khodak et al., 2019; Jiang et al., 2019b). Following the classic meta-learning algorithm MAML (Finn et al., 2017), the Reptile algorithm (Nichol et al., 2018) was proposed, which has a close relationship with the Federated Averaging algorithm. When each individualized model's learning is treated as a task, such a scenario can be viewed as multi-task learning (Zhang & Yang, 2017), in which each task learns a task-related model. MOCHA (Smith et al., 2017) is designed by regarding FL as multi-task learning, and it addresses communication efficiency and fault tolerance. To provide a good global model, there are PFL methods applying transfer learning and domain transfer (Mansour et al., 2009; Cortes & Mohri, 2014; Ben-David et al., 2010). Model mixing is another strategy to approach PFL (Deng et al., 2020; Hanzely & Richtárik, 2020; Liang et al., 2020). Alternatively, other PFL methods apply clustering (Mansour et al., 2020; Ghosh et al., 2020; Duan et al., 2020). Finally, pFedMe (Dinh et al., 2020) applies Moreau envelopes as clients' regularized loss functions to achieve personalization. There are just a few studies on asynchronous FL. TWAFL (Chen et al., 2019) aggregates temporally weighted local models on the server. FedAsync (Xie et al., 2019) introduced an algorithm to balance the parameters of the updated model and the last global model. Meanwhile, the distillation technique has also been applied in (Li & Wang, 2019; Bistritz et al., 2020). 3 METHOD. Our method generates a personalized model for each client during the server aggregation process.
Therefore, the initialization model of each client is personalized. At the same time, we also consider the contribution of different neural network layers to the personalized model. We will introduce the application of this personalized algorithm to synchronous FL and asynchronous FL in the following subsections.

Algorithm 1: Subclass Personalized Federated Learning (SPFL)
Input: initialized global model parameters w^0; initialized SNGS matrix S̃_s ∈ R^{N×N}; initialized personalized local model of each client, {w^0_i} = w^0; update frequency Γ and the number S of SNGS matrices; client learning rate β; server learning rate α; local epochs E; local batch size B of each client
Output: model parameters {w_i}

Def MainLoop:
  for t = 0, 1, ... do
    perform the following steps in parallel for each client C_i ∈ C
    if t % Γ == 0 then
      w^t_gi = (1/N) Σ_{i=1}^N w^t_i
      {g^t_i} = ClientUpdate(w^t_gi, C_i, β)
      S̃_s = UpdateS̃Matrix({g^t_i})
    {g^t_i} = ClientUpdate(w^t_gi, C_i, β)
    {w^{t+1}_si} = ServerUpdate({w^t_gi}, {g^t_i}, S̃_s, α)

Def ClientUpdate(w^t_gi, C_i, β):
  w^t_i = w^t_gi
  for e = 0; e < E; e++ do
    for each batch b of size B do
      w^t_i ← w^t_i − 2β ∇L_i(f(w^t_i, x_b), y_b)
  return {w^t_gi − w^t_i}

Def UpdateS̃Matrix({g_i}):
  for s ∈ S do
    perform (3) and (6) to return S̃_s(i, j)

Def ServerUpdate({w^t_si}, {g^t_i}, S̃_s, α):
  for s ∈ S do
    perform (7) to get w^{t+1}_si
  return {w^{t+1}_gi}

Definition of personalized FL algorithm. In FL, there is a set of N clients C = {C_i}, i = 1, ..., N. Define the dataset over all clients as D = {D_i}, i = 1, ..., N, with D_i the dataset of client C_i. The sample size of each dataset D_i is |D_i|. (x_j, y_j) is a sample of D_i, with x_j ∈ R^d the input feature and y_j ∈ R^t the corresponding label. The prediction model of client C_i is defined as ŷ_j = f(w_i, x_j), where w_i ∈ R^z is the model parameter. The dataset on each client C_i is assumed to follow a distribution P_i.
And the loss function on each client C_i is L_i(w_i). The optimization goal for each client is:

argmin_{w_i} L_i(w_i) = E_{D_i∼P_i}[L_i(f(w_i, x_j), y_j)] ≈ (1/|D_i|) Σ_{j=1}^{|D_i|} L_i(f(w_i, x_j), y_j).   (1)

3.1 SUBCLASS PERSONALIZED FL (SPFL) ALGORITHM. We compared the performance of each client's independent training (i.e., the model on each client performs SGD only on the locally available data, and model averaging is not performed) against conventional global FL algorithms such as FedAvg. The results in Appendix B show that FL usually outperforms each client's independent training on IID data, demonstrating that the global model trained with multiple clients' data improves the generalization ability of each client's model. On the other hand, for some clients, independent training has better performance on non-IID data. Our experiment in Appendix B illustrates that the global FL model cannot adapt well to heterogeneous data. To tackle this problem, we propose a personalized FL algorithm that improves the performance of conventional global FL algorithms on non-IID data. At the same time, it still maintains high performance on IID data. Firstly, we introduce a similarity matrix S ∈ R^{N×N} to model the relationship between different clients, with S(i, j) representing the similarity between C_i and C_j. If the data distributions P_i and P_j are similar, then the value of S(i, j) is relatively high, and vice versa. For any two clients C_i and C_j, suppose they receive the global model parameters w^t at the same time t. After the local update of both clients using w^t, their local model parameters are w^t_i and w^t_j, respectively. Then the SGD updates of the corresponding gradients of w^t_i and w^t_j are:

g^t_i = w^t − w^t_i,   g^t_j = w^t − w^t_j.   (2)

The correlation between C_i and C_j based on the global model w^t is measured by the cosine similarity of the gradient updates g^t_i and g^t_j, as calculated in (3):

S(i, j) = (g^t_i)ᵀ g^t_j / (‖g^t_i‖ ‖g^t_j‖),   (3)

where ‖·‖ is the vector L2 norm. The aggregation strategy on the server for C_i is as follows:

w^{t+1}_i = w^t_i − α Σ_{j=1}^N (|D_j| / |D|) S(i, j) g^t_j,   (4)

where α is the learning rate of the server. As the gradient itself is extremely sensitive to direction, when S(i, j) is negative, multiplying it with the original gradient g^t_j will reverse the gradient direction. Therefore, a softmax function is applied to make S(i, j) nonnegative and normalized as S̃(i, j). S̃(i, j), the softmax version of the similarity relationship between C_i and C_j, is defined as:

S̃(i, j) = e^{S(i, j)} / Σ_{j=1}^N e^{S(i, j)}.   (5)

The experimental results in Appendix B show that different layers influence personalization differently in a convolutional neural network (CNN). Inspired by ResNet, we use a stage strategy, where a stage contains a set of network layers, to measure the similarity between clients. Therefore, the similarity matrix S̃ is calculated as S̃_s based on a stage as in (6), where the subscript s indicates a stage:

S̃_s(i, j) = e^{S_s(i, j)} / Σ_{j=1}^N e^{S_s(i, j)}.   (6)

Finally, the stage-based aggregation strategy on the server for C_i is defined as:

w^{t+1}_si = w^t_si − α Σ_{j=1}^N (|D_j| / |D|) S̃_s(i, j) g^t_sj,   (7)

where α is the learning rate of the server, and g^t_sj and w^t_si are the stage-s gradient update of client j and the stage-s model parameters of client i at iteration round t, respectively. As the model changes with the iteration round t, so does the gradient of the model. Consequently, the similarity matrix S̃ obtained from the gradients will also change. Therefore, the similarity matrix is updated regularly during the training process.
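A minimal sketch of one SNGS-based server round (the pseudo-gradients of eq. (2), the cosine similarities of eq. (3), the softmax normalization of eq. (5), and the personalized update of eq. (4) with the normalized weights). The flattened parameter vectors and the toy shapes below are assumptions for illustration:

```python
# Sketch of SNGS-based personalized aggregation on the server.
import numpy as np

def sngs_aggregate(w_global, w_locals, sizes, alpha):
    """One server round: cosine-similarity matrix S between pseudo-gradients,
    row-wise softmax S~, then a personalized update per client."""
    N = len(w_locals)
    # eq. (2): pseudo-gradients g_i = w^t - w_i^t
    g = np.stack([w_global - w_i for w_i in w_locals])
    # eq. (3): cosine similarity between every pair of pseudo-gradients
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    S = (g @ g.T) / (norms @ norms.T)
    # eq. (5): row-wise softmax keeps weights nonnegative and normalized
    S_tilde = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)
    # eq. (4) with S~: each client's update weighted by data share and S~
    shares = np.asarray(sizes) / sum(sizes)
    return [w_locals[i] - alpha * (shares * S_tilde[i]) @ g for i in range(N)]

# Hypothetical example: 3 clients, a 4-dimensional "model".
rng = np.random.default_rng(0)
w_t = rng.normal(size=4)
locals_ = [w_t + rng.normal(scale=0.1, size=4) for _ in range(3)]
personalized = sngs_aggregate(w_t, locals_, sizes=[100, 50, 50], alpha=0.5)
assert len(personalized) == 3 and personalized[0].shape == (4,)
```

The row-wise softmax is the key design choice: it guarantees the mixing weights for each client are nonnegative, so no peer's gradient is applied with a reversed direction.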
During each update, the model parameters of all clients are aggregated with equal weight to get the w^t of (2), and the similarity matrix is calculated based on w^t. From the viewpoint of each aggregation, the procedure is still similar to the federated averaging algorithm. However, each client obtains a personalized model based on its similarity matrix in the training rounds between aggregations. In this process, each client is individually regarded as a subclass, and the algorithm is called the Subclass Personalized Federated Learning (SPFL) algorithm. The algorithm is shown in Alg. 1. | This paper proposed a personalized federated learning algorithm which takes into account the similarity of gradients of different users to update the model. More formally, the authors define $\tilde{S}(i,j)$ as a measure of similarity between the gradients of two users $i$ and $j$, and then update the model of user $i$ by weighting the gradient of user $j$ by $\tilde{S}(i,j)$. The authors study their method in various numerical settings. | SP:edda5940fcfe72533d9925b2d73f5ed4c411e4bb
Personalized Heterogeneous Federated Learning with Gradient Similarity | 1 INTRODUCTION. With the popularity of smartphones, personal computers, and other devices, the data stored on them has increased dramatically. These data are related to each other but exist independently on different devices. Also, their owners are often not willing to share their private data, which prompted Federated Learning (FL) (McMahan et al., 2017). Conventional FL is a distributed machine learning framework in which multiple clients train a global model together under a central server's coordination without sharing their local data. Its common process is as follows: (1) at each global iteration, the central server broadcasts its current global model to the clients; (2) with the help of the global model, each client updates its local model, trained on its local data, and sends it back to the central server; (3) the central server aggregates the local models to get a new global model. The FL process repeats these three steps until convergence. Under the premise of privacy protection, it can obtain a shared global model with higher generality. Nonetheless, the wide application of FL is still facing the following two major challenges: • Statistical heterogeneity. As the user preferences of each device may be different, the data distribution on each device may also be inconsistent. Consequently, the data as a whole across all the clients may be unbalanced and non-IID. This may cause undesirable performance for the FL participants. • Systems heterogeneity. Due to differences in users' hardware and network bandwidth, conventional FL models are prone to generate stragglers, whose models are easily discarded after the server's time-out. Firstly, to address the statistical heterogeneity, the personalization of FL is widely studied.
The majority of the available personalized FL (PFL) methods focus on updating one shared global model's parameters via client fine-tuning. FedAvg (McMahan et al., 2017) is treated by (Jiang et al., 2019a) as a client's meta-learning-like process that fine-tunes one shared global model with higher accuracy. FedMeta (Chen et al., 2018), also a personalized FL method based on meta-learning, generates a local, personalized model for each client using a common global model. Different from the aforementioned methods, we propose a Subclass Personalized Federated Learning (SPFL) algorithm, which utilizes the Softmax Normalized Gradient Similarity (SNGS) to generate global models, each specific to one client, for parameter fine-tuning. Secondly, to tackle the systems heterogeneity, the server of an asynchronous FL algorithm named FedAsync (Xie et al., 2019) can immediately perform aggregation after receiving a local model, reducing the waiting time for stragglers. In TWAFL (Chen et al., 2019), a time-weighted aggregation strategy is used by the server to handle stragglers according to the staleness of the received model parameters. Although these asynchronous FL algorithms can overcome some difficulties encountered in synchronous aggregation, such as waiting for responses from clients, they are still inefficient, as they directly aggregate the received local models on the server without taking into consideration the iteration gap of stragglers' local models at each global communication round. Therefore, we propose the Leap Gradient Approximation (LGA) algorithm, which uses SNGS to predict the local models of the stragglers in each global communication round. These local models estimated by the server are aggregated with the non-stragglers' local models on the server to attain one shared global model.
Further, to address real scenarios where systems heterogeneity and statistical heterogeneity often co-exist, personalization is also employed in our asynchronous FL. Before each individual personalized model is dispatched from the server to a client, it is computed using the shared global model and the SNGS value between the server and that client. We refer to this personalized asynchronous FL approach as the Personalized Leap Gradient Approximation (PLGA) algorithm. Contribution. The main contributions of this paper towards personalized heterogeneous FL can be summarized as follows: (1) we propose the SPFL algorithm for both IID and non-IID data in personalized synchronous FL, where the server uses the SNGS to weight the relationships between clients and delivers a personalized global model to each client; (2) we propose the PLGA algorithm for personalized asynchronous FL, in which the server also applies the SNGS to weight the relationship between each client and itself, and uses the first-order Taylor expansion of the gradient to approximate the models of the delayed clients; (3) the stage strategy of ResNet is applied to improve the performance of both the SPFL and PLGA algorithms. 2 RELATED WORK. In general, personalization in FL can be achieved in a variety of ways. The work (Hard et al., 2018) shows that adding user-related features into the local inference process to get a personalized model may be better than using the global one. Completely decentralized algorithms that personalize each client's model by smoothing similar tasks or model parameters between similar data distributions are studied in (Bellet et al., 2018; Vanhaesebrouck et al., 2017). Algorithms have also been proposed to achieve personalization by jointly learning the model on a similarity graph structure (Lalitha et al., 2019; He et al., 2021). FedFOMO (Zhang et al., 2020) is another approach to PFL, where each client only federates with other relevant clients for model aggregation.
The pFedHN (Shamsian et al., 2021) algorithm uses a hypernetwork to generate a different personalized model for each client. Furthermore, PFL can also be studied through its relationship with other machine learning paradigms. The close relationship between FL and meta-learning is analyzed by (Khodak et al., 2019; Jiang et al., 2019b). Following the classic meta-learning algorithm MAML (Finn et al., 2017), the Reptile algorithm (Nichol et al., 2018) was proposed, which has a close relationship with the Federated Averaging algorithm. When each individualized model's learning is treated as a task, such a scenario can be viewed as multi-task learning (Zhang & Yang, 2017), in which each task learns a task-related model. MOCHA (Smith et al., 2017) is designed by regarding FL as multi-task learning, and it addresses communication efficiency and fault tolerance. To provide a good global model, there are PFL methods applying transfer learning and domain transfer (Mansour et al., 2009; Cortes & Mohri, 2014; Ben-David et al., 2010). Model mixing is another strategy to approach PFL (Deng et al., 2020; Hanzely & Richtárik, 2020; Liang et al., 2020). Alternatively, other PFL methods apply clustering (Mansour et al., 2020; Ghosh et al., 2020; Duan et al., 2020). Finally, pFedMe (Dinh et al., 2020) applies Moreau envelopes as clients' regularized loss functions to achieve personalization. There are just a few studies on asynchronous FL. TWAFL (Chen et al., 2019) aggregates temporally weighted local models on the server. FedAsync (Xie et al., 2019) introduced an algorithm to balance the parameters of the updated model and the last global model. Meanwhile, the distillation technique has also been applied in (Li & Wang, 2019; Bistritz et al., 2020). 3 METHOD. Our method generates a personalized model for each client during the server aggregation process.
Therefore, the initialization model of each client is personalized. At the same time, we also consider the contribution of different neural network layers to the personalized model. We will introduce the application of this personalized algorithm to synchronous FL and asynchronous FL in the following subsections.

Algorithm 1: Subclass Personalized Federated Learning (SPFL)
Input: initialized global model parameters w^0; initialized SNGS matrix S̃_s ∈ R^{N×N}; initialized personalized local model of each client, {w^0_i} = w^0; update frequency Γ and the number S of SNGS matrices; client learning rate β; server learning rate α; local epochs E; local batch size B of each client
Output: model parameters {w_i}

Def MainLoop:
  for t = 0, 1, ... do
    perform the following steps in parallel for each client C_i ∈ C
    if t % Γ == 0 then
      w^t_gi = (1/N) Σ_{i=1}^N w^t_i
      {g^t_i} = ClientUpdate(w^t_gi, C_i, β)
      S̃_s = UpdateS̃Matrix({g^t_i})
    {g^t_i} = ClientUpdate(w^t_gi, C_i, β)
    {w^{t+1}_si} = ServerUpdate({w^t_gi}, {g^t_i}, S̃_s, α)

Def ClientUpdate(w^t_gi, C_i, β):
  w^t_i = w^t_gi
  for e = 0; e < E; e++ do
    for each batch b of size B do
      w^t_i ← w^t_i − 2β ∇L_i(f(w^t_i, x_b), y_b)
  return {w^t_gi − w^t_i}

Def UpdateS̃Matrix({g_i}):
  for s ∈ S do
    perform (3) and (6) to return S̃_s(i, j)

Def ServerUpdate({w^t_si}, {g^t_i}, S̃_s, α):
  for s ∈ S do
    perform (7) to get w^{t+1}_si
  return {w^{t+1}_gi}

Definition of personalized FL algorithm. In FL, there is a set of N clients C = {C_i}, i = 1, ..., N. Define the dataset over all clients as D = {D_i}, i = 1, ..., N, with D_i the dataset of client C_i. The sample size of each dataset D_i is |D_i|. (x_j, y_j) is a sample of D_i, with x_j ∈ R^d the input feature and y_j ∈ R^t the corresponding label. The prediction model of client C_i is defined as ŷ_j = f(w_i, x_j), where w_i ∈ R^z is the model parameter. The dataset on each client C_i is assumed to follow a distribution P_i.
And the loss function on each client C_i is L_i(w_i). The optimization goal for each client is:

argmin_{w_i} L_i(w_i) = E_{D_i∼P_i}[L_i(f(w_i, x_j), y_j)] ≈ (1/|D_i|) Σ_{j=1}^{|D_i|} L_i(f(w_i, x_j), y_j).   (1)

3.1 SUBCLASS PERSONALIZED FL (SPFL) ALGORITHM. We compared the performance of each client's independent training (i.e., the model on each client performs SGD only on the locally available data, and model averaging is not performed) against conventional global FL algorithms such as FedAvg. The results in Appendix B show that FL usually outperforms each client's independent training on IID data, demonstrating that the global model trained with multiple clients' data improves the generalization ability of each client's model. On the other hand, for some clients, independent training has better performance on non-IID data. Our experiment in Appendix B illustrates that the global FL model cannot adapt well to heterogeneous data. To tackle this problem, we propose a personalized FL algorithm that improves the performance of conventional global FL algorithms on non-IID data. At the same time, it still maintains high performance on IID data. Firstly, we introduce a similarity matrix S ∈ R^{N×N} to model the relationship between different clients, with S(i, j) representing the similarity between C_i and C_j. If the data distributions P_i and P_j are similar, then the value of S(i, j) is relatively high, and vice versa. For any two clients C_i and C_j, suppose they receive the global model parameters w^t at the same time t. After the local update of both clients using w^t, their local model parameters are w^t_i and w^t_j, respectively. Then the SGD updates of the corresponding gradients of w^t_i and w^t_j are:

g^t_i = w^t − w^t_i,   g^t_j = w^t − w^t_j.   (2)

The correlation between C_i and C_j based on the global model w^t is measured by the cosine similarity of the gradient updates g^t_i and g^t_j, as calculated in (3):

S(i, j) = (g^t_i)ᵀ g^t_j / (‖g^t_i‖ ‖g^t_j‖),   (3)

where ‖·‖ is the vector L2 norm. The aggregation strategy on the server for C_i is as follows:

w^{t+1}_i = w^t_i − α Σ_{j=1}^N (|D_j| / |D|) S(i, j) g^t_j,   (4)

where α is the learning rate of the server. As the gradient itself is extremely sensitive to direction, when S(i, j) is negative, multiplying it with the original gradient g^t_j will reverse the gradient direction. Therefore, a softmax function is applied to make S(i, j) nonnegative and normalized as S̃(i, j). S̃(i, j), the softmax version of the similarity relationship between C_i and C_j, is defined as:

S̃(i, j) = e^{S(i, j)} / Σ_{j=1}^N e^{S(i, j)}.   (5)

The experimental results in Appendix B show that different layers influence personalization differently in a convolutional neural network (CNN). Inspired by ResNet, we use a stage strategy, where a stage contains a set of network layers, to measure the similarity between clients. Therefore, the similarity matrix S̃ is calculated as S̃_s based on a stage as in (6), where the subscript s indicates a stage:

S̃_s(i, j) = e^{S_s(i, j)} / Σ_{j=1}^N e^{S_s(i, j)}.   (6)

Finally, the stage-based aggregation strategy on the server for C_i is defined as:

w^{t+1}_si = w^t_si − α Σ_{j=1}^N (|D_j| / |D|) S̃_s(i, j) g^t_sj,   (7)

where α is the learning rate of the server, and g^t_sj and w^t_si are the stage-s gradient update of client j and the stage-s model parameters of client i at iteration round t, respectively. As the model changes with the iteration round t, so does the gradient of the model. Consequently, the similarity matrix S̃ obtained from the gradients will also change. Therefore, the similarity matrix is updated regularly during the training process.
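The stage strategy of eqs. (6)-(7) amounts to computing a separate similarity matrix for each named group of layers, so a client can weight its peers differently for different parts of the network. The stage names and toy dimensions below are assumptions for illustration:

```python
# Sketch of the per-stage SNGS matrices S~_s (eq. 6), one per layer group.
import numpy as np

def stage_sngs(grads_by_stage):
    """grads_by_stage: {stage_name: array of shape (N, dim_of_stage)}.
    Returns {stage_name: softmax-normalized cosine-similarity matrix S~_s}."""
    out = {}
    for stage, g in grads_by_stage.items():
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        S = (g @ g.T) / (norms @ norms.T)                             # eq. (3), per stage
        out[stage] = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True) # eq. (6)
    return out

rng = np.random.default_rng(1)
N = 4  # hypothetical number of clients
grads = {"stage1": rng.normal(size=(N, 8)), "stage2": rng.normal(size=(N, 16))}
S_tilde = stage_sngs(grads)
# Each S~_s is N x N with rows summing to 1; eq. (7) then updates the
# stage-s parameters of client i using row i of S~_s.
assert all(np.allclose(m.sum(axis=1), 1.0) for m in S_tilde.values())
```

Because two clients' gradients may agree on early feature-extraction layers while disagreeing on later task-specific ones, the per-stage matrices can differ substantially, which is what motivates computing them separately.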
During each update, the model parameters of all clients are aggregated with equal weight to get the w^t of (2), and the similarity matrix is calculated based on w^t. From the viewpoint of each aggregation, the procedure is still similar to the federated averaging algorithm. However, each client obtains a personalized model based on its similarity matrix in the training rounds between aggregations. In this process, each client is individually regarded as a subclass, and the algorithm is called the Subclass Personalized Federated Learning (SPFL) algorithm. The algorithm is shown in Alg. 1. | This paper proposes two methods for personalized federated learning, one synchronous and one asynchronous. The general approach taken in both cases is to adapt the weights when averaging information from different clients, so that clients with more similar gradients are given more weight in the update for each client. The two approaches are called SPFL (synchronous) and PLGA (asynchronous). The approaches are illustrated on small image classification tasks. | SP:edda5940fcfe72533d9925b2d73f5ed4c411e4bb
Continual Backprop: Stochastic Gradient Descent with Persistent Randomness | 1 INTRODUCTION. In the last decade, deep learning methods have been successful and become the state of the art in many machine learning problems and applications, including supervised classification, reinforcement learning (Silver et al., 2016), computer vision (Krizhevsky et al., 2012), and natural language processing (Brown et al., 2020). These methods learn the weights of an artificial neural network using Backprop, which is primarily applied to stationary problems. However, a primary challenge to leveraging the strengths of deep learning beyond current applications is that Backprop does not work well in non-stationary problems (McCloskey and Cohen, 1989; French, 1997; Sahoo et al., 2018), for example, a problem that consists of a sequence of stationary problems. Leveraging the strengths of deep learning methods in non-stationary problems is important, as many real-world applications of machine learning, like robotics, involve non-stationarities. Non-stationarities can arise due to changes in the environment (Thrun, 1998), high complexity (Sutton et al., 2007), partial observability (Khetarpal et al., 2020), or other actors (Foerster et al., 2018). Many works have tried to make deep learning work in non-stationary problems. Some have proposed methods that can remember previously learned information (Kirkpatrick et al., 2017; Aljundi et al., 2018; Riemer et al., 2019), while others have proposed methods that can adapt quickly to non-stationarities (Rusu et al., 2016; Al-Shedivat et al., 2018; Finn et al., 2019). A limitation of prior works on non-stationary problems is that they did not study cases where there are many non-stationarities. The number of non-stationarities refers to the number of times the data distribution changes. Most works only look at problems with fewer than ten non-stationarities (Rusu et al.
, 2016; Kirkpatrick et al., 2017; Al-Shedivat et al., 2018). Finn et al. (2019) studied problems with up to a hundred non-stationarities. Dealing with many non-stationarities is important for systems that continually interact with the real world, as non-stationarities frequently occur in the world. In this work, we study problems with a large number (thousands) of non-stationarities. We start with a special class of problems that we call semi-stationary problems. These are online supervised learning problems where the input distribution is non-stationary while the target function is stationary. The target function is the function being approximated, for example, the true regression function. These problems are a natural way to study problems with many non-stationarities; slowly moving through a large input space can cause thousands of non-stationarities. These problems are also important for systems in the real world, as inputs from the real world often depend on previous inputs, making the input distribution non-stationary. Finally, we study non-stationary RL problems. These are fully non-stationary problems, as the input distribution changes when the agent's behaviour changes, and the target function, the optimal policy, changes as the dynamics of the environment change. Semi-stationary problems reveal a new difficulty with Backprop (BP) in non-stationary problems; they help us clearly understand one way in which Backprop fails. We show that in non-stationary problems, Backprop performs well initially, but surprisingly, its performance degrades substantially over time as Backprop loses its ability to adapt. Backprop relies on proper random initialization for its effectiveness (Glorot et al., 2010; Sutskever et al., 2013; He et al., 2015). However, randomization in Backprop only happens at the beginning.
We hypothesize that Backprop's ability to adapt degrades because the benefits of the initial random distribution are not present at all times. To extend the benefits of the initial random distribution throughout learning, we propose the Continual Backprop (CBP) algorithm. CBP uses a generate-and-test method to continually inject random features alongside SGD. We show that unlike BP, CBP can continually adapt in non-stationary problems. Our generate-and-test algorithm consists of two parts: the generator, which proposes new features, and the tester, which finds and replaces low-utility features with the features proposed by the generator. Our generate-and-test algorithm is built on the one proposed by Mahmood and Sutton (2013), and our algorithm is compatible with modern deep learning. We overcome three significant limitations of their work. First, our algorithm is applicable to feed-forward networks of arbitrary shape and size, while theirs was only applicable to networks with a single hidden layer and one output. Second, our algorithm works with modern activations and optimizers like Adam, while theirs was limited to LTU activations, binary weights, and SGD. And third, we combine generate-and-test with SGD and show that the resulting algorithm, CBP, is significantly better than BP in complex non-stationary problems, while their work was an isolated study of generate-and-test. The first contribution of our work is to show that in non-stationary problems with many non-stationarities, BP loses its ability to adapt over time. In other words, we contribute to the understanding of why BP and its variants fail in non-stationary problems. Our second contribution is the CBP algorithm, which extends the benefits of the initialization in BP to all times. 2 NON-STATIONARY PROBLEMS. We study Backprop, Continual Backprop, and other learning algorithms in semi-stationary and Reinforcement Learning (RL) problems.
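The generate-and-test idea described in the introduction can be sketched as follows. The utility measure below (mean activation magnitude times outgoing weight magnitude) is a stand-in; the paper's exact contribution and adaptation utilities are not reproduced here, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def cbp_step(W_in, W_out, h_mean, replace_fraction=0.01):
    """One generate-and-test step for a single hidden layer: the tester
    picks the lowest-utility hidden units, and the generator replaces them
    with freshly initialized ones."""
    n_hidden = W_in.shape[1]
    utility = np.abs(h_mean) * np.abs(W_out).sum(axis=1)   # per-unit score
    k = max(1, int(replace_fraction * n_hidden))
    worst = np.argsort(utility)[:k]                        # tester
    # generator: re-randomize incoming weights; zero outgoing weights so
    # the replacement does not disturb the function currently computed
    W_in[:, worst] = rng.normal(0, 1 / np.sqrt(W_in.shape[0]),
                                (W_in.shape[0], k))
    W_out[worst, :] = 0.0
    return worst
```

Calling this periodically alongside SGD keeps a small fraction of hidden units freshly randomized at all times, which is the persistent-randomness idea.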
First, we consider a novel idealized semi-stationary problem. The strength of this problem is that we can study continual learning algorithms extensively, in a computationally inexpensive way, and without the confounders that arise in more complex problems. Then we study the permuted MNIST problem, an online image classification problem, and two non-stationary RL problems. We demonstrate on these problems that the findings from the idealized problem scale to large neural networks in more realistic settings. Performance measure in semi-stationary problems. In supervised learning, the task is to learn a function using examples of input-output pairs. This function is called the target function. In online supervised learning (Orabona, 2019), there is a stream of samples (x_t, y_t), and the predictions have to be made sequentially. The performance measure is the loss on the next sample. Thus, learning and evaluation happen simultaneously. This is fundamentally different from offline supervised learning, where there are two separate phases, one for learning and another for evaluation. Another common measure in non-stationary problems is the performance on previously seen data. However, measuring performance on previously seen data is only meaningful when studying the catastrophic forgetting aspect of BP. As we do not study the forgetting problem, we do not measure performance on old data. 2.1 BIT-FLIPPING PROBLEM. Our first problem is the Bit-Flipping problem. It differs from most supervised learning in two ways. First, it is conventionally assumed that samples are independently and identically distributed, whereas we focus on the case where the sample at the current time-step depends on the previous sample. Second, it is often assumed that the learner has sufficient capacity to closely approximate the target function, whereas we assume that the target function is more complex than the learner.
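The online performance measure described above (evaluate on the next sample, then learn from it) can be sketched as a simple protocol; the `predict`/`update` interface is an assumption of this sketch, not an API from the paper:

```python
def online_squared_loss(stream, predict, update):
    """Online supervised evaluation: for each incoming sample (x, y),
    first record the loss of the prediction for x, then learn from (x, y).
    Learning and evaluation happen on the same stream; there is no
    separate test phase."""
    losses = []
    for x, y in stream:
        losses.append((predict(x) - y) ** 2)  # evaluate before learning
        update(x, y)
    return losses
```

Any learner exposing these two operations fits the protocol; the performance curve is simply the sequence (or running average) of the recorded losses.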
The best approximation continually changes in this problem, as the input distribution is non-stationary and the target function has high complexity. Therefore, there is a need for continual adaptation. The target function in the Bit-Flipping problem is represented by a multi-layered target network. The target network has two layers of weights. We limit the learning networks to networks with the same depth. This allows us to control the relative complexity of the target function and the learner. If the target network has many more hidden units than the learner, then the target function is more complex than the learner. We set the target network to be wider. The input at time step t, x_t, is a binary vector of size m. Here, x_t ∈ {0, 1}^m, where each element x_{i,t} ∈ {0, 1} for i in 1, ..., m. After every T time-steps, one of the first f bits is randomly selected and its value is flipped; the values of these f bits do not change at other times. We refer to the first f bits as flipping bits. Each of the next m − f bits is randomly sampled from U{0, 1} at every time-step. The value of T allows us to control the correlation among consecutive values of the input. Note that when a flipping bit flips, the input distribution changes. We use m = 20, f = 15, and T = 1e4. In the target network, all weights are randomly sampled from U{−1, 1}. The activation function is a Linear Threshold Unit (LTU) (McCulloch, 1943). The output of an LTU with input x_t is 1 if Σ_{i=0}^{m+1} v_i x_{i,t} > θ_i, else 0. Here, v is the input weight vector. We set θ_i = (m + 1) ∗ β − S_i, where S_i is the number of input weights with the value −1 and β ∈ [0, 1]. This form of LTU is taken from Sutton and Whitehead (1994). We use β = 0.7. The output of the network at time-step t is a scalar y_t ∈ R. Figure 1 shows the input and the target network. The Bit-Flipping problem is a regression problem; we use the squared error to measure performance.
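The Bit-Flipping input stream described above can be sketched as a generator; the function name and structure are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def bit_flipping_stream(steps, m=20, f=15, T=10_000):
    """Bit-Flipping inputs: the first f "flipping" bits stay fixed except
    that every T steps one of them is flipped; the remaining m - f bits
    are resampled uniformly from {0, 1} at every step."""
    flipping = rng.integers(0, 2, size=f)
    for t in range(steps):
        if t > 0 and t % T == 0:
            i = rng.integers(f)
            flipping[i] = 1 - flipping[i]        # one slow bit changes
        random_bits = rng.integers(0, 2, size=m - f)
        yield np.concatenate([flipping, random_bits])
```

Every flip of a slow bit changes the input distribution, so a run of length `steps` contains roughly `steps / T` non-stationarities.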
2.2 PERMUTED MNIST. We use an online variant of the Permuted MNIST problem (Zenke et al., 2017); Zenke et al. used this problem with just 10 non-stationarities. This is an image classification problem with 10 classes. The images in Permuted MNIST are generated from the images in the MNIST dataset by randomly permuting the pixels of the images. We present the images sequentially and measure online classification accuracy. The MNIST dataset has 60,000 images. We present these 60k images in random order and, after all the images have been presented, we use a single permutation to change all the images. This cycle of presenting all 60,000 images and then changing the permutation of pixels can be continued indefinitely, which allows us to create a long-term continual learning problem. 2.3 NON-STATIONARY RL PROBLEMS. We use two non-stationary RL problems, both continual variants of a corresponding PyBullet (Coumans and Bai, 2016) problem. Non-stationary variants are needed, as all the problems in PyBullet are stationary. In our problems, either the environment or the distribution of inputs changes after a pre-specified time, making it necessary for the learner to adapt. Slippery Ant: In this problem, we change the friction between the agent and the ground of the standard PyBullet Ant problem. In the standard problem, the value of the friction is 1.5. We change the friction after every 10M time-steps by log-uniformly sampling it from [1e−4, 1e4]. Semi-stationary Reacher: This problem is a modification of PyBullet's Reacher problem. In Reacher, the goal is to get the tip of an arm to a target point in a square. At the beginning of each episode, a new target point is uniformly randomly chosen inside the square. Each episode lasts for 150 time-steps. In our problem, we change the distribution of inputs after every 30 episodes (4500 time-steps). We do so by dividing the square into 100 sub-squares, as shown in Figure 2.
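A minimal sketch of this sub-square target sampling: the square bounds `[lo, hi]` and the 10x10 grid layout are assumptions for illustration, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_targets(n_episodes, episodes_per_cell=30, grid=10, lo=-1.0, hi=1.0):
    """Semi-stationary Reacher targets: a sub-square is re-drawn every
    `episodes_per_cell` episodes, and each episode's target is sampled
    uniformly inside the current sub-square."""
    cell = (hi - lo) / grid
    targets = []
    for ep in range(n_episodes):
        if ep % episodes_per_cell == 0:
            cx, cy = rng.integers(grid, size=2)   # pick a new sub-square
        x = lo + (cx + rng.random()) * cell
        y = lo + (cy + rng.random()) * cell
        targets.append((x, y))
    return targets
```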
At the beginning of each episode, the target point is randomly sampled from within a sub-square. This sub-square is changed after every 4500 time-steps, when a new sub-square is randomly selected. | This paper demonstrates and proposes a solution for a new problem in continual learning which is the inverse of catastrophic forgetting. Compared to prior work, they study problems where the data distribution changes much more rapidly. They demonstrate that backpropagation-based optimization loses its ability to adapt when tracking these rapidly changing continual learning problems. They show a degradation in performance over time on permuted MNIST, non-stationary RL problems, and the bit-flipping problem. They propose a solution to this problem by reinitializing some portion of the weights of every layer. They propose a utility function to choose which weights to reinitialize based on a combination of adaptation-utility and contribution-utility. They demonstrate that utilizing this method, they can achieve better performance that does not degrade over time, and that it works in more cases than L2 weight decay. | SP:2416c3d070cc9b54e096cc57687749731f3b9193
Continual Backprop: Stochastic Gradient Descent with Persistent Randomness | 1 INTRODUCTION. In the last decade, deep learning methods have been successful and become the state of the art in many machine learning problems and applications, including supervised classification, reinforcement learning (Silver et al., 2016), computer vision (Krizhevsky et al., 2012), and natural language processing (Brown et al., 2020). These methods learn the weights of an artificial neural network using Backprop, which is primarily applied to stationary problems. However, a primary challenge to leveraging the strengths of deep learning beyond current applications is that Backprop does not work well in non-stationary problems (McCloskey and Cohen, 1989; French, 1997; Sahoo et al., 2018), for example, a problem that consists of a sequence of stationary problems. Leveraging the strengths of deep learning methods in non-stationary problems is important, as many real-world applications of machine learning, like robotics, involve non-stationarities. Non-stationarities can arise due to changes in the environment (Thrun, 1998), high complexity (Sutton et al., 2007), partial observability (Khetarpal et al., 2020), or other actors (Foerster et al., 2018). Many works have tried to make deep learning work in non-stationary problems. Some have proposed methods that can remember previously learned information (Kirkpatrick et al., 2017; Aljundi et al., 2018; Riemer et al., 2019), while others have proposed methods that can adapt quickly to non-stationarities (Rusu et al., 2016; Al-Shedivat et al., 2018; Finn et al., 2019). A limitation of prior works on non-stationary problems is that they did not study cases where there are many non-stationarities. The number of non-stationarities refers to the number of times the data distribution changes. Most works only look at problems with fewer than ten non-stationarities (Rusu et al.
, 2016; Kirkpatrick et al., 2017; Al-Shedivat et al., 2018). Finn et al. (2019) studied problems with up to a hundred non-stationarities. Dealing with many non-stationarities is important for systems that continually interact with the real world, as non-stationarities frequently occur in the world. In this work, we study problems with a large number (thousands) of non-stationarities. We start with a special class of problems that we call semi-stationary problems. These are online supervised learning problems where the input distribution is non-stationary while the target function is stationary. The target function is the function being approximated, for example, the true regression function. These problems are a natural way to study problems with many non-stationarities; slowly moving through a large input space can cause thousands of non-stationarities. These problems are also important for systems in the real world, as inputs from the real world often depend on previous inputs, making the input distribution non-stationary. Finally, we study non-stationary RL problems. These are fully non-stationary problems, as the input distribution changes when the agent's behaviour changes, and the target function, the optimal policy, changes as the dynamics of the environment change. Semi-stationary problems reveal a new difficulty with Backprop (BP) in non-stationary problems; they help us clearly understand one way in which Backprop fails. We show that in non-stationary problems, Backprop performs well initially, but surprisingly, its performance degrades substantially over time as Backprop loses its ability to adapt. Backprop relies on proper random initialization for its effectiveness (Glorot et al., 2010; Sutskever et al., 2013; He et al., 2015). However, randomization in Backprop only happens at the beginning.
We hypothesize that Backprop's ability to adapt degrades because the benefits of the initial random distribution are not present at all times. To extend the benefits of the initial random distribution throughout learning, we propose the Continual Backprop (CBP) algorithm. CBP uses a generate-and-test method to continually inject random features alongside SGD. We show that unlike BP, CBP can continually adapt in non-stationary problems. Our generate-and-test algorithm consists of two parts: the generator, which proposes new features, and the tester, which finds and replaces low-utility features with the features proposed by the generator. Our generate-and-test algorithm is built on the one proposed by Mahmood and Sutton (2013), and our algorithm is compatible with modern deep learning. We overcome three significant limitations of their work. First, our algorithm is applicable to feed-forward networks of arbitrary shape and size, while theirs was only applicable to networks with a single hidden layer and one output. Second, our algorithm works with modern activations and optimizers like Adam, while theirs was limited to LTU activations, binary weights, and SGD. And third, we combine generate-and-test with SGD and show that the resulting algorithm, CBP, is significantly better than BP in complex non-stationary problems, while their work was an isolated study of generate-and-test. The first contribution of our work is to show that in non-stationary problems with many non-stationarities, BP loses its ability to adapt over time. In other words, we contribute to the understanding of why BP and its variants fail in non-stationary problems. Our second contribution is the CBP algorithm, which extends the benefits of the initialization in BP to all times. 2 NON-STATIONARY PROBLEMS. We study Backprop, Continual Backprop, and other learning algorithms in semi-stationary and Reinforcement Learning (RL) problems.
First, we consider a novel idealized semi-stationary problem. The strength of this problem is that we can study continual learning algorithms extensively, in a computationally inexpensive way, and without the confounders that arise in more complex problems. Then we study the permuted MNIST problem, an online image classification problem, and two non-stationary RL problems. We demonstrate on these problems that the findings from the idealized problem scale to large neural networks in more realistic settings. Performance measure in semi-stationary problems. In supervised learning, the task is to learn a function using examples of input-output pairs. This function is called the target function. In online supervised learning (Orabona, 2019), there is a stream of samples (x_t, y_t), and the predictions have to be made sequentially. The performance measure is the loss on the next sample. Thus, learning and evaluation happen simultaneously. This is fundamentally different from offline supervised learning, where there are two separate phases, one for learning and another for evaluation. Another common measure in non-stationary problems is the performance on previously seen data. However, measuring performance on previously seen data is only meaningful when studying the catastrophic forgetting aspect of BP. As we do not study the forgetting problem, we do not measure performance on old data. 2.1 BIT-FLIPPING PROBLEM. Our first problem is the Bit-Flipping problem. It differs from most supervised learning in two ways. First, it is conventionally assumed that samples are independently and identically distributed, whereas we focus on the case where the sample at the current time-step depends on the previous sample. Second, it is often assumed that the learner has sufficient capacity to closely approximate the target function, whereas we assume that the target function is more complex than the learner.
The best approximation continually changes in this problem, as the input distribution is non-stationary and the target function has high complexity. Therefore, there is a need for continual adaptation. The target function in the Bit-Flipping problem is represented by a multi-layered target network. The target network has two layers of weights. We limit the learning networks to networks with the same depth. This allows us to control the relative complexity of the target function and the learner. If the target network has many more hidden units than the learner, then the target function is more complex than the learner. We set the target network to be wider. The input at time step t, x_t, is a binary vector of size m. Here, x_t ∈ {0, 1}^m, where each element x_{i,t} ∈ {0, 1} for i in 1, ..., m. After every T time-steps, one of the first f bits is randomly selected and its value is flipped; the values of these f bits do not change at other times. We refer to the first f bits as flipping bits. Each of the next m − f bits is randomly sampled from U{0, 1} at every time-step. The value of T allows us to control the correlation among consecutive values of the input. Note that when a flipping bit flips, the input distribution changes. We use m = 20, f = 15, and T = 1e4. In the target network, all weights are randomly sampled from U{−1, 1}. The activation function is a Linear Threshold Unit (LTU) (McCulloch, 1943). The output of an LTU with input x_t is 1 if Σ_{i=0}^{m+1} v_i x_{i,t} > θ_i, else 0. Here, v is the input weight vector. We set θ_i = (m + 1) ∗ β − S_i, where S_i is the number of input weights with the value −1 and β ∈ [0, 1]. This form of LTU is taken from Sutton and Whitehead (1994). We use β = 0.7. The output of the network at time-step t is a scalar y_t ∈ R. Figure 1 shows the input and the target network. The Bit-Flipping problem is a regression problem; we use the squared error to measure performance.
2.2 PERMUTED MNIST. We use an online variant of the Permuted MNIST problem (Zenke et al., 2017); Zenke et al. used this problem with just 10 non-stationarities. This is an image classification problem with 10 classes. The images in Permuted MNIST are generated from the images in the MNIST dataset by randomly permuting the pixels of the images. We present the images sequentially and measure online classification accuracy. The MNIST dataset has 60,000 images. We present these 60k images in random order and, after all the images have been presented, we use a single permutation to change all the images. This cycle of presenting all 60,000 images and then changing the permutation of pixels can be continued indefinitely, which allows us to create a long-term continual learning problem. 2.3 NON-STATIONARY RL PROBLEMS. We use two non-stationary RL problems, both continual variants of a corresponding PyBullet (Coumans and Bai, 2016) problem. Non-stationary variants are needed, as all the problems in PyBullet are stationary. In our problems, either the environment or the distribution of inputs changes after a pre-specified time, making it necessary for the learner to adapt. Slippery Ant: In this problem, we change the friction between the agent and the ground of the standard PyBullet Ant problem. In the standard problem, the value of the friction is 1.5. We change the friction after every 10M time-steps by log-uniformly sampling it from [1e−4, 1e4]. Semi-stationary Reacher: This problem is a modification of PyBullet's Reacher problem. In Reacher, the goal is to get the tip of an arm to a target point in a square. At the beginning of each episode, a new target point is uniformly randomly chosen inside the square. Each episode lasts for 150 time-steps. In our problem, we change the distribution of inputs after every 30 episodes (4500 time-steps). We do so by dividing the square into 100 sub-squares, as shown in Figure 2.
At the beginning of each episode, the target point is randomly sampled from within a sub-square. This sub-square is changed after every 4500 time-steps, when a new sub-square is randomly selected. | This paper investigates the problem of fast adaptation in a non-stationary online continual learning (CL) setting. It argues that maintaining weight randomization is important for fast adaptation in CL. However, current CL methods only perform weight randomization at the beginning of the algorithm; the weights lose randomness over time, leading to degraded model performance. The paper presents a continual weight reinitialization algorithm to overcome the issue. In particular, it proposes to evaluate the utility of each hidden unit, including its importance to the current task and its adaptation capability. It then selects a set of hidden units with low scores and resets their incoming and outgoing weights. The authors conduct experiments to evaluate the performance of the proposed method. | SP:2416c3d070cc9b54e096cc57687749731f3b9193
Omni-Dimensional Dynamic Convolution | 1 INTRODUCTION. In the past decade, we have witnessed the tremendous success of deep Convolutional Neural Networks (CNNs) in many computer vision applications (Krizhevsky et al., 2012; Girshick et al., 2014; Long et al., 2015; He et al., 2017). The most common way of constructing a deep CNN is to stack a number of convolutional layers, as well as other basic layers, organized with a selected feature-connection topology. Along with great advances in CNN architecture design by manual engineering (Krizhevsky et al., 2012; He et al., 2016; Howard et al., 2017) and automatic searching (Zoph & Le, 2017; Pham et al., 2018; Howard et al., 2019), many prevailing classification backbones have been presented. Recent works (Wang et al., 2017; Hu et al., 2018b; Park et al., 2018; Woo et al., 2018; Yang et al., 2019; Chen et al., 2020) show that incorporating attention mechanisms into convolutional blocks can further push the performance boundaries of modern CNNs, and thus this direction has attracted great research interest in the deep learning community. The well-known SENet (Hu et al., 2018b) uses an attention module consisting of squeeze and excitation operations to adaptively recalibrate the output features of convolutional layers, strengthening the representation power of a CNN by encouraging informative feature channels while suppressing less important ones. Numerous attentive feature recalibration variants (Woo et al., 2018; Park et al., 2018; Hu et al., 2018a) have been proposed since then. In Lin et al. (2020) and Quader et al. (2020), two attention extensions that modulate the convolutional weights instead of the output features are also presented. Unlike the aforementioned methods, in which the number of convolutional parameters of a target network is fixed, dynamic convolution, which applies an attention mechanism over n additive convolutional kernels to increase the size and the capacity of a network while maintaining efficient inference, has recently become popular in optimizing efficient CNNs. (Footnote 1: Here we follow the definitions in Yang et al. (2019) and Chen et al. (2020), where a convolutional kernel refers to the filter set of a convolutional layer.) This line of research is pioneered by Conditionally Parameterized Convolutions (CondConv) (Yang et al., 2019) and Dynamic Convolution (DyConv) (Chen et al., 2020), whose basic ideas are the same. Generally, unlike a regular convolutional layer, which applies the same (i.e., static) convolutional kernel to all input samples, a dynamic convolutional layer learns a linear combination of n convolutional kernels weighted by their attentions conditioned on the input features. Despite significant accuracy improvements for light-weight CNNs, dynamic convolution designed in this way has two limitations. Firstly, the main limitation lies in the attention mechanism design. The dynamic property of CondConv and DyConv comes from computing convolutional kernels as a function of the input features. However, we observe that they endow convolutional kernels with the dynamic property along only one dimension of the kernel space (the convolutional kernel number), while the other three dimensions (the spatial size, the input channel number, and the output channel number of each convolutional kernel) are overlooked. As a result, the weights of each convolutional kernel share the same attention scalar for a given input, limiting their ability to capture rich contextual cues. That is, the potential of the dynamic convolutional property has not been fully explored by existing works, and thus they leave much room for improving model performance. Secondly, at a convolutional layer, replacing regular convolution by dynamic convolution increases the number of convolutional parameters by n times.
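The CondConv/DyConv formulation reviewed above can be sketched as follows, with 1x1 convolutions written as plain matrices for clarity; the `attn_params` argument stands in for the small attention branch and is an assumption of this sketch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_linear(x, kernels, attn_params):
    """Dynamic convolution idea: an attention branch maps the input to n
    scalars, and the effective weight is the attention-weighted sum of
    the n candidate kernels, so the layer is input-dependent."""
    pi = softmax(attn_params @ x)            # n input-conditioned weights
    W = np.tensordot(pi, kernels, axes=1)    # combined (d_out, d_in) kernel
    return W @ x
```

Note that every entry of a given candidate kernel is scaled by the same scalar `pi[i]`, which is exactly the single-dimension limitation discussed above.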
When applying dynamic convolution to many convolutional layers, it heavily increases the model size. To handle this limitation, Li et al. (2021) propose a dynamic convolution decomposition method which yields more compact yet competitive models. Instead, in this paper we address both of the above limitations from a new perspective: inserting a more diverse and effective attention mechanism into the convolutional kernel space. Our core contribution is an improved dynamic convolution design called Omni-dimensional Dynamic Convolution (ODConv). Unlike the existing works discussed above, at any convolutional layer ODConv leverages a multi-dimensional attention mechanism to learn four types of attentions for convolutional kernels along all four dimensions of the kernel space in a parallel manner. We show that these four types of attentions learnt by ODConv are complementary to each other, and progressively applying them to the corresponding convolutional kernels can substantially strengthen the feature extraction ability of the basic convolution operations of a CNN. Consequently, ODConv with even a single kernel can compete with or outperform existing dynamic convolution counterparts with multiple kernels, while introducing substantially fewer extra parameters to the final models. As a drop-in design, ODConv can be used to replace regular convolutions in many prevailing CNN architectures. It strikes a better tradeoff between model accuracy and efficiency compared to existing dynamic convolution designs, as validated by extensive experiments on the large-scale ImageNet classification dataset (Russakovsky et al., 2015) with various CNN backbones. ODConv also shows better recognition performance under similar model complexities when compared to other state-of-the-art attention methods for output feature recalibration (Woo et al., 2018; Hu et al., 2018b; Wang et al., 2020; Lin et al.
, 2020) and for convolutional weight modification (Ma et al., 2020; Lin et al., 2020; Quader et al., 2020). Furthermore, the performance improvements by ODConv for the pretrained classification models transfer well to downstream tasks such as object detection on the MS-COCO dataset (Lin et al., 2014), validating its promising generalization ability. 2 RELATED WORK. Deep CNN Architectures. AlexNet (Krizhevsky et al., 2012) ignited the surge of deep CNNs by winning the ImageNet classification challenge in 2012. Since then, many well-known CNN architectures, such as VGGNet (Simonyan & Zisserman, 2015), InceptionNet (Szegedy et al., 2015), ResNet (He et al., 2016), DenseNet (Huang et al., 2017), and ResNeXt (Xie et al., 2017), have been proposed, which are designed to be much deeper and have more sophisticated connection topologies than AlexNet. To ease the deployment of inference models on resource-limited platforms, MobileNets (Howard et al., 2017; Sandler et al., 2018) and ShuffleNets (Zhang et al., 2018b; Ma et al., 2018) are presented, which particularly trade off model accuracy against efficiency. All the aforementioned CNNs are manually designed. Recently, researchers have also made great efforts (Zoph & Le, 2017; Pham et al., 2018; Howard et al., 2019) to automate the network design process. Our ODConv could potentially be used to boost the performance of most of these prevailing CNN architectures constructed with regular convolutional layers. Attentive Feature Recalibration. Designing attentive feature recalibration modules to improve the performance of a CNN has been widely studied in recent years. Wang et al. (2017) propose a specialized attention module consisting of a trunk branch and a mask branch, and insert it into the intermediate stages of deep residual networks. SENet (Hu et al.
, 2018b ) uses a seminal channel attention module termed Squeeze-and-Excitation ( SE ) to exploit the interdependencies between the channels of convolutional features . Many subsequent works improve SE from different aspects , following its two-stage design ( i.e. , feature aggregation and feature recalibration ) . BAM ( Park et al. , 2018 ) and CBAM ( Woo et al. , 2018 ) combine the channel attention module with the spatial attention module . Misra et al . ( 2021 ) presents an attention module having three branches conditioned on the features rotated along three different dimensions . GE ( Hu et al. , 2018a ) introduces a gather operator to extract better global context from a large spatial extent . To enhance the feature aggregation capability , SRM ( Lee et al. , 2019 ) replaces the global average by the channel-wise mean and standard deviation . SKNets ( Li et al. , 2019 ) add an attention design over two branches with differently sized convolutions to fuse multi-scale feature outputs . ECA ( Wang et al. , 2020 ) provides a more efficient channel attention design using cheaper 1D convolutions to replace the first fully connected layer of SE . Instead of recalibrating the output convolutional features by attention modules , dynamic convolution methods apply attention mechanisms to a linear combination of n convolutional kernels . Dynamic Weight Networks . Making the weights of a neural network sample-adaptive via dynamic mechanisms has shown great potential for boosting model capacity and generalization . Hypernetworks ( Ha et al. , 2017 ) use a small network called the hypernetwork to generate the weights for a larger recurrent network called the main network . MetaNet ( Munkhdalai & Yu , 2017 ) adopts a meta learning model to parameterize the task-adaptive network for rapid generalization across a sequence of tasks . Jaderberg et al .
( 2015 ) proposes a Spatial Transformer module conditioned on the learnt features to predict the parametric transformation , and applies it to align the distorted input image . Dynamic Filter Network ( Jia et al. , 2016 ) uses a filter generation network to produce filters conditioned on an input , and processes another input with the generated filters . DynamoNet ( Diba et al. , 2019 ) uses dynamically generated motion filters to handle the action recognition problem . Kernel Prediction Networks ( Bako et al. , 2017 ; Mildenhall et al. , 2018 ) leverage a CNN architecture to predict spatially varying kernels used for video denoising . WeightNet ( Ma et al. , 2020 ) appends a grouped fully connected layer to the attention feature vector of an SE block , generating the weights of a CNN used for image recognition . Lin et al . ( 2020 ) modifies the weights of convolutional layers with a gated module under the guidance of global context , while Quader et al . ( 2020 ) directly uses either an SE block or a simple activation function conditioned on the magnitudes of convolutional weights to modify the weights themselves . Our ODConv aims to address the limitations of the recently proposed dynamic convolution ( Yang et al. , 2019 ; Chen et al. , 2020 ) and differs from these methods in both focus and formulation ; see the Introduction and Method sections for details . 3 METHOD . In this section , we first review dynamic convolution via a general formulation . Then , we describe the formulation of our ODConv , clarify its properties and detail its implementation . 3.1 REVIEW OF DYNAMIC CONVOLUTION . Basic concept . A regular convolutional layer has a single static convolutional kernel which is applied to all input samples . A dynamic convolutional layer , in contrast , uses a linear combination of n convolutional kernels weighted dynamically by an attention mechanism , making convolution operations input-dependent .
Mathematically , dynamic convolution operations can be defined as y = ( α_{w1}W_1 + ... + α_{wn}W_n ) ∗ x , ( 1 ) where x ∈ R^{h×w×c_in} and y ∈ R^{h×w×c_out} denote the input features and the output features ( having c_in/c_out channels with the height h and the width w ) , respectively ; W_i denotes the i-th convolutional kernel consisting of c_out filters W_i^m ∈ R^{k×k×c_in} , m = 1 , ... , c_out ; α_{wi} ∈ R is the attention scalar for weighting W_i , which is computed by an attention function π_{wi} ( x ) conditioned on the input features ; ∗ denotes the convolution operation . For conciseness , here we omit the bias term . CondConv vs. DyConv . Although the concept of dynamic convolution defined in Eq . 1 is proposed separately in CondConv ( Yang et al. , 2019 ) and DyConv ( Chen et al. , 2020 ) , their implementations are different , mainly in the structure of π_{wi} ( x ) to compute α_{wi} , the model training strategy , and the layer locations to apply dynamic convolutions . Specifically , both methods choose the modified SE structure for π_{wi} ( x ) , and CondConv uses a Sigmoid function while DyConv uses a Softmax function as the activation function to compute α_{wi} . DyConv adopts a temperature annealing strategy in the training process to suppress the near one-hot output of the Softmax function . For all their tested CNN architectures , CondConv replaces the convolutional layers in the final several blocks ( e.g. , 6 for the MobileNetV2 backbones and 3 for the ResNet backbones ) and the last fully connected layer , while DyConv replaces all convolutional layers except the first layer . These implementation differences lead to different results in model accuracy , size and efficiency for CondConv and DyConv . Limitation Discussions . According to Eq . 1 , dynamic convolution has two basic components : the convolutional kernels { W_1 , ... , W_n } , and the attention function π_{wi} ( x ) to compute their attention scalars { α_{w1} , ... , α_{wn} } .
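To make Eq. 1 concrete, the NumPy sketch below implements a dynamic convolutional layer and checks the linearity that makes the extra Multiply-Adds marginal: combining the n kernels first and convolving once gives the same output as convolving with each kernel and summing the weighted results. The pooling-plus-softmax head here is only a stand-in for π_{wi}(x); all shapes and weights are illustrative assumptions, not the exact CondConv/DyConv implementations.

```python
import numpy as np

def conv2d(x, w):
    """Valid cross-correlation of x (H x W x Cin) with a kernel
    w (k x k x Cin x Cout); returns (H-k+1, W-k+1, Cout)."""
    H, W, Cin = x.shape
    k, _, _, Cout = w.shape
    out = np.zeros((H - k + 1, W - k + 1, Cout))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            patch = x[i:i + k, j:j + k, :]          # (k, k, Cin)
            out[i, j] = np.tensordot(patch, w, axes=3)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
n, k, Cin, Cout = 4, 3, 2, 5
x = rng.normal(size=(8, 8, Cin))
kernels = rng.normal(size=(n, k, k, Cin, Cout))

# Attention scalars conditioned on the input: a softmax over a linear
# map of globally pooled features (a stand-in for pi_wi(x)).
proj = rng.normal(size=(Cin, n))
alpha = softmax(x.mean(axis=(0, 1)) @ proj)          # (n,), sums to 1

# Eq. 1: combine the n kernels first, then convolve once ...
y_combined = conv2d(x, np.tensordot(alpha, kernels, axes=1))
# ... which equals convolving with each kernel and summing (additivity).
y_additive = sum(a * conv2d(x, Wk) for a, Wk in zip(alpha, kernels))
assert np.allclose(y_combined, y_additive)
```

Because the combination happens in weight space, the layer performs a single convolution at inference time regardless of n, which is why the MAdds overhead is limited to the small attention head.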
Given n convolutional kernels , the corresponding kernel space has four dimensions : the spatial kernel size k×k , the input channel number c_in and the output channel number c_out of each convolutional kernel , and the convolutional kernel number n. However , for CondConv and DyConv , we can observe that π_{wi} ( x ) allocates a single attention scalar α_{wi} to the convolutional kernel W_i , meaning that all its c_out filters W_i^m ∈ R^{k×k×c_in} , m = 1 , ... , c_out , have the same attention value for the input x . In other words , the spatial dimension , the input channel dimension and the output channel dimension of the convolutional kernel W_i are ignored by CondConv and DyConv . This leads to a coarse exploitation of the kernel space when they design their attention mechanisms for endowing n convolutional kernels with the dynamic property . This may also be one of the reasons why CondConv and DyConv show much smaller performance gains on relatively large CNNs than on efficient ones . Besides , compared to a regular convolutional layer , a dynamic convolutional layer increases the number of convolutional parameters by n times ( although the increase in Multiply-Adds ( MAdds ) is marginal due to the additive property of the n convolutional kernels ) . Typically , CondConv sets n = 8 and DyConv sets n = 4 . Applying dynamic convolution to many convolutional layers therefore heavily increases the model size . However , we empirically find that removing the attention mechanism from CondConv|DyConv ( i.e. , setting α_{wi} = 1 ) reduces the accuracy boosts for prevailing CNN backbones on the ImageNet dataset to nearly zero . For instance , on ResNet18 , the top-1 gain averaged over 3 runs decreases from 1.78 % |2.51 % to 0.08 % |0.14 % when the attention mechanism is removed from CondConv|DyConv , respectively .
These observations indicate that the attention mechanism design plays the key role in dynamic convolution , and that a more effective design may strike a good balance between model accuracy and size . | This work focuses on designing a new dynamic network for large-scale image recognition problems. Specifically, the authors discuss the weaknesses of existing dynamic convolution operations and, based on this analysis, propose a novel framework named ODConv. Extensive experiments confirm the superiority of the proposed framework. | SP:8467ec8e80c64d6648e1053b1f7cb593de940132
Omni-Dimensional Dynamic Convolution | 1 INTRODUCTION . In the past decade , we have witnessed the tremendous success of deep Convolutional Neural Networks ( CNNs ) in many computer vision applications ( Krizhevsky et al. , 2012 ; Girshick et al. , 2014 ; Long et al. , 2015 ; He et al. , 2017 ) . The most common way of constructing a deep CNN is to stack a number of convolutional layers as well as other basic layers organized with the selected feature connection topology . Along with great advances in CNN architecture design by manual engineering ( Krizhevsky et al. , 2012 ; He et al. , 2016 ; Howard et al. , 2017 ) and automatic searching ( Zoph & Le , 2017 ; Pham et al. , 2018 ; Howard et al. , 2019 ) , lots of prevailing classification backbones have been presented . Recent works ( Wang et al. , 2017 ; Hu et al. , 2018b ; Park et al. , 2018 ; Woo et al. , 2018 ; Yang et al. , 2019 ; Chen et al. , 2020 ) show that incorporating attention mechanisms into convolutional blocks can further push the performance boundaries of modern CNNs , and thus it has attracted great research interest in the deep learning community . The well-known SENet ( Hu et al. , 2018b ) uses an attention module consisting of squeeze and excitation operations to adaptively recalibrate the output features of convolutional layers , strengthening the representation power of a CNN via encouraging informative feature channels while suppressing less important ones . Numerous attentive feature recalibration variants ( Woo et al. , 2018 ; Park et al. , 2018 ; Hu et al. , 2018a ) have been proposed since then . In Lin et al . ( 2020 ) and Quader et al . ( 2020 ) , two attention extensions to modulate the convolutional weights instead of the output features are also presented . Unlike the aforementioned methods in which the number of convolutional parameters of a target network is fixed , dynamic convolution , which applies the attention mechanism over n additive convolutional kernels to increase the size and the capacity of a network while maintaining efficient inference , has recently become popular in optimizing efficient CNNs . ( Here we follow the definitions in Yang et al . ( 2019 ) and Chen et al . ( 2020 ) , where a convolutional kernel refers to the filter set of a convolutional layer . ) This line of research is pioneered by Conditionally Parameterized Convolutions ( CondConv ) ( Yang et al. , 2019 ) and Dynamic Convolution ( DyConv ) ( Chen et al. , 2020 ) , whose basic ideas are the same . Generally , unlike a regular convolutional layer which applies the same ( i.e. , static ) convolutional kernel to all input samples , a dynamic convolutional layer learns a linear combination of n convolutional kernels weighted with their attentions conditioned on the input features . Despite significant accuracy improvements for light-weight CNNs , dynamic convolution designed in this way has two limitations . Firstly , the main limitation lies in the attention mechanism design . The dynamic property of CondConv and DyConv comes from computing convolutional kernels as a function of the input features . However , we observe that they endow the dynamic property to convolutional kernels through only one dimension of the kernel space ( the convolutional kernel number ) , while the other three dimensions ( the spatial size , the input channel number and the output channel number of each convolutional kernel ) are overlooked . As a result , the weights of each convolutional kernel share the same attention scalar for a given input , limiting their ability to capture rich contextual cues . That is , the potential of the dynamic convolution property has not been fully explored by existing works , which leaves much room for improving model performance . Secondly , at a convolutional layer , replacing regular convolution by dynamic convolution increases the number of convolutional parameters by n times .
Applying dynamic convolution to many convolutional layers heavily increases the model size . To handle this limitation , Li et al . ( 2021 ) proposes a dynamic convolution decomposition method which yields more compact yet competitive models . Instead , in this paper we address both of the above limitations from a new perspective : inserting a more diverse and effective attention mechanism into the convolutional kernel space . Our core contribution is an improved dynamic convolution design called Omni-dimensional Dynamic Convolution ( ODConv ) . Unlike existing works discussed above , at any convolutional layer , ODConv leverages a multi-dimensional attention mechanism to learn four types of attentions for convolutional kernels along all four dimensions of the kernel space in a parallel manner . We show that these four types of attentions learnt by our ODConv are complementary to each other , and progressively applying them to the corresponding convolutional kernels can substantially strengthen the feature extraction ability of basic convolution operations of a CNN . Consequently , ODConv with even one single kernel can compete with or outperform existing dynamic convolution counterparts with multiple kernels , introducing substantially fewer extra parameters to the final models . As a drop-in design , ODConv can be used to replace regular convolutions in many prevailing CNN architectures . It strikes a better tradeoff between model accuracy and efficiency compared to existing dynamic convolution designs , as validated by extensive experiments on the large-scale ImageNet classification dataset ( Russakovsky et al. , 2015 ) with various CNN backbones . ODConv also shows better recognition performance under similar model complexities when compared to other state-of-the-art attention methods for output feature recalibration ( Woo et al. , 2018 ; Hu et al. , 2018b ; Wang et al. , 2020 ; Lin et al.
, 2020 ) and for convolutional weights modification ( Ma et al. , 2020 ; Lin et al. , 2020 ; Quader et al. , 2020 ) . Furthermore , the performance improvements by ODConv for the pretrained classification models can transfer well to downstream tasks such as object detection on the MS-COCO dataset ( Lin et al. , 2014 ) , validating its promising generalization ability . 2 RELATED WORK . Deep CNN Architectures . AlexNet ( Krizhevsky et al. , 2012 ) ignited the surge of deep CNNs by winning the ImageNet classification challenge 2012 . Since then , lots of well-known CNN architectures such as VGGNet ( Simonyan & Zisserman , 2015 ) , InceptionNet ( Szegedy et al. , 2015 ) , ResNet ( He et al. , 2016 ) , DenseNet ( Huang et al. , 2017 ) and ResNeXt ( Xie et al. , 2017 ) have been proposed , which are designed to be much deeper and have more sophisticated connection topologies compared with AlexNet . To ease the deployment of inference models on resource-limited platforms , MobileNets ( Howard et al. , 2017 ; Sandler et al. , 2018 ) and ShuffleNet ( Zhang et al. , 2018b ; Ma et al. , 2018 ) are presented , which particularly make a tradeoff between model accuracy and efficiency . All the aforementioned CNNs are manually designed . Recently , researchers have also made great efforts ( Zoph & Le , 2017 ; Pham et al. , 2018 ; Howard et al. , 2019 ) to automate the network design process . Our ODConv could be potentially used to boost the performance of most of these prevailing CNN architectures constructed with regular convolutional layers . Attentive Feature Recalibration . Designing attentive feature recalibration modules to improve the performance of a CNN has been widely studied in recent years . Wang et al . ( 2017 ) proposes a specialized attention module consisting of a trunk branch and a mask branch , and inserts it into the intermediate stages of deep residual networks . SENet ( Hu et al. 
, 2018b ) uses a seminal channel attention module termed Squeeze-and-Excitation ( SE ) to exploit the interdependencies between the channels of convolutional features . Many subsequent works improve SE from different aspects , following its two-stage design ( i.e. , feature aggregation and feature recalibration ) . BAM ( Park et al. , 2018 ) and CBAM ( Woo et al. , 2018 ) combine the channel attention module with the spatial attention module . Misra et al . ( 2021 ) presents an attention module having three branches conditioned on the features rotated along three different dimensions . GE ( Hu et al. , 2018a ) introduces a gather operator to extract better global context from a large spatial extent . To enhance the feature aggregation capability , SRM ( Lee et al. , 2019 ) replaces the global average by the channel-wise mean and standard deviation . SKNets ( Li et al. , 2019 ) add an attention design over two branches with differently sized convolutions to fuse multi-scale feature outputs . ECA ( Wang et al. , 2020 ) provides a more efficient channel attention design using cheaper 1D convolutions to replace the first fully connected layer of SE . Instead of recalibrating the output convolutional features by attention modules , dynamic convolution methods apply attention mechanisms to a linear combination of n convolutional kernels . Dynamic Weight Networks . Making the weights of a neural network sample-adaptive via dynamic mechanisms has shown great potential for boosting model capacity and generalization . Hypernetworks ( Ha et al. , 2017 ) use a small network called the hypernetwork to generate the weights for a larger recurrent network called the main network . MetaNet ( Munkhdalai & Yu , 2017 ) adopts a meta learning model to parameterize the task-adaptive network for rapid generalization across a sequence of tasks . Jaderberg et al .
( 2015 ) proposes a Spatial Transformer module conditioned on the learnt features to predict the parametric transformation , and applies it to align the distorted input image . Dynamic Filter Network ( Jia et al. , 2016 ) uses a filter generation network to produce filters conditioned on an input , and processes another input with the generated filters . DynamoNet ( Diba et al. , 2019 ) uses dynamically generated motion filters to handle the action recognition problem . Kernel Prediction Networks ( Bako et al. , 2017 ; Mildenhall et al. , 2018 ) leverage a CNN architecture to predict spatially varying kernels used for video denoising . WeightNet ( Ma et al. , 2020 ) appends a grouped fully connected layer to the attention feature vector of an SE block , generating the weights of a CNN used for image recognition . Lin et al . ( 2020 ) modifies the weights of convolutional layers with a gated module under the guidance of global context , while Quader et al . ( 2020 ) directly uses either an SE block or a simple activation function conditioned on the magnitudes of convolutional weights to modify the weights themselves . Our ODConv aims to address the limitations of the recently proposed dynamic convolution ( Yang et al. , 2019 ; Chen et al. , 2020 ) and differs from these methods in both focus and formulation ; see the Introduction and Method sections for details . 3 METHOD . In this section , we first review dynamic convolution via a general formulation . Then , we describe the formulation of our ODConv , clarify its properties and detail its implementation . 3.1 REVIEW OF DYNAMIC CONVOLUTION . Basic concept . A regular convolutional layer has a single static convolutional kernel which is applied to all input samples . A dynamic convolutional layer , in contrast , uses a linear combination of n convolutional kernels weighted dynamically by an attention mechanism , making convolution operations input-dependent .
Mathematically , dynamic convolution operations can be defined as y = ( α_{w1}W_1 + ... + α_{wn}W_n ) ∗ x , ( 1 ) where x ∈ R^{h×w×c_in} and y ∈ R^{h×w×c_out} denote the input features and the output features ( having c_in/c_out channels with the height h and the width w ) , respectively ; W_i denotes the i-th convolutional kernel consisting of c_out filters W_i^m ∈ R^{k×k×c_in} , m = 1 , ... , c_out ; α_{wi} ∈ R is the attention scalar for weighting W_i , which is computed by an attention function π_{wi} ( x ) conditioned on the input features ; ∗ denotes the convolution operation . For conciseness , here we omit the bias term . CondConv vs. DyConv . Although the concept of dynamic convolution defined in Eq . 1 is proposed separately in CondConv ( Yang et al. , 2019 ) and DyConv ( Chen et al. , 2020 ) , their implementations are different , mainly in the structure of π_{wi} ( x ) to compute α_{wi} , the model training strategy , and the layer locations to apply dynamic convolutions . Specifically , both methods choose the modified SE structure for π_{wi} ( x ) , and CondConv uses a Sigmoid function while DyConv uses a Softmax function as the activation function to compute α_{wi} . DyConv adopts a temperature annealing strategy in the training process to suppress the near one-hot output of the Softmax function . For all their tested CNN architectures , CondConv replaces the convolutional layers in the final several blocks ( e.g. , 6 for the MobileNetV2 backbones and 3 for the ResNet backbones ) and the last fully connected layer , while DyConv replaces all convolutional layers except the first layer . These implementation differences lead to different results in model accuracy , size and efficiency for CondConv and DyConv . Limitation Discussions . According to Eq . 1 , dynamic convolution has two basic components : the convolutional kernels { W_1 , ... , W_n } , and the attention function π_{wi} ( x ) to compute their attention scalars { α_{w1} , ... , α_{wn} } .
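The Sigmoid-versus-Softmax distinction and DyConv's temperature annealing can be sketched in a few lines. The logits and temperature values below are illustrative (this excerpt does not state DyConv's actual schedule); they show why annealing matters: a large temperature keeps the attention near-uniform so all n kernels receive gradient signal early in training, while a low temperature yields the peaky, near one-hot output the annealing is meant to suppress.

```python
import numpy as np

def softmax_with_temperature(logits, tau):
    """Softmax over kernel logits with temperature tau."""
    z = logits / tau
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])   # one logit per kernel (n = 4)

# Large temperature (illustrative value): near-uniform attention.
early = softmax_with_temperature(logits, tau=30.0)
# Annealed towards tau = 1: the attention becomes much peakier.
late = softmax_with_temperature(logits, tau=1.0)
assert early.max() < late.max()            # annealing sharpens the attention

# CondConv instead gates each kernel independently with a Sigmoid,
# so the scalars need not sum to one.
sigmoid = 1.0 / (1.0 + np.exp(-logits))
assert not np.isclose(sigmoid.sum(), 1.0)
```

The Softmax couples the n scalars (a convex combination of kernels), whereas the Sigmoid gates them independently; this is one source of the accuracy/efficiency differences reported for the two methods.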
Given n convolutional kernels , the corresponding kernel space has four dimensions : the spatial kernel size k×k , the input channel number c_in and the output channel number c_out of each convolutional kernel , and the convolutional kernel number n. However , for CondConv and DyConv , we can observe that π_{wi} ( x ) allocates a single attention scalar α_{wi} to the convolutional kernel W_i , meaning that all its c_out filters W_i^m ∈ R^{k×k×c_in} , m = 1 , ... , c_out , have the same attention value for the input x . In other words , the spatial dimension , the input channel dimension and the output channel dimension of the convolutional kernel W_i are ignored by CondConv and DyConv . This leads to a coarse exploitation of the kernel space when they design their attention mechanisms for endowing n convolutional kernels with the dynamic property . This may also be one of the reasons why CondConv and DyConv show much smaller performance gains on relatively large CNNs than on efficient ones . Besides , compared to a regular convolutional layer , a dynamic convolutional layer increases the number of convolutional parameters by n times ( although the increase in Multiply-Adds ( MAdds ) is marginal due to the additive property of the n convolutional kernels ) . Typically , CondConv sets n = 8 and DyConv sets n = 4 . Applying dynamic convolution to many convolutional layers therefore heavily increases the model size . However , we empirically find that removing the attention mechanism from CondConv|DyConv ( i.e. , setting α_{wi} = 1 ) reduces the accuracy boosts for prevailing CNN backbones on the ImageNet dataset to nearly zero . For instance , on ResNet18 , the top-1 gain averaged over 3 runs decreases from 1.78 % |2.51 % to 0.08 % |0.14 % when the attention mechanism is removed from CondConv|DyConv , respectively .
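As a rough illustration of attending to all four dimensions of the kernel space rather than only the kernel number, the sketch below modulates a kernel tensor W ∈ R^{n×c_out×c_in×k×k} with four attention arrays via broadcasting. This excerpt does not specify how ODConv computes these attentions or in what exact form it applies them, so the sigmoid values, the application order, and the final summation here are assumptions for shape bookkeeping only, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n, c_out, c_in, k = 4, 8, 6, 3
W = rng.normal(size=(n, c_out, c_in, k, k))   # full kernel space

# Four attention types, one per dimension of the kernel space.  In a
# real layer they would be computed from the input by an attention
# head; random values stand in for that head here.
alpha_s = sigmoid(rng.normal(size=(k, k)))    # spatial:        k x k
alpha_c = sigmoid(rng.normal(size=(c_in,)))   # input channel:  c_in
alpha_f = sigmoid(rng.normal(size=(c_out,)))  # output filter:  c_out
alpha_w = sigmoid(rng.normal(size=(n,)))      # kernel number:  n

# Progressively apply the four attentions along their dimensions.
W_mod = W * alpha_s                           # broadcasts over (k, k)
W_mod = W_mod * alpha_c[None, None, :, None, None]
W_mod = W_mod * alpha_f[None, :, None, None, None]
W_mod = W_mod * alpha_w[:, None, None, None, None]

# Sum over the kernel-number dimension to get one effective kernel.
aggregated = W_mod.sum(axis=0)                # (c_out, c_in, k, k)
assert aggregated.shape == (c_out, c_in, k, k)
```

The point of the shape bookkeeping: a per-kernel scalar (CondConv/DyConv) gives n degrees of attention freedom per layer, while attending over all four dimensions gives k·k + c_in + c_out + n, even with n = 1.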
These observations indicate that the attention mechanism design plays the key role in dynamic convolution , and that a more effective design may strike a good balance between model accuracy and size . | The authors present ODConv, a type of dynamic convolutional operation. ODConv combines two prior ideas, i.e. (1) filter recalibration with attention in SENet and (2) additive kernels in CondConv/DyConv, and also generalizes to all remaining dimensions of convolutional filters. The authors propose to use ODConv as a drop-in replacement for regular convolutions in standard CNNs. Experiments and analysis over several tasks demonstrate that ODConv has noticeable advantages over alternatives and offers a good tradeoff between performance and compute. | SP:8467ec8e80c64d6648e1053b1f7cb593de940132
The Manifold Hypothesis for Gradient-Based Explanations | 1 INTRODUCTION . A large number of algorithms aim to provide post-hoc explanations for the output of neural networks ( Simonyan et al. , 2014 ; Bach et al. , 2015 ; Shrikumar et al. , 2017 ; Ancona et al. , 2018 ; Lim et al. , 2021 ) . Many of them are , directly or indirectly , based on the gradient with respect to the input ( Smilkov et al. , 2017 ; Sundararajan et al. , 2017 ; Garreau & Mardaoui , 2021 ; Agarwal et al. , 2021 ) . A particularly interesting application for gradient-based input attribution methods is natural image classification . Despite recent attempts to provide a priori interpretable image classifiers , neural networks remain exceptionally successful at classification ( Chen et al. , 2018a ) . However , recent work has demonstrated that post-hoc explanation methods fail various sanity checks ( Adebayo et al. , 2018b ; a ; Kindermans et al. , 2019 ; Arun et al. , 2020 ) , and some have even suggested that they should not be used at all ( Rudin , 2019 ) . In this paper , we try to understand when and why gradient-based input attribution methods can be meaningful . To this end , we propose the following hypothesis : Gradient-based explanations are more meaningful the more they are aligned with the tangent space of the data manifold . Consider image classification . It is widely believed that natural image data concentrates around a low-dimensional image manifold ( Goodfellow et al. , 2016 , Section 5.11.3 ) . This image manifold captures the geometric structure of the data . In particular , the tangent space captures all components of an image that can be changed while still staying within the realm of natural images . If a gradient-based explanation approximately lies in the tangent space , this means that it highlights a meaningful way in which the different components of an image contribute to the prediction .
If a gradient-based explanation lies orthogonal to the tangent space , this means that it points in some direction that would not lead to realistic images , and a human would have a hard time understanding its meaning . Our motivation for proposing this hypothesis is twofold . First , we believe that it is intuitive , and we provide empirical evidence in its support . Second , we hope that the hypothesis can provide a perspective on why obtaining gradient-based explanations might be more difficult than classification . To evaluate the hypothesis empirically , we employ autoencoders to estimate the data manifolds of five different datasets - MNIST , EMNIST , CIFAR10 , X-ray pneumonia and diabetic retinopathy detection . As depicted in Figure 1 , we also use variational autoencoders as generative models . This allows us to generate datasets with completely known manifold structure . With this approach , we provide qualitative and quantitative evidence that explanations that are more aligned with the tangent space of the data are more interpretable . To study when and why model gradients are aligned with the tangent space of the data , we first show that the gradients of neural networks at initialization are unrelated to the structure of the data manifold . This means that the learning algorithm picks up some aspects of the structure of the data manifold during training . We show that this happens early during training , and to some extent even when training with random labels . Moreover , under standard training procedures , the alignment between model gradients and the data manifold deteriorates as the model increasingly fits the labels . This is avoided by l2 adversarial training , which significantly aligns the model ’ s gradients with the tangent space of the data . Is it always the case that a neural network that generalizes necessarily adapts its gradients to the data manifold , at least to some degree ? The answer is no , as we show theoretically .
Without further assumptions , the relation between the data manifold and model gradients is ambiguous : the alignment between the two quantities can be arbitrarily good or bad . The organization of the paper is as follows . Sec. 2 formally introduces the manifold hypothesis and outlines our conceptual approach . Sec. 3 evaluates the hypothesis on five datasets . Sec. 4 discusses the effects of adversarial training and the evolution of model gradients over the course of training . Sec. 5 contains a formal proof that generalization does not imply alignment with the data manifold , Sec. 6 discusses the related work , and Sec. 7 discusses the implications of our results . 2 THE MANIFOLD HYPOTHESIS . Our goal is to evaluate the following hypothesis : A gradient-based explanation E ∈ R^d at a point x ∈ M is more meaningful the more it is aligned with the tangent space of the data manifold at x . Below we first give background on data manifolds , tangent spaces and model gradients ; then we detail our evaluation approach . 2.1 BACKGROUND . Data manifolds and tangent spaces . A k-dimensional differentiable manifold M ⊂ R^d is a subset of a d-dimensional space that locally resembles R^k . At every point x ∈ M , the tangent space T_x is a k-dimensional subspace of R^d . The tangent space T_x consists of all directions v such that x + v , for ‖v‖ small , is again close to the manifold . Manifolds and tangent spaces are the subject of differential geometry , to which we refer for a comprehensive introduction . The long-standing hypothesis that natural image data concentrates around a low-dimensional image manifold is supported by a number of empirical studies ( Weinberger & Saul , 2006 ; Fefferman et al. , 2016 ) . However , accurately learning the data manifolds of natural image datasets – manifold learning – is difficult and the exact properties of these manifolds remain unknown ( Cayton , 2005 ; Aamari & Levrard , 2019 ) . Shao et al .
( 2018 ) investigate the properties of manifolds generated by deep generative models and find that they have mostly low curvature . Model gradients and the fraction of the gradient in tangent space . We consider neural networks that learn differentiable functions f : R^d → R^C . Here C is the number of classes and the model prediction is given by argmax_i f ( x ) _i . The gradient of class i with respect to the input is given by grad_i ( x ) = ∂f ( x ) _i / ∂x . Unless mentioned otherwise , we always consider the model gradient with respect to the predicted class and before the softmax . At every point x that lies on the data manifold M , we can decompose the gradient into a part that lies in the tangent space and a part that is orthogonal to it . Formally , we have grad_i ( x ) = v_1 + v_2 with v_1 ∈ T_x , v_2 ∈ T_x^⊥ and v_1 ⊥ v_2 . Here v_1 is the part of the gradient that lies in the tangent space , and v_2 is the part of the gradient that is orthogonal to the tangent space . If the gradient lies completely in the tangent space , we have v_2 = 0 . If the gradient is completely orthogonal to the tangent space , we have v_1 = 0 . In practice , some part of the gradient will lie in the tangent space and another part will be orthogonal to it , that is , we have v_1 ≠ 0 and v_2 ≠ 0 . To quantitatively measure how well the gradient is aligned with the tangent space , we compute the Fraction of the Gradient in Tangent Space = ‖v_1‖ / ‖grad_i ( x ) ‖ ∈ [ 0 , 1 ] . ( 1 ) 2.2 HOW DO WE KNOW THE DATA MANIFOLD ? To estimate whether an explanation is aligned with the tangent space , we make use of autoencoders . The various variants of variational autoencoders ( Kingma & Welling , 2013 ; Higgins et al. , 2017 ) allow us to estimate the data manifolds of existing datasets . Importantly , they also allow us to generate datasets with completely known manifold structure ( Algorithm 1 ) . We make use of two related approaches that we term the generative approach and the reconstructive approach .
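Given an (estimated) basis of the tangent space, Eq. 1 reduces to an orthogonal projection followed by a norm ratio. A minimal NumPy sketch, assuming the tangent space is available as the column span of a matrix T (in the paper it comes from an autoencoder; the random data here is illustrative):

```python
import numpy as np

def fraction_in_tangent_space(grad, T):
    """Fraction of grad (d,) lying in the tangent space spanned by the
    columns of T (d x k): ||v_1|| / ||grad|| per Eq. 1."""
    Q, _ = np.linalg.qr(T)       # orthonormal basis of the tangent space
    v1 = Q @ (Q.T @ grad)        # orthogonal projection onto span(T)
    return np.linalg.norm(v1) / np.linalg.norm(grad)

rng = np.random.default_rng(3)
d, k = 100, 10
T = rng.normal(size=(d, k))

# A gradient inside the tangent space has fraction exactly 1 ...
g_in = T @ rng.normal(size=k)
assert np.isclose(fraction_in_tangent_space(g_in, T), 1.0)

# ... while a generic gradient in R^d has a fraction strictly between
# 0 and 1 (for a random Gaussian vector the expected squared fraction
# is k/d).
g_rand = rng.normal(size=d)
f = fraction_in_tangent_space(g_rand, T)
assert 0.0 < f < 1.0
```

The k/d baseline for random vectors is why the comparison is only informative when the latent dimension k is much smaller than the ambient dimension d, as discussed below for the generative approach.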
In both approaches , we first train an autoencoder on the original dataset . The generative approach to create datasets with a completely known manifold structure . To generate a dataset with completely known manifold structure , we have to train a variational autoencoder or another generative autoencoder ( Tagasovska et al. , 2019 ) . After training , we pass the original dataset through the autoencoder . Then we train an additional classifier that reproduces the original labels from latent codes and reconstructed images . Equipped with this labeling function , we sample from the prior and use the decoder and the labeling function to generate a dataset . If the decoder is differentiable , we can compute the tangent space at each datapoint x ( Shao et al. , 2018 ; Anders et al. , 2020 ) . The reconstructive approach to create datasets with an estimated manifold structure . The main limitation of the generative approach is that we might not be able to obtain high-quality samples with reasonably small latent spaces . While there have been great advances in generative modeling , state-of-the-art models like hierarchical variational autoencoders ( Vahdat & Kautz , 2020 ) require very large latent spaces , i.e. , k ≈ d . For our analysis , it is however critical that √ ( k/d ) is small – with k = d , the fraction of the gradient in tangent space is always 1 . To evaluate our hypothesis on real-world high-dimensional image data where it is difficult to obtain realistic samples with not-too-large latent spaces , we rely on estimating the tangent space . That is , we simply pass the original dataset through the autoencoder and take the reconstructed images with the original labels as our new dataset . 3 PUTTING THE HYPOTHESIS TO THE TEST . Explanation algorithms . We consider four gradient-based input attribution methods : the gradient ( Simonyan et al. , 2014 ) , Integrated Gradients ( Sundararajan et al. , 2017 ) , Input × Gradient ( Ancona et al. , 2018 ) , and SmoothGrad ( Smilkov et al. , 2017 ) .
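For a differentiable decoder g : Rk → Rd, the tangent space at x = g(z) is spanned by the columns of the decoder's Jacobian at z. A toy sketch of this computation, using finite differences in place of the automatic differentiation one would use in practice, and with a made-up stand-in decoder (not the paper's architecture):

```python
import numpy as np

def numerical_jacobian(decoder, z, eps=1e-6):
    """Finite-difference Jacobian of a decoder g: R^k -> R^d at latent code z.
    Its columns span the tangent space of the generated manifold at x = g(z)."""
    x0 = decoder(z)
    J = np.zeros((x0.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (decoder(z + dz) - x0) / eps
    return J

# toy differentiable "decoder" embedding k = 2 latents into d = 4 pixels
def decoder(z):
    return np.tanh(np.array([z[0], z[1], z[0] + z[1], z[0] - z[1]]))

z = np.array([0.3, -0.5])
J = numerical_jacobian(decoder, z)
Q, _ = np.linalg.qr(J)   # orthonormal basis of the k-dimensional tangent space
print(Q.shape)           # (4, 2)
```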
The motivation behind Integrated Gradients is axiomatic . The motivation behind SmoothGrad is to reduce noise in the gradient . All four methods provide explanations as vectors in Rd . We can evaluate how each method is aligned with the tangent space of the data manifold by computing the fraction of the explanation in tangent space . While other methods also provide explanations as vectors in Rd , we restrict ourselves to these four methods because they are directly related to the gradient with respect to the input , which is our main object of investigation . Experimental setting . Given a dataset , obtained either with the generative or the reconstructive approach , we train a neural network to minimize the test error . For this network , we then evaluate how gradients and other gradient-based explanation methods relate to the data manifold . To evaluate whether an explanation is meaningful , we use qualitative evaluations as demonstrated in ( Simonyan et al. , 2014 ; Bach et al. , 2015 ; Sundararajan et al. , 2017 ; Smilkov et al. , 2017 ) . We also rely on the literature that demonstrates the utility of Integrated Gradients and SmoothGrad for diabetic retinopathy detection ( Sayres et al. , 2019 ; Van Craenendonck et al. , 2020 ) . When we quantitatively evaluate the fraction of an explanation in tangent space ( 1 ) , we need to account for the fact that even a random vector has a non-zero fraction in tangent space . A random vector is by definition completely unrelated to the structure of the data manifold . The expected fraction of a random vector that lies in any k-dimensional subspace is √ ( k/d ) . In our MNIST32 task , for example , d = 1024 , k = 10 and √ ( 10/1024 ) ≈ 0.1 . Thus , we could only say that a gradient-based explanation is systematically related to the tangent space of the data manifold if , on average , the fraction of the explanation in tangent space is significantly larger than 0.1 . Datasets . We evaluate our hypothesis on several datasets .
This includes ( i ) MNIST32 and MNIST256 , two variants of the MNIST dataset ( LeCun et al. , 1998 ) with 10 classes and 60000 grayscale training images and 10000 grayscale test images of size 32 × 32 and 256 × 256 , respectively , ( ii ) EMNIST128 , a variant of the EMNIST dataset ( Cohen et al. , 2017 ) that extends MNIST with handwritten letters and has over 60 classes , and ( iii ) the CIFAR10 dataset ( Krizhevsky et al. , 2009 ) . We also evaluate our hypothesis on two real-world high-dimensional image datasets : X-ray Pneumonia ( Kermany et al. , 2018 ) and Diabetic Retinopathy Detection 1 . Both tasks have been used before to study the properties of post-hoc explanation methods ( Rajaraman et al. , 2019 ; Luján-Garcı́a et al. , 2020 ; Amyar et al. , 2020 ; Arun et al. , 2020 ; Chetoui & Akhloufi , 2020 ; Van Craenendonck et al. , 2020 ) . All further details on the datasets are provided in appendix A . | The paper argues that the main reason ( or a good reason ) for the `` meaningfulness '' of a gradient is its alignment with the data manifold . The authors perform a set of controlled experiments with different feature attribution methods . Finally , they theoretically show that alignment of the gradient with the data manifold has nothing to do with generalizability . | SP:c3276f7bbc7faa158569f67c2cd806e4154e0048 |
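The √(k/d) baseline for a random vector, used above with d = 1024 and k = 10, is easy to verify numerically: a standard Gaussian vector has a rotation-invariant direction, so without loss of generality the k-dimensional subspace can be taken as the first k coordinates. A quick Monte Carlo check with the MNIST32 numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 10                      # MNIST32 pixel and latent dimensions
v = rng.standard_normal((20000, d))  # Gaussian vectors: uniform directions
frac = np.linalg.norm(v[:, :k], axis=1) / np.linalg.norm(v, axis=1)
print(frac.mean())                   # close to sqrt(10/1024) ≈ 0.099
```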
The Manifold Hypothesis for Gradient-Based Explanations | 1 INTRODUCTION . A large number of algorithms aim to provide post-hoc explanations for the output of neural networks ( Simonyan et al. , 2014 ; Bach et al. , 2015 ; Shrikumar et al. , 2017 ; Ancona et al. , 2018 ; Lim et al. , 2021 ) . Many of them are , directly or indirectly , based on the gradient with respect to the input ( Smilkov et al. , 2017 ; Sundararajan et al. , 2017 ; Garreau & Mardaoui , 2021 ; Agarwal et al. , 2021 ) . A particularly interesting application for gradient-based input attribution methods is natural image classification . Despite recent attempts to provide a priori interpretable image classifiers , neural networks remain exceptionally successful at classification ( Chen et al. , 2018a ) . However , recent work has demonstrated that post-hoc explanation methods fail various sanity checks ( Adebayo et al. , 2018b ; a ; Kindermans et al. , 2019 ; Arun et al. , 2020 ) , and some have even suggested that they should not be used at all ( Rudin , 2019 ) . In this paper , we try to understand when and why gradient-based input attribution methods can be meaningful . To this end , we propose the following hypothesis : Gradient-based explanations are more meaningful the more they are aligned with the tangent space of the data manifold . Consider image classification . It is widely believed that natural image data concentrates around a low-dimensional image manifold ( Goodfellow et al. , 2016 , Section 5.11.3 ) . This image manifold captures the geometric structure of the data . In particular , the tangent space captures all components of an image that can be changed while still staying within the realm of natural images . If a gradient-based explanation approximately lies in the tangent space , this means that it highlights a meaningful way in which the different components of an image contribute to the prediction .
If a gradient-based explanation lies orthogonal to the tangent space , this means that it points in some direction that would not lead to realistic images , and a human would have a hard time understanding its meaning . Our motivation for proposing this hypothesis is twofold . First , we believe that it is intuitive , and we provide empirical evidence in its support . Second , we hope that the hypothesis can provide a perspective on why obtaining gradient-based explanations might be more difficult than classification . To evaluate the hypothesis empirically , we employ autoencoders to estimate the data manifolds of five different datasets - MNIST , EMNIST , CIFAR10 , X-ray pneumonia and diabetic retinopathy detection . As depicted in Figure 1 , we also use variational autoencoders as generative models . This allows us to generate datasets with completely known manifold structure . With this approach , we provide qualitative and quantitative evidence that explanations that are more aligned with the tangent space of the data are more interpretable . To study when and why model gradients are aligned with the tangent space of the data , we first show that the gradients of neural networks at initialization are unrelated to the structure of the data manifold . This means that the learning algorithm picks up some aspects of the structure of the data manifold during training . We show that this happens early during training , and to some extent even when training with random labels . Moreover , under standard training procedures , the alignment between model gradients and the data manifold deteriorates as the model increasingly fits the labels . This is avoided by l2 adversarial training , which significantly aligns the model ’ s gradients with the tangent space of the data . Is it always the case that a neural network that generalizes necessarily adapts its gradients to the data manifold , at least to some degree ? The answer is no , as we show theoretically .
Without further assumptions , the relation between the data manifold and model gradients is ambiguous : the alignment between the two quantities can be arbitrarily good or bad . The organization of the paper is as follows . Sec.2 formally introduces the manifold hypothesis and outlines our conceptual approach . Sec.3 evaluates the hypothesis on five datasets . Sec.4 discusses the effects of adversarial training and the evolution of model gradients over the course of training . Sec.5 contains a formal proof that generalization does not imply alignment with the data manifold , Sec.6 discusses the related work , and Sec.7 discusses the implications of our results . 2 THE MANIFOLD HYPOTHESIS . Our goal is to evaluate the following hypothesis : A gradient-based explanation E ∈ Rd at a point x ∈ M is more meaningful the more it is aligned with the tangent space of the data manifold at x . Below we first give a background on data manifolds , tangent spaces and model gradients ; then we detail our evaluation approach . 2.1 BACKGROUND . Data manifolds and tangent spaces . A k-dimensional differentiable manifold M ⊂ Rd is a subset of a d-dimensional space that locally resembles Rk . At every point x ∈ M , the tangent space Tx is a k-dimensional subspace of Rd . The tangent space Tx consists of all directions v such that x + v , for ‖v‖ small , is again close to the manifold . Manifolds and tangent spaces are the subject of differential geometry , to which we refer the reader for a comprehensive introduction . The long-standing hypothesis that natural image data concentrates around a low-dimensional image manifold is supported by a number of empirical studies ( Weinberger & Saul , 2006 ; Fefferman et al. , 2016 ) . However , accurately learning the data manifolds of natural image datasets – manifold learning – is difficult and the exact properties of these manifolds remain unknown ( Cayton , 2005 ; Aamari & Levrard , 2019 ) . Shao et al .
( 2018 ) investigate the properties of manifolds generated by deep generative models and find that they have mostly low curvature . Model gradients and the fraction of the gradient in tangent space . We consider neural networks that learn differentiable functions f : Rd → RC . Here C is the number of classes and the model prediction is given by argmaxi f ( x ) i . The gradient of class i with respect to the input is given by gradi ( x ) = ∂f ( x ) i / ∂x . Unless mentioned otherwise , we always consider the model gradient with respect to the predicted class and before the softmax . At every point x that lies on the data manifold M , we can decompose the gradient into a part that lies in tangent space and a part that is orthogonal to it . Formally , we have gradi ( x ) = v1 + v2 with v1 ∈ Tx , v2 ∈ T⊥x and v1 ⊥ v2 . Here v1 is the part of the gradient that lies in the tangent space , and v2 is the part of the gradient that is orthogonal to the tangent space . If the gradient completely lies in the tangent space , we have v2 = 0 . If the gradient is completely orthogonal to the tangent space , we have v1 = 0 . In practice , some part of the gradient will lie in the tangent space and another part will be orthogonal to it , that is we have v1 ≠ 0 and v2 ≠ 0 . To quantitatively measure how well the gradient is aligned with the tangent space , we compute the Fraction of the Gradient in Tangent Space = ‖v1‖ / ‖gradi ( x ) ‖ ∈ [ 0 , 1 ] . ( 1 ) 2.2 HOW DO WE KNOW THE DATA MANIFOLD ? . To estimate whether an explanation is aligned with the tangent space , we make use of autoencoders . The various variants of variational autoencoders ( Kingma & Welling , 2013 ; Higgins et al. , 2017 ) allow us to estimate the data manifolds of existing datasets . Importantly , they also allow us to generate datasets with completely known manifold structure ( Algorithm 1 ) . We make use of two related approaches that we term the generative approach and the reconstructive approach .
In both approaches , we first train an autoencoder on the original dataset . The generative approach to create datasets with a completely known manifold structure . To generate a dataset with completely known manifold structure , we have to train a variational autoencoder or another generative autoencoder ( Tagasovska et al. , 2019 ) . After training , we pass the original dataset through the autoencoder . Then we train an additional classifier that reproduces the original labels from latent codes and reconstructed images . Equipped with this labeling function , we sample from the prior and use the decoder and the labeling function to generate a dataset . If the decoder is differentiable , we can compute the tangent space at each datapoint x ( Shao et al. , 2018 ; Anders et al. , 2020 ) . The reconstructive approach to create datasets with an estimated manifold structure . The main limitation of the generative approach is that we might not be able to obtain high-quality samples with reasonably small latent spaces . While there have been great advances in generative modeling , state-of-the-art models like hierarchical variational autoencoders ( Vahdat & Kautz , 2020 ) require very large latent spaces , i.e. , k ≈ d . For our analysis , it is however critical that √ ( k/d ) is small – with k = d , the fraction of the gradient in tangent space is always 1 . To evaluate our hypothesis on real-world high-dimensional image data where it is difficult to obtain realistic samples with not-too-large latent spaces , we rely on estimating the tangent space . That is , we simply pass the original dataset through the autoencoder and take the reconstructed images with the original labels as our new dataset . 3 PUTTING THE HYPOTHESIS TO THE TEST . Explanation algorithms . We consider four gradient-based input attribution methods : the gradient ( Simonyan et al. , 2014 ) , Integrated Gradients ( Sundararajan et al. , 2017 ) , Input × Gradient ( Ancona et al. , 2018 ) , and SmoothGrad ( Smilkov et al. , 2017 ) .
The motivation behind Integrated Gradients is axiomatic . The motivation behind SmoothGrad is to reduce noise in the gradient . All four methods provide explanations as vectors in Rd . We can evaluate how each method is aligned with the tangent space of the data manifold by computing the fraction of the explanation in tangent space . While other methods also provide explanations as vectors in Rd , we restrict ourselves to these four methods because they are directly related to the gradient with respect to the input , which is our main object of investigation . Experimental setting . Given a dataset , obtained either with the generative or the reconstructive approach , we train a neural network to minimize the test error . For this network , we then evaluate how gradients and other gradient-based explanation methods relate to the data manifold . To evaluate whether an explanation is meaningful , we use qualitative evaluations as demonstrated in ( Simonyan et al. , 2014 ; Bach et al. , 2015 ; Sundararajan et al. , 2017 ; Smilkov et al. , 2017 ) . We also rely on the literature that demonstrates the utility of Integrated Gradients and SmoothGrad for diabetic retinopathy detection ( Sayres et al. , 2019 ; Van Craenendonck et al. , 2020 ) . When we quantitatively evaluate the fraction of an explanation in tangent space ( 1 ) , we need to account for the fact that even a random vector has a non-zero fraction in tangent space . A random vector is by definition completely unrelated to the structure of the data manifold . The expected fraction of a random vector that lies in any k-dimensional subspace is √ ( k/d ) . In our MNIST32 task , for example , d = 1024 , k = 10 and √ ( 10/1024 ) ≈ 0.1 . Thus , we could only say that a gradient-based explanation is systematically related to the tangent space of the data manifold if , on average , the fraction of the explanation in tangent space is significantly larger than 0.1 . Datasets . We evaluate our hypothesis on several datasets .
This includes ( i ) MNIST32 and MNIST256 , two variants of the MNIST dataset ( LeCun et al. , 1998 ) with 10 classes and 60000 grayscale training images and 10000 grayscale test images of size 32 × 32 and 256 × 256 , respectively , ( ii ) EMNIST128 , a variant of the EMNIST dataset ( Cohen et al. , 2017 ) that extends MNIST with handwritten letters and has over 60 classes , and ( iii ) the CIFAR10 dataset ( Krizhevsky et al. , 2009 ) . We also evaluate our hypothesis on two real-world high-dimensional image datasets : X-ray Pneumonia ( Kermany et al. , 2018 ) and Diabetic Retinopathy Detection 1 . Both tasks have been used before to study the properties of post-hoc explanation methods ( Rajaraman et al. , 2019 ; Luján-Garcı́a et al. , 2020 ; Amyar et al. , 2020 ; Arun et al. , 2020 ; Chetoui & Akhloufi , 2020 ; Van Craenendonck et al. , 2020 ) . All further details on the datasets are provided in appendix A . | The paper constructs a synthetic classification task with a known manifold structure by training the classifier with data from a variational autoencoder with a low-dimensional latent space. The paper argues that the components of image gradients that lie in the tangent space of the data manifold are semantically meaningful, whereas the part orthogonal to the image manifold is nonsensical. The experiments in the paper support this hypothesis to an extent. This is an interesting, although not unexpected, conclusion. | SP:c3276f7bbc7faa158569f67c2cd806e4154e0048 |
Deep learning via message passing algorithms based on belief propagation | 1 INTRODUCTION . Belief Propagation is a method for computing marginals and entropies in probabilistic inference problems ( Bethe , 1935 ; Peierls , 1936 ; Gallager , 1962 ; Pearl , 1982 ) . These include optimization problems as well once they are written as the zero-temperature limit of a Gibbs distribution that uses the cost function as energy . Learning is one particular case , in which one wants to minimize a cost which is a data-dependent loss function . These problems are generally intractable and message-passing techniques have been particularly successful at providing principled approximations through efficient distributed computations . A particularly compact representation of inference/optimization problems that is used to build message-passing algorithms is provided by factor graphs . A factor graph is a bipartite graph composed of variable nodes and factor nodes expressing the interactions among variables . Belief Propagation is exact for tree-like factor graphs ( Yedidia et al. , 2003 ) , where the Gibbs distribution is naturally factorized , whereas it is approximate for graphs with loops . Still , loopy BP is routinely used with success in many real-world applications , ranging from error-correcting codes to vision and clustering , just to mention a few . In all these problems , loops are indeed present in the factor graph and yet the variables are weakly correlated at long range and BP gives good results . A field in which BP has a long history is the statistical physics of disordered systems , where it is known as the Cavity Method ( Mézard et al. , 1987 ) . It has been used to study the typical properties of spin glass models which represent binary variables interacting through random interactions over a given graph .
It is very well known that in spin glass models defined on complete graphs and in locally tree-like random graphs , which are both loopy , the weak correlation conditions between variables may hold and BP gives asymptotically exact results ( Mézard & Montanari , 2009 ) . Here we will mostly focus on neural networks with ±1 binary weights and sign activation functions , for which the messages and the marginals can be described simply by the difference between the probabilities associated with the +1 and -1 states , the so-called magnetizations . The effectiveness of BP for deep learning has never been numerically tested in a systematic way ; however , there is clear evidence that the weak correlation decay condition does not hold and thus BP convergence and approximation quality are unpredictable . In this paper we explore the effectiveness of a variant of BP that has shown excellent convergence properties in hard optimization problems and in non-convex shallow networks . It goes under the name of focusing BP ( fBP ) and is based on a probability distribution , a likelihood , that focuses on highly entropic wide minima , neglecting the contribution to marginals from narrow minima even when they are the majority ( and hence dominate the Gibbs distribution ) . This version of BP is thus expected to give good results only in models that have such wide entropic minima as part of their energy landscape . As discussed in ( Baldassi et al. , 2016a ) , a simple way to define fBP is to add a `` reinforcement '' term to the BP equations : an iteration-dependent local field is introduced for each variable , with an intensity proportional to its marginal probability computed in the previous iteration step . This field is gradually increased until the entire system becomes fully biased on a configuration . The first version of reinforced BP was introduced in ( Braunstein & Zecchina , 2006 ) as a heuristic algorithm to solve the learning problem in shallow binary networks . Baldassi et al .
( 2016a ) showed that this version of BP is a limiting case of fBP , i.e. , BP equations written for a likelihood that uses the local entropy function instead of the error ( energy ) loss function . As discussed in depth in that study , one way to introduce a likelihood that focuses on highly entropic regions is to create y coupled replicas of the original system . fBP equations are obtained as BP equations for the replicated system . It turns out that the fBP equations are identical to the BP equations for the original system with the only addition of a self-reinforcing term in the message passing scheme . The fBP algorithm can be used as a solver by gradually increasing the effect of the reinforcement : one can control the size of the regions over which the fBP equations estimate the marginals by tuning the parameters that appear in the expression of the reinforcement , until the high entropy regions reduce to a single configuration . Interestingly , by keeping the size of the high entropy region fixed , the fBP fixed point allows one to estimate the marginals and entropy relative to the region . In this work , we present and adapt to GPU computation a family of fBP-inspired message passing algorithms that are capable of training multi-layer neural networks with generalization performance and computational speed comparable to SGD . This is the first work that shows that learning by message passing in deep neural networks 1 ) is possible and 2 ) is a viable alternative to SGD . Our version of fBP adds the reinforcement term at each mini-batch step in what we call the Posterior-as-Prior ( PasP ) rule . Furthermore , using the message-passing algorithm not as a solver but as an estimator of marginals allows us to make locally Bayesian predictions , averaging the predictions over the approximate posterior .
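The reinforcement mechanism can be caricatured on a toy two-variable system (the couplings, fields, and schedule below are made up for illustration and this is not the paper's full message-passing scheme). Marginals are tracked as magnetizations m = P(+1) − P(−1); at each step the previous marginals are fed back as an extra local field, which gradually polarizes the system onto a single configuration:

```python
import numpy as np

# Toy ±1 system: couplings J and external fields b (illustrative values).
J = np.array([[0.0, 0.5],
              [0.5, 0.0]])
b = np.array([0.3, 0.3])
gamma = 0.05                 # intensity of the reinforcement feedback

m = np.zeros(2)              # magnetizations m_i = P(s_i = +1) - P(s_i = -1)
h = np.zeros(2)              # accumulated reinforcement field
for _ in range(200):
    h += gamma * m           # local field proportional to previous marginals
    m = np.tanh(J @ m + b + h)

print(m)                     # both magnetizations polarize towards +1
```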
The resulting generalization error is significantly better than that of the solver , showing that , although approximate , the marginals of the weights estimated by message-passing retain useful information . Consistent with the assumptions underlying fBP , we find that the solutions provided by the message passing algorithms belong to flat entropic regions of the loss landscape and have good performance in continual learning tasks and on sparse networks as well . We also remark that our PasP update scheme is of independent interest and can be combined with different posterior approximation techniques . The paper is structured as follows : in Sec . 2 we give a brief review of some related works . In Sec . 3 we provide a detailed description of the message-passing equations and of the high level structure of the algorithms . In Sec . 4 we compare the performance of the message passing algorithms versus SGD-based approaches in different learning settings . 2 RELATED WORKS . The literature on message passing algorithms is extensive ; we refer to Mézard & Montanari ( 2009 ) and Zdeborová & Krzakala ( 2016 ) for a general overview . More related to our work , multilayer message-passing algorithms have been developed in inference contexts ( Manoel et al. , 2017 ; Fletcher et al. , 2018 ) , where they have been shown to produce exact marginals under certain statistical assumptions on ( unlearned ) weight matrices . The properties of message-passing for learning shallow neural networks have been extensively studied ( see Baldassi et al . ( 2020 ) and references therein ) . Barbier et al . ( 2019 ) rigorously show that message passing algorithms in generalized linear models perform asymptotically exact inference under some statistical assumptions . Dictionary learning and matrix factorization are harder problems closely related to deep network learning problems , in particular to the modelling of a single intermediate layer .
They have been approached using message passing in Kabashima et al . ( 2016 ) and Parker et al . ( 2014 ) , although the resulting predictions are found to be asymptotically inexact ( Maillard et al. , 2021 ) . The same problem is faced by the message passing algorithm recently proposed for a multi-layer matrix factorization scenario ( Zou et al. , 2021 ) . Unfortunately , our framework does not yield asymptotically exact predictions either . Nonetheless , it gives a message passing heuristic that for the first time is able to train deep neural networks on natural datasets , and therefore sets a reference for the algorithmic applications of this research line . A few papers attribute the success of SGD to the geometrical structure ( smoothness and flatness ) of the loss landscape in neural networks ( Baldassi et al. , 2015 ; Chaudhari et al. , 2017 ; Garipov et al. , 2018 ; Li et al. , 2018 ; Pittorino et al. , 2021 ; Feng & Tu , 2021 ) . These considerations do not depend on the particular form of the SGD dynamics and should extend also to other types of algorithms , although SGD is by far the most popular choice among NN practitioners due to its simplicity , flexibility , speed , and generalization performance . While our work focuses on message passing schemes , some of the ideas presented here , such as the PasP rule , can be combined with algorithms for training Bayesian neural networks ( Hernández-Lobato & Adams , 2015 ; Wu et al. , 2018 ) . Recent work extends BP by combining it with graph neural networks ( Kuck et al. , 2020 ; Satorras & Welling , 2021 ) . Finally , some work in computational neuroscience shows similarities to our approach ( Rao , 2007 ) . 3 LEARNING BY MESSAGE PASSING . 3.1 POSTERIOR-AS-PRIOR UPDATES . We consider a multi-layer perceptron with L hidden neuron layers , having weight and bias parameters W = { Wℓ , bℓ } with ℓ = 0 , … , L .
We allow for stochastic activations Pℓ ( xℓ+1 | zℓ ) , where zℓ is the neuron ’ s preactivation vector for layer ℓ , and Pℓ is assumed to be factorized over the neurons . If no stochasticity is present , Pℓ just encodes an element-wise activation function . The probability of output y given an input x is then given by P ( y | x , W ) = ∫ dx1:L ∏ℓ=0…L Pℓ+1 ( xℓ+1 | Wℓ xℓ + bℓ ) , ( 1 ) where for convenience we defined x0 = x and xL+1 = y . In a Bayesian framework , given a training set D = { ( xn , yn ) }n and a prior distribution over the weights qθ ( W ) in some parametric family , the posterior distribution is given by P ( W | D , θ ) ∝ ∏n P ( yn | xn , W ) qθ ( W ) . ( 2 ) Here ∝ denotes equality up to a normalization factor . Using the posterior one can compute the Bayesian prediction for a new data-point x through the average P ( y | x , D , θ ) = ∫ dW P ( y | x , W ) P ( W | D , θ ) . Unfortunately , the posterior is generically intractable due to the hard-to-compute normalization factor . On the other hand , we are mainly interested in training a distribution that covers wide minima of the loss landscape that generalize well ( Baldassi et al. , 2016a ) and in recovering pointwise estimators within these regions . The Bayesian modeling becomes an auxiliary tool to set the stage for the message passing algorithms seeking flat minima . We also need a formalism that allows for mini-batch training to speed up the computation and deal with large datasets . Therefore , we devise an update scheme that we call Posterior-as-Prior ( PasP ) , where we evolve the parameters θt of a distribution qθt ( W ) computed as an approximate mini-batch posterior , in such a way that the outcome of the previous iteration becomes the prior in the following step . In the PasP scheme , θt retains the memory of past observations .
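As a concrete, hypothetical instance of the stochastic network defining P(y | x, W) in Eq. (1), one can take stochastic sign activations, where each neuron outputs +1 with a sigmoid probability of its preactivation, and estimate the output distribution by Monte Carlo. The sigmoid parametrization, the weights, and all shapes below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sign(z, beta=2.0):
    """Sample x in {-1,+1} with P(x=+1|z) = sigmoid(2*beta*z); large beta
    approaches the deterministic sign activation (an illustrative choice)."""
    p = 1.0 / (1.0 + np.exp(-2.0 * beta * z))
    return np.where(rng.random(z.shape) < p, 1.0, -1.0)

def sample_output(x, Ws, bs):
    """Draw one sample from the layered stochastic network P(y | x, W)."""
    for W, b in zip(Ws, bs):
        x = stochastic_sign(W @ x + b)
    return x

# Monte Carlo estimate of P(y | x, W) by averaging over activation noise
Ws = [np.sign(rng.standard_normal((4, 6))), np.sign(rng.standard_normal((2, 4)))]
bs = [np.zeros(4), np.zeros(2)]
x = np.sign(rng.standard_normal(6))
samples = np.stack([sample_output(x, Ws, bs) for _ in range(2000)])
p_plus = (samples > 0).mean(axis=0)   # estimated P(y_i = +1 | x, W)
print(p_plus)
```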
We also add an exponential factor ρ , which we typically set close to 1 , tuning the forgetting rate and playing a role similar to the learning rate in SGD . Given a mini-batch ( Xt , yt ) sampled from the training set at time t and a scalar ρ > 0 , the PasP update reads qθt+1 ( W ) ≈ [ P ( W | yt , Xt , θt ) ]ρ , ( 3 ) where ≈ denotes approximate equality and we do not account for the normalization factor . A first approximation may be needed in the computation of the posterior , a second to project the approximate posterior onto the distribution manifold spanned by θ ( Minka , 2001 ) . In practice , we will consider a factorized approximate posterior in an exponential family and priors qθ in the same family , although Eq . 3 generically allows for more refined approximations . Notice that setting ρ = 1 , the batch-size to 1 , and taking a single pass over the dataset , we recover the Assumed Density Filtering algorithm ( Minka , 2001 ) . For large enough ρ ( including ρ = 1 ) , the iterations of qθt will concentrate on a pointwise estimator . This mechanism mimics the reinforcement heuristic commonly used to turn Belief Propagation into a solver for constraint satisfaction problems ( Braunstein & Zecchina , 2006 ) and is related to flat-minima discovery ( see focusing-BP in Baldassi et al . ( 2016a ) ) . A different prior updating mechanism which can be understood as empirical Bayes has been used in Baldassi et al . ( 2016b ) . | This paper introduces a belief-propagation message-passing training algorithm for multi-layer neural networks. This algorithm is adapted to mini-batch training and biases distributions toward high entropy solutions. Empirical results show that neural networks with discrete weights and activations trained with this algorithm achieve comparable performance to the same networks trained with SGD (BinaryNet), and can make approximate Bayesian predictions that have higher accuracy than pointwise solutions. | SP:a736d2fa98e58e22b69e55daf8b678d1583cc7e8 |
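For a factorized prior over ±1 weights in an exponential family, qθ(W) ∝ exp(Σi θi Wi), raising a distribution to the power ρ simply multiplies its natural parameters by ρ, so the PasP update of Eq. (3) reduces to a field recursion. A schematic sketch of this algebra (the constant `delta_likelihood` is a made-up stand-in for the likelihood contribution that a message-passing pass would actually estimate):

```python
import numpy as np

def pasp_update(theta, delta_likelihood, rho=0.95):
    """One Posterior-as-Prior step in natural parameters: the mini-batch
    posterior's field is (theta + delta_likelihood), and raising it to the
    power rho rescales that field, as in Eq. (3)."""
    return rho * (theta + delta_likelihood)

theta = np.zeros(5)                            # flat initial prior over 5 weights
delta = np.array([0.1, 0.1, -0.1, 0.1, -0.1])  # stand-in likelihood fields
for _ in range(300):                           # repeated consistent evidence
    theta = pasp_update(theta, delta)

m = np.tanh(theta)   # weight magnetizations under q_theta
print(np.sign(m))    # strongly polarized; signs give a pointwise estimate
```

With ρ < 1 the fields converge to a finite fixed point (here ρ·0.1/(1−ρ) = 1.9), while ρ = 1 would let them grow without bound, concentrating the distribution, consistent with the remark above that large enough ρ yields a pointwise estimator.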
Deep learning via message passing algorithms based on belief propagation | 1 INTRODUCTION . Belief Propagation is a method for computing marginals and entropies in probabilistic inference problems ( Bethe , 1935 ; Peierls , 1936 ; Gallager , 1962 ; Pearl , 1982 ) . These include optimization problems as well once they are written as the zero-temperature limit of a Gibbs distribution that uses the cost function as energy . Learning is one particular case , in which one wants to minimize a cost which is a data-dependent loss function . These problems are generally intractable and message-passing techniques have been particularly successful at providing principled approximations through efficient distributed computations . A particularly compact representation of inference/optimization problems that is used to build message-passing algorithms is provided by factor graphs . A factor graph is a bipartite graph composed of variable nodes and factor nodes expressing the interactions among variables . Belief Propagation is exact for tree-like factor graphs ( Yedidia et al. , 2003 ) , where the Gibbs distribution is naturally factorized , whereas it is approximate for graphs with loops . Still , loopy BP is routinely used with success in many real-world applications , ranging from error-correcting codes to vision and clustering , just to mention a few . In all these problems , loops are indeed present in the factor graph and yet the variables are weakly correlated at long range and BP gives good results . A field in which BP has a long history is the statistical physics of disordered systems , where it is known as the Cavity Method ( Mézard et al. , 1987 ) . It has been used to study the typical properties of spin glass models which represent binary variables interacting through random interactions over a given graph .
It is very well known that in spin glass models defined on complete graphs and in locally tree-like random graphs , which are both loopy , the weak correlation conditions between variables may hold and BP gives asymptotically exact results ( Mézard & Montanari , 2009 ) . Here we will mostly focus on neural networks with ±1 binary weights and sign activation functions , for which the messages and the marginals can be described simply by the difference between the probabilities associated with the +1 and -1 states , the so-called magnetizations . The effectiveness of BP for deep learning has never been numerically tested in a systematic way ; however , there is clear evidence that the weak correlation decay condition does not hold , and thus BP convergence and approximation quality are unpredictable . In this paper we explore the effectiveness of a variant of BP that has shown excellent convergence properties in hard optimization problems and in non-convex shallow networks . It goes under the name of focusing BP ( fBP ) and is based on a probability distribution , a likelihood , that focuses on highly entropic wide minima , neglecting the contribution to marginals from narrow minima even when they are the majority ( and hence dominate the Gibbs distribution ) . This version of BP is thus expected to give good results only in models that have such wide entropic minima as part of their energy landscape . As discussed in ( Baldassi et al. , 2016a ) , a simple way to define fBP is to add a `` reinforcement '' term to the BP equations : an iteration-dependent local field is introduced for each variable , with an intensity proportional to its marginal probability computed in the previous iteration step . This field is gradually increased until the entire system becomes fully biased on a configuration . The first version of reinforced BP was introduced in ( Braunstein & Zecchina , 2006 ) as a heuristic algorithm to solve the learning problem in shallow binary networks . Baldassi et al .
( 2016a ) showed that this version of BP is a limiting case of fBP , i.e. , BP equations written for a likelihood that uses the local entropy function instead of the error ( energy ) loss function . As discussed in depth in that study , one way to introduce a likelihood that focuses on highly entropic regions is to create y coupled replicas of the original system . fBP equations are obtained as BP equations for the replicated system . It turns out that the fBP equations are identical to the BP equations for the original system with the only addition of a self-reinforcing term in the message passing scheme . The fBP algorithm can be used as a solver by gradually increasing the effect of the reinforcement : one can control the size of the regions over which the fBP equations estimate the marginals by tuning the parameters that appear in the expression of the reinforcement , until the high entropy regions reduce to a single configuration . Interestingly , by keeping the size of the high entropy region fixed , the fBP fixed point allows one to estimate the marginals and entropy relative to the region . In this work , we present and adapt to GPU computation a family of fBP inspired message passing algorithms that are capable of training multi-layer neural networks with generalization performance and computational speed comparable to SGD . This is the first work that shows that learning by message passing in deep neural networks 1 ) is possible and 2 ) is a viable alternative to SGD . Our version of fBP adds the reinforcement term at each mini-batch step in what we call the Posterioras-Prior ( PasP ) rule . Furthermore , using the message-passing algorithm not as a solver but as an estimator of marginals allows us to make locally Bayesian predictions , averaging the predictions over the approximate posterior . 
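The reinforcement mechanism described above can be caricatured in a few lines: each variable accumulates a local field aligned with its current marginal magnetization until the system fully polarizes. This is a toy stand-in (our own naming, with a tanh as a stand-in for the BP marginals), not the actual fBP message-passing scheme.

```python
import numpy as np

def run_reinforced(bias, n_iters=100, ramp=0.05):
    """Toy reinforcement loop: the field h grows in the direction of the
    current magnetizations m until the configuration is fully biased."""
    h = np.zeros_like(bias)
    for _ in range(n_iters):
        m = np.tanh(bias + h)                            # stand-in for BP marginals
        h = h + ramp * np.arctanh(np.clip(m, -0.999, 0.999))
    return np.sign(np.tanh(bias + h))

run_reinforced(np.array([0.2, -0.3]))   # polarizes to +1 on the first variable, -1 on the second
```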
The resulting generalization error is significantly better than that of the solver , showing that , although approximate , the marginals of the weights estimated by message-passing retain useful information . Consistent with the assumptions underlying fBP , we find that the solutions provided by the message passing algorithms belong to flat entropic regions of the loss landscape and have good performance in continual learning tasks and on sparse networks as well . We also remark that our PasP update scheme is of independent interest and can be combined with different posterior approximation techniques . The paper is structured as follows : in Sec . 2 we give a brief review of some related works . In Sec . 3 we provide a detailed description of the message-passing equations and of the high-level structure of the algorithms . In Sec . 4 we compare the performance of the message passing algorithms versus SGD-based approaches in different learning settings . 2 RELATED WORKS . The literature on message passing algorithms is extensive ; we refer to Mézard & Montanari ( 2009 ) and Zdeborová & Krzakala ( 2016 ) for a general overview . More related to our work , multilayer message-passing algorithms have been developed in inference contexts ( Manoel et al. , 2017 ; Fletcher et al. , 2018 ) , where they have been shown to produce exact marginals under certain statistical assumptions on ( unlearned ) weight matrices . The properties of message-passing for learning shallow neural networks have been extensively studied ( see Baldassi et al . ( 2020 ) and references therein ) . Barbier et al . ( 2019 ) rigorously show that message passing algorithms in generalized linear models perform asymptotically exact inference under some statistical assumptions . Dictionary learning and matrix factorization are harder problems closely related to deep network learning problems , in particular to the modelling of a single intermediate layer .
They have been approached using message passing in Kabashima et al . ( 2016 ) and Parker et al . ( 2014 ) , although the resulting predictions are found to be asymptotically inexact ( Maillard et al. , 2021 ) . The same problem is faced by the message passing algorithm recently proposed for a multi-layer matrix factorization scenario ( Zou et al. , 2021 ) . Unfortunately , our framework does not yield asymptotically exact predictions either . Nonetheless , it gives a message passing heuristic that for the first time is able to train deep neural networks on natural datasets , and it therefore sets a reference for the algorithmic applications of this research line . A few papers attribute the success of SGD to the geometrical structure ( smoothness and flatness ) of the loss landscape in neural networks ( Baldassi et al. , 2015 ; Chaudhari et al. , 2017 ; Garipov et al. , 2018 ; Li et al. , 2018 ; Pittorino et al. , 2021 ; Feng & Tu , 2021 ) . These considerations do not depend on the particular form of the SGD dynamics and should extend also to other types of algorithms , although SGD is by far the most popular choice among NN practitioners due to its simplicity , flexibility , speed , and generalization performance . While our work focuses on message passing schemes , some of the ideas presented here , such as the PasP rule , can be combined with algorithms for Bayesian neural networks ' training ( Hernández-Lobato & Adams , 2015 ; Wu et al. , 2018 ) . Recent work extends BP by combining it with graph neural networks ( Kuck et al. , 2020 ; Satorras & Welling , 2021 ) . Finally , some work in computational neuroscience shows similarities to our approach ( Rao , 2007 ) . 3 LEARNING BY MESSAGE PASSING . 3.1 POSTERIOR-AS-PRIOR UPDATES . We consider a multi-layer perceptron with $L$ hidden neuron layers , having weight and bias parameters $W = \{W^{\ell}, b^{\ell}\}_{\ell=0}^{L}$ .
We allow for stochastic activations $P_\ell(x^{\ell+1} \mid z^\ell)$ , where $z^\ell$ is the neuron 's preactivation vector for layer $\ell$ , and $P_\ell$ is assumed to be factorized over the neurons . If no stochasticity is present , $P_\ell$ just encodes an element-wise activation function . The probability of output $y$ given an input $x$ is then given by $P(y \mid x, W) = \int dx^{1:L} \prod_{\ell=0}^{L} P_{\ell+1}(x^{\ell+1} \mid W^\ell x^\ell + b^\ell)$ , ( 1 ) where for convenience we defined $x^0 = x$ and $x^{L+1} = y$ . In a Bayesian framework , given a training set $D = \{(x_n, y_n)\}_n$ and a prior distribution over the weights $q_\theta(W)$ in some parametric family , the posterior distribution is given by $P(W \mid D, \theta) \propto \prod_n P(y_n \mid x_n, W)\, q_\theta(W)$ . ( 2 ) Here $\propto$ denotes equality up to a normalization factor . Using the posterior one can compute the Bayesian prediction for a new data-point $x$ through the average $P(y \mid x, D, \theta) = \int dW\, P(y \mid x, W)\, P(W \mid D, \theta)$ . Unfortunately , the posterior is generically intractable due to the hard-to-compute normalization factor . On the other hand , we are mainly interested in training a distribution that covers wide minima of the loss landscape that generalize well ( Baldassi et al. , 2016a ) and in recovering pointwise estimators within these regions . The Bayesian modeling becomes an auxiliary tool to set the stage for the message passing algorithms seeking flat minima . We also need a formalism that allows for mini-batch training to speed up the computation and deal with large datasets . Therefore , we devise an update scheme that we call Posterior-as-Prior ( PasP ) , where we evolve the parameters $\theta_t$ of a distribution $q_{\theta_t}(W)$ computed as an approximate mini-batch posterior , in such a way that the outcome of the previous iteration becomes the prior in the following step . In the PasP scheme , $\theta_t$ retains the memory of past observations .
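The Bayesian average above can be approximated by Monte Carlo once a factorized posterior over ±1 weights is available: sample weight configurations from the magnetizations and average the network outputs. Below is a minimal sketch for a two-layer sign-activation network; shapes, names, and the sampling scheme are our illustrative assumptions.

```python
import numpy as np

def sample_weights(m, rng):
    # factorized posterior over ±1 weights: P(w = +1) = (1 + m) / 2
    return np.where(rng.random(m.shape) < (1 + m) / 2, 1.0, -1.0)

def forward(x, W1, W2):
    # two-layer network with sign activations and ±1 weights
    return np.sign(np.sign(x @ W1) @ W2)

def bayes_predict(x, m1, m2, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    preds = [forward(x, sample_weights(m1, rng), sample_weights(m2, rng))
             for _ in range(n_samples)]
    return np.sign(np.mean(preds, axis=0))           # majority vote over samples

x = np.array([[1.0, 2.0, 3.0]])
m1, m2 = 0.999 * np.ones((3, 4)), 0.999 * np.ones((4, 1))
bayes_predict(x, m1, m2)   # strongly magnetized posterior gives a confident +1
```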
We also add an exponential factor $\rho$ , which we typically set close to 1 , tuning the forgetting rate and playing a role similar to the learning rate in SGD . Given a mini-batch $(X_t, y_t)$ sampled from the training set at time $t$ and a scalar $\rho > 0$ , the PasP update reads $q_{\theta_{t+1}}(W) \approx \left[ P(W \mid y_t, X_t, \theta_t) \right]^{\rho}$ , ( 3 ) where $\approx$ denotes approximate equality and we do not account for the normalization factor . A first approximation may be needed in the computation of the posterior , a second to project the approximate posterior onto the distribution manifold spanned by $\theta$ ( Minka , 2001 ) . In practice , we will consider factorized approximate posteriors in an exponential family and priors $q_\theta$ in the same family , although Eq . 3 generically allows for more refined approximations . Notice that setting $\rho = 1$ , the batch size to 1 , and taking a single pass over the dataset , we recover the Assumed Density Filtering algorithm ( Minka , 2001 ) . For large enough $\rho$ ( including $\rho = 1$ ) , the iterations of $q_{\theta_t}$ will concentrate on a pointwise estimator . This mechanism mimics the reinforcement heuristic commonly used to turn Belief Propagation into a solver for constraint satisfaction problems ( Braunstein & Zecchina , 2006 ) and is related to flat-minima discovery ( see focusing-BP in Baldassi et al . ( 2016a ) ) . A different prior updating mechanism , which can be understood as empirical Bayes , has been used in Baldassi et al . ( 2016b ) . | This manuscript provides an interesting attempt at alternative training algorithms for deep neural networks, based on (approximate) message-passing algorithms derived from the well-known belief propagation (BP) algorithm. In particular, the binary neural network is considered and four algorithms (BP and three variants of BP, i.e., BPI, MF, AMP) are proposed within a unified Posterior-As-Prior Update framework.
Experiments are conducted on standard supervised classification tasks and continual learning settings, which show performance comparable to standard SGD-based methods. ========================== After rebuttal: ========================= I have read the authors' feedback (many thanks for the detailed point-by-point responses) and other reviewers' comments and modified the score accordingly. Overall, the proposed scheme is interesting, though strictly speaking the results are not clearly advantageous compared to traditional ones (at least based on the current results), and some of the comparisons seem not entirely reasonable/fair. | SP:a736d2fa98e58e22b69e55daf8b678d1583cc7e8 |
Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization | 1 INTRODUCTION . Deep neural networks have achieved dramatic progress in the past decade . This dramatic progress largely hinged on improvements in terms of optimization techniques . One of the most prominent recent optimization techniques is Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) . BN is an operation that can be introduced in between layers to normalize the output of each layer and it has been shown to be extremely effective in stabilizing and accelerating training of deep neural networks . Hence , it became standard in numerous state-of-the-art architectures , e.g. , ResNets ( He et al. , 2016 ) . Despite its empirical success , it is still theoretically elusive why BN is extremely effective for training deep neural networks . Therefore , we investigate the mechanisms behind the success of BN through convex duality . 1.1 RELATED WORK . Batch Normalization : One line of research has focused particularly on alternatives to BN , such as Layer Normalization ( Ba et al. , 2016 ) , Instance Normalization ( Ulyanov et al. , 2016 ) , Weight Normalization ( Salimans & Kingma , 2016 ) , and Group Normalization ( Wu & He , 2018 ) . Although these techniques achieved performance competitive with BN , they do not provide any theoretical insight about its empirical success . Another line of research studied the effects of BN on neural network training and identified several benefits . For example , Im et al . ( 2016 ) showed that training deep networks with BN reduces dependence on the parameter initialization . Wei et al . ( 2019 ) analyzed BN via mean-field theory to quantify its impact on the geometry of the optimization landscape . They reported that BN flattens the optimization landscape so that it enables the use of larger learning rates . In addition , Bjorck et al . ( 2018 ) ; Santurkar et al . ( 2018 ) ; Arora et al . 
( 2018 ) showed that networks trained with BN achieve faster convergence and generalize better . Furthermore , Daneshmand et al . ( 2020 ) proved that BN avoids rank collapse so that gradient-based algorithms , e.g. , Stochastic Gradient Descent ( SGD ) , are able to effectively train deep networks . Even though these studies are important to understand the benefits of BN , they fail to provide a theoretically complete characterization of training deep networks with BN . Convex Neural Networks : Recently , a series of papers ( Pilanci & Ergen , 2020 ; Ergen & Pilanci , 2021 ; Sahiner et al. , 2021a ; b ) studied ReLU networks through the lens of convex optimization theory . Particularly , Pilanci & Ergen ( 2020 ) introduced exact convex representations for two-layer ReLU networks , which can be trained in polynomial time via standard convex solvers . However , this work is restricted to two-layer fully connected networks with scalar outputs . Later on , Ergen & Pilanci ( 2021 ) first extended this approach to two-layer scalar-output Convolutional Neural Networks ( CNNs ) with average and max pooling and provided further improvements in the training complexity . These results were extended to two-layer fully convolutional networks and two-layer networks with vector outputs ( Sahiner et al. , 2021a ; b ) . However , these convex approaches are restricted to two-layer ReLU networks without BN and thus do not reflect the exact training framework in practice , i.e. , regularized deep ReLU networks with BN . Unlike these prior works , we not only analyze training problems with batch normalization but also extend the analysis to deeper architectures in Section 4 . Furthermore , we prove that the optimal layer weights can be obtained as closed-form solutions in the high-dimensional and/or overparameterized regimes ( see Theorems 2.1 , 2.4 , 4.1 , 4.2 , G.2 ) . 1.2 OUR CONTRIBUTIONS .
• We introduce an exact convex framework to explicitly characterize optimal solutions to ReLU network training problems with weight-decay regularization and BN . Thus , we obtain closed-form solutions for the optimal layer weights in the high-dimensional and overparameterized regime . • We prove that regularized ReLU network training problems with BN can be equivalently stated as a finite-dimensional convex problem . As a corollary , we also show that the equivalent convex problems involve whitened data matrices , unlike the original non-convex training problem . Hence , using convex optimization , we reveal an implicit whitening effect introduced by BN . • We demonstrate that GD applied to BN networks provides an implicit regularization effect by learning high singular value directions of the training data more aggressively , whereas this regularization is absent for GD applied to the equivalent whitened data formulation . We propose techniques to explicitly regularize BN networks to capture this implicit regularization effect . • Unlike previous studies , our derivations extend to deep ReLU networks with BN , CNNs , BN after ReLU ( presented in Appendix G ) , vector output networks , and arbitrary convex loss functions . 1.3 PRELIMINARIES . Notation : We denote matrices and vectors as uppercase and lowercase bold letters , respectively , where a subscript indicates a certain element or column . We use $\mathbf{0}$ ( or $\mathbf{1}$ ) to denote a vector or matrix of zeros ( or ones ) , where the sizes are appropriately chosen depending on the context . We also use $I_n$ to denote the identity matrix of size $n$ . To represent Euclidean , Frobenius , and nuclear norms , we use $\| \cdot \|_2$ , $\| \cdot \|_F$ , and $\| \cdot \|_*$ , respectively . Lastly , we denote the elementwise 0-1 valued indicator function and the ReLU activation as $\mathbb{1}[x \ge 0]$ and $(x)_+ = \max\{x, 0\}$ , respectively . We consider an $L$-layer ReLU network with layer weights $W^{(l)} \in \mathbb{R}^{m_{l-1} \times m_l}$ , where $m_0 = d$ and $m_L = C$ are the input and output dimensions , respectively .
Given a training data matrix $X \in \mathbb{R}^{n \times d}$ and a label matrix $Y \in \mathbb{R}^{n \times C}$ , we particularly focus on the following regularized training problem $\min_{\theta \in \Theta} \mathcal{L}(f_{\theta,L}(X), Y) + \beta \mathcal{R}(\theta)$ , ( 1 ) where we compactly represent the parameters as $\theta := \{W^{(l)}, \gamma^{(l)}, \alpha^{(l)}\}_{l=1}^{L}$ and the corresponding parameter space as $\Theta := \{\{W^{(l)}, \gamma^{(l)}, \alpha^{(l)}\}_{l=1}^{L} : W^{(l)} \in \mathbb{R}^{m_{l-1} \times m_l}, \gamma^{(l)} \in \mathbb{R}^{m_l}, \alpha^{(l)} \in \mathbb{R}^{m_l}, \forall l \in [L]\}$ . We note that $\gamma^{(l)}$ and $\alpha^{(l)}$ are the parameters of the BN operator , for which we discuss the details below . In addition , $\mathcal{L}(\cdot, \cdot)$ is an arbitrary convex loss function , including squared , hinge , and cross entropy loss , and $\mathcal{R}(\cdot)$ is the regularization function for the layer weights with the tuning parameter $\beta > 0$ . We also compactly define the network output as $f_{\theta,L}(X) := A^{(L-1)} W^{(L)}$ , where $A^{(l)} := (\mathrm{BN}_{\gamma,\alpha}(A^{(l-1)} W^{(l)}))_+$ ; we denote the $l$th layer activations as $A^{(l)} \in \mathbb{R}^{n \times m_l}$ , and $A^{(0)} = X$ . Here , $\mathrm{BN}_{\gamma,\alpha}(\cdot)$ represents the BN operation introduced in Ioffe & Szegedy ( 2015 ) and applies to matrices columnwise . [ Footnotes : 1 Presented in Appendix G. 2 All the proofs and some extensions are presented in the Appendix . ] Remark 1.1 . Note that above we use BN before ReLU activations , which is common practice and consistent with the way introduced in Ioffe & Szegedy ( 2015 ) . However , BN can be placed after ReLU as well , e.g. , Chen et al . ( 2019 ) , and thus in Section G of the Appendix , we will also consider architectures where BN layers are placed after ReLU . For a layer with weight matrix $W^{(l)} \in \mathbb{R}^{m_{l-1} \times m_l}$ and an arbitrary batch of activations denoted as $A^{(l-1)}_b \in \mathbb{R}^{s \times m_{l-1}}$ , BN applies to each column $j$ independently as follows : $\mathrm{BN}_{\gamma,\alpha}(A^{(l-1)}_b w^{(l)}_j) := \frac{(I_s - \frac{1}{s}\mathbf{1}\mathbf{1}^\top) A^{(l-1)}_b w^{(l)}_j}{\|(I_s - \frac{1}{s}\mathbf{1}\mathbf{1}^\top) A^{(l-1)}_b w^{(l)}_j\|_2}\, \gamma^{(l)}_j + \alpha^{(l)}_j \frac{\mathbf{1}}{\sqrt{n}}$ , ( 2 ) where $\gamma^{(l)}$ and $\alpha^{(l)}$ are trainable parameters that scale and shift the normalized value .
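As a concrete check of the columnwise BN operation in Eq. (2), the following sketch normalizes a single pre-activation column: center it, divide by the Euclidean norm of the centered vector (rather than by the standard deviation), then scale and shift. Function and variable names are our own illustration.

```python
import numpy as np

def bn_column(a, gamma, alpha):
    """Eq. (2)-style normalization of one pre-activation column a."""
    centered = a - a.mean()                          # (I - (1/s) 1 1^T) a
    normalized = centered / np.linalg.norm(centered)
    return normalized * gamma + alpha / np.sqrt(len(a))

out = bn_column(np.array([1.0, 2.0, 3.0, 4.0]), gamma=1.0, alpha=0.0)
# with gamma = 1 and alpha = 0 the output is zero-mean with unit L2 norm
```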
In this work , we focus on the full-batch case , i.e. , $A^{(l-1)}_b = A^{(l-1)} \in \mathbb{R}^{n \times m_{l-1}}$ . This corresponds to training the network with GD as opposed to mini-batch SGD . We note that our empirical findings with GD indicate identical if not better performance compared to the mini-batch case , which is also consistent with the previous studies Lian & Liu ( 2019 ) ; Summers & Dinneen ( 2020 ) . Throughout the paper , we consider a regression framework with squared loss and standard weight-decay regularization . Extensions to general convex loss functions are presented in Appendix B.1 . Moreover , below , we first focus on scalar outputs , i.e. , $C = 1$ , and then extend the analysis to vector outputs . 1.4 OVERVIEW OF OUR RESULTS . Here , we provide an overview of our main results . To simplify the notation , we consider $L$-layer ReLU networks with scalar outputs , i.e. , $m_L = C = 1$ , thus the label vector is $y \in \mathbb{R}^n$ , and extend the analysis to vector output networks with the label matrix $Y \in \mathbb{R}^{n \times C}$ in the next sections . The regularized training problem for an $L$-layer network with scalar output and BN is given by $p^*_L := \min_{\theta \in \Theta} \frac{1}{2}\|f_{\theta,L}(X) - y\|_2^2 + \frac{\beta}{2}\sum_{l=1}^{L}\left(\|\gamma^{(l)}\|_2^2 + \|\alpha^{(l)}\|_2^2 + \|W^{(l)}\|_F^2\right)$ , ( 3 ) where we use $\gamma^{(L)} = \alpha^{(L)} = 0$ as dummy variables for notational simplicity . Lemma 1.1 . The problem in ( 3 ) is equivalent to the following optimization problem $\min_{\theta \in \Theta_s} \frac{1}{2}\|f_{\theta,L}(X) - y\|_2^2 + \beta\|w^{(L)}\|_1$ , ( 4 ) where $\Theta_s := \{\theta \in \Theta : (\gamma^{(L-1)}_j)^2 + (\alpha^{(L-1)}_j)^2 = 1 , \forall j \in [m_{L-1}]\}$ . Using the equivalence in Lemma 1.1 , we now take the dual of ( 4 ) with respect to the output layer weights $w^{(L)}$ to obtain $p^*_L \ge d^*_L := \max_v -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 \ \text{s.t.} \ \max_{\theta \in \Theta_s} \left| v^\top \left(\mathrm{BN}_{\gamma,\alpha}(A^{(L-2)} w^{(L-1)})\right)_+ \right| \le \beta$ . ( 5 ) Since the original formulation in ( 3 ) is a non-convex optimization problem , any solution $v$ in the dual domain yields a lower bound for the primal problem , i.e. , $p^*_L \ge d^*_L$ .
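The weak-duality statement above can be checked numerically for the output layer once the last-layer features are frozen: the problem in the output weights is a lasso, and any dual-feasible point lower-bounds the primal objective at any candidate weight vector. The snippet below is our own toy illustration with random stand-in features, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))   # stand-in for fixed last-layer features
y = rng.standard_normal(20)
beta = 0.5

def primal(w):
    # primal objective: 0.5 ||A w - y||_2^2 + beta ||w||_1
    return 0.5 * np.sum((A @ w - y) ** 2) + beta * np.abs(w).sum()

def dual_value(v):
    # dual objective, valid on the feasible set ||A^T v||_inf <= beta
    return -0.5 * np.sum((v - y) ** 2) + 0.5 * np.sum(y ** 2)

# scale y into the feasible set to get one feasible dual point
v = y * (beta / max(np.max(np.abs(A.T @ y)), beta))
w = rng.standard_normal(5)
assert np.max(np.abs(A.T @ v)) <= beta + 1e-9       # v is dual feasible
assert dual_value(v) <= primal(w)                   # weak duality holds
```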
In this paper , we first show that strong duality holds in this case , i.e. , $p^*_L = d^*_L$ , and then derive an exact equivalent convex formulation for the non-convex problem ( 3 ) . Furthermore , we even obtain closed-form solutions for the layer weights in some cases , so that there is no need to train a network in an end-to-end manner . | The claim of this paper is that training neural networks with batch normalization can be cast as a convex program solvable in polynomial time. The convex reduction reveals an implicit regularization of batch normalization. Taking inspiration from the convex program and the implicit regularization, the authors improve BN. | SP:b392cdc4ce546566457a48e95bfbaea6cad5b44b |
Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization | 1 INTRODUCTION . Deep neural networks have achieved dramatic progress in the past decade . This dramatic progress largely hinged on improvements in terms of optimization techniques . One of the most prominent recent optimization techniques is Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) . BN is an operation that can be introduced in between layers to normalize the output of each layer and it has been shown to be extremely effective in stabilizing and accelerating training of deep neural networks . Hence , it became standard in numerous state-of-the-art architectures , e.g. , ResNets ( He et al. , 2016 ) . Despite its empirical success , it is still theoretically elusive why BN is extremely effective for training deep neural networks . Therefore , we investigate the mechanisms behind the success of BN through convex duality . 1.1 RELATED WORK . Batch Normalization : One line of research has focused particularly on alternatives to BN , such as Layer Normalization ( Ba et al. , 2016 ) , Instance Normalization ( Ulyanov et al. , 2016 ) , Weight Normalization ( Salimans & Kingma , 2016 ) , and Group Normalization ( Wu & He , 2018 ) . Although these techniques achieved performance competitive with BN , they do not provide any theoretical insight about its empirical success . Another line of research studied the effects of BN on neural network training and identified several benefits . For example , Im et al . ( 2016 ) showed that training deep networks with BN reduces dependence on the parameter initialization . Wei et al . ( 2019 ) analyzed BN via mean-field theory to quantify its impact on the geometry of the optimization landscape . They reported that BN flattens the optimization landscape so that it enables the use of larger learning rates . In addition , Bjorck et al . ( 2018 ) ; Santurkar et al . ( 2018 ) ; Arora et al . 
( 2018 ) showed that networks trained with BN achieve faster convergence and generalize better . Furthermore , Daneshmand et al . ( 2020 ) proved that BN avoids rank collapse so that gradient-based algorithms , e.g. , Stochastic Gradient Descent ( SGD ) , are able to effectively train deep networks . Even though these studies are important to understand the benefits of BN , they fail to provide a theoretically complete characterization of training deep networks with BN . Convex Neural Networks : Recently , a series of papers ( Pilanci & Ergen , 2020 ; Ergen & Pilanci , 2021 ; Sahiner et al. , 2021a ; b ) studied ReLU networks through the lens of convex optimization theory . Particularly , Pilanci & Ergen ( 2020 ) introduced exact convex representations for two-layer ReLU networks , which can be trained in polynomial time via standard convex solvers . However , this work is restricted to two-layer fully connected networks with scalar outputs . Later on , Ergen & Pilanci ( 2021 ) first extended this approach to two-layer scalar-output Convolutional Neural Networks ( CNNs ) with average and max pooling and provided further improvements in the training complexity . These results were extended to two-layer fully convolutional networks and two-layer networks with vector outputs ( Sahiner et al. , 2021a ; b ) . However , these convex approaches are restricted to two-layer ReLU networks without BN and thus do not reflect the exact training framework in practice , i.e. , regularized deep ReLU networks with BN . Unlike these prior works , we not only analyze training problems with batch normalization but also extend the analysis to deeper architectures in Section 4 . Furthermore , we prove that the optimal layer weights can be obtained as closed-form solutions in the high-dimensional and/or overparameterized regimes ( see Theorems 2.1 , 2.4 , 4.1 , 4.2 , G.2 ) . 1.2 OUR CONTRIBUTIONS .
• We introduce an exact convex framework to explicitly characterize optimal solutions to ReLU network training problems with weight-decay regularization and BN . Thus , we obtain closed-form solutions for the optimal layer weights in the high-dimensional and overparameterized regime . • We prove that regularized ReLU network training problems with BN can be equivalently stated as a finite-dimensional convex problem . As a corollary , we also show that the equivalent convex problems involve whitened data matrices , unlike the original non-convex training problem . Hence , using convex optimization , we reveal an implicit whitening effect introduced by BN . • We demonstrate that GD applied to BN networks provides an implicit regularization effect by learning high singular value directions of the training data more aggressively , whereas this regularization is absent for GD applied to the equivalent whitened data formulation . We propose techniques to explicitly regularize BN networks to capture this implicit regularization effect . • Unlike previous studies , our derivations extend to deep ReLU networks with BN , CNNs , BN after ReLU ( presented in Appendix G ) , vector output networks , and arbitrary convex loss functions . 1.3 PRELIMINARIES . Notation : We denote matrices and vectors as uppercase and lowercase bold letters , respectively , where a subscript indicates a certain element or column . We use $\mathbf{0}$ ( or $\mathbf{1}$ ) to denote a vector or matrix of zeros ( or ones ) , where the sizes are appropriately chosen depending on the context . We also use $I_n$ to denote the identity matrix of size $n$ . To represent Euclidean , Frobenius , and nuclear norms , we use $\| \cdot \|_2$ , $\| \cdot \|_F$ , and $\| \cdot \|_*$ , respectively . Lastly , we denote the elementwise 0-1 valued indicator function and the ReLU activation as $\mathbb{1}[x \ge 0]$ and $(x)_+ = \max\{x, 0\}$ , respectively . We consider an $L$-layer ReLU network with layer weights $W^{(l)} \in \mathbb{R}^{m_{l-1} \times m_l}$ , where $m_0 = d$ and $m_L = C$ are the input and output dimensions , respectively .
Given a training data matrix $X \in \mathbb{R}^{n \times d}$ and a label matrix $Y \in \mathbb{R}^{n \times C}$ , we particularly focus on the following regularized training problem $\min_{\theta \in \Theta} \mathcal{L}(f_{\theta,L}(X), Y) + \beta \mathcal{R}(\theta)$ , ( 1 ) where we compactly represent the parameters as $\theta := \{W^{(l)}, \gamma^{(l)}, \alpha^{(l)}\}_{l=1}^{L}$ and the corresponding parameter space as $\Theta := \{\{W^{(l)}, \gamma^{(l)}, \alpha^{(l)}\}_{l=1}^{L} : W^{(l)} \in \mathbb{R}^{m_{l-1} \times m_l}, \gamma^{(l)} \in \mathbb{R}^{m_l}, \alpha^{(l)} \in \mathbb{R}^{m_l}, \forall l \in [L]\}$ . We note that $\gamma^{(l)}$ and $\alpha^{(l)}$ are the parameters of the BN operator , for which we discuss the details below . In addition , $\mathcal{L}(\cdot, \cdot)$ is an arbitrary convex loss function , including squared , hinge , and cross entropy loss , and $\mathcal{R}(\cdot)$ is the regularization function for the layer weights with the tuning parameter $\beta > 0$ . We also compactly define the network output as $f_{\theta,L}(X) := A^{(L-1)} W^{(L)}$ , where $A^{(l)} := (\mathrm{BN}_{\gamma,\alpha}(A^{(l-1)} W^{(l)}))_+$ ; we denote the $l$th layer activations as $A^{(l)} \in \mathbb{R}^{n \times m_l}$ , and $A^{(0)} = X$ . Here , $\mathrm{BN}_{\gamma,\alpha}(\cdot)$ represents the BN operation introduced in Ioffe & Szegedy ( 2015 ) and applies to matrices columnwise . [ Footnotes : 1 Presented in Appendix G. 2 All the proofs and some extensions are presented in the Appendix . ] Remark 1.1 . Note that above we use BN before ReLU activations , which is common practice and consistent with the way introduced in Ioffe & Szegedy ( 2015 ) . However , BN can be placed after ReLU as well , e.g. , Chen et al . ( 2019 ) , and thus in Section G of the Appendix , we will also consider architectures where BN layers are placed after ReLU . For a layer with weight matrix $W^{(l)} \in \mathbb{R}^{m_{l-1} \times m_l}$ and an arbitrary batch of activations denoted as $A^{(l-1)}_b \in \mathbb{R}^{s \times m_{l-1}}$ , BN applies to each column $j$ independently as follows : $\mathrm{BN}_{\gamma,\alpha}(A^{(l-1)}_b w^{(l)}_j) := \frac{(I_s - \frac{1}{s}\mathbf{1}\mathbf{1}^\top) A^{(l-1)}_b w^{(l)}_j}{\|(I_s - \frac{1}{s}\mathbf{1}\mathbf{1}^\top) A^{(l-1)}_b w^{(l)}_j\|_2}\, \gamma^{(l)}_j + \alpha^{(l)}_j \frac{\mathbf{1}}{\sqrt{n}}$ , ( 2 ) where $\gamma^{(l)}$ and $\alpha^{(l)}$ are trainable parameters that scale and shift the normalized value .
In this work , we focus on the full-batch case , i.e. , $A^{(l-1)}_b = A^{(l-1)} \in \mathbb{R}^{n \times m_{l-1}}$ . This corresponds to training the network with GD as opposed to mini-batch SGD . We note that our empirical findings with GD indicate identical if not better performance compared to the mini-batch case , which is also consistent with the previous studies Lian & Liu ( 2019 ) ; Summers & Dinneen ( 2020 ) . Throughout the paper , we consider a regression framework with squared loss and standard weight-decay regularization . Extensions to general convex loss functions are presented in Appendix B.1 . Moreover , below , we first focus on scalar outputs , i.e. , $C = 1$ , and then extend the analysis to vector outputs . 1.4 OVERVIEW OF OUR RESULTS . Here , we provide an overview of our main results . To simplify the notation , we consider $L$-layer ReLU networks with scalar outputs , i.e. , $m_L = C = 1$ , thus the label vector is $y \in \mathbb{R}^n$ , and extend the analysis to vector output networks with the label matrix $Y \in \mathbb{R}^{n \times C}$ in the next sections . The regularized training problem for an $L$-layer network with scalar output and BN is given by $p^*_L := \min_{\theta \in \Theta} \frac{1}{2}\|f_{\theta,L}(X) - y\|_2^2 + \frac{\beta}{2}\sum_{l=1}^{L}\left(\|\gamma^{(l)}\|_2^2 + \|\alpha^{(l)}\|_2^2 + \|W^{(l)}\|_F^2\right)$ , ( 3 ) where we use $\gamma^{(L)} = \alpha^{(L)} = 0$ as dummy variables for notational simplicity . Lemma 1.1 . The problem in ( 3 ) is equivalent to the following optimization problem $\min_{\theta \in \Theta_s} \frac{1}{2}\|f_{\theta,L}(X) - y\|_2^2 + \beta\|w^{(L)}\|_1$ , ( 4 ) where $\Theta_s := \{\theta \in \Theta : (\gamma^{(L-1)}_j)^2 + (\alpha^{(L-1)}_j)^2 = 1 , \forall j \in [m_{L-1}]\}$ . Using the equivalence in Lemma 1.1 , we now take the dual of ( 4 ) with respect to the output layer weights $w^{(L)}$ to obtain $p^*_L \ge d^*_L := \max_v -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 \ \text{s.t.} \ \max_{\theta \in \Theta_s} \left| v^\top \left(\mathrm{BN}_{\gamma,\alpha}(A^{(L-2)} w^{(L-1)})\right)_+ \right| \le \beta$ . ( 5 ) Since the original formulation in ( 3 ) is a non-convex optimization problem , any solution $v$ in the dual domain yields a lower bound for the primal problem , i.e. , $p^*_L \ge d^*_L$ .
In this paper, we first show that strong duality holds in this case, i.e., $p_L^* = d_L^*$, and then derive an exact equivalent convex formulation for the non-convex problem (3). Furthermore, we even obtain closed-form solutions for the layer weights in some cases, so that there is no need to train a network in an end-to-end manner. | The paper studies batch normalization in deep neural networks. For a two-layer network with scalar output and batch normalization, a dual of the problem is derived. It is then shown that in the high-dimensional regime, the dual can be further simplified so that an optimal solution can be computed in closed form. In the general case, the concept of hyperplane arrangements can be used to formulate an equivalent finite-dimensional convex program which can be solved in polynomial time. The analysis is then extended to vector-output networks and different architectures, including L-layer neural networks and CNNs. It is then shown that the convex problems tend to fit low-singular-value directions, which leads to poorer generalization. As a remedy, the authors propose a truncated variant of the problem by obtaining a low-rank approximation of the data. In the experimental section, it is shown on the CIFAR-100 dataset that the closed-form solution leads to superior results compared to the solution obtained by gradient descent. Moreover, it is shown that the truncated variant of the approach is needed to achieve good generalization performance. | SP:b392cdc4ce546566457a48e95bfbaea6cad5b44b |
Blur Is an Ensemble: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness | Bayesian neural networks (BNNs) have shown success in the areas of uncertainty estimation and robustness. However, a crucial challenge prohibits their use in practice: Bayesian NNs require a large number of predictions to produce reliable results, leading to a significant increase in computational cost. To alleviate this issue, we propose spatial smoothing, a method that ensembles neighboring feature map points of CNNs. By simply adding a few blur layers to the models, we empirically show that spatial smoothing improves the accuracy, uncertainty estimation, and robustness of BNNs across a whole range of ensemble sizes. In particular, BNNs incorporating spatial smoothing achieve high predictive performance with merely a handful of ensembles. Moreover, this method can also be applied to canonical deterministic neural networks to improve their performance. A number of lines of evidence suggest that the improvements can be attributed to the stabilized feature maps and the flattening of the loss landscape. In addition, we provide a fundamental explanation for prior works, namely global average pooling, pre-activation, and ReLU6, by addressing them as special cases of spatial smoothing. These not only enhance accuracy, but also improve uncertainty estimation and robustness by making the loss landscape smoother in the same manner as spatial smoothing.

1 INTRODUCTION

... Azulay & Weiss, 2019).

Bayesian neural networks (BNNs), such as Monte Carlo (MC) dropout (Gal & Ghahramani, 2016), provide a probabilistic representation of NN weights. They combine a number of models, selected based on weight probability, to make predictions of desired results. Thanks to this feature, BNNs have been widely used in the areas of uncertainty estimation (Kendall & Gal, 2017) and robustness (Ovadia et al., 2019).
They are also promising in other fields like out-of-distribution detection (Malinin & Gales, 2018) and meta-learning (Yoon et al., 2018). Nevertheless, there remains a significant challenge that prohibits their use in practice. BNNs require an ensemble size of up to fifty to achieve high predictive performance, which results in a fiftyfold increase in computational cost (Kendall & Gal, 2017; Loquercio et al., 2020). Therefore, if BNNs can achieve high predictive performance with merely a handful of ensembles, they could be applied to a much wider range of areas.

1.1 PRELIMINARY

We would first like to discuss BNN inference in detail, then move on to Vector-Quantized BNN (VQ-BNN) inference (Park et al., 2021), an efficient approximated BNN inference.

BNN inference. Suppose we have access to the posterior probability of the NN weights, $p(w|\mathcal{D})$, for a training dataset $\mathcal{D}$. The predictive result of a BNN is given by the following predictive distribution:
$$p(y|x_0, \mathcal{D}) = \int p(y|x_0, w)\, p(w|\mathcal{D})\, dw \qquad (1)$$
where $x_0$ is the observed input data vector, $y$ is the output vector, and $p(y|x, w)$ is the probabilistic prediction parameterized by the result of the NN for an input $x$ and weight $w$. In most cases, the integral cannot be solved analytically. Thus, we use the MC estimator to approximate it as follows:
$$p(y|x_0, \mathcal{D}) \simeq \sum_{i=0}^{N-1} \frac{1}{N}\, p(y|x_0, w_i) \qquad (2)$$
where $w_i \sim p(w|\mathcal{D})$ and $N$ is the number of samples. The equation indicates that BNN inference is an ensemble average of NN predictions for one observed data point, as shown on the left of Fig. 2. Using $N$ neural networks in the ensemble requires $N$ times more computational complexity than one NN execution.

Data-complemented BNN inference. To reduce the computational cost of BNN inference, VQ-BNN (Park et al., 2021) executes the NN for an observed data point only once and complements the result with previously calculated predictions for other data.
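The MC estimator in Eq. (2) is a plain average of per-sample predictive distributions. A minimal NumPy sketch follows; the linear "network" and the name `mc_predict` are stand-ins for illustration, not the paper's model.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_predict(logits_fn, x0, weights):
    """Eq. (2): average the predictive distributions p(y|x0, w_i)
    over N weight samples w_i ~ p(w|D)."""
    preds = [softmax(logits_fn(x0, w)) for w in weights]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
# Toy linear model: logits = W @ x; each W plays the role of one posterior sample.
logits_fn = lambda x, W: W @ x
weights = [rng.normal(size=(5, 4)) for _ in range(50)]   # ensemble size N = 50
x0 = rng.normal(size=4)
p = mc_predict(logits_fn, x0, weights)
# p is a valid distribution over 5 classes; the cost is N forward passes.
```

The `for` loop over `weights` is the fiftyfold cost the paper is concerned with: every MC sample is a full forward pass.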
If we have access to previous predictions, the computational performance of VQ-BNN becomes comparable to that of one NN execution. To be specific, VQ-BNN inference is:
$$p(y|x_0, \mathcal{D}) \simeq \sum_{i=0}^{N-1} \pi(x_i|x_0)\, p(y|x_i, w_i) \qquad (3)$$
where $\pi(x_i|x_0)$ is the importance of data $x_i$ with respect to the observed data $x_0$, defined as a similarity between $x_i$ and $x_0$. $p(y|x_0, w_0)$ is the newly calculated prediction, and $\{p(y|x_1, w_1), \cdots\}$ are previously calculated predictions. To accurately infer the results, the previous predictions should consist of predictions for "data similar to the observed data".

Thanks to the temporal consistency of real-world data streams, aggregating predictions for similar data in data streams is straightforward. Since temporally proximate data sequences tend to be similar, we can memorize recent predictions and calculate their average using exponentially decreasing importance. In other words, VQ-BNN inference for data streams is simply temporal smoothing of recent predictions, as shown in the middle of Fig. 2.

VQ-BNN has two limitations, although it may be a promising approach to obtain reliable results in an efficient way. First, it was only applicable to data streams such as video sequences. Applying VQ-BNN to images is challenging because it is impossible to memorize all similar images in advance. Second, Park et al. (2021) used VQ-BNN only in the testing phase, not in the training phase. We find that ensembling predictions for similar data helps in NN training by smoothing the loss landscape.

1.2 MAIN CONTRIBUTION

1 Spatially neighboring points in visual imagery tend to be similar, as do the feature maps of convolutional neural networks (CNNs). By exploiting this spatial consistency, we propose spatial smoothing, a method of ensembling nearby feature maps to improve the ensemble-size efficiency of BNN inference. The right side of Fig.
2 visualizes spatial smoothing aggregating neighboring feature maps.

2 We empirically demonstrate that spatial smoothing improves efficiency in vision tasks, such as image classification on CIFAR (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015), without any additional training parameters. Figure 3 shows that the negative log-likelihood (NLL) of "MC dropout + spatial smoothing" with an ensemble size of two is comparable to that of vanilla MC dropout with an ensemble size of fifty. We also demonstrate that spatial smoothing improves accuracy, uncertainty, and robustness all at the same time. Figure 1 shows that spatial smoothing improves both the accuracy and uncertainty of various deterministic and Bayesian NNs with an ensemble size of fifty on CIFAR-100.

3 Global average pooling (GAP) (Lin et al., 2014; Zhou et al., 2016), pre-activation (He et al., 2016b), and ReLU6 (Krizhevsky & Hinton, 2010; Sandler et al., 2018) have been widely used in vision tasks. However, their motivations are largely justified by experiments. We provide an explanation for these methods by addressing them as special cases of spatial smoothing. Experiments support this claim by showing that the methods improve not only accuracy but also uncertainty and robustness.

2 PROBABILISTIC SPATIAL SMOOTHING

To improve the computational performance of BNN inference, VQ-BNN (Park et al., 2021) executes the NN prediction only once and complements the result with previously calculated predictions. The key to the success of this approach largely depends on the collection of previous predictions for proximate data. Gathering temporally proximate data and their predictions from data streams is easy because recent data and predictions can be aggregated using temporal consistency. On the other hand, gathering time-independent proximate data, e.g.
images, is more difficult because they lack such consistency.

2.1 MODULE ARCHITECTURE FOR ENSEMBLING NEIGHBORING FEATURE MAP POINTS

So, instead of temporal consistency, we use spatial consistency—where neighboring pixels of images are similar—for real-world images. Under this hypothesis, we take the feature maps as predictions and aggregate neighboring feature maps.

Most CNN architectures, including ResNet, consist of multiple stages that begin by increasing the number of channels while reducing the spatial dimension of the input volume. We decompose an entire BNN inference into several steps by rewriting each stage as a recurrence relation as follows:
$$p(z_{i+1}|z_i, \mathcal{D}) = \int p(z_{i+1}|z_i, w_i)\, p(w_i|\mathcal{D})\, dw_i \qquad (4)$$
where $z_i$ is the input volume of the $i$-th stage, and the first and last volumes are the input data and the output. $w_i$ and $p(w_i|\mathcal{D})$ are the NN weights in the $i$-th stage and their probability. $p(z_{i+1}|z_i, w_i)$ is the output probability of $z_{i+1}$ with respect to the input volume $z_i$. To derive the probability from the output feature map, we transform each point of the feature map into a Bernoulli distribution. To do so, a composition of tanh and ReLU, a function from a value in the range $[-\infty, \infty]$ to a probability, is added after each stage. Put shortly, we use neural networks for point-wise binary feature classification.

Since Eq. (4) is a kind of BNN inference, it can be approximated using Eq. (3). In other words, each stage predicts feature map points only once and complements the predictions with similar feature maps. Under spatial consistency, it averages the probabilities of spatially neighboring feature map points, which is well known as the blur operation in image processing. For the sake of implementation simplicity, average pooling with a kernel size of 2 and a stride of 1 is used as a box blur.
This operation ensembles four neighboring probabilities with the same importances.

In summary, as shown in Fig. 4, we propose the following probabilistic spatial smoothing layer:
$$\mathrm{Smooth}(z) = \mathrm{Blur} \circ \mathrm{Prob}(z) \qquad (5)$$
where $\mathrm{Prob}(\cdot)$ is a point-wise function from a feature map to a probability, and $\mathrm{Blur}(\cdot)$ is an importance-weighted average for ensembling spatially neighboring probabilities from feature maps. The Smooth layer is added after each stage. Prob and Blur are further elaborated below.

Prob: Feature map to probability. Prob is a function that transforms a real-valued feature map into a probability. We use the tanh–ReLU composition for this purpose. However, tanh is commonly known to suffer from the vanishing gradient problem. To alleviate this issue, we propose the following temperature-scaled tanh:
$$\tanh_\tau(z) = \tau \tanh(z / \tau) \qquad (6)$$
where $\tau$ is a hyperparameter called temperature. $\tau$ is 1 in the conventional tanh and $\infty$ in the identity function. $\tanh_\tau$ imposes an upper bound on a value, but does not limit the upper bound to 1. An unnormalized probability, ranging from 0 to $\tau$, is allowed as the output of Prob. Then, thanks to the linearity of integration, we obtain an unnormalized predictive distribution accordingly. Taking this into account, we propose the following Prob:
$$\mathrm{Prob}(z) = \mathrm{ReLU} \circ \tanh_\tau(z) \qquad (7)$$
where $\tau > 1$. We empirically determine $\tau$ to minimize NLL, a metric that measures both accuracy and uncertainty. See Fig. B.3 for more detailed ablation studies. In addition, we expect upper-bounded functions, e.g., $\mathrm{ReLU6}(z) = \mathrm{ReLU} \circ \min(z, 6)$ and feature map scaling $z/\tau$ with $\tau > 1$, which is BatchNorm, to be able to replace $\tanh_\tau$ in Prob; and as expected, these alternatives improve uncertainty estimation in addition to accuracy.
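Putting Eqs. (5)–(7) together, the smoothing layer can be sketched in NumPy as a $\tau$-scaled tanh–ReLU followed by a 2×2, stride-1 average pool (the same-importance box blur). The edge padding used to keep the spatial size is our illustrative choice, not necessarily the paper's exact implementation.

```python
import numpy as np

def prob(z, tau=10.0):
    """Eq. (7): Prob(z) = ReLU(tanh_tau(z)), with tanh_tau(z) = tau * tanh(z/tau).
    Maps feature values to unnormalized probabilities in [0, tau]."""
    return np.maximum(tau * np.tanh(z / tau), 0.0)

def box_blur(p):
    """2x2 average pool with stride 1 (edge-padded to preserve spatial size):
    each point becomes the mean of four neighboring probabilities."""
    p = np.pad(p, ((0, 1), (0, 1)), mode="edge")
    return 0.25 * (p[:-1, :-1] + p[1:, :-1] + p[:-1, 1:] + p[1:, 1:])

def smooth(z, tau=10.0):
    """Eq. (5): Smooth(z) = Blur(Prob(z)), applied to one channel of a feature map."""
    return box_blur(prob(z, tau))

rng = np.random.default_rng(0)
z = rng.normal(scale=3.0, size=(8, 8))   # one channel of a toy feature map
out = smooth(z)
# Output stays in [0, tau] and keeps the input's spatial size.
```

In a real network this would run per channel after each stage; here a single 8×8 channel suffices to show the shape- and range-preserving behavior.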
See Appendix C.2 and Appendix C.3 for detailed discussions on activation (ReLU ∘ BatchNorm) and ReLU6 as Prob.

Blur: Averaging neighboring probabilities. Blur averages the probabilities from the feature maps. We primarily use average pooling with a kernel size of 2 and a stride of 1 as the implementation of Blur for the sake of simplicity. Nevertheless, we could generalize Blur by using the following depth-wise convolution, which acts on each input channel separately, with the non-trainable kernel
$$K = \frac{1}{\|k\|_1^2}\, k \otimes k^\top \qquad (8)$$
where $k$ is a 1D matrix, e.g., $k \in \{(1), (1,1), (1,2,1), (1,4,6,4,1)\}$. Different $k$s derive different importances for neighboring feature maps. We empirically show that most Blurs improve the predictive performance and that the optimal $K$ varies by model. For more ablation studies, see Table B.2.

2.2 HOW DOES SPATIAL SMOOTHING HELP OPTIMIZATION?

We present theoretical and empirical aspects to show that spatial smoothing ensembles feature maps.

Feature map variance. BNNs have two types of uncertainty: one is model uncertainty and the other is data uncertainty (Park et al., 2021). These randomnesses increase the variance of the feature maps. To demonstrate that spatial smoothing is an ensemble, we use the following proposition:

Proposition 1. Ensembles reduce the variance of predictions.

We omit the proof since it is straightforward. In our context, the predictions are the output feature maps of a stage. We investigate the model and data uncertainties of the predictions along the NN layers to show that spatial smoothing reduces these randomnesses and ensembles feature maps. Figure 5 shows the model uncertainty and data uncertainty of Bayesian ResNet including MC dropout layers. In this figure, the uncertainty of MC dropout's feature map only accumulates, and almost monotonically increases in every NN layer.
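The kernel construction of Eq. (8) and the variance-reduction claim of Proposition 1 can both be checked numerically. The sketch below assumes i.i.d. unit-variance noise, so it is a demonstration of the proposition's mechanism, not a proof for correlated feature maps.

```python
import numpy as np

def blur_kernel(k):
    """Eq. (8): K = (k k^T) / ||k||_1^2, a non-trainable 2D kernel whose
    entries sum to one, i.e., an importance-weighted average."""
    k = np.asarray(k, dtype=float)
    return np.outer(k, k) / np.sum(np.abs(k)) ** 2

K = blur_kernel([1, 2, 1])   # 3x3 smoothing kernel; entries sum to 1

# Proposition 1 (sketch): averaging i.i.d. unit-variance values with weights K
# yields variance sum(K**2) < 1, so the weighted ensemble reduces variance.
rng = np.random.default_rng(0)
noise = rng.normal(size=(100_000, 3, 3))       # i.i.d. noisy "predictions"
smoothed = (noise * K).sum(axis=(1, 2))        # one blurred value per sample
var_ratio = smoothed.var() / noise.var()
# For k = (1, 2, 1), sum(K**2) = 36/256 ≈ 0.14: a ~7x variance reduction.
```

The box blur of the previous section is the special case $k = (1, 1)$, for which the four weights are equal and the i.i.d. variance drops by a factor of 4.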
In contrast, the uncertainty of the feature map of "MC dropout + spatial smoothing" decreases significantly at the end of each stage, suggesting that the smoothing layers ensemble the feature maps. In other words, they make the feature maps more accurate and provide stabilized input volumes for the next stages. Consistently, the spatial smoothing layer close to the last layer improves performance significantly because it reduces the uncertainty of the predictions the most. See Fig. B.5 for more detailed results. Deterministic NNs do not have model uncertainty, but they do have data uncertainty. Therefore, spatial smoothing improves the performance of deterministic NNs as well as Bayesian NNs.

Fourier analysis. We also analyze spatial smoothing through the lens of the Fourier transform:

Proposition 2. Ensembles filter high-frequency signals.

The proof is provided in Eqs. (16) to (17). Figure 6b shows the 2D Fourier-transformed output feature map at the end of stage 1. This figure reveals that MC dropout almost does not affect low-frequency ($< 0.3\pi$) ranges, and that it adds high-frequency ($\ge 0.3\pi$) noise. Since spatial smoothing is a low-pass filter, it effectively filters high-frequency signals, including the noise caused by MC dropout.

We also find that CNNs are particularly vulnerable to high-frequency noise. To demonstrate this claim, following Shao et al. (2021), we measure accuracy with respect to data with frequency-based random noise $x_{\mathrm{noise}} = x_0 + \mathcal{F}^{-1}(\mathcal{F}(\delta) \odot M_f)$, where $x_0$ is clean data, $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ are the Fourier transform and inverse Fourier transform, $\delta$ is random noise, and $M_f$ is a frequency mask as shown in Fig. 6a. Figure 6c exhibits the results. In sum, high-frequency noise, including that caused by MC dropout, significantly impairs accuracy. Spatial smoothing improves robustness by effectively removing high-frequency noise.

Loss landscape.
Lastly, we show that the randomness hinders NN training as follows:

Proposition 3. Randomness of predictions sharpens the loss landscape, and ensembles flatten it.

The proof is provided in Eqs. (18) to (25). Since a sharp loss function disturbs NN optimization (Keskar et al., 2017; Santurkar et al., 2018; Foret et al., 2020), reducing the uncertainty helps the NN learn strong representations. For example, a training-phase NN ensemble averages out the randomness, and it flattens the loss function. In consequence, an ensemble of BNN outputs in the training phase significantly improves the predictive performance. See Fig. D.4 for numerical results. However, we do not use a training-phase ensemble because it significantly increases the training time. Instead, we use spatial smoothing as a method that ensembles feature maps without sacrificing training time.

We visualize the loss landscapes (Li et al., 2018), i.e., the contours of NLL on the training dataset. Figure 8b shows that the loss landscapes of MC dropout fluctuate and have irregular surfaces due to the randomness. As Li et al. (2018) and Foret et al. (2020) pointed out, this may lead to poor generalization and predictive performance. Spatial smoothing reduces randomness as discussed above, and it aids in optimization by stabilizing and flattening the loss landscape of the BNN, as shown in Fig. 8c.

Furthermore, we use the Hessian to quantitatively represent the sharpness of the loss landscapes. Figure 7 shows the Hessian max eigenvalue spectra of the models in Fig. 8 with a batch size of 128, which reveals that spatial smoothing reduces the magnitude of the Hessian eigenvalues and suppresses outliers. Since large Hessian eigenvalues disturb NN training (Ghorbani et al., 2019), we come to the same conclusion that spatial smoothing helps NN optimization.
See Appendix C.1 for a more detailed description of the configurations of the Hessian max eigenvalue spectra. In addition, from these observations, we propose the conjecture that the flatter the loss landscape, the better the uncertainty estimation, and vice versa.

2.3 REVISITING GLOBAL AVERAGE POOLING

The success of the GAP classifier in image classification is indisputable. The initial motivation, and the most widely accepted explanation for this success, is that GAP prevents overfitting by using far fewer parameters than a multi-layer perceptron (MLP) (Lin et al., 2014). However, we discover that this explanation is poorly supported. We compare GAP with other classifiers, including MLP. Contrary to popular belief, Table 1 suggests that MLP does not overfit the training dataset: MLP underfits or gives comparable performance to GAP on the training dataset. On the test dataset, GAP provides better results compared with MLP. See Table C.1 for more detailed results.

Our argument is that GAP is an extreme case of spatial smoothing. In other words, GAP is successful because it ensembles feature maps and smoothens the loss landscape to help optimization. To support this claim, we visualize the loss landscape of MLP, as shown in Fig. 8a. It is chaotic compared to that of GAP, shown in Fig. 8b. The Hessian shows consistent results, as demonstrated by Fig. 7.

3 EXPERIMENTS

This section presents two experiments. The first experiment is image classification, through which we show that spatial smoothing improves not only the ensemble efficiency but also the accuracy, uncertainty, and robustness of both deterministic NNs and MC dropout. The second experiment is semantic segmentation on data streams, through which we show that spatial smoothing and temporal smoothing (Park et al., 2021) are complementary.
See Appendix A for more detailed configurations.

Three metrics are measured in these experiments: NLL (↓¹), accuracy (↑), and expected calibration error (ECE, ↓) (Guo et al., 2017). NLL represents both accuracy and uncertainty, and it is the most widely used proper scoring rule. ECE measures the discrepancy between accuracy and confidence.

3.1 IMAGE CLASSIFICATION

This section mainly discusses ResNet (He et al., 2016a). Table E.1 also discusses other settings that show the same trend, e.g., VGG (Simonyan & Zisserman, 2015), ResNeXt (Xie et al., 2017), and pre-activation models (He et al., 2016b). Spatial smoothing also improves deep ensembles (Lakshminarayanan et al., 2017), another non-Bayesian probabilistic NN method. See Fig. E.1.

Performance. Fig. 3 and Fig. 9 show the predictive performances of ResNet-18 on CIFAR-100 and ResNet-50 on ImageNet, respectively. The results indicate that spatial smoothing improves both accuracy and uncertainty in many respects. Let us be more specific. First, spatial smoothing improves the efficiency of the ensemble size. In these examples, the NLL of "MC dropout + spatial smoothing" with an ensemble size of 2 is comparable to or even better than that of MC dropout with an ensemble size of 50. In other words, "MC dropout + spatial smoothing" is 25× faster than MC dropout with a similar predictive performance. Second, the predictive performance of "MC dropout + spatial smoothing" is better than that of MC dropout at an ensemble size of 50. Third, spatial smoothing improves the predictive performance of deterministic NNs, as well as MC dropout.

Robustness. To evaluate robustness against data corruption, we measure the predictive performance of ResNet-18 on CIFAR-100-C (Hendrycks & Dietterich, 2019). This dataset consists of data corrupted by 15 different corruption types, each with 5 levels of intensity.
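The two scoring metrics used throughout, NLL and ECE, can be sketched as follows. This is a standard equal-width-bin ECE in the spirit of Guo et al. (2017); the paper's exact binning configuration is not specified in this excerpt, so the 15-bin default here is an assumption.

```python
import numpy as np

def nll(probs, labels):
    """Mean negative log-likelihood of the true class (proper scoring rule)."""
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def ece(probs, labels, n_bins=15):
    """Expected calibration error: population-weighted gap between
    accuracy and mean confidence over equal-width confidence bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            total += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return total

# Toy check: three correct but under-confident predictions.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
labels = np.array([0, 1, 0])
scores = (nll(probs, labels), ece(probs, labels))
```

Both metrics are "lower is better"; a perfectly confident, perfectly correct classifier drives both toward zero.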
We use the mean corruption NLL (mCNLL, ↓), the average of NLL over intensities and corruption types, to summarize the performance on corrupted data in a single value. See Eq. (32) for a more rigorous definition. Figure 10 shows that spatial smoothing improves not only the efficiency but also corruption robustness across the whole range of ensemble sizes. See Fig. E.3 for more details. Spatial smoothing also improves adversarial robustness and perturbation consistency (↑) (Hendrycks & Dietterich, 2019; Zhang, 2019a), i.e., shift-transformation invariance. See Table E.2, Table E.3, and Fig. E.4 for more details.

3.2 SEMANTIC SEGMENTATION

Table 2 summarizes the results of semantic segmentation on the CamVid dataset (Brostow et al., 2008), which consists of real-world 360×480-pixel videos. The table shows that spatial smoothing improves predictive performance, which is consistent with the image classification experiment. Moreover, the results reveal that spatial smoothing and temporal smoothing (Park et al., 2021) are complementary. See Table E.4 for more results.

¹We use arrows to indicate which direction is better.

4 RELATED WORK

Spatial smoothing can be compared with prior works in the following areas.

Anti-aliased CNNs. Local means (Zhang, 2019a; Zou et al., 2020; Vasconcelos et al., 2020; Sinha et al., 2020) were introduced for the shift-invariance of deterministic CNNs in image classification. They were motivated by preventing the aliasing effect of subsampling. Although local filtering can result in a loss of information, Zhang (2019a) experimentally observed an increase in accuracy that was beyond expectation. We provide a fundamental explanation for this phenomenon: local means are a spatial ensemble. An ensemble not only improves accuracy, but also the uncertainty and robustness of deterministic and Bayesian NNs. In Fig.
F.1, we also show that the predictive performance improvement is not due to the anti-aliasing of local means. See Appendix F for more discussion on local means. For a discussion on non-local means (Wang et al., 2018) and self-attention (Dosovitskiy et al., 2021), see Section 5.

Sampling-free BNNs. Sampling-free BNNs (Hernández-Lobato & Adams, 2015; Wang et al., 2016; Wu et al., 2019) predict results based on a single or a couple of NN executions. To this end, it is assumed that the posterior and the feature maps follow Gaussian distributions. However, the discrepancy between reality and this assumption accumulates in every NN layer. Consequently, to the best of our knowledge, most sampling-free BNNs could only be applied to shallow models, such as LeNet, and were tested on small datasets. Postels et al. (2019) applied sampling-free BNNs to SegNet; nonetheless, Park et al. (2021) argued that they do not predict well-calibrated results.

Efficient deep ensembles. Deep ensembles (Lakshminarayanan et al., 2017; Fort et al., 2019) are another probabilistic NN approach for predicting reliable results. BatchEnsemble (Wen et al., 2020; Dusenberry et al., 2020) ensembles over a low-rank subspace to make deep ensembles more efficient. Depth uncertainty networks (Antoran et al., 2020) aggregate feature maps from different depths of a single NN to predict results efficiently. Despite being robust against data corruption, they provide weaker predictive performance compared to deterministic NNs and MC dropout.

5 DISCUSSION

We propose spatial smoothing, a simple yet efficient module to improve BNNs. Three different perspectives, namely feature map variance, Fourier analysis, and loss landscape, suggest that spatial smoothing ensembles feature maps. The limitation of spatial smoothing is that designing its components requires inductive bias.
In other words, the optimal shape of the blur kernel is model-dependent. We believe this problem can be solved by introducing self-attention (Vaswani et al., 2017). Self-attention for computer vision (Dosovitskiy et al., 2021; Touvron et al., 2021; Carion et al., 2020) can be deemed a trainable importance-weighted ensemble of feature maps. The observation that Transformers are more robust than expected (Bhojanapalli et al., 2021; Shao et al., 2021) supports this claim. Therefore, using self-attention to generalize spatial smoothing would be a promising direction for future work, because it not only extends our work but also helps deepen our understanding of self-attention.

REPRODUCIBILITY STATEMENT

To ensure reproducibility, we provide comprehensive resources, such as code and experimental details. The codebase will be released as open source under the Apache License 2.0. See the supplemental material for the code. Appendix A provides the specifications of all models used in this work. The detailed experimental setup, including hyperparameters and ablation studies, is also available in Appendix A and Appendix B. De-facto standard image datasets are used for all experiments, as described in Appendix A.

REFERENCES

Javier Antoran, James Allingham, and José Miguel Hernández-Lobato. Depth uncertainty in neural networks. Advances in Neural Information Processing Systems, 2020.

Aharon Azulay and Yair Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? Journal of Machine Learning Research, 2019.

Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, and Andreas Veit. Understanding robustness of transformers for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.

Gabriel J Brostow, Jamie Shotton, Julien Fauqueur, and Roberto Cipolla.
Segmentation and recognition using structure from motion point clouds. In European Conference on Computer Vision. Springer, 2008.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision. Springer, 2020.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.

Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yian Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, and Dustin Tran. Efficient and scalable bayesian neural nets with rank-1 factors. In International Conference on Machine Learning. PMLR, 2020.

Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. Exploring the landscape of spatial robustness. In International Conference on Machine Learning. PMLR, 2019.

Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2020.

Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep ensembles: A loss landscape perspective. arXiv preprint arXiv:1912.02757, 2019.

Jonathan Frankle, David J Schwab, and Ari S Morcos. Training batchnorm and only batchnorm: On the expressive power of random features in cnns. In International Conference on Learning Representations, 2021.

Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning.
PMLR, 2016.

Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2019.

Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In International Conference on Machine Learning. PMLR, 2019.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. International Conference on Learning Representations, 2015.

Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning. PMLR, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision. Springer, 2016b.

Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019.

José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In International Conference on Machine Learning. PMLR, 2015.

Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, and Daniel Soudry.
Augment your batch: Improving generalization through instance repetition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
A Kendall, V Badrinarayanan, and R Cipolla. Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. In BMVC, 2017.
Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? Advances in Neural Information Processing Systems, 2017.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017.
Alex Krizhevsky and Geoff Hinton. Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 2010.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, 2017.
Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, 2018.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In International Conference on Learning Representations, 2014.
Antonio Loquercio, Mattia Segu, and Davide Scaramuzza. A general framework for uncertainty estimation in deep learning. IEEE Robotics and Automation Letters, 2020.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
A Malinin and M Gales.
Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2018.
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems, 2019.
Namuk Park, Taekyu Lee, and Songkuk Kim. Vector quantized Bayesian neural network inference for data streams. In AAAI Conference on Artificial Intelligence, 2021.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 2019.
Janis Postels, Francesco Ferroni, Huseyin Coskun, Nassir Navab, and Federico Tombari. Sampling-free epistemic uncertainty estimation using approximated variance propagation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, pp. 211–252, 2015.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? Advances in Neural Information Processing Systems, 2018.
Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of visual transformers. arXiv preprint arXiv:2103.15670, 2021.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
Samarth Sinha, Animesh Garg, and Hugo Larochelle. Curriculum by smoothing. Advances in Neural Information Processing Systems, 2020.
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning. PMLR, 2021.
Cristina Vasconcelos, Hugo Larochelle, Vincent Dumoulin, Nicolas Le Roux, and Ross Goroshin. An effective anti-aliasing approach for residual networks. arXiv preprint arXiv:2011.10675, 2020.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
Hao Wang, Xingjian Shi, and Dit-Yan Yeung. Natural-parameter networks: A class of probabilistic neural networks. Advances in Neural Information Processing Systems, 2016.
Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Yeming Wen, Dustin Tran, and Jimmy Ba. BatchEnsemble: An alternative approach to efficient ensemble and lifelong learning.
In International Conference on Learning Representations, 2020.
Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E Turner, José Miguel Hernández-Lobato, and Alexander L Gaunt. Deterministic variational inference for robust Bayesian neural networks. In International Conference on Learning Representations, 2019.
Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Zhewei Yao, Amir Gholami, Kurt Keutzer, and Michael W Mahoney. PyHessian: Neural networks through the lens of the Hessian. In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020.
Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems, 2018.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning. PMLR, 2019.
Richard Zhang. Making convolutional networks shift-invariant again. In International Conference on Machine Learning. PMLR, 2019a.
Richard Zhang. Official meta-review of making convolutional networks shift-invariant again, 2019b. URL https://openreview.net/forum?id=SklVEnR5K7&noteId=rklZnFS-gN.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Xueyan Zou, Fanyi Xiao, Zhiding Yu, and Yong Jae Lee.
Delving deeper into anti-aliasing in convnets. In BMVC, 2020. | This paper proposes a spatial `smooth` layer, consisting of a feature-range-bounding layer `prob` and a `blur` layer applied to the intermediate feature maps of a CNN. `Smooth` improves the accuracy and uncertainty of both deterministic CNNs and a Bayesian NN approximated by MC dropout. The authors justify how `smooth` improves the optimization of neural networks by (1) interpreting the `blur` operation as an ensemble of neighboring features and (2) showing that `smooth` filters out the high-frequency noise introduced by MC dropout and smooths the loss landscape perturbed by MC dropout. The authors empirically evaluate `smooth` on image classification and semantic segmentation tasks, showing that it improves both accuracy and uncertainty. They also connect common CNN components, such as global average pooling and ReLU + BN, as special cases of `smooth`. | SP:58d0e331b89085a01a2c56ec63efb1126f616846 |
Blur Is an Ensemble: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness | Bayesian neural networks (BNNs) have shown success in the areas of uncertainty estimation and robustness. However, a crucial challenge prohibits their use in practice: BNNs require a large number of predictions to produce reliable results, leading to a significant increase in computational cost. To alleviate this issue, we propose spatial smoothing, a method that ensembles neighboring feature map points of CNNs. By simply adding a few blur layers to the models, we empirically show that spatial smoothing improves the accuracy, uncertainty estimation, and robustness of BNNs across a whole range of ensemble sizes. In particular, BNNs incorporating spatial smoothing achieve high predictive performance merely with a handful of ensembles. Moreover, this method can also be applied to canonical deterministic neural networks to improve their performance. Several lines of evidence suggest that the improvements can be attributed to the stabilized feature maps and the flattening of the loss landscape. In addition, we provide a fundamental explanation for prior works, namely global average pooling, pre-activation, and ReLU6, by addressing them as special cases of spatial smoothing. These not only enhance accuracy, but also improve uncertainty estimation and robustness by making the loss landscape smoother in the same manner as spatial smoothing.

1 INTRODUCTION

Bayesian neural networks (BNNs), such as Monte Carlo (MC) dropout (Gal & Ghahramani, 2016), provide a probabilistic representation of NN weights. They combine a number of models selected based on weight probability to make predictions. Thanks to this feature, BNNs have been widely used in the areas of uncertainty estimation (Kendall & Gal, 2017) and robustness (Ovadia et al., 2019).
They are also promising in other fields such as out-of-distribution detection (Malinin & Gales, 2018) and meta-learning (Yoon et al., 2018).

Nevertheless, there remains a significant challenge that prohibits their use in practice: BNNs require an ensemble size of up to fifty to achieve high predictive performance, which results in a fiftyfold increase in computational cost (Kendall & Gal, 2017; Loquercio et al., 2020). Therefore, if BNNs could achieve high predictive performance with merely a handful of ensembles, they could be applied to a much wider range of areas.

1.1 PRELIMINARY

We would first like to discuss BNN inference in detail, then move on to Vector-Quantized BNN (VQ-BNN) inference (Park et al., 2021), an efficient approximated BNN inference.

BNN inference. Suppose we have access to the posterior probability of the NN weights, p(w|D), for a training dataset D. The predictive result of a BNN is given by the following predictive distribution:

p(y|x_0, D) = ∫ p(y|x_0, w) p(w|D) dw   (1)

where x_0 is the observed input data vector, y is the output vector, and p(y|x, w) is the probabilistic prediction parameterized by the result of the NN for an input x and weight w. In most cases, the integral cannot be solved analytically. Thus, we use the MC estimator to approximate it as follows:

p(y|x_0, D) ≈ (1/N) ∑_{i=0}^{N−1} p(y|x_0, w_i)   (2)

where w_i ∼ p(w|D) and N is the number of samples. The equation indicates that BNN inference is an ensemble average of NN predictions for one observed data point, as shown on the left of Fig. 2. Using N neural networks in the ensemble requires N times the computational complexity of one NN execution.

Data-complemented BNN inference. To reduce the computational cost of BNN inference, VQ-BNN (Park et al., 2021) executes the NN for an observed data point only once and complements the result with previously calculated predictions for other data.
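Before moving on to VQ-BNN, the plain MC estimator of Eq. (2) can be sketched numerically. This is a minimal toy illustration: the two-class linear "network" and its Gaussian weight noise are stand-ins for a sampled weight w_i, not the paper's model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, rng):
    # Stand-in for one stochastic forward pass p(y | x, w_i): a fixed
    # 2x2 linear map plus Gaussian noise playing the role of a sampled
    # weight w_i ~ p(w | D).
    W = np.array([[1.0, -1.0], [0.5, 2.0]]) + 0.1 * rng.standard_normal((2, 2))
    return softmax(W @ x)

def bnn_predict(x0, n_samples, rng):
    # Eq. (2): the MC estimate of the predictive distribution is an
    # equal-weight ensemble average over n_samples sampled weights.
    preds = [stochastic_forward(x0, rng) for _ in range(n_samples)]
    return np.mean(preds, axis=0)

x0 = np.array([0.3, 0.7])
p = bnn_predict(x0, n_samples=50, rng=np.random.default_rng(0))
```

Since each forward pass outputs a valid distribution, their equal-weight average is one as well; the cost, however, is fifty forward passes per input, which is exactly the overhead VQ-BNN and spatial smoothing aim to avoid.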
If we have access to previous predictions, the computational performance of VQ-BNN becomes comparable to that of one NN execution. To be specific, VQ-BNN inference is:

p(y|x_0, D) ≈ ∑_{i=0}^{N−1} π(x_i|x_0) p(y|x_i, w_i)   (3)

where π(x_i|x_0) is the importance of data x_i with respect to the observed data x_0, defined as a similarity between x_i and x_0. Here p(y|x_0, w_0) is the newly calculated prediction, and {p(y|x_1, w_1), ...} are previously calculated predictions. To infer results accurately, the previous predictions should consist of predictions for "data similar to the observed data".

Thanks to the temporal consistency of real-world data streams, aggregating predictions for similar data in data streams is straightforward. Since temporally proximate data sequences tend to be similar, we can memorize recent predictions and calculate their average using exponentially decreasing importances. In other words, VQ-BNN inference for data streams is simply temporal smoothing of recent predictions, as shown in the middle of Fig. 2.

VQ-BNN has two limitations, although it may be a promising approach for obtaining reliable results in an efficient way. First, it was only applicable to data streams such as video sequences. Applying VQ-BNN to images is challenging because it is impossible to memorize all similar images in advance. Second, Park et al. (2021) used VQ-BNN only in the testing phase, not in the training phase. We find that ensembling predictions for similar data helps in NN training by smoothing the loss landscape.

1.2 MAIN CONTRIBUTION

1. Spatially neighboring points in visual imagery tend to be similar, as do the feature maps of convolutional neural networks (CNNs). By exploiting this spatial consistency, we propose spatial smoothing as a method of ensembling nearby feature maps to improve the ensemble-size efficiency of BNN inference. The right side of Fig.
2 visualizes spatial smoothing aggregating neighboring feature maps.

2. We empirically demonstrate that spatial smoothing improves this efficiency in vision tasks, such as image classification on CIFAR (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015), without any additional training parameters. Figure 3 shows that the negative log-likelihood (NLL) of "MC dropout + spatial smoothing" with an ensemble size of two is comparable to that of vanilla MC dropout with an ensemble size of fifty. We also demonstrate that spatial smoothing improves accuracy, uncertainty, and robustness all at the same time. Figure 1 shows that spatial smoothing improves both the accuracy and uncertainty of various deterministic and Bayesian NNs with an ensemble size of fifty on CIFAR-100.

3. Global average pooling (GAP) (Lin et al., 2014; Zhou et al., 2016), pre-activation (He et al., 2016b), and ReLU6 (Krizhevsky & Hinton, 2010; Sandler et al., 2018) have been widely used in vision tasks. However, their motivations are largely justified only by experiments. We provide an explanation for these methods by addressing them as special cases of spatial smoothing. Experiments support this claim by showing that the methods improve not only accuracy but also uncertainty and robustness.

2 PROBABILISTIC SPATIAL SMOOTHING

To improve the computational performance of BNN inference, VQ-BNN (Park et al., 2021) executes the NN prediction only once and complements the result with previously calculated predictions. The key to the success of this approach largely depends on the collection of previous predictions for proximate data. Gathering temporally proximate data and their predictions from data streams is easy because recent data and predictions can be aggregated using temporal consistency. On the other hand, gathering time-independent proximate data, e.g.
images, is more difficult because they lack such consistency.

2.1 MODULE ARCHITECTURE FOR ENSEMBLING NEIGHBORING FEATURE MAP POINTS

So instead of temporal consistency, we exploit spatial consistency, i.e., the tendency of neighboring pixels of real-world images to be similar. Under this hypothesis, we take the feature maps as predictions and aggregate neighboring feature maps.

Most CNN architectures, including ResNet, consist of multiple stages, each of which begins by increasing the number of channels while reducing the spatial dimension of the input volume. We decompose an entire BNN inference into several steps by rewriting each stage as a recurrence relation:

p(z_{i+1}|z_i, D) = ∫ p(z_{i+1}|z_i, w_i) p(w_i|D) dw_i   (4)

where z_i is the input volume of the i-th stage, and the first and last volumes are the input data and the output. w_i and p(w_i|D) are the NN weights of the i-th stage and their probability, and p(z_{i+1}|z_i, w_i) is the output probability of z_{i+1} with respect to the input volume z_i. To derive a probability from the output feature map, we transform each point of the feature map into a Bernoulli distribution. To do so, a composition of tanh and ReLU, a function mapping values in the range [−∞, ∞] to a probability, is added after each stage. Put shortly, we use neural networks for point-wise binary feature classification.

Since Eq. (4) is a kind of BNN inference, it can be approximated using Eq. (3). In other words, each stage predicts feature map points only once and complements the predictions with similar feature maps. Under spatial consistency, it averages the probabilities of spatially neighboring feature map points, which is well known as the blur operation in image processing. For the sake of implementation simplicity, average pooling with a kernel size of 2 and a stride of 1 is used as a box blur.
This operation ensembles four neighboring probabilities with the same importance.

In summary, as shown in Fig. 4, we propose the following probabilistic spatial smoothing layer:

Smooth(z) = Blur ∘ Prob(z)   (5)

where Prob(·) is a point-wise function from a feature map to a probability, and Blur(·) is an importance-weighted average that ensembles spatially neighboring probabilities from feature maps. A Smooth layer is added after each stage. Prob and Blur are further elaborated below.

Prob: Feature map to probability. Prob is a function that transforms a real-valued feature map into a probability. We use a tanh–ReLU composition for this purpose. However, tanh is commonly known to suffer from the vanishing gradient problem. To alleviate this issue, we propose the following temperature-scaled tanh:

tanh_τ(z) = τ tanh(z/τ)   (6)

where τ is a hyperparameter called temperature. τ is 1 in the conventional tanh and ∞ in the identity function. tanh_τ imposes an upper bound on a value, but does not limit the upper bound to 1. An unnormalized probability, ranging from 0 to τ, is allowed as the output of Prob. Then, thanks to the linearity of integration, we obtain an unnormalized predictive distribution accordingly. Taking this into account, we propose the following Prob:

Prob(z) = ReLU ∘ tanh_τ(z)   (7)

where τ > 1. We empirically determine τ to minimize NLL, a metric that measures both accuracy and uncertainty. See Fig. B.3 for more detailed ablation studies. In addition, we expect upper-bounded functions, e.g., ReLU6(z) = ReLU ∘ min(z, 6), and feature map scaling z/τ with τ > 1, as in BatchNorm, to be able to replace tanh_τ in Prob; as expected, these alternatives improve uncertainty estimation in addition to accuracy.
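The Smooth layer of Eqs. (5)–(7) can be sketched in a few lines. This is a minimal single-channel sketch, not the paper's implementation: τ = 10 is an arbitrary placeholder rather than a tuned value, and the average pooling uses "valid" boundaries for brevity, so the output shrinks by one pixel per side pair.

```python
import numpy as np

def tanh_tau(z, tau):
    # Eq. (6): temperature-scaled tanh. tau = 1 recovers tanh;
    # tau -> infinity recovers the identity function.
    return tau * np.tanh(z / tau)

def prob(z, tau=10.0):
    # Eq. (7): Prob = ReLU o tanh_tau, a point-wise map from a
    # real-valued feature map to an unnormalized probability in [0, tau].
    return np.maximum(tanh_tau(z, tau), 0.0)

def box_blur(p):
    # Blur: average pooling with kernel size 2 and stride 1. Each output
    # point is an equal-weight ensemble of four neighboring probabilities.
    return 0.25 * (p[:-1, :-1] + p[:-1, 1:] + p[1:, :-1] + p[1:, 1:])

def smooth(z, tau=10.0):
    # Eq. (5): Smooth = Blur o Prob.
    return box_blur(prob(z, tau))

z = np.random.default_rng(0).standard_normal((8, 8))  # one feature-map channel
out = smooth(z)
```

Because Prob bounds the values to [0, τ] and Blur is a convex combination, the output stays in [0, τ]; swapping `box_blur` for a depth-wise convolution with another normalized kernel generalizes the layer in the same way the text describes.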
See Appendix C.2 and Appendix C.3 for detailed discussions of activation (ReLU ∘ BatchNorm) and ReLU6 as Prob.

Blur: Averaging neighboring probabilities. Blur averages the probabilities from feature maps. We primarily use average pooling with a kernel size of 2 and a stride of 1 as the implementation of Blur for the sake of simplicity. Nevertheless, we could generalize Blur by using a depth-wise convolution, which acts on each input channel separately, with the non-trainable kernel

K = (1/‖k‖₁²) k ⊗ k^⊤   (8)

where k is a 1D kernel, e.g., k ∈ {(1), (1, 1), (1, 2, 1), (1, 4, 6, 4, 1)}. Different ks derive different importances for neighboring feature maps. We empirically show that most Blurs improve predictive performance and that the optimal K varies by model. For more ablation studies, see Table B.2.

2.2 HOW DOES SPATIAL SMOOTHING HELP OPTIMIZATION?

We present theoretical and empirical evidence that spatial smoothing ensembles feature maps.

Feature map variance. BNNs have two types of uncertainty: model uncertainty and data uncertainty (Park et al., 2021). These randomnesses increase the variance of the feature maps. To demonstrate that spatial smoothing is an ensemble, we use the following proposition:

Proposition 1. Ensembles reduce the variance of predictions.

We omit the proof since it is straightforward. In our context, predictions are the output feature maps of a stage. We investigate the model and data uncertainties of the predictions along the NN layers to show that spatial smoothing reduces these randomnesses and ensembles feature maps. Figure 5 shows the model uncertainty and data uncertainty of a Bayesian ResNet including MC dropout layers. In this figure, the uncertainty of MC dropout's feature maps only accumulates, increasing almost monotonically in every NN layer.
In contrast, the uncertainty of the feature maps of "MC dropout + spatial smoothing" decreases significantly at the end of each stage, suggesting that the smoothing layers ensemble the feature maps. In other words, they turn the feature maps into more accurate and stabilized input volumes for the next stages. Consistently, a spatial smoothing layer close to the last layer improves performance significantly because it reduces the uncertainty of the predictions the most. See Fig. B.5 for more detailed results. Deterministic NNs have no model uncertainty but do have data uncertainty; therefore, spatial smoothing improves the performance of deterministic NNs as well as Bayesian NNs.

Fourier analysis. We also analyze spatial smoothing through the lens of the Fourier transform:

Proposition 2. Ensembles filter high-frequency signals.

The proof is provided in Eqs. (16) to (17). Figure 6b shows the 2D Fourier-transformed output feature map at the end of stage 1.
Lastly , we show that the randomness hinders NN training as follows:164 Proposition 3 . Randomness of predictions sharpens the loss landscape , and ensembles flatten it.165 The proof is provided in Eqs . ( 18 ) to ( 25 ) . Since a sharp loss function disturbs NN optimization166 ( Keskar et al. , 2017 ; Santurkar et al. , 2018 ; Foret et al. , 2020 ) , reducing the uncertainty helps NN167 learn strong representations . For example , training phase NN ensemble averages out the randomness,168 and it flattens the loss function . In consequence , an ensemble of BNN outputs in training phase169 significantly improves the predictive performance . See Fig . D.4 for numerical results . However , we170 do not use training phase ensemble because it significantly increases the training time . Instead , we171 use spatial smoothing as a method that ensembles feature maps without sacrificing training time.172 We visualizes the loss landscapes ( Li et al. , 2018 ) , the contours of NLL on training dataset . Figure 8b173 shows that the loss landscapes of MC dropout fluctuate and have irregular surfaces due to the174 randomness . As Li et al . ( 2018 ) ; Foret et al . ( 2020 ) pointed out,175 this may lead to poor generalization and predictive performance.176 Spatial smoothing reduces randomness as discussed above , and177 spatial smoothing aids in optimization by stabilizing and flattening178 the loss landscape of BNN as shown in Fig . 8c.179 Furthermore , we use Hessian to quantitatively represent the sharp-180 ness of the loss landscapes . Figure 7 shows the Hessian max eigen-181 value spectra of the models in Fig . 8 with a batch size of 128 , which182 reveals that spatial smoothing reduces the magnitude of Hessian183 eigenvalues and suppresses outliers . Since large Hessian eigenval-184 ues disturb NN training ( Ghorbani et al. , 2019 ) , we come to the185 same conclusion that spatial smoothing helps NN optimization . 
See186 Appendix C.1 for a more detailed description of the configurations187 of the Hessian max eigenvalue spectra . In addition , from these188 observations , we propose the conjecture that the flatter the loss189 landscape , the better the uncertainty estimation , and vice versa.190 2.3 REVISITING GLOBAL AVERAGE POOLING191 The success of GAP classifier in image classification is192 indisputable . The initial motivation and the most widely193 accepted explanation for this success is that GAP prevents194 overfitting by using far fewer parameters than multi-layer195 perceptron ( MLP ) ( Lin et al. , 2014 ) . However , we discover196 that the explanation is poorly supported . We compares197 GAP with other classifiers including MLP . Contrary to198 popular belief , Table 1 suggests that MLP does not overfit199 the training dataset . MLP underfits or gives comparable200 performance to GAP on the training dataset . On the test201 dataset , GAP provides better results compared with MLP . See Table C.1 for more detailed results.202 Our argument is that GAP is an extreme case of spatial smoothing . In other words , GAP is successful203 because it ensembles feature maps and smoothens the loss landscape to help optimization . To support204 this claim , we visualizes the loss landscape of MLP as shown in Fig . 8a . It is chaotic compared to205 that of GAP as shown in Fig . 8b . Hessian shows the consistent results as demonstrated by Fig . 7.206 3 EXPERIMENTS207 This section presents two experiments . The first experiment is image classification through which208 we show that spatial smoothing not only improves the ensemble efficiency , but also the accuracy,209 uncertainty , and robustness of both deterministic NN and MC dropout . The second experiment is210 semantic segmentation on data streams through which we show that spatial smoothing and temporal211 smoothing ( Park et al. , 2021 ) are complementary . 
See Appendix A for more detailed configurations.212 1.00 Three metrics are measured in these experiments : NLL ( ↓1 ) , accuracy ( ↑ ) , and expected calibration213 error ( ECE , ↓ ) ( Guo et al. , 2017 ) . NLL represents both accuracy and uncertainty , and is the most214 widely used as a proper scoring rule . ECE measures discrepancy between accuracy and confidence.215 3.1 IMAGE CLASSIFICATION216 This section mainly discuss ResNet ( He et al. , 2016a ) . Table E.1 also discuss other settings that217 show the same trend : e.g. , VGG ( Simonyan & Zisserman , 2015 ) , ResNeXt ( Xie et al. , 2017 ) ,218 and pre-activation models ( He et al. , 2016a ) . Spatial smoothing also improves deep ensemble219 ( Lakshminarayanan et al. , 2017 ) , another non-Bayesian probabilistic NN method . See Fig . E.1.220 Performance . Fig . 3 and Fig . 9 show the predictive performances of ResNet-18 on CIFAR-100221 and ResNet-50 on ImageNet , respectively . The results indicate that spatial smoothing improves both222 accuracy and uncertainty in many respects . Let us be more specific . First , spatial smoothing improves223 the efficiency of ensemble size . In these examples , the NLL of “ MC dropout + spatial smoothing ” 224 with an ensemble size of 2 is comparable to or even better than that of MC dropout with an ensemble225 size of 50 . In other words , “ MC dropout + spatial smoothing ” is 25× faster than MC dropout with226 a similar predictive performance . Second , the predictive performance of “ MC dropout + spatial227 smoothing ” is better than that of MC dropout , at an ensemble size of 50 . Third , spatial smoothing228 improves the predictive performance of deterministic NN , as well as MC dropout.229 Robustness . To evaluate robustness against data corruption , we230 measure predictive performance of ResNet-18 on CIFAR-100-231 C ( Hendrycks & Dietterich , 2019 ) . This dataset consists of data232 corrupted by 15 different types , each with 5 levels of intensity233 each . 
We use mean corruption NLL ( mCNLL , ↓ ) , the averages234 of NLL over intensities and corruption types , to summarize the235 performance of corrupted data in a single value . See Eq . ( 32 ) for236 a more rigorous definition . Figure 10 shows that spatial smoothing237 not only improves the efficiency but also corruption robustness238 across a whole range of ensemble size . See Fig . E.3 for more239 details . Spatial smoothing also improves adversarial robustness240 and perturbation consistency ( ↑ ) ( Hendrycks & Dietterich , 2019 ; 241 Zhang , 2019a ) , shift-transformation invariance . See Table E.2,242 Table E.3 , and Fig . E.4 for more details.243 3.2 SEMANTIC SEGMENTATION244 Table 2 summarizes the result of semantic segmentation on CamVid dataset ( Brostow et al. , 2008 ) 245 that consists of real-world 360×480 pixels videos . The table shows that spatial smoothing improves246 predictive performance , which is consistent with the image classification experiment . Moreover , the247 result reveals that spatial smoothing and temporal smoothing ( Park et al. , 2021 ) are complementary.248 See Table E.4 for more results.249 1We use arrows to indicate which direction is better . Spatial smoothing can be compared with prior works in the following areas.251 Anti-aliased CNNs . Local means ( Zhang , 2019a ; Zou et al. , 2020 ; Vasconcelos et al. , 2020 ; Sinha252 et al. , 2020 ) were introduced for the shift-invariance of deterministic CNNs in image classification.253 They were motivated to prevent the aliasing effect of subsampling . Although the local filtering can254 result in a loss of information , Zhang ( 2019a ) experimentally observed an increase in accuracy that255 was beyond expectation . We provide a fundamental explanation for this phenomenon : Local means256 are a spatial ensemble . An ensemble not only improves accuracy , but also uncertainty and robustness257 of deterministic and Bayesian NNs . In Fig . 
F.1, we also show that the predictive performance improvement is not due to the anti-aliasing effect of local means. See Appendix F for more discussion of local means. For a discussion of non-local means (Wang et al., 2018) and self-attention (Dosovitskiy et al., 2021), see Section 5. Sampling-free BNNs. Sampling-free BNNs (Hernández-Lobato & Adams, 2015; Wang et al., 2016; Wu et al., 2019) predict results based on a single or a few NN executions. To this end, the posterior and the feature maps are assumed to follow Gaussian distributions. However, the discrepancy between reality and this assumption accumulates in every NN layer. Consequently, to the best of our knowledge, most sampling-free BNNs could only be applied to shallow models, such as LeNet, and were tested on small datasets. Postels et al. (2019) applied sampling-free BNNs to SegNet; nonetheless, Park et al. (2021) argued that they do not predict well-calibrated results. Efficient deep ensembles. Deep ensembles (Lakshminarayanan et al., 2017; Fort et al., 2019) are another probabilistic NN approach for predicting reliable results. BatchEnsemble (Wen et al., 2020; Dusenberry et al., 2020) ensembles over a low-rank subspace to make deep ensembles more efficient. The depth uncertainty network (Antoran et al., 2020) aggregates feature maps from different depths of a single NN to predict results efficiently. Despite being robust against data corruption, it provides weaker predictive performance than deterministic NNs and MC dropout. 5 DISCUSSION We propose spatial smoothing, a simple yet efficient module that improves BNNs. Three different perspectives, namely feature-map variance, Fourier analysis, and loss landscape, suggest that spatial smoothing ensembles feature maps. The limitation of spatial smoothing is that designing its components requires inductive bias.
In other words, the optimal shape of the blur kernel is model-dependent. We believe this problem can be solved by introducing self-attention (Vaswani et al., 2017). Self-attention for computer vision (Dosovitskiy et al., 2021; Touvron et al., 2021; Carion et al., 2020) can be regarded as a trainable importance-weighted ensemble of feature maps. The observation that Transformers are more robust than expected (Bhojanapalli et al., 2021; Shao et al., 2021) supports this claim. Therefore, using self-attention to generalize spatial smoothing would be promising future work, as it not only extends our work but also helps deepen our understanding of self-attention. REPRODUCIBILITY STATEMENT To ensure reproducibility, we provide comprehensive resources, such as code and experimental details. The codebase will be released as open source under the Apache License 2.0. See the supplemental material for the code. Appendix A provides the specifications of all models used in this work. The detailed experimental setup, including hyperparameters and ablation studies, is also available in Appendix A and Appendix B. De-facto standard image datasets are used for all experiments, as described in Appendix A. REFERENCES Javier Antoran, James Allingham, and José Miguel Hernández-Lobato. Depth uncertainty in neural networks. Advances in Neural Information Processing Systems, 2020. Aharon Azulay and Yair Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? Journal of Machine Learning Research, 2019. Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, and Andreas Veit. Understanding robustness of transformers for image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021. Gabriel J. Brostow, Jamie Shotton, Julien Fauqueur, and Roberto Cipolla.
Segmentation and recognition using structure from motion point clouds. In European Conference on Computer Vision. Springer, 2008. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision. Springer, 2020. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yian Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, and Dustin Tran. Efficient and scalable Bayesian neural nets with rank-1 factors. In International Conference on Machine Learning. PMLR, 2020. Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. Exploring the landscape of spatial robustness. In International Conference on Machine Learning. PMLR, 2019. Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2020. Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. Deep ensembles: A loss landscape perspective. arXiv preprint arXiv:1912.02757, 2019. Jonathan Frankle, David J. Schwab, and Ari S. Morcos. Training BatchNorm and only BatchNorm: On the expressive power of random features in CNNs. In International Conference on Learning Representations, 2021. Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning.
PMLR, 2016. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2019. Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via Hessian eigenvalue density. In International Conference on Machine Learning. PMLR, 2019. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. International Conference on Learning Representations, 2015. Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning. PMLR, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision. Springer, 2016b. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019. José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning. PMLR, 2015. Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, and Daniel Soudry.
Augment your batch: Improving generalization through instance repetition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020. A. Kendall, V. Badrinarayanan, and R. Cipolla. Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. In BMVC, 2017. Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? Advances in Neural Information Processing Systems, 2017. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017. Alex Krizhevsky and Geoff Hinton. Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 2010. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, 2017. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, 2018. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In International Conference on Learning Representations, 2014. Antonio Loquercio, Mattia Segu, and Davide Scaramuzza. A general framework for uncertainty estimation in deep learning. IEEE Robotics and Automation Letters, 2020. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. A. Malinin and M. Gales.
Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2018. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems, 2019. Namuk Park, Taekyu Lee, and Songkuk Kim. Vector quantized Bayesian neural network inference for data streams. In AAAI Conference on Artificial Intelligence, 2021. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 2019. Janis Postels, Francesco Ferroni, Huseyin Coskun, Nassir Navab, and Federico Tombari. Sampling-free epistemic uncertainty estimation using approximated variance propagation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, pp. 211–252, 2015. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? Advances in Neural Information Processing Systems, 2018. Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of visual transformers. arXiv preprint arXiv:2103.15670, 2021. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. Samarth Sinha, Animesh Garg, and Hugo Larochelle. Curriculum by smoothing. Advances in Neural Information Processing Systems, 2020. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning. PMLR, 2021. Cristina Vasconcelos, Hugo Larochelle, Vincent Dumoulin, Nicolas Le Roux, and Ross Goroshin. An effective anti-aliasing approach for residual networks. arXiv preprint arXiv:2011.10675, 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017. Hao Wang, Xingjian Shi, and Dit-Yan Yeung. Natural-parameter networks: A class of probabilistic neural networks. Advances in Neural Information Processing Systems, 2016. Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. Yeming Wen, Dustin Tran, and Jimmy Ba. BatchEnsemble: An alternative approach to efficient ensemble and lifelong learning.
In International Conference on Learning Representations, 2020. Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E. Turner, José Miguel Hernández-Lobato, and Alexander L. Gaunt. Deterministic variational inference for robust Bayesian neural networks. In International Conference on Learning Representations, 2019. Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. Zhewei Yao, Amir Gholami, Kurt Keutzer, and Michael W. Mahoney. PyHessian: Neural networks through the lens of the Hessian. In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In Advances in Neural Information Processing Systems, 2018. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning. PMLR, 2019. Richard Zhang. Making convolutional networks shift-invariant again. In International Conference on Machine Learning. PMLR, 2019a. Richard Zhang. Official meta-review of "Making convolutional networks shift-invariant again", 2019b. URL https://openreview.net/forum?id=SklVEnR5K7&noteId=rklZnFS-gN. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. Xueyan Zou, Fanyi Xiao, Zhiding Yu, and Yong Jae Lee.
Delving deeper into anti-aliasing in convnets. In BMVC, 2020. | This work is motivated by the computational cost of using BNNs in practice, where applications may require a large ensemble of BNNs to achieve good performance. The manuscript aims to reduce the computational cost of this ensemble. Its insight is to exploit similarities among spatially neighbouring locations in images, an approach similar to that used in convolutional neural networks. Experimental results show improved efficiency on image tasks, while also considerably reducing the computational cost compared to competitor methods. | SP:58d0e331b89085a01a2c56ec63efb1126f616846 |
Safe Deep RL in 3D Environments using Human Feedback | Agents should avoid unsafe behaviour during both training and deployment. This typically requires a simulator and a procedural specification of unsafe behaviour. Unfortunately, a simulator is not always available, and procedurally specifying constraints can be difficult or impossible for many real-world tasks. A recently introduced technique, ReQueST, aims to solve this problem by learning a neural simulator of the environment from safe human trajectories, then using the learned simulator to efficiently learn a reward model from human feedback. However, it is not yet known whether this approach is feasible in complex 3D environments with feedback obtained from real humans – whether sufficient pixel-based neural simulator quality can be achieved, and whether the human data requirements are viable in terms of both quantity and quality. In this paper we answer this question in the affirmative, using ReQueST to train an agent to perform a 3D first-person object collection task using data entirely from human contractors. We show that the resulting agent exhibits an order-of-magnitude reduction in unsafe behaviour compared to standard reinforcement learning. 1 INTRODUCTION. Many of deep reinforcement learning's recent successes have relied on the availability of a procedural reward function and a simulated environment for the task in question. As a result, research has been largely insulated from many of the difficulties of learning in the real world. One of these issues is safe exploration (García & Fernández, 2015). Online reinforcement learning is dependent on first-hand experience in order to learn the constraints of safe behaviour: the agent must drive the car off a cliff to learn not to drive the car off a cliff. While such actions may be fine in simulation, in the real world these actions may have unacceptable consequences, such as injury to humans.
Further, such constraints are not always easy to describe using procedural functions. A recently proposed approach to safe exploration is reward query synthesis via trajectory optimization, ReQueST (Reddy et al., 2019). In this approach, the agent is trained in a learned dynamics model (a neural environment simulator) with rewards from a learned reward model. Given models of sufficient fidelity, this should allow us to train an agent with close to zero instances of unsafe behaviour in the real environment. However, as of Reddy et al. (2019), ReQueST has only been demonstrated to work in simple 2D environments – a simple navigation task, and a 2D car racing game – with a dynamics model learned from (potentially unsafe) random exploration, and a reward model learned from binary feedback generated by a procedural reward function. In this work, we aim to answer the question: is ReQueST feasible in complex 3D environments, with the data used to train both dynamics and reward models sourced from real humans? In particular, can we learn a pixel-based dynamics model of sufficient quality to enable informative human feedback, and are the data requirements viable, especially in terms of quantity? Our key contributions in this work are as follows. • We demonstrate that ReQueST is feasible in a complex 3D environment, training a pixel-based dynamics model and reward model from 160 person-hours of safe human exploratory trajectories and 10 person-hours of reward sketches. We also show that performance degrades smoothly when models are trained on smaller amounts of data. • On a 3D first-person object collection task, we show that ReQueST enables training of a competent agent with 3 to 20 times fewer instances of unsafe behaviour during training (close to zero instances if not counting mistakes by human contractors) than a traditional RL algorithm. 2 RELATED WORK.
Safe exploration. Safe exploration has been studied extensively (García & Fernández, 2015). Most existing work achieves safety by making strong assumptions about the state space, such as all unsafe states being known in advance (Geibel & Wysotzki, 2005; Luo & Ma, 2021) or the state space being reasonably smooth (Berkenkamp et al., 2017; Dalal et al., 2018). Other approaches require additional inputs, such as a procedural constraint function (Altman, 1999; Achiam et al., 2017; Ray et al., 2019; Dalal et al., 2018), a safe baseline policy (García & Fernández, 2012), or a separate system that can determine whether an action is safe (Alshiekh et al., 2018). In contrast, the only assumption we make is that a human can recognise when a trajectory contains or is heading towards an unsafe state. Acceptability of unsafe behaviour. Another important dimension is whether safety is treated as a soft or a hard constraint. Most work assumes the former, seeking to minimise time spent in unsafe states (e.g. Geibel & Wysotzki (2005)). Two notable examples of the latter include Saunders et al. (2017), which avoids unsafe behaviour by having a human intervene during training to block unsafe actions, and Luo & Ma (2021), which starts with a trivial-but-safe policy and slowly broadens the policy while guaranteeing that it will avoid a set of unsafe states specified by the user in advance. Learned dynamics models in prior work. The broad structure of our approach – learning a dynamics model (Chiappa et al., 2017) from trajectories, then learning a policy or planning using that model – has been successfully used in simple control tasks (Hafner et al., 2019), autonomous helicopter flight (Abbeel et al., 2010), fabric manipulation (Hoque et al., 2021), Atari games (Buesing et al., 2018), and in simple 3D environments (Ha & Schmidhuber, 2018).
Our work shows that this approach is viable even with complex 3D scenes, with a dynamics model learned from human-demonstrated trajectories, and with a reward model learned from human feedback rather than relying on environment rewards (Hafner et al., 2019; Buesing et al., 2018), trajectory following (Abbeel et al., 2010), or maximisation of episode length (Ha & Schmidhuber, 2018). Prior work on ReQueST. Our work expands on the original ReQueST (Reddy et al., 2019) in two main ways. First, we source all data from humans, showing that ReQueST is still applicable with imperfect, real-world data. Second, rather than simple 2D environments, we use visually complex 3D environments, requiring a much more sophisticated dynamics model. Reward modeling. As with the original ReQueST, we rely on a learned model of the reward function (Knox, 2012; Leike et al., 2018). In contrast to the classification-based reward model used in the original, our reward model regresses to a continuous-valued reward, trained using reward sketches (Cabi et al., 2019) on imagined trajectories. Other forms of feedback on which such reward models can be trained include real-time scalar rewards (Knox & Stone, 2009; Warnell et al., 2018), goal states (Bahdanau et al., 2018), trajectory demonstrations (Finn et al., 2016), trajectory comparisons (Christiano et al., 2017), and combinations thereof (Ibarz et al., 2018; Stiennon et al., 2020; Jeon et al., 2020). 3 METHODS. 3.1 REQUEST. Approach overview. For online reinforcement learning algorithms, the only way an agent can learn about unsafe states is by visiting them. Intuitively, ReQueST avoids this problem by allowing agents to explore these states in a simulated model without having to visit them in the real environment. First, the agent learns a model of the environment by watching a human, who already knows how to navigate the environment safely.
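This first step reduces to supervised learning of next observations from demonstration trajectories. A deliberately tiny sketch on flat vectors illustrates the mean-squared-error objective (a linear model stands in for the paper's encoder/decoder networks; function name, shapes, and hyperparameters are our own):

```python
import numpy as np

def train_dynamics_model(obs, actions, next_obs, lr=1e-2, epochs=200):
    """Fit a linear next-observation predictor f(o, a) ~ o' by gradient
    descent on mean-squared error, the same objective used for pixel-level
    dynamics models (here on tiny flat vectors rather than real images)."""
    x = np.concatenate([obs, actions], axis=1)       # (N, d_obs + d_act)
    w = np.zeros((x.shape[1], next_obs.shape[1]))    # model parameters
    for _ in range(epochs):
        pred = x @ w
        grad = 2 * x.T @ (pred - next_obs) / len(x)  # d(MSE)/dw
        w -= lr * grad
    return w
```

On data generated by the toy dynamics o' = o + a, the fitted model recovers the transition almost exactly; a real pixel-based simulator replaces the linear map with deep encoder/decoder networks but keeps the same loss.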
The agent then uses this model to 'imagine' various scenarios in the environment, both safe and unsafe, asking the human for feedback on those scenarios. This process continues until the human is satisfied with the agent's understanding of the task and its safety constraints, at which point the agent can be deployed in the real environment. If all goes to plan, this agent should avoid unsafe states in the real environment without having needed to visit those states in the first place. For further details, see Reddy et al. (2019). ReQueST in this work. ReQueST is flexible in i) the type of feedback used to train the reward model, ii) the proxies for value of information used to elicit informative feedback, and iii) the algorithm used to train the agent in the learned simulation. In this work, for i) we use reward sketching (Cabi et al., 2019), one of the highest-bandwidth feedback mechanisms currently available. For ii), we use maximisation and minimisation of reward as predicted by the current reward model. Finally, for iii) we use model predictive control – appealing for its simplicity, requiring no additional training once the dynamics model and reward model are complete. See Fig. 2 for details, and Appendix A for pseudocode. 3.2 PROBLEM SETTING. Our 3D environment consists of an arena with two apples out in the open, and a third apple behind a gate that must be opened by stepping on a button (shown in Fig. 3). Agents receive 96×72×3 RGB pixel observations from a first-person perspective, and take actions in a two-dimensional continuous action space allowing movement forward and backward and turns left and right. Agent spawn position, the positions of the two apples out in the open, and wall and floor colour are all randomised. Task. The task is to eat the apples by moving close to them, ideally eating all 3 apples. The episode ends when either the agent eats the apple behind the gate or 900 steps have elapsed.
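The model predictive control used for agent training can be sketched as a simple random-shooting planner over the two learned models. This is an illustrative minimal version with made-up hyperparameters, not the paper's exact controller:

```python
import numpy as np

def mpc_action(state, dynamics_fn, reward_fn, action_dim=2,
               horizon=10, n_candidates=100, rng=None):
    """Random-shooting MPC: sample candidate action sequences, roll each
    through the learned dynamics model, score the imagined states with the
    learned reward model, and return the first action of the best sequence."""
    rng = rng if rng is not None else np.random.default_rng()
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        s = state
        for a in seq:
            s = dynamics_fn(s, a)        # imagined transition, not the real env
            returns[i] += reward_fn(s)   # learned reward model, not env reward
    return candidates[np.argmax(returns), 0]
```

Crucially, this loop runs entirely inside the learned simulator, so planning itself never visits unsafe states in the real environment.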
To test ReQueST's ability to avoid unsafe states, we use two variations on this setup. Cliff edge environment. In the first environment variant, we remove the walls from the arena, so that it is possible for the agent to fall off the side. Such a fall is considered unsafe, and immediately ends the episode. Dangerous blocks environment. The second environment variant models a more subtle case: one where part of the agent's internal mechanisms is exposed in the environment, and we would like to use safe exploration to discourage the agent from tampering (Everitt et al., 2021) with those mechanisms. We use a scenario based on the REALab framework (Kumar et al., 2020), where instead of training rewards being communicated to the agent directly, rewards are communicated through a pair of blocks, with the reward given to the agent based on the distance between these blocks. If the agent happens to bump into one of these blocks, changing the distance between them, this will affect the reward the agent receives, likely interfering with the agent's ability to learn the task. Unsafe behaviour in this environment corresponds to making contact with the blocks. Sized subvariants. We further use three subvariants, small, medium and large, of each of these two main variants. In each subvariant, the apples are the same distance from the agent spawn position, but the arena size is different, varying the difficulty. The larger the arena, the easier it is to avoid unsafe states: the edges are further away in the cliff edge variant, and the blocks are further away in the dangerous blocks variant. See Appendix B for details. | ### Contributions * This paper proposes a safe model-based deep RL approach where * No simulator is needed. The dynamics model is learned from data. * No constraint is specified. * This work is an extension of reward query synthesis via trajectory optimization (ReQueST).
* ReQueST (previous work) * Algorithm * Dynamics learned from (potentially unsafe) random exploration. * Reward learned from binary human feedback that is generated by a procedural reward function. * Demonstrated in state-based 2D navigation (non-pixel-based) and image-based car racing (pixel-based, 64×64×3). * Regarding safety * ReQueST avoids the problem of safe exploration by allowing agents to explore these states in a simulated model without having to visit them in the real environment. * This work * Algorithm * Dynamics learned from safe demonstrations provided by humans (160 person-hours). * Similar to ReQueST, `except that we use a larger encoder network, a deconvolutional decoder network, and train using a simple mean-squared error loss between ground-truth pixel observations and predicted pixel observations.` * Reward learned from * Type of feedback from humans: reward sketches (previous work) (10 person-hours). * The proxies for value of information: maximization and minimization of reward as predicted by the current reward model. * The algorithm used to train the agent in the learned simulation: MPC. * Demonstrated in 3D env * Task goal = eat all 3 apples * Env 1 = Cliff edge environment * Safety = not falling off the edge of the world * Env 2 = Dangerous blocks environment * Safety = not bumping into the blocks * Each env has 3 sized subvariants. * Each env is pixel-based, 96×72×3. * Regarding safety * Safety is achieved by focusing the learned dynamics model on safe scenarios, using only the safe data.
`Given models of sufficient fidelity, this should allow us to train an agent with close to zero instances of unsafe behaviour in the real environment.` ### Results * Env 1 = Cliff edge environment (Fig5) * Safety violations during training: proposed method << model-free RL * Safety violations during testing: proposed method = model-free RL << random policy * Apples eaten: random policy <= proposed method < model-free RL * => Proposed method vs model-free RL => tradeoff between safety violations during training and apples eaten * Env 2 = Dangerous blocks environment (Fig6) * Safety violations during training: Proposed method << model-free RL * Safety violations during testing: Proposed method << model-free RL = random policy * Apples eaten: random policy = model-free RL < proposed method * => Proposed method is better than model-free RL and random | SP:d8fe4568447b255f04befefa320abc6d2f32ccdc |
Safe Deep RL in 3D Environments using Human Feedback | Agents should avoid unsafe behaviour during both training and deployment. This typically requires a simulator and a procedural specification of unsafe behaviour. Unfortunately, a simulator is not always available, and procedurally specifying constraints can be difficult or impossible for many real-world tasks. A recently introduced technique, ReQueST, aims to solve this problem by learning a neural simulator of the environment from safe human trajectories, then using the learned simulator to efficiently learn a reward model from human feedback. However, it is not yet known whether this approach is feasible in complex 3D environments with feedback obtained from real humans – whether sufficient pixel-based neural simulator quality can be achieved, and whether the human data requirements are viable in terms of both quantity and quality. In this paper we answer this question in the affirmative, using ReQueST to train an agent to perform a 3D first-person object collection task using data entirely from human contractors. We show that the resulting agent exhibits an order-of-magnitude reduction in unsafe behaviour compared to standard reinforcement learning. 1 INTRODUCTION. Many of deep reinforcement learning's recent successes have relied on the availability of a procedural reward function and a simulated environment for the task in question. As a result, research has been largely insulated from many of the difficulties of learning in the real world. One of these issues is safe exploration (García & Fernández, 2015). Online reinforcement learning is dependent on first-hand experience in order to learn the constraints of safe behaviour: the agent must drive the car off a cliff to learn not to drive the car off a cliff. While such actions may be fine in simulation, in the real world these actions may have unacceptable consequences, such as injury to humans.
Further , such constraints are not always easy to describe using procedural functions . A recently proposed approach to safe exploration is reward query synthesis via trajectory optimization , ReQueST ( Reddy et al. , 2019 ) . In this approach , the agent is trained in a learned dynamics model ( a neural environment simulator ) with rewards from a learned reward model . Given models of sufficient fidelity , this should allow us to train an agent with close to zero instances of unsafe behaviour in the real environment . However , as of Reddy et al . ( 2019 ) , ReQueST has only been demonstrated to work in simple 2D environments – a simple navigation task , and a 2D car racing game – with a dynamics model learned from ( potentially unsafe ) random exploration , and a reward model learned from binary feedback generated by a procedural reward function . In this work , we aim to answer the question : is ReQueST feasible in complex 3D environments , with data used to train both dynamics and reward models sourced from real humans ? In particular , can we learn a pixel-based dynamics model of sufficient quality to enable informative human feedback , and are the data requirements viable , especially in terms of quantity ? Our key contributions in this work are as follows . • We demonstrate that ReQueST is feasible in a complex 3D environment , training a pixel-based dynamics model and reward model from 160 person-hours of safe human exploratory trajectories and 10 person-hours of reward sketches . We also show that performance degrades smoothly when models are trained on smaller amounts of data . • On a 3D first-person object collection task , we show that ReQueST enables training of a competent agent with 3 to 20 times fewer instances of unsafe behaviour during training ( close to zero instances if not counting mistakes by human contractors ) than a traditional RL algorithm . 2 RELATED WORK .
Safe exploration Safe exploration has been studied extensively ( García & Fernández , 2015 ) . Most existing work achieves safety by making strong assumptions about the state space , such as all unsafe states being known in advance ( Geibel & Wysotzki , 2005 ; Luo & Ma , 2021 ) or the state space being reasonably smooth ( Berkenkamp et al. , 2017 ; Dalal et al. , 2018 ) . Other approaches require additional inputs , such as a procedural constraint function ( Altman , 1999 ; Achiam et al. , 2017 ; Ray et al. , 2019 ; Dalal et al. , 2018 ) , a safe baseline policy ( García & Fernández , 2012 ) , or a separate system that can determine whether an action is safe ( Alshiekh et al. , 2018 ) . In contrast , the only assumption we make is that a human can recognise when a trajectory contains or is heading towards an unsafe state . Acceptability of unsafe behaviour Another important dimension is whether safety is treated as a soft or a hard constraint . Most work assumes the former , seeking to minimise time spent in unsafe states ( e.g . Geibel & Wysotzki ( 2005 ) ) . Two notable examples of the latter include Saunders et al . ( 2017 ) , which avoids unsafe behaviour by having a human intervene during training to block unsafe actions , and Luo & Ma ( 2021 ) , which starts with a trivial-but-safe policy and slowly broadens the policy while guaranteeing the policy will avoid a set of unsafe states specified by the user in advance . Learned dynamics models in prior work The broad structure of our approach – learning a dynamics model ( Chiappa et al. , 2017 ) from trajectories , then learning a policy or planning using that model – has been successfully used in simple control tasks ( Hafner et al. , 2019 ) , autonomous helicopter flight ( Abbeel et al. , 2010 ) , fabric manipulation ( Hoque et al. , 2021 ) , Atari games ( Buesing et al. , 2018 ) , and in simple 3D environments ( Ha & Schmidhuber , 2018 ) .
Our work shows that this approach is viable even with complex 3D scenes , with a dynamics model learned from human-demonstrated trajectories , and with a reward model learned from human feedback rather than relying on environment rewards ( Hafner et al. , 2019 ; Buesing et al. , 2018 ) , trajectory following ( Abbeel et al. , 2010 ) , or maximisation of episode length ( Ha & Schmidhuber , 2018 ) . Prior work on ReQueST Our work expands on the original ReQueST ( Reddy et al. , 2019 ) in two main ways . First , we source all data from humans , showing that ReQueST is still applicable with imperfect , real-world data . Second , rather than simple 2D environments , we use visually-complex 3D environments , requiring a much more sophisticated dynamics model . Reward modeling As with the original ReQueST , we rely on a learned model of the reward function ( Knox , 2012 ; Leike et al. , 2018 ) . In contrast to the classification-based reward model used in the original , our reward model regresses to a continuous-valued reward , trained using reward sketches ( Cabi et al. , 2019 ) on imagined trajectories . Other forms of feedback on which such reward models can be trained include real-time scalar rewards ( Knox & Stone , 2009 ; Warnell et al. , 2018 ) , goal states ( Bahdanau et al. , 2018 ) , trajectory demonstrations ( Finn et al. , 2016 ) , trajectory comparisons ( Christiano et al. , 2017 ) , and combinations thereof ( Ibarz et al. , 2018 ; Stiennon et al. , 2020 ; Jeon et al. , 2020 ) . 3 METHODS . 3.1 REQUEST . Approach overview For online reinforcement learning algorithms , the only way an agent can learn about unsafe states is by visiting them . Intuitively , ReQueST avoids this problem by allowing agents to explore these states in a simulated model without having to visit them in the real environment . First , the agent learns a model of the environment by watching a human , who already knows how to navigate the environment safely .
The agent then uses this model to ‘ imagine ’ various scenarios in the environment , both safe and unsafe , asking the human for feedback on those scenarios . This process continues until the human is satisfied with the agent ’ s understanding of the task and its safety constraints , at which point the agent can be deployed in the real environment . If all goes to plan , this agent should avoid unsafe states in the real environment without having needed to visit those states in the first place . For further details , see Reddy et al . ( 2019 ) . ReQueST in this work ReQueST is flexible in i ) the type of feedback used to train the reward model , ii ) the proxies for value of information used to elicit informative feedback , and iii ) the algorithm used to train the agent in the learned simulation . In this work , for i ) we use reward sketching ( Cabi et al. , 2019 ) , one of the highest-bandwidth feedback mechanisms currently available . For ii ) , we use maximisation and minimisation of reward as predicted by the current reward model . Finally , for iii ) we use model predictive control – appealing for its simplicity , requiring no additional training once the dynamics model and reward model are complete . See Fig . 2 for details , and Appendix A for pseudocode . 3.2 PROBLEM SETTING . Our 3D environment consists of an arena with two apples out in the open , and a third apple behind a gate that must be opened by stepping on a button ( shown in Fig . 3 ) . Agents receive 96×72×3 RGB pixel observations from a first-person perspective , and take actions in a two-dimensional continuous action space allowing movement forward and backward and turns left and right . Agent spawn position , positions of the two apples out in the open , and wall and floor colour are all randomised . Task The task is to eat the apples by moving close to them , ideally eating all 3 apples . The episode ends when either the agent eats the apple behind the gate or 900 steps have elapsed .
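The model predictive control loop described above — roll imagined action sequences through the learned dynamics model, score the imagined states with the learned reward model, execute only the best first action — can be sketched as a simple random-shooting planner. This is a generic illustration, not the paper's exact planner; `dynamics_model` and `reward_model` are hypothetical stand-ins for the learned neural models.

```python
import numpy as np

def mpc_plan(state, dynamics_model, reward_model, action_dim,
             horizon=10, n_candidates=64, rng=None):
    """Random-shooting MPC: sample candidate action sequences, roll each
    through the learned dynamics model, score the imagined states with
    the learned reward model, and return the first action of the best
    sequence. No further training is needed once both models exist."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Candidate action sequences in a continuous action space in [-1, 1].
    candidates = rng.uniform(-1.0, 1.0,
                             size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, actions in enumerate(candidates):
        s = state
        for a in actions:
            s = dynamics_model(s, a)       # imagined next state
            returns[i] += reward_model(s)  # predicted reward of that state
    best = int(np.argmax(returns))
    return candidates[best, 0]             # execute only the first action
```

Because planning happens entirely inside the learned models, the real environment is never queried during this search — which is what makes the approach attractive for safe exploration.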
To test ReQueST ’ s ability to avoid unsafe states , we use two variations on this setup . Cliff edge environment In the first environment variant , we remove the walls from the arena , so that it is possible for the agent to fall off the side . Such a fall is considered unsafe , and immediately ends the episode . Dangerous blocks environment The second environment variant models a more subtle case : where some part of the agent ’ s internal mechanisms is exposed in the environment , and we would like to use safe exploration to discourage the agent from tampering ( Everitt et al. , 2021 ) with those mechanisms . We use a scenario based on the REALab framework ( Kumar et al. , 2020 ) , where instead of training rewards being communicated to the agent directly , rewards are communicated through a pair of blocks , where the reward given to the agent is based on the distance between these blocks . If the agent happens to bump into one of these blocks , changing the distance between them , this will affect the reward the agent receives , likely interfering with the agent ’ s ability to learn the task . Unsafe behaviour in this environment corresponds to making contact with the blocks . Sized subvariants We further use three subvariants , small , medium and large , of each of these two main variants . In each subvariant , the apples are the same distance from the agent spawn position , but the arena size is different , varying the difficulty . The larger the arena , the easier it is to avoid unsafe states : the edges are further away in the cliff edge variant , and the blocks are further away in the dangerous blocks variant . See Appendix B for details . | The paper proposes an extension to ReQueST, which learns a neural simulator of the environment from safe human trajectories and then learns a reward model from human feedback. This work extends ReQueST with dense reward sketches on imagined trajectories and evaluates the idea on a visually-complex 3D environment.
The paper also discusses the amount of training data that is needed for ReQueST to work in such a setting. The authors collect 160 person-hours of "safe" human exploratory trajectories and 10 person-hours of reward sketches. Their results show that the application of ReQueST results in "3 to 20" times fewer constraint violations compared to a non-safe baseline. | SP:d8fe4568447b255f04befefa320abc6d2f32ccdc
Communicate Then Adapt: An Effective Decentralized Adaptive Method for Deep Training | 1 INTRODUCTION . Decentralized SGD ( Lopes & Sayed , 2008 ; Nedic & Ozdaglar , 2009 ; Chen & Sayed , 2012 ; Lian et al. , 2017 ; Assran et al. , 2019 ) is an emerging training approach for deep learning known for its much lower communication overhead . In contrast to parallel SGD in which a global averaging across all computing nodes is required per iteration , decentralized SGD does not involve any global operations . Building upon partial averaging , in which each node only needs to compute the locally averaged model within its neighborhood , decentralized SGD can save considerable communication and training time in large-scale distributed deep learning tasks compared to parallel SGD . Although simple to use , vanilla decentralized SGD sometimes suffers from slow convergence . Inspired by the well-documented success of adaptive methods such as AdaGrad ( Duchi et al. , 2011a ) , Adam ( Kingma & Ba , 2014 ) and AMSGrad ( Reddi et al. , 2019 ) , several decentralized adaptive methods ( Nazari et al. , 2019 ; Lin et al. , 2021 ) have been proposed to accelerate decentralized SGD training . While these algorithms have achieved remarkable success in several practical applications , they have also been observed to not converge to the desired solution ( i.e. , global optimal solution in the convex scenario or stationary solution in the non-convex scenario ) in some other settings . For example , it has been observed in the convex setting ( see Sec . 3 ) that DAdam ( Nazari et al. , 2019 ) and QG-DAdam ( Lin et al. , 2021 ) do not converge to the global optimal solution . This paper studies this situation in detail . We rigorously uncover the reason why DAdam and QG-DAdam fail to achieve the desired solution , and propose a novel decentralized adaptive method to resolve the convergence issue .
In particular , we make the following key contributions : • We find that the algorithms DAdam and QG-DAdam , while different in their concrete recursions , share a similar structure : each node scales its gradient with the past squared gradients ( which is referred to as the adaptive step ) before or while it communicates with neighbors . We identify the limitation of this adapt-then/while-communicate structure : it makes the resulting algorithms highly sensitive to data heterogeneity , causing their limiting points to deviate from the desired solution . • To overcome these limitations , we propose a novel communicate-then-adapt algorithm structure , in which each node conducts the adaptive step after all neighborhood communications . This is not simply a trivial reordering of the communication and adaptive steps . The key component that guarantees the effectiveness of the communicate-then-adapt structure is the utilization of the decentralized augmented gradient ( DAG ) rather than the standard stochastic gradient in algorithm development . The newly proposed algorithm , which is coined as DAG-Adam , can provably converge to the desired solution . While this paper mainly focuses on the vanilla Adam algorithm , the core idea behind DAG-Adam can be easily extended to AMSGrad or other adaptive methods . • Experimental results on a variety of computer vision and natural language processing tasks show that DAG-Adam outperforms various existing state-of-the-art decentralized adaptive baselines under different network and data configurations . Furthermore , the performance of our proposed algorithm is consistently competitive with the centralized counterpart . Related work on decentralized optimization and deep training . Decentralized optimization was extensively studied in the control and signal processing community .
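The structural difference described above — whether each node applies its local adaptive scaling before or after communicating with its neighbors — can be illustrated with a toy sketch. This is only an illustration of where the adaptive step sits relative to the gossip step, not the paper's actual DAG-Adam recursion (which additionally relies on the decentralized augmented gradient); the per-node state here is a scalar for simplicity.

```python
import numpy as np

def adapt_then_communicate(x, grads, v, W, lr, eps=1e-8):
    """Each node first scales its own gradient by its local second-moment
    estimate (the adaptive step), and only then mixes with neighbors.
    The scaling is computed from purely local, possibly heterogeneous data."""
    v = 0.9 * v + 0.1 * grads**2
    step = grads / (np.sqrt(v) + eps)
    return W @ (x - lr * step), v

def communicate_then_adapt(x, grads, v, W, lr, eps=1e-8):
    """Nodes first mix models and gradients with neighbors, then apply the
    adaptive scaling to the mixed, less heterogeneous quantity."""
    g_mixed = W @ grads
    v = 0.9 * v + 0.1 * g_mixed**2
    return W @ x - lr * g_mixed / (np.sqrt(v) + eps), v
```

In the first ordering, nodes with very different local data produce very different scalings before any mixing happens — the sensitivity to data heterogeneity the paper identifies; the second ordering applies the scaling to an already-averaged quantity.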
The first decentralized algorithms on general optimization problems include decentralized gradient descent ( Nedic & Ozdaglar , 2009 ) , diffusion ( Lopes & Sayed , 2008 ; Chen & Sayed , 2012 ; Sayed , 2014 ) , and dual averaging ( Duchi et al. , 2011b ) . Subsequently , various primal-dual algorithms emerged to further speed up convergence , based on the alternating direction method of multipliers ( ADMM ) ( Shi et al. , 2014 ) , explicit bias-correction ( Shi et al. , 2015 ; Yuan et al. , 2019 ; Li et al. , 2019 ) , gradient tracking ( Xu et al. , 2015 ; Di Lorenzo & Scutari , 2016 ; Nedic et al. , 2017 ; Qu & Li , 2018 ; Lu et al. , 2019 ) , and dual acceleration ( Scaman et al. , 2017 ; Uribe et al. , 2020 ) . In deep learning tasks , decentralized SGD , which was established in ( Lian et al. , 2017 ) to achieve the same linear speedup as parallel SGD in convergence rate , has attracted a lot of attention . Many efforts have been made to extend the algorithm to directed topologies ( Assran et al. , 2019 ) , time-varying topologies ( Koloskova et al. , 2020 ) , asynchronous settings ( Lian et al. , 2018 ) , and data-heterogeneous scenarios ( Tang et al. , 2018 ; Xin et al. , 2020 ; Lin et al. , 2021 ; Yuan et al. , 2021 ) . With careful consensus control ( Kong et al. , 2021 ) or periodic global averaging ( Chen et al. , 2021b ) , decentralized SGD can achieve 1.3 ∼ 2× training time speedup without severe performance degradation . Techniques such as quantization/compression ( Alistarh et al. , 2017 ; Bernstein et al. , 2018 ; Koloskova et al. , 2019a ; b ; Tang et al. , 2019 ; Liu et al. , 2020 ) , periodic updates ( Stich , 2019 ; Koloskova et al. , 2020 ; Yu et al. , 2019 ) , and lazy communication ( Chen et al. , 2018a ; Liu et al. , 2019b ) were also integrated into decentralized SGD to further reduce communication overheads .
There are also a few studies on accelerated variants of decentralized SGD , and most of them are on ( static ) momentum acceleration . ( Assran et al. , 2019 ; Gao & Huang , 2020 ) propose to run a local momentum SGD step first before the partial averaging is conducted . Another work ( Yu et al. , 2019 ) imposes an additional partial averaging over momentum to increase stability . Recent works ( Lin et al. , 2021 ; Yuan et al. , 2021 ) developed strategies that can remove the momentum-incurred bias in decentralized momentum SGD . None of these methods employ adaptive strategies to scale gradients . Related work on adaptive gradient methods . Adaptive gradient methods , with AdaGrad ( Duchi et al. , 2011a ) and Adam ( Kingma & Ba , 2014 ) as two representatives , have shown strong performance in training deep neural networks . With adaptive adjustment of the gradient direction and automatic tuning of the learning rate , adaptive gradient methods can boost the performance of SGD training significantly . In spite of its remarkable empirical success , Adam suffers from a convergence issue : it may not converge to the desired solution with a fixed mini-batch size ( Reddi et al. , 2019 ; Zaheer et al. , 2018 ) . Many algorithms have been proposed to resolve this issue . AMSGrad ( Reddi et al. , 2019 ) preserves the long-term memory of past gradients to improve convergence , and ( Zaheer et al. , 2018 ) suggests using increasing mini-batch sizes to obtain convergence guarantees . ( Chen et al. , 2018b ) has studied the convergence of a family of Adam-type algorithms . ( Zhou et al. , 2018 ) provides a better convergence rate for AMSGrad . From the empirical side , RAdam ( Liu et al. , 2019a ) and AdamW ( Loshchilov & Hutter , 2018 ) can significantly improve Adam performance . The exploration of decentralized adaptive methods is rather limited . DAdam ( Nazari et al. , 2019 ) is , to our knowledge , the first consensus-based adaptive method for distributed optimization .
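The difference between Adam's second-moment estimate and AMSGrad's "long-term memory" fix mentioned above fits in a few lines. These are the standard textbook recursions (bias correction omitted for brevity), not anything specific to this paper:

```python
import numpy as np

def adam_step(m, v, g, beta1=0.9, beta2=0.999):
    """Vanilla Adam moment updates: v can shrink again once recent
    gradients are small, which underlies the non-convergence examples
    of Reddi et al. (2019)."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    return m, v

def amsgrad_step(m, v, v_max, g, beta1=0.9, beta2=0.999):
    """AMSGrad keeps the running maximum of v, so the effective step
    size 1 / (sqrt(v_max) + eps) is non-increasing over time."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    v_max = np.maximum(v_max, v)
    return m, v, v_max
```

After one large gradient followed by many small ones, Adam's `v` decays back down (re-inflating the step size), while AMSGrad's `v_max` retains the peak — the "long-term memory of past gradients".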
( Lin et al. , 2021 ) proposes QG-DAdam , which utilizes quasi-global momentum to locally approximate the global descent direction . A recent work ( Chen et al. , 2021a ) proposes a unified framework that incorporates various adaptive methods into the decentralized setting . While these methods have shown strong empirical performance in several practical applications , they either suffer from unstable convergence or heavy communication . We leave more discussion on their limitations to Sec . 2 . Adaptive gradient methods have also been extended to the federated learning setting in which multiple clients cooperate to learn a model under the supervision of a central server ( McMahan et al. , 2017 ) . Useful references in this direction can be found in ( Xie et al. , 2019 ; Reddi et al. , 2020 ) . 2 ADAPT-THEN/WHILE-COMMUNICATE STRUCTURE AND ITS LIMITATION . Problem . Suppose $n$ computing nodes cooperate to solve the distributed optimization problem : $$\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x) , \quad \text{where } f_i(x) := \mathbb{E}_{\xi_i \sim D_i} F(x ; \xi_i) \qquad (1)$$ In the above problem , $f_i(x)$ is local to node $i$ , and the random variable $\xi_i$ denotes the local data that follows distribution $D_i$ . We do not assume each distribution $D_i$ is the same across all nodes . Network topology and weights . We assume all computing nodes are connected by a network topology . We define $w_{ij}$ , the weight that scales information flowing from node $j$ to node $i$ , as follows : $$w_{ij} \begin{cases} > 0 & \text{if node } j \text{ is connected to } i \text{, or } i = j ; \\ = 0 & \text{otherwise.} \end{cases} \qquad (2)$$ $N_i := \{ j \mid w_{ij} > 0 \}$ is defined as the set of neighbors of node $i$ , which also includes node $i$ itself , and the weight matrix $W := [ w_{ij} ]_{i , j = 1}^{n} \in \mathbb{R}^{n \times n}$ stacks the weights of all nodes . This matrix $W$ characterizes the sparsity and connectivity of the underlying topology . Partial averaging . Decentralized methods are based on partial averaging within the neighborhood defined by the network topology .
With weights $\{ w_{ij} \}$ and the set of neighbors $N_i$ , the partial averaging operation of node $i$ can be expressed as $$\text{Partial averaging:} \quad x_i^{+} \leftarrow \sum_{j \in N_i} w_{ij} x_j . \qquad (3)$$ Partial averaging requires much less communication than global averaging on sparse topologies . Assumptions . We will make the following standard assumptions throughout the paper : A.1 [ SMOOTHNESS ] Each $f_i(x)$ is $L$-smooth , i.e. , $\| \nabla f_i(x) - \nabla f_i(y) \| \le L \| x - y \|$ for any $x , y \in \mathbb{R}^d$ . A.2 [ GRADIENT NOISE ] The random samples $\xi_i^{(t)}$ are independent of each other for any $t$ and $i$ . We also assume $\mathbb{E} [ \nabla F(x ; \xi_i) ] = \nabla f_i(x)$ and $\mathbb{E} \| \nabla F(x ; \xi_i) - \nabla f_i(x) \|^2 \le \sigma^2$ . A.3 [ WEIGHT MATRIX ] The weight matrix $W$ is symmetric and doubly-stochastic , i.e. , $W \mathbf{1} = \mathbf{1}$ and $\mathbf{1}^T W = \mathbf{1}^T$ , where $\mathbf{1}$ is an all-ones vector . Further , we assume $\| W - \frac{1}{n} \mathbf{1} \mathbf{1}^T \|_2 \le \rho$ for some $\rho \in ( 0 , 1 )$ . A.4 [ BOUNDED GRADIENT ] The loss function $F$ has bounded gradient , i.e. , the maximum norm $\| \nabla F(x ; \xi_i) \|_\infty \le G$ for all $x$ and $\xi_i$ , and $i \in [ n ]$ . Notation . Given a constant $n$ , we let $[ n ] := \{ 1 , \cdots , n \}$ . Given a vector $x \in \mathbb{R}^d$ , we let $\mathrm{diag}(x) = \mathrm{diag} \{ x_1 , \cdots , x_d \} \in \mathbb{R}^{d \times d}$ be a diagonal matrix . Given two vectors $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^d$ , we let $x \odot y$ be the element-wise product between $x$ and $y$ . | This paper proposes a decentralized adaptive method for distributed deep learning, termed DAG-Adam. Convergence results are provided for smooth non-convex objectives under a bounded gradient assumption. Numerical experiments are conducted on Image Classification (CIFAR10, ImageNet-1k) and Language Modelling (fine-tuning pre-trained BERT models on SQuAD). | SP:20b3645f16dca8252b31b43661e565d180529a4e
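The partial averaging operation of Eq. (3) is easy to exercise numerically. A minimal sketch, assuming a ring topology with uniform 1/3 weights — one common choice satisfying Assumption A.3; the paper itself does not prescribe a specific W:

```python
import numpy as np

def ring_weight_matrix(n):
    """Symmetric doubly-stochastic weights for a ring topology: each node
    averages itself with its two neighbors. An illustrative choice
    satisfying Assumption A.3, not prescribed by the paper."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def partial_average(W, X):
    """One round of Eq. (3): x_i^+ = sum_{j in N_i} w_ij x_j for every node."""
    return W @ X

# Repeated partial averaging drives the node models toward their mean,
# even though each round only touches a node's immediate neighborhood.
n = 8
W = ring_weight_matrix(n)
X = np.arange(n, dtype=float).reshape(n, 1)  # one scalar "model" per node
mean = X.mean()
for _ in range(50):
    X = partial_average(W, X)
```

Each round costs each node only two neighbor messages, rather than the all-to-all exchange of global averaging — the communication saving the text describes for sparse topologies.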
Communicate Then Adapt: An Effective Decentralized Adaptive Method for Deep Training | 1 INTRODUCTION . Decentralized SGD ( Lopes & Sayed , 2008 ; Nedic & Ozdaglar , 2009 ; Chen & Sayed , 2012 ; Lian et al. , 2017 ; Assran et al. , 2019 ) is an emerging training approach for deep learning known for its much lower communication overhead . In contrast to parallel SGD in which a global averaging across all computing nodes is required per iteration , decentralized SGD does not involve any global operations . Building upon partial averaging , in which each node only needs to compute the locally averaged model within its neighborhood , decentralized SGD can save considerable communication and training time in large-scale distributed deep learning tasks compared to parallel SGD . Although simple to use , vanilla decentralized SGD sometimes suffers from slow convergence . Inspired by the well-documented success of adaptive methods such as AdaGrad ( Duchi et al. , 2011a ) , Adam ( Kingma & Ba , 2014 ) and AMSGrad ( Reddi et al. , 2019 ) , several decentralized adaptive methods ( Nazari et al. , 2019 ; Lin et al. , 2021 ) have been proposed to accelerate decentralized SGD training . While these algorithms have achieved remarkable success in several practical applications , they have also been observed to not converge to the desired solution ( i.e. , global optimal solution in the convex scenario or stationary solution in the non-convex scenario ) in some other settings . For example , it has been observed in the convex setting ( see Sec . 3 ) that DAdam ( Nazari et al. , 2019 ) and QG-DAdam ( Lin et al. , 2021 ) do not converge to the global optimal solution . This paper studies this situation in detail . We rigorously uncover the reason why DAdam and QG-DAdam fail to achieve the desired solution , and propose a novel decentralized adaptive method to resolve the convergence issue .
In particular , we make the following key contributions : • We find that the algorithms DAdam and QG-DAdam , while different in their concrete recursions , share a similar structure : each node scales its gradient with the past squared gradients ( which is referred to as the adaptive step ) before or while it communicates with neighbors . We identify the limitation of this adapt-then/while-communicate structure : it makes the resulting algorithms highly sensitive to data heterogeneity , causing their limiting points to deviate from the desired solution . • To overcome these limitations , we propose a novel communicate-then-adapt algorithm structure , in which each node conducts the adaptive step after all neighborhood communications . This is not simply a trivial reordering of the communication and adaptive steps . The key component that guarantees the effectiveness of the communicate-then-adapt structure is the utilization of the decentralized augmented gradient ( DAG ) rather than the standard stochastic gradient in algorithm development . The newly proposed algorithm , which is coined as DAG-Adam , can provably converge to the desired solution . While this paper mainly focuses on the vanilla Adam algorithm , the core idea behind DAG-Adam can be easily extended to AMSGrad or other adaptive methods . • Experimental results on a variety of computer vision and natural language processing tasks show that DAG-Adam outperforms various existing state-of-the-art decentralized adaptive baselines under different network and data configurations . Furthermore , the performance of our proposed algorithm is consistently competitive with the centralized counterpart . Related work on decentralized optimization and deep training . Decentralized optimization was extensively studied in the control and signal processing community .
The first decentralized algorithms on general optimization problems include decentralized gradient descent ( Nedic & Ozdaglar , 2009 ) , diffusion ( Lopes & Sayed , 2008 ; Chen & Sayed , 2012 ; Sayed , 2014 ) , and dual averaging ( Duchi et al. , 2011b ) . Subsequently , various primal-dual algorithms emerged to further speed up convergence , based on the alternating direction method of multipliers ( ADMM ) ( Shi et al. , 2014 ) , explicit bias-correction ( Shi et al. , 2015 ; Yuan et al. , 2019 ; Li et al. , 2019 ) , gradient tracking ( Xu et al. , 2015 ; Di Lorenzo & Scutari , 2016 ; Nedic et al. , 2017 ; Qu & Li , 2018 ; Lu et al. , 2019 ) , and dual acceleration ( Scaman et al. , 2017 ; Uribe et al. , 2020 ) . In deep learning tasks , decentralized SGD , which was established in ( Lian et al. , 2017 ) to achieve the same linear speedup as parallel SGD in convergence rate , has attracted a lot of attention . Many efforts have been made to extend the algorithm to directed topologies ( Assran et al. , 2019 ) , time-varying topologies ( Koloskova et al. , 2020 ) , asynchronous settings ( Lian et al. , 2018 ) , and data-heterogeneous scenarios ( Tang et al. , 2018 ; Xin et al. , 2020 ; Lin et al. , 2021 ; Yuan et al. , 2021 ) . With careful consensus control ( Kong et al. , 2021 ) or periodic global averaging ( Chen et al. , 2021b ) , decentralized SGD can achieve 1.3 ∼ 2× training time speedup without severe performance degradation . Techniques such as quantization/compression ( Alistarh et al. , 2017 ; Bernstein et al. , 2018 ; Koloskova et al. , 2019a ; b ; Tang et al. , 2019 ; Liu et al. , 2020 ) , periodic updates ( Stich , 2019 ; Koloskova et al. , 2020 ; Yu et al. , 2019 ) , and lazy communication ( Chen et al. , 2018a ; Liu et al. , 2019b ) were also integrated into decentralized SGD to further reduce communication overheads .
There are also a few studies on accelerated variants of decentralized SGD , and most of them are on ( static ) momentum acceleration . ( Assran et al. , 2019 ; Gao & Huang , 2020 ) propose to run a local momentum SGD step first before the partial averaging is conducted . Another work ( Yu et al. , 2019 ) imposes an additional partial averaging over momentum to increase stability . Recent works ( Lin et al. , 2021 ; Yuan et al. , 2021 ) developed strategies that can remove the momentum-incurred bias in decentralized momentum SGD . None of these methods employ adaptive strategies to scale gradients . Related work on adaptive gradient methods . Adaptive gradient methods , with AdaGrad ( Duchi et al. , 2011a ) and Adam ( Kingma & Ba , 2014 ) as two representatives , have shown strong performance in training deep neural networks . With adaptive adjustment of the gradient direction and automatic tuning of the learning rate , adaptive gradient methods can boost the performance of SGD training significantly . In spite of its remarkable empirical success , Adam suffers from a convergence issue : it may not converge to the desired solution with a fixed mini-batch size ( Reddi et al. , 2019 ; Zaheer et al. , 2018 ) . Many algorithms have been proposed to resolve this issue . AMSGrad ( Reddi et al. , 2019 ) preserves the long-term memory of past gradients to improve convergence , and ( Zaheer et al. , 2018 ) suggests using increasing mini-batch sizes to obtain convergence guarantees . ( Chen et al. , 2018b ) has studied the convergence of a family of Adam-type algorithms . ( Zhou et al. , 2018 ) provides a better convergence rate for AMSGrad . From the empirical side , RAdam ( Liu et al. , 2019a ) and AdamW ( Loshchilov & Hutter , 2018 ) can significantly improve Adam performance . The exploration of decentralized adaptive methods is rather limited . DAdam ( Nazari et al. , 2019 ) is , to our knowledge , the first consensus-based adaptive method for distributed optimization .
( Lin et al. , 2021 ) proposes QG-DAdam , which utilizes quasi-global momentum to locally approximate the global descent direction . A recent work ( Chen et al. , 2021a ) proposes a unified framework that incorporates various adaptive methods into the decentralized setting . While these methods have shown strong empirical performance in several practical applications , they either suffer from unstable convergence or heavy communication . We leave more discussion on their limitations to Sec . 2 . Adaptive gradient methods have also been extended to the federated learning setting in which multiple clients cooperate to learn a model under the supervision of a central server ( McMahan et al. , 2017 ) . Useful references in this direction can be found in ( Xie et al. , 2019 ; Reddi et al. , 2020 ) . 2 ADAPT-THEN/WHILE-COMMUNICATE STRUCTURE AND ITS LIMITATION . Problem . Suppose $n$ computing nodes cooperate to solve the distributed optimization problem : $$\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x) , \quad \text{where } f_i(x) := \mathbb{E}_{\xi_i \sim D_i} F(x ; \xi_i) \qquad (1)$$ In the above problem , $f_i(x)$ is local to node $i$ , and the random variable $\xi_i$ denotes the local data that follows distribution $D_i$ . We do not assume each distribution $D_i$ is the same across all nodes . Network topology and weights . We assume all computing nodes are connected by a network topology . We define $w_{ij}$ , the weight that scales information flowing from node $j$ to node $i$ , as follows : $$w_{ij} \begin{cases} > 0 & \text{if node } j \text{ is connected to } i \text{, or } i = j ; \\ = 0 & \text{otherwise.} \end{cases} \qquad (2)$$ $N_i := \{ j \mid w_{ij} > 0 \}$ is defined as the set of neighbors of node $i$ , which also includes node $i$ itself , and the weight matrix $W := [ w_{ij} ]_{i , j = 1}^{n} \in \mathbb{R}^{n \times n}$ stacks the weights of all nodes . This matrix $W$ characterizes the sparsity and connectivity of the underlying topology . Partial averaging . Decentralized methods are based on partial averaging within the neighborhood defined by the network topology .
With weights $\{ w_{ij} \}$ and the set of neighbors $N_i$ , the partial averaging operation of node $i$ can be expressed as $$\text{Partial averaging:} \quad x_i^{+} \leftarrow \sum_{j \in N_i} w_{ij} x_j . \qquad (3)$$ Partial averaging requires much less communication than global averaging on sparse topologies . Assumptions . We will make the following standard assumptions throughout the paper : A.1 [ SMOOTHNESS ] Each $f_i(x)$ is $L$-smooth , i.e. , $\| \nabla f_i(x) - \nabla f_i(y) \| \le L \| x - y \|$ for any $x , y \in \mathbb{R}^d$ . A.2 [ GRADIENT NOISE ] The random samples $\xi_i^{(t)}$ are independent of each other for any $t$ and $i$ . We also assume $\mathbb{E} [ \nabla F(x ; \xi_i) ] = \nabla f_i(x)$ and $\mathbb{E} \| \nabla F(x ; \xi_i) - \nabla f_i(x) \|^2 \le \sigma^2$ . A.3 [ WEIGHT MATRIX ] The weight matrix $W$ is symmetric and doubly-stochastic , i.e. , $W \mathbf{1} = \mathbf{1}$ and $\mathbf{1}^T W = \mathbf{1}^T$ , where $\mathbf{1}$ is an all-ones vector . Further , we assume $\| W - \frac{1}{n} \mathbf{1} \mathbf{1}^T \|_2 \le \rho$ for some $\rho \in ( 0 , 1 )$ . A.4 [ BOUNDED GRADIENT ] The loss function $F$ has bounded gradient , i.e. , the maximum norm $\| \nabla F(x ; \xi_i) \|_\infty \le G$ for all $x$ and $\xi_i$ , and $i \in [ n ]$ . Notation . Given a constant $n$ , we let $[ n ] := \{ 1 , \cdots , n \}$ . Given a vector $x \in \mathbb{R}^d$ , we let $\mathrm{diag}(x) = \mathrm{diag} \{ x_1 , \cdots , x_d \} \in \mathbb{R}^{d \times d}$ be a diagonal matrix . Given two vectors $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^d$ , we let $x \odot y$ be the element-wise product between $x$ and $y$ . | This paper developed a new decentralized adaptive gradient descent method to address the data heterogeneity problem. The motivation is clear and the experimental results show improvement over existing methods. However, the theoretical analysis is not solid. | SP:20b3645f16dca8252b31b43661e565d180529a4e
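The mixing quantity $\rho = \| W - \frac{1}{n}\mathbf{1}\mathbf{1}^T \|_2$ from Assumption A.3 can be checked numerically for any candidate topology. A small sketch — the ring weights below are an illustrative choice, not something the paper prescribes:

```python
import numpy as np

def mixing_rate(W):
    """rho = ||W - (1/n) 1 1^T||_2 from Assumption A.3. For a symmetric
    doubly-stochastic W this equals the second-largest eigenvalue
    magnitude; rho < 1 means repeated partial averaging contracts
    the node models toward consensus."""
    n = W.shape[0]
    assert np.allclose(W, W.T), "W must be symmetric (Assumption A.3)"
    assert np.allclose(W.sum(axis=1), 1.0), "rows of W must sum to 1"
    return np.linalg.norm(W - np.ones((n, n)) / n, 2)

# Fully connected averaging mixes in one step (rho = 0); a sparse ring
# mixes much more slowly (rho close to 1).
n = 8
W_full = np.full((n, n), 1.0 / n)
W_ring = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W_ring[i, j % n] = 1.0 / 3.0
```

Sparser topologies trade communication cost for a larger $\rho$, i.e. slower consensus, which is exactly the tension decentralized methods must manage.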
1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed | 1 INTRODUCTION. Training large-scale deep learning models in a distributed fashion is computation-heavy and expensive (Brown et al., 2020). In addition to computation, communication overhead becomes a serious system challenge for such large models. A recent study of BERT pre-training with Adam demonstrates that the allreduce communication can take up to 94% and 75% of total training time per step on clusters with Ethernet and InfiniBand inter-node connections, respectively (Tang et al., 2021). To achieve communication efficient distributed training, there are two promising directions: large batch optimization and communication compression. The LAMB optimizer, which can be viewed as Adam with adaptive layerwise learning rates, is an example of large batch optimization (You et al., 2020). LAMB can scale the batch size of BERT pre-training to 64K without losing accuracy, thereby greatly reducing the total training time, as larger batch sizes lead to less frequent communication. On the other hand, recent works on communication compression such as 1-bit Adam demonstrate that it is possible to combine 1-bit compression with Adam's convergence speed, thereby reducing BERT pre-training communication volume by 5x (Tang et al., 2021). Both LAMB and 1-bit Adam demonstrate great benefit for distributed training. Unfortunately, our studies show that simply using one of them is not sufficient to fully address the communication issue, especially under limited network bandwidth and a large number of GPUs/machines (Section 3). We find that communication is still a non-trivial overhead when running large-scale distributed training with LAMB, even with the larger batch sizes. A previous study shows that Adam provides slower convergence speed compared to LAMB at batch sizes 16K or larger for BERT pre-training (You et al., 2020).
Using the same methodology, our BERT experiments show that 1-bit Adam, similar to Adam, also has slower convergence speed compared to LAMB at batch size 16K. Even with communication compression, this batch size limitation would hurt communication efficiency when the number of GPUs/machines is large. LAMB and 1-bit Adam are two distinct optimizers. However, the techniques behind them are complementary: large batch optimization reduces the frequency of communication, and compression reduces the volume of communication. Motivated by this, we aim to combine LAMB's large batch optimization algorithm with the compression strategies behind 1-bit Adam. However, we find that they are not directly compatible due to LAMB's unique layerwise learning rate update strategy, which requires information that is missing when communication and optimizer states are compressed (Section 3). The studies and challenges above motivate us to design a new algorithm called 1-bit LAMB (Section 4). Learning from the insights behind 1-bit Adam, 1-bit LAMB is a 2-stage algorithm which uses LAMB (warmup stage) to "pre-condition" a communication-compressed momentum SGD algorithm (compression stage). At the compression stage, where the original LAMB algorithm cannot be used to update the layerwise learning rates, 1-bit LAMB employs a novel way to adaptively scale layerwise learning rates based on information from both the warmup and compression stages. As a result, 1-bit LAMB is able to achieve large batch optimization (LAMB)'s convergence speed under compressed communication, which is impossible using existing approaches. In addition to the 1-bit LAMB algorithm, we propose a new NCCL-based compressed communication backend which provides better usability and performance than previous work (Section 5). This backend can be applied to 1-bit LAMB, 1-bit Adam, and other communication compression algorithms.
We evaluate 1-bit LAMB using BERT pre-training and GLUE/SQuAD fine-tuning tasks ( Section 6 ) . Results show that under different batch sizes from 8K to 64K and with up to 256 GPUs , 1-bit LAMB with NCCL-based backend is able to achieve up to 4.6x communication volume reduction and up to 2.8x end-to-end time-wise speedup for BERT pre-training compared to uncompressed LAMB , together with the same sample-wise convergence speed and same GLUE/SQuAD fine-tuning task accuracy . The 1-bit LAMB optimizer as well as the NCCL-based communication backend has been open sourced in a deep learning optimization library ( name hidden to maintain anonymity ) . 2 RELATED WORK AND BACKGROUND . To achieve communication efficient distributed training , techniques include decentralization ( Lian et al. , 2017 ; Koloskova * et al. , 2020 ; Li et al. , 2018 ) , asynchronous communication ( Zheng et al. , 2016 ; Chaturapruek et al. , 2015 ) , and gradient compression/quantization which we focus on in this paper . Before communication , we could compress the original gradient g into Cω [ g ] , where Cω [ · ] is the compress operator1 . As a result the communication volume could be greatly reduced . Compression can be achieved by quantization , sparsification , sketching , etc . ( Ye & Abbe , 2018 ; Alistarh et al. , 2017 ; Agarwal et al. , 2018 ; Yu et al. , 2019 ; Spring et al. , 2019 ; Ivkin et al. , 2019 ; Shi et al. , 2021 ) . Several works focus on unbiased compression methods ( original and compressed tensors have the same expectation ) , such as centralized compressed parallel SGD ( Alistarh et al. , 2017 ) and many others ( Wangni et al. , 2018 ; Shen et al. , 2018 ; Zhang et al. , 2017 ; Wen et al. , 2017 ; Jiang & Agrawal , 2018 ) . On the other hand , recent works about biased compression methods demonstrate better compression rate and the same convergence rate by using an error compensation technique ( Seide et al. , 2014 ; Bernstein et al. , 2019 ; Stich et al. 
, 2018 ; Zheng et al. , 2019 ; Phuong & Phong , 2020 ; Yu et al. , 2019 ; Shi et al. , 2019 ; Ivkin et al. , 2019 ; Sun et al. , 2019 ; Basu et al. , 2019 ; Vogels et al. , 2019 ; Tang et al. , 2021 ) . The error-compensated compression is proposed in the 1-bit SGD work ( Seide et al. , 2014 ) : instead of compressing the gradient at each iteration directly , they compress the sum of the gradient and the last step ’ s compression error . By using error compensation the training can achieve promising convergence speed even with 1-bit compression ( representing the gradient by ±1 signs and a scale ) . Recent works provide theoretical guarantee of this method ( Bernstein et al. , 2019 ) , and also demonstrate that it admits the same asymptotic convergence rate as the uncompressed one ( Stich et al. , 2018 ) . In addition , error compensation method enables almost any compression methods ( Stich et al. , 2018 ) , either biased or unbiased , to converge as fast as the uncompressed case . 1Cω [ · ] could also include randomness . Adam ( Kingma & Ba , 2015 ) can be viewed as SGD with momentum and adaptive learning rate scaling on each coordinate of the gradient . It has demonstrated promising convergence speed and hyperparameter robustness on many deep learning tasks . Recently , Tang et al . ( 2021 ) proposed 1-bit Adam which combines the efficiency of error-compensated 1-bit compression with Adam ’ s convergence speed . They show that error-compensated compression does not work for Adam directly , because Adam is non-linearly dependent on the gradient ( the variance term ) . On the other hand , they find that Adam ’ s variance becomes stable at an early stage of training . To this end , they design a new 2-stage algorithm , 1-bit Adam : At warmup stage , vanilla Adam is used . At compression stage , they stop updating the variance and use it as a fixed precondition , and communicate based on the momentum applied with error-compensated 1-bit compression . 
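The error-compensated 1-bit compression recipe described above (compress the gradient plus the carried error, then store the new compression error for the next step) can be sketched as follows. This is a hedged illustration of the general scheme from 1-bit SGD, not any paper's actual implementation; the helper names are made up.

```python
# Illustrative sketch of error-compensated 1-bit compression (error feedback).
# A tensor is represented by +/- signs plus a single scale; the compression
# error is added back into the gradient at the next step.

def one_bit_compress(v):
    """Represent v by its signs scaled by the mean absolute magnitude."""
    scale = sum(abs(x) for x in v) / len(v)
    return [scale if x >= 0 else -scale for x in v]

def compress_with_error_feedback(grad, error):
    """Compress grad + carried error; return compressed tensor and new error."""
    v = [g + e for g, e in zip(grad, error)]
    compressed = one_bit_compress(v)
    new_error = [vi - ci for vi, ci in zip(v, compressed)]
    return compressed, new_error

error = [0.0, 0.0, 0.0]                       # no carried error at step 0
compressed, error = compress_with_error_feedback([0.5, -1.0, 1.5], error)
print(compressed)  # [1.0, -1.0, 1.0]: signs times the scale (0.5+1.0+1.5)/3
print(error)       # [-0.5, 0.0, 0.5]: what the compressor lost this step
```

The key property is that the information lost by compression is not discarded: it re-enters the next step's input, which is why error feedback lets even aggressive 1-bit compression match the uncompressed convergence rate asymptotically.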
Their experiments on up to 256 GPUs show that 1-bit Adam achieves the same convergence behaviour and final accuracy as Adam, together with up to 5x less communication volume and 3.3x faster end-to-end throughput. To further improve training efficiency at large scale, being able to support large minibatches while keeping the convergence speed is a critical factor. Recently, You et al. (2020) found that it is difficult to keep Adam's convergence speed at batch sizes 16K or larger for BERT pre-training. To this end they proposed LAMB, which can be viewed as Adam with adaptive layerwise learning rates. By using LAMB, they are able to scale the batch size of BERT pre-training to 64K without losing accuracy, thereby reducing the BERT training time from 3 days to around 76 minutes. The major idea of LAMB is that it utilizes a layerwise scaling coefficient to regulate the update of each layer, and the updating rule can be summarized as^2: $m_t^{(l)} = \beta_1 m_{t-1}^{(l)} + (1-\beta_1) g_t^{(l)}, \quad v_t^{(l)} = \beta_2 v_{t-1}^{(l)} + (1-\beta_2) (g_t^{(l)})^2, \quad u_t^{(l)} = \frac{m_t^{(l)}}{\sqrt{v_t^{(l)}} + \eta}, \quad c_t^{(l)} = \mathrm{clip}\left(\frac{\|x_{t-1}^{(l)}\|}{\|u_t^{(l)}\|}, c_{\min}, c_{\max}\right), \quad x_t^{(l)} = x_{t-1}^{(l)} - \gamma c_t^{(l)} u_t^{(l)}.$ (1) Here $g_t^{(l)} = \nabla F(x_t; \xi_t)$, and $m_t^{(l)}$, $v_t^{(l)}$, $x_t^{(l)}$ denote the stochastic gradient, momentum, second moment (i.e., the variance), and the model parameters at the model's l-th layer at step t; $\beta_1$ and $\beta_2$ are the decaying factors; $\gamma$ is the learning rate; $\eta$ is an additive constant to avoid division by 0; $\mathrm{clip}(x, a, b) := \min\{\max\{x, a\}, b\}$ is the clipping operation^3; $c_t^{(l)}$ is a layer-wise scaling factor that regulates the update of $x_t^{(l)}$ into a certain range. One thing to note is that within each layer, each tensor (e.g., weight and bias) will have its own scaling coefficient $c_t^{(l)}$.
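A minimal sketch of one LAMB update for a single layer, following Eq. (1). This is an illustration in plain Python, not the paper's implementation; the hyperparameter values and the two-parameter toy layer are placeholders.

```python
import math

# Illustrative single-layer LAMB step (Eq. 1): Adam-style moments, then a
# layerwise trust-ratio clip(||x|| / ||u||, c_min, c_max) scaling the update.
def lamb_step(x, m, v, g, beta1=0.9, beta2=0.999, eta=1e-6, lr=0.01,
              c_min=0.1, c_max=10.0):
    m = [beta1 * mi + (1 - beta1) * gi for mi, gi in zip(m, g)]
    v = [beta2 * vi + (1 - beta2) * gi ** 2 for vi, gi in zip(v, g)]
    u = [mi / (math.sqrt(vi) + eta) for mi, vi in zip(m, v)]
    norm = lambda z: math.sqrt(sum(zi ** 2 for zi in z))
    c = min(max(norm(x) / norm(u), c_min), c_max)  # layerwise scaling factor
    x = [xi - lr * c * ui for xi, ui in zip(x, u)]
    return x, m, v

# One step for a toy "layer" with two parameters:
x, m, v = lamb_step(x=[1.0, -2.0], m=[0.0, 0.0], v=[0.0, 0.0], g=[0.1, 0.3])
print(x)
```

Note how the normalized update u has roughly unit-scale coordinates regardless of the raw gradient magnitude; the trust ratio c then rescales it relative to the parameter norm, which is the mechanism that keeps large-batch updates well-behaved layer by layer.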
The underlying intuition of LAMB's scaling coefficient is that when the update is relatively large compared to the parameter, we should apply a lower learning rate to that layer (and vice versa). 3 MOTIVATION AND INSIGHTS. 3.1 1-BIT ADAM IS NOT SUFFICIENT FOR LARGE-BATCH DISTRIBUTED TRAINING. 1-bit Adam demonstrates the same convergence speed as Adam for the BERT pre-training task with batch size 4K (Tang et al., 2021). On the other hand, the LAMB work shows that it is difficult to keep Adam's convergence speed at batch sizes 16K or larger for BERT pre-training (You et al., 2020). To find out whether 1-bit Adam is sufficient for large-batch distributed training, we perform a similar experiment using the BERT pre-training task at batch size 16K. Using You et al. (2020)'s training parameters and tuning procedure (details in Appendix A.1) for LAMB and Adam, we perform BERT pre-training with LAMB and 1-bit Adam, respectively. Then we use the two pre-trained BERT models to perform SQuAD 1.1 fine-tuning (details in Section 6). Results in Table 1 show that, similar to Adam, 1-bit Adam has slower convergence speed compared to LAMB at larger batch size. ^2 Here $(x)^2$, $\sqrt{x}$ and $x \odot y$ all denote element-wise operations. For simplicity weight decay is omitted. ^3 In the LAMB paper the clip function is only applied to $\|x_{t-1}^{(l)}\|$ without mentioning the exact clipping function configurations, and our experiments show that $\|x_{t-1}^{(l)}\|$ varies a lot among different layers. Thus we apply the clipping function to the whole ratio, which is more stable among different layers. With this clipping function we are able to achieve similar SQuAD accuracy compared to the original LAMB. | This work studies the problem of distributed training with large batches in a communication-bottlenecked setup, where regular versions of algorithms such as LAMB become a constraint.
The authors propose a new algorithm, which compresses the gradient momentum before aggregation and then reconstructs the gradients to recover the scaling coefficients for LAMB. This idea allows the authors to achieve faster convergence compared to naive compression for LAMB while maintaining higher communication efficiency than a non-compressed version. | SP:348371af70bb81a998b7dcfc8de2d60ea9b506e5 |
1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed | 1 INTRODUCTION. Training large-scale deep learning models in a distributed fashion is computation-heavy and expensive (Brown et al., 2020). In addition to computation, communication overhead becomes a serious system challenge for such large models. A recent study of BERT pre-training with Adam demonstrates that the allreduce communication can take up to 94% and 75% of total training time per step on clusters with Ethernet and InfiniBand inter-node connections, respectively (Tang et al., 2021). To achieve communication efficient distributed training, there are two promising directions: large batch optimization and communication compression. The LAMB optimizer, which can be viewed as Adam with adaptive layerwise learning rates, is an example of large batch optimization (You et al., 2020). LAMB can scale the batch size of BERT pre-training to 64K without losing accuracy, thereby greatly reducing the total training time, as larger batch sizes lead to less frequent communication. On the other hand, recent works on communication compression such as 1-bit Adam demonstrate that it is possible to combine 1-bit compression with Adam's convergence speed, thereby reducing BERT pre-training communication volume by 5x (Tang et al., 2021). Both LAMB and 1-bit Adam demonstrate great benefit for distributed training. Unfortunately, our studies show that simply using one of them is not sufficient to fully address the communication issue, especially under limited network bandwidth and a large number of GPUs/machines (Section 3). We find that communication is still a non-trivial overhead when running large-scale distributed training with LAMB, even with the larger batch sizes. A previous study shows that Adam provides slower convergence speed compared to LAMB at batch sizes 16K or larger for BERT pre-training (You et al., 2020).
Using the same methodology, our BERT experiments show that 1-bit Adam, similar to Adam, also has slower convergence speed compared to LAMB at batch size 16K. Even with communication compression, this batch size limitation would hurt communication efficiency when the number of GPUs/machines is large. LAMB and 1-bit Adam are two distinct optimizers. However, the techniques behind them are complementary: large batch optimization reduces the frequency of communication, and compression reduces the volume of communication. Motivated by this, we aim to combine LAMB's large batch optimization algorithm with the compression strategies behind 1-bit Adam. However, we find that they are not directly compatible due to LAMB's unique layerwise learning rate update strategy, which requires information that is missing when communication and optimizer states are compressed (Section 3). The studies and challenges above motivate us to design a new algorithm called 1-bit LAMB (Section 4). Learning from the insights behind 1-bit Adam, 1-bit LAMB is a 2-stage algorithm which uses LAMB (warmup stage) to "pre-condition" a communication-compressed momentum SGD algorithm (compression stage). At the compression stage, where the original LAMB algorithm cannot be used to update the layerwise learning rates, 1-bit LAMB employs a novel way to adaptively scale layerwise learning rates based on information from both the warmup and compression stages. As a result, 1-bit LAMB is able to achieve large batch optimization (LAMB)'s convergence speed under compressed communication, which is impossible using existing approaches. In addition to the 1-bit LAMB algorithm, we propose a new NCCL-based compressed communication backend which provides better usability and performance than previous work (Section 5). This backend can be applied to 1-bit LAMB, 1-bit Adam, and other communication compression algorithms.
We evaluate 1-bit LAMB using BERT pre-training and GLUE/SQuAD fine-tuning tasks ( Section 6 ) . Results show that under different batch sizes from 8K to 64K and with up to 256 GPUs , 1-bit LAMB with NCCL-based backend is able to achieve up to 4.6x communication volume reduction and up to 2.8x end-to-end time-wise speedup for BERT pre-training compared to uncompressed LAMB , together with the same sample-wise convergence speed and same GLUE/SQuAD fine-tuning task accuracy . The 1-bit LAMB optimizer as well as the NCCL-based communication backend has been open sourced in a deep learning optimization library ( name hidden to maintain anonymity ) . 2 RELATED WORK AND BACKGROUND . To achieve communication efficient distributed training , techniques include decentralization ( Lian et al. , 2017 ; Koloskova * et al. , 2020 ; Li et al. , 2018 ) , asynchronous communication ( Zheng et al. , 2016 ; Chaturapruek et al. , 2015 ) , and gradient compression/quantization which we focus on in this paper . Before communication , we could compress the original gradient g into Cω [ g ] , where Cω [ · ] is the compress operator1 . As a result the communication volume could be greatly reduced . Compression can be achieved by quantization , sparsification , sketching , etc . ( Ye & Abbe , 2018 ; Alistarh et al. , 2017 ; Agarwal et al. , 2018 ; Yu et al. , 2019 ; Spring et al. , 2019 ; Ivkin et al. , 2019 ; Shi et al. , 2021 ) . Several works focus on unbiased compression methods ( original and compressed tensors have the same expectation ) , such as centralized compressed parallel SGD ( Alistarh et al. , 2017 ) and many others ( Wangni et al. , 2018 ; Shen et al. , 2018 ; Zhang et al. , 2017 ; Wen et al. , 2017 ; Jiang & Agrawal , 2018 ) . On the other hand , recent works about biased compression methods demonstrate better compression rate and the same convergence rate by using an error compensation technique ( Seide et al. , 2014 ; Bernstein et al. , 2019 ; Stich et al. 
, 2018 ; Zheng et al. , 2019 ; Phuong & Phong , 2020 ; Yu et al. , 2019 ; Shi et al. , 2019 ; Ivkin et al. , 2019 ; Sun et al. , 2019 ; Basu et al. , 2019 ; Vogels et al. , 2019 ; Tang et al. , 2021 ) . The error-compensated compression is proposed in the 1-bit SGD work ( Seide et al. , 2014 ) : instead of compressing the gradient at each iteration directly , they compress the sum of the gradient and the last step ’ s compression error . By using error compensation the training can achieve promising convergence speed even with 1-bit compression ( representing the gradient by ±1 signs and a scale ) . Recent works provide theoretical guarantee of this method ( Bernstein et al. , 2019 ) , and also demonstrate that it admits the same asymptotic convergence rate as the uncompressed one ( Stich et al. , 2018 ) . In addition , error compensation method enables almost any compression methods ( Stich et al. , 2018 ) , either biased or unbiased , to converge as fast as the uncompressed case . 1Cω [ · ] could also include randomness . Adam ( Kingma & Ba , 2015 ) can be viewed as SGD with momentum and adaptive learning rate scaling on each coordinate of the gradient . It has demonstrated promising convergence speed and hyperparameter robustness on many deep learning tasks . Recently , Tang et al . ( 2021 ) proposed 1-bit Adam which combines the efficiency of error-compensated 1-bit compression with Adam ’ s convergence speed . They show that error-compensated compression does not work for Adam directly , because Adam is non-linearly dependent on the gradient ( the variance term ) . On the other hand , they find that Adam ’ s variance becomes stable at an early stage of training . To this end , they design a new 2-stage algorithm , 1-bit Adam : At warmup stage , vanilla Adam is used . At compression stage , they stop updating the variance and use it as a fixed precondition , and communicate based on the momentum applied with error-compensated 1-bit compression . 
Their experiments on up to 256 GPUs show that 1-bit Adam achieves the same convergence behaviour and final accuracy as Adam, together with up to 5x less communication volume and 3.3x faster end-to-end throughput. To further improve training efficiency at large scale, being able to support large minibatches while keeping the convergence speed is a critical factor. Recently, You et al. (2020) found that it is difficult to keep Adam's convergence speed at batch sizes 16K or larger for BERT pre-training. To this end they proposed LAMB, which can be viewed as Adam with adaptive layerwise learning rates. By using LAMB, they are able to scale the batch size of BERT pre-training to 64K without losing accuracy, thereby reducing the BERT training time from 3 days to around 76 minutes. The major idea of LAMB is that it utilizes a layerwise scaling coefficient to regulate the update of each layer, and the updating rule can be summarized as^2: $m_t^{(l)} = \beta_1 m_{t-1}^{(l)} + (1-\beta_1) g_t^{(l)}, \quad v_t^{(l)} = \beta_2 v_{t-1}^{(l)} + (1-\beta_2) (g_t^{(l)})^2, \quad u_t^{(l)} = \frac{m_t^{(l)}}{\sqrt{v_t^{(l)}} + \eta}, \quad c_t^{(l)} = \mathrm{clip}\left(\frac{\|x_{t-1}^{(l)}\|}{\|u_t^{(l)}\|}, c_{\min}, c_{\max}\right), \quad x_t^{(l)} = x_{t-1}^{(l)} - \gamma c_t^{(l)} u_t^{(l)}.$ (1) Here $g_t^{(l)} = \nabla F(x_t; \xi_t)$, and $m_t^{(l)}$, $v_t^{(l)}$, $x_t^{(l)}$ denote the stochastic gradient, momentum, second moment (i.e., the variance), and the model parameters at the model's l-th layer at step t; $\beta_1$ and $\beta_2$ are the decaying factors; $\gamma$ is the learning rate; $\eta$ is an additive constant to avoid division by 0; $\mathrm{clip}(x, a, b) := \min\{\max\{x, a\}, b\}$ is the clipping operation^3; $c_t^{(l)}$ is a layer-wise scaling factor that regulates the update of $x_t^{(l)}$ into a certain range. One thing to note is that within each layer, each tensor (e.g., weight and bias) will have its own scaling coefficient $c_t^{(l)}$.
The underlying intuition of LAMB's scaling coefficient is that when the update is relatively large compared to the parameter, we should apply a lower learning rate to that layer (and vice versa). 3 MOTIVATION AND INSIGHTS. 3.1 1-BIT ADAM IS NOT SUFFICIENT FOR LARGE-BATCH DISTRIBUTED TRAINING. 1-bit Adam demonstrates the same convergence speed as Adam for the BERT pre-training task with batch size 4K (Tang et al., 2021). On the other hand, the LAMB work shows that it is difficult to keep Adam's convergence speed at batch sizes 16K or larger for BERT pre-training (You et al., 2020). To find out whether 1-bit Adam is sufficient for large-batch distributed training, we perform a similar experiment using the BERT pre-training task at batch size 16K. Using You et al. (2020)'s training parameters and tuning procedure (details in Appendix A.1) for LAMB and Adam, we perform BERT pre-training with LAMB and 1-bit Adam, respectively. Then we use the two pre-trained BERT models to perform SQuAD 1.1 fine-tuning (details in Section 6). Results in Table 1 show that, similar to Adam, 1-bit Adam has slower convergence speed compared to LAMB at larger batch size. ^2 Here $(x)^2$, $\sqrt{x}$ and $x \odot y$ all denote element-wise operations. For simplicity weight decay is omitted. ^3 In the LAMB paper the clip function is only applied to $\|x_{t-1}^{(l)}\|$ without mentioning the exact clipping function configurations, and our experiments show that $\|x_{t-1}^{(l)}\|$ varies a lot among different layers. Thus we apply the clipping function to the whole ratio, which is more stable among different layers. With this clipping function we are able to achieve similar SQuAD accuracy compared to the original LAMB. | The paper proposes a communication-efficient distributed LAMB optimizer with 1-bit compression. It follows previous work to first warm up the variance, but proposes to infer the scaling factor based on reconstructed variance.
Experiments show training speedup due to communication compression. The proposed 1-bit LAMB achieves similar model performance to full-precision LAMB. | SP:348371af70bb81a998b7dcfc8de2d60ea9b506e5 |
Contact Points Discovery for Soft-Body Manipulations with Differentiable Physics | 1 INTRODUCTION . Soft body manipulation has a wide application in cooking ( Bollini et al. , 2013 ) , fabric manipulation ( Wu et al. , 2020 ) , healthcare ( Mayer et al. , 2008 ) and manufacturing of deformable objects ( Sanchez et al. , 2018 ) . Differentiable physics has recently been shown as a powerful and effective tool for solving control problems for soft-body manipulation tasks . As demonstrated in Huang et al . ( 2021 ) , given a parameterized manipulation policy , the differentiable physics solver computes the gradients of the policy parameters , enabling gradient-based optimization much more efficiently than reinforcement learning algorithms at finding optimal solutions for soft-body manipulation tasks on a diverse collection of environments . However , the performance of the stand-alone gradient-based solver can be heavily influenced by the policy initialization . Especially , the end effectors ’ initial contact points with objects play critical roles in the optimization . Different contact points may lead to vast differences in manipulation performance due to local optima . Besides , some tasks require agents to switch contact points during the manipulation , where the local optima issue becomes a serious bottleneck for completing these multi-stage tasks . For example , as shown in Figure 2 , an agent needs to control the capsule “ pen ” to sculpt two scribbles on the surface of a yellow plasticine cube . In order to complete the second line , the agent needs to switch contact points after drawing the first one . While the stand-alone differentiable physics solver could possibly draw the first line , it often gets stuck and struggles to draw the second one , due to the lack of gradients that push the pen to a new contact point to begin the second line . 
How to automatically find proper contact points for soft body manipulation tasks remains a challenge in differentiable physics. ^1 https://sites.google.com/view/cpdeformiclr2022 In this paper, we propose a principled framework, CPDeform, that integrates an optimal transport-based contact discovery method into differentiable physics to address this important challenge. CPDeform heuristically finds contact points for end effectors by using transport priorities computed from optimal transport to compare the current shape with the target shape, where soft-body manipulation is treated as a particle transportation problem. After finding contact points, CPDeform can be combined with the differentiable physics solver to solve soft body manipulation tasks. On single-stage tasks that do not require contact point switching, CPDeform can find suitable initial contact points to finish the task. On multi-stage tasks, using an example shown in Figure 1 (right) where the goal is to reshape a plasticine cube into an airplane, CPDeform can iteratively switch the contact points of end effectors based on transport priorities. This iterative deformation process is motivated by how humans manipulate plasticine. As shown in Figure 1 (left), when humans manipulate plasticine dough, they tend to repeatedly focus on a point of interest and modify it towards the target shape. CPDeform mimics this process by iteratively switching contact points of interest based on transport priorities and deforming the soft bodies into the target shape with the help of the differentiable solver. By integrating contact point discovery into the differentiable physics solver, CPDeform can skip over the local minima caused by contact switching and improve the performance of the stand-alone solver. To evaluate the effectiveness of CPDeform, we introduce PlasticineLab-M, which extends the existing differentiable physics benchmark PlasticineLab (Huang et al.
, 2021 ) to seven new challenging multistage soft body tasks . Extensive experimental results suggest that : on single-stage tasks where the vanilla differentiable physics solver performs sub-optimally or near-optimally in PlasticineLab , we find that the backbone of CPDeform , a contact point discovery method based on optimal transport , single-handedly performs better than or on par with the manipulation performance obtained with random-selected or human-defined contact points . On multi-stage tasks that are infeasible for the vanilla gradient-based solver , we find that CPDeform performs reasonably well in practice and the iterative deformation method equipped with contact point discovery could serve as an alternative to the expensive long-horizon searching algorithm . In summary , our work makes the following contributions : • We perform an in-depth study of local optimum issues of differentiable physics solver for initial contact points and switching contact points . • We propose a principled framework , CPDeform , that integrates optimal transport-based contact discovery into differentiable physics . • We find that the backbone of CPDeform , the contact point discovery method , can be directly employed by the stand-alone solver to find better initial contact points for single-stage tasks . • On multi-stage tasks , which are infeasible for the vanilla solver , CPDeform employs a heuristic searching approach to iteratively complete the tasks . 2 MOTIVATION . In this section , we provide an intuitive analysis of the drawback of the differentiable physics solver through motivating toy examples . We start with a brief review of how the differentiable physics solver could be employed to optimize manipulation policies . We then demonstrate how initial contact points affect the optimization performance . Finally , we take a simple but representative multi-stage task as an example and discuss why contact switching would often lead to local minima . 
We study a Writer task as shown in Figure 2(a). In this task, an agent needs to manipulate the capsule "pen" to initiate contact with the yellow plasticine cube, and sculpt a line scribble on the plasticine surface. The agent can move the tip of the pen along three dimensions. To solve this task with differentiable physics, we manually initialize the end-effector "pen" near a suitable contact point that allows the "pen" to initiate contact with the plasticine. We then parameterize the desired motion trajectory of the pen as a sequence of three-dimensional actions $\{a_1, \ldots, a_T\}$, where T is the number of simulation steps. Let $\{s_t\}_{0 \le t \le T}$ be the simulation states at different time steps, which include the state of the plasticine and the manipulator. The differentiable simulator starts from the initial state $s_0$ and completes the action sequence by repeatedly executing the forward function $s_{t+1} = \phi(s_t, a_{t+1})$ until the ending state $s_T$ has been reached. The objective of this optimizer is to minimize the distance between the current shape and the target shape. We represent the objective as a loss function $L(s_T, g)$, where g is the target shape. Since the simulation forward function $\phi$ is fully differentiable, we can compute the gradients $\partial L / \partial a_t$ of the loss L with respect to each action $a_t$, and run multiple gradient descent steps to optimize the action sequence by $a_t = a_t - \alpha \, \partial L / \partial a_t$, where $\alpha$ is the learning rate. As shown in Figure 2(b), we can see that the agent succeeds at sculpting the target scribble by moving the "pen" downwards. We refer the readers to Algorithm 2 in Appendix D for more details on differentiable physics for controller optimization. However, if we do not initialize the positions of the end effectors well, the solver gets stuck in local minima.
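The gradient-descent-on-actions loop described here can be illustrated with a toy 1-D analogue (an assumption for illustration, not PlasticineLab's simulator): the forward function is phi(s, a) = s + a and the loss is (s_T - g)^2, so the analytic gradient with respect to every action is 2(s_T - g), and plain gradient descent on the action sequence drives the final state to the goal.

```python
# Toy differentiable "simulator": scalar state, phi(s, a) = s + a,
# loss L = (s_T - g)^2. Because phi is differentiable, dL/da_t = 2*(s_T - g)
# for every t, and we can optimize the whole action sequence by gradient descent.

def rollout(s0, actions):
    s = s0
    for a in actions:
        s = s + a          # s_{t+1} = phi(s_t, a_{t+1})
    return s

def optimize(s0, goal, T=5, lr=0.05, steps=100):
    actions = [0.0] * T
    for _ in range(steps):
        sT = rollout(s0, actions)
        grad = 2.0 * (sT - goal)              # dL/da_t, identical for every t here
        actions = [a - lr * grad for a in actions]
    return actions, rollout(s0, actions)

actions, sT = optimize(s0=0.0, goal=3.0)
print(sT)  # converges toward the goal 3.0
```

In this linear toy problem the loss is convex so gradient descent always succeeds; the point of the surrounding discussion is that real soft-body losses are not convex in the contact configuration, which is exactly where the stand-alone solver gets trapped.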
For example, in the task shown in Figure 3, we illustrate the optimization outcomes for end effectors with different contact points by showing their corresponding resulting shapes. Even with an arbitrarily large number of steps T, the gradient-based solver is unable to discover a policy that moves away from the local optimum to a new contact point that allows for task completion. Such phenomena are commonly observed across soft body manipulation tasks using the differentiable physics solver (Huang et al., 2021). When end effectors are far away from the region of interest, it is unlikely that the gradient could push the end effectors towards the desired region. This observation poses the question of how to efficiently find optimal contact points to place the end effectors. The local minima problem caused by inappropriate contact points becomes a more serious issue in multi-stage tasks. Taking the multi-stage writer task illustrated in Figure 2(c) as an example, the agent now needs to write an additional straight line on the plasticine surface by switching its contact point. Even with a well-initialized contact point for the first line, the solver is unable to relocate to the new region of interest for the upcoming line. We observe that, differing from the vanilla differentiable physics solver, humans tend to employ an explicit "iterative deformation" schema to complete such a task. Humans would decompose this task into two stages. In each stage, we tend to iteratively derive the correspondence between the current shape and the target shape to arrive at useful contact points from observations, and then subsequently move the "pen" and write the lines. This motivates us to combine contact point discovery with iterative deformation. 3 METHOD.
In this section , we introduce CPDeform , a principled framework that integrates optimal transport-based contact discovery into differentiable physics for solving challenging soft-body manipulation tasks . We first describe our contact point discovery method in relation to the transport priorities found by optimal transport in Section 3.1 . In Section 3.2 , we describe how transport priorities can be used to place end effectors . Finally , in Section 3.3 , we show how CPDeform integrates contact point discovery with differentiable physics and iteratively deforms the soft bodies for multi-stage tasks . 3.1 OPTIMAL TRANSPORT AND CONTACT POINT DISCOVERY . One way to consider soft-body manipulation is by treating it as a particle transportation problem . By evaluating the cost of transporting the current state particles µ to the target state particles ν , optimal transport provides a useful framework for comparing differences between any given pair of shapes , which can guide us to discover contact points . Let all particles be weighted equally in our simulator . Given a cost matrix M , optimal transport finds a transportation plan P in the transportation polytope U by minimizing the transportation cost min_{P∈U} ⟨P , M⟩ . Casting the problem into its dual form , we have OT ( α , β ) := E_µ [ f ] + E_ν [ g ] such that the Lagrange multipliers f_i , g_j satisfy f_i + g_j ≤ M_ij for all i , j , where α , β are the mass vectors for the particles in µ and ν , respectively . We refer the reader to Appendix B for more details on optimal transport . We focus on the Lagrange multipliers f of the source particles , which we refer to as the dual potentials . Since f is defined on the support of the source measure , we interpret f as the transport priorities for the source particles µ . Transport priorities are helpful for selecting contact points .
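As a minimal sketch of how the dual potentials could be computed in practice , the following uses entropic regularization ( Sinkhorn iterations ) on two small 1-D particle sets ; the regularization strength , squared-distance cost , and particle data are illustrative assumptions , not the paper's exact solver :

```python
import math

def sinkhorn_potentials(src, tgt, eps=1.0, iters=200):
    """Entropic OT between two equally weighted 1-D particle sets.
    Returns the dual potentials f of the source particles."""
    n, m = len(src), len(tgt)
    a = [1.0 / n] * n                                   # source masses (alpha)
    b = [1.0 / m] * m                                   # target masses (beta)
    M = [[(x - y) ** 2 for y in tgt] for x in src]      # cost matrix M_ij
    K = [[math.exp(-c / eps) for c in row] for row in M]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):                              # Sinkhorn fixed point
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [eps * math.log(ui) for ui in u]             # f_i = eps * log u_i

# The source particle far from all target mass gets the highest transport
# priority, so it marks the suggested contact region.
f = sinkhorn_potentials(src=[0.0, 1.0, 5.0], tgt=[0.0, 1.0, 1.2])
print(f.index(max(f)))  # -> 2
```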
Given a pair of current and target soft-body shapes , we would intuitively place the end effectors around the region of the largest difference between the two shapes , in order to substantially modify the shapes . This observation leads us to place the end effectors at contact points whose corresponding optimal manipulation policies can minimize the difference between the current and target shapes . However , given the computation budget , it is not easy to directly evaluate the optimality of a contact point by enumerating a set of candidate contact points . Thus , we propose to use optimal transport priorities to heuristically identify contact points , based on a simple rule of selecting contact points with high transport priorities . We observe that contact points with high transport priorities largely correspond to superior optimization performance . | The paper proposed an algorithm to discover appropriate contact points for deformable object manipulation. A key component of the proposed algorithm is to use an optimal-transport approach that computes a transport priority score for each particle in the deformable body. This score is then used to guide a grid search procedure to determine the best initial contact point for the manipulation. For multiple manipulators, a pre-defined set of poses is enumerated to find the best pose to use. The proposed algorithm is evaluated on tasks that require a single manipulation motion or a sequence of motions to shape a deformable object into desired shapes. The result shows improved performance compared to prior methods and enables completion of novel tasks on which prior methods failed. | SP:92e58feb55f1f058d36bac600b9f8f196fe4cc43 |
Contact Points Discovery for Soft-Body Manipulations with Differentiable Physics | 1 INTRODUCTION . Soft-body manipulation has wide applications in cooking ( Bollini et al. , 2013 ) , fabric manipulation ( Wu et al. , 2020 ) , healthcare ( Mayer et al. , 2008 ) and the manufacturing of deformable objects ( Sanchez et al. , 2018 ) . Differentiable physics has recently been shown to be a powerful and effective tool for solving control problems in soft-body manipulation tasks . As demonstrated in Huang et al . ( 2021 ) , given a parameterized manipulation policy , the differentiable physics solver computes the gradients of the policy parameters , enabling gradient-based optimization that finds optimal solutions for soft-body manipulation tasks much more efficiently than reinforcement learning algorithms on a diverse collection of environments . However , the performance of the stand-alone gradient-based solver can be heavily influenced by the policy initialization . In particular , the end effectors ’ initial contact points with objects play a critical role in the optimization . Different contact points may lead to vast differences in manipulation performance due to local optima . Moreover , some tasks require agents to switch contact points during the manipulation , where the local optima issue becomes a serious bottleneck for completing these multi-stage tasks . For example , as shown in Figure 2 , an agent needs to control the capsule “ pen ” to sculpt two scribbles on the surface of a yellow plasticine cube . In order to complete the second line , the agent needs to switch contact points after drawing the first one . While the stand-alone differentiable physics solver could possibly draw the first line , it often gets stuck and struggles to draw the second one , due to the lack of gradients that push the pen to a new contact point to begin the second line .
How to automatically find proper contact points for soft-body manipulation tasks remains a challenge in differentiable physics . ( Project page : https://sites.google.com/view/cpdeformiclr2022 ) In this paper , we propose a principled framework that integrates an optimal transport-based contact discovery method into differentiable physics ( CPDeform ) to address this important challenge . CPDeform heuristically finds contact points for end effectors by using transport priorities computed from optimal transport to compare the current shape with the target shape , where soft-body manipulation is treated as a particle transportation problem . After finding contact points , CPDeform can be combined with the differentiable physics solver to solve soft-body manipulation tasks . On single-stage tasks that do not require contact point switching , CPDeform can find suitable initial contact points to finish the task . On multi-stage tasks , taking the example shown in Figure 1 ( right ) where the goal is to reshape a plasticine cube into an airplane , CPDeform can iteratively switch the contact points of the end effectors based on transport priorities . This iterative deformation process is motivated by how humans manipulate plasticine . As shown in Figure 1 ( left ) , when humans manipulate a plasticine dough , they tend to repeatedly focus on the point of interest and modify it towards the target shape . CPDeform mimics this process by iteratively switching contact points of interest based on transport priorities and deforming the soft bodies into the target shape with the help of the differentiable solver . By integrating contact point discovery into the differentiable physics solver , CPDeform can skip over the local minima caused by contact switching and improve the performance of the stand-alone solver . To evaluate the effectiveness of CPDeform , we introduce PlasticineLab-M , which extends the existing differentiable physics benchmark PlasticineLab ( Huang et al.
, 2021 ) to seven new challenging multi-stage soft-body tasks . Extensive experimental results suggest that : on single-stage tasks where the vanilla differentiable physics solver performs sub-optimally or near-optimally in PlasticineLab , we find that the backbone of CPDeform , a contact point discovery method based on optimal transport , single-handedly performs better than or on par with the manipulation performance obtained with randomly selected or human-defined contact points . On multi-stage tasks that are infeasible for the vanilla gradient-based solver , we find that CPDeform performs reasonably well in practice , and the iterative deformation method equipped with contact point discovery could serve as an alternative to expensive long-horizon search algorithms . In summary , our work makes the following contributions : • We perform an in-depth study of the local-optimum issues of the differentiable physics solver with respect to initial contact points and switching contact points . • We propose a principled framework , CPDeform , that integrates optimal transport-based contact discovery into differentiable physics . • We find that the backbone of CPDeform , the contact point discovery method , can be directly employed by the stand-alone solver to find better initial contact points for single-stage tasks . • On multi-stage tasks , which are infeasible for the vanilla solver , CPDeform employs a heuristic search approach to iteratively complete the tasks . 2 MOTIVATION . In this section , we provide an intuitive analysis of the drawbacks of the differentiable physics solver through motivating toy examples . We start with a brief review of how the differentiable physics solver can be employed to optimize manipulation policies . We then demonstrate how initial contact points affect the optimization performance . Finally , we take a simple but representative multi-stage task as an example and discuss why contact switching often leads to local minima .
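The iterative procedure that CPDeform follows ( compute transport priorities , place the end effector at the highest-priority contact point , then run the gradient-based solver for one stage ) can be outlined as below . Both components are stubbed out by hypothetical toy stand-ins : `priority` replaces the OT dual potentials and `run_solver` replaces differentiable-physics optimization .

```python
def priority(p, target):
    # Hypothetical stand-in for the OT transport priority of particle p:
    # its distance to the nearest target particle.
    return min(abs(p - t) for t in target)

def run_solver(state, idx, target):
    # Stub for one stage of gradient-based optimization: move the chosen
    # particle to its nearest target position.
    nearest = min(target, key=lambda t: abs(state[idx] - t))
    state = list(state)
    state[idx] = nearest
    return state

def cpdeform(state, target, max_stages=10, tol=1e-9):
    for _ in range(max_stages):
        pri = [priority(p, target) for p in state]
        if max(pri) < tol:               # current shape matches the target
            break
        idx = pri.index(max(pri))        # contact point = highest priority
        state = run_solver(state, idx, target)
    return state

print(sorted(cpdeform([0.0, 2.0, 7.0], [0.0, 2.0, 3.0])))  # -> [0.0, 2.0, 3.0]
```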
We study a Writer task as shown in Figure 2 ( a ) . In this task , an agent needs to manipulate the capsule “ pen ” to initiate contact with the yellow plasticine cube , and sculpt a line scribble on the plasticine surface . The agent can move the tip of the pen along three dimensions . To solve this task with differentiable physics , we manually initialize the end-effector “ pen ” near a suitable contact point that allows the “ pen ” to initiate contact with the plasticine . We then parameterize the desired motion trajectory of the pen as a sequence of three-dimensional actions { a_1 , . . . , a_T } where T is the number of simulation steps . Let { s_t } 0≤t≤T be the simulation states at different time steps , which include the state of the plasticine and the manipulator . The differentiable simulator starts from the initial state s_0 and executes the action sequence by repeatedly applying the forward function s_{t+1} = φ ( s_t , a_{t+1} ) until the ending state s_T has been reached . The objective of this optimizer is to minimize the distance between the current shape and the target shape . We represent the objective as a loss function L ( s_T , g ) where g is the target shape . Since the simulation forward function φ is fully differentiable , we can compute the gradients ∂L/∂a_t of the loss L with respect to each action a_t , and run multiple gradient descent steps to optimize the action sequence via a_t ← a_t − α ∂L/∂a_t , where α is the learning rate . As shown in Figure 2 ( b ) , we can see that the agent succeeds at sculpting the target scribble by moving the “ pen ” downwards . We refer the readers to Algorithm 2 in Appendix D for more details on differentiable physics for controller optimization . However , if we do not initialize the positions of the end effectors well , the solver gets stuck in a local minimum .
For example , in the task shown in Figure 3 , we illustrate the optimization outcomes for end effectors with different contact points by showing their corresponding resulting shapes . Even given an arbitrarily large number of steps T , the gradient-based solver is unable to discover a policy that moves away from the local optimum to a new contact point that allows for task completion . Such phenomena are commonly observed across soft-body manipulation tasks using the differentiable physics solver ( Huang et al. , 2021 ) . When end effectors are far away from the region of interest , it is unlikely that the gradient could push the end effectors towards the desired region . This observation poses the question of how to efficiently find optimal contact points at which to place the end effectors . The local minima problem caused by inappropriate contact points becomes a more serious issue in multi-stage tasks . Taking the multi-stage Writer task illustrated in Figure 2 ( c ) as an example , the agent now needs to write an additional straight line on the plasticine surface by switching its contact point . Even with a well-initialized contact point for the first line , the solver is unable to relocate to the new region of interest for the upcoming line . We observe that , differing from the vanilla differentiable physics solver , humans tend to employ an explicit “ iterative deformation ” scheme to complete such a task . Humans would decompose this task into two stages . In each stage , we tend to iteratively derive the correspondence between the current shape and the target shape from observations to arrive at useful contact points , and then subsequently move the “ pen ” and write the lines . This motivates us to combine contact point discovery with iterative deformation . 3 METHOD .
In this section , we introduce CPDeform , a principled framework that integrates optimal transport-based contact discovery into differentiable physics for solving challenging soft-body manipulation tasks . We first describe our contact point discovery method in relation to the transport priorities found by optimal transport in Section 3.1 . In Section 3.2 , we describe how transport priorities can be used to place end effectors . Finally , in Section 3.3 , we show how CPDeform integrates contact point discovery with differentiable physics and iteratively deforms the soft bodies for multi-stage tasks . 3.1 OPTIMAL TRANSPORT AND CONTACT POINT DISCOVERY . One way to consider soft-body manipulation is by treating it as a particle transportation problem . By evaluating the cost of transporting the current state particles µ to the target state particles ν , optimal transport provides a useful framework for comparing differences between any given pair of shapes , which can guide us to discover contact points . Let all particles be weighted equally in our simulator . Given a cost matrix M , optimal transport finds a transportation plan P in the transportation polytope U by minimizing the transportation cost min_{P∈U} ⟨P , M⟩ . Casting the problem into its dual form , we have OT ( α , β ) := E_µ [ f ] + E_ν [ g ] such that the Lagrange multipliers f_i , g_j satisfy f_i + g_j ≤ M_ij for all i , j , where α , β are the mass vectors for the particles in µ and ν , respectively . We refer the reader to Appendix B for more details on optimal transport . We focus on the Lagrange multipliers f of the source particles , which we refer to as the dual potentials . Since f is defined on the support of the source measure , we interpret f as the transport priorities for the source particles µ . Transport priorities are helpful for selecting contact points .
Given a pair of current and target soft-body shapes , we would intuitively place the end effectors around the region of the largest difference between the two shapes , in order to substantially modify the shapes . This observation leads us to place the end effectors at contact points whose corresponding optimal manipulation policies can minimize the difference between the current and target shapes . However , given the computation budget , it is not easy to directly evaluate the optimality of a contact point by enumerating a set of candidate contact points . Thus , we propose to use optimal transport priorities to heuristically identify contact points , based on a simple rule of selecting contact points with high transport priorities . We observe that contact points with high transport priorities largely correspond to superior optimization performance . | This paper proposes a method to solve multi-stage manipulation tasks for soft materials. Differentiable physics has been used in controlling and manipulating soft materials since DiffTaichi. However, optimization methods based on local gradient information can be easily trapped in local minima. This paper divides a long task into small stages, within which the contact area will remain unchanged. The contact points are selected by choosing the regions with the largest transport priorities. The end effectors are then placed with poses from the candidate set. After running trajectory optimization using physics gradients, the final pose is selected as the one with minimum loss. | SP:92e58feb55f1f058d36bac600b9f8f196fe4cc43 |
Accuracy-Privacy Trade-off in Deep Ensemble: A Membership Inference Perspective | 1 INTRODUCTION . Ensemble learning has been shown to improve the classification accuracy of neural networks in particular , and machine learning classifiers in general ( Kondratyuk et al. , 2020 ; Kuncheva & Whitaker , 2003 ; Sagi & Rokach , 2018 ) . The most commonly used approach for deep models involves averaging the outputs of multiple neural networks ( NN ) that are independently trained on the same dataset with different random initializations , called a deep ensemble ( Lobacheva et al. , 2020 ) . Such a simple approach has been extensively used in practice to improve accuracy ( Lee et al. , 2015 ; Wang et al. , 2020 ) . Notably , a majority of the top performers in machine learning benchmarks , such as the ImageNet Large Scale Visual Recognition Challenge ( Russakovsky et al. , 2015 ) , have adopted some form of ensemble learning ( Lee et al. , 2015 ; Szegedy et al. , 2015 ; He et al. , 2016 ) . Other forms of ensemble learning ( different from deep ensembles ) , such as partitioning , have also been used to defend against privacy-harming membership inference ( MI ) attacks , where the goal of an attacker is to infer whether a sample has been used to train a model , i.e. , whether the sample belongs to the train set . Membership inference attacks generally use the prediction confidence of NN models to infer the membership status of a sample ( Salem et al. , 2018 ; Shokri et al. , 2017 ; Truex et al. , 2019 ; Yeom et al. , 2018 ) by leveraging the insight that trained models may output higher prediction confidence on train samples than on non-train samples ( Choo et al. , 2020 ) . The intuition behind using ensemble learning approaches , like partitioning , to defend against MI attacks is that training each model on a different subset of data makes the ensemble less prone to overfitting ( Salem et al. , 2018 ) .
This idea of using ensemble learning to defend against MI attacks has since been discussed in the literature ( Huang et al. , 2020 ; Li et al. , 2021 ; Rahimian et al. , 2020 ; Yang et al. , 2020 ) . However , none of these papers theoretically or empirically demonstrates the usefulness of ensemble learning as a defense mechanism . In this paper , we show that these two goals of ensemble learning , namely improving accuracy and defending against MI attacks , do not trivially combine into a unified solution . Figure 1 illustrates the accuracy-privacy trade-off by plotting accuracy and membership inference attack effectiveness for ensembles comprising varying numbers of base models ( 1 , 2 , 5 , and 10 ) that are trained for different numbers of epochs ( 5 , 45 , and 90 ) . We make two key observations here . First , there is an increase in both accuracy and MI attack effectiveness as we go from a single model to ensembles comprising an increasing number of base models . The trade-off is more noticeable for more accurate models trained for a larger number of epochs . Second , we can adapt the design of ensembles to suitably navigate the trade-off between accuracy and privacy . Specifically , the two extreme cases are to : ( 1 ) maximize accuracy by using an ensemble of highly accurate models but at the cost of worse privacy1 ( purple arrow ) ; and ( 2 ) maximize privacy by intentionally using an ensemble of under-fitted models instead of a single model but at the cost of accuracy ( brown arrow ) . To understand the root cause of this trade-off , we show that using deep ensembles to improve accuracy exacerbates their susceptibility to membership inference attacks by making train and non-train samples more distinguishable . By analyzing the confidence averaging mechanism of deep ensembles , we investigate potential factors that enable membership inference . We show that the most influential factor is the level of correct agreement among models .
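A small simulation illustrates why the level of correct agreement matters : if each base model classifies train samples correctly more often than test samples , averaging confidences over more models makes the two confidence distributions easier to separate with a simple threshold attack . All numbers below ( per-model confidences 0.9 / 0.2 , agreement rates , sample counts ) are illustrative assumptions , not measurements from the paper :

```python
import random

def avg_confidence(p_correct, k, rng):
    """Averaged true-class confidence of a k-model ensemble for one sample:
    each base model outputs 0.9 when it classifies the sample correctly
    (with probability p_correct) and 0.2 otherwise."""
    return sum(0.9 if rng.random() < p_correct else 0.2 for _ in range(k)) / k

def attack_accuracy(k, n=2000, p_train=0.95, p_test=0.70, seed=0):
    """Balanced accuracy of a simple threshold attack on averaged confidences."""
    rng = random.Random(seed)
    members = [avg_confidence(p_train, k, rng) for _ in range(n)]
    nonmembers = [avg_confidence(p_test, k, rng) for _ in range(n)]
    best = 0.0
    for i in range(101):                       # sweep a threshold grid
        thr = i / 100
        tpr = sum(c > thr for c in members) / n
        fpr = sum(c > thr for c in nonmembers) / n
        best = max(best, (tpr + 1 - fpr) / 2)
    return best

print("k=1 :", attack_accuracy(k=1))   # single model
print("k=10:", attack_accuracy(k=10))  # deep ensemble: noticeably higher
```

The mean confidence gap is the same in both cases ; averaging over more models shrinks the variance of each distribution , which is what makes members and non-members more distinguishable .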
Simply put , the number of models that correctly classify a train sample is often greater than the number that correctly classify a test sample . This results in a wider confidence gap between train and non-train samples when confidence values are averaged , enabling more effective membership inference attacks . We further show that the difference in the level of correct agreement between train and non-train samples is correlated with the models ’ generalization gap . Hence , a natural question to ask is “ can deep ensembles that use less overfitted models mitigate privacy issues while achieving high accuracy ? ” . To answer this question , we study several regularization techniques , common membership inference defenses , and a few other ensembling approaches . We again observe a privacy-accuracy trade-off pattern similar to that shown in Figure 1 . Summary of contributions : In this paper , we perform a systematic empirical study of MI attacks on deep ensemble models . First , we show that when deep ensembles improve accuracy , they also shift the prediction-confidence distributions of train and test samples apart , which in turn enables more effective membership inference . Second , we analyze various factors that potentially cause the prediction confidence of train and non-train samples to diverge . Among potential factors , we show that the most dominant factor is the level of correct agreement among models , which indicates that more models in an ensemble agree on their prediction when a sample is a training sample . Hence , the aggregation of their predictions yields a higher confidence output in comparison with non-train samples . We show that common defense mechanisms in the membership inference literature , including differential privacy , MMD+Mixup , L1 and L2 regularization , as well as other ensemble training approaches , such as bagging , partitioning , and stacking ( Salem et al.
, 2018 ) , can be used to mitigate the effectiveness of MI attacks , but at the cost of accuracy . ( Footnote 1 : For complicated tasks , such as image classification , the common practice is to train deep models for a large number of epochs and avoid under-fitted models . That is because memorizing samples from long-tailed subpopulations has been shown to be necessary to achieve close-to-optimal generalization error ( Feldman , 2020 ) . ) Although the main focus of the paper is on deep ensembles , we also cover bagging , partitioning , stacking ( Salem et al. , 2018 ) , logit averaging ( Appendix A.3 ) , weighted averaging ( Appendix A.4 ) , as well as more advanced and state-of-the-art ensembling techniques , such as snapshot ensembles ( Huang et al. , 2017 ) and diversified ensemble networks ( Zhang et al. , 2020 ) ( Appendix A.5 ) . We observe a similar trade-off . 2 SYSTEM MODEL . 2.1 ENSEMBLE LEARNING . Background . In the literature , ensemble learning refers to various approaches that combine multiple models to make a prediction . Models used to construct an ensemble are often called base learners . There are two main factors in constructing an ensemble ( Sagi & Rokach , 2018 ) : 1 ) how base learners are trained to ensure diversity , such as random initialization , bagging , partitioning , etc. , and 2 ) how the outputs of base learners are fused to obtain the final output , including majority voting , confidence averaging , stacking , etc . Unlike ensembles of traditional machine learning algorithms , in a deep ensemble , the main source of diversity often comes only from the random initialization of base learners ( Fort et al. , 2019 ) . In fact , other sources of diversity , such as bagging , have been shown to considerably degrade the overall accuracy of a deep ensemble ( Lee et al. , 2015 ; Lakshminarayanan et al. , 2017 ) . System Model . We mainly focus on the most widely used deep ensemble ( Kondratyuk et al. , 2020 ) unless otherwise specified .
In this model , 1 ) base models are trained with random initialization on the same training dataset , and 2 ) their prediction confidences are fused through averaging . A less common approach is to average model logits , which has been used in a few studies ( Webb et al. , 2020 ; Wang et al. , 2020 ) . See Appendix A.3 for an experimental evaluation of logit averaging and A.4 for weighted averaging ensembles . We also evaluate two state-of-the-art deep ensembling approaches , namely snapshot ensemble and diversified ensemble network , in Appendix A.5 . Other general ensembling approaches , such as bagging , partitioning , and stacking ( Salem et al. , 2018 ) , are studied as defense mechanisms because they degrade accuracy but improve protection against MI attacks . 2.2 MEMBERSHIP INFERENCE . Background . Membership inference is a form of privacy leakage where the goal is to determine if a sample was used during the training of a target model . Samples used during training are often referred to as member or train samples , and other samples are referred to as non-member , non-train , or test samples . The first MI attack on neural networks was proposed in Shokri et al . ( 2017 ) , where the attacker trains an attack classifier to predict the membership status . The attack classifier takes the prediction confidence of a target model as an input . Assuming that the attacker has access to a dataset with a similar distribution , she trains a set of shadow models to mimic the target model . Since the membership status of the data with which the shadow models are trained is known to the attacker , she can use the data to train the attack classifier . Many papers use the same idea with different variations or less restrictive assumptions ( Salem et al. , 2018 ; Liu et al. , 2019 ; Song et al. , 2019 ; Long et al. , 2017 ; Truex et al. , 2019 ; Long et al. , 2018 ; Yeom et al. , 2018 ; Rezaei & Liu , 2021 ; Zou et al. , 2020 ; Li & Zhang , 2020 ) .
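The fusion step of this system model is simply an element-wise mean of the base models ' per-class confidence vectors , as in this minimal sketch ( the three probability vectors are made up for illustration ) :

```python
def ensemble_average(confidences):
    """Fuse base-model predictions by averaging per-class confidence vectors."""
    k = len(confidences)
    n_classes = len(confidences[0])
    return [sum(c[i] for c in confidences) / k for i in range(n_classes)]

preds = [[0.7, 0.2, 0.1],   # base model 1
         [0.6, 0.3, 0.1],   # base model 2
         [0.8, 0.1, 0.1]]   # base model 3
avg = ensemble_average(preds)
print(avg)  # roughly [0.7, 0.2, 0.1]; the ensemble prediction is the argmax
```

The less common logit-averaging variant mentioned above would average pre-softmax logits instead and apply softmax once to the result .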
Most previous work builds upon the idea of using prediction confidence to infer the membership status , except for Rezaei & Liu ( 2021 ) ; Choo et al . ( 2020 ) ; Rahimian et al . ( 2020 ) . In Rezaei & Liu ( 2021 ) , the authors assumed white-box access to the target model and launched a series of MI attacks based on confidence values , distance to the decision boundary , gradients w.r.t . model weights , and gradients w.r.t . the input . In Choo et al . ( 2020 ) , the authors proposed two attacks based on input transformation and distance to the boundary in a black-box setting . Similarly , in Rahimian et al . ( 2020 ) , the attacker randomly perturbs an input to obtain a set of random transformations of the input and uses the predicted labels to infer membership status . System Model . Since most existing attacks use confidence values , we first focus on how confidence values change when using deep ensembles . We show that when using deep ensembles , the distribution of confidence values becomes more distinguishable between the train and non-train sets in comparison with the non-ensemble case . Consequently , any MI attack that relies on confidence values would be more effective on deep ensembles . Since our goal is to show a trade-off between accuracy and privacy , not to show which confidence-based attack slightly outperforms another , we focus on a confidence-based attack proposed in Rezaei & Liu ( 2021 ) in both white-box and black-box settings . Here , white-box means the attacker has access to the base learners ’ outputs before aggregation , and black-box means the attacker has only access to the aggregated confidence output . Decision boundary-based attacks are extremely computation- and query-inefficient , and it is not trivial to adapt them to ensemble learning , where essentially an input is copied n times and then fed to n models . The gradient-based approach of Rezaei & Liu ( 2021 ) also needs full knowledge of the entire deep ensemble .
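The white-box / black-box distinction above amounts to which features the attacker can extract from the ensemble 's outputs . A hypothetical sketch ( the specific feature choices below are assumptions for illustration , not the exact features used by Rezaei & Liu ( 2021 ) ) :

```python
def black_box_feature(per_model_conf):
    """Black-box attacker: only the aggregated (averaged) confidence."""
    return [sum(per_model_conf) / len(per_model_conf)]

def white_box_feature(per_model_conf):
    """White-box attacker: also sees per-model outputs before aggregation,
    e.g. the number of confident base models (a stand-in for the level of
    correct agreement)."""
    mean = sum(per_model_conf) / len(per_model_conf)
    agree = sum(c > 0.5 for c in per_model_conf)
    return [mean, agree]

confs = [0.9, 0.85, 0.2, 0.95]         # true-class confidence of 4 base models
print(black_box_feature(confs))
print(white_box_feature(confs))         # the extra agreement feature is 3
```

Either feature vector would then be fed to a threshold rule or an attack classifier to predict membership status .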
We consider adaptation of non-confidence-based attacks for deep ensembles as future work . | This work analyzes the accuracy-privacy trade-off in ensemble learning by performing model inference attacks. The key finding of the paper is that the presence of an ensemble (that averages the predictions of individual learners) exacerbates the disparity between the confidence distribution of samples that were seen during training v/s those that weren't. They highlight how the main reason for this observation is the reduced agreement between base models for data points that were not seen during training. There is some evaluation of prior membership inference defenses in the ensemble setting. | SP:a492824ed04e34de0d1a54373e4cc15348c14a45 |
Accuracy-Privacy Trade-off in Deep Ensemble: A Membership Inference Perspective | 1 INTRODUCTION . Ensemble learning has been shown to improve the classification accuracy of neural networks in particular , and machine learning classifiers in general ( Kondratyuk et al. , 2020 ; Kuncheva & Whitaker , 2003 ; Sagi & Rokach , 2018 ) . The most commonly used approach for deep models involves averaging the outputs of multiple neural networks ( NN ) that are independently trained on the same dataset with different random initializations , called a deep ensemble ( Lobacheva et al. , 2020 ) . Such a simple approach has been extensively used in practice to improve accuracy ( Lee et al. , 2015 ; Wang et al. , 2020 ) . Notably , a majority of the top performers in machine learning benchmarks , such as the ImageNet Large Scale Visual Recognition Challenge ( Russakovsky et al. , 2015 ) , have adopted some form of ensemble learning ( Lee et al. , 2015 ; Szegedy et al. , 2015 ; He et al. , 2016 ) . Other forms of ensemble learning ( different from deep ensembles ) , such as partitioning , have also been used to defend against privacy-harming membership inference ( MI ) attacks , where the goal of an attacker is to infer whether a sample has been used to train a model , i.e. , whether the sample belongs to the train set . Membership inference attacks generally use the prediction confidence of NN models to infer the membership status of a sample ( Salem et al. , 2018 ; Shokri et al. , 2017 ; Truex et al. , 2019 ; Yeom et al. , 2018 ) by leveraging the insight that trained models may output higher prediction confidence on train samples than on non-train samples ( Choo et al. , 2020 ) . The intuition behind using ensemble learning approaches , like partitioning , to defend against MI attacks is that training each model on a different subset of data makes the ensemble less prone to overfitting ( Salem et al. , 2018 ) .
This idea of using ensemble learning to defend against MI attacks has since been discussed in the literature ( Huang et al. , 2020 ; Li et al. , 2021 ; Rahimian et al. , 2020 ; Yang et al. , 2020 ) . However , none of these papers theoretically or empirically demonstrates the usefulness of ensemble learning as a defense mechanism . In this paper , we show that these two goals of ensemble learning , namely improving accuracy and defending against MI attacks , do not trivially combine into a unified solution . Figure 1 illustrates the accuracy-privacy trade-off by plotting accuracy and membership inference attack effectiveness for ensembles comprising varying numbers of base models ( 1 , 2 , 5 , and 10 ) that are trained for different numbers of epochs ( 5 , 45 , and 90 ) . We make two key observations here . First , there is an increase in both accuracy and MI attack effectiveness as we go from a single model to ensembles comprising an increasing number of base models . The trade-off is more noticeable for more accurate models trained for a larger number of epochs . Second , we can adapt the design of ensembles to suitably navigate the trade-off between accuracy and privacy . Specifically , the two extreme cases are to : ( 1 ) maximize accuracy by using an ensemble of highly accurate models but at the cost of worse privacy1 ( purple arrow ) ; and ( 2 ) maximize privacy by intentionally using an ensemble of under-fitted models instead of a single model but at the cost of accuracy ( brown arrow ) . To understand the root cause of this trade-off , we show that using deep ensembles to improve accuracy exacerbates their susceptibility to membership inference attacks by making train and non-train samples more distinguishable . By analyzing the confidence averaging mechanism of deep ensembles , we investigate potential factors that enable membership inference . We show that the most influential factor is the level of correct agreement among models .
Simply put , the number of models that correctly classify a train sample is often greater than the number that correctly classify a test sample . This results in a wider confidence gap between train and non-train samples when confidence values are averaged , enabling more effective membership inference attacks . We further show that the difference in the level of correct agreement between train and non-train samples is correlated with the models ’ generalization gap . Hence , a natural question to ask is `` can deep ensembles that use less overfitted models mitigate privacy issues while achieving high accuracy ? '' . To answer this question , we study several regularization techniques , common membership inference defenses , and a few other ensembling approaches . We again observe a privacy-accuracy trade-off pattern similar to that shown in Figure 1 . Summary of contributions : In this paper , we perform a systematic empirical study of MI attacks on deep ensemble models . First , we show that when deep ensembles improve accuracy , they also shift the prediction confidence distributions of train and test samples further apart , which in turn enables more effective membership inference . Second , we analyze various factors that potentially cause the prediction confidence of train and non-train samples to diverge . Among potential factors , we show that the most dominant factor is the level of correct agreement among models , which indicates that more models in an ensemble agree on their prediction when a sample is a training sample . Hence , the aggregation of their predictions yields higher confidence output in comparison with non-train samples . We show that common defense mechanisms in the membership inference literature , including differential privacy , MMD+Mixup , L1 and L2 regularization , as well as other ensemble training approaches , such as bagging , partitioning , and stacking ( Salem et al.
, 2018 ) , can be used to mitigate the effectiveness of MI attacks but at the cost of accuracy . ( Footnote 1 : for complicated tasks , such as image classification , the common practice is to train deep models for a large number of epochs and avoid under-fitted models . That is because memorizing samples from long-tailed subpopulations is shown to be necessary to achieve close-to-optimal generalization error ( Feldman , 2020 ) . ) Although the main focus of the paper is on deep ensembles , we also cover bagging , partitioning , stacking ( Salem et al. , 2018 ) , logit averaging ( Appendix A.3 ) , weighted averaging ( Appendix A.4 ) , as well as more advanced and state-of-the-art ensembling techniques , such as snapshot ensembles ( Huang et al. , 2017 ) and diversified ensemble networks ( Zhang et al. , 2020 ) ( Appendix A.5 ) . We observe a similar trade-off . 2 SYSTEM MODEL . 2.1 ENSEMBLE LEARNING . Background . In the literature , ensemble learning refers to various approaches that combine multiple models to make a prediction . Models used to construct an ensemble are often called base learners . There are two main factors in constructing an ensemble ( Sagi & Rokach , 2018 ) : 1 ) how base learners are trained to ensure diversity , such as random initialization , bagging , partitioning , etc. , and 2 ) how the outputs of base learners are fused to obtain the final output , including majority voting , confidence averaging , stacking , etc . Unlike ensembles of traditional machine learning algorithms , in a deep ensemble , the main source of diversity often comes only from the random initialization of base learners ( Fort et al. , 2019 ) . In fact , other sources of diversity , such as bagging , have been shown to considerably degrade the overall accuracy of a deep ensemble ( Lee et al. , 2015 ; Lakshminarayanan et al. , 2017 ) . System Model . We mainly focus on the most widely used deep ensemble ( Kondratyuk et al. , 2020 ) unless otherwise specified .
In this model , 1 ) base models are trained with random initialization on the same training dataset , and 2 ) their prediction confidences are fused through averaging . A less common approach is to average model logits , which has been used in a few studies ( Webb et al. , 2020 ; Wang et al. , 2020 ) . See Appendix A.3 for experimental evaluation of logit averaging and A.4 for weighted averaging ensembles . We also evaluate two state-of-the-art deep ensembling approaches , namely snapshot ensemble and diversified ensemble network , in Appendix A.5 . Other general ensembling approaches , such as bagging , partitioning , and stacking ( Salem et al. , 2018 ) , are studied as defense mechanisms because they degrade accuracy but improve protection against MI attacks . 2.2 MEMBERSHIP INFERENCE . Background . Membership inference is a form of privacy leakage where the goal is to determine if a sample was used during the training of a target model . Samples used during training are often referred to as member or train samples , and other samples are referred to as non-member , non-train , or test samples . The first MI attack on neural networks was proposed in Shokri et al . ( 2017 ) where the attacker trains an attack classifier to predict the membership status . The attack classifier takes the prediction confidence of a target model as an input . Assuming that the attacker has access to a dataset with a similar distribution , she trains a set of shadow models to mimic the target model . Since the membership status of the data with which the shadow models are trained is known to the attacker , she can use the data to train the attack classifier . Many papers use the same idea with different variations or less restrictive assumptions ( Salem et al. , 2018 ; Liu et al. , 2019 ; Song et al. , 2019 ; Long et al. , 2017 ; Truex et al. , 2019 ; Long et al. , 2018 ; Yeom et al. , 2018 ; Rezaei & Liu , 2021 ; Zou et al. , 2020 ; Li & Zhang , 2020 ) .
Most previous work built upon the idea of using prediction confidence to infer the membership status , except for Rezaei & Liu ( 2021 ) ; Choo et al . ( 2020 ) ; Rahimian et al . ( 2020 ) . In Rezaei & Liu ( 2021 ) , the authors assumed white-box access to the target model and launched a series of MI attacks based on confidence values , distance to the decision boundary , gradients w.r.t. model weights , and gradients w.r.t. the input . In Choo et al . ( 2020 ) , the authors proposed two attacks based on input transformation and distance to the boundary in a black-box setting . Similarly , in Rahimian et al . ( 2020 ) , the attacker randomly perturbs an input to obtain a set of random transformations of the input and uses the predicted labels to infer membership status . System Model . Since most existing attacks use confidence values , we first focus on changes of confidence values when using deep ensembles . We show that when using deep ensembles the distribution of confidence values becomes more distinguishable between the train and non-train sets in comparison with the non-ensemble case . Consequently , any MI attack that relies on confidence values would be more effective on deep ensembles . Since our goal is to show a trade-off between accuracy and privacy , not to show which confidence-based attack can slightly outperform another , we focus on a confidence-based attack proposed in Rezaei & Liu ( 2021 ) in both white-box and black-box settings . Here , white-box means the attacker has access to base-learners ’ outputs before aggregation , and black-box means the attacker has only access to the aggregated confidence output . Decision boundary-based attacks are extremely computation- and query-inefficient , and it is not trivial to adapt them for ensemble learning , where essentially an input is copied n times and then fed to n models . The gradient-based approach of Rezaei & Liu ( 2021 ) also needs full knowledge of the entire deep ensemble .
We consider adaptation of non-confidence-based attacks for deep ensembles as future work . | This paper provides a systematic analysis of the accuracy-privacy trade-off for deep ensembles. They show that the effectiveness of membership inference attacks is likely to increase when ensembling improves accuracy. The authors further study the impact of various factors such as prediction confidence and agreement between models that constitute the ensemble. | SP:a492824ed04e34de0d1a54373e4cc15348c14a45
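The threshold-style , confidence-based MI attack discussed in this row can be illustrated with a toy simulation . This is a minimal sketch under invented assumptions ( the per-model confidence distributions , the 0.75 threshold , and the accuracy gap between train and held-out samples are all illustrative , not the paper's experimental setup ) :

```python
import random
import statistics

random.seed(0)

def model_confidence(is_train):
    # Toy assumption (not from the paper): a base model assigns high
    # confidence to samples it classifies correctly, and it is correct
    # more often on train samples than on held-out samples.
    p_correct = 0.95 if is_train else 0.70
    if random.random() < p_correct:
        return random.uniform(0.8, 1.0)   # confident, correct prediction
    return random.uniform(0.0, 0.5)       # unconfident, wrong prediction

def ensemble_confidence(is_train, n_models):
    # Deep ensemble: average confidences of independently trained models.
    return statistics.mean(model_confidence(is_train) for _ in range(n_models))

def mi_attack_accuracy(n_models, threshold=0.75, trials=2000):
    # Threshold attack: declare "member" when averaged confidence > threshold.
    hits = 0
    for _ in range(trials):
        hits += ensemble_confidence(True, n_models) > threshold    # member detected
        hits += ensemble_confidence(False, n_models) <= threshold  # non-member detected
    return hits / (2 * trials)

for k in (1, 2, 5, 10):
    print(f"ensemble of {k:2d} models -> MI attack accuracy {mi_attack_accuracy(k):.3f}")
```

Averaging shrinks the variance of the confidence for both groups while preserving the gap between their means , so a fixed threshold separates members from non-members more reliably as the ensemble grows , mirroring the trade-off the paper plots in Figure 1 .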
Autoregressive Quantile Flows for Predictive Uncertainty Estimation | 1 INTRODUCTION . Reasoning about uncertainty via the language of probability is important in many application domains of machine learning , including medicine ( Saria , 2018 ) , robotics ( Chua et al. , 2018 ; Buckman et al. , 2018 ) , and operations research ( Van Roy et al. , 1997 ) . Especially important is the estimation of predictive uncertainties ( e.g. , confidence intervals around forecasts ) ; in applications such as clinical diagnosis ( Jiang et al. , 2012 ) or decision support systems ( Werling et al. , 2015 ) , estimating uncertainty can be as important as obtaining high accuracy ( Kuleshov and Liang , 2015 ) . Normalizing flows ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ; Kingma et al. , 2016 ) are a popular framework for defining probabilistic models , and can be used for density estimation ( Papamakarios et al. , 2017 ) , out-of-distribution detection ( Nalisnick et al. , 2019 ) , content generation ( Kingma and Dhariwal , 2018 ) , and more . Flows feature tractable posterior inference and maximum likelihood estimation ; however , maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . This paper takes a step towards addressing this limitation of normalizing flows by proposing new objectives that contribute towards alleviating the computational cost of calculating determinants of Jacobians . Specifically , we argue for training flows using an objective that is different from classical maximum likelihood and is instead based on proper scoring rules ( Gneiting and Raftery , 2007 ) , a standard tool in the statistics literature for evaluating the quality of probabilistic forecasts . 
We show that this objective can be used to train normalizing flows and that it simplifies the computation of Jacobians in certain types of flows . We introduce autoregressive quantile flows ( AQFs ) , a framework that combines the above learning objective with a set of architectural choices inspired by classical autoregressive flows . Quantile flows possess characteristics that represent an improvement over existing flow models , including supporting neural architectures that simultaneously provide fast training and sampling , in addition to the usual benefits of flows ( exact posterior inference and density estimation ) . Interestingly , quantile flows can be interpreted as extensions of quantile functions to multiple dimensions . We use AQFs as the basis for quantile flow regression ( QFR ) , an approach to predictive uncertainty estimation in which a probabilistic model directly outputs a normalizing flow as the predictive distribution . The QFR approach enables neural networks to output highly expressive probabilistic predictions that make very few assumptions about the form of the predicted variable and that improve uncertainty estimates in probabilistic and Bayesian models . In the one-dimensional case , our approach yields quantile function regression and cumulative distribution function regression , two simple , general , and principled approaches for flexible probabilistic forecasting in regression . In addition , we demonstrate the benefits of AQFs on probabilistic modeling tasks that include density estimation and autoregressive generation . Across our sets of experiments , we observe improved performance , and we demonstrate properties of quantile flows that traditional flow models do not possess ( e.g. , sampling with flexible neural parameterizations ) . Contributions .
In summary , this work ( 1 ) introduces new objectives for flow models that simplify the computation of determinants of Jacobians , which in turn greatly simplifies the implementation of flow models and extends the class of models that can be used to parameterize flows . We also ( 2 ) define autoregressive quantile flows based on this objective , and highlight new architectures supported by this framework . Finally , ( 3 ) we deploy AQFs as part of quantile flow regression , and show that this approach improves upon existing methods for predictive uncertainty estimation . 2 BACKGROUND . Notation . Our goal is to learn a probabilistic model p ( y ) ∈ ∆ ( Rd ) in the space ∆ ( Rd ) of distributions over a high-dimensional y ∈ Rd ; we use yj ∈ R to denote components of y . In some cases , we have access to features x ∈ X associated with y and we want to train a forecaster H : X → ∆ ( Rd ) that outputs a predictive probability over y conditioned on x . 2.1 NORMALIZING FLOWS AND AUTOREGRESSIVE GENERATIVE MODELS . A normalizing flow defines a distribution p ( y ) via an invertible mapping fθ : Rd → Rd with parameters θ ∈ Θ that describes a transformation between y and a random variable z ∈ Rd sampled from a simple prior z ∼ p ( z ) ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ) . We may compute p ( y ) via the change of variables formula p ( y ) = | ∂fθ ( z ) ⁻¹/∂z | p ( z ) , where | ∂fθ ( z ) ⁻¹/∂z | denotes the determinant of the inverse Jacobian of fθ . In order to fit flow-based models using maximum likelihood , we typically choose fθ to be in a family for which the Jacobian is tractable . A common way to define flows with a tractable Jacobian is via autoregressive models of the form yj = τ ( zj ; hj ) , hj = cj ( y_{<j} ) , where τ ( zj ; hj ) is an invertible transformer , a strictly monotonic function of zj , and cj is the j-th conditioner , which outputs parameters hj for the transformer .
As long as τ is invertible , such autoregressive models can be used to define flows ( Papamakarios et al. , 2019 ) . 2.2 EVALUATING FORECASTS WITH PROPER SCORING RULES . A common way to represent a probabilistic forecast in the statistics and forecasting literature is via a cumulative distribution function ( CDF ) F : Rd → [ 0 , 1 ] ; any probability distribution can be represented this way , including discrete distributions . Since F is monotonically increasing in each coordinate , when y is one dimensional , we may define its inverse Q : [ 0 , 1 ] → R called the quantile function ( QF ) , defined as Q ( α ) = inf { y′ ∈ R | F ( y′ ) ≥ α } . In the statistics literature , the quality of forecasts is often evaluated using proper scoring rules ( or proper scores ; Gneiting and Raftery ( 2007 ) ) . For example , when predictions take the form of CDFs , a popular scoring rule is the continuous ranked probability score ( CRPS ) , defined for two CDFs F and G as CRPS ( F , G ) = ∫ ( F ( y ) − G ( y ) )² dy . When we only have samples y1 , ... , ym from G , we can generalize this score as ( 1/m ) ∑_{i=1}^m ∫ ( F ( y ) − I ( yi ≤ y ) )² dy . Alternatively , we can evaluate the α-th quantile Q ( α ) of a QF Q via the check score L : R × R → R+ defined as Lα ( y , f ) = α ( y − f ) if y ≥ f and ( 1 − α ) ( f − y ) otherwise . The check score also provides a consistent estimator for the conditional quantile of any distribution . 3 TAKING STEPS BEYOND MAXIMUM LIKELIHOOD LEARNING OF FLOWS . Maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . In this paper , we argue for training flows using objectives based on proper scoring rules ( Gneiting and Raftery , 2007 ) . 3.1 LEARNING SIMPLE FLOWS WITH PROPER SCORING RULES .
We begin with the one dimensional setting , where a flow fθ : R → R is a bijective mapping that can be interpreted as a QF . Alternatively , the reverse flow fθ⁻¹ can be interpreted as a CDF . We will use Qθ , Fθ to denote fθ and fθ⁻¹ , respectively ; our goal is to fit these models from data . In order to fit models of the cumulative distribution and the quantile function , we propose objectives based on proper scoring rules . We propose fitting models Fθ of the CDF using the CRPS : L ( 1 ) ( Fθ , yi ) := CRPS ( Fθ , yi ) = ∫_{−∞}^{∞} ( Fθ ( y ) − I ( yi ≤ y ) )² dy . ( 1 ) When dealing with a model Qθ of the QF , we propose an objective based on the expected check score L ( 2 ) ( Qθ , yi ) := ∫_0^1 Lα ( yi , Qθ ( α ) ) dα , ( 2 ) where Lα is a check score targeting quantile α . We refer to this objective as the quantile loss . This objective has been used previously to train value functions in reinforcement learning as well as conditional distributions in autoregressive models ( Dabney et al. , 2018a ; b ) . In this paper , we describe its application to modeling aleatoric predictive uncertainties . The parametric form of Qθ or Fθ can be any class of strictly monotonic ( hence invertible ) functions . Previous works have relied on affine or piecewise linear functions ( Wehenkel and Louppe , 2021 ) , sum-of-squares ( Jaini et al. , 2019 ) , monotonic neural networks ( Huang et al. , 2018 ; Cao et al. , 2019 ) , and other models . Any of these choices suits our framework ; we provide more details below . Equivalence Between the CRPS and Quantile Losses So far , we have described two methods for fitting a one-dimensional flow model . Their objectives are actually equivalent . Proposition 1 . For a CDF F : R → [ 0 , 1 ] and y′ ∈ R , the CRPS and quantile losses are equivalent : L ( 1 ) ( F , y′ ) = a · L ( 2 ) ( F⁻¹ , y′ ) + b , where a , b ∈ R and a > 0 . ( 3 ) This fact appears to be part of statistics folk knowledge , and we have only ever seen it stated briefly in some works .
We provide a complete proof in the appendix . See ( Laio and Tamea , 2007 ) for another argument . If the models Fθ , Qθ are analytically invertible ( e.g. , they are piecewise linear ) , we are free to choose between fitting the CDF or its inverse . Other representations for F will not lead to analytically invertible models , which requires choosing a training direction , as we discuss below . Practical Implementation . The quantile and the CRPS losses both involve a potentially intractable integral . We approximate the integrals using Monte Carlo ; this allows us to obtain gradients using backpropagation . For the quantile loss , we sample α uniformly at random in [ 0 , 1 ] ; for the CRPS loss , we choose a reasonable range of y ( usually , centered around yi ) and sample uniformly in that range . This approach works well in practice and avoids the complexity of alternative methods such as quadrature ( Durkan et al. , 2019 ) . | This paper proposed a quantile regression method for uncertainty estimation based on autoregressive quantile flows. The flow model can be trained in both forward and reverse settings using different loss functions, and the quantile flow framework can be combined with other linear or non-linear transformations. The authors have conducted diverse empirical evaluations on object detection (bounding box regression), time series forecasting, and generative models to demonstrate the advantage of the proposed method. | SP:52701ccbe77facd26fa921a2610dad1da60e1a5f
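The Monte Carlo treatment of the quantile loss in objective ( 2 ) can be sketched in a few lines . This is an illustrative sketch , not the paper's implementation ; the Gaussian quantile model , the sample count , and the clamping constant are assumptions for the example :

```python
import random
from statistics import NormalDist

def check_score(alpha, y, f):
    # Check (pinball) score L_alpha(y, f): alpha * (y - f) if y >= f,
    # and (1 - alpha) * (f - y) otherwise.
    return alpha * (y - f) if y >= f else (1 - alpha) * (f - y)

def quantile_loss(Q, y, n_samples=10_000):
    # Monte Carlo estimate of the integral over alpha in [0, 1]:
    # draw alpha uniformly and average the check score at Q(alpha).
    total = 0.0
    for _ in range(n_samples):
        alpha = max(random.random(), 1e-12)  # keep alpha > 0 so inv_cdf stays finite
        total += check_score(alpha, y, Q(alpha))
    return total / n_samples

random.seed(0)
# A degenerate quantile model concentrated on the observation has zero loss,
# while a standard normal quantile function pays a positive penalty at y = 0.
print(quantile_loss(lambda a: 0.0, 0.0))
print(quantile_loss(NormalDist(0.0, 1.0).inv_cdf, 0.0))
```

In an actual training loop `Q` would be a monotonic parametric model and the averaged check score would be minimized by backpropagation , as the row describes ; here a fixed quantile function is evaluated only to show the estimator .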
Autoregressive Quantile Flows for Predictive Uncertainty Estimation | 1 INTRODUCTION . Reasoning about uncertainty via the language of probability is important in many application domains of machine learning , including medicine ( Saria , 2018 ) , robotics ( Chua et al. , 2018 ; Buckman et al. , 2018 ) , and operations research ( Van Roy et al. , 1997 ) . Especially important is the estimation of predictive uncertainties ( e.g. , confidence intervals around forecasts ) ; in applications such as clinical diagnosis ( Jiang et al. , 2012 ) or decision support systems ( Werling et al. , 2015 ) , estimating uncertainty can be as important as obtaining high accuracy ( Kuleshov and Liang , 2015 ) . Normalizing flows ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ; Kingma et al. , 2016 ) are a popular framework for defining probabilistic models , and can be used for density estimation ( Papamakarios et al. , 2017 ) , out-of-distribution detection ( Nalisnick et al. , 2019 ) , content generation ( Kingma and Dhariwal , 2018 ) , and more . Flows feature tractable posterior inference and maximum likelihood estimation ; however , maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . This paper takes a step towards addressing this limitation of normalizing flows by proposing new objectives that contribute towards alleviating the computational cost of calculating determinants of Jacobians . Specifically , we argue for training flows using an objective that is different from classical maximum likelihood and is instead based on proper scoring rules ( Gneiting and Raftery , 2007 ) , a standard tool in the statistics literature for evaluating the quality of probabilistic forecasts . 
We show that this objective can be used to train normalizing flows and that it simplifies the computation of Jacobians in certain types of flows . We introduce autoregressive quantile flows ( AQFs ) , a framework that combines the above learning objective with a set of architectural choices inspired by classical autoregressive flows . Quantile flows possess characteristics that represent an improvement over existing flow models , including supporting neural architectures that simultaneously provide fast training and sampling , in addition to the usual benefits of flows ( exact posterior inference and density estimation ) . Interestingly , quantile flows can be interpreted as extensions of quantile functions to multiple dimensions . We use AQFs as the basis for quantile flow regression ( QFR ) , an approach to predictive uncertainty estimation in which a probabilistic model directly outputs a normalizing flow as the predictive distribution . The QFR approach enables neural networks to output highly expressive probabilistic predictions that make very few assumptions about the form of the predicted variable and that improve uncertainty estimates in probabilistic and Bayesian models . In the one-dimensional case , our approach yields quantile function regression and cumulative distribution function regression , two simple , general , and principled approaches for flexible probabilistic forecasting in regression . In addition , we demonstrate the benefits of AQFs on probabilistic modeling tasks that include density estimation and autoregressive generation . Across our sets of experiments , we observe improved performance , and we demonstrate properties of quantile flows that traditional flow models do not possess ( e.g. , sampling with flexible neural parameterizations ) . Contributions .
In summary , this work ( 1 ) introduces new objectives for flow models that simplify the computation of determinants of Jacobians , which in turn greatly simplifies the implementation of flow models and extends the class of models that can be used to parameterize flows . We also ( 2 ) define autoregressive quantile flows based on this objective , and highlight new architectures supported by this framework . Finally , ( 3 ) we deploy AQFs as part of quantile flow regression , and show that this approach improves upon existing methods for predictive uncertainty estimation . 2 BACKGROUND . Notation . Our goal is to learn a probabilistic model p ( y ) ∈ ∆ ( Rd ) in the space ∆ ( Rd ) of distributions over a high-dimensional y ∈ Rd ; we use yj ∈ R to denote components of y . In some cases , we have access to features x ∈ X associated with y and we want to train a forecaster H : X → ∆ ( Rd ) that outputs a predictive probability over y conditioned on x . 2.1 NORMALIZING FLOWS AND AUTOREGRESSIVE GENERATIVE MODELS . A normalizing flow defines a distribution p ( y ) via an invertible mapping fθ : Rd → Rd with parameters θ ∈ Θ that describes a transformation between y and a random variable z ∈ Rd sampled from a simple prior z ∼ p ( z ) ( Rezende and Mohamed , 2016 ; Papamakarios et al. , 2019 ) . We may compute p ( y ) via the change of variables formula p ( y ) = | ∂fθ ( z ) ⁻¹/∂z | p ( z ) , where | ∂fθ ( z ) ⁻¹/∂z | denotes the determinant of the inverse Jacobian of fθ . In order to fit flow-based models using maximum likelihood , we typically choose fθ to be in a family for which the Jacobian is tractable . A common way to define flows with a tractable Jacobian is via autoregressive models of the form yj = τ ( zj ; hj ) , hj = cj ( y_{<j} ) , where τ ( zj ; hj ) is an invertible transformer , a strictly monotonic function of zj , and cj is the j-th conditioner , which outputs parameters hj for the transformer .
As long as τ is invertible , such autoregressive models can be used to define flows ( Papamakarios et al. , 2019 ) . 2.2 EVALUATING FORECASTS WITH PROPER SCORING RULES . A common way to represent a probabilistic forecast in the statistics and forecasting literature is via a cumulative distribution function ( CDF ) F : Rd → [ 0 , 1 ] ; any probability distribution can be represented this way , including discrete distributions . Since F is monotonically increasing in each coordinate , when y is one dimensional , we may define its inverse Q : [ 0 , 1 ] → R called the quantile function ( QF ) , defined as Q ( α ) = inf { y′ ∈ R | F ( y′ ) ≥ α } . In the statistics literature , the quality of forecasts is often evaluated using proper scoring rules ( or proper scores ; Gneiting and Raftery ( 2007 ) ) . For example , when predictions take the form of CDFs , a popular scoring rule is the continuous ranked probability score ( CRPS ) , defined for two CDFs F and G as CRPS ( F , G ) = ∫ ( F ( y ) − G ( y ) )² dy . When we only have samples y1 , ... , ym from G , we can generalize this score as ( 1/m ) ∑_{i=1}^m ∫ ( F ( y ) − I ( yi ≤ y ) )² dy . Alternatively , we can evaluate the α-th quantile Q ( α ) of a QF Q via the check score L : R × R → R+ defined as Lα ( y , f ) = α ( y − f ) if y ≥ f and ( 1 − α ) ( f − y ) otherwise . The check score also provides a consistent estimator for the conditional quantile of any distribution . 3 TAKING STEPS BEYOND MAXIMUM LIKELIHOOD LEARNING OF FLOWS . Maximum likelihood estimation of flows requires carefully designing a family of bijective functions that are simultaneously expressive and whose Jacobian has a tractable determinant . In practice , this makes flows time-consuming to design and computationally expensive to train . In this paper , we argue for training flows using objectives based on proper scoring rules ( Gneiting and Raftery , 2007 ) . 3.1 LEARNING SIMPLE FLOWS WITH PROPER SCORING RULES .
We begin with the one dimensional setting , where a flow fθ : R → R is a bijective mapping that can be interpreted as a QF . Alternatively , the reverse flow fθ⁻¹ can be interpreted as a CDF . We will use Qθ , Fθ to denote fθ and fθ⁻¹ , respectively ; our goal is to fit these models from data . In order to fit models of the cumulative distribution and the quantile function , we propose objectives based on proper scoring rules . We propose fitting models Fθ of the CDF using the CRPS : L ( 1 ) ( Fθ , yi ) := CRPS ( Fθ , yi ) = ∫_{−∞}^{∞} ( Fθ ( y ) − I ( yi ≤ y ) )² dy . ( 1 ) When dealing with a model Qθ of the QF , we propose an objective based on the expected check score L ( 2 ) ( Qθ , yi ) := ∫_0^1 Lα ( yi , Qθ ( α ) ) dα , ( 2 ) where Lα is a check score targeting quantile α . We refer to this objective as the quantile loss . This objective has been used previously to train value functions in reinforcement learning as well as conditional distributions in autoregressive models ( Dabney et al. , 2018a ; b ) . In this paper , we describe its application to modeling aleatoric predictive uncertainties . The parametric form of Qθ or Fθ can be any class of strictly monotonic ( hence invertible ) functions . Previous works have relied on affine or piecewise linear functions ( Wehenkel and Louppe , 2021 ) , sum-of-squares ( Jaini et al. , 2019 ) , monotonic neural networks ( Huang et al. , 2018 ; Cao et al. , 2019 ) , and other models . Any of these choices suits our framework ; we provide more details below . Equivalence Between the CRPS and Quantile Losses So far , we have described two methods for fitting a one-dimensional flow model . Their objectives are actually equivalent . Proposition 1 . For a CDF F : R → [ 0 , 1 ] and y′ ∈ R , the CRPS and quantile losses are equivalent : L ( 1 ) ( F , y′ ) = a · L ( 2 ) ( F⁻¹ , y′ ) + b , where a , b ∈ R and a > 0 . ( 3 ) This fact appears to be part of statistics folk knowledge , and we have only ever seen it stated briefly in some works .
We provide a complete proof in the appendix . See ( Laio and Tamea , 2007 ) for another argument . If the models Fθ , Qθ are analytically invertible ( e.g. , they are piecewise linear ) , we are free to choose fitting the CDF or its inverse . Other representations for F will not lead to analytically invertible models , which require choosing a training direction , as we discuss below . Practical Implementation . The quantile and the CRPS losses both involve a potentially intractable integral . We approximate the integrals using Monte-Carlo ; this allows us to obtain gradients using backpropagation . For the quantile loss , we sample α uniformly at random in [ 0 , 1 ] ; for the CRPS loss , we choose a reasonable range of y ( usually , centered around yi ) and sample uniformly in that range . This approach works well in practice and avoids the complexity of alternative methods such as quadrature ( Durkan et al. , 2019 ) . | This paper proposes a novel framework for training flow models named Autoregressive Quantile Flows (AQF). The proposed method utilizes a new objective by evaluating forecasts with proper scoring rules, including the continuous ranked probability score and the check score. The advantages of the proposed objective are 1) it could avoid the explicit calculation of the determinant of the Jacobian matrix and 2) it could also provide uncertainty estimation for predictions. Experiments on multiple tasks including regression, object detection, time series forecasting, and generation validate the effectiveness of this framework. | SP:52701ccbe77facd26fa921a2610dad1da60e1a5f |
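Proposition 1 from this row can be checked numerically for a continuous CDF , where the constants work out to a = 2 and b = 0 ( the standard identity relating the CRPS to the integrated check score ) . The standard normal model , the observation y_obs = 0.7 , and the quadrature grids are assumptions of this sketch , not details from the paper :

```python
from statistics import NormalDist

d = NormalDist(0.0, 1.0)
y_obs = 0.7

# L^(1): CRPS(F, y_obs) = integral of (F(y) - I(y_obs <= y))^2 dy,
# approximated with the midpoint rule on a wide grid.
n, lo, hi = 20_000, -12.0, 12.0
dy = (hi - lo) / n
crps = 0.0
for i in range(n):
    y = lo + (i + 0.5) * dy
    crps += (d.cdf(y) - (1.0 if y_obs <= y else 0.0)) ** 2 * dy

def check(alpha, y, f):
    # Check score L_alpha(y, f) from the paper's Section 2.2.
    return alpha * (y - f) if y >= f else (1 - alpha) * (f - y)

# L^(2): integral over alpha in (0, 1) of the check score at Q = F^{-1},
# again via the midpoint rule.
m = 20_000
qloss = sum(check((j + 0.5) / m, y_obs, d.inv_cdf((j + 0.5) / m)) / m for j in range(m))

print(crps, 2 * qloss)  # the two quantities should nearly coincide
```

For this Gaussian example the two quadratures agree to within the discretization error , which is consistent with the linear relationship the proposition states .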
New Perspective on the Global Convergence of Finite-Sum Optimization | 1 INTRODUCTION . In recent years , deep neural networks ( DNNs ) have shown great success in many machine learning tasks . However , training these neural networks is challenging since the loss surface of the network architecture is generally non-convex , or even non-smooth . Thus , there has been a long-standing question of how optimization algorithms may converge to a global minimum . Many previous works have investigated the Gradient Descent algorithm and its stochastic version for the over-parameterized setting ( Arora et al. , 2018 ; Soudry et al. , 2018 ; Allen-Zhu et al. , 2019 ; Du et al. , 2019a ; Zou & Gu , 2019 ) . Although these works have shown promising convergence results under certain assumptions , there is still a lack of new efficient methods that can guarantee global convergence for machine learning optimization . In this paper , we address this problem using a different perspective . Instead of analyzing the traditional finite-sum formulation , we adopt a new composite formulation that exactly depicts the structure of machine learning where a data set is used to learn a common classifier . Representation . Let { ( x ( i ) , y ( i ) ) }_{i=1}^n be a given training set with x ( i ) ∈ Rm , y ( i ) ∈ Rc ; we investigate the following novel representation for deep learning tasks : min_{w∈Rd} { F ( w ) = ( 1/n ) ∑_{i=1}^n φi ( h ( w ; i ) ) } , ( 1 ) where h ( · ; i ) : Rd → Rc , i ∈ [ n ] = { 1 , . . . , n } , is the classifier for each input data x ( i ) ; and φi : Rc → R , i ∈ [ n ] , is the loss function corresponding to each output data y ( i ) . Our composite formulation ( 1 ) is a special case of the finite-sum problem min_{w∈Rd} { F ( w ) = ( 1/n ) ∑_{i=1}^n f ( w ; i ) } where each individual function f ( · ; i ) is a composition of the loss function φi and the classifier h ( · ; i ) .
This problem covers various important applications in machine learning , including logistic regression and neural networks . The most common approach for the finite-sum problem is using first-order methods such as ( stochastic ) gradient algorithms and making assumptions on the component functions $f ( \cdot ; i )$ . As an alternative , we further investigate the structure of the loss function $\varphi_i$ and narrow our assumptions on the classifier $h ( \cdot ; i )$ . For the purpose of this work , we first consider convex and Lipschitz-smooth loss functions while the classifiers can be non-convex . Using this representation , we propose a new framework followed by two algorithms that guarantee global convergence for the minimization problem . Algorithmic Framework . Representation ( 1 ) admits a new perspective . Our key insight is to ( A ) define $z_i^{(t)} = h ( w^{(t)} ; i )$ , where $t$ is an iteration count of the outer loop in our algorithmic framework . Next ( B ) , we want to approximate the change $z_i^{(t+1)} - z_i^{(t)}$ in terms of a step size times the gradient $\nabla \varphi_i ( z_i^{(t)} ) = ( \partial \varphi_i ( z ) / \partial z_a )_{a \in [ c ]} \big|_{z = z_i^{(t)}}$ , and ( C ) we approximate the change $h ( w^{(t+1)} ; i ) - h ( w^{(t)} ; i )$ in terms of the first-order derivative $H_i^{(t)} = ( \partial h_a ( w ; i ) / \partial w_b )_{a \in [ c ] , b \in [ d ]} \big|_{w = w^{(t)}}$ . Finally , we combine ( A ) , ( B ) , and ( C ) to equate the approximations of $z_i^{(t+1)} - z_i^{(t)}$ and $h ( w^{(t+1)} ; i ) - h ( w^{(t)} ; i )$ . This leads to a recurrence on $w^{(t)}$ of the form $w^{(t+1)} = w^{(t)} - \eta^{(t)} v^{(t)}$ , where $\eta^{(t)}$ is a step size and $v^{(t)}$ is computed by solving a convex quadratic subproblem ; see the details in Section 4 . We explain two methods for approximating a solution for the derived subproblem . We show how to approximate the subproblem by transforming it into a strongly convex problem by adding a regularizer , which can then be solved in closed form .
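As a concrete sketch of the closed-form variant: the subproblem below is a plausible regularized least-squares instance built from per-sample Jacobians and gradients. The exact subproblem is derived in the paper's Section 4, so the objective used here, the toy sizes, and the ridge weight are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, d = 5, 3, 8                    # samples, output dim, parameter dim (toy sizes)
H = rng.standard_normal((n, c, d))   # per-sample Jacobians H_i of h(. ; i)
g = rng.standard_normal((n, c))      # per-sample gradients of phi_i at z_i

# Assumed subproblem: min_v (1/n) sum_i ||H_i v + g_i||^2 + lam * ||v||^2.
# The ridge term makes it strongly convex, so the optimality condition
# A v = b gives the update direction v in closed form.
lam = 1e-2
A = sum(Hi.T @ Hi for Hi in H) / n + lam * np.eye(d)
b = -sum(Hi.T @ gi for Hi, gi in zip(H, g)) / n
v = np.linalg.solve(A, b)            # closed-form solution of the subproblem
```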
We also show how to use Gradient Descent ( GD ) on the subproblem to find an approximation $v^{(t)}$ of its solution . Convergence Analysis . Our analysis introduces non-standard bounded-style assumptions . Intuitively , we assume that our convex and quadratic subproblem has a bounded solution . This allows us to prove a total complexity of $\tilde{O} ( \frac{1}{\varepsilon^3} )$ to find an ε- ( global ) solution that satisfies $F ( \hat{w} ) - F_* \le \varepsilon$ , where $F_*$ is the global minimum of $F$ . Our analysis applies to a wide range of applications in machine learning : our results hold for the squared loss and the softmax cross-entropy loss and are applicable to a range of activation functions in DNNs , as we only assume that the $h ( \cdot ; i )$ are twice continuously differentiable and that their Hessian matrices ( second-order derivatives ) as well as their gradients ( first-order derivatives ) are bounded . Contributions and Outline . Our contributions in this paper can be summarized as follows . • We propose a new representation ( 1 ) for analyzing the machine learning minimization problem . Our formulation utilizes the structure of machine learning tasks where a training data set of inputs and outputs is used to learn a common classifier . Related work in Section 2 shows how ( 1 ) is different from the classical finite-sum problem . • Based on the new representation we propose a novel algorithmic framework . The algorithmic framework approximates a solution to a subproblem , for which we show two distinct approaches . • For general DNNs and based on bounded-style assumptions , we prove a total complexity of $\tilde{O} ( \frac{1}{\varepsilon^3} )$ to find an ε- ( global ) solution that satisfies $F ( \hat{w} ) - F_* \le \varepsilon$ , where $F_*$ is the global minimum of $F$ . We emphasize that our focus is on developing a new theoretical foundation and that a translation to a practical implementation with empirical results is left for future work .
Our theoretical foundation motivates further study , implementation , and optimization of the new algorithmic framework and further investigation of its non-standard bounded-style assumptions . This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum . The rest of this paper is organized as follows . Section 2 discusses related work . Section 3 describes our setting and deep learning representation . Section 4 explains our key insight and derives our Framework 1 . Section 5 presents our algorithms and their global convergence . All technical proofs are deferred to the Appendix . 2 RELATED WORK . Formulation for Machine Learning Problems . The finite-sum problem is one of the most important and fundamental problems in machine learning . Analyzing this model is the most popular approach in the machine learning literature , and it has been studied intensively throughout the years ( Bottou et al. , 2018 ; Reddi et al. , 2016 ; Duchi et al. , 2011b ) . Our new formulation ( 1 ) is a special case of the finite-sum problem ; however , it is much more complicated than the previous model since it involves the data index $i$ both inside the classifiers $h ( \cdot ; i )$ and the loss functions $\varphi_i$ . For comparison , previous works only consider a common loss function $l ( \hat{y} , y )$ for the predicted value $\hat{y}$ and output data $y$ ( Zou et al. , 2018 ; Soudry et al. , 2018 ) . Our modified version of the loss function , $\varphi_i$ , is a natural setting for machine learning . We note that when $h ( w ; i )$ is the output produced by a model , our goal is to match this output with the corresponding target $y^{(i)}$ . For that reason , the loss function for each output has a dependence on the output data $y^{(i)}$ , and is denoted by $\varphi_i$ .
This fact reflects the natural setting of machine learning , where the outputs are designed to fit different targets , and the optimization process depends on both the outer functions $\varphi_i$ and the inner functions $h ( \cdot ; i )$ . This complication may potentially bring a challenge to theoretical analysis . However , with separate loss functions , we believe this model will help to better exploit the structure of machine learning problems and gain more insight into the neural network architecture . Other related composite optimization models are also investigated thoroughly in ( Lewis & Wright , 2016 ; Zhang & Xiao , 2019 ; Tran-Dinh et al. , 2020 ) . Our model is different from these works as it does not have a common function wrapping outside the finite-sum term , as in ( Lewis & Wright , 2016 ) . Note that a broad class of variance reduction algorithms ( e.g . SAG ( Le Roux et al. , 2012 ) , SAGA ( Defazio et al. , 2014 ) , SVRG ( Johnson & Zhang , 2013 ) , SARAH ( Nguyen et al. , 2017 ) ) is designed specifically for the finite-sum formulation and is known to have certain benefits over Gradient Descent . In addition , the multilevel composite problem considered in ( Zhang & Xiao , 2021 ) also covers the empirical risk minimization problem . However , our formulation does not match their work since our inner function $h ( w ; i )$ is not an independent expectation over some data distribution , but a specific function that depends on the current data . Global Convergence for Neural Networks . A recent popular line of research studies the dynamics of optimization methods on some specific neural network architectures . There are some early works that show the global convergence of Gradient Descent ( GD ) for simple linear networks and two-layer networks ( Brutzkus et al. , 2018 ; Soudry et al. , 2018 ; Arora et al. , 2019 ; Du et al. , 2019b ) . Some further works extend these results to deep learning architectures ( Allen-Zhu et al. , 2019 ; Du et al. , 2019a ; Zou & Gu , 2019 ) .
These theoretical guarantees are generally proved for the case when the last output layer is fixed , which is not standard in practice . A recent work ( Nguyen & Mondelli , 2020 ) proves global convergence for GD when all layers are trained under some initial conditions . However , these results are for neural networks without bias neurons , and it is unclear how these analyses can be extended to handle the bias terms of deep networks with different activations . Our novel framework and algorithms do not exclude learning bias layers , as in ( Nguyen & Mondelli , 2020 ) . Using a different algorithm , Brutzkus et al . ( 2018 ) investigate Stochastic Gradient Descent ( SGD ) for two-layer networks in a restricted linearly separable data setting . This line of research continues with the works of Allen-Zhu et al . ( 2019 ) ; Zou et al . ( 2018 ) and later with Zou & Gu ( 2019 ) . They justify the global convergence of SGD for deep neural networks with some probability depending on the number of input data and the initialization process . Over-Parameterized Settings and other Assumptions for Machine Learning . Most modern learning architectures are over-parameterized , which means that the number of parameters is very large and often far more than the number of input data . Some recent works prove the global convergence of Gradient Descent when the number of neurons is extremely large , e.g . ( Zou & Gu , 2019 ) requires $\Omega ( n^8 )$ neurons for every hidden layer , and ( Nguyen & Mondelli , 2020 ) improves this number to $\Omega ( n^3 )$ . If the initial point satisfies some special conditions , then they can show a better dependence of $\Omega ( n )$ . In Allen-Zhu et al . ( 2019 ) , the authors initialize the weights using a random Gaussian distribution where the variance depends on the dimension of the problem . In the non-convex setting , they prove the convergence of SGD using the assumption that the dimension depends inversely on the tolerance .
We will discuss how these over-parameterized settings might be a necessary condition for developing our theory . Other standard assumptions for machine learning include the bounded gradient assumption ( Nemirovski et al. , 2009 ; Shalev-Shwartz et al. , 2007 ; Reddi et al. , 2016 ; Tran et al. , 2021 ) . It is also common to assume that all the iterations of an algorithm stay in a bounded domain ( Duchi et al. , 2011a ; Levy et al. , 2018 ; Gürbüzbalaban et al. , 2019 ; Reddi et al. , 2018 ; Vaswani et al. , 2021 ) . Since we are analyzing a new composite formulation , it is understandable that our assumptions may also not be standard . However , we believe that there is a strong connection between our assumptions and the traditional setting of machine learning . We will discuss this point more clearly in Section 4 . | This paper presents a new optimization method for finding global minima of nonconvex finite-sum problems. In particular, the summands are functions of the form $\phi_{i}\circ h$ where $\phi_{i}$ is convex and Lipschitz smooth, while $h$ is nonconvex. Each iteration of the method consists of solving an auxiliary regularized least squares (RLS) problem, followed by a gradient step. Additional analysis is given for when the RLS problem is solved inexactly. Finally, a claimed $\tilde{O}(\varepsilon^{-3})$ complexity is established under strong boundedness assumptions on various solution sets. | SP:030b2045318e6e4189685793b5eab37ffb8b1a82
New Perspective on the Global Convergence of Finite-Sum Optimization | 1 INTRODUCTION . In recent years , deep neural networks ( DNNs ) have shown great success in many machine learning tasks . However , training these neural networks is challenging since the loss surface of the network architecture is generally non-convex , or even non-smooth . Thus , there has been a long-standing question on how optimization algorithms may converge to a global minimum . Many previous works have investigated the Gradient Descent algorithm and its stochastic version in the over-parameterized setting ( Arora et al. , 2018 ; Soudry et al. , 2018 ; Allen-Zhu et al. , 2019 ; Du et al. , 2019a ; Zou & Gu , 2019 ) . Although these works have shown promising convergence results under certain assumptions , there is still a lack of new efficient methods that can guarantee global convergence for machine learning optimization . In this paper , we address this problem using a different perspective . Instead of analyzing the traditional finite-sum formulation , we adopt a new composite formulation that exactly depicts the structure of machine learning where a data set is used to learn a common classifier . Representation . Let $\{ ( x^{(i)} , y^{(i)} ) \}_{i=1}^{n}$ be a given training set with $x^{(i)} \in \mathbb{R}^m$ , $y^{(i)} \in \mathbb{R}^c$ . We investigate the following novel representation for deep learning tasks : $\min_{w \in \mathbb{R}^d} \{ F ( w ) = \frac{1}{n} \sum_{i=1}^{n} \varphi_i ( h ( w ; i ) ) \}$ , ( 1 ) where $h ( \cdot ; i ) : \mathbb{R}^d \to \mathbb{R}^c$ , $i \in [ n ] = \{ 1 , \dots , n \}$ , is the classifier for each input data $x^{(i)}$ ; and $\varphi_i : \mathbb{R}^c \to \mathbb{R}$ , $i \in [ n ]$ , is the loss function corresponding to each output data $y^{(i)}$ . Our composite formulation ( 1 ) is a special case of the finite-sum problem $\min_{w \in \mathbb{R}^d} \{ F ( w ) = \frac{1}{n} \sum_{i=1}^{n} f ( w ; i ) \}$ where each individual function $f ( \cdot ; i )$ is a composition of the loss function $\varphi_i$ and the classifier $h ( \cdot ; i )$ .
This problem covers various important applications in machine learning , including logistic regression and neural networks . The most common approach for the finite-sum problem is using first-order methods such as ( stochastic ) gradient algorithms and making assumptions on the component functions $f ( \cdot ; i )$ . As an alternative , we further investigate the structure of the loss function $\varphi_i$ and narrow our assumptions on the classifier $h ( \cdot ; i )$ . For the purpose of this work , we first consider convex and Lipschitz-smooth loss functions while the classifiers can be non-convex . Using this representation , we propose a new framework followed by two algorithms that guarantee global convergence for the minimization problem . Algorithmic Framework . Representation ( 1 ) admits a new perspective . Our key insight is to ( A ) define $z_i^{(t)} = h ( w^{(t)} ; i )$ , where $t$ is an iteration count of the outer loop in our algorithmic framework . Next ( B ) , we want to approximate the change $z_i^{(t+1)} - z_i^{(t)}$ in terms of a step size times the gradient $\nabla \varphi_i ( z_i^{(t)} ) = ( \partial \varphi_i ( z ) / \partial z_a )_{a \in [ c ]} \big|_{z = z_i^{(t)}}$ , and ( C ) we approximate the change $h ( w^{(t+1)} ; i ) - h ( w^{(t)} ; i )$ in terms of the first-order derivative $H_i^{(t)} = ( \partial h_a ( w ; i ) / \partial w_b )_{a \in [ c ] , b \in [ d ]} \big|_{w = w^{(t)}}$ . Finally , we combine ( A ) , ( B ) , and ( C ) to equate the approximations of $z_i^{(t+1)} - z_i^{(t)}$ and $h ( w^{(t+1)} ; i ) - h ( w^{(t)} ; i )$ . This leads to a recurrence on $w^{(t)}$ of the form $w^{(t+1)} = w^{(t)} - \eta^{(t)} v^{(t)}$ , where $\eta^{(t)}$ is a step size and $v^{(t)}$ is computed by solving a convex quadratic subproblem ; see the details in Section 4 . We explain two methods for approximating a solution for the derived subproblem . We show how to approximate the subproblem by transforming it into a strongly convex problem by adding a regularizer , which can then be solved in closed form .
We also show how to use Gradient Descent ( GD ) on the subproblem to find an approximation $v^{(t)}$ of its solution . Convergence Analysis . Our analysis introduces non-standard bounded-style assumptions . Intuitively , we assume that our convex and quadratic subproblem has a bounded solution . This allows us to prove a total complexity of $\tilde{O} ( \frac{1}{\varepsilon^3} )$ to find an ε- ( global ) solution that satisfies $F ( \hat{w} ) - F_* \le \varepsilon$ , where $F_*$ is the global minimum of $F$ . Our analysis applies to a wide range of applications in machine learning : our results hold for the squared loss and the softmax cross-entropy loss and are applicable to a range of activation functions in DNNs , as we only assume that the $h ( \cdot ; i )$ are twice continuously differentiable and that their Hessian matrices ( second-order derivatives ) as well as their gradients ( first-order derivatives ) are bounded . Contributions and Outline . Our contributions in this paper can be summarized as follows . • We propose a new representation ( 1 ) for analyzing the machine learning minimization problem . Our formulation utilizes the structure of machine learning tasks where a training data set of inputs and outputs is used to learn a common classifier . Related work in Section 2 shows how ( 1 ) is different from the classical finite-sum problem . • Based on the new representation we propose a novel algorithmic framework . The algorithmic framework approximates a solution to a subproblem , for which we show two distinct approaches . • For general DNNs and based on bounded-style assumptions , we prove a total complexity of $\tilde{O} ( \frac{1}{\varepsilon^3} )$ to find an ε- ( global ) solution that satisfies $F ( \hat{w} ) - F_* \le \varepsilon$ , where $F_*$ is the global minimum of $F$ . We emphasize that our focus is on developing a new theoretical foundation and that a translation to a practical implementation with empirical results is left for future work .
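The inexact (GD) variant of solving the subproblem can be illustrated on a generic strongly convex quadratic. The subproblem data below is synthetic, and the step size 1/L is a standard textbook choice rather than necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
M = rng.standard_normal((12, d))
A = M.T @ M / 12 + 1e-2 * np.eye(d)   # strongly convex quadratic: f(v) = 0.5 v'Av - b'v
b = rng.standard_normal(d)

# Gradient Descent on the subproblem with step size 1/L, where L is the
# Lipschitz constant of the gradient (largest eigenvalue of A).
L = np.linalg.eigvalsh(A).max()
v = np.zeros(d)
for _ in range(2000):
    v -= (1.0 / L) * (A @ v - b)      # one GD step on the quadratic

v_star = np.linalg.solve(A, b)        # exact minimizer, for comparison
```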
Our theoretical foundation motivates further study , implementation , and optimization of the new algorithmic framework and further investigation of its non-standard bounded-style assumptions . This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum . The rest of this paper is organized as follows . Section 2 discusses related work . Section 3 describes our setting and deep learning representation . Section 4 explains our key insight and derives our Framework 1 . Section 5 presents our algorithms and their global convergence . All technical proofs are deferred to the Appendix . 2 RELATED WORK . Formulation for Machine Learning Problems . The finite-sum problem is one of the most important and fundamental problems in machine learning . Analyzing this model is the most popular approach in the machine learning literature , and it has been studied intensively throughout the years ( Bottou et al. , 2018 ; Reddi et al. , 2016 ; Duchi et al. , 2011b ) . Our new formulation ( 1 ) is a special case of the finite-sum problem ; however , it is much more complicated than the previous model since it involves the data index $i$ both inside the classifiers $h ( \cdot ; i )$ and the loss functions $\varphi_i$ . For comparison , previous works only consider a common loss function $l ( \hat{y} , y )$ for the predicted value $\hat{y}$ and output data $y$ ( Zou et al. , 2018 ; Soudry et al. , 2018 ) . Our modified version of the loss function , $\varphi_i$ , is a natural setting for machine learning . We note that when $h ( w ; i )$ is the output produced by a model , our goal is to match this output with the corresponding target $y^{(i)}$ . For that reason , the loss function for each output has a dependence on the output data $y^{(i)}$ , and is denoted by $\varphi_i$ .
This fact reflects the natural setting of machine learning , where the outputs are designed to fit different targets , and the optimization process depends on both the outer functions $\varphi_i$ and the inner functions $h ( \cdot ; i )$ . This complication may potentially bring a challenge to theoretical analysis . However , with separate loss functions , we believe this model will help to better exploit the structure of machine learning problems and gain more insight into the neural network architecture . Other related composite optimization models are also investigated thoroughly in ( Lewis & Wright , 2016 ; Zhang & Xiao , 2019 ; Tran-Dinh et al. , 2020 ) . Our model is different from these works as it does not have a common function wrapping outside the finite-sum term , as in ( Lewis & Wright , 2016 ) . Note that a broad class of variance reduction algorithms ( e.g . SAG ( Le Roux et al. , 2012 ) , SAGA ( Defazio et al. , 2014 ) , SVRG ( Johnson & Zhang , 2013 ) , SARAH ( Nguyen et al. , 2017 ) ) is designed specifically for the finite-sum formulation and is known to have certain benefits over Gradient Descent . In addition , the multilevel composite problem considered in ( Zhang & Xiao , 2021 ) also covers the empirical risk minimization problem . However , our formulation does not match their work since our inner function $h ( w ; i )$ is not an independent expectation over some data distribution , but a specific function that depends on the current data . Global Convergence for Neural Networks . A recent popular line of research studies the dynamics of optimization methods on some specific neural network architectures . There are some early works that show the global convergence of Gradient Descent ( GD ) for simple linear networks and two-layer networks ( Brutzkus et al. , 2018 ; Soudry et al. , 2018 ; Arora et al. , 2019 ; Du et al. , 2019b ) . Some further works extend these results to deep learning architectures ( Allen-Zhu et al. , 2019 ; Du et al. , 2019a ; Zou & Gu , 2019 ) .
These theoretical guarantees are generally proved for the case when the last output layer is fixed , which is not standard in practice . A recent work ( Nguyen & Mondelli , 2020 ) proves global convergence for GD when all layers are trained under some initial conditions . However , these results are for neural networks without bias neurons , and it is unclear how these analyses can be extended to handle the bias terms of deep networks with different activations . Our novel framework and algorithms do not exclude learning bias layers , as in ( Nguyen & Mondelli , 2020 ) . Using a different algorithm , Brutzkus et al . ( 2018 ) investigate Stochastic Gradient Descent ( SGD ) for two-layer networks in a restricted linearly separable data setting . This line of research continues with the works of Allen-Zhu et al . ( 2019 ) ; Zou et al . ( 2018 ) and later with Zou & Gu ( 2019 ) . They justify the global convergence of SGD for deep neural networks with some probability depending on the number of input data and the initialization process . Over-Parameterized Settings and other Assumptions for Machine Learning . Most modern learning architectures are over-parameterized , which means that the number of parameters is very large and often far more than the number of input data . Some recent works prove the global convergence of Gradient Descent when the number of neurons is extremely large , e.g . ( Zou & Gu , 2019 ) requires $\Omega ( n^8 )$ neurons for every hidden layer , and ( Nguyen & Mondelli , 2020 ) improves this number to $\Omega ( n^3 )$ . If the initial point satisfies some special conditions , then they can show a better dependence of $\Omega ( n )$ . In Allen-Zhu et al . ( 2019 ) , the authors initialize the weights using a random Gaussian distribution where the variance depends on the dimension of the problem . In the non-convex setting , they prove the convergence of SGD using the assumption that the dimension depends inversely on the tolerance .
We will discuss how these over-parameterized settings might be a necessary condition for developing our theory . Other standard assumptions for machine learning include the bounded gradient assumption ( Nemirovski et al. , 2009 ; Shalev-Shwartz et al. , 2007 ; Reddi et al. , 2016 ; Tran et al. , 2021 ) . It is also common to assume that all the iterations of an algorithm stay in a bounded domain ( Duchi et al. , 2011a ; Levy et al. , 2018 ; Gürbüzbalaban et al. , 2019 ; Reddi et al. , 2018 ; Vaswani et al. , 2021 ) . Since we are analyzing a new composite formulation , it is understandable that our assumptions may also not be standard . However , we believe that there is a strong connection between our assumptions and the traditional setting of machine learning . We will discuss this point more clearly in Section 4 . | The paper provides a new gradient-based algorithm. The algorithm is based on the observation that a loss function for a single sample can be written as a composition of two functions (the logits and the actual loss function). It computes the direction by means of solving a quadratic MSE problem. The authors provide a convergence analysis of the algorithm (there is a version where the quadratic problem is solved explicitly through a closed-form expression and a version where gradient descent is applied to solve the problem approximately). There are no computational experiments. The main contributions are in the algorithms themselves and the accompanying analyses. It is unclear how novel the proof techniques are, since everything resembles second-order algorithms. The authors also claim the actual reformulation to be a novel contribution; however, such a formulation is straightforward and used in many contexts (some of my lecture slides from years ago show the formulation used by the authors as a possible formulation for the overall loss function). Despite this, the design of the algorithm should get credit. | SP:030b2045318e6e4189685793b5eab37ffb8b1a82
Efficient representations for privacy-preserving inference | 1 INTRODUCTION . In recent years , deep neural networks have achieved state-of-the-art accuracy for tasks such as image recognition . They have been deployed in a range of sectors , powering a wide variety of applications such as recommendation systems , medical diagnosis , and content filtering . Machine Learning as a Service ( MLaaS ) is a framework in which cloud services apply machine learning algorithms on user-supplied data to produce an inference result which is then returned to the user . Cloud systems are an attractive platform for deploying pretrained models due to the relatively low cost and the availability of remote servers . However , the data has to be decrypted before inference , which allows a server-side adversary to have access to the user ’ s information . Homomorphic encryption ( HE ) can be applied to enable inference to be performed on encrypted data , enabling the result to be delivered to the user without risk of the server accessing the original data or the inference result . CRYPTONETS ( Gilad-Bachrach et al. , 2016 ) was the first application of HE to secure neural network inference , and leveraged the YASHE ' scheme to perform MNIST classifications . CRYPTONETS suffers from a high number of homomorphic operations ( HOPs ) , with a single MNIST inference requiring ∼290,000 homomorphic multiplications and ∼250 seconds of inference latency . Subsequent works such as FASTER CRYPTONETS ( Chou et al. , 2018 ) used neural network surgery and a faster encryption scheme to reduce the inference latency of CRYPTONETS . Later works utilised ciphertext rotations as opposed to the SIMD packing scheme , enabling convolutional and fully connected layers to be computed using far fewer HOPs ( Juvekar et al. , 2018 ; Mishra et al. , 2020 ) .
This has been shown to reduce the inference latency of MNIST models by more than an order of magnitude , bringing confidence that private inference can be practical . LOLA ( Brutzkus et al. , 2019 ) proposed novel representations for intermediate tensors , and their MNIST model requires only 2.2 seconds for one inference . One drawback of their representations is poor scalability to harder datasets such as CIFAR-10 , due to the limited number of slots per ciphertext acting as a barrier to the size of tensors that are practical . The limited set of operations supported by HE schemes prevents the secure computation of non-polynomial activation functions , which impedes model training due to the problem of exploding gradients ( Chou et al. , 2018 ) . To address this , others have proposed the use of secure multi-party computation to enable secure computation of non-polynomial activations using multiple parties ( Juvekar et al. , 2018 ; Mishra et al. , 2020 ) . Despite enabling the use of popular non-polynomial activations such as ReLU , relying on multi-party computation incurs large amounts of data transfer between parties and requires the parties involved to be online and to have feasibly fast data transfer rates . For example , GAZELLE ( Juvekar et al. , 2018 ) requires ∼1 GB of data transfer per inference for their CIFAR-10 model , and DELPHI ( Mishra et al. , 2020 ) requires ∼2 GB per inference with a ResNet-32 model . Single-party approaches often choose to approximate the ReLU activation using a second-degree polynomial ( Gilad-Bachrach et al. , 2016 ; Chou et al. , 2018 ; Brutzkus et al. , 2019 ) . In this work , we introduce a framework for secure inference on CNNs , designed to reduce the number of HOPs required per inference whilst preserving prediction accuracy .
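To make the last point concrete, here is one simple way to obtain such a degree-2 polynomial: a least-squares fit to ReLU over an assumed input range. The interval [-3, 3] is an assumption for the demo; the cited works pick their approximations differently (e.g. CRYPTONETS uses the square activation).

```python
import numpy as np

# Fit a degree-2 polynomial to ReLU on an assumed range [-3, 3]:
# HE schemes can evaluate polynomials, but not max(0, x) directly.
x = np.linspace(-3.0, 3.0, 601)
relu = np.maximum(0.0, x)
coeffs = np.polyfit(x, relu, deg=2)   # highest-degree coefficient first
approx = np.polyval(coeffs, x)
max_err = np.abs(approx - relu).max()
# By symmetry, the fitted linear coefficient is exactly 1/2 (the odd part
# of ReLU is x/2); the quadratic term approximates |x|/2 on this range.
```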
Our contributions can be summarised as follows : • We integrate the convolution-packing method from LOLA with the fast matrix-vector product method introduced by Halevi and Shoup ( Halevi & Shoup , 2019 ) and utilised by Juvekar et al . ( 2018 ) in their multi-party computation framework . Intermediate convolutions are converted into fully-connected layers and computed as matrix-vector products . We show that utilising the Halevi-Shoup method allows the use of rotations and ciphertext packing to scale better than the representations in LOLA when applied to larger convolutional layers . We perform a more detailed investigation of the scalability of the methods used in LOLA to larger models and show that they are significantly outperformed by our proposed method . • We compare our framework against LOLA by constructing models for MNIST and CIFAR-10 . Our main evaluation criterion is the number of HOPs required per inference for a model . With the same layer parameters as LOLA , we obtain over a two-fold reduction in the number of HOPs per inference . Our CIFAR-10 model achieves similar accuracy to that of LOLA ’ s but uses far fewer operations . 2 BACKGROUND AND PREREQUISITES . 2.1 THREAT MODEL . Our threat model concerns the machine learning as a service ( MLaaS ) paradigm , in which the user first sends data to a server , which then performs machine learning inference on the received data using some model . The inference result is then delivered back to the user . For example , consider an online machine learning service which claims to detect the probability of a person having COVID-19 from an audio recording of their cough . Suppose that Alice decides to send a recording of her cough to this service , in the hopes of receiving a diagnosis .
There are two key threats in this scenario : ( i ) the risk of an adversary eavesdropping on the data transmission , and ( ii ) the risk of the MLaaS provider performing unauthorised access on the user ’ s data – in this case , the recording produced by Alice . The first threat can be mitigated using standard cryptographic protocols . However , the second risk is harder to address , especially if the user data is decrypted before inference ( Bae et al. , 2018 ) . The use of HE mitigates both risks . The data is encrypted using HE , which is sufficient to prevent an adversary from eavesdropping . In addition , the provider is only able to perform computations on the encrypted data and will output the inference result without being able to decrypt . 2.2 HOMOMORPHIC OPERATIONS . Several recent HE schemes such as BFV ( Brakerski & Vaikuntanathan , 2011 ) and CKKS ( Cheon et al. , 2017 ) are based on the RLWE problem and support SIMD ciphertext operations . On a high level , such schemes establish a mapping between real vectors and a plaintext space . The plaintext space is usually the polynomial ring $R = \mathbb{Z} [ X ] / ( X^N + 1 )$ . In particular , this is a cyclotomic polynomial ring $R = \mathbb{Z} [ X ] / ( \Phi_M ( X ) )$ where $\Phi_M ( X )$ is the $M$-th cyclotomic polynomial and $M = 2N$ is a power of two . The decoding operation maps an element in $R$ to a vector that is either real or complex , depending on the scheme used . The encoding operation performs the reverse . Plaintext polynomials are encrypted into ciphertext polynomials using a public key . The operations of addition and multiplication can be performed over ciphertexts using an evaluation key . Since each ciphertext corresponds to a vector of real ( or complex ) values , a single homomorphic operation between two ciphertexts constitutes an element-wise operation between two vectors . In addition , such schemes support rotations of the slots within a ciphertext , with the use of Galois automorphisms . 2.3 FLATTENED CONVOLUTIONS .
Consider the convolution of an image $I$ with a filter $f$ . For simplicity , assume that both the image and filter are square , and that the vertical and horizontal strides of the filter are equal . Let $I \in \mathbb{R}^{d_{in} \times d_{in} \times c_{in}}$ , and $f \in \mathbb{R}^{k \times k \times c_{in}}$ . Denote the stride as $s$ and the padding as $p$ . Now , the output feature map $J$ is such that $J \in \mathbb{R}^{d_{out} \times d_{out}}$ where $d_{out} = \lfloor \frac{d_{in} - k + 2p}{s} \rfloor + 1$ . A full convolutional layer that outputs $c_{out}$ feature maps will require a convolution of the input image with each of the $c_{out}$ filters . Consider the vector $v$ obtained by flattening each output feature map row-wise . This can be expressed as a matrix-vector product of the form $v = A \cdot w \in \mathbb{R}^{d_{out}^2 \cdot c_{out}}$ where $A \in \mathbb{R}^{d_{out}^2 \cdot c_{out} \times d_{in}^2 \cdot c_{in}}$ and $w$ is the flattened representation of $I$ . 2.4 FAST CONVOLUTION . The first convolutional layer in a CNN can be represented using convolution-packing ( Brutzkus et al. , 2019 ) . The convolution of an input image $f$ with a filter $g$ of width $w$ , height $h$ and depth $d$ is $( f * g ) [ i , j ] = \sum_{x=0}^{w-1} \sum_{y=0}^{h-1} \sum_{z=0}^{d-1} g [ x , y , z ]\, f [ i + x , j + y , z ]$ . ( 1 ) Observe that the parallelism inherent in this computation enables it to be vectorized as $f * g = \sum_{x=0}^{w-1} \sum_{y=0}^{h-1} \sum_{z=0}^{d-1} g [ x , y , z ] \cdot F ( x , y , z )$ , ( 2 ) where $F ( x , y , z )$ is a matrix such that $F ( x , y , z )_{ij} = f [ i + x , j + y , z ]$ . For an input image $I \in \mathbb{R}^{c_{in} \times d_{in} \times d_{in}}$ and a kernel of window size $k \times k$ , the input image is represented as $k^2 \cdot c_{in}$ vectors $v_1 , \dots , v_{k^2 \cdot c_{in}}$ , where $v_i$ contains all elements convolved with the $i$-th value in the filter . Denote the corresponding ciphertexts as $ct_1 , \dots , ct_{k^2 \cdot c_{in}}$ . The process of producing the $j$-th output feature map is now reduced to a ciphertext-plaintext multiplication of each $ct_i$ with the $i$-th value in the $j$-th filter . In total , the process requires $k^2 \cdot c_{in}$ ciphertext-scalar multiplications per output feature map , leading to a total of $k^2 \cdot c_{in} \cdot c_{out}$ multiplications . 2.5 MATRIX-VECTOR MULTIPLICATION .
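The convolution-packing computation of Section 2.4 above can be checked in plaintext with NumPy. This is a plaintext simulation only (no encryption); the sizes, stride 1, and zero padding are assumptions for the demo. Each shifted plane plays the role of one packed vector, and each scalar multiplication mirrors one ciphertext-scalar product, k^2 * c_in of them per output feature map.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, c_in, k = 6, 2, 3                       # toy sizes
f = rng.standard_normal((d_in, d_in, c_in))   # input image
g = rng.standard_normal((k, k, c_in))         # one filter
d_out = d_in - k + 1                          # stride 1, no padding

# Direct convolution (Eq. 1): loop over output positions.
direct = np.zeros((d_out, d_out))
for i in range(d_out):
    for j in range(d_out):
        direct[i, j] = np.sum(g * f[i:i + k, j:j + k, :])

# Convolution-packed form (Eq. 2): one shifted plane F(x, y, z) per
# filter coefficient, combined by scalar multiplication and addition.
packed = np.zeros((d_out, d_out))
for x in range(k):
    for y in range(k):
        for z in range(c_in):
            packed += g[x, y, z] * f[x:x + d_out, y:y + d_out, z]
```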
Multiplying a plaintext weight matrix A ∈ R^(m × n) with a ciphertext vector can be achieved naively by first performing m ciphertext multiplications of the vector with each row of A. Then, for each product ciphertext, we can compute the sum of all of its elements by applying a rotate-and-sum procedure (Halevi & Shoup, 2019): we first rotate the ciphertext by N/2 slots and add the result to the original. The procedure is then repeated for N/4 slots, N/8 slots, and so on, until the sum resides in every slot of the ciphertext. The resulting dot products can then be summed together. This basic approach requires O(m log n) rotations.

Halevi & Shoup (2019) introduced a more efficient method of computing the encrypted matrix-vector product A · v for square A. GAZELLE (Juvekar et al., 2018) extended the approach to support rectangular A ∈ R^(m × n). The method works by decomposing A into its m generalized diagonals, denoted {d_1, d_2, ..., d_m}, where d_i = [A_{i,0}, A_{i+1,1}, ..., A_{i+n−1,n−1}] and all row indices are taken modulo m. Each d_i is then rotated i positions to align values belonging to the same row of A into the same column(s), and each rotated diagonal is multiplied with the corresponding rotation of v. The ciphertexts are summed, and a final rotate-and-sum procedure is applied to the resulting ciphertext. Overall, this procedure requires O(m) multiplications and O(m + log₂ n) rotations. We propose an improved variant of this approach for our framework, described in Section 3.1. | The paper considers the problem of privacy-preserving inference on deep learning models using homomorphic encryption. HE is a special type of encryption that allows one to perform certain types of computations while the data is encrypted. However, the catch is that HE-based inference can be significantly slower than its non-private counterpart.
The paper claims to improve upon the existing state of the art HE based inference approaches significantly -- two orders of magnitude. | SP:89a1b45eb1420f7259acaf8289fcd30523941e03 |
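The flattened-convolution view from Section 2.3 of the excerpt above (a convolution expressed as a matrix-vector product v = A · w) can be checked on a toy example. The sketch below is illustrative plain Python with no encryption involved: it builds the matrix A for a single-channel, unpadded convolution and verifies that A · w reproduces the directly computed feature map. Function names and the 4×4 example are hypothetical, not from the paper.

```python
from math import floor

def conv2d(img, filt, stride=1):
    # Direct 2-D valid convolution (single channel, square inputs, no padding).
    d_in, k = len(img), len(filt)
    d_out = floor((d_in - k) / stride) + 1
    return [[sum(filt[x][y] * img[i * stride + x][j * stride + y]
                 for x in range(k) for y in range(k))
             for j in range(d_out)] for i in range(d_out)]

def conv_as_matrix(d_in, filt, stride=1):
    # Matrix A such that A @ flatten(img) == flatten(conv2d(img, filt)).
    k = len(filt)
    d_out = floor((d_in - k) / stride) + 1
    A = []
    for i in range(d_out):
        for j in range(d_out):
            row = [0.0] * (d_in * d_in)
            for x in range(k):
                for y in range(k):
                    row[(i * stride + x) * d_in + (j * stride + y)] = filt[x][y]
            A.append(row)
    return A

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
filt = [[1, 0], [0, -1]]
w = [v for r in img for v in r]                        # flatten row-wise
A = conv_as_matrix(4, filt)
v = [sum(a * b for a, b in zip(row, w)) for row in A]  # A @ w
direct = [u for r in conv2d(img, filt) for u in r]
assert v == direct
```

In an HE setting, w would be packed into ciphertext slots and the product A · w evaluated homomorphically, which is what makes the matrix representation of intermediate convolutions useful.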
Efficient representations for privacy-preserving inference | 1 INTRODUCTION. In recent years, deep neural networks have achieved state-of-the-art accuracy for tasks such as image recognition. They have been deployed in a range of sectors, powering a wide variety of applications such as recommendation systems, medical diagnosis, and content filtering. Machine Learning as a Service (MLaaS) is a framework in which cloud services apply machine learning algorithms to user-supplied data to produce an inference result, which is then returned to the user. Cloud systems are an attractive platform for deploying pretrained models due to the relatively low cost and the availability of remote servers. However, the data has to be decrypted before inference, which allows a server-side adversary to access the user's information. Homomorphic encryption (HE) can be applied to enable inference to be performed on encrypted data, enabling the result to be delivered to the user without risk of the server accessing the original data or the inference result. CRYPTONETS (Gilad-Bachrach et al., 2016) was the first application of HE to secure neural network inference, and leveraged the YASHE′ scheme to perform MNIST classifications. CRYPTONETS suffers from a high number of homomorphic operations (HOPs), with a single MNIST inference requiring ~290,000 homomorphic multiplications and ~250 seconds of inference latency. Subsequent works such as FASTER CRYPTONETS (Chou et al., 2018) used neural network surgery and a faster encryption scheme to reduce the inference latency of CRYPTONETS. Later works utilised ciphertext rotations as opposed to the SIMD packing scheme, enabling convolutional and fully connected layers to be computed using far fewer HOPs (Juvekar et al., 2018; Mishra et al., 2020).
This has been shown to reduce the inference latency of MNIST models by more than an order of magnitude, bringing confidence that private inference can be practical. LOLA (Brutzkus et al., 2019) proposed novel representations for intermediate tensors, and their MNIST model requires only 2.2 seconds for one inference. One drawback of their representations is poor scalability to harder datasets such as CIFAR-10, as the limited number of slots per ciphertext acts as a barrier to the size of tensors that are practical.

The limited set of operations supported by HE schemes prevents the secure computation of non-polynomial activation functions, which impedes model training due to the problem of exploding gradients (Chou et al., 2018). To address this, others have proposed the use of secure multi-party computation to enable secure computation of non-polynomial activations using multiple parties (Juvekar et al., 2018; Mishra et al., 2020). Despite enabling the use of popular non-polynomial activations such as ReLU, relying on multi-party computation incurs large amounts of data transfer between parties and requires the parties involved to be online and to have feasibly fast data transfer rates. For example, GAZELLE (Juvekar et al., 2018) requires ~1 GB of data transfer per inference for their CIFAR-10 model, and DELPHI (Mishra et al., 2020) requires ~2 GB per inference with a ResNet-32 model. Single-party approaches often choose to approximate the ReLU activation using a second-degree polynomial (Gilad-Bachrach et al., 2016; Chou et al., 2018; Brutzkus et al., 2019). In this work, we introduce a framework for secure inference on CNNs, designed to reduce the number of HOPs required per inference whilst preserving prediction accuracy.
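The degree-2 activation replacement mentioned above can be sketched concretely. The snippet below fits a quadratic to ReLU by discrete least squares and then evaluates it in Horner form using only additions and multiplications, the only operations an HE circuit provides. The coefficients are produced by this toy fit, not taken from any of the cited papers, and all names are hypothetical.

```python
# Discrete least-squares fit of a degree-2 polynomial to ReLU on [-1, 1],
# then evaluation using only add/multiply (Horner form), as an HE scheme
# would require. Illustrative only; coefficients are from this toy fit.

def solve3(M, y):
    # Gauss-Jordan elimination for a 3x3 linear system.
    M = [row[:] + [v] for row, v in zip(M, y)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

xs = [i / 100 for i in range(-100, 101)]
relu = [max(x, 0.0) for x in xs]

# Normal equations A^T A c = A^T y for the basis {1, x, x^2}.
basis = [[1.0, x, x * x] for x in xs]
ATA = [[sum(b[i] * b[j] for b in basis) for j in range(3)] for i in range(3)]
ATy = [sum(b[i] * y for b, y in zip(basis, relu)) for i in range(3)]
c0, c1, c2 = solve3(ATA, ATy)

def poly_act(x):
    return c0 + x * (c1 + x * c2)   # 2 multiplications, 2 additions

max_err = max(abs(poly_act(x) - y) for x, y in zip(xs, relu))
assert max_err < 0.15               # worst-case gap to ReLU on [-1, 1]
```

The residual error of such approximations is the accuracy cost that single-party frameworks trade against the communication cost of multi-party ReLU evaluation.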
Our contributions can be summarised as follows:
• We integrate the convolution-packing method from LOLA with the fast matrix-vector product method introduced by Halevi & Shoup (2019) and utilised by Juvekar et al. (2018) in their multi-party computation framework. Intermediate convolutions are converted into fully-connected layers and computed as matrix-vector products. We show that utilising the Halevi-Shoup method allows the use of rotations and ciphertext packing to scale better than the representations in LOLA when applied to larger convolutional layers. We perform a more detailed investigation of the scalability of the methods used in LOLA to larger models and show that they are significantly outperformed by our proposed method.
• We compare our framework against LOLA by constructing models for MNIST and CIFAR-10. Our main evaluation criterion is the number of HOPs required per inference for a model. With the same layer parameters as LOLA, we obtain over a two-fold reduction in the number of HOPs per inference. Our CIFAR-10 model achieves similar accuracy to LOLA's but uses far fewer operations.

2 BACKGROUND AND PREREQUISITES.

2.1 THREAT MODEL. Our threat model concerns the machine learning as a service (MLaaS) paradigm, in which the user first sends data to a server, which then performs machine learning inference on the received data using some model. The inference result is then delivered back to the user. For example, consider an online machine learning service which claims to detect the probability of a person having COVID-19 from an audio recording of their cough. Suppose that Alice decides to send a recording of her cough to this service, in the hope of receiving a diagnosis.
There are two key threats in this scenario: (i) the risk of an adversary eavesdropping on the data transmission, and (ii) the risk of the MLaaS provider performing unauthorised access on the user's data – in this case, the recording produced by Alice. The first threat can be mitigated using standard cryptographic protocols. However, the second risk is harder to address, especially if the user data is decrypted before inference (Bae et al., 2018). The use of HE mitigates both risks. The data is encrypted using HE, which is sufficient to prevent an adversary from eavesdropping. In addition, the provider is only able to perform computations on the encrypted data and will output the inference result without being able to decrypt it.

2.2 HOMOMORPHIC OPERATIONS. Several recent HE schemes such as BFV (Brakerski & Vaikuntanathan, 2011) and CKKS (Cheon et al., 2017) are based on the RLWE problem and support SIMD ciphertext operations. At a high level, such schemes establish a mapping between real vectors and a plaintext space. The plaintext space is usually the polynomial ring R = Z[X]/(X^N + 1). In particular, this is the cyclotomic polynomial ring R = Z[X]/(Φ_M(X)), where Φ_M(X) is the M-th cyclotomic polynomial and M = 2N is a power of two. The decoding operation maps an element of R to a vector that is either real or complex, depending on the scheme used; the encoding operation performs the reverse. Plaintext polynomials are encrypted into ciphertext polynomials using a public key. Addition and multiplication can be performed over ciphertexts using an evaluation key. Since each ciphertext corresponds to a vector of real (or complex) values, a single homomorphic operation between two ciphertexts constitutes an element-wise operation between two vectors. In addition, such schemes support rotations of the slots within a ciphertext, using Galois automorphisms.

2.3 FLATTENED CONVOLUTIONS.
Consider the convolution of an image I with a filter f. For simplicity, assume that both the image and the filter are square and that the vertical and horizontal strides of the filter are equal. Let I ∈ R^(d_in × d_in × c_in) and f ∈ R^(k × k × c_in). Denote the stride by s and the padding by p. The output feature map J then satisfies J ∈ R^(d_out × d_out), where

d_out = ⌊(d_in − k + 2p) / s⌋ + 1.

A full convolutional layer that outputs c_out feature maps requires a convolution of the input image with each of the c_out filters. Consider the vector v obtained by flattening each output feature map row-wise. This can be expressed as a matrix-vector product of the form v = A · w ∈ R^(d_out² · c_out), where A ∈ R^((d_out² · c_out) × (d_in² · c_in)) and w is the flattened representation of I.

2.4 FAST CONVOLUTION. The first convolutional layer in a CNN can be represented using convolution-packing (Brutzkus et al., 2019). The convolution of an input image f with a filter g of width w, height h and depth d is

(f ∗ g)[i, j] = Σ_{x=0}^{w−1} Σ_{y=0}^{h−1} Σ_{z=0}^{d−1} g[x, y, z] · f[i + x, j + y, z].  (1)

Observe that the parallelism inherent in this computation enables it to be vectorized as

f ∗ g = Σ_{x=0}^{w−1} Σ_{y=0}^{h−1} Σ_{z=0}^{d−1} g[x, y, z] · F(x, y, z),  (2)

where F(x, y, z) is the matrix with entries F(x, y, z)_{ij} = f[i + x, j + y, z]. For an input image I ∈ R^(c_in × d_in × d_in) and a kernel of window size k × k, the input image is represented as k² · c_in vectors v_1, ..., v_{k²·c_in}, where v_i contains all elements convolved with the i-th value in the filter. Denote the corresponding ciphertexts by ct_1, ..., ct_{k²·c_in}. Producing the j-th output feature map is now reduced to a ciphertext-plaintext multiplication of each ct_i with the i-th value of the j-th filter. In total, the process requires k² · c_in ciphertext-scalar multiplications per output feature map, leading to a total of k² · c_in · c_out multiplications.

2.5 MATRIX-VECTOR MULTIPLICATION.
Multiplying a plaintext weight matrix A ∈ R^(m × n) with a ciphertext vector can be achieved naively by first performing m ciphertext multiplications of the vector with each row of A. Then, for each product ciphertext, we can compute the sum of all of its elements by applying a rotate-and-sum procedure (Halevi & Shoup, 2019): we first rotate the ciphertext by N/2 slots and add the result to the original. The procedure is then repeated for N/4 slots, N/8 slots, and so on, until the sum resides in every slot of the ciphertext. The resulting dot products can then be summed together. This basic approach requires O(m log n) rotations.

Halevi & Shoup (2019) introduced a more efficient method of computing the encrypted matrix-vector product A · v for square A. GAZELLE (Juvekar et al., 2018) extended the approach to support rectangular A ∈ R^(m × n). The method works by decomposing A into its m generalized diagonals, denoted {d_1, d_2, ..., d_m}, where d_i = [A_{i,0}, A_{i+1,1}, ..., A_{i+n−1,n−1}] and all row indices are taken modulo m. Each d_i is then rotated i positions to align values belonging to the same row of A into the same column(s), and each rotated diagonal is multiplied with the corresponding rotation of v. The ciphertexts are summed, and a final rotate-and-sum procedure is applied to the resulting ciphertext. Overall, this procedure requires O(m) multiplications and O(m + log₂ n) rotations. We propose an improved variant of this approach for our framework, described in Section 3.1. | To accelerate privacy-preserving inference through convolutional neural networks (CNNs) with homomorphic encryption (HE), the authors aim to reduce the number of homomorphic operations (HOPs) required by the algorithm, reducing the data that needs to be transferred while preserving prediction accuracy. Using the LOLA method as a baseline, the authors use Halevi-Shoup (HS), which in general requires far fewer rotation operations.
In the experimental results, the HS method requires half of the operations in comparison to the LOLA method. By further simplifying the structure of the network, only about 20% of HOPs are required. These simplifications bring about 1% and 27% accuracy loss. | SP:89a1b45eb1420f7259acaf8289fcd30523941e03 |
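The rotate-and-sum and diagonal (Halevi-Shoup) matrix-vector procedures from Section 2.5 of the excerpt above can be simulated on plaintext vectors. In the sketch below, a ciphertext "rotation" is modeled as a cyclic shift of a Python list and element-wise add/multiply stand in for the homomorphic operations; no encryption is involved. Only the square-matrix case is shown (the rectangular GAZELLE variant adds the final rotate-and-sum), and all names are hypothetical.

```python
# Plaintext simulation of rotate-and-sum and the diagonal (Halevi-Shoup)
# matrix-vector method. Rotations are cyclic shifts; illustrative only.

def rot(v, k):
    k %= len(v)
    return v[k:] + v[:k]                      # rotate left by k slots

def rotate_and_sum(v):
    # log2(n) rotations leave the sum of all slots in every slot.
    step = len(v) // 2                        # n assumed to be a power of two
    while step >= 1:
        v = [a + b for a, b in zip(v, rot(v, step))]
        step //= 2
    return v

def diag_matvec(A, v):
    # Square case: A.v = sum_i diag_i(A) * rot(v, i), where diag_i(A)[j] is
    # A[j][(j + i) mod n]. Uses n multiplications and n - 1 rotations.
    n = len(v)
    acc = [0] * n
    for i in range(n):
        diag = [A[j][(j + i) % n] for j in range(n)]
        vr = rot(v, i)
        acc = [a + d * x for a, d, x in zip(acc, diag, vr)]
    return acc

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
v = [1, 0, 2, 1]
naive = [sum(r * x for r, x in zip(row, v)) for row in A]
assert diag_matvec(A, v) == naive             # [11, 27, 43, 59]
assert rotate_and_sum([1, 2, 3, 4]) == [10, 10, 10, 10]
```

The point of the diagonal layout is that every slot of the result is produced simultaneously, which is why it replaces the O(m log n) rotations of the naive row-by-row approach.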
NormFormer: Improved Transformer Pretraining with Extra Normalization | 1 INTRODUCTION. The original transformer architecture (Vaswani et al., 2017) applies Layer Normalization (Ba et al., 2016) after each sublayer's residual connection ("Post-LN") in order to reduce the variance of the inputs to the following sublayer, i.e.:

PostLN(x) = LayerNorm(x + Sublayer(x)), with
LayerNorm(x) = (x − E[x]) / √(Var[x] + ε) · γ + β,

where γ and β are trainable parameters and ε is a small constant. Recent work has shown empirically and theoretically that Post-LN transformers tend to have larger-magnitude gradients in later layers compared to earlier layers (Xiong et al., 2020) and has advocated moving the LayerNorm operation to the beginning of each sublayer ("Pre-LN"; see Figure 1, left), i.e.:

PreLN(x) = x + Sublayer(LayerNorm(x)).

In practice, Pre-LN transformers can be trained with larger learning rates and shorter learning rate warmup, and often yield improved performance compared to Post-LN transformers (Xiong et al., 2020), so most recent large pretrained language models tend to use Pre-LN transformers (Baevski & Auli, 2019; Radford et al., 2019; Raffel et al., 2020; Brown et al., 2020; Lieber et al., 2021). In this work we show that, while Pre-LN improves stability over Post-LN, it has the opposite side effect: gradients at earlier layers tend to be larger than gradients at later layers, thereby limiting the learning rate.¹ We propose NormFormer, which alleviates the gradient magnitude mismatch by adding 3 normalization operations to each layer (see Figure 1, middle). These operations reduce gradients to early layers and increase gradients to later layers, bringing their magnitudes closer together.

¹ Intuitively, training stably requires that the largest weight update not be too large, while training efficiently requires large weight updates.
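The LayerNorm formula and the two sublayer placements above can be checked numerically. In the sketch below, the "sublayer" is an arbitrary stand-in function rather than a real attention or FFN block, and all names are hypothetical; it only illustrates the formulas, not the paper's training behavior.

```python
# Toy numeric version of the formulas above: LayerNorm with trainable gamma,
# beta and a small epsilon, plus the Post-LN and Pre-LN placements.

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [(v - mean) / (var + eps) ** 0.5 * gamma + beta for v in x]

def sublayer(x):                       # stand-in transformation, not real MHA/FFN
    return [2.0 * v + 1.0 for v in x]

def post_ln(x):                        # LayerNorm(x + Sublayer(x))
    return layer_norm([a + b for a, b in zip(x, sublayer(x))])

def pre_ln(x):                         # x + Sublayer(LayerNorm(x))
    return [a + b for a, b in zip(x, sublayer(layer_norm(x)))]

x = [1.0, 2.0, 3.0, 4.0]
y = layer_norm(x)
assert abs(sum(y)) < 1e-6              # normalized output has (near-)zero mean
assert len(post_ln(x)) == len(pre_ln(x)) == 4
```

Note the asymmetry the paper exploits: in Post-LN the residual stream itself is normalized after every sublayer, whereas in Pre-LN the raw residual passes through unnormalized, which is what lets gradient magnitudes drift apart across depth.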
Compared to compute-matched, well-tuned Pre-LN baselines, NormFormer models reach target pretraining perplexities faster and achieve better pretraining perplexities and downstream task performance. The rest of this paper is organized as follows: Section 2 describes the proposed modifications. Section 3 describes related work. Section 5 shows pretraining and downstream task performance for fully trained NormFormer models against well-tuned, compute-matched baselines. Section 6 shows the gradient mismatch introduced by Pre-LN and how NormFormer alleviates it. Section 6.1 analyzes residual scaling, a related technique proposed to stabilize Post-LN architectures (Xiong et al., 2020; Zhu et al., 2021). Section 7 shows that removing any of the added operations degrades performance and that NormFormer improves over the baseline at a wide range of hyperparameter configurations. Section 9.1 compares NormFormer to related work from other domains.

2 APPROACH.

2.1 NORMFORMER. NormFormer includes three modifications to the Pre-LN transformer: we apply head-wise scaling inside the attention module and add two additional LayerNorm operations, one after the attention module and a second after the first fully connected layer. The modifications introduce a small number of additional learnable parameters, which provide a cost-effective way for each layer to change the magnitude of its features, and therefore the magnitude of the gradients to subsequent components. The changes are visualized in Figure 1 and described below.

Scaling Attention Heads. The standard multi-head attention operation is defined as:

MultiHeadAttention(Q, K, V) = Concat(h_1, ..., h_n) W^O
h_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
Attention(Q, K, V) = softmax(Q K^T / √d_k) V,

where n is the number of heads, i is the attention head index, d_k is the dimensionality of the keys, and W^O, W_i^Q, W_i^K, W_i^V are learned projection matrices for the output, query, key and value, respectively. We propose scaling the output of each attention head via learned scalar coefficients γ_i:

HeadScaleMHA(Q, K, V) = Concat(γ_1 h_1, ..., γ_n h_n) W^O,

where the γ_i are learnable parameters initialized to 1.

Additional Layer Normalization and Putting it All Together. In the Pre-LN transformer, each layer l modifies an input x_l as follows:

x_{l+1}^PreLN = FFN(MHA(x_l)), where
MHA(x) = x + MultiHeadAttention(LN(x), LN(x), LN(x))
FFN(x) = x + σ(LN(x) W_1 + b_1) W_2 + b_2
LN(x) = LayerNorm(x).

In this work σ is the GELU non-linear activation introduced in Hendrycks & Gimpel (2016). Our overall method, NormFormer, instead modifies each input x_l as:

x_{l+1}^NormFormer = NormFFN(NormScaledMHA(x_l)), where
NormScaledMHA(x) = x + LN(HeadScaleMHA(LN(x), LN(x), LN(x)))
NormFFN(x) = x + LN(σ(LN(x) W_1 + b_1)) W_2 + b_2,

where the newly introduced operations are HeadScaleMHA and the two additional LN applications.

3 RELATED WORK.

Architectural Modifications. GradInit (Zhu et al., 2021) introduces a set of scalars and biases for initialization based on a variance heuristic, and Admin (Liu et al., 2020) applies a similar heuristic in profiling and initialization stages. These works also use variants of our ResScale operation, which we find helpful at small scale and harmful at large scale. Our approach, in contrast, only has new learnable parameters without variance heuristics, and has no extra stages or changes in initialization. Shazeer (2020) proposes FFN-GeGLU, which includes scaling but no normalization, in the same position as our FFN LN. Ding et al.
(2021) propose related stabilization strategies for text-to-image generation tasks with larger models, including a down-scaled embedding gradient, a slightly different LN formulation, LN after the final fully connected layer, and the same post-attention LN. Section 9.1 compares NormFormer to these proposals, as well as the T5 LayerNorm variant (Raffel et al., 2020), which removes the bias and the mean subtraction from the normalization. Our HeadScale operation is related to that used in Chen et al. (2021), but used differently: whereas that work prunes attention heads with low γ parameters, we use the γ parameters to improve pretraining performance. Press et al. (2020a) propose an architecture where, instead of interleaving attention and feed-forward sublayers, the attention all happens first. This increases the number of late FFN parameters, rather than increasing their importance and gradient norm as our FFN LN does, and does not impact stability.

Residual Scaling. Standard Post-LN transformers simply sum the previous output (residual) with the new output. Recent work attempts to stabilize transformers by weighting the residual connection for each layer (Zhu et al., 2021; Liu et al., 2020; Touvron et al., 2021). We thus experiment with scaling the residual in each embedding dimension via learned scalar coefficients (λ_resid)_i:

ResScale(x) = λ_resid ◦ x + Sublayer(LayerNorm(x)),

where ◦ is element-wise multiplication and λ_resid are learned parameters initialized to 1. While this can be applied at any normalization layer, we find it most effective for the feed-forward network (FFN) submodule in the smaller language models. In this setting,

NormFFN(x) = λ_resid ◦ x + LN(σ(LN(x) W_1 + b_1)) W_2 + b_2.

For 1.3B-parameter models and larger, scaling residuals hurts performance (see discussion in Section 6.1), so ResScale is not used in our 1.3B and 2.7B CLM results.
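The HeadScale, extra-LN and ResScale equations above can be sketched as plain functions. This is a schematic single-vector version: the "heads" are stand-in slices rather than real attention outputs, the FFN weights are tiny fixed matrices, the W^O projection and biases are omitted, and all names are hypothetical. The learnable scalars γ_i and λ_resid start at 1, as in the text.

```python
import math

def layer_norm(x, eps=1e-5):
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

def gelu(v):
    # tanh approximation of the GELU activation
    return 0.5 * v * (1.0 + math.tanh(0.7978845608 * (v + 0.044715 * v ** 3)))

def head_scale(heads, gammas):
    # HeadScaleMHA: multiply each head's output by its learned scalar gamma_i,
    # then concatenate (the W^O projection is omitted in this sketch).
    return [v * g for h, g in zip(heads, gammas) for v in h]

def matvec(W, x):
    # Row-vector convention: returns x W, i.e. (xW)_j = sum_i x_i W[i][j].
    return [sum(w * a for w, a in zip(col, x)) for col in zip(*W)]

def norm_ffn(x, W1, W2, lam):
    # NormFFN(x) = lambda_resid ∘ x + LN(sigma(LN(x) W1)) W2   (biases omitted)
    h = layer_norm([gelu(v) for v in matvec(W1, layer_norm(x))])
    out = matvec(W2, h)
    return [l * a + b for l, a, b in zip(lam, x, out)]

# gamma_i = 1 leaves heads unchanged; gamma_i < 1 damps a head's contribution.
assert head_scale([[1.0, 2.0], [3.0, 4.0]], [1.0, 0.5]) == [1.0, 2.0, 1.5, 2.0]

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
y = norm_ffn([1.0, 2.0, 3.0, 4.0], I4, I4, [1.0] * 4)
assert len(y) == 4
```

Because the scales sit directly on the head outputs and on the residual stream, their gradients give each layer a one-parameter handle on its output magnitude, which is the mechanism the paper credits for evening out gradient norms across depth.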
Additionally, we experiment with initializing λ_resid = 1e−5, following Touvron et al. (2021), as well as replacing addition in residual connections with concatenation (Davis et al., 2021) in Section 9.1.

4 EXPERIMENTS.

Causal Language Models. We pretrain causal LMs (CLM) that roughly match the "Small" (125M parameters), "Medium" (355M), "Large" (1.3B) and "XL" (2.7B) sizes from Brown et al. (2020). Our model architecture differs from Brown et al. (2020) in two ways: (1) we use only dense attention, while they alternate between dense and locally banded sparse attention; (2) we train our models with sinusoidal positional embeddings, following Shortformer (Press et al., 2020b), since early experiments found this to produce comparable results with fewer learned parameters. We train the baseline models for 300 billion tokens. We train NormFormer models for an equivalent number of GPU hours, which typically results in 2-6% fewer steps and tokens due to the additional overhead of the normalization operations. On our dataset, we find that the learning rates proposed in GPT-3 are suboptimally low.² For both the baseline and NormFormer at each size besides 2.7B, we tune the learning rate by training models for 50,000 steps and selecting the best-performing learning rate among {1e−4, 3e−4, 6e−4, 1e−3, 3e−3}. The learning rates we obtained from this process, shown in Table 1, are 3-5 times larger than those used in the GPT-3 paper. Additionally, we have verified that both the baseline and NormFormer perform worse at the full training budget with the GPT-3 learning rates than with the higher learning rates. Other hyperparameters do not differ from GPT-3.³

Large scale experiments. We also train three large-scale models with 2.7B parameters.
Our first baseline is a replicated version of GPT-3-2.7B with GELU activations, the published learning rate (1.6e−4) and the same number of training steps and tokens (286K steps; 300B tokens). This model slightly exceeds the reference zero-shot performance (Brown et al., 2020). Next, we train two variants of GPT-3-2.7B with squared-ReLU activations (So et al., 2021), but use slightly fewer training steps (20% fewer) for compute efficiency. The first of these uses the baseline learning rate (1.6e−4), and the second is NormFormer-2.7B with a higher learning rate of 6e−4. We note that training baseline 2.7B CLMs (i.e., without NormFormer modifications) with the higher 6e−4 learning rate diverged and failed to train. However, as opposed to the smaller architectures, we did not exhaustively tune the learning rate, so it is possible that an intermediate value would perform better.

Zero-Shot Evaluation. In addition to validation perplexity, we evaluate CLMs on a subset of the tasks that GPT-3 evaluated on in a zero-shot setting (Brown et al., 2020), with the same prompts. We select WinoGrande (Sakaguchi et al., 2020), StoryCloze (Mostafazadeh et al., 2016), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019) and PIQA (Bisk et al., 2020) because GPT-3 showed strong performance on these tasks at small scale, as well as consistently improving performance with scale.

² The difference in optimal learning rates may be due partly to architectural differences between our baseline and GPT-3 (e.g., not using locally banded sparse attention). ³ See Table 2.1 in Brown et al. (2020).

Masked Language Models (MLM). We adopt the RoBERTa-base Pre-LN architecture and hyperparameters used in Liu et al. (2019). For the baseline, we pretrain for 2 million batches of 1 million tokens, about 1/4 of the training budget of the original roberta-base. NormFormer runs through 1.92 million batches in the same amount of time.
Fine-Tuning. We fine-tune both the baseline MLM and NormFormer with learning rates 1e−5, 1e−4, 3e−4, 1e−3, 3e−3 and 6e−3, and report the best performance on the validation set for each GLUE task (Wang et al., 2019), following Liu et al. (2019). Other fine-tuning hyperparameters match those used for roberta-base in Liu et al. (2019).

Pretraining data. We pretrain all models on a collection of English-language text including the English portion of the CC100 corpus (Conneau et al., 2020) as well as the data from Liu et al. (2019), consisting of BookCorpus (Zhu et al., 2019), English Wikipedia and filtered subsets of Common Crawl. We encode our data with the byte-level Byte Pair Encoding (BPE) vocabulary from Liu et al. (2019), originally introduced in Radford et al. (2019). The combined dataset contains around 450GB of uncompressed text and 110B BPE tokens. We hold out 40M BPE tokens from this data as a validation set, on which we report pretraining perplexities.

Implementation details. We train our causal and masked language models in fairseq (Ott et al., 2019; Paszke et al., 2019). Although NormFormer introduces fewer than 0.07% additional parameters, it slows individual training updates and increases memory usage by between 2% (2.7B model) and 6% (125M model) due to the FFN LNs. Accordingly, we compare NormFormer to baseline models trained for an equal amount of GPU time, i.e., controlling for compute rather than the number of training updates. Finally, we note that the HeadScale operation can be moved outside the self-attention module to allow the use of the very efficient PyTorch F.multi_head_attention_forward. This change reduces overhead without noticeable performance degradation. | This paper aims to improve pretraining of Pre-LayerNorm transformers by alleviating two issues: early layers have much larger gradients than later ones, and naive residual learning can't provide optimal weighting.
To this end, it proposes to add two LayerNorms, after the multi-head attention and after the GELU non-linear activation in the FFN, respectively. It also adds learnable scaling coefficients for the FFN residual and the attention head outputs. The four modifications are applied to both causal and masked language modeling, with improvements observed in downstream tasks. | SP:45ba88126844e65868d6284c7175a9893ccaf67e
NormFormer: Improved Transformer Pretraining with Extra Normalization | 1 INTRODUCTION. The original transformer architecture (Vaswani et al., 2017) applies Layer Normalization (Ba et al., 2016) after each sublayer's residual connection ("Post-LN") in order to reduce the variance of the inputs to the following sublayer, i.e.:

PostLN(x) = LayerNorm(x + Sublayer(x)), with
LayerNorm(x) = (x − E[x]) / √(Var[x] + ε) · γ + β,

where γ and β are trainable parameters and ε is a small constant. Recent work has shown empirically and theoretically that Post-LN transformers tend to have larger-magnitude gradients in later layers compared to earlier layers (Xiong et al., 2020) and has advocated moving the LayerNorm operation to the beginning of each sublayer ("Pre-LN"; see Figure 1, left), i.e.:

PreLN(x) = x + Sublayer(LayerNorm(x)).

In practice, Pre-LN transformers can be trained with larger learning rates and shorter learning rate warmup, and often yield improved performance compared to Post-LN transformers (Xiong et al., 2020), so most recent large pretrained language models tend to use Pre-LN transformers (Baevski & Auli, 2019; Radford et al., 2019; Raffel et al., 2020; Brown et al., 2020; Lieber et al., 2021). In this work we show that, while Pre-LN improves stability over Post-LN, it has the opposite side effect: gradients at earlier layers tend to be larger than gradients at later layers, thereby limiting the learning rate.¹ We propose NormFormer, which alleviates the gradient magnitude mismatch by adding 3 normalization operations to each layer (see Figure 1, middle). These operations reduce gradients to early layers and increase gradients to later layers, bringing their magnitudes closer together.

¹ Intuitively, training stably requires that the largest weight update not be too large, while training efficiently requires large weight updates.
Compared to compute-matched, well-tuned Pre-LN baselines, NormFormer models reach target pretraining perplexities faster and achieve better pretraining perplexities and downstream task performance. The rest of this paper is organized as follows: Section 2 describes the proposed modifications. Section 3 describes related work. Section 5 shows pretraining and downstream task performance for fully trained NormFormer models against well-tuned, compute-matched baselines. Section 6 shows the gradient mismatch introduced by Pre-LN and how NormFormer alleviates it. Section 6.1 analyzes residual scaling, a related technique proposed to stabilize Post-LN architectures (Xiong et al., 2020; Zhu et al., 2021). Section 7 shows that removing any of the added operations degrades performance and that NormFormer improves over the baseline at a wide range of hyperparameter configurations. Section 9.1 compares NormFormer to related work from other domains.

2 APPROACH.

2.1 NORMFORMER. NormFormer includes three modifications to the Pre-LN transformer: we apply head-wise scaling inside the attention module and add two additional LayerNorm operations, one after the attention module and a second after the first fully connected layer. The modifications introduce a small number of additional learnable parameters, which provide a cost-effective way for each layer to change the magnitude of its features, and therefore the magnitude of the gradients to subsequent components. The changes are visualized in Figure 1 and described below.

Scaling Attention Heads. The standard multi-head attention operation is defined as:

MultiHeadAttention(Q, K, V) = Concat(h_1, ..., h_n) W^O
h_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
Attention(Q, K, V) = softmax(Q K^T / √d_k) V,

where n is the number of heads, i is the attention head index, d_k is the dimensionality of the keys, and W^O, W_i^Q, W_i^K, W_i^V are learned projection matrices for the output, query, key and value, respectively. We propose scaling the output of each attention head via learned scalar coefficients γ_i:

HeadScaleMHA(Q, K, V) = Concat(γ_1 h_1, ..., γ_n h_n) W^O,

where the γ_i are learnable parameters initialized to 1.

Additional Layer Normalization and Putting it All Together. In the Pre-LN transformer, each layer l modifies an input x_l as follows:

x_{l+1}^PreLN = FFN(MHA(x_l)), where
MHA(x) = x + MultiHeadAttention(LN(x), LN(x), LN(x))
FFN(x) = x + σ(LN(x) W_1 + b_1) W_2 + b_2
LN(x) = LayerNorm(x).

In this work σ is the GELU non-linear activation introduced in Hendrycks & Gimpel (2016). Our overall method, NormFormer, instead modifies each input x_l as:

x_{l+1}^NormFormer = NormFFN(NormScaledMHA(x_l)), where
NormScaledMHA(x) = x + LN(HeadScaleMHA(LN(x), LN(x), LN(x)))
NormFFN(x) = x + LN(σ(LN(x) W_1 + b_1)) W_2 + b_2,

where the newly introduced operations are HeadScaleMHA and the two additional LN applications.

3 RELATED WORK.

Architectural Modifications. GradInit (Zhu et al., 2021) introduces a set of scalars and biases for initialization based on a variance heuristic, and Admin (Liu et al., 2020) applies a similar heuristic in profiling and initialization stages. These works also use variants of our ResScale operation, which we find helpful at small scale and harmful at large scale. Our approach, in contrast, only has new learnable parameters without variance heuristics, and has no extra stages or changes in initialization. Shazeer (2020) proposes FFN-GeGLU, which includes scaling but no normalization, in the same position as our FFN LN. Ding et al.
(2021) propose related stabilization strategies for text-to-image generation tasks with larger models, including a down-scaled embedding gradient, a slightly different LN formulation, LN after the final fully connected layer, and the same post-attention LN. Section 9.1 compares NormFormer to these proposals, as well as the T5 LayerNorm variant (Raffel et al., 2020), which removes the bias and the mean subtraction from the normalization. Our HeadScale operation is related to that used in Chen et al. (2021), but used differently: whereas that work prunes attention heads with low γ parameters, we use the γ parameters to improve pretraining performance. Press et al. (2020a) propose an architecture where, instead of interleaving attention and feed-forward sublayers, the attention all happens first. This increases the number of late FFN parameters, rather than increasing their importance and gradient norm as our FFN LN does, and does not impact stability.

Residual Scaling Standard Post-LN transformers simply sum the previous output (residual) with the new output. Recent work attempts to stabilize transformers by weighting the residual connection for each layer (Zhu et al., 2021; Liu et al., 2020; Touvron et al., 2021). We thus experiment with scaling the residual in each embedding dimension via learned scalar coefficients (λ_resid)_i:

ResScale(x) = λ_resid ◦ x + Sublayer(LayerNorm(x)),

where ◦ is elementwise multiplication and λ_resid are learned parameters initialized to 1. While this can be applied at any normalization layer, we find it most effective for normalizing the feedforward network (FFN) submodule for the smaller sized language models. In this setting,

NormFFN(x) = λ_resid ◦ x + LN(σ(LN(x) W_1 + b_1)) W_2 + b_2

For 1.3B parameter models and larger, scaling residuals hurts performance (see discussion in Section 6.1), so ResScale is not used in our 1.3B and 2.7B CLM results.
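As a concrete illustration, the NormFFN sublayer above, including the optional ResScale residual weighting λ_resid, can be sketched in dependency-free Python. Plain lists stand in for tensors; the sizes and weights are invented for illustration and this is a sketch of the equations, not the paper's fairseq implementation:

```python
import math

def layer_norm(x, eps=1e-5):
    # LayerNorm without learned affine parameters, for simplicity.
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return [(v - m) / math.sqrt(var + eps) for v in x]

def gelu(v):
    # tanh approximation of the GELU activation (Hendrycks & Gimpel, 2016)
    return 0.5 * v * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (v + 0.044715 * v ** 3)))

def norm_ffn(x, w1, b1, w2, b2, lam=None):
    """NormFFN(x) = lam ◦ x + LN(gelu(LN(x) W1 + b1)) W2 + b2.

    w1/w2 are lists of per-output-neuron weight vectors; lam defaults to all
    ones, which recovers the plain (un-ResScaled) NormFFN."""
    if lam is None:
        lam = [1.0] * len(x)
    h = layer_norm(x)                                    # Pre-LN
    h = [gelu(sum(hi * w for hi, w in zip(h, row)) + b)  # W1, b1, nonlinearity
         for row, b in zip(w1, b1)]
    h = layer_norm(h)                                    # NormFormer's added FFN LN
    out = [sum(hi * w for hi, w in zip(h, row)) + b      # W2, b2
           for row, b in zip(w2, b2)]
    return [l * xi + oi for l, xi, oi in zip(lam, x, out)]
```

With lam left at its default of all ones this is the plain NormFFN; the standard Pre-LN FFN is recovered by also dropping the inner layer_norm call.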
Additionally, we experiment with initializing λ_resid = 1e−5, following Touvron et al. (2021), as well as replacing addition in residual connections with concatenation (Davis et al., 2021) in Section 9.1.

4 EXPERIMENTS

Causal Language Models We pretrain causal LMs (CLM) that roughly match the "Small" (125M parameter), "Medium" (355M), "Large" (1.3B) and "XL" (2.7B) sizes from Brown et al. (2020). Our model architecture differs from Brown et al. (2020) in two ways: (1) we use only dense attention, while they alternate between dense and locally banded sparse attention; (2) we train our models with sinusoidal positional embeddings, following Shortformer (Press et al., 2020b), since early experiments found this to produce comparable results with fewer learned parameters. We train the baseline models for 300 billion tokens. We train NormFormer models for an equivalent number of GPU hours, which typically results in 2-6% fewer steps and tokens due to the additional overhead of the normalization operations. On our dataset, we find that the learning rates proposed in GPT-3 are suboptimally low. [2] For both baseline and NormFormer at each size besides 2.7B, we tune the learning rate by training models for 50,000 steps and selecting the best performing learning rate among {1e−4, 3e−4, 6e−4, 1e−3, 3e−3}. The learning rates we obtained from this process, shown in Table 1, are 3-5 times larger than those used in the GPT-3 paper. Additionally, we have verified that both the baseline and NormFormer perform worse at the full training budget with the GPT-3 learning rates than with the higher learning rates. Other hyperparameters do not differ from GPT-3. [3]

Large scale experiments We also train three large-scale models with 2.7B parameters.
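The learning-rate selection protocol described above (short 50,000-step runs, keep the best) amounts to a one-line grid search. In the sketch below, `short_run_ppl` and `toy_ppl` are invented stand-ins for an actual short training run:

```python
import math

# Candidate grid of learning rates for the short-run sweep.
CANDIDATE_LRS = [1e-4, 3e-4, 6e-4, 1e-3, 3e-3]

def pick_lr(short_run_ppl, candidates=CANDIDATE_LRS):
    """short_run_ppl(lr) -> validation perplexity after the short run;
    return the candidate with the lowest perplexity."""
    return min(candidates, key=short_run_ppl)

def toy_ppl(lr):
    # invented stand-in whose minimum sits near 6e-4
    return abs(math.log10(lr) - math.log10(6e-4))
```

In practice each call to `short_run_ppl` is a full (if truncated) pretraining run, so the grid is kept deliberately small.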
Our first baseline is a replicated version of GPT-3-2.7B with GELU activations, the published learning rate (1.6e-4) and the same number of training steps and tokens (286K steps; 300B tokens). This model slightly exceeds the reference zero-shot performance (Brown et al., 2020). Next, we train two variants of GPT3-2.7B with Relu2 activations (So et al., 2021), but use slightly fewer training steps (20% fewer) for compute efficiency. The first of these uses the baseline learning rate (1.6e-4) and the second uses NormFormer-2.7B with a higher learning rate of 6e-4. We note that training baseline 2.7B CLMs (i.e., without NormFormer modifications) with the higher 6e-4 learning rate diverged and failed to train. However, as opposed to the smaller architectures, we did not exhaustively tune the learning rate, so it is possible that an intermediate value would perform better.

Zero Shot Evaluation In addition to validation perplexity, we evaluate CLMs on a subset of the tasks that GPT3 evaluated on in a zero-shot setting (Brown et al., 2020), with the same prompts. We select WinoGrande (Sakaguchi et al., 2020), StoryCloze (Mostafazadeh et al., 2016), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019) and PIQA (Bisk et al., 2020) because GPT3 showed strong performance on these tasks at small scale, as well as consistently improving performance with scale.

[2] The difference in optimal learning rates may be due partly to architectural differences between our baseline and GPT-3 (e.g., not using locally banded sparse attention).
[3] See Table 2.1 in Brown et al. (2020).

Masked Language Models (MLM) We adopt the RoBERTa-base, Pre-LN architecture and hyperparameters used in Liu et al. (2019). For the baseline, we pretrain for 2 million batches of 1 million tokens, about 1/4 of the training budget of the original roberta-base. NormFormer runs through 1.92 million batches in the same amount of time.
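The compute-matched budgets above (2-6% fewer CLM tokens against a 300B-token baseline; 1.92M vs 2M MLM batches) follow from a simple relation between per-update overhead and the work completed in equal wall time. A small helper, with illustrative numbers:

```python
def compute_matched_tokens(baseline_tokens, slowdown):
    """Tokens (or batches) a slower model can process in the same GPU time.

    slowdown: fractional per-update overhead, e.g. 0.04 for a 4% slower step.
    """
    return baseline_tokens / (1.0 + slowdown)

# e.g. a 4% slower model matched against a 300B-token baseline
budget = compute_matched_tokens(300e9, 0.04)
```

The MLM numbers are consistent with this: 2M batches at a roughly 4.2% slowdown (2 / 1.92 − 1) yields 1.92M batches in the same time.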
Fine-Tuning We fine-tune both the baseline MLM and NormFormer with learning rates 1e−5, 1e−4, 3e−4, 1e−3, 3e−3, 6e−3 and report the best performance on the validation set for each GLUE task (Wang et al., 2019), following Liu et al. (2019). Other fine-tuning hyperparameters match those used for roberta-base in Liu et al. (2019).

Pretraining data We pretrain all models on a collection of English language text including the English portion of the CC100 corpus (Conneau et al., 2020) as well as the data from Liu et al. (2019), consisting of BookCorpus (Zhu et al., 2019), English Wikipedia and filtered subsets of Common Crawl. We encode our data with the byte-level Byte Pair Encoding (BPE) vocabulary from Liu et al. (2019), originally introduced in Radford et al. (2019). The combined dataset contains around 450GB of uncompressed text and 110B BPE tokens. We hold out 40M BPE tokens from this data as a validation set on which we report pretraining perplexities.

Implementation details We train our causal and masked language models in fairseq (Ott et al., 2019; Paszke et al., 2019). Although NormFormer introduces fewer than 0.07% additional parameters, it slows individual training updates and increases memory usage by between 2% (2.7B model) and 6% (125M model) due to the FFN LNs. Accordingly, we compare NormFormer to baseline models trained for an equal amount of GPU time, i.e., controlling for compute rather than the number of training updates. Finally, we note that the HeadScale operation can be moved outside the self-attention module to allow the use of the very efficient PyTorch F.multi_head_attention_forward. This change reduces overhead without noticeable performance degradation.
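The HeadScale refactor mentioned above is exact because scaling each head's output before concatenation equals scaling the matching slice of the concatenated attention output, and both happen before the shared output projection W^O. A toy check with invented sizes (assuming all heads share the same dimensionality):

```python
def scale_inside(heads, gammas):
    # Scale each head's output by its gamma, then concatenate
    # (HeadScale applied inside the attention module).
    out = []
    for h, g in zip(heads, gammas):
        out.extend(g * v for v in h)
    return out

def scale_outside(heads, gammas):
    # Concatenate first, then scale each head-sized slice of the result
    # (HeadScale moved outside the attention module).
    concat = [v for h in heads for v in h]
    head_dim = len(heads[0])
    expanded = [gammas[i // head_dim] for i in range(len(concat))]
    return [g * v for g, v in zip(expanded, concat)]
```

Because W^O is linear and applied after either variant, the two orderings produce identical attention outputs.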
| NormFormer improves on Pre-LN transformers by making the following modifications: learnable scaling parameters for each dimension of the output of each attention head prior to concatenation across heads (*Scaled Attention*); layer norm on the attention output (*Post Attn LN*); layer norm on the FFN nonlinearity output (*FFN LN*); and learnable scaling parameters for each dimension of the skip connection around the FFN (*Scaled Residuals*). They apply NormFormer to GPT3- and RoBERTa-style model configurations, and find that NormFormer models reach baseline iso-accuracy in 22%-43% less time, and achieve notably lower (higher) iso-time perplexity (accuracy). They then conduct a number of analyses to attempt to understand why NormFormer works. | SP:45ba88126844e65868d6284c7175a9893ccaf67e |
Local Calibration: Metrics and Recalibration | Probabilistic classifiers output confidence scores along with their predictions , and these confidence scores should be calibrated , i.e. , they should reflect the reliability of the prediction . Confidence scores that minimize standard metrics such as the expected calibration error ( ECE ) accurately measure the reliability on average across the entire population . However , it is in general impossible to measure the reliability of an individual prediction . In this work , we propose the local calibration error ( LCE ) to span the gap between average and individual reliability . For each individual prediction , the LCE measures the average reliability of a set of similar predictions , where similarity is quantified by a kernel function on a pretrained feature space and by a binning scheme over predicted model confidences . We show theoretically that the LCE can be estimated sample-efficiently from data , and empirically find that it reveals miscalibration modes that are more fine-grained than the ECE can detect . Our key result is a novel local recalibration method LoRe , to improve confidence scores for individual predictions and decrease the LCE . Experimentally , we show that our recalibration method produces more accurate confidence scores , which improves downstream fairness and decision making on classification tasks with both image and tabular data . 1 INTRODUCTION . Uncertainty estimation is extremely important in high stakes decision-making tasks . For example , a patient wants to know the probability that a medical diagnosis is correct ; an autonomous driving system wants to know the probability that a pedestrian is correctly identified . Uncertainty estimates are usually achieved by predicting a probability along with each classification . Ideally , we want to achieve individual calibration , i.e. , we want to predict the probability that each sample is misclassified . 
However , each sample is observed only once for most datasets ( e.g. , image classification datasets do not contain identical images ) , making it impossible to estimate , or even define , the probability of incorrect classification for individual samples . Because of this , commonly used metrics such as the expected calibration error ( ECE ) measure the gap between a classifier ’ s confidence and accuracy averaged across the entire dataset . Consequently , ECE can be accurately estimated but does not measure the reliability of individual predictions . In this work , we propose the local calibration error ( LCE ) , a calibration metric that spans the gap between fully global ( e.g. , ECE ) and fully individual calibration . Motivated by the success of kernel-based locality in other fields such as fairness ( where similar individuals should be treated similarly ) ( Dwork et al. , 2012 ; Pleiss et al. , 2017 ) and causal inference ( where matching techniques are used to find similar neighboring samples ) ( Stuart , 2010 ) , we approximate the probability of misclassification for an individual sample by computing the average classification error over similar samples , where similarity is measured by a kernel function in a pre-trained feature space and a binning scheme over predicted confidences . Intuitively , two samples are similar if they are close in a pretrained feature space and have similar predicted confidence scores . By choosing the bandwidth of the kernel function , we can trade off estimation accuracy and individuality : when the bandwidth is very large , we recover existing global calibration metrics ; when the bandwidth is small , we approximate individual calibration . We choose an intermediate bandwidth , so our metric can be accurately estimated , and provides some measurement on the reliability of individual predictions . Theoretically , we show that the LCE can be estimated with polynomially many samples if the kernel function is bounded . 
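The bandwidth trade-off described above can be illustrated with a toy kernel-weighted estimate of the local confidence/accuracy gap. This is a sketch of the idea only, not the paper's LCE estimator, and every name and number in it is invented for illustration:

```python
import math

def gaussian_kernel(z1, z2, bandwidth):
    # Similarity between two feature vectors under an RBF kernel.
    d2 = sum((a - b) ** 2 for a, b in zip(z1, z2))
    return math.exp(-d2 / (2.0 * bandwidth ** 2))

def local_gap(query_z, feats, confs, correct, bandwidth):
    """Kernel-weighted |confidence - accuracy| around the point query_z.

    feats: feature vectors of held-out samples; confs: predicted confidences;
    correct: 1 if the prediction was right, else 0.
    """
    w = [gaussian_kernel(query_z, z, bandwidth) for z in feats]
    total = sum(w)
    conf = sum(wi * c for wi, c in zip(w, confs)) / total
    acc = sum(wi * y for wi, y in zip(w, correct)) / total
    return abs(conf - acc)
```

With a very large bandwidth every sample receives equal weight and the estimate reduces to a global confidence/accuracy gap; with a tiny bandwidth it is dominated by the nearest neighborhood.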
Empirically , we also show that for intermediate values of the bandwidth , the LCE can be accurately estimated and reveals modes of miscalibration that global metrics ( such as ECE ) fail to uncover . In addition , we introduce a non-parametric , post-hoc localized recalibration method ( LoRe ) , for lowering the LCE . Empirically , LoRe improves fairness by achieving low calibration error on all potentially sensitive subsets of the data , such as racial groups . Notably , it can do so without any prior knowledge of those groups , and is more effective than global methods at this task . In addition , our recalibration method improves decision making when there is a “ safe ” action that is selected whenever the predicted confidence is low . For example , an automated system which classifies tissue samples as cancerous should request a human expert opinion whenever it is unsure about a classification . In a simulation on an image classification dataset , we show that recalibrated prediction models more accurately choose whether to use the “ safe ” action , which improves the overall utility . In summary , the contributions of our paper are as follows . ( 1 ) We introduce a local calibration metric , the LCE , that is both easy to compute and can estimate the reliability of individual predictions . ( 2 ) We introduce a post-hoc localized recalibration method LoRe , that transforms a model ’ s confidence predictions to improve the local calibration . ( 3 ) We empirically evaluate LoRe on several downstream tasks and observe that LoRe improves fairness and decision-making more than existing baselines . 2 BACKGROUND AND RELATED WORK . 2.1 GLOBAL CALIBRATION METRICS . Consider a classification task that maps from some input domain ( e.g. , images ) X to a finite set of labels Y = { 1 , · · · , m } . A classifier is a pair ( f , p̂ ) where f : X → Y maps each input x ∈ X to a label y ∈ Y and p̂ : X → [ 0 , 1 ] maps each input x to a confidence value c. 
Let Pr be a joint distribution on X × Y (e.g., from which training or test data pairs (x, y) are drawn). The classifier (f, p̂) is perfectly calibrated (Guo et al., 2017) with respect to Pr if for all c ∈ [0, 1]:

Pr[f(X) = Y | p̂(X) = c] = c.   (1)

To numerically measure how well a classifier is calibrated, the most commonly used metric is the expected calibration error (ECE) (Naeini et al., 2015; Guo et al., 2017), which measures the average absolute deviation from Eq. 1 over the domain. In practice, given a finite dataset, the ECE is approximated by binning. The predicted confidences p̂ are partitioned into bins B_1, ..., B_k, and then a weighted average is taken of the absolute difference between the average confidence conf(B_i) and average accuracy acc(B_i) for each bin B_i:

ECE(f, p̂) := Σ_{i=1}^{k} (|B_i| / N) |conf(B_i) − acc(B_i)|.   (2)

Similarly, the maximum calibration error (MCE) (Naeini et al., 2015; Guo et al., 2017) measures the average deviation from Eq. 1 in the bin with the highest calibration error, and is defined as:

MCE(f, p̂) := max_i |conf(B_i) − acc(B_i)|.   (3)

2.2 EXISTING GLOBAL RECALIBRATION METHODS

Many existing methods apply a post-hoc adjustment that changes a model's confidence predictions to improve global calibration, including Platt scaling (Platt, 1999), temperature scaling (Guo et al., 2017), isotonic regression (Zadrozny & Elkan, 2002), and histogram binning (Zadrozny & Elkan, 2001). These methods all learn a simple transformation from the original confidence predictions to new confidence predictions, and aim to decrease the expected calibration error (ECE).
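The binned ECE estimator of Eq. 2 can be sketched in a few lines. This is a minimal illustration with equal-width bins, not the authors' code:

```python
def ece(confs, correct, n_bins=10):
    """Binned expected calibration error (Eq. 2).

    confs: predicted confidences in [0, 1]; correct: 1 if the prediction
    was right, else 0.
    """
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confs, correct):
        i = min(int(c * n_bins), n_bins - 1)  # put c == 1.0 in the last bin
        bins[i].append((c, y))
    n = len(confs)
    total = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        total += len(b) / n * abs(avg_conf - acc)  # |B_i|/N weighted gap
    return total
```

The MCE of Eq. 3 is obtained from the same loop by taking the maximum gap over non-empty bins instead of the weighted sum.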
Platt scaling fits a logistic regression model; temperature scaling learns a single temperature parameter to rescale confidence scores for all samples simultaneously; isotonic regression learns a piece-wise constant monotonic function; histogram binning partitions confidence scores into bins {[0, ε), [ε, 2ε), · · · , [1 − ε, 1]} and sorts each validation sample into a bin based on its confidence p̂(x); it then resets the confidence level of all samples in the bin to match the classification accuracy of that bin.

2.3 LOCAL CALIBRATION

Two notions of calibration that address some of the deficits of global calibration are class-wise calibration and group-wise calibration. Class-wise calibration groups samples by their true class label (Kull et al., 2019; Nixon et al., 2019) and measures the average class ECE, while group-wise calibration uses pre-specified groupings (e.g., race or gender) (Kleinberg et al., 2016; Pleiss et al., 2017) and measures the average group-wise ECE or maximum group-wise MCE. A few recalibration methods have been proposed for these notions of calibration as well. Dirichlet calibration (Kull et al., 2019) achieves calibration for groups defined by class labels, but does not generalize well to settings with many classes (Zhao et al., 2021). Multicalibration (Hébert-Johnson et al., 2017) achieves calibration for any group that can be represented by a polynomial sized circuit, but lacks a tractable algorithm. If the groups are known a priori, one can also apply global calibration methods within each group; however, this is impractical in many situations where the groups are not known for new examples at inference time. At an even more local level, Zhao et al. (2020) look at individual calibration in the regression setting and conclude that individual calibration is impossible to verify with a deterministic forecaster, and thus there is no general method to achieve individual calibration.
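As a concrete illustration of the global recalibration methods in Section 2.2, histogram binning can be sketched as follows. This is a minimal version; the fallback for empty bins is an invented detail for illustration:

```python
def fit_histogram_binning(val_confs, val_correct, n_bins=10):
    """Fit bin-wise accuracies on a validation set (bin width 1 / n_bins)."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(val_confs, val_correct):
        bins[min(int(c * n_bins), n_bins - 1)].append(y)
    # fall back to the bin midpoint if a bin received no validation samples
    return [sum(b) / len(b) if b else (i + 0.5) / n_bins
            for i, b in enumerate(bins)]

def recalibrate(conf, bin_acc):
    # Reset a new prediction's confidence to the accuracy of its bin.
    n_bins = len(bin_acc)
    return bin_acc[min(int(conf * n_bins), n_bins - 1)]
```

Because the learned map depends only on the confidence score, this is a purely global adjustment: two samples with equal confidence are recalibrated identically regardless of their features.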
2.4 KERNEL-BASED CALIBRATION METRICS . Kumar et al . ( 2018 ) introduce the maximum mean calibration error ( MMCE ) , a kernel-based quantity that replaces the hard binning of the standard ECE estimator with a kernel similarity k ( p̂ ( x ) , p̂ ( x′ ) ) between the confidence of two examples . They further propose to optimize the MMCE directly in order to achieve better model calibration globally . Widmann et al . ( 2019 ) extend their work and propose the more general kernel calibration error . Zhang et al . ( 2020 ) and Gupta et al . ( 2020 ) also consider kernel-based calibration . However , these methods only consider the similarity between model confidences p̂ ( x ) , p̂ ( x′ ) , rather than the inputs x , x′ themselves . 3 THE LOCAL CALIBRATION ERROR . Recall that commonly used metrics for calibration , such as the ECE or the MCE , are global in nature and thus only measure an aggregate reliability over the entire dataset , making them insufficient for many applications . An ideal calibration metric would instead measure calibration at an individual level ; however , doing so is impossible without making assumptions about the ground truth distribution ( Zhao et al. , 2020 ) . A localized calibration metric represents an adjustable balance between these two extremes . Ideally , such a metric should measure calibration at a local level ( where the extent of the local neighborhood can be chosen by the user ) and group similar data points together . In this section , we introduce the local calibration error ( LCE ) , a kernel-based metric that allows us to measure the calibration locally around a prediction . Our metric leverages learned features to automatically group similar samples into a soft neighborhood , and allows the neighborhood size to be set with a hyperparameter γ . We also consider only points with a similar model confidence as the prediction , so that similarity is defined in terms of distance both in the feature space and in model confidence . 
Thus , the LCE effectively creates soft groupings that depend on the feature space ; with a semantically meaningful feature space , these groupings correspond to useful subsets of the data . We then mention a few design choices and visualize LCE maps over a 2D feature space to show that we can use our metric to diagnose regions of local miscalibration . | This work proposes a new metric for calibration in classification where calibration is measured over localities in the input space, and the localities are determined with a kernel over the feature space. A recalibration algorithm (LoRe) is additionally proposed, which aims to recalibrate the class predictions w.r.t. local calibration error (LCE). The experiments show that 1) LoRe achieves better local calibration (measured by MLCE, i.e. maximum LCE) than baseline recalibration methods on ImageNet, 2) LoRe achieves better group-wise calibration (measured by worst group-wise maximum calibration error) across 3 datasets with identifiable groups, and 3) achieves competitive performance in a simple decision making task where confident incorrect predictions incur high costs. | SP:4c8f0cf7f6196f586ec83d16a768742d13a16cea |
Local Calibration: Metrics and Recalibration | Probabilistic classifiers output confidence scores along with their predictions , and these confidence scores should be calibrated , i.e. , they should reflect the reliability of the prediction . Confidence scores that minimize standard metrics such as the expected calibration error ( ECE ) accurately measure the reliability on average across the entire population . However , it is in general impossible to measure the reliability of an individual prediction . In this work , we propose the local calibration error ( LCE ) to span the gap between average and individual reliability . For each individual prediction , the LCE measures the average reliability of a set of similar predictions , where similarity is quantified by a kernel function on a pretrained feature space and by a binning scheme over predicted model confidences . We show theoretically that the LCE can be estimated sample-efficiently from data , and empirically find that it reveals miscalibration modes that are more fine-grained than the ECE can detect . Our key result is a novel local recalibration method LoRe , to improve confidence scores for individual predictions and decrease the LCE . Experimentally , we show that our recalibration method produces more accurate confidence scores , which improves downstream fairness and decision making on classification tasks with both image and tabular data . 1 INTRODUCTION . Uncertainty estimation is extremely important in high stakes decision-making tasks . For example , a patient wants to know the probability that a medical diagnosis is correct ; an autonomous driving system wants to know the probability that a pedestrian is correctly identified . Uncertainty estimates are usually achieved by predicting a probability along with each classification . Ideally , we want to achieve individual calibration , i.e. , we want to predict the probability that each sample is misclassified . 
However , each sample is observed only once for most datasets ( e.g. , image classification datasets do not contain identical images ) , making it impossible to estimate , or even define , the probability of incorrect classification for individual samples . Because of this , commonly used metrics such as the expected calibration error ( ECE ) measure the gap between a classifier ’ s confidence and accuracy averaged across the entire dataset . Consequently , ECE can be accurately estimated but does not measure the reliability of individual predictions . In this work , we propose the local calibration error ( LCE ) , a calibration metric that spans the gap between fully global ( e.g. , ECE ) and fully individual calibration . Motivated by the success of kernel-based locality in other fields such as fairness ( where similar individuals should be treated similarly ) ( Dwork et al. , 2012 ; Pleiss et al. , 2017 ) and causal inference ( where matching techniques are used to find similar neighboring samples ) ( Stuart , 2010 ) , we approximate the probability of misclassification for an individual sample by computing the average classification error over similar samples , where similarity is measured by a kernel function in a pre-trained feature space and a binning scheme over predicted confidences . Intuitively , two samples are similar if they are close in a pretrained feature space and have similar predicted confidence scores . By choosing the bandwidth of the kernel function , we can trade off estimation accuracy and individuality : when the bandwidth is very large , we recover existing global calibration metrics ; when the bandwidth is small , we approximate individual calibration . We choose an intermediate bandwidth , so our metric can be accurately estimated , and provides some measurement on the reliability of individual predictions . Theoretically , we show that the LCE can be estimated with polynomially many samples if the kernel function is bounded . 
Empirically , we also show that for intermediate values of the bandwidth , the LCE can be accurately estimated and reveals modes of miscalibration that global metrics ( such as ECE ) fail to uncover . In addition , we introduce a non-parametric , post-hoc localized recalibration method ( LoRe ) , for lowering the LCE . Empirically , LoRe improves fairness by achieving low calibration error on all potentially sensitive subsets of the data , such as racial groups . Notably , it can do so without any prior knowledge of those groups , and is more effective than global methods at this task . In addition , our recalibration method improves decision making when there is a “ safe ” action that is selected whenever the predicted confidence is low . For example , an automated system which classifies tissue samples as cancerous should request a human expert opinion whenever it is unsure about a classification . In a simulation on an image classification dataset , we show that recalibrated prediction models more accurately choose whether to use the “ safe ” action , which improves the overall utility . In summary , the contributions of our paper are as follows . ( 1 ) We introduce a local calibration metric , the LCE , that is both easy to compute and can estimate the reliability of individual predictions . ( 2 ) We introduce a post-hoc localized recalibration method LoRe , that transforms a model ’ s confidence predictions to improve the local calibration . ( 3 ) We empirically evaluate LoRe on several downstream tasks and observe that LoRe improves fairness and decision-making more than existing baselines . 2 BACKGROUND AND RELATED WORK . 2.1 GLOBAL CALIBRATION METRICS . Consider a classification task that maps from some input domain ( e.g. , images ) X to a finite set of labels Y = { 1 , · · · , m } . A classifier is a pair ( f , p̂ ) where f : X → Y maps each input x ∈ X to a label y ∈ Y and p̂ : X → [ 0 , 1 ] maps each input x to a confidence value c. 
Let Pr be a joint distribution on X × Y ( e.g. , from which training or test data pairs ( x , y ) are drawn ) . The classifier ( f , p̂ ) is perfectly calibrated ( Guo et al. , 2017 ) with respect to Pr if for all c ∈ [ 0 , 1 ] Pr [ f ( X ) = Y | p̂ ( X ) = c ] = c. ( 1 ) To numerically measure how well a classifier is calibrated , the most commonly used metric is the expected calibration error ( ECE ) ( Naeini et al. , 2015 ; Guo et al. , 2017 ) , which measures the average absolute deviation from Eq . 1 over the domain . In practice , given a finite dataset , the ECE is approximated by binning . The predicted confidences p̂ are partitioned into bins B1 , . . . , Bk , and then a weighted average is taken of the absolute difference between the average confidence conf ( Bi ) and average accuracy acc ( Bi ) for each bin Bi : ECE ( f , p̂ ) : = k∑ i=1 |Bi| N |conf ( Bi ) − acc ( Bi ) | . ( 2 ) Similarly , the maximum calibration error ( MCE ) ( Naeini et al. , 2015 ; Guo et al. , 2017 ) measures the average deviation from Eq . 1 in the bin with the highest calibration error , and is defined as MCE ( f , p̂ ) : = max i |conf ( Bi ) − acc ( Bi ) | . ( 3 ) 2.2 EXISTING GLOBAL RECALIBRATION METHODS . Many existing methods apply a post-hoc adjustment that changes a model ’ s confidence predictions to improve global calibration , including Platt scaling ( Platt , 1999 ) , temperature scaling ( Guo et al. , 2017 ) , isotonic regression ( Zadrozny & Elkan , 2002 ) , and histogram binning ( Zadrozny & Elkan , 2001 ) . These methods all learn a simple transformation from the original confidence predictions to new confidence predictions , and aim to decrease the expected calibration error ( ECE ) . 
Platt scaling fits a logistic regression model ; temperature scaling learns a single temperature parameter to rescale confidence scores for all samples simultaneously ; isotonic regression learns a piece-wise constant monotonic function ; histogram binning partitions confidence scores into bins { [ 0 , ) , [ , 2 ) , · · · , [ 1− , 1 ] } and sorts each validation sample into a bin based on its confidence p̂ ( x ) ; it then resets the confidence level of all samples in the bin to match the classification accuracy of that bin . 2.3 LOCAL CALIBRATION . Two notions of calibration that address some of the deficits of global calibration are class-wise calibration and group-wise calibration . Class-wise calibration groups samples by their true class label ( Kull et al. , 2019 ; Nixon et al. , 2019 ) and measures the average class ECE , while group-wise calibration uses pre-specified groupings ( e.g. , race or gender ) ( Kleinberg et al. , 2016 ; Pleiss et al. , 2017 ) and measures the average group-wise ECE or maximum group-wise MCE . A few recalibration methods have been proposed for these notions of calibration as well . Dirichlet calibration ( Kull et al. , 2019 ) achieves calibration for groups defined by class labels , but does not generalize well to settings with many classes ( Zhao et al. , 2021 ) . Multicalibration ( Hébert-Johnson et al. , 2017 ) achieves calibration for any group that can be represented by a polynomial sized circuit , but lacks a tractable algorithm . If the groups are known a priori , one can also apply global calibration methods within each group ; however , this is impractical in many situations where the groups are not known for new examples at inference time . At an even more local level , Zhao et al . ( 2020 ) look at individual calibration in the regression setting and conclude that individual calibration is impossible to verify with a deterministic forecaster , and thus there is no general method to achieve individual calibration . 
2.4 KERNEL-BASED CALIBRATION METRICS . Kumar et al . ( 2018 ) introduce the maximum mean calibration error ( MMCE ) , a kernel-based quantity that replaces the hard binning of the standard ECE estimator with a kernel similarity k ( p̂ ( x ) , p̂ ( x′ ) ) between the confidence of two examples . They further propose to optimize the MMCE directly in order to achieve better model calibration globally . Widmann et al . ( 2019 ) extend their work and propose the more general kernel calibration error . Zhang et al . ( 2020 ) and Gupta et al . ( 2020 ) also consider kernel-based calibration . However , these methods only consider the similarity between model confidences p̂ ( x ) , p̂ ( x′ ) , rather than the inputs x , x′ themselves . 3 THE LOCAL CALIBRATION ERROR . Recall that commonly used metrics for calibration , such as the ECE or the MCE , are global in nature and thus only measure an aggregate reliability over the entire dataset , making them insufficient for many applications . An ideal calibration metric would instead measure calibration at an individual level ; however , doing so is impossible without making assumptions about the ground truth distribution ( Zhao et al. , 2020 ) . A localized calibration metric represents an adjustable balance between these two extremes . Ideally , such a metric should measure calibration at a local level ( where the extent of the local neighborhood can be chosen by the user ) and group similar data points together . In this section , we introduce the local calibration error ( LCE ) , a kernel-based metric that allows us to measure the calibration locally around a prediction . Our metric leverages learned features to automatically group similar samples into a soft neighborhood , and allows the neighborhood size to be set with a hyperparameter γ . We also consider only points with a similar model confidence as the prediction , so that similarity is defined in terms of distance both in the feature space and in model confidence . 
Thus , the LCE effectively creates soft groupings that depend on the feature space ; with a semantically meaningful feature space , these groupings correspond to useful subsets of the data . We then mention a few design choices and visualize LCE maps over a 2D feature space to show that we can use our metric to diagnose regions of local miscalibration . | This paper proposes a new measure of calibration called Local Calibration. While conventional calibration measures are only defined with probabilistic outputs, the proposed local calibration measure further incorporates the feature space by considering the neighbouring region with a kernel. The authors also propose a calibration method according to the definition and experimentally demonstrate the advantages. | SP:4c8f0cf7f6196f586ec83d16a768742d13a16cea |
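As an illustrative sketch of such a kernel-weighted, feature-space-local calibration measure — this is our own simplification for exposition, not the paper's exact LCE definition; the RBF kernel form and the confidence-kernel bandwidth are assumptions — one can weight nearby samples by similarity in both feature space (bandwidth γ) and confidence, and compare weighted confidence to weighted accuracy:

```python
import numpy as np

def local_calibration_sketch(query_feat, query_conf, feats, confs, correct, gamma=1.0):
    """Kernel-weighted calibration error in the soft neighborhood of one prediction.

    Similarity is measured both in feature space (RBF with bandwidth gamma,
    which controls the neighborhood size) and in model confidence.
    """
    w_feat = np.exp(-np.sum((feats - query_feat) ** 2, axis=1) / (2.0 * gamma ** 2))
    w_conf = np.exp(-(confs - query_conf) ** 2 / 0.02)  # assumed confidence bandwidth
    w = w_feat * w_conf
    w = w / w.sum()
    # gap between locally weighted average confidence and locally weighted accuracy
    return abs(np.dot(w, confs) - np.dot(w, correct))

# toy usage: the cluster near the query is overconfident (0.9 conf, 0.5 accuracy)
feats = np.array([[0.0, 0.0], [0.0, 0.0], [10.0, 10.0], [10.0, 10.0]])
confs = np.array([0.9, 0.9, 0.9, 0.9])
correct = np.array([0.0, 1.0, 1.0, 1.0])
lce = local_calibration_sketch(np.array([0.0, 0.0]), 0.9, feats, confs, correct, gamma=1.0)
```

With a global metric the four samples above would partially cancel; the soft neighborhood isolates the miscalibrated cluster around the query.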
X-model: Improving Data Efficiency in Deep Learning with A Minimax Model | 1 INTRODUCTION . In the last decade , deep learning has become the de facto choice for numerous machine learning applications in the presence of large-scale labeled datasets . However , collecting sufficient labeled data through manual labeling , especially for deep regression tasks such as keypoint localization and age estimation , is prohibitively time-consuming and labor-intensive in real-world scenarios . To mitigate the requirement of labeled data , great effort ( Lee , 2013 ; Laine & Aila , 2017 ; Grandvalet & Bengio , 2005 ; Sohn et al. , 2020 ; Chen et al. , 2020b ) has been made to improve data efficiency in deep learning by simultaneously exploring both labeled and unlabeled data based on intuitions for classification , e.g . cluster assumptions , pseudo labeling strategies , or consistency regularization under different data augmentations . However , most of the existing methods for improving data efficiency focus on the classification setup , while little attention has been paid to the other side of the coin , i.e . deep regression , which usually requires more human effort or expert knowledge to label . Moreover , due to the intrinsic difference between categorical and continuous label spaces , the methods based on the cluster or low-density separation assumptions ( Lee , 2013 ; Grandvalet & Bengio , 2005 ) for data-efficient classification tasks cannot be directly adapted to deep regression . Meanwhile , existing data-efficient regression methods adopt k-nearest neighbor ( kNN ) ( Zhou & Li , 2005b ; Yu-Feng Li , 2017 ) , decision tree ( Levati et al. , 2017 ) or Gaussian Process ( Srijith et al. , 2013 ) regressors on a fixed and shallow feature space , making them difficult to extend to deep learning problems .
To develop a general data-efficient deep learning method for both classification and regression setups , we first delved into the existing methods and found that they can be briefly grouped into two categories : 1 ) Encourage invariance to data stochasticity with consistency regularization to make the predictions of the model invariant to local input perturbations , such as Π-model ( Laine & Aila , 2017 ) , UDA ( Xie et al. , 2020 ) and FixMatch ( Sohn et al. , 2020 ) ; 2 ) Encourage invariance to model stochasticity with a difference penalty on the predictions of models generated from different dropout ( Laine & Aila , 2017 ) or initialization ( Zhou & Li , 2005b ) , as well as models exponentially averaged from history models , such as Mean Teacher ( Tarvainen & Valpola , 2017 ) . Given the success of the above two consistency-regularization strategies , which encourage invariance under data and model transformations respectively , it is natural to conclude that invariance to stochasticity matters for improving data efficiency in deep learning . To combine the best of both worlds , we propose a novel χ-model by simultaneously encouraging the invariance to data stochasticity and model stochasticity . First , instead of the weak augmentations ( e.g. , flip and crop ) adopted in Π-model , we utilize the strong augmentations ( e.g. , cutout and contrast ) adopted in FixMatch ( Sohn et al. , 2020 ) to enhance invariance to data stochasticity , as shown in Figure 1 with some example images tailored from Jung et al . ( 2020 ) . A natural question arises : Can we further enhance the invariance to model stochasticity similar to that of data stochasticity ? This paper gives a positive answer by introducing a minimax game between the feature extractor and task-specific heads . Compared to the manually designed strategy of adding a dropout layer , this novel approach directly optimizes a minimax loss function in the hypothesis space .
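As a toy illustration of such a minimax objective — a linear instantiation we constructed for exposition, not the paper's architecture or exact loss, with norm-clipped heads so the inner maximization stays bounded — the heads take gradient ascent steps on their prediction disagreement over unlabeled data, while the shared extractor takes descent steps on the same quantity:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))        # a batch of unlabeled inputs
W = rng.normal(size=(8, 4)) * 0.1   # shared (linear) feature extractor
h1 = rng.normal(size=4)             # task-specific head 1
h2 = rng.normal(size=4)             # task-specific head 2
lr = 0.05

def disagreement(W, h1, h2):
    """Mean squared disagreement between the two heads' predictions."""
    z = X @ W
    return np.mean((z @ (h1 - h2)) ** 2)

for _ in range(200):
    z = X @ W
    diff = z @ (h1 - h2)
    # max step: heads ascend on the disagreement (norm-clipped to stay bounded)
    g_h = 2.0 * z.T @ diff / len(X)
    h1 += lr * g_h
    h2 -= lr * g_h
    h1 /= max(1.0, np.linalg.norm(h1))
    h2 /= max(1.0, np.linalg.norm(h2))
    # min step: extractor descends on the same objective (alternating updates)
    g_W = 2.0 * np.outer(X.T @ diff, h1 - h2) / len(X)
    W -= lr * g_W
```

The alternation mirrors the game described in the text: maximizing inconsistency diversifies the heads, while the extractor learns features on which the diversified heads still agree.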
By maximizing the inconsistency between task-specific heads , more diverse learners are generated to further enhance the invariance to model stochasticity and thus fully explore the intrinsic structure of unlabeled data . In short , our contributions can be summarized as follows : • We propose the χ-model that jointly encourages the invariance to data stochasticity and model stochasticity to improve data efficiency for both classification and regression setups . • We make the χ-model play a minimax game between the feature extractor and task-specific heads to further enhance invariance to model stochasticity . • Extensive experiments verify the superiority of the χ-model across various tasks , from an age estimation task to a dense-value prediction task of keypoint localization , a 2D synthetic and a 3D realistic dataset , as well as a multi-category object recognition task . 2 RELATED WORK . 2.1 DATA-EFFICIENT CLASSIFICATION . In the absence of abundant labeled data , it is reasonable to further explore the additional unlabeled data . A popular approach among these algorithms is Pseudo Labeling ( Lee , 2013 ) , which leverages the model itself to generate labels for unlabeled data and uses the generated labels for training . Besides Pseudo Labeling , there is another family of algorithms under the umbrella of “ self-training ” , which has received much attention both empirically ( Sohn et al. , 2020 ) and theoretically ( Wei et al. , 2021 ) . They either enforce stability of predictions under different data augmentations ( Tarvainen & Valpola , 2017 ; Xie et al. , 2020 ; Sohn et al. , 2020 ) ( a.k.a . input consistency regularization ) or fit the unlabeled data on its predictions generated by a previously learned model ( Lee , 2013 ; Chen et al. , 2020b ) . Further , Co-Training ( Blum & Mitchell , 1998b ) , Deep Co-Training ( Qiao et al. , 2018 ) and Tri-Training ( Zhou & Li , 2005a ) improve data efficiency from the interesting perspective of different views of classifiers .
MixMatch ( Berthelot et al. , 2019 ) , ReMixMatch ( Berthelot et al. , 2020 ) and UDA ( Xie et al. , 2020 ) reveal the crucial role of noise produced by advanced data augmentation methods . FixMatch ( Sohn et al. , 2020 ) uses predictions from weakly-augmented images to supervise the output of strongly augmented data . Meta Pseudo Labels ( Pham et al. , 2021 ) further improves data efficiency by making the teacher constantly adapt to the feedback of the student ’ s performance on the labeled dataset . SimCLRv2 ( Chen et al. , 2020b ) first fine-tunes the pre-trained model on the labeled data and then distills on the unlabeled data . Self-Tuning ( Wang et al. , 2021 ) introduces a pseudo group contrast ( PGC ) mechanism but is limited to the classification setup . Besides involving unlabeled data from the same distribution , another promising direction is to introduce a related but different domain ( Long et al. , 2015 ; Ganin & Lempitsky , 2015 ; Long et al. , 2017 ; Saito et al. , 2018b ; Lee et al. , 2019 ; Zhang et al. , 2019 ; Saito et al. , 2018a ; 2019 ) , offering a complementary perspective for improving data efficiency . Moreover , various recent methods ( van den Oord et al. , 2018 ; He et al. , 2020 ; Wu et al. , 2018 ; Hadsell et al. , 2006 ; Tian et al. , 2019 ; Chen et al. , 2020a ) improve data efficiency by self-supervised learning . However , most existing data-efficient methods focus on the classification setup , while little attention has been paid to deep regression . 2.2 DATA-EFFICIENT REGRESSION . Data-efficient regression methods mainly fall into three categories : the Co-Training ( Blum & Mitchell , 1998a ) paradigm , the kernel regression paradigm , and the graph Laplacian regularization paradigm .
COREG ( Zhou & Li , 2005b ) , as well as its follow-up work ( Yu-Feng Li , 2017 ) , is a classic data-efficient regression algorithm with two regressors ( kNNs ) learned with different distance metrics . It employs the Co-Training paradigm by predicting unlabeled data and using the most promising predictions to train the other regressor . Transductive Regression ( Cortes & Mohri , 2007 ) exploits local linear regression for unlabeled data , takes those with relatively close neighbors , and trains a kernel regressor to produce the final prediction . Graph Laplacian regularization is a commonly used manifold regularization technique ( Belkin et al. , 2006 ) . It is reasonable to assume that data points with close input values should have similar output values , thereby regularizing the model output with respect to the unlabeled data . Levati et al . ( 2017 ) and Srijith et al . ( 2013 ) adopt decision tree and Gaussian Process as regressors respectively . However , these existing data-efficient regression methods are mainly designed for shallow regression , which requires closed-form solutions or convex solvers , making them unsuitable for deep regression problems . The comparison of various methods for improving data efficiency in deep learning is shown in Table 1 . 3 PRELIMINARIES . Denote a labeled dataset L = { ( x_i^L , y_i^L ) } _ { i=1 } ^ { n_L } with n_L samples ( x_i^L , y_i^L ) , and an unlabeled dataset U = { x_i^U } _ { i=1 } ^ { n_U } with n_U unlabeled samples . Usually , the size n_L of L is much smaller than the size n_U of U , and we define the label ratio as n_L / ( n_L + n_U ) . Denote θ the feature generator network , and φ the successive task-specific head network . We aim at improving data efficiency in deep learning by fully exploring the labeled and unlabeled data from the perspective of stochasticity . 3.1 INVARIANCE TO DATA STOCHASTICITY .
Data-efficient methods with the insight of invariance to data stochasticity aim at making the predictions of the model invariant to local input perturbations by a consistency regularization term . As shown in Figure 2 ( a ) , Π-model ( Laine & Aila , 2017 ; Sajjadi et al. , 2016 ) first generates two examples with different stochastic data augmentations and then introduces a loss term to minimize the distance between their predictions . With the strategy of invariance to data stochasticity , Π-model is believed to augment the model with information about the intrinsic structure ( “ manifold ” ) of U , avoiding overfitting to the labeled data L. Realizing that training a model on its own predictions ( e.g. , Π-model ) often provides no meaningful information , Data Distillation ( Radosavovic et al. , 2017 ) extends the dual copies of Π-model to multiple transformations as shown in Figure 2 ( c ) . It first generates an ensembled prediction for each unlabeled input from the predictions of a single model run on different transformations ( e.g. , flipping and scaling ) , and then uses it to guide the training of this input . In summary , the training loss of invariance to data stochasticity can be formalized as min_{θ , φ} L_data ( x , U ) = E_{x_i ∈ U} [ ℓ ( ( φ ◦ θ ) ( aug_1 ( x_i ) ) , ( φ ◦ θ ) ( aug_2 ( x_i ) ) ) ] , ( 1 ) where ℓ ( · , · ) is a proper loss function for the target task . For clarity , we focus on a particular data example x_i here and the superscript U is omitted . Recently , instead of the weak augmentations ( e.g. , flip and crop ) adopted in Π-model , many strong augmentations ( e.g. , cutout and contrast ) are adopted in FixMatch ( Sohn et al. , 2020 ) to further enhance invariance to data stochasticity . | The paper focuses on reducing data labeling efforts by improving data efficiency. In contrast to most existing approaches that address this problem only in the classification setup, the paper focuses on both classification and regression setups.
The proposed method is primarily built on leveraging invariance to data stochasticity and model stochasticity. Experiments are conducted on various tasks like age estimation, key point localization, and object recognition. | SP:fe04f7ffacf4dfa43448503ac2fa7a5f7f14ab3d |
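The consistency objective of Eq. (1) in the text above can be sketched directly. Here the linear-softmax model, the jitter augmentation, and the squared-error choice of ℓ are illustrative stand-ins we chose, not the paper's components (Π-model uses a mean-squared loss on the outputs):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(model, x_unlabeled, augment, rng):
    """L_data of Eq. (1): distance between predictions under two random augmentations."""
    p1 = model(augment(x_unlabeled, rng))
    p2 = model(augment(x_unlabeled, rng))
    return np.mean(np.sum((p1 - p2) ** 2, axis=1))  # ell = squared error

# hypothetical stand-ins for (phi o theta) and aug(.)
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))
model = lambda x: softmax(x @ W)
weak_aug = lambda x, r: x + 0.1 * r.normal(size=x.shape)  # "jitter" as a weak augmentation
x_u = rng.normal(size=(32, 8))
loss = consistency_loss(model, x_u, weak_aug, rng)
```

No labels appear anywhere in this term, which is why it can be computed on the unlabeled set U alone; replacing `weak_aug` with a strong augmentation (e.g. cutout) gives the FixMatch-style variant discussed in the text.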
X-model: Improving Data Efficiency in Deep Learning with A Minimax Model | 1 INTRODUCTION . In the last decade , deep learning has become the de facto choice for numerous machine learning applications in the presence of large-scale labeled datasets . However , collecting sufficient labeled data through manual labeling , especially for deep regression tasks such as keypoint localization and age estimation , is prohibitively time-consuming and labor-intensive in real-world scenarios . To mitigate the requirement of labeled data , great effort ( Lee , 2013 ; Laine & Aila , 2017 ; Grandvalet & Bengio , 2005 ; Sohn et al. , 2020 ; Chen et al. , 2020b ) has been made to improve data efficiency in deep learning by simultaneously exploring both labeled and unlabeled data based on intuitions for classification , e.g . cluster assumptions , pseudo labeling strategies , or consistency regularization under different data augmentations . However , most of the existing methods for improving data efficiency focus on the classification setup , while little attention has been paid to the other side of the coin , i.e . deep regression , which usually requires more human effort or expert knowledge to label . Moreover , due to the intrinsic difference between categorical and continuous label spaces , the methods based on the cluster or low-density separation assumptions ( Lee , 2013 ; Grandvalet & Bengio , 2005 ) for data-efficient classification tasks cannot be directly adapted to deep regression . Meanwhile , existing data-efficient regression methods adopt k-nearest neighbor ( kNN ) ( Zhou & Li , 2005b ; Yu-Feng Li , 2017 ) , decision tree ( Levati et al. , 2017 ) or Gaussian Process ( Srijith et al. , 2013 ) regressors on a fixed and shallow feature space , making them difficult to extend to deep learning problems .
To develop a general data-efficient deep learning method for both classification and regression setups , we first delved into the existing methods and found that they can be briefly grouped into two categories : 1 ) Encourage invariance to data stochasticity with consistency regularization to make the predictions of the model invariant to local input perturbations , such as Π-model ( Laine & Aila , 2017 ) , UDA ( Xie et al. , 2020 ) and FixMatch ( Sohn et al. , 2020 ) ; 2 ) Encourage invariance to model stochasticity with a difference penalty on the predictions of models generated from different dropout ( Laine & Aila , 2017 ) or initialization ( Zhou & Li , 2005b ) , as well as models exponentially averaged from history models , such as Mean Teacher ( Tarvainen & Valpola , 2017 ) . Given the success of the above two consistency-regularization strategies , which encourage invariance under data and model transformations respectively , it is natural to conclude that invariance to stochasticity matters for improving data efficiency in deep learning . To combine the best of both worlds , we propose a novel χ-model by simultaneously encouraging the invariance to data stochasticity and model stochasticity . First , instead of the weak augmentations ( e.g. , flip and crop ) adopted in Π-model , we utilize the strong augmentations ( e.g. , cutout and contrast ) adopted in FixMatch ( Sohn et al. , 2020 ) to enhance invariance to data stochasticity , as shown in Figure 1 with some example images tailored from Jung et al . ( 2020 ) . A natural question arises : Can we further enhance the invariance to model stochasticity similar to that of data stochasticity ? This paper gives a positive answer by introducing a minimax game between the feature extractor and task-specific heads . Compared to the manually designed strategy of adding a dropout layer , this novel approach directly optimizes a minimax loss function in the hypothesis space .
By maximizing the inconsistency between task-specific heads , more diverse learners are generated to further enhance the invariance to model stochasticity and thus fully explore the intrinsic structure of unlabeled data . In short , our contributions can be summarized as follows : • We propose the χ-model that jointly encourages the invariance to data stochasticity and model stochasticity to improve data efficiency for both classification and regression setups . • We make the χ-model play a minimax game between the feature extractor and task-specific heads to further enhance invariance to model stochasticity . • Extensive experiments verify the superiority of the χ-model across various tasks , from an age estimation task to a dense-value prediction task of keypoint localization , a 2D synthetic and a 3D realistic dataset , as well as a multi-category object recognition task . 2 RELATED WORK . 2.1 DATA-EFFICIENT CLASSIFICATION . In the absence of abundant labeled data , it is reasonable to further explore the additional unlabeled data . A popular approach among these algorithms is Pseudo Labeling ( Lee , 2013 ) , which leverages the model itself to generate labels for unlabeled data and uses the generated labels for training . Besides Pseudo Labeling , there is another family of algorithms under the umbrella of “ self-training ” , which has received much attention both empirically ( Sohn et al. , 2020 ) and theoretically ( Wei et al. , 2021 ) . They either enforce stability of predictions under different data augmentations ( Tarvainen & Valpola , 2017 ; Xie et al. , 2020 ; Sohn et al. , 2020 ) ( a.k.a . input consistency regularization ) or fit the unlabeled data on its predictions generated by a previously learned model ( Lee , 2013 ; Chen et al. , 2020b ) . Further , Co-Training ( Blum & Mitchell , 1998b ) , Deep Co-Training ( Qiao et al. , 2018 ) and Tri-Training ( Zhou & Li , 2005a ) improve data efficiency from the interesting perspective of different views of classifiers .
MixMatch ( Berthelot et al. , 2019 ) , ReMixMatch ( Berthelot et al. , 2020 ) and UDA ( Xie et al. , 2020 ) reveal the crucial role of noise produced by advanced data augmentation methods . FixMatch ( Sohn et al. , 2020 ) uses predictions from weakly-augmented images to supervise the output of strongly augmented data . Meta Pseudo Labels ( Pham et al. , 2021 ) further improves data efficiency by making the teacher constantly adapt to the feedback of the student ’ s performance on the labeled dataset . SimCLRv2 ( Chen et al. , 2020b ) first fine-tunes the pre-trained model on the labeled data and then distills on the unlabeled data . Self-Tuning ( Wang et al. , 2021 ) introduces a pseudo group contrast ( PGC ) mechanism but is limited to the classification setup . Besides involving unlabeled data from the same distribution , another promising direction is to introduce a related but different domain ( Long et al. , 2015 ; Ganin & Lempitsky , 2015 ; Long et al. , 2017 ; Saito et al. , 2018b ; Lee et al. , 2019 ; Zhang et al. , 2019 ; Saito et al. , 2018a ; 2019 ) , offering a complementary perspective for improving data efficiency . Moreover , various recent methods ( van den Oord et al. , 2018 ; He et al. , 2020 ; Wu et al. , 2018 ; Hadsell et al. , 2006 ; Tian et al. , 2019 ; Chen et al. , 2020a ) improve data efficiency by self-supervised learning . However , most existing data-efficient methods focus on the classification setup , while little attention has been paid to deep regression . 2.2 DATA-EFFICIENT REGRESSION . Data-efficient regression methods mainly fall into three categories : the Co-Training ( Blum & Mitchell , 1998a ) paradigm , the kernel regression paradigm , and the graph Laplacian regularization paradigm .
COREG ( Zhou & Li , 2005b ) , as well as its follow-up work ( Yu-Feng Li , 2017 ) , is a classic data-efficient regression algorithm with two regressors ( kNNs ) learned with different distance metrics . It employs the Co-Training paradigm by predicting unlabeled data and using the most promising predictions to train the other regressor . Transductive Regression ( Cortes & Mohri , 2007 ) exploits local linear regression for unlabeled data , takes those with relatively close neighbors , and trains a kernel regressor to produce the final prediction . Graph Laplacian regularization is a commonly used manifold regularization technique ( Belkin et al. , 2006 ) . It is reasonable to assume that data points with close input values should have similar output values , thereby regularizing the model output with respect to the unlabeled data . Levati et al . ( 2017 ) and Srijith et al . ( 2013 ) adopt decision tree and Gaussian Process as regressors respectively . However , these existing data-efficient regression methods are mainly designed for shallow regression , which requires closed-form solutions or convex solvers , making them unsuitable for deep regression problems . The comparison of various methods for improving data efficiency in deep learning is shown in Table 1 . 3 PRELIMINARIES . Denote a labeled dataset L = { ( x_i^L , y_i^L ) } _ { i=1 } ^ { n_L } with n_L samples ( x_i^L , y_i^L ) , and an unlabeled dataset U = { x_i^U } _ { i=1 } ^ { n_U } with n_U unlabeled samples . Usually , the size n_L of L is much smaller than the size n_U of U , and we define the label ratio as n_L / ( n_L + n_U ) . Denote θ the feature generator network , and φ the successive task-specific head network . We aim at improving data efficiency in deep learning by fully exploring the labeled and unlabeled data from the perspective of stochasticity . 3.1 INVARIANCE TO DATA STOCHASTICITY .
Data-efficient methods with the insight of invariance to data stochasticity aim at making the predictions of the model invariant to local input perturbations by a consistency regularization term . As shown in Figure 2 ( a ) , Π-model ( Laine & Aila , 2017 ; Sajjadi et al. , 2016 ) first generates two examples with different stochastic data augmentations and then introduces a loss term to minimize the distance between their predictions . With the strategy of invariance to data stochasticity , Π-model is believed to augment the model with information about the intrinsic structure ( “ manifold ” ) of U , avoiding overfitting to the labeled data L. Realizing that training a model on its own predictions ( e.g. , Π-model ) often provides no meaningful information , Data Distillation ( Radosavovic et al. , 2017 ) extends the dual copies of Π-model to multiple transformations as shown in Figure 2 ( c ) . It first generates an ensembled prediction for each unlabeled input from the predictions of a single model run on different transformations ( e.g. , flipping and scaling ) , and then uses it to guide the training of this input . In summary , the training loss of invariance to data stochasticity can be formalized as min_{θ , φ} L_data ( x , U ) = E_{x_i ∈ U} [ ℓ ( ( φ ◦ θ ) ( aug_1 ( x_i ) ) , ( φ ◦ θ ) ( aug_2 ( x_i ) ) ) ] , ( 1 ) where ℓ ( · , · ) is a proper loss function for the target task . For clarity , we focus on a particular data example x_i here and the superscript U is omitted . Recently , instead of the weak augmentations ( e.g. , flip and crop ) adopted in Π-model , many strong augmentations ( e.g. , cutout and contrast ) are adopted in FixMatch ( Sohn et al. , 2020 ) to further enhance invariance to data stochasticity . | The paper presents a data-efficient approach that encourages invariance to both data and model stochasticity and works for both classification and regression tasks.
Furthermore, the proposed minimax loss function can specifically enhance invariance to model stochasticity. The extensive experimental results verify that the proposed method is effective. | SP:fe04f7ffacf4dfa43448503ac2fa7a5f7f14ab3d |
A General Theory of Relativity in Reinforcement Learning | 1 INTRODUCTION . Deep reinforcement learning ( RL ) has demonstrated its great successes in recent years , including breakthroughs in solving a number of challenging problems like Atari ( Mnih et al. , 2015 ) , GO ( Silver et al. , 2016 ; 2017 ) , DOTA2 ( Berner et al. , 2019 ) and StarCraft II ( Vinyals et al. , 2019 ) , with human-level performance or even beyond . These successes demonstrate that current deep RL methods are capable of exploring and exploiting sufficiently in huge observation and action spaces , as long as sufficient and effective data samples can be generated for training , as is the case in games . For example , AlphaGo Zero ( Silver et al. , 2017 ) costs 3 days of training over 4.9 million self-play games , and OpenAI Five ( Berner et al. , 2019 ) and AlphaStar ( Vinyals et al. , 2019 ) spend months of training using thousands of GPUs/TPUs over billions of generated matches . However , for environments that prohibit infinite interactions , e.g. , robotics , real-life traffic control and autopilot , etc. , applying general RL is difficult because generating data is extremely expensive and slow . Even if parallel data collection is possible , for example , by deploying multiple robots or vehicles running simultaneously , the scale of collected data is still far below that in virtual games . Worse still , exploration in these environments is considerably limited for safety reasons , which further reduces the effectiveness of the generated data . Due to the above challenges , significant advances similar to those in virtual games have not yet been witnessed in these applications . Generally , there are three tracks of approaches aiming to alleviate the aforementioned situation and promote the widespread application of RL : improving data efficiency , transfer learning and simulator engineering .
To improve data efficiency , much recent effort has been devoted to investigating offline RL algorithms ( Siegel et al. , 2020 ; Fujimoto et al. , 2019 ; Kumar et al. , 2019 ; Wu et al. , 2019 ; Wang et al. , 2018 ) . Compared to standard on-policy or off-policy RL , offline RL ( also known as batch RL ) aims to effectively use previously collected experiences stored in a given dataset , like supervised learning , without online interactions with the environment . The stored experiences may not be generated by a fixed or known policy , so offline RL algorithms can leverage any previously collected data and learn a provably better policy than the policies that generated the experiences in the dataset . Although offline RL can effectively take advantage of finite data samples , solving a complex real-world task still requires a huge amount of high-quality offline experiences . Another way to increase data efficiency is to adopt model-based RL . Compared to model-free methods , model-based RL ( Kaiser et al. , 2019 ; Janner et al. , 2019 ; Moerland et al. , 2020 ) learns a dynamics model that mimics the transitions in the true environment , and then the policy is free to interact with the learned dynamics instead of the true environment . It has been proved that the true return can be improved by interacting with the learned dynamics model when the model error is bounded ( Janner et al. , 2019 ) . However , learning an accurate dynamics model still requires sufficient transition data from interacting with the true environment , especially for complex dynamics with noisy transitions . Transfer learning in RL ( Zhu et al. , 2020 ) is practically useful to adapt a policy learned in a source environment to solve another task in the target environment . In the context of this paper , we consider the case where the policy is free to explore the source environment , while the amount of collected data in the target environment should be as small as possible .
When the source environment is a simulated one while the target environment takes place in reality , the transfer problem is also known as the simulation-to-reality ( sim2real ) problem . The simplest way to transfer is to train the policy in the source environment and then use the converged parameters as a warm start for a new policy or part of its parameters in the target environment , so that the amount of interactions with the target is expected to be largely reduced , as long as the tasks and dynamics in the two environments are closely related . Training a shared or partially shared policy in both the source and target environments is an alternative method , which also falls within the scope of multi-task reinforcement learning ( Hessel et al. , 2019 ) . Domain adaptation has been demonstrated to be another useful technique ( Ibarz et al. , 2021 ) . Such methods try to bridge the gap between the source and target environments using some adaptation networks . For example , adapter networks were introduced to convert the input in simulation to be close to the real-world observation , by utilizing the generative adversarial model ( James et al. , 2017 ; Shrivastava et al. , 2017 ; Bousmalis et al. , 2017 ; 2018 ; Rao et al. , 2020 ) , or , conversely , an inverse network was trained to convert real-world observation to that in simulation ( James et al. , 2019 ) . Using such adapter networks , the policy only needs to be trained in the source environment , and then it can directly be applied in the target environment . An important concept in transfer learning is that instead of directly deploying RL in the target environment , a source environment is considered as a proxy . Sharing this spirit , the last track of approaches tries to build a proxy simulator that is as close as possible to the target environment , and hence we refer to such methods as simulator engineering .
For example , in robotics control problems , many mature toolboxes offer such simulation , including MuJoCo , PyBullet and Gazebo . Model-based RL can also be viewed as a specific form of simulator engineering in which the simulator is a pure neural network , trained to approach the target environment with as low a model error as possible , although this might require a large amount of dynamics data in the target environment as mentioned above . Actually , to achieve more efficient and accurate simulator engineering , one recently rising direction is to integrate differentiable programming and physical systems to build a trainable simulator , which follows the physical laws of reality and whose key factors , such as the mass , length and friction of some objects , are trainable like the parameters in neural networks . Representative examples include DiffTaichi ( Hu et al. , 2020 ) , Brax ( Freeman et al. , 2021 ) and Nimble ( Werling et al. , 2021 ) . Overall , the existing methods focus on either directly improving the data efficiency in the target environment or bridging/reducing the gap between a proxy environment and the target environment , and a principled theory is lacking that can incorporate the learning in the two environments through a unified framework and explain the intrinsic relationship between the expected returns in the two environments from the perspective of RL . In this paper , we inherit the spirit of transfer learning and consider two environments , where one is free to interact with and the other is the goal to solve , and the number of interactions in the goal environment should be as small as possible . We believe that there exist some explicit connections between the expected returns in the two environments , given two different policies , from the very fundamental perspective of RL .
To verify this , we formally define two Markov Decision Processes ( MDPs ) and then explicitly derive the difference between the expected returns in the two MDPs . In the following , by RL convention , an environment is equivalent to an MDP . Specifically , let P ( s′|s , a ) and P ′ ( s′|s , a ) denote two dynamics transition functions in any two arbitrary MDPs sharing the same state and action spaces , where ( s , a , s′ ) is the tuple of the state , action and next state . Let π′ ( a|s ) and π ( a|s ) denote two arbitrary policies , and denote J ( P , π ) as the cumulative expected return given P and π . Then , we aim to investigate the difference J ( P ′ , π ) − J ( P , π′ ) , which is referred to as the relativity gap between the two MDPs . It turns out that the relativity gap has a very interesting and compact form that integrates the interactions in both environments . Now , suppose P and P ′ are the dynamics functions in the source and target MDPs respectively , and J ( P , π′ ) has been maximized by optimizing π′ . Then , with fixed P , P ′ and π′ , maximizing the relativity gap over π while constraining π to be close to π′ will also improve the return J ( P ′ , π ) in the target MDP ; on the other hand , for trainable P , minimizing the relativity gap by optimizing P given a fixed policy π = π′ will reduce the dynamics gap , similar to what is done by conventional model-based RL methods . Based on the above two principles , our theoretical results suggest two general algorithms referred to as Relative Policy Optimization ( RPO ) and Relative Transition Optimization ( RTO ) , respectively . RPO updates the policy using the relative policy gradient to transfer the policy evaluated in the source environment to maximize the return in the target environment , while RTO updates a dynamics model using the relative transition gradient to reduce the value gap in the two environments .
Then, applying RPO and RTO simultaneously yields a complete algorithm named Relative Policy-Transition Optimization (RPTO), which transfers the policy from the source to the target smoothly. RPO, RTO and RPTO interact with the two environments simultaneously, so that data collection in the two environments and policy and/or transition updates are completed in a closed loop, forming a principled learning framework. In the experimental section, we show how to apply the RPO, RTO and RPTO algorithms in practice. We demonstrate the effectiveness of these methods on classic control problems in OpenAI Gym with both discrete and continuous actions, varying physical variables such as the mass, length and gravity of objects to create policy transfer problems. In the last section, we discuss a few new directions based on the proposed relativity theory that merit future investigation. 2 PRELIMINARIES. 2.1 REINFORCEMENT LEARNING. A standard RL problem can be described by a tuple $\langle E, \mathcal{A}, \mathcal{S}, P, r, \gamma, \pi \rangle$, where $E$ denotes the environment, an MDP with dynamics transition probability $P$; at each time step $t$, $s_t \in \mathcal{S}$ is the global state in the state space $\mathcal{S}$, and $a_t \in \mathcal{A}$ is the action executed by the agent at time step $t$ from the action space $\mathcal{A}$; the dynamics transition function $P(s_{t+1}|s_t, a_t)$ is the probability of the state transition $(s_t, a_t) \rightarrow s_{t+1}$; in the most general case, the reward $r(s_t, a_t, s_{t+1})$ is a function of $s_t$, $a_t$ and $s_{t+1}$, while in many tasks it relies on only one or two of them, or is even a constant in sparse-reward problems. For notational simplicity, we usually write $r(s_t, a_t, s_{t+1})$ as $r_t$; $\gamma \in [0, 1]$ is a discount factor and $\pi(a_t|s_t)$ denotes a stochastic policy. The following equations define some important quantities in reinforcement learning.
The objective of RL is to maximize the expected discounted return $J(P, \pi) = \mathbb{E}_{s_0, a_0, \dots \sim P, \pi}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where $s_0 \sim P(s_0)$, $a_t \sim \pi(a_t|s_t)$, $s_{t+1} \sim P(s_{t+1}|s_t, a_t)$. At time step $t$, the state-action value $Q^{P,\pi}$, value function $V^{P,\pi}$, and advantage $A^{P,\pi}$ are defined as $Q^{P,\pi}(s_t, a_t) = \mathbb{E}_{s_{t+1}, a_{t+1}, \dots \sim P, \pi}\left[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\right]$, $V^{P,\pi}(s_t) = \mathbb{E}_{a_t, s_{t+1}, \dots \sim P, \pi}\left[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\right]$, and $A^{P,\pi}(s, a) = Q^{P,\pi}(s, a) - V^{P,\pi}(s)$. In these standard definitions, we explicitly show the dependence on both the dynamics $P$ and the policy $\pi$, since we will analyze these functions under varying dynamics and policies. This convention is kept throughout the paper. | The paper studies transfer in reinforcement learning (RL), beginning with a theorem that relates the performance of one policy under a particular dynamics to another policy under different dynamics. This is broken down into a “dynamics-induced gap” and a “policy-induced gap”, for which explicit expressions are given. Optimizing a bound on the policy-induced gap w.r.t. the policy leads to an algorithm they call Relative Policy Optimization (RPO), and similarly optimizing a bound on the dynamics-induced gap w.r.t. the dynamics leads to an algorithm they call Relative Transition Optimization (RTO). The two algorithms can be combined into a single algorithm, Relative Policy-Transition Optimization (RPTO), which optimizes both the policy and the dynamics. Experiments indicate that RPTO achieves better transfer performance, both in terms of sample efficiency and asymptotic performance, than RPO and PPO warm-started from an expert policy from the source task. | SP:56a799994baed2b0f32c40c7586cb50c8a43f855 |
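For a small tabular MDP, the quantities defined above can be computed exactly: $V^{P,\pi}$ by iterating the Bellman expectation backup, $Q^{P,\pi}$ by a one-step lookahead on $V$, and $A = Q - V$. The toy dynamics and reward below are invented purely for illustration.

```python
def policy_values(P, pi, reward, gamma=0.9, iters=500):
    """Exact tabular evaluation: returns (V, Q, A) for dynamics P and policy pi."""
    n_states, n_actions = len(P), len(P[0])
    V = [0.0] * n_states
    for _ in range(iters):
        # Bellman expectation backup: V(s) = E_a E_s' [r + gamma * V(s')]
        V = [sum(pi[s][a] * sum(P[s][a][s2] * (reward(s, a, s2) + gamma * V[s2])
                                for s2 in range(n_states))
                 for a in range(n_actions))
             for s in range(n_states)]
    # Q fixes the first action; A is the advantage Q - V.
    Q = [[sum(P[s][a][s2] * (reward(s, a, s2) + gamma * V[s2])
              for s2 in range(n_states))
          for a in range(n_actions)]
         for s in range(n_states)]
    A = [[Q[s][a] - V[s] for a in range(n_actions)] for s in range(n_states)]
    return V, Q, A

# Toy 2-state, 2-action MDP (illustrative values only).
P = [[[0.9, 0.1], [0.2, 0.8]], [[0.5, 0.5], [0.1, 0.9]]]
pi = [[0.5, 0.5], [0.5, 0.5]]
reward = lambda s, a, s2: 1.0 if s2 == 0 else 0.0
V, Q, A = policy_values(P, pi, reward)
```

A useful sanity check on these definitions: the advantage averages to zero under the policy, i.e. $\sum_a \pi(a|s) A^{P,\pi}(s, a) = 0$ for every state.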
A General Theory of Relativity in Reinforcement Learning | 1 INTRODUCTION. Deep reinforcement learning (RL) has demonstrated great success in recent years, including breakthroughs in solving a number of challenging problems such as Atari (Mnih et al., 2015), Go (Silver et al., 2016; 2017), Dota 2 (Berner et al., 2019) and StarCraft II (Vinyals et al., 2019), with human-level performance or beyond. These successes demonstrate that current deep RL methods are capable of exploring and exploiting sufficiently in huge observation and action spaces, as long as sufficient and effective data samples can be generated for training, as is the case in games. For example, AlphaGo Zero (Silver et al., 2017) required 3 days of training over 4.9 million self-play games, and OpenAI Five (Berner et al., 2019) and AlphaStar (Vinyals et al., 2019) spent months of training using thousands of GPUs/TPUs over billions of generated matches. However, for environments that prohibit unlimited interaction, e.g., robotics, real-life traffic control, autonomous driving, etc., applying general RL is difficult because generating data is extremely expensive and slow. Even when parallel data collection is possible, for example by deploying multiple robots or vehicles running simultaneously, the scale of the collected data is still far below that in virtual games. Worse still, exploration in these environments is considerably limited for safety reasons, which further reduces the effectiveness of the generated data. Due to these challenges, advances as significant as those in virtual games have not yet been witnessed in these applications. Generally, there are three tracks of approaches that aim to alleviate this situation and promote the widespread application of RL: improving data efficiency, transfer learning, and simulator engineering.
To improve data efficiency, much recent effort has been devoted to offline RL algorithms (Siegel et al., 2020; Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019; Wang et al., 2018). Compared to standard on-policy or off-policy RL, offline RL (also known as batch RL) aims to make effective use of previously collected experiences stored in a given dataset, as in supervised learning, without online interaction with the environment. The stored experiences need not have been generated by a fixed or known policy, so offline RL algorithms can leverage any previously collected data and learn a provably better policy than the policies that generated the experiences in the dataset. Although offline RL can effectively exploit a finite set of data samples, solving a complex real-world task still requires a huge amount of high-quality offline experience. Another way to increase data efficiency is to adopt model-based RL. Compared to model-free methods, model-based RL (Kaiser et al., 2019; Janner et al., 2019; Moerland et al., 2020) learns a dynamics model that mimics the transitions in the true environment, and the policy is then free to interact with the learned dynamics instead of the true environment. It has been proved that the true return can be improved by interacting with the learned dynamics model when the model error is bounded (Janner et al., 2019). However, learning an accurate dynamics model still requires sufficient transition data from interaction with the true environment, especially for complex dynamics with noisy transitions. Transfer learning in RL (Zhu et al., 2020) is practically useful for adapting a policy learned in a source environment to solve another task in the target environment. In the context of this paper, we consider the case where the policy is free to explore in the source environment, while the amount of data collected in the target environment should be as small as possible.
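In the tabular case, the dynamics model used by model-based RL reduces to empirical transition frequencies estimated from logged $(s, a, s')$ tuples. The following is a minimal sketch with made-up data; it is not a reproduction of any specific method from the cited works.

```python
from collections import defaultdict

def fit_tabular_model(transitions, n_states):
    """Estimate P_hat(s'|s, a) as the empirical frequency of observed tuples."""
    counts = defaultdict(lambda: [0] * n_states)
    for s, a, s2 in transitions:
        counts[(s, a)][s2] += 1
    # Normalize each (s, a) row into a probability distribution over s'.
    return {sa: [c / sum(row) for c in row] for sa, row in counts.items()}

# Made-up logged transitions (s, a, s') from some behavior policy.
data = [(0, 0, 0), (0, 0, 0), (0, 0, 1), (1, 1, 1), (1, 1, 0), (1, 1, 0)]
P_hat = fit_tabular_model(data, n_states=2)
```

In continuous-state problems the counting step is replaced by a function approximator (e.g., a neural network regressing $s'$ from $(s, a)$), but the data requirement discussed above is the same: the model is only as good as the transition data collected in the true environment.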
When the source environment is a simulated one while the target environment takes place in reality , the transfer problem is also known as the simulation to reality ( sim2real ) problem . The simplest way to do transfer is to train the policy in the source environment and then use the converged parameters as warm start for a new policy or part of its parameters in the target environment , so that the amount of interactions with the target is expected to be largely reduced , as long as the tasks and dynamics in the two environments are closely related . Training a shared or partially shared policy in both the source and target environments is an alternative method which also belongs to the multi-task reinforcement learning scope ( Hessel et al. , 2019 ) . Domain adaptation has been demonstrated to be another useful technique ( Ibarz et al. , 2021 ) . Such methods try to bridge the gap between the source and target environments using some adaptation networks . For example , adapter networks were introduced to convert the input in simulation to be close to the real-world observation , by utilizing the generative adversarial model ( James et al. , 2017 ; Shrivastava et al. , 2017 ; Bousmalis et al. , 2017 ; 2018 ; Rao et al. , 2020 ) , or on the contrary an inverse network was trained to convert real-world observation to that in simulation ( James et al. , 2019 ) . Using such adapter networks , the policy only needs to be trained in the source environment , and then it can directly be applied in the target environment . An important concept in transfer learning is that instead of directly deploying RL in the target environment , a source environment is considered as a proxy . Sharing this spirit , the last track of approaches tries to build a proxy simulator that is as close as possible to the target environment , and hence we refer to such methods as simulator engineering . 
For example, in robotics control problems, many mature toolboxes offer simulation engineering, including MuJoCo, PyBullet, Gazebo, etc. Model-based RL can also be viewed as a specific form of simulator engineering in which the simulator is a pure neural network, trained to approximate the target environment with as little model error as possible; however, this may require a large amount of dynamics data from the target environment, as mentioned above. To achieve more efficient and accurate simulator engineering, one recent direction is to integrate differentiable programming with physical systems to build a trainable simulator, which follows the physical laws of reality and whose key factors, such as the mass, length, and friction of objects, are trainable like the parameters of a neural network. Representative examples include DiffTaichi (Hu et al., 2020), Brax (Freeman et al., 2021) and Nimble (Werling et al., 2021). Overall, existing methods focus either on directly improving data efficiency in the target environment or on bridging/reducing the gap between a proxy environment and the target environment; there is no principled theory that incorporates the learning in the two environments into a unified framework and explains the intrinsic relationship between the expected returns in the two environments from the perspective of RL. In this paper, we follow the spirit of transfer learning and consider two environments, where one is free to interact with and the other is the goal to solve, and the number of interactions in the goal environment should be as small as possible. We believe that there exist explicit connections between the expected returns in the two environments, given two different policies, from the most fundamental perspective of RL.
To verify this , we formally define two Markov Decision Processes ( MDPs ) and then explicitly derive the difference between the expected returns in the two MDPs . In the following context , with the RL convention , an environment is equivalent to an MDP . Specifically , let P ( s′|s , a ) and P ′ ( s′|s , a ) denote two dynamics transition functions in any two arbitrary MDPs sharing the same state and action spaces , where ( s , a , s′ ) is the tuple of the state , action and next state . Let π′ ( a|s ) and π ( a|s ) denote two arbitrary policies , and denote J ( P , π ) as the cumulative expected return given P and π . Then , we aim to investigate the difference J ( P ′ , π ) − J ( P , π′ ) , which is referred to as the relativity gap between the two MDPs . It turns out that the relativity gap has a very interesting and compact form that integrates the interactions in both environments . Now , suppose P and P ′ are the dynamics functions in the source and target MDPs respectively , and J ( P , π′ ) has been maximized by optimizing π′ . Then , with fixed P , P ′ and π′ , maximizing the relativity gap over π by constraining π to be close to π′ will also improve the return J ( P ′ , π ) in the target MDP ; on the other hand , for trainable P , minimizing the relativity gap by optimizing P given a fixed policy π = π′ will reduce the dynamics gap , similar to what is done by conventional model-based RL methods . Based on the above two principles , our theoretical results suggest two general algorithms referred to as Relative Policy Optimization ( RPO ) and Relative Transition Optimization ( RTO ) , respectively . RPO updates the policy using the relative policy gradient to transfer the policy evaluated in the source environment to maximize the return in the target environment , while RTO updates a dynamics model using the relative transition gradient to reduce the value gap in the two environments . 
Then, applying RPO and RTO simultaneously yields a complete algorithm named Relative Policy-Transition Optimization (RPTO), which transfers the policy from the source to the target smoothly. RPO, RTO and RPTO interact with the two environments simultaneously, so that data collection in the two environments and policy and/or transition updates are completed in a closed loop, forming a principled learning framework. In the experimental section, we show how to apply the RPO, RTO and RPTO algorithms in practice. We demonstrate the effectiveness of these methods on classic control problems in OpenAI Gym with both discrete and continuous actions, varying physical variables such as the mass, length and gravity of objects to create policy transfer problems. In the last section, we discuss a few new directions based on the proposed relativity theory that merit future investigation. 2 PRELIMINARIES. 2.1 REINFORCEMENT LEARNING. A standard RL problem can be described by a tuple $\langle E, \mathcal{A}, \mathcal{S}, P, r, \gamma, \pi \rangle$, where $E$ denotes the environment, an MDP with dynamics transition probability $P$; at each time step $t$, $s_t \in \mathcal{S}$ is the global state in the state space $\mathcal{S}$, and $a_t \in \mathcal{A}$ is the action executed by the agent at time step $t$ from the action space $\mathcal{A}$; the dynamics transition function $P(s_{t+1}|s_t, a_t)$ is the probability of the state transition $(s_t, a_t) \rightarrow s_{t+1}$; in the most general case, the reward $r(s_t, a_t, s_{t+1})$ is a function of $s_t$, $a_t$ and $s_{t+1}$, while in many tasks it relies on only one or two of them, or is even a constant in sparse-reward problems. For notational simplicity, we usually write $r(s_t, a_t, s_{t+1})$ as $r_t$; $\gamma \in [0, 1]$ is a discount factor and $\pi(a_t|s_t)$ denotes a stochastic policy. The following equations define some important quantities in reinforcement learning.
The objective of RL is to maximize the expected discounted return $J(P, \pi) = \mathbb{E}_{s_0, a_0, \dots \sim P, \pi}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where $s_0 \sim P(s_0)$, $a_t \sim \pi(a_t|s_t)$, $s_{t+1} \sim P(s_{t+1}|s_t, a_t)$. At time step $t$, the state-action value $Q^{P,\pi}$, value function $V^{P,\pi}$, and advantage $A^{P,\pi}$ are defined as $Q^{P,\pi}(s_t, a_t) = \mathbb{E}_{s_{t+1}, a_{t+1}, \dots \sim P, \pi}\left[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\right]$, $V^{P,\pi}(s_t) = \mathbb{E}_{a_t, s_{t+1}, \dots \sim P, \pi}\left[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\right]$, and $A^{P,\pi}(s, a) = Q^{P,\pi}(s, a) - V^{P,\pi}(s)$. In these standard definitions, we explicitly show the dependence on both the dynamics $P$ and the policy $\pi$, since we will analyze these functions under varying dynamics and policies. This convention is kept throughout the paper. | This paper proposed a way to decompose the difference between values of two policies in two MDPs respectively. Such a decomposition results in two parts, the first one is the difference between values of one policy under two MDPs; the second part is the difference between values of two policies under the same MDP. Using this decomposition, the paper then proposed three algorithms. The first algorithm, called RPO, is used when there are two MDPs that the agent can interact with. The agent uses data from both two MDPs to get a good policy for one of the two MDPs. Such an algorithm is expected to be useful when gathering data from one MDP is costly while it from the other MDP is much cheaper. The second algorithm is a model learning algorithm. In this setting, the agent only interacts with one MDP and learns a model to approximate the MDP. The RTO algorithm is different from the classic model learning algorithm (regression) in that the model is learned to achieve some consistency between the predicted values from the model and from the MDP. The third algorithm combines the first two algorithms and is a full model-based algorithm. Specifically, now the agent only interacts with one MDP and learns a model of that MDP using RTO.
Meanwhile, it also maintains and updates a policy that is expected to perform well in the MDP, using data from both the MDP and the model. | SP:56a799994baed2b0f32c40c7586cb50c8a43f855 |
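The closed loop described above — collect data in both environments, apply an RPO-style policy update, apply an RTO-style model update — can be summarized as a control-flow skeleton. Everything below is a structural sketch with dummy stand-ins; the real update rules come from the relative policy/transition gradients in the paper.

```python
def rpto_loop(source_env, target_env, policy, model, n_iters,
              rpo_update, rto_update):
    """Closed-loop RPTO-style skeleton: alternate data collection in both
    environments with policy (RPO-like) and model (RTO-like) updates."""
    history = []
    for _ in range(n_iters):
        src_batch = source_env(policy)   # cheap interactions with the source MDP
        tgt_batch = target_env(policy)   # scarce interactions with the target MDP
        policy = rpo_update(policy, src_batch, tgt_batch)
        model = rto_update(model, src_batch, tgt_batch)
        history.append((policy, model))
    return policy, model, history

# Dummy stand-ins just to exercise the control flow.
policy, model, history = rpto_loop(
    source_env=lambda p: [("source", p)],
    target_env=lambda p: [("target", p)],
    policy=0.0, model=0.0, n_iters=3,
    rpo_update=lambda p, sb, tb: p + 1.0,
    rto_update=lambda m, sb, tb: m + 0.5,
)
```

The point of the skeleton is the coupling: both batches feed both updates, so policy transfer and dynamics correction happen in the same loop rather than in separate phases.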
Flashlight: Enabling Innovation in Tools for Machine Learning | 1 INTRODUCTION . The recent rise of deep learning-based techniques has been accompanied and sustained by the wide availability of dedicated frameworks such as TensorFlow ( Abadi et al. , 2016 ) and PyTorch ( Paszke et al. , 2019 ) . These frameworks have enabled the democratization of machine learning research by providing extensive collections of high level primitives to support common use cases . Lowering the barrier to entry for end users has boosted the popularity of both neural networks and the frameworks in which they are implemented . However , in order to support what are now vast ecosystems and a diverse user base , framework size and complexity have increased dramatically over time . As a result , deep , groundbreaking framework research has become extremely onerous and time consuming , precluding rapid innovation . Given these barriers , major deep learning frameworks have become stuck in their existing operating modes . Innovation in this area remains as important as ever . Indeed , framework innovation accelerates machine learning ( ML ) and artificial intelligence ( AI ) research . Frameworks that are easier to use reduce the engineering burden on researchers , and frameworks that are higher-performance decrease the time required to iterate on experimental work and validate hypotheses . Even more critically , tooling plays a fundamental role in deciding which ideas succeed or fail . For example , LeCun et al . ( 1989 ) pioneered the use of convolutional neural networks ( CNNs ) ( Fukushima & Miyake , 1982 ) trained using backpropagation for computer vision tasks in the late 1980s , which was subsequently applied to handwriting recognition . However , widespread success for CNNs was achieved two decades later when Krizhevsky et al . ( 2012a ) leveraged the CUDA programming model to take advantage of graphics processing units ( GPUs ) to train a much deeper model ( AlexNet ) . 
While deep learning frameworks have been optimized to leverage existing hardware paradigms for common neural network architectures, they often fail to deliver similar efficiencies on designs that diverge from the mainstream. For example, Barham & Isard (2019) explain how the design of these frameworks results in poor hardware utilization for a novel type of neural network, known as a capsule network (Hinton et al., 2018), that leverages new components such as squashing operations and routing by agreement. More generally, what are now unconventional approaches to modern problems in machine learning require highly-specialized additions to popular frameworks. As a result of narrowly-optimized systems, research beyond deep learning may be discounted due to purported computational infeasibility given modern frameworks' capabilities. Furthermore, the waning of Moore's law (Theis & Wong, 2017), coupled with the ever-growing computational demands of deep learning, is prompting several shifts in hardware. Massive-scale distributed computing is now required to train leading models — a process that established frameworks remain unable to handle truly automatically. In parallel, multiple specialized hardware products are now available to better support deep learning applications: Nvidia's TensorCores (Markidis et al., 2018), Google's TPUs (Jouppi et al., 2017), Graphcore's IPUs (Jia et al., 2019), Apple's Neural Engine1, and others have been developed to improve total floating-point operations (FLOPs), cost per FLOP, or energy consumption. Additionally, numerous efforts are underway to move away from conventional von Neumann computing architectures, in which memory and processing units are physically separated, either by storing data closer to compute units or by switching to in-memory computing altogether.
While tooling innovation is alive and well given these incentives for progress , working within large , well-established frameworks has become more and more challenging as framework size and scope grows . As a result , many recent innovations have required the development of ad-hoc tools . For example , efforts in machine learning-driven compilation of neural networks are largely built on top of Halide ( Adams et al. , 2019 ; Steiner et al. , 2021 ) and TVM ( Chen et al. , 2018 ; Zheng et al. , 2020 ) ; FlexFlow ( Jia et al. , 2018 ; 2020 ) underpins recent work aimed at improving the use of distributed computing to accelerate the training of large neural networks ; and PET ( Wang et al. , 2021 ) provides a framework that enables graph-level neural network optimizations . With ad-hoc approaches , researchers are required to start from scratch for new directions or adapt their ideas to fit into the scaffolding these frameworks provide — resulting in significant technical burdens . To sustain framework innovation , we introduce Flashlight , an open source minimalist ML library designed to support research in machine learning frameworks , facilitate rapid iteration on ideas , reduce the engineering burden on researchers , and remove the need for new tools . Flashlight includes : • A modular , component-based architecture that makes every aspect of the implementation fully customizable with simple internal APIs . • A compact yet highly-performant reference implementation of each component . • A comprehensive set of benchmarks representative of the state-of-the-art in machine learning on which to evaluate alternative implementations . 2 RELATED WORK . Numerous frameworks have been implemented in recent years to support machine learning , including Lush ( Bottou & LeCun , 2002 ) , Theano ( Bergstra et al. , 2010 ) , Torch ( Collobert et al. , 2011 ) , Caffe ( Jia et al. , 2014 ) , MXNet ( Chen et al. , 2015 ) , deeplearning4j ( Team , 2016 ) , TensorFlow ( Abadi et al. 
, 2016), Flux (Innes, 2018), Jax (Bradbury et al., 2018), PyTorch (Paszke et al., 2019), Chainer (Tokui et al., 2019), and PaddlePaddle (Ma et al., 2019). These frameworks offer programming models designed around multidimensional arrays (TENSORS), modeled as first-class objects and supported by a comprehensive set of mathematical primitives (or operators) to manipulate them. To provide the computing power required by deep learning-based methods, most natively support hardware accelerators such as general-purpose GPUs or custom-designed ASICs such as TPUs. Generally, framework implementations follow one of two computational models: • In the deferred execution model, the neural network to be trained is first encoded as a dataflow graph which can be optimized for a specific set of target hardware devices. The neural network is then executed in a distinct second phase. Since the dataflow graph represents the entire computation, both local and global optimizations can be applied, making the subsequent execution very efficient. However, only programs that can be represented as dataflow graphs can be processed with this approach, thus limiting flexibility. Frameworks such as Theano, TensorFlow2, Caffe, or MXNet fall into this category. • In the eager model, an interpreter (such as Python) is extended with the high-level kernel-based operations needed to train a neural network. These operations are executed immediately when called, though this precludes many optimizations. By weaving neural network-related operations into a Turing-complete programming language, this approach is extremely flexible. Furthermore, the imperative nature of the underlying programming language allows for fine-grained control over the execution order and memory utilization, which enables more specific user-driven optimization. Frameworks such as Torch, PyTorch, or Chainer exemplify this approach. 1 https://nr.apple.com/dE9q1p9M7t
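The two computational models can be contrasted with a toy example: a deferred-style API first builds a graph that is only evaluated on demand, while the eager style computes each operation immediately. This mini graph class is purely illustrative and unrelated to any real framework's internals.

```python
class Node:
    """A node in a tiny deferred-execution dataflow graph."""
    def __init__(self, op=None, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def __add__(self, other):
        return Node(op=lambda a, b: a + b, inputs=(self, other))

    def __mul__(self, other):
        return Node(op=lambda a, b: a * b, inputs=(self, other))

    def run(self):
        # Execution is a distinct second phase: walk the recorded graph.
        if self.op is None:
            return self.value
        return self.op(*(n.run() for n in self.inputs))

# Deferred: (x + y) * x is just a description until run() is called,
# so the whole graph could in principle be optimized before execution.
x, y = Node(value=3.0), Node(value=4.0)
deferred_result = ((x + y) * x).run()

# Eager: the same computation executes operation by operation, immediately.
eager_result = (3.0 + 4.0) * 3.0
```

Both paths compute the same value; the difference is that the deferred version holds the whole computation as data before running it, which is exactly what enables global optimization — and what restricts it to graph-representable programs.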
3 PRINCIPLES. The aforementioned frameworks are designed and implemented to best serve their user bases — namely, machine learning researchers and practitioners. They rely on large, internally complex codebases to provide comprehensive solutions, as is further discussed in Section 5. In contrast, Flashlight targets an audience of researchers interested in experimenting with new designs and implementations of machine learning tools or broader computational or modeling paradigms. To foster this type of innovation, Flashlight balances simplicity and nimbleness with the need to provide enough functionality to support real use cases. Internal and external simplicity is the key design principle of Flashlight; the ability to dramatically modify software and drive it in new directions is inversely correlated with codebase size and complexity (Gill & Kemerer, 1990). More specifically: • Flashlight is built on a shallow stack of idiomatic, modular, and customizable abstractions. Framework components interact through small, well-defined, stable APIs, which expose most internal aspects of its implementation. This ensures that every component of Flashlight can be modified or replaced with new custom implementations, even e.g. its memory manager and tensor implementation. To support the exploration of a wide array of alternative approaches, Flashlight interfaces are flexible and unopinionated by design. This is in contrast to other frameworks, which impose stricter implementation requirements based on tight design constraints for their computation models and support requirements across hardware, downstream frameworks, and other ecosystem members. • Flashlight provides deliberately-compact default implementations of its APIs. This reduces out-of-the-gate engineering burden and the need for modifications, and enables fast compilation and rapid iteration when experimenting.
Furthermore, to mitigate premature optimization, Flashlight deliberately abstains from adding small efficiency improvements if they conflict with the goals of keeping the codebase simple and APIs clean. • Flashlight is a research-first framework, and is not intended for out-of-the-box production use. To keep codebase size small, it forgoes features such as model servers for deployment and integration with cluster management tools. Flashlight is a viable solution for machine learning research, shipping with a comprehensive set of benchmarks and research setups for state-of-the-art neural network architectures such as convolutional neural networks (CNNs) (Krizhevsky et al., 2012b) and Transformers (Vaswani et al., 2017), as well as task-specific models such as ViT (Dosovitskiy et al., 2020), DETR (Carion et al., 2020), or BERT (Devlin et al., 2018). The speech recognition system wav2letter (Pratap et al., 2019) is also built entirely on Flashlight. Benchmarks built on these state-of-the-art models make Flashlight a turnkey solution for system researchers who want to quickly evaluate their design and implementation choices without needing to build test benches from the ground up. More importantly, Flashlight makes end-to-end benchmarking possible on real models rather than microbenchmarks or small-scale tests. 2 TensorFlow 2.0 adds support for eager execution semantics as well. 4 DESIGN. Flashlight's design is centered around internal APIs for framework components which form the building blocks for domain-specific ML packages and applications — this structure is outlined in Figure 1. Flashlight is implemented as a C++ library and follows a Tensor-based programming methodology, with neural network building blocks that derive from a MODULE interface, communicate by exchanging Tensor data, and are composed functionally or imperatively to form complete neural network architectures.
Tensor programming in Flashlight is fundamentally dynamic , but given that C++ is a compiled language , code describing models in Flashlight is compiled . This approach promotes type safety , foregoes the runtime overheads associated with interpreters , and , unlike eager-based approaches , enables global optimizations where possible . 4.1 OPEN FOUNDATIONAL INTERFACES . Flashlight is built on top of three open foundational APIs , each addressing design and implementation challenges faced by machine and deep learning tools : a Tensor interface , a memory management subsystem , and a distributed computing interface . These APIs are backed by reference implementations that enable Flashlight to efficiently target CPUs , GPUs , and other accelerators . These include code generation and dedicated kernels for Intel , AMD , OpenCL , and CUDA devices , and leverage libraries such as cuDNN ( Chetlur et al. , 2014 ) , MKL ( Intel , 2020a ) , oneDNN ( Intel , 2020b ) , ArrayFire ( Malcolm et al. , 2012 ) , and MiOpen ( Khan et al. , 2019 ) . | The paper proposed a minimal design API (or mostly API?) of machine learning framework called Flashlight. The key argument of the paper is that Flashlight is modular and agile. The Flashlight captured the key aspects of machine learning frameworks: Tensor and Operation, Memory Management, and Distributed. There are some evaluations to argue the benefit of Flashlight. | SP:97873277c2891819393aeebbd3256b7445794e89 |
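A component-based design with swappable implementations behind a small API can be illustrated with a toy backend registry. The interface, backend name, and operations below are invented for illustration; they are not Flashlight's actual (C++) APIs.

```python
class TensorBackend:
    """Minimal tensor-backend interface; concrete backends override the ops."""
    def add(self, a, b):
        raise NotImplementedError

    def mul(self, a, b):
        raise NotImplementedError

class ListBackend(TensorBackend):
    """A trivial reference implementation over Python lists."""
    def add(self, a, b):
        return [x + y for x, y in zip(a, b)]

    def mul(self, a, b):
        return [x * y for x, y in zip(a, b)]

_BACKENDS = {}

def register_backend(name, backend):
    """Swap in a custom implementation without touching user code."""
    _BACKENDS[name] = backend

def get_backend(name):
    return _BACKENDS[name]

register_backend("list", ListBackend())
out = get_backend("list").add([1.0, 2.0], [3.0, 4.0])
```

The design point mirrors the one in the text: because user code only depends on the small interface, a researcher can register an alternative backend (a new memory manager, a new tensor implementation) and benchmark it against the reference without modifying the rest of the stack.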
Flashlight: Enabling Innovation in Tools for Machine Learning | 1 INTRODUCTION . The recent rise of deep learning-based techniques has been accompanied and sustained by the wide availability of dedicated frameworks such as TensorFlow ( Abadi et al. , 2016 ) and PyTorch ( Paszke et al. , 2019 ) . These frameworks have enabled the democratization of machine learning research by providing extensive collections of high level primitives to support common use cases . Lowering the barrier to entry for end users has boosted the popularity of both neural networks and the frameworks in which they are implemented . However , in order to support what are now vast ecosystems and a diverse user base , framework size and complexity have increased dramatically over time . As a result , deep , groundbreaking framework research has become extremely onerous and time consuming , precluding rapid innovation . Given these barriers , major deep learning frameworks have become stuck in their existing operating modes . Innovation in this area remains as important as ever . Indeed , framework innovation accelerates machine learning ( ML ) and artificial intelligence ( AI ) research . Frameworks that are easier to use reduce the engineering burden on researchers , and frameworks that are higher-performance decrease the time required to iterate on experimental work and validate hypotheses . Even more critically , tooling plays a fundamental role in deciding which ideas succeed or fail . For example , LeCun et al . ( 1989 ) pioneered the use of convolutional neural networks ( CNNs ) ( Fukushima & Miyake , 1982 ) trained using backpropagation for computer vision tasks in the late 1980s , which was subsequently applied to handwriting recognition . However , widespread success for CNNs was achieved two decades later when Krizhevsky et al . ( 2012a ) leveraged the CUDA programming model to take advantage of graphics processing units ( GPUs ) to train a much deeper model ( AlexNet ) . 
While deep learning frameworks have been optimized to leverage existing hardware paradigms for common neural network architectures, they often fail to deliver similar efficiencies on designs that diverge from the mainstream. For example, Barham & Isard (2019) explain how the design of these frameworks results in poor hardware utilization for a novel type of neural network, known as a capsule network (Hinton et al., 2018), that leverages new components such as squashing operations and routing by agreement. More generally, what are now unconventional approaches to modern problems in machine learning require highly specialized additions to popular frameworks. As a result of narrowly optimized systems, research beyond deep learning may be discounted due to purported computational infeasibility given modern frameworks' capabilities. Furthermore, the waning of Moore's law (Theis & Wong, 2017), coupled with the ever-growing computational demands of deep learning, is prompting several shifts in hardware. Massive-scale distributed computing is now required to train leading models — a process that established frameworks remain unable to handle truly automatically. In parallel, multiple specialized hardware products are now available to better support deep learning applications: Nvidia's Tensor Cores (Markidis et al., 2018), Google's TPUs (Jouppi et al., 2017), Graphcore's IPUs (Jia et al., 2019), Apple's Neural Engine1, and others have been developed to improve total floating-point operations (FLOPs), cost per FLOP, or energy consumption. Additionally, numerous efforts are underway to move away from conventional von Neumann computing architectures, in which memory and processing units are physically separated, either by storing data closer to compute units or by switching to in-memory computing altogether.
While tooling innovation is alive and well given these incentives for progress, working within large, well-established frameworks has become more and more challenging as framework size and scope grow. As a result, many recent innovations have required the development of ad-hoc tools. For example, efforts in machine learning-driven compilation of neural networks are largely built on top of Halide (Adams et al., 2019; Steiner et al., 2021) and TVM (Chen et al., 2018; Zheng et al., 2020); FlexFlow (Jia et al., 2018; 2020) underpins recent work aimed at improving the use of distributed computing to accelerate the training of large neural networks; and PET (Wang et al., 2021) provides a framework that enables graph-level neural network optimizations. With ad-hoc approaches, researchers are required to start from scratch for new directions or to adapt their ideas to fit into the scaffolding these frameworks provide — resulting in significant technical burdens. To sustain framework innovation, we introduce Flashlight, an open-source minimalist ML library designed to support research in machine learning frameworks, facilitate rapid iteration on ideas, reduce the engineering burden on researchers, and remove the need for new ad-hoc tools. Flashlight includes: • A modular, component-based architecture that makes every aspect of the implementation fully customizable with simple internal APIs. • A compact yet highly performant reference implementation of each component. • A comprehensive set of benchmarks, representative of the state of the art in machine learning, on which to evaluate alternative implementations. 2 RELATED WORK. Numerous frameworks have been implemented in recent years to support machine learning, including Lush (Bottou & LeCun, 2002), Theano (Bergstra et al., 2010), Torch (Collobert et al., 2011), Caffe (Jia et al., 2014), MXNet (Chen et al., 2015), deeplearning4j (Team, 2016), TensorFlow (Abadi et al.
, 2016), Flux (Innes, 2018), Jax (Bradbury et al., 2018), PyTorch (Paszke et al., 2019), Chainer (Tokui et al., 2019), and PaddlePaddle (Ma et al., 2019). These frameworks offer programming models designed around multidimensional arrays (tensors), modeled as first-class objects and supported by a comprehensive set of mathematical primitives (or operators) to manipulate them. To provide the computing power required by deep learning-based methods, most natively support hardware accelerators such as general-purpose GPUs or custom-designed ASICs such as TPUs. Generally, framework implementations follow one of two computational models: • In the deferred execution model, the neural network to be trained is first encoded as a dataflow graph, which can be optimized for a specific set of target hardware devices. The neural network is then executed in a distinct second phase. Since the dataflow graph represents the entire computation, both local and global optimizations can be applied, making the subsequent execution very efficient. However, only programs that can be represented as dataflow graphs can be processed with this approach, thus limiting flexibility. Frameworks such as Theano, TensorFlow2, Caffe, or MXNet fall into this category. • In the eager model, an interpreter (such as Python) is extended with the high-level kernel-based operations needed to train a neural network. These operations are executed immediately when called, though this precludes many optimizations. By weaving neural network-related operations into a Turing-complete programming language, this approach is extremely flexible. Furthermore, the imperative nature of the underlying programming language allows for fine-grained control over the execution order and memory utilization, which enables more specific user-driven optimization. Frameworks such as Torch, PyTorch, or Chainer exemplify this approach. (Footnote 1: https://nr.apple.com/dE9q1p9M7t)
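The two computational models above can be contrasted in a few lines. The following is an illustrative sketch in plain Python (not Flashlight, Theano, or TensorFlow code; all names are invented): the deferred model builds a graph of `Node` objects and executes it in a distinct second phase, while the eager model computes each operation immediately when called.

```python
# --- Deferred execution: build a dataflow graph, run it in a second phase ---
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def const(v):  return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

def run(node):
    """Interpret the whole graph; a real framework could optimize it first."""
    if node.op == "const":
        return node.value
    xs = [run(i) for i in node.inputs]
    return xs[0] + xs[1] if node.op == "add" else xs[0] * xs[1]

graph = mul(add(const(2), const(3)), const(4))  # nothing computed yet
deferred_result = run(graph)                    # distinct execution phase

# --- Eager execution: each operation computes immediately when called ---
eager_result = (2 + 3) * 4

assert deferred_result == eager_result == 20
```

Because `run` sees the entire graph before executing anything, it could in principle apply global rewrites (constant folding, fusion); the eager version trades that away for immediacy and full host-language flexibility, mirroring the trade-off described above.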
3 PRINCIPLES. The aforementioned frameworks are designed and implemented to best serve their user bases — namely, machine learning researchers and practitioners. They rely on large, internally complex codebases to provide comprehensive solutions, as is further discussed in Section 5. In contrast, Flashlight targets an audience of researchers interested in experimenting with new designs and implementations of machine learning tools, or with broader computational or modeling paradigms. To foster this type of innovation, Flashlight balances simplicity and nimbleness with the need to provide enough functionality to support real use cases. Internal and external simplicity is the key design principle of Flashlight; the ability to dramatically modify software and drive it in new directions is inversely correlated with codebase size and complexity (Gill & Kemerer, 1990). More specifically: • Flashlight is built on a shallow stack of idiomatic, modular, and customizable abstractions. Framework components interact through small, well-defined, stable APIs, which expose most internal aspects of its implementation. This ensures that every component of Flashlight can be modified or replaced with new custom implementations, even, e.g., its memory manager and tensor implementation. To support the exploration of a wide array of alternative approaches, Flashlight interfaces are flexible and unopinionated by design. This is in contrast to other frameworks, which impose stricter implementation requirements based on tight design constraints for their computation models and support requirements across hardware, downstream frameworks, and other ecosystem members. • Flashlight provides deliberately compact default implementations of its APIs. This reduces out-of-the-gate engineering burden and the need for modifications, and enables fast compilation and rapid iteration when experimenting.
Furthermore, to mitigate premature optimization, Flashlight deliberately abstains from adding small efficiency improvements if they conflict with the goals of keeping the codebase simple and the APIs clean. • Flashlight is a research-first framework and is not intended for out-of-the-box production use. To keep codebase size small, it forgoes features such as model servers for deployment and integration with cluster-management tools. Flashlight is a viable solution for machine learning research, shipping with a comprehensive set of benchmarks and research setups for state-of-the-art neural network architectures such as convolutional neural networks (CNNs) (Krizhevsky et al., 2012b) and Transformers (Vaswani et al., 2017), as well as task-specific models such as ViT (Dosovitskiy et al., 2020), DETR (Carion et al., 2020), and BERT (Devlin et al., 2018). The speech recognition system wav2letter (Pratap et al., 2019) is also built entirely on Flashlight. Benchmarks built on these state-of-the-art models make Flashlight a turnkey solution for systems researchers who want to quickly evaluate their design and implementation choices without needing to build test benches from the ground up. More importantly, Flashlight makes possible end-to-end benchmarking on real models rather than microbenchmarks or small-scale tests. (Footnote 2: TensorFlow 2.0 adds support for eager execution semantics as well.) 4 DESIGN. Flashlight's design is centered around internal APIs for framework components, which form the building blocks for domain-specific ML packages and applications — this structure is outlined in Figure 1. Flashlight is implemented as a C++ library and follows a Tensor-based programming methodology, with neural network building blocks that derive from a MODULE interface, communicate by exchanging Tensor data, and are composed functionally or imperatively to form complete neural network architectures.
Tensor programming in Flashlight is fundamentally dynamic, but because C++ is a compiled language, code describing models in Flashlight is compiled. This approach promotes type safety, forgoes the runtime overheads associated with interpreters, and, unlike eager-based approaches, enables global optimizations where possible. 4.1 OPEN FOUNDATIONAL INTERFACES. Flashlight is built on top of three open foundational APIs, each addressing design and implementation challenges faced by machine and deep learning tools: a Tensor interface, a memory management subsystem, and a distributed computing interface. These APIs are backed by reference implementations that enable Flashlight to efficiently target CPUs, GPUs, and other accelerators. These include code generation and dedicated kernels for Intel, AMD, OpenCL, and CUDA devices, and leverage libraries such as cuDNN (Chetlur et al., 2014), MKL (Intel, 2020a), oneDNN (Intel, 2020b), ArrayFire (Malcolm et al., 2012), and MiOpen (Khan et al., 2019). | The paper describes the design philosophy and structure of the Flashlight deep learning framework. Flashlight is modular, small, and narrowly oriented toward systems researchers. Rather than (or in addition to) high-level productivity, Flashlight focuses on internal and external simplicity of ML tools. The authors evaluate Flashlight by training several standard reference models, and it achieves slightly better performance than PyTorch or TensorFlow. | SP:97873277c2891819393aeebbd3256b7445794e89
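The layering described here (small internal APIs backed by swappable reference implementations) can be sketched generically. Below is a hypothetical Python analogue; Flashlight itself is a C++ library, and none of these names are Flashlight's actual API. The point is only the shape of the design: framework-level code depends solely on an abstract backend interface, so a custom tensor implementation can be substituted without touching the rest of the framework.

```python
from abc import ABC, abstractmethod

class TensorBackend(ABC):
    """Hypothetical minimal backend interface; not Flashlight's real API."""
    @abstractmethod
    def full(self, length, value): ...
    @abstractmethod
    def add(self, a, b): ...

class ListBackend(TensorBackend):
    """A deliberately tiny reference implementation on plain Python lists."""
    def full(self, length, value):
        return [value] * length
    def add(self, a, b):
        return [x + y for x, y in zip(a, b)]

def add_bias(backend, x, bias_value):
    """'Framework-level' code: written only against the backend interface,
    so a custom backend (GPU, compiler-based, ...) can be swapped in."""
    return backend.add(x, backend.full(len(x), bias_value))

out = add_bias(ListBackend(), [1.0, 2.0], 0.5)
assert out == [1.5, 2.5]
```

A researcher experimenting with, say, a new memory manager or a code-generating tensor backend would implement the same small interface and reuse everything above it, which is the modularity argument the section makes.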
Constructing a Good Behavior Basis for Transfer using Generalized Policy Updates | 1 INTRODUCTION. Reinforcement learning (RL) studies the problem of building rational decision-making agents that maximize long-term cumulative reward through trial-and-error interaction with a given environment. In recent years, RL algorithms combined with powerful function approximators such as deep neural networks have achieved significant successes in a wide range of challenging domains (see e.g. Mnih et al. (2015); Vinyals et al. (2019); Silver et al. (2017; 2018)). However, these algorithms require substantial amounts of data for performing very narrowly defined tasks. In addition to being data-hungry, they are also very brittle to changes in the environment, such as changes in the tasks over time. The most important reasons behind these two shortcomings are that RL algorithms usually learn to perform a task from scratch, without leveraging any form of prior knowledge, and that they are trained to optimize performance on only a single task. A promising approach to tackling both of these shortcomings is to learn multiple ways of behaving, i.e., multiple policies that optimize different reward functions, and to reuse them as needed. Having access to multiple pre-learned policies can allow an agent to quickly solve recurring tasks in a lifelong RL setting. It can also allow for learning to combine the existing policies via a meta-policy to quickly learn new tasks, as in hierarchical RL. Recently, Barreto et al. (2020) proposed the "generalized policy updates" framework, which generalizes the classical policy evaluation and policy improvement operations that underlie many of today's RL algorithms. Its goal is to allow reusing policies resulting from previously learned tasks in order to perform well on downstream tasks, while also being data-efficient.
More precisely, after learning the successor features of several policies in a policy set, also referred to as a behavior basis, they were able to instantaneously "synthesize", in a zero-shot manner, new policies to solve downstream tasks via generalized policy improvement. However, this work leaves open two important questions: (i) what set of policies should the agent learn so that its instantaneous performance on all possible downstream tasks is guaranteed to be good, and (ii) under what conditions does such a set of policies exist. In this paper, we provide answers to the questions above by proving that, under certain assumptions about the environment dynamics and features, learning a diverse set of policies, which we call a set of independent policies, indeed guarantees good instantaneous performance on all possible downstream tasks. After providing an iterative algorithm for the construction of this set, we perform several experiments that validate our theoretical findings. In addition to the validation experiments, we compare this algorithm with recently proposed diverse policy set construction methods (Eysenbach et al., 2018; Zahavy et al., 2020; 2021) and show that, unlike these methods, our approach is able to construct a behavior basis that enables instantaneous transfer to all possible tasks. We also show empirically that learning a set of independent policies can better bootstrap the learning process on downstream tasks where the reward function cannot be described by a linear combination of the features. Finally, we demonstrate the usefulness of this set in a lifelong RL scenario, in which the agent faces different tasks over its lifetime. We hope that our study will bring the community a step closer to building lifelong RL agents that are able to perform multiple tasks and to instantaneously or quickly adapt to new ones during their lifetimes. 2 BACKGROUND. Reinforcement Learning.
In RL (Sutton & Barto, 2018), an agent interacts with its environment by choosing actions so as to obtain as much cumulative long-term reward as possible. The interaction between the agent and its environment is usually modeled as a Markov Decision Process (MDP). An MDP is a tuple M ≡ (S, A, P, r, d0, γ), where S is the (finite) set of states, A is the (finite) set of actions, P : S × A × S → [0, 1] is the transition distribution, r : S × A × S → R is the reward function, which specifies the task of interest, d0 : S → [0, 1] is the initial state distribution, and γ ∈ [0, 1) is the discount factor. In RL, the agent typically has no prior knowledge of P and r, and its goal is to find, through pure interaction, a policy π : S → A that maximizes the expected sum of discounted rewards E_{π,P}[ Σ_{t=0}^∞ γ^t r(S_t, A_t, S_{t+1}) | S_0 ∼ d0 ], where E_{π,P}[·] denotes the expectation over trajectories induced by π and P. Successor Features. The successor representation for a state s under a policy π allows s to be represented by the (discounted) distribution of states encountered when following π from s (Dayan, 1993). Successor features (SFs, Barreto et al., 2017) generalize the idea of successor representations from the tabular setting to function approximation. Following Barreto et al. (2017), we define the SFs of a policy π for a state-action pair (s, a) as: ψ^π(s, a) ≡ E_{π,P}[ Σ_{i=0}^∞ γ^i φ(S_{t+i}, A_{t+i}, S_{t+i+1}) | S_t = s, A_t = a ], (1) where the i-th component of ψ^π gives the expected discounted sum of the i-th component of the feature vector, φ_i, when following policy π starting from the state-action pair (s, a). Successor features allow a decoupling between the reward function and the environment dynamics.
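Equation (1) can be made concrete in a tiny tabular setting. Below is a minimal NumPy sketch in which the two-state deterministic MDP, its features, and the fixed policy are all invented for illustration; the SFs are computed by iterating the feature-based Bellman equation ψ^π(s, a) = φ(s, a, s′) + γ ψ^π(s′, π(s′)), i.e., policy evaluation with the feature vector in place of the scalar reward.

```python
import numpy as np

n_states, n_actions, n_feat, gamma = 2, 2, 2, 0.9
# Invented deterministic dynamics: next_state[s, a].
next_state = np.array([[0, 1], [1, 0]])
# Invented features; phi[s, a] stands in for phi(s, a, s'), since s' is
# fully determined by (s, a) in this deterministic MDP.
phi = np.zeros((n_states, n_actions, n_feat))
phi[0, 1, 0] = 1.0  # taking action 1 in state 0 triggers feature 0
phi[1, 1, 1] = 1.0  # taking action 1 in state 1 triggers feature 1
pi = np.array([1, 1])  # a fixed deterministic policy: always action 1

# Iterate psi(s, a) = phi(s, a) + gamma * psi(s', pi(s')) to convergence.
psi = np.zeros((n_states, n_actions, n_feat))
for _ in range(1000):
    for s in range(n_states):
        for a in range(n_actions):
            s2 = next_state[s, a]
            psi[s, a] = phi[s, a] + gamma * psi[s2, pi[s2]]

# Closed form for this 2-cycle: psi(0,1) = (e0 + gamma*e1) / (1 - gamma^2).
assert abs(psi[0, 1, 0] - 1 / (1 - gamma**2)) < 1e-6
assert abs(psi[0, 1, 1] - gamma / (1 - gamma**2)) < 1e-6
```

The resulting ψ^π depends only on the dynamics and the policy, not on any reward, which is exactly the decoupling the paragraph describes.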
More concretely, if the reward function for a task can be represented as a linear combination of a feature vector φ(s, a, s′) ∈ R^n: r_w(s, a, s′) = φ(s, a, s′)^⊤ w, (2) where w ∈ R^n, then, as we will detail below, the state-action value function Q^π_{r_w} can be computed immediately as the dot product of ψ^π and w. The elements w_i of w can be viewed as indicating a "preference" towards each of the features. Thus, we refer to w interchangeably as either the preference vector or the task. Intuitively, the elements φ_i of the feature vector φ can be viewed as salient events that may be desirable or undesirable to the agent, such as picking up or leaving objects of a certain type, or reaching and/or avoiding certain states. Generalized Policy Evaluation and Improvement. Generalized Policy Evaluation (GPE) and Generalized Policy Improvement (GPI), together referred to as Generalized Policy Updates, generalize the well-known policy evaluation and policy improvement operations of standard dynamic programming to a set of tasks and a set of policies (Barreto et al., 2020). They are used as a transfer mechanism in RL to quickly construct a solution for a newly given task. One particularly efficient instantiation of GPE & GPI is through the use of SFs and value-based action selection. More concretely, consider a set of MDPs of the following form: M_φ ≡ { (S, A, P, r_w = φ^⊤ w, d0, γ) | w ∈ R^n }. (3) Given the SFs ψ^π(s, a) of a policy π, an efficient form of GPE on the task r_w for policy π can be performed as follows: ψ^π(s, a)^⊤ w = Q^π_{r_w}(s, a), (4) where Q^π_{r_w}(s, a) is the state-action value function of π on the task r_w. And, after performing GPE for all the policies π in a finite policy set Π, following Barreto et al. (2017), an efficient form of GPI can be performed as follows: π^{GPI}_Π(s) ∈ argmax_{a∈A} max_{π∈Π} Q^π_{r_w}(s, a).
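Equations (4) and (5) amount to a dot product followed by two maximizations. A minimal NumPy sketch, where the successor-feature values and the task vector w are made-up toy numbers rather than the output of any real learning run:

```python
import numpy as np

# psi[k, s, a, :]: successor features of policy k at state-action (s, a).
# Toy numbers: 2 policies, 1 state, 2 actions, 2 features.
psi = np.array([
    [[[5.0, 0.0], [1.0, 0.0]]],  # policy 0 mostly accumulates feature 0
    [[[0.0, 1.0], [0.0, 5.0]]],  # policy 1 mostly accumulates feature 1
])
w = np.array([0.2, 1.0])  # a new task that mostly rewards feature 1

q = psi @ w  # GPE, eq. (4): q[k, s, a] = psi[k, s, a] . w, for every policy
s = 0
# GPI, eq. (5): per action, take the best value over the policy set, then
# pick the best action; no learning is needed for the new task.
a_gpi = int(np.argmax(q[:, s, :].max(axis=0)))
assert a_gpi == 1  # the action backed by policy 1, the feature-1 specialist
```

Changing only `w` re-solves the action selection instantly from the same stored SFs, which is the zero-shot transfer mechanism the text describes.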
(5) We will refer to this specific use of SFs and value-based action selection for performing GPE & GPI simply as GPE & GPI throughout the rest of this study. Note that π^{GPI} will in general outperform all the policies in Π, and that the actions selected by π^{GPI} in a state may not coincide with any of the actions selected by the policies π ∈ Π in that state. Hence, the policy space that can be attained by GPI can in principle be a lot larger than, e.g., the space accessible by calling policies sequentially from the original set. 3 PROBLEM FORMULATION AND THEORETICAL ANALYSIS. GPE & GPI provide a guarantee that, for any reward function linear in the features, π^{GPI} is at least as good as any of the policies π from the "base set" used to construct it. While this is an appealing guarantee of monotonic improvement, it does not say much, for two reasons. First, it is not clear how big an improvement can be expected for different tasks. More importantly, it leaves open the question of how one should choose the base policies in order to ensure as much improvement as possible. After all, if we had a weak set of policies and simply matched their value with π^{GPI}, this would not be very useful. We will now show that, under certain assumptions, having access to a specific set of diverse policies, which we call a set of independent policies, allows for instantaneously achieving high-level performance on all possible downstream tasks. Let us start by assuming that we are interested in a set of MDPs M_φ, as defined in (3), with deterministic transition functions (the reason for the determinism assumption will become clear by the end of this section). For convenience, we also restrict the possible w values from R^n to W, where W is the surface of the n-dimensional ℓ2 ball.
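The restriction of w to the sphere W is harmless because, by equation (4), Q-values are linear in w: rescaling w by a positive constant rescales all Q-values by that same constant and so leaves greedy action choices (and hence optimal policies) unchanged. A quick numerical check with random made-up SFs:

```python
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=(4, 3))  # made-up SFs for 4 actions, 3 features
w = rng.normal(size=3)
w_unit = w / np.linalg.norm(w)  # projection of the task onto the sphere W

# Q-values are linear in w (eq. 4), so a positive rescaling of w scales
# every Q-value by the same constant and preserves the greedy action.
assert np.argmax(psi @ w) == np.argmax(psi @ w_unit)
```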
Note that this choice does not alter the optimal policies of the MDPs in M_φ, as an optimal policy is invariant with respect to the scale of the reward, and W contains all possible directions. Next, we assume that the features φ_i that make up the feature vectors φ form a set of independent features (SIF), defined as follows: Definition 1 (SIF). A set of features Φ = { φ_i | φ_i : S × A × S → {0, C}, C ∈ R+ }_{i=1}^n is called independent if, for any feature φ_i ∈ Φ and any initial state s0 ∼ d0, we have: (i) φ_i(s0, a0, s1) = 0 ∀a0 ∈ A and ∀s1 ∼ P(s0, a0, ·), and (ii) there exists at least one trajectory, starting from s0, in which all the states associated with φ_i(st, at, st+1) = C are visited, while the states associated with φ_j(st, at, st+1) = C, ∀j ≠ i, are not visited. It should be noted that a specific instantiation of this definition is the case where each feature is set to a positive constant at certain independently reachable state(s) and zero elsewhere, which is the most common instantiation of feature vectors used in previous related work (Barreto et al., 2017; 2020). We define the performance of an arbitrary policy π on a task r_w as: J^π_{r_w} ≡ E_{π,P}[ Σ_{t=0}^∞ r_w(S_t, A_t, S_{t+1}) | S_0 ∼ d0 ], (6) where E_{π,P}[·] denotes the expectation over trajectories induced by π and P. Note that J^π_{r_w} is a scalar corresponding to the expected undiscounted return of policy π under the initial state distribution d0, i.e., the expected total reward obtained by π when starting from s0 ∼ d0. We are now ready to formalize the problem we want to tackle: Problem Formulation. Given a set of MDPs M_φ with deterministic transition functions and a SIF, we want to construct a set of n policies Π_n = { π_i }_{i=1}^n such that the policy π^{GPI}_{Π_n} obtained by performing GPI on that set maximizes (6) for all rewards r_w, where w ∈ W.
That is, we want to solve the following problem: argmax_{Π_n ⊆ Π} J^{π^{GPI}_{Π_n}}_{r_w} for all w ∈ W. (7) It should be noted that the performance measure provided in (6) only measures the expected total reward and thus cannot capture the optimality of the GPI policy. For instance, this measure cannot distinguish between two policies that achieve the same expected total reward in a different number of time steps. However, Theorem 2 in Barreto et al. (2017) implies that, in general, the only way to guarantee the optimality of the GPI policy is to construct a behavior basis that contains all possible policies induced by all w ∈ W. Since there are infinitely many w values, this is impractical. Thus, in this study, we only consider GPI policies that maximize the expected total reward (6). As a solution candidate for the problem in (7), we now focus on a specific set of deterministic policies, called a set of independent policies (SIP), that are able to obtain features independently of each other: Definition 2 (SIP). Let Φ = { φ_i }_{i=1}^n be a SIF and let Π = { π_i }_{i=1}^n be a set of deterministic policies induced by each of the features in Φ. Π is defined to be a SIP if its elements π_i satisfy: φ_j(s_t, a_t, s_{t+1}) = φ_j(s_0, a_0, s_1) ∀j ≠ i, ∀i, ∀s_0 ∼ d_0 and ∀t ∈ {1, ..., T}, (8) where T is the horizon in episodic environments and T → ∞ in non-episodic ones, a_0 = π_i(s_0), and (s_t, a_t, s_{t+1})_{t=1}^T is the sequence of state-action-state triples generated by π_i's interaction with the environment. In general, having a SIP in a set of MDPs with stochastic transition functions is not possible, as the stochasticity can prevent (8) from holding. Thus, the assumption of a set of MDPs with deterministic transition functions is critical to our analysis. An immediate consequence of having this set of policies is that the corresponding SFs can be expressed in a simpler form, as follows: Lemma 1.
Let Φ be a SIF and let π_i be a policy that is induced by the feature φ_i ∈ Φ and is a member of a SIP Π. Then, the entries of the SF ψ^{π_i} of policy π_i have the following form: ψ^{π_i}_j(s, a) = ψ^{π_i}_i(s, a) if i = j, and 0 otherwise. (9) Due to space constraints, we provide all the proofs in the supplementary material. Lemma 1 implies that once we have a SIP, the SFs take the much simpler form (9), which allows the GPI policy to maximize performance on all possible tasks according to the measure in (6), solving the optimization objective in (7): Theorem 1. Let Φ be a SIF and let Π be a SIP induced by each of the features in Φ. Then, the GPI policy π^{GPI}_Π is a solution to the optimization problem defined in (7). Theorem 1 implies that having access to a SIP consisting of only n policies, where n is the dimensionality of φ, and applying the policy composition operator GPI on top, allows for instantaneously achieving maximum performance across all possible downstream tasks. Considering that there are infinitely many such tasks, this provides a significant gain in the number of policies that need to be learned for full downstream task coverage. | The paper extends the successor features framework to answer the following question: which policies should we learn and store so that, when presented with a new task, we achieve the best performance possible? The paper defines the notion of independent policies (forming a kind of basis over policy space), which can then be combined to solve new tasks (whose rewards are expressible as a linear combination of features) immediately. Experimental results support the theoretical results and show that it is best to learn these independent policies if we wish to maximise performance on downstream tasks. | SP:fbacc4b906328e10a7f61a351bc02cf99aa33c4c
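The structure in equation (9) is what makes a SIP convenient: each independent policy's SF vector has a single nonzero entry, so GPE for policy π_i collapses to the single product ψ^{π_i}_i · w_i. A loose numerical sketch with toy numbers follows; note it simplifies equation (5) by comparing the independent policies only at their own preferred actions in a single state, rather than maximizing over all actions.

```python
import numpy as np

# Toy numbers: psi_diag[i] plays the role of psi^{pi_i}_i(s0, pi_i(s0)),
# the only nonzero SF entry of independent policy i (eq. 9).
psi_diag = np.array([4.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 1.0])  # an arbitrary task / preference vector

q_per_policy = psi_diag * w  # GPE collapses to one product per policy
best = int(np.argmax(q_per_policy))
assert best == 2  # follow the policy whose feature the task rewards most
```

Under this diagonal structure, no policy in the basis can "hide" value in another policy's feature, which is the intuition behind why n independent policies suffice for all tasks in W.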
Constructing a Good Behavior Basis for Transfer using Generalized Policy Updates | 1 INTRODUCTION . Reinforcement learning ( RL ) studies the problem of building rational decision-making agents that maximize long term cumulative reward through trial-and-error interaction with a given environment . In recent years , RL algorithms combined with powerful function approximators such as deep neural networks have achieved significant successes in a wide range of challenging domains ( see e.g . Mnih et al . ( 2015 ) ; Vinyals et al . ( 2019 ) ; Silver et al . ( 2017 ; 2018 ) ) . However , these algorithms require substantial amounts of data for performing very narrowly-defined tasks . In addition to being datahungry , they are also very brittle to changes in the environment , such as changes in the tasks over time . The most important reasons behind these two shortcomings is that RL algorithms usually learn to perform a task from scratch , without leveraging any form of prior knowledge , and they are trained to optimize performance on only a single task . A promising approach to tackle both of these shortcomings is to learn multiple ways of behaving , i.e. , multiple policies that optimize different reward functions , and to reuse them as needed . Having access to multiple pre-learned policies can allow an agent to quickly solve reoccurring tasks in a lifelong RL setting . It can also allow for learning to combine the existing policies via a meta-policy to quickly learn new tasks , as in hierarchical RL . Recently , Barreto et al . ( 2020 ) have proposed the “ generalized policy updates ” framework , which generalizes the classical policy evaluation and policy improvement operations that underlie many of today ’ s RL algorithms . Its goal is to allow reusing policies resulting from previously learned tasks in order to perform well on downstream tasks , while also being data-efficient . 
More precisely , after learning the successor features of several policies in a policy set , also referred to as a behavior basis , they were able to instantaneously “ synthesize ” , in a zero-shot manner , new policies to solve downstream tasks , via generalized policy improvement . However , this work leaves open two important questions : ( i ) what set of policies should the agent learn so that its instantaneous performance on all possible downstream tasks is guaranteed to be good , and ( ii ) under what conditions does such a set of policies exist . In this paper , we provide answers to the questions above by proving that under certain assumptions about the environment dynamics and features , learning a diverse set of policies , which we call a set of independent policies , indeed guarantees good instantaneous performance on all possible downstream tasks . After providing an iterative algorithm for the construction of this set , we perform several experiments that validate our theoretical findings . In addition to the validation experiments , we compare this algorithm with recently proposed diverse policy set construction methods ( Eysenbach et al. , 2018 ; Zahavy et al. , 2020 ; 2021 ) and show that , unlike these methods , our approach is able to construct a behavior basis that enables instantaneous transfer to all possible tasks . We also show empirically that learning a set of independent policies can better bootstrap the learning process on downstream tasks where the reward function can not be described by a linear combination of the features . Finally , we demonstrate the usefulness of this set in a lifelong RL scenario , in which the agent faces different tasks over its lifetime . We hope that our study will bring the community a step closer to building lifelong RL agents that are able to perform multiple tasks and are able to instantaneously/quickly adapt to new ones during its lifetime . 2 BACKGROUND . Reinforcement Learning . 
In RL ( Sutton & Barto , 2018 ) , an agent interacts with its environment by choosing actions to get as much as cumulative long-term reward . The interaction between the agent and its environment is usually modeled as a Markov Decision Process ( MDP ) . An MDP is a tuple M ≡ ( S , A , P , r , d0 , γ ) , where S is the ( finite ) set of states , A is the ( finite ) set of actions , P : S × A × S → [ 0 , 1 ] is the transition distribution , r : S × A × S → R is the reward function , which specifies the task of interest , d0 : S → [ 0 , 1 ] is the initial state distribution and γ ∈ [ 0 , 1 ) is the discount factor . In RL , typically the agent does not have any knowledge about P and r beforehand , and its goal is to find , through pure interaction , a policy π : S → A that maximizes the expected sum of discounted rewards Eπ , P [ ∑∞ t=0 γ tr ( St , At , St+1 ) |S0 ∼ d0 ] , where Eπ , P [ · ] denotes the expectation over trajectories induced by π and P . Successor Features . The successor representation for a state s under a policy π allows s to be represented by the ( discounted ) distribution of states encountered when following π from s ( Dayan , 1993 ) . Given a policy , successor features ( SF , Barreto et al. , 2017 ) are a generalization of the idea of successor representations from the tabular setting to function approximation . Following Barreto et al . ( 2017 ) , we define SFs of a policy π for state-action ( s , a ) as : ψπ ( s , a ) ≡ Eπ , P [ ∞∑ i=0 γiφ ( St+i , At+i , St+i+1 ) ∣∣∣St = s , At = a ] , ( 1 ) where the ith component of ψπ gives the expected discounted sum of the ith component of the feature vector , φi , when following policy π , starting from the state-action pair ( s , a ) . Successor features allow a decoupling between the reward function and the environment dynamics . 
More concretely, if the reward function of a task can be represented as a linear combination of a feature vector $\phi(s, a, s') \in \mathbb{R}^n$:

$$r_{\mathbf{w}}(s, a, s') = \phi(s, a, s')^{\top} \mathbf{w}, \quad (2)$$

where $\mathbf{w} \in \mathbb{R}^n$, then, as we detail below, the state-action value function $Q^{\pi}_{r_{\mathbf{w}}}$ can be computed immediately as the dot product of $\psi^{\pi}$ and $\mathbf{w}$. The elements $w_i$ of $\mathbf{w}$ can be viewed as indicating a "preference" for each of the features. Thus, we refer to $\mathbf{w}$ interchangeably as either the preference vector or the task. Intuitively, the elements $\phi_i$ of the feature vector $\phi$ can be viewed as salient events that may be desirable or undesirable to the agent, such as picking up or leaving objects of a certain type, or reaching and/or avoiding certain states.

Generalized Policy Evaluation and Improvement. Generalized Policy Evaluation (GPE) and Generalized Policy Improvement (GPI), together referred to as Generalized Policy Updates, are generalizations of the well-known policy evaluation and policy improvement operations of standard dynamic programming to a set of tasks and a set of policies (Barreto et al., 2020). They are used as a transfer mechanism in RL to quickly construct a solution for a newly given task. One particularly efficient instantiation of GPE & GPI uses SFs and value-based action selection. More concretely, given a set of MDPs of the form

$$\mathcal{M}_{\phi} \equiv \{(S, A, P, r_{\mathbf{w}} = \phi^{\top} \mathbf{w}, d_0, \gamma) \mid \mathbf{w} \in \mathbb{R}^n\}, \quad (3)$$

and given the SFs $\psi^{\pi}(s, a)$ of a policy $\pi$, an efficient form of GPE on the task $r_{\mathbf{w}}$ for policy $\pi$ can be performed as follows:

$$\psi^{\pi}(s, a)^{\top} \mathbf{w} = Q^{\pi}_{r_{\mathbf{w}}}(s, a), \quad (4)$$

where $Q^{\pi}_{r_{\mathbf{w}}}(s, a)$ is the state-action value function of $\pi$ on the task $r_{\mathbf{w}}$. And, after performing GPE for every policy $\pi$ in a finite policy set $\Pi$, following Barreto et al. (2017), an efficient form of GPI can be performed as follows:

$$\pi^{\mathrm{GPI}}_{\Pi}(s) \in \arg\max_{a \in A} \max_{\pi \in \Pi} Q^{\pi}_{r_{\mathbf{w}}}(s, a).$$
(5)

We will refer to this specific use of SFs and value-based action selection for performing GPE & GPI simply as GPE & GPI throughout the rest of this study. Note that $\pi^{\mathrm{GPI}}$ will in general outperform all the policies in $\Pi$, and that the actions selected by $\pi^{\mathrm{GPI}}$ in a state may not coincide with any of the actions selected by $\pi \in \Pi$ in that state. Hence, the policy space attainable by GPI can in principle be much larger than, e.g., the space accessible by calling policies sequentially from the original set.

3 PROBLEM FORMULATION AND THEORETICAL ANALYSIS

GPE & GPI provide a guarantee that, for any reward function linear in the features, $\pi^{\mathrm{GPI}}$ is at least as good as any of the policies $\pi$ in the "base set" used to construct it. While this is an appealing guarantee of monotonic improvement, it does not say much, for two reasons. First, it is not clear how big an improvement can be expected for different tasks. More importantly, it leaves open the question of how one should choose the base policies in order to ensure as much improvement as possible. After all, if we had a weak set of policies and $\pi^{\mathrm{GPI}}$ simply matched their value, this would not be very useful. We will now show that, under certain assumptions, having access to a specific set of diverse policies, which we call a set of independent policies, allows the agent to instantaneously achieve high-level performance on all possible downstream tasks. Let us start by assuming that we are interested in a set of MDPs $\mathcal{M}_{\phi}$, as defined in (3), with deterministic transition functions (the reason for the determinism assumption will become clear by the end of this section). For convenience, we also restrict the possible $\mathbf{w}$ values from $\mathbb{R}^n$ to $\mathcal{W}$, where $\mathcal{W}$ is the surface of the $n$-dimensional $\ell_2$ ball.
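Eqs. (4) and (5) above reduce to a few lines of array code. The following sketch (our own illustrative construction; the stacked-array layout and names are assumptions, not from the paper) performs GPE for every base policy and then takes the GPI action:

```python
import numpy as np

def gpi_action(psis, w, s):
    """GPE & GPI with SFs (Eqs. 4-5): evaluate every base policy on task w
    via Q^pi(s, a) = psi^pi(s, a)^T w, then act greedily with respect to
    the maximum over policies. psis has shape (num_policies, S, A, n)."""
    q = psis[:, s] @ w                       # (num_policies, A): GPE, Eq. (4)
    return int(np.argmax(q.max(axis=0)))     # max over policies, argmax over actions: Eq. (5)

# Two base policies in a one-state MDP with two actions and n = 2 features:
# policy 0 accumulates feature 0 via action 0, policy 1 feature 1 via action 1.
psis = np.array([
    [[[1.0, 0.0], [0.0, 0.0]]],   # psi of policy 0, shape (S=1, A=2, n=2)
    [[[0.0, 0.0], [0.0, 1.0]]],   # psi of policy 1
])
a_for_w0 = gpi_action(psis, np.array([1.0, 0.0]), s=0)  # task prefers feature 0
a_for_w1 = gpi_action(psis, np.array([0.0, 1.0]), s=0)  # task prefers feature 1
```

The example shows the zero-shot aspect: nothing is relearned when $\mathbf{w}$ changes; only the dot products and the max are recomputed.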
Note that this choice does not alter the optimal policies of the MDPs in $\mathcal{M}_{\phi}$, as an optimal policy is invariant with respect to the scale of the reward and $\mathcal{W}$ contains all possible directions. Next, we assume that the features $\phi_i$ that make up the feature vectors $\phi$ form a set of independent features (SIF), defined as follows:

Definition 1 (SIF). A set of features $\Phi = \{\phi_i \mid \phi_i : S \times A \times S \to \{0, C\},\ C \in \mathbb{R}^+\}_{i=1}^{n}$ is called independent if, for any feature $\phi_i \in \Phi$ and any initial state $s_0 \sim d_0$, we have: (i) $\phi_i(s_0, a_0, s_1) = 0$ for all $a_0 \in A$ and all $s_1 \sim P(s_0, a_0, \cdot)$, and (ii) there exists at least one trajectory, starting from $s_0$, in which all the states associated with $\phi_i(s_t, a_t, s_{t+1}) = C$ are visited, while the states associated with $\phi_j(s_t, a_t, s_{t+1}) = C$, for all $j \neq i$, are not visited.

It should be noted that a specific instantiation of this definition is the case where each feature equals a positive constant at certain independently reachable states and zero elsewhere, which is the most common instantiation of feature vectors in previous related work (Barreto et al., 2017; 2020). We define the performance of an arbitrary policy $\pi$ on a task $r_{\mathbf{w}}$ as:

$$J^{\pi}_{r_{\mathbf{w}}} \equiv \mathbb{E}_{\pi, P}\left[\sum_{t=0}^{\infty} r_{\mathbf{w}}(S_t, A_t, S_{t+1}) \,\middle|\, S_0 \sim d_0\right], \quad (6)$$

where $\mathbb{E}_{\pi, P}[\cdot]$ denotes the expectation over trajectories induced by $\pi$ and $P$. Note that $J^{\pi}_{r_{\mathbf{w}}}$ is a scalar corresponding to the expected undiscounted return of policy $\pi$ under the initial state distribution $d_0$, i.e., the expected total reward obtained by $\pi$ when starting from $s_0 \sim d_0$. We are now ready to formalize the problem we want to tackle:

Problem Formulation. Given a set of MDPs $\mathcal{M}_{\phi}$ with deterministic transition functions and a SIF, we want to construct a set of $n$ policies $\Pi_n = \{\pi_i\}_{i=1}^{n}$ such that the performance of the policy $\pi^{\mathrm{GPI}}_{\Pi_n}$ obtained by performing GPI on that set maximizes (6) for all rewards $r_{\mathbf{w}}$ with $\mathbf{w} \in \mathcal{W}$.
That is, we want to solve the following problem:

$$\arg\max_{\Pi_n \subseteq \Pi} J^{\pi^{\mathrm{GPI}}_{\Pi_n}}_{r_{\mathbf{w}}} \quad \text{for all } \mathbf{w} \in \mathcal{W}. \quad (7)$$

It should be noted that the performance measure in (6) only measures the expected total reward and thus cannot capture the optimality of the GPI policy. For instance, this measure cannot distinguish between two policies that achieve the same expected total reward in different numbers of time steps. However, Theorem 2 in Barreto et al. (2017) implies that, in general, the only way to guarantee the optimality of the GPI policy is to construct a behavior basis that contains all possible policies induced by all $\mathbf{w} \in \mathcal{W}$. Since there are infinitely many $\mathbf{w}$ values, this is impractical. Thus, in this study, we only consider GPI policies that maximize the expected total reward (6). As a solution candidate to the problem in (7), we now focus on a specific set of deterministic policies, called a set of independent policies (SIP), that are able to obtain features independently of each other:

Definition 2 (SIP). Let $\Phi = \{\phi_i\}_{i=1}^{n}$ be a SIF and let $\Pi = \{\pi_i\}_{i=1}^{n}$ be a set of deterministic policies induced by each of the features in $\Phi$. $\Pi$ is defined to be a SIP if its elements $\pi_i$ satisfy:

$$\phi_j(s_t, a_t, s_{t+1}) = \phi_j(s_0, a_0, s_1) \quad \forall j \neq i,\ \forall i,\ \forall s_0 \sim d_0 \text{ and } \forall t \in \{1, \ldots, T\}, \quad (8)$$

where $T$ is the horizon in episodic environments and $T \to \infty$ in non-episodic ones, $a_0 = \pi_i(s_0)$, and $(s_t, a_t, s_{t+1})_{t=1}^{T}$ is the sequence of state-action-state triples generated by $\pi_i$'s interaction with the environment.

In general, having a SIP in a set of MDPs with stochastic transition functions is not possible, as the stochasticity can prevent (8) from holding. Thus, the assumption of a set of MDPs with deterministic transition functions is critical to our analysis. An immediate consequence of having this set of policies is that the corresponding SFs can be expressed in a simpler form, as follows:

Lemma 1.
Let $\Phi$ be a SIF and let $\pi_i$ be a policy that is induced by the feature $\phi_i \in \Phi$ and is a member of a SIP $\Pi$. Then, the entries of the SFs $\psi^{\pi_i}$ of policy $\pi_i$ have the following form:

$$\psi^{\pi_i}_j(s, a) = \begin{cases} \psi^{\pi_i}_i(s, a), & \text{if } i = j \\ 0, & \text{otherwise.} \end{cases} \quad (9)$$

Due to space constraints, we provide all proofs in the supplementary material. Lemma 1 implies that, once we have a SIP, the SFs take the much simpler form (9), which allows the GPI policy to maximize performance on all possible tasks according to the measure in (6), solving the optimization objective in (7):

Theorem 1. Let $\Phi$ be a SIF and let $\Pi$ be a SIP induced by each of the features in $\Phi$. Then, the GPI policy $\pi^{\mathrm{GPI}}_{\Pi}$ is a solution to the optimization problem defined in (7).

Theorem 1 implies that having access to a SIP consisting of only $n$ policies, where $n$ is the dimensionality of $\phi$, and applying the policy composition operator GPI on top, allows the agent to instantaneously achieve maximum performance across all possible downstream tasks. Considering that there are infinitely many such tasks, this provides a significant reduction in the number of policies that must be learned for full downstream task coverage. | The paper focuses on reinforcement learning problems with known successor features and rewards expressible as their linear combination. Building on recent research, it presents the concepts of independent features and independent policies, and a way to construct them. Theoretically, it shows that the set of independent policies combined with GPE & GPI is enough to solve any induced task. Experimentally, the authors verify the theory and compare against existing approaches to creating policy sets, outperforming all of them. They also provide a set of relevant questions and answers, supported by separate experiments. Finally, they perform experiments on problems without the linear combination assumption and in a lifelong RL setting, with positive results.
| SP:fbacc4b906328e10a7f61a351bc02cf99aa33c4c |
Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks | 1 INTRODUCTION

The laws driving the physical world are often best described by partial differential equations (PDEs) that relate how a magnitude of interest changes in time to its change in space. They describe how the atmosphere and oceans circulate and interact, how structures deform under load, and how electromagnetic waves propagate (Courant & Hilbert, 2008). Knowledge of these equations lets us predict the weather (Coiffier, 2011), build sturdier structures, and communicate wirelessly. Yet, in many cases we only know the PDEs governing a system partially (Isakov, 2006) or not at all, or solving them is too computationally costly to be practical (Ames, 2014). Machine learning researchers try to fill in these gaps with models trained on collected data. For example, neural networks have been trained for weather forecasting (Shi et al., 2015) and fluid flow simulation (Belbute-Peres et al., 2020), both of which are traditionally outcomes of PDE solvers. Even the dynamics of discrete dynamical systems such as traffic (Li et al., 2018) and crowds (Zhang et al., 2017) have been learned from data. A challenge facing these models is the high cost of acquiring training data, so the data is usually only sparsely distributed in space. Since graphs are a natural way to structure sparse data, models incorporating graph neural networks (GNNs) have been particularly successful at spatio-temporal forecasting (Yu et al., 2018; Wu et al., 2019). In the domain of physical processes, we can reasonably assume that the observed system follows a PDE. There are two main ways to incorporate this assumption as a priori knowledge into a model. First, we can encode a known PDE into a loss function that encourages the model to fulfill the equation (Raissi et al., 2019).
Another way is to derive the model structure itself from known laws such as the convection-diffusion equation (de Bézenac et al., 2018). In this paper, we follow the second approach. Consider a dynamical system on a bounded domain $\Omega \subset \mathbb{R}^d$ that is governed by the PDE

$$\partial_t u = F(t, x, u, \partial_x u, \partial_x^2 u, \ldots) \quad (1)$$

on functions $u : [0, T] \times \Omega \to \mathbb{R}^m$. If we have a dense measurement $u_0 : \Omega \to \mathbb{R}^m$ of the current state of the system and a solution $u$ that satisfies Eq. (1) for all $t \in [0, T]$ and also fulfills the initial condition $u(0, x) = u_0(x)$ at all points $x \in \Omega$, we can use $u$ as a forecast of the state of the system until time $T$. From a spatio-temporal forecasting perspective, this means that we can forecast the evolution of the system if we have a continuous measurement of the state, know the dynamics $F$, and can find solutions of Eq. (1) efficiently. Unfortunately, in practice we only have a finite number of measurements at arbitrary points and only know the dynamics partially or not at all.

Contributions. An established numerical method for forecasting in systems with fully specified dynamics is the finite element method (FEM) (Brenner et al., 2008). In this paper, we introduce the first graph-based model for spatio-temporal forecasting that is derived from FEM in a principled way. Our derivation establishes a direct connection between the form of the unknown dynamics and the structure of the model. Through this connection, our model can incorporate prior knowledge about the governing physical processes via assumptions on the form of the underlying dynamics. We employ this mechanism to derive a specialized model for transport problems from the convection equation. The way the model structure arises from the underlying equation makes our models uniquely interpretable.
We show that our transport model disentangles convection from the remainder of the learned dynamics, such as source/sink behavior, and that the activations of the model correspond to a learned flow field, which can be visualized and analyzed. In experiments on multi-step forecasting of sea surface temperature and gas flow, our model improves upon baselines from the recurrent, temporal-convolutional, and continuous-time model classes, with further improvement from the transport model.

2 BACKGROUND

2.1 FINITE ELEMENT METHOD

In the following, we outline how to approximate a solution $u$ to the dynamics in Eq. (1) from an initial value $u_0$ by discretizing $u$ in space using finite elements. Let $X$ be a set of points with a triangulation $\mathcal{T}$ of $d$-dimensional, non-overlapping simplices,

$$X = \{x^{(i)} \in \mathbb{R}^d\}_{i=1}^{N}, \qquad \mathcal{T} = \{\Delta^{(j)} \mid \Delta^{(j)} \subset X,\ |\Delta^{(j)}| = d + 1\}_{j=1}^{N_{\mathcal{T}}}, \quad (2)$$

such that $\bigcup_{\Delta \in \mathcal{T}} \mathrm{CH}(\Delta)$ equals the domain $\Omega$, where $\mathrm{CH}(\Delta)$ is the convex hull of simplex $\Delta$. So we define a simplex $\Delta^{(j)} \in \mathcal{T}$ representing the $j$-th mesh cell as the set of vertices of the cell and denote the domain volume covered by the cell by the convex hull $\mathrm{CH}(\Delta^{(j)})$ of its vertices. We will assume $u$ to be a scalar field, i.e., $u : [0, T] \times \Omega \to \mathbb{R}$. If $u$ is a vector field, we treat it as a system of $m$ scalar fields instead. For a detailed introduction to FEM, we refer the reader to Igel (2017).

Basis Functions. A priori, we assume that the unknown solution $u$ to our problem lies in an infinite-dimensional function space $U$. The first step of FEM to make the problem numerically feasible is to approximate $U$ with a finite-dimensional linear subspace $\tilde{U}$. This subspace can then be written in terms of linear combinations of basis functions, $\tilde{U} = \mathrm{span}\{\varphi^{(1)}, \ldots, \varphi^{(N)}\}$. There are many possible bases, and the choice determines various qualities of the resulting procedure, such as the continuity of the approximation and the sparsity pattern of the mass matrix in Eq. (7).
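The mesh definition in Eq. (2) can be made concrete with a tiny hand-built example (the point coordinates and cell indices below are our own illustrative choices, not from the paper): a unit square tiled by two triangles whose convex hulls cover the domain exactly.

```python
import numpy as np

# A minimal 2-D mesh in the spirit of Eq. (2): points X and a triangulation T
# of the unit square into two non-overlapping simplices (triangles).
X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
T = [(0, 1, 2), (0, 2, 3)]   # each cell: d + 1 = 3 vertex indices

def cell_area(simplex):
    """Area of the convex hull CH(Delta) of a 2-D simplex (shoelace formula)."""
    a, b, c = X[list(simplex)]
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

# The cells tile the domain Omega: their areas sum to the area of the unit square.
areas = [cell_area(s) for s in T]
total = sum(areas)
```

Checking that the cell areas sum to the domain area is a quick way to verify that a triangulation covers $\Omega$ without overlap.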
In our case, we choose the so-called P1 basis of piecewise linear functions (hat functions), see Fig. 2a (Igel, 2017). There are as many basis functions as there are points, and each is uniquely defined by being linear when restricted to a single cell $\Delta \in \mathcal{T}$ and by the constraint

$$\varphi^{(j)}(x^{(i)}) = \begin{cases} 1 & \text{if } x^{(i)} = x^{(j)} \\ 0 & \text{otherwise} \end{cases} \quad \forall x^{(i)} \in X. \quad (3)$$

So the basis function $\varphi^{(j)}$ is 1 at $x^{(j)}$, falls linearly to 0 on the mesh cells adjacent to $x^{(j)}$, and is 0 everywhere else. The resulting finite-dimensional function space $\tilde{U}$ is the space of linear interpolators between values at the vertices, see Fig. 2b. An important property is that if we expand $u \in \tilde{U}$ in this basis, the value of $u$ at the $i$-th node is just its $i$-th coefficient:

$$u(x^{(i)}) = \sum_{j=1}^{N} c_j \varphi^{(j)}(x^{(i)}) = c_i. \quad (4)$$

Galerkin Method. A piecewise linear approximation $u \in \tilde{U}$ is not differentiable everywhere and therefore cannot fulfill Eq. (1) exactly. So instead of requiring an exact solution, we ask that the residual $R(u) = \partial_t u - F(t, x, u, \ldots)$ be orthogonal to the approximation space $\tilde{U}$ with respect to the inner product $\langle u, v \rangle_{\Omega} = \int_{\Omega} u(x) \cdot v(x)\, dx$ at any fixed time $t$. In effect, we are looking for the best possible solution within $\tilde{U}$. Because $\tilde{U}$ is generated by a finite basis, the orthogonality requirement decomposes into $N$ equations, one for each basis function:

$$\langle R(u), v \rangle_{\Omega} = 0 \ \forall v \in \tilde{U} \iff \langle R(u), \varphi^{(i)} \rangle_{\Omega} = 0 \ \forall i = 1, \ldots, N. \quad (5)$$

Plugging the residual back in and using the linearity of the inner product, we can reconstruct a system of equations that resembles the PDE we started with:

$$\langle \partial_t u, \varphi^{(i)} \rangle_{\Omega} = \langle F(t, x, u, \ldots), \varphi^{(i)} \rangle_{\Omega} \quad \forall i = 1, \ldots, N. \quad (6)$$

At this point we can stack the system of $N$ equations into a vector equation.
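A short sketch of the P1 hat functions in one dimension (our own illustrative code, not the paper's implementation) makes the constraint in Eq. (3) and the interpolation property in Eq. (4) tangible:

```python
import numpy as np

def hat(j, x, nodes):
    """P1 hat function phi^(j) on a sorted 1-D mesh (Eq. 3): piecewise linear,
    1 at nodes[j], 0 at every other node, 0 outside the adjacent cells."""
    phi = np.zeros_like(x, dtype=float)
    if j > 0:                                      # rising edge on [x_{j-1}, x_j]
        m = (x >= nodes[j - 1]) & (x <= nodes[j])
        phi[m] = (x[m] - nodes[j - 1]) / (nodes[j] - nodes[j - 1])
    if j < len(nodes) - 1:                         # falling edge on [x_j, x_{j+1}]
        m = (x >= nodes[j]) & (x <= nodes[j + 1])
        phi[m] = (nodes[j + 1] - x[m]) / (nodes[j + 1] - nodes[j])
    return phi

nodes = np.array([0.0, 1.0, 2.0])
c = np.array([3.0, 5.0, 7.0])                      # basis coefficients
x = np.linspace(0.0, 2.0, 9)
u = sum(c[j] * hat(j, x, nodes) for j in range(len(nodes)))  # u in U~, Eq. (4)
```

At the nodes, $u$ reproduces the coefficients exactly ($u(x^{(i)}) = c_i$), and between nodes it interpolates linearly; the hat functions also sum to 1 everywhere on the mesh (partition of unity).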
If we plug the basis expansion $\sum_{j=1}^{N} c_j \varphi^{(j)}$ for $u$ into the left-hand side, we get a linear system

$$A\, \partial_t c = m, \quad (7)$$

where $A_{ij} = \langle \varphi^{(i)}, \varphi^{(j)} \rangle_{\Omega}$ is the so-called mass matrix, $c$ is the vector of basis coefficients of $u$, and $m_i = \langle F(t, x, u, \ldots), \varphi^{(i)} \rangle_{\Omega}$ captures the effect of the dynamics $F$. The left-hand side evaluates to $A\, \partial_t c$ because the basis functions are constant with respect to time. The right-hand side cannot be simplified further without additional assumptions on $F$.

Method of Lines. If we can evaluate the right-hand side $m$, we can solve the linear system in Eq. (7) for the temporal derivatives of the coefficients of $u$ at each point in time. In fact, we have converted the PDE into a system of ordinary differential equations (ODEs), which we can solve with an arbitrary ODE solver given an initial value $c(0)$, as in Fig. 2c. This is known as the method of lines because we solve for $u$ along parallel lines in time. To find a vector field $u : [0, T] \times \Omega \to \mathbb{R}^m$ instead of a scalar field, we treat the $m$ dimensions of $u$ as a system of $m$ scalar fields. This results in $m$ copies of Eq. (7), which we need to solve simultaneously. Because the mass matrix $A$ is constant with respect to $u$, we can combine the system into a matrix equation

$$A\, \partial_t C = M, \quad (8)$$

where $C, M \in \mathbb{R}^{N \times m}$ are the stacked $c$ and $m$ vectors, respectively. In summary, the spatial discretization with finite elements turns the PDE (1) into the matrix ODE (8).

2.2 MESSAGE PASSING NEURAL NETWORKS

Message-passing neural networks (MPNNs) are a general framework for learning on graphs that encompasses many variants of graph neural networks (Gilmer et al., 2017). The framework prescribes that nodes in a graph iteratively exchange messages and update their states based on the received messages for $P$ steps.
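The assembly of the mass matrix in Eq. (7) and one method-of-lines step can be sketched in a few lines. This is our own illustrative code under stated assumptions: a 1-D mesh (where each element of length $h$ contributes the local matrix $(h/6)\,[[2,1],[1,2]]$, a standard result for linear elements), an explicit Euler step, and toy dynamics $m = -Ac$ (i.e. $F(u) = -u$) chosen only so the step has a known closed form.

```python
import numpy as np

def p1_mass_matrix(nodes):
    """Assemble A_ij = <phi^(i), phi^(j)>_Omega for the 1-D P1 basis.
    Each element [x_k, x_{k+1}] of length h contributes the local matrix
    (h / 6) * [[2, 1], [1, 2]] to rows/columns (k, k+1)."""
    N = len(nodes)
    A = np.zeros((N, N))
    for k in range(N - 1):
        h = nodes[k + 1] - nodes[k]
        local = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        idx = [k, k + 1]
        A[np.ix_(idx, idx)] += local
    return A

def step(A, c, dt):
    """One forward-Euler method-of-lines step: solve A dc/dt = m (Eq. 7)
    with the toy dynamics m = -A c, so that dc/dt = -c exactly."""
    dc = np.linalg.solve(A, -A @ c)
    return c + dt * dc

nodes = np.array([0.0, 0.5, 1.0])
A = p1_mass_matrix(nodes)
c1 = step(A, np.array([1.0, 2.0, 3.0]), dt=0.1)
```

With the toy dynamics, one Euler step simply scales the coefficients by $(1 - \Delta t)$, which makes the sketch easy to verify; in the paper's setting, $m$ would instead come from the learned dynamics.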
For a graph $G = (V, E)$ with nodes $V$ and edges $E$, and initial node states $h^{(0)}_v\ \forall v \in V$, the $p$-th propagation step is

$$h^{(p)}_v = f_{\mathrm{upd}}\left(h^{(p-1)}_v, \sum_{\{u, v\} \in E} f_{\mathrm{msg}}\left(h^{(p-1)}_u, h^{(p-1)}_v\right)\right), \quad (9)$$

where $f_{\mathrm{msg}}$ maps node states and edge attributes to messages and $f_{\mathrm{upd}}$ updates a node's state with the aggregated incoming messages. The final node states $h^{(P)}_v$ can then be interpreted directly as per-node predictions or passed as node embeddings to downstream systems. In this work, we employ a slight generalization of the above to undirected hypergraphs, i.e., graphs whose edges are sets of an arbitrary number of nodes instead of having a cardinality of exactly 2. For such a hypergraph $G = (V, E)$ with nodes $V$ and hyperedges $\varepsilon = \{u, v, w, \ldots\} \in E$, and initial node states $h^{(0)}_v\ \forall v \in V$, the $p$-th propagation step is

$$h^{(p)}_v = f_{\mathrm{upd}}\left(h^{(p-1)}_v, \sum_{\varepsilon \in E\ \text{s.t.}\ v \in \varepsilon} f_{\mathrm{msg}}\left(\{h^{(p-1)}_u \mid u \in \varepsilon\}\right)_v\right). \quad (10)$$

Note that $f_{\mathrm{msg}}$ jointly computes a separate message for each node $v$ participating in a hyperedge $\varepsilon$. | The authors propose a method for forecasting with partial differential equations by coupling the finite element method on an arbitrary grid with learning the dynamics from data. For this purpose, a variant of message-passing graph networks is used. The paper shows that it is possible to incorporate priors on the structure of the PDE, resulting in an interpretable solution. The model also shows more stability than competitors to changes of the mesh structure at test time (such as super-resolution) and to extrapolation. | SP:a76c1a2b18015e647fa687abbb2840e2426b31f8 |