PROMISSING: Pruning Missing Values in Neural Networks
1 INTRODUCTION. Missing and incomplete data are abundant in real-world problems; however, the learning and inference procedures in machine learning (ML) models rely heavily on high-quality, complete data. It is therefore necessary to develop new methods that deal with data imperfections in rugged environments. Currently, the most popular way to handle imperfect data is to impute the missing values. However, if we take the learning and inference procedures in the brain as a role model for ML algorithms, data imputation barely follows the natural principles of incomplete-data processing, because imputation generally relies on a heuristic for replacing missing values. Our brain does not impute incomplete sensory information but instead uses its incompleteness as a separate source of information for decision making. For example, by only hearing the rain we can estimate how hard it is raining; we do not necessarily need visual information, and instead direct our attention toward our auditory inputs to decide whether to go out with an umbrella. In addition, the more sensory information we miss, the more cautious we become in decision making; that is why we are more careful in darker environments. Neural networks (NNs) are brain-inspired algorithms that are very popular these days (under the name of deep learning) for learning complex relationships between inputs and target variables. However, they are in principle unable to handle incomplete data with missing values: they rely mainly on matrix operations, which cannot operate on not-a-number (NaN) values, and a single NaN in a dataset impairs forward propagation through the network. There are three solutions to this problem (García-Laencina et al., 2010): i) removing samples or features with missing values, ii) imputing the missing values, and iii) modeling the incomplete data.
Removing the samples with missing values can be very costly, especially in small-sample-size and high-dimensional datasets. For example, data collection in clinical applications is expensive in terms of time, finance, and patient burden. Moreover, removing even a few samples from a small dataset can negatively affect the generalization performance of the final model, and removing informative features with missing values likewise comes at the cost of lower model performance. Therefore, filling the information gaps is inevitable. There are various techniques for data imputation, ranging from simply replacing missing values with a constant to more sophisticated ML-based approaches (Little & Rubin, 2019; García-Laencina et al., 2010). The most common techniques fall into three main categories: i) constant imputation, ii) regression-based imputation, and iii) ML-based imputation. In constant imputation, the missing values are replaced with a constant, e.g., zeros or the mean/median of each feature. It has been shown that constant imputation is Bayes consistent when the missing features are not informative (Josse et al., 2019). In regression-based imputation, a linear or non-linear regression model is derived to predict the missing values; this method can be used to impute a single feature or multiple features. The most popular regression-based imputation is Multiple Imputation by Chained Equations (MICE) (Van Buuren & Groothuis-Oudshoorn, 2011; Azur et al., 2011). In MICE, an iterative procedure of predicting missing values and re-training regressors on the updated predictions is performed for a limited number of cycles. The central assumption behind MICE is that the values are missing at random (see Rubin (1976) and Appendix A.1 for definitions of missing-value mechanisms, including missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR)).
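As a minimal illustration of the first category above, column-wise mean imputation can be sketched in a few lines of NumPy (the function name is illustrative, not from any particular library):

```python
import numpy as np

def mean_impute(X):
    """Constant (mean) imputation: replace each NaN with its column mean."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)          # per-feature mean over observed entries
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    return X

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])
X_imp = mean_impute(X)   # NaNs become 2.0 (mean of 1, 3) and 3.0 (mean of 2, 4)
```

Regression-based schemes such as MICE replace this single pass with repeated cycles of fitting a regressor per feature and re-predicting the missing entries.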
Applying MICE can result in biased estimates if this assumption is not satisfied (Azur et al., 2011). Another critical limitation of regression-based imputation is its high computational complexity (Caiafa et al., 2021): if we do not know which feature will be missing at test time, then for d features we need to train d different regression models. In the ML-based approach, ML algorithms such as K-nearest neighbors (KNN), regularized linear models (Jiang et al., 2021), decision trees (Twala et al., 2008), random forests (Xia et al., 2017), neural networks (Bengio & Gingras, 1996), or generative models (Yoon et al., 2018; Ipsen et al., 2020; Collier et al., 2020; Nazabal et al., 2020) are used to handle missing data. As an alternative to data imputation, one can use the elegance of probabilistic modeling to model the incomplete data under certain assumptions. One seminal work in this direction is presented by Ghahramani & Jordan (1994), where a Gaussian Mixture Model (GMM) is used to estimate the joint density function of incomplete data with an Expectation-Maximization (EM) algorithm. This approach was later adopted and extended to logistic regression (Williams et al., 2005), Gaussian processes and support vector machines (Smola et al., 2005), and multi-class non-linear classification (Liao et al., 2007). However, despite their good performance on small datasets, the application of these methods to big and high-dimensional data has remained limited due to their high computational complexity (Caiafa et al., 2021). To overcome this issue, Caiafa et al. (2021) proposed a sparse dictionary learning algorithm that is trained end-to-end and simultaneously learns the parameters of the classifier and the sparse dictionary representation. Le Morvan et al. (2020) proposed NeuMiss, a neural-network architecture that uses a differentiable imputation procedure in an impute-then-regress scheme (Le Morvan et al., 2021). A notable feature of NeuMiss is its robustness to MNAR data. Inverse probability weighted estimation (Wooldridge, 2007; Seaman & White, 2013) is another probabilistic approach for handling missing values without imputation, in which the weights of samples with many missing values are inflated based on an estimate of the sampling probability. Recently, Smieja et al. (2018) proposed a modified neuron structure that uses a GMM with a diagonal covariance matrix (assuming MAR) to estimate the density of missing data; the GMM parameters are learned jointly with the other network parameters. Conveniently, it handles missing values in the first layer of the network, and the rest of the architecture remains unchanged. Elsewhere, Nowicki et al. (2016) proposed a new neural-network architecture based on rough set theory (Pawlak, 1998) for learning from imperfect data. Remarkably, this method can say "I do not know" when a large portion of the input values are missing, unlike traditional models trained on imputed data, which may predict definite outcomes even for completely unmeasured samples, i.e., they run in absolute darkness. Such predictions can have catastrophic consequences in more delicate applications of ML, for example in autonomous driving, robotic surgery, or clinical decision making. In this work, we attack the problem of modeling incomplete data with artificial neural networks without data imputation. We propose a simple technique for pruning missing values (PROMISSING) in which the effect of missing values on the activation of a neuron is neutralized. In this strategy, a missing value is not replaced by an arbitrary value (e.g., through imputation); it is naturally treated as a missing piece of the puzzle, and we learn a problem-specific numerical representation for unknowns.
The key feature of PROMISSING is its simplicity: it is plug-and-play, dealing with missing values in the first layer of the network without any change to the rest of the architecture or the optimization process. In its original form, PROMISSING adds no extra parameters to the network, and its computational overhead remains negligible. Our experiments on simulated data and several classification/regression problems show that the proposed pruning method does not hurt model accuracy and provides competitive results compared to several data imputation techniques. In a clinical application, making prognostic predictions for patients with a psychotic disorder, we present an application of PROMISSING on a multi-modal clinical dataset and demonstrate how an NN trained with PROMISSING becomes indecisive when facing many unknowns, a crucial feature for developing trustworthy prediction models in clinical settings. Furthermore, we show a side application of PROMISSING for counterfactual interpretation (Mothilal et al., 2020) of NN decisions that can be valuable in clinics.

2 METHODS. Let x ∈ R^p represent an input sample with p features. We assume that the features in x are divided into a set of q observed features x_o ∈ R^q and r missing features x_m (where p = q + r). In this study, we make no assumption on the pattern of missing values in x. The activation of the kth (k ∈ {1, 2, ..., s}) neuron in the first hidden layer of an ordinary NN is then

a^{(k)} = \sum_{x_i ∈ x_o} x_i w_i^{(k)} + \sum_{x_j ∈ x_m} x_j w_j^{(k)} + b^{(k)} .  (1)

This activation cannot be computed unless the values in x_m are imputed with real numbers.
Here, in PROMISSING, we propose instead to replace each missing value with a neutralizer that 1) prunes the missing values from the inputs of a neuron, and 2) neutralizes the effect of missing values on the neuron's activation by cancelling the second term in Eq. 1 and modifying the neuron's bias. A missing value x_j ∈ x_m is replaced with its corresponding neutralizer u_j^{(k)} at the kth neuron, where

u_j^{(k)} = −b^{(k)} / (p w_j^{(k)}) .  (2)

The value of a neutralizer depends on its corresponding weight (w_j^{(k)}), the bias of the corresponding neuron (b^{(k)}), and the number of features (p); thus, it can be computed on the fly during training or inference. A small value is added to the weights before computing the neutralizers to avoid division by zero. Inserting the neutralizer into Eq. 1, the activation of the kth neuron is rewritten as

a^{(k)} = \sum_{x_i ∈ x_o} x_i w_i^{(k)} + q b^{(k)} / p ,  (3)

in which the effect of the weights of missing values on the activation is eliminated and the neuron's bias is reduced by a factor of r/p. If all input values of a sample are missing, the neuron is completely neutralized.

Proposition 1 If all input values are missing (q = 0 and x_o = ∅), then the activation of a PROMISSING neuron is zero (see Appendix A.2.2 for the proof).

Proposition 2 If there are no missing values in the inputs (q = p and x_m = ∅), then the activation of a PROMISSING neuron equals that of a normal neuron (see Appendix A.2.3 for the proof).

We should emphasize that, when using PROMISSING, the user does not need to modify the input vectors; the missing values (generally represented as NaNs in the input matrix) are fed directly to the network. After training, we eventually learn U ∈ R^{s×p}, a matrix representation for unknowns (or, metaphorically, the dark matter):

u*_j^{(k)} = −b*^{(k)} / (p w*_j^{(k)}) ,  j ∈ {1, 2, ..., p}, k ∈ {1, 2, ..., s} ,  (4)

where b* and w* denote the final learned bias and weight. At the prediction stage, a missing value in the jth input feature is replaced with its corresponding neutralizer from U. It is worth emphasizing that a missing value is replaced with a different neutralizer at each neuron; therefore, this cannot be considered constant imputation. In fact, each neuron perceives a missing value in the input space differently; metaphorically, the neurons can be seen as the blind men in the parable of "the blind men and an elephant" when facing unknowns. Furthermore, the approach differs from regression-based imputation and model-based approaches in that a missing value in a specific feature is not inferred from the other observed features or from the distribution of observed values; i.e., unknowns remain unknowns. In PROMISSING, we do not assume any particular missing-value mechanism (e.g., MAR) in advance. Instead, we try to learn the patterns of missing values from data, which may be advantageous in more difficult scenarios such as MNAR (see the results in Sec. 3.1). One possible drawback of PROMISSING arises in high-dimensional input spaces when the number of missing values is large, i.e., when p → ∞ and r ≫ q. In this case the neuron undershoots, and the effect of the few non-missing values is ignored. To address this problem, we propose a modified version of PROMISSING (mPROMISSING) in which the effect of a large r is compensated with a compensatory weight w_c. The compensatory weight receives a fixed input of r/p for a given sample; thus, the activation of the neuron changes to

a^{(k)} = \sum_{x_i ∈ x_o} x_i w_i^{(k)} + (q b^{(k)} + r w_c^{(k)}) / p .  (5)

w_c^{(k)} is learned alongside the rest of the network parameters during optimization. On training data with few missing values, data augmentation (e.g., by simulating different patterns and sizes of missing values) is advisable to ensure that the learned compensatory weight is sensible. Since the input to this weight (r/p) is computed at run time, no modification of the input vectors is required.

Proposition 3 If there are no missing values (q = p, r = 0, and x_m = ∅), then the activation of an mPROMISSING neuron equals that of a normal neuron (see Appendix A.2.4 for the proof).

The proposed PROMISSING approach is straightforward to implement and use. It can be incorporated into current implementations of different types of NN layers by adding or modifying a few lines of code. We have implemented a nanDense layer, inheriting from the Keras (Chollet et al., 2015) Dense layer, using PROMISSING and mPROMISSING neurons (see Appendix A.3). The nanDense layer can be imported directly and used with any Keras model. In general usage, the nanDense layer is only needed as the first layer of an NN to handle missing values in the inputs, unless we expect missing values in the intermediate layers as well.
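As a concrete sketch, the activations of Eq. 3 (and the mPROMISSING variant of Eq. 5) can be computed for a whole first layer at once with NumPy. This is an illustrative re-implementation, not the paper's nanDense layer; the function and variable names are ours:

```python
import numpy as np

def promissing_activation(x, W, b, w_c=None):
    """Pre-nonlinearity activations of a PROMISSING first layer (Eq. 3).

    x   : (p,) input vector, with np.nan marking missing features
    W   : (p, s) weight matrix, b : (s,) biases
    w_c : optional (s,) compensatory weights -> mPROMISSING (Eq. 5)
    """
    p = x.shape[0]
    observed = ~np.isnan(x)
    q = observed.sum()                       # number of observed features
    r = p - q                                # number of missing features
    # missing terms are pruned; the bias shrinks by the factor q/p
    a = np.where(observed, x, 0.0) @ W + q * b / p
    if w_c is not None:                      # compensate for large r (Eq. 5)
        a = a + r * w_c / p
    return a
```

With no missing values the result coincides with an ordinary dense layer (Propositions 2 and 3), and with all values missing the PROMISSING activations are exactly zero (Proposition 1).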
In this paper, the authors propose a method titled PROMISSING; this provides a new approach to handling missing data. Rather than imputation, a complete-case analysis, or inverse probability weighting, among other methods, the authors advocate for learning a problem-specific numerical representation for unknowns. The approach is interesting, and the experiments and data analyses are a good start at understanding the method.
Online MAP Inference and Learning for Nonsymmetric Determinantal Point Processes
1 INTRODUCTION. Determinantal Point Processes (DPPs) were first introduced in the context of quantum mechanics (Macchi, 1975) and have subsequently been studied extensively, with applications in several areas of pure and applied mathematics such as graph theory, combinatorics, random matrix theory (Hough et al., 2006; Borodin, 2009), and randomized numerical linear algebra (Derezinski & Mahoney, 2021). Discrete DPPs have gained widespread adoption in machine learning following the seminal work of Kulesza & Taskar (2012), and there has been a recent explosion of interest in DPPs in the machine learning community. For instance, some very recent uses of DPPs include the automation of deep neural network design (Nguyen et al., 2021), deep generative models (Chen & Ahmed, 2021), document and video summarization (Perez-Beltrachini & Lapata, 2021), image processing (Launay et al., 2021), and learning in games (Perez-Nieves et al., 2021). A DPP is a probability distribution over subsets of items, characterized by a kernel matrix such that the probability of sampling any particular subset is proportional to the determinant of the corresponding submatrix of the kernel. Until very recently, most prior work on DPPs focused on the setting where the kernel matrix is symmetric. Due to this constraint, DPPs can only model negative correlations between items. Recent work has shown that allowing the kernel matrix to be nonsymmetric can greatly increase the expressive power of DPPs and allows them to model compatible sets of items (Gartrell et al., 2019; Brunel, 2018). To differentiate this line of work from the prior literature on symmetric DPPs, the term Nonsymmetric DPPs (NDPPs) is often used. Modeling positive correlations can be useful in many practical scenarios.
For instance, an e-commerce company building a product recommendation system would want the system to increase the probability of suggesting a router after a customer adds a modem to the shopping cart. State-of-the-art algorithms for learning and inference on NDPPs (Gartrell et al., 2021) require storing the full data in memory and take multiple passes over the complete dataset. These algorithms therefore take too much memory to be useful for large-scale data, where the size of the entire dataset can far exceed the available random-access memory. They are also impractical in settings where data is generated on the fly, for example in e-commerce applications where new items are added to the store over time and, more importantly, added to users' carts instantaneously. This work makes the following contributions: Streaming and Online Inference: We formulate streaming and online versions of maximum a posteriori (MAP) inference on fixed-size NDPPs and provide algorithms for solving these problems. In the streaming setting, data points arrive in an arbitrary order and the algorithms are constrained to use a single pass over the data as well as sub-linear memory (i.e., memory substantially smaller than the size of the data stream). The online setting we consider has the additional restriction that a valid solution must be maintained at every time step. For both settings, we provide algorithms with solution quality comparable to, or even better than, the offline greedy algorithm, while taking only a single pass over the data and using a fraction of the memory used by the offline algorithm. Online Learning: We introduce the online learning problem for NDPPs and provide an algorithm that solves it using a single pass over the data and memory that is constant in m, the number of baskets in the training data (or, equivalently, the length of the stream).
In comparison, the offline learning algorithm takes a large number of passes over the entire dataset and uses memory linear in m. Strikingly, our online learning algorithm shows performance (log-likelihood) comparable to the state-of-the-art offline learning algorithm while converging significantly faster in all cases (Figure 2). This is notable, since our algorithm uses only a single pass over the data and a tiny fraction of the memory.

2 RELATED WORK. Even in the case of (symmetric) DPPs, the study of online and streaming settings is at a nascent stage. In particular, Bhaskara et al. (2020) were the first to propose online algorithms for MAP inference of DPPs, and Liu et al. (2021) were the first to give streaming algorithms for the maximum induced cardinality objective proposed by Gillenwater et al. (2018). However, no work has focused on online or streaming MAP inference or online learning for nonsymmetric DPPs. A special subset of NDPPs called signed DPPs was the first class of NDPPs to be studied (Brunel et al., 2017). Gartrell et al. (2019) studied a more general class of NDPPs, provided learning and MAP inference algorithms, and showed that NDPPs have additional expressiveness over symmetric DPPs and can better model certain problems. This was improved by Gartrell et al. (2021), who provided a new decomposition enabling linear-time learning and MAP inference for NDPPs. More recently, Anari & Vuong (2021) proposed the first algorithm with a k^{O(k)} approximation factor for MAP inference on NDPPs, where k is the number of items to be selected. These works are not amenable to the streaming or online settings studied in our paper; in particular, they store all data in memory and use multiple passes over the data, among other issues.
In this work, we formally introduce the streaming and online MAP inference and online learning problems for NDPPs and develop online algorithms for solving them. To the best of our knowledge, our work is the first to study NDPPs in the streaming and online settings, and to develop algorithms for MAP inference and learning of NDPPs in these settings.

3 PRELIMINARIES. Notation. Throughout the paper, we use uppercase bold letters (A) for matrices, lowercase bold letters (a) for vectors, and letters in normal font (a) for scalars. For any positive integer n, we use [n] to denote the set {1, 2, ..., n}. A matrix M is skew-symmetric if M = −M^T, where ^T denotes matrix transposition. A DPP is a probability distribution over all subsets of [n], characterized by a matrix L ∈ R^{n×n}: the probability of sampling any subset S ⊆ [n] satisfies Pr[S] ∝ det(L_S), where L_S is the submatrix of L obtained by keeping only the rows and columns with indices in S. The normalization constant of this distribution can be computed efficiently, since ∑_{S⊆[n]} det(L_S) = det(L + I_n) (Kulesza & Taskar, 2012, Theorem 2.1). Therefore, Pr[S] = det(L_S) / det(L + I_n). For the DPP corresponding to L to be a valid probability distribution, we need det(L_S) ≥ 0 for all S ⊆ [n], since Pr[S] ≥ 0 for all S ⊆ [n]. Matrices satisfying this property are known as P0-matrices (Fiedler & Pták, 1966). For a symmetric matrix L, det(L_S) ≥ 0 for all S ⊆ [n] if and only if L is positive semi-definite (PSD), i.e., x^T L x ≥ 0 for all x ∈ R^n. Therefore, all symmetric matrices corresponding to valid DPPs are PSD. But there are P0-matrices that are not symmetric (or even positive semi-definite); for example, L = [[1, 1], [−1, 1]] is a nonsymmetric P0-matrix.
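The normalization identity ∑_{S⊆[n]} det(L_S) = det(L + I_n) can be checked numerically on a small ground set. The sketch below uses an arbitrary random 3×3 kernel purely for illustration:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 3
L = rng.normal(size=(n, n))      # an arbitrary (possibly nonsymmetric) kernel

# Sum det(L_S) over all 2^n subsets S of [n]; the empty set contributes 1.
total = 0.0
for k in range(n + 1):
    for S in combinations(range(n), k):
        idx = list(S)
        total += np.linalg.det(L[np.ix_(idx, idx)]) if idx else 1.0

lhs, rhs = total, np.linalg.det(L + np.eye(n))   # should agree
```

Dividing each det(L_S) by det(L + I_n) then yields the subset probabilities Pr[S] (nonnegative only when L is a P0-matrix).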
Any matrix L can be written uniquely as the sum of a symmetric and a skew-symmetric matrix: L = (L + L^T)/2 + (L − L^T)/2. For the DPP characterized by L, the symmetric part of this decomposition can be thought of as encoding negative correlations between items and the skew-symmetric part as encoding positive correlations. Gartrell et al. (2019) proposed a decomposition that covers the set of all nonsymmetric PSD matrices (a subset of P0-matrices), which allowed them to provide a cubic-time algorithm (in the ground-set size) for NDPP learning. This decomposition is L = V^T V + (B C^T − C B^T). Gartrell et al. (2021) provided more efficient (linear-time) algorithms for learning and MAP inference using a new decomposition, L = V^T V + B^T C B. Although both decompositions cover only a subset of P0-matrices, they turn out to be quite useful for modeling real-world instances and give improved results compared to (symmetric) DPPs. For the decomposition L = V^T V + B^T C B, we have V, B ∈ R^{d×n} and C ∈ R^{d×d}, with C skew-symmetric. Here the items can be thought of as having a latent low-dimensional representation (v_i, b_i) with v_i, b_i ∈ R^d. Intuitively, a low-dimensional representation (compared to n) suffices for representing items because any particular item interacts with only a small number of other items in real-world datasets, as evidenced by the fact that the maximum basket size encountered in real-world data is much smaller than n.

4 STREAMING MAP INFERENCE. In this section, we formulate the streaming MAP inference problem for NDPPs and design an algorithm for it with guarantees on solution quality, space, and time.

4.1 STREAMING MAP INFERENCE PROBLEM. We study the MAP inference problem for low-rank NDPPs in the streaming setting, where we see the columns of a 2d × n matrix in order (column-arrival model).
Given a fixed skew-symmetric matrix C ∈ R^{d×d}, consider a stream of 2d-dimensional vectors (which can be viewed as pairs of d-dimensional vectors) arriving in order: (v_1, b_1), (v_2, b_2), ..., (v_n, b_n), where v_t, b_t ∈ R^d for all t ∈ [n]. The main goal in the streaming setting is to output, at the end of the stream, the maximum-likelihood subset S ⊆ [n] of cardinality k, assuming S is drawn from the NDPP characterized by L = V^T V + B^T C B, i.e.,

S = argmax_{S ⊆ [n], |S| = k} det(L_S) = argmax_{S ⊆ [n], |S| = k} det(V_S^T V_S + B_S^T C B_S) .  (1)

For any S ⊆ [n], V_S ∈ R^{d×|S|} is the matrix whose columns correspond to {v_i, i ∈ S}; similarly, B_S ∈ R^{d×|S|} is the matrix whose columns correspond to {b_i, i ∈ S}. In the case of symmetric DPPs, this maximization problem in the non-streaming setting corresponds to MAP inference in cardinality-constrained DPPs, also known as k-DPPs (Kulesza & Taskar, 2011). Designing a streaming algorithm can usually be viewed as a dynamic data-structure design problem: we want to maintain a data structure with efficient time and small space over the entire stream. Therefore, the secondary goals are to minimize the following:
• Space: We consider the word-RAM model and measure space in the number of words (in word-RAM, each word is usually assumed to be O(log n) bits).

Algorithm 1 Streaming Partition Greedy MAP Inference for low-rank NDPPs
1: Input: Length of the stream n and a stream of data points {(v_1, b_1), (v_2, b_2), ..., (v_n, b_n)}
2: Output: A solution set S of cardinality k at the end of the stream.
3: S_0 ← ∅, s_0 ← ∅
4: while new data (v_t, b_t) arrives in the stream at time t do
5:   i ← ⌈tk/n⌉
6:   if f(S_{i−1} ∪ {t}) > f(S_{i−1} ∪ {s_i}) then
7:     s_i ← t
8:   if t is a multiple of n/k then
9:     S_i ← S_{i−1} ∪ {s_i}
10:    s_i ← ∅
11: return S_k

• Update time: the time to update our data structure whenever a new data point arrives.
• Total time: the total time taken to process the stream.

Definition 1. Given three matrices V ∈ R^{d×k}, B ∈ R^{d×k}, and C ∈ R^{d×d}, let T_det(k, d) denote the running time of computing det(V^T V + B^T C B). Note that T_det(k, d) = 2 T_mat(d, k, d) + T_mat(d, d, k) + T_mat(k, k, k), where T_mat(a, b, c) is the time required to multiply two matrices of dimensions a × b and b × c. We have the last T_mat(k, k, k) term because the determinant of a k × k matrix can be computed in (essentially) the same time as the product of two k × k matrices (Aho et al., 1974, Theorem 6.6). We will now describe a streaming algorithm for MAP inference in NDPPs, which we call the "Streaming Partition Greedy" algorithm.
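A runnable NumPy sketch of Algorithm 1 (Streaming Partition Greedy): the stream is split into k blocks of n/k items, the single item with the best marginal objective is tracked within each block, and it is committed at the block boundary. Function and variable names are illustrative, f(S) is instantiated as the determinant objective of Eq. 1, and only the selected columns are stored (O(dk) space rather than the full stream):

```python
import numpy as np

def ndpp_score(Vs, Bs, C):
    """f(S) = det(V_S^T V_S + B_S^T C B_S); the empty set scores 1 (0x0 det)."""
    if Vs.shape[1] == 0:
        return 1.0
    return np.linalg.det(Vs.T @ Vs + Bs.T @ C @ Bs)

def streaming_partition_greedy(stream, C, n, k):
    """Single pass over (v_t, b_t) pairs; returns the k selected columns."""
    d = C.shape[0]
    Sv = np.zeros((d, 0))            # selected v-columns so far
    Sb = np.zeros((d, 0))            # selected b-columns so far
    best, best_val = None, -np.inf   # best candidate of the current block
    block = n // k
    for t, (v, b) in enumerate(stream, start=1):
        val = ndpp_score(np.column_stack([Sv, v]),
                         np.column_stack([Sb, b]), C)
        if val > best_val:           # lines 5-7: keep the block's best item
            best, best_val = (v, b), val
        if t % block == 0:           # lines 8-10: commit at the block boundary
            Sv = np.column_stack([Sv, best[0]])
            Sb = np.column_stack([Sb, best[1]])
            best, best_val = None, -np.inf
    return Sv, Sb
```

This sketch keeps the selected columns explicitly for readability; a tighter implementation would maintain the Gram matrix V_S^T V_S + B_S^T C B_S incrementally so that each update costs T_det-style matrix products on k-sized blocks.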
This paper studies the online inference and learning problems for nonsymmetric determinantal point processes (NDPPs). The authors use the online greedy algorithm for MAP inference and modify the learning objective for being suitable in the online setting. Experiments with real-world datasets show that the proposed online algorithms are comparable or even better than state-of-the-art offline algorithms.
Online MAP Inference and Learning for Nonsymmetric Determinantal Point Processes
1 INTRODUCTION . Determinantal Point Processes ( DPPs ) were first introduced in the context of quantum mechanics ( Macchi , 1975 ) and have subsequently been extensively studied with applications in several areas of pure and applied mathematics like graph theory , combinatorics , random matrix theory ( Hough et al. , 2006 ; Borodin , 2009 ) , and randomized numerical linear algebra ( Derezinski & Mahoney , 2021 ) . Discrete DPPs have gained widespread adoption in machine learning following the seminal work of Kulesza & Taskar ( 2012 ) and there has been a recent explosion of interest in DPPs in the machine learning community . For instance , some of the very recent uses of DPPs include automation of deep neural network design ( Nguyen et al. , 2021 ) , deep generative models ( Chen & Ahmed , 2021 ) , document and video summarization ( Perez-Beltrachini & Lapata , 2021 ) , image processing ( Launay et al. , 2021 ) , and learning in games ( Perez-Nieves et al. , 2021 ) . A DPP is a probability distribution over subsets of items and is characterized by some kernel matrix such that the probability of sampling any particular subset is proportional to the determinant of the submatrix corresponding to that subset in the kernel . Until very recently , most prior work on DPPs focused on the setting where the kernel matrix is symmetric . Due to this constraint , DPPs can only model negative correlations between items . Recent work has shown that allowing the kernel matrix to be nonsymmetric can greatly increase the expressive power of DPPs and allows them to model compatible sets of items ( Gartrell et al. , 2019 ; Brunel , 2018 ) . To differentiate this line of work from prior literature on symmetric DPPs , the term Nonsymmetric DPPs ( NDPPs ) has often been used . Modeling positive correlations can be useful in many practical scenarios . 
For instance , an E-commerce company trying to build a product recommendation system would want the system to increase the probability of suggesting a router if a customer adds a modem to a shopping cart . State-of-the-art algorithms for learning and inference on NDPPs ( Gartrell et al. , 2021 ) require storing the full data in memory and take multiple passes over the complete dataset . Therefore , these algorithms take too much memory to be useful for large scale data , where the size of the entire dataset can be much larger than the random-access memory available . These algorithms are also not practical in settings where data is generated on the fly , for example , in E-commerce applications where new items are added to the store over time , and more importantly , added to the carts of users instantaneously . This work makes the following contributions : Streaming and Online Inference : We formulate streaming and online versions of maximum a posteriori ( MAP ) inference on fixed-size NDPPs and provide algorithms for solving these problems . In the streaming setting , data points arrive in an arbitrary order and the algorithms are constrained to use a single-pass over the data as well as sub-linear memory ( i.e . memory that is substantially smaller than the size of the data stream ) . The online setting we consider has an additional restriction that we need to maintain a valid solution at every time step . For both these settings , we provide algorithms which have comparable or even better solution quality than the offline greedy algorithm while taking only a single pass over the data and using a fraction of the memory used by the offline algorithm . Online Learning : We introduce the online learning problem for NDPPs and provide an algorithm which solves this problem using a single-pass over the data and memory that is constant in m , the number of baskets in the training data ( or equivalently the length of the stream ) . 
In comparison , the offline learning algorithm takes a large number of passes over the entire data and uses memory linear in m. Strikingly , our online learning algorithm shows comparable performance ( log-likelihood ) to the state-of-the-art offline learning algorithm , while converging significantly faster in all cases ( Figure 2 ) . This is notable , since our algorithm uses only a single pass over the data , while using a tiny fraction of the memory . 2 RELATED WORK . Even in the case of ( symmetric ) DPPs , the study of online and streaming settings is in a nascent stage . In particular , Bhaskara et al . ( 2020 ) were the first to propose online algorithms for MAP inference of DPPs and Liu et al . ( 2021 ) were the first to give streaming algorithms for the maximum induced cardinality objective proposed by Gillenwater et al . ( 2018 ) . However , no work has focused on either online or streaming MAP inference or online learning for Nonsymmetric DPPs . A special subset of NDPPs called signed DPPs were the first class of NDPPs to be studied ( Brunel et al. , 2017 ) . Gartrell et al . ( 2019 ) studied a more general class of NDPPs and provided learning and MAP Inference algorithms , and also showed that NDPPs have additional expressiveness over symmetric DPPs and can better model certain problems . This was improved by Gartrell et al . ( 2021 ) in which they provided a new decomposition which enabled linear time learning and MAP Inference for NDPPs . More recently , Anari & Vuong ( 2021 ) proposed the first algorithm with a kO ( k ) approximation factor for MAP Inference on NDPPs where k is the number of items to be selected . These works are not amenable to the streaming nor online settings that are studied in our paper . In particular , they store all data in memory and use multiple passes over the data , among other issues . 
In this work, we formally introduce the streaming and online MAP inference and online learning problems for NDPPs and develop algorithms for solving these problems. To the best of our knowledge, our work is the first to study NDPPs in the streaming and online settings. 3 PRELIMINARIES . Notation. Throughout the paper, we use uppercase bold letters (A) to denote matrices and lowercase bold letters (a) to denote vectors. Letters in normal font (a) are used for scalars. For any positive integer n, we use [n] to denote the set {1, 2, . . . , n}. A matrix M is said to be skew-symmetric if M = −M^⊤, where ⊤ denotes matrix transposition. A DPP is a probability distribution on all subsets of [n] characterized by a matrix L ∈ R^{n×n}: the probability of sampling any subset S ⊆ [n] satisfies Pr[S] ∝ det(L_S), where L_S is the submatrix of L obtained by keeping only the rows and columns corresponding to indices in S. The normalization constant for this distribution can be computed efficiently since ∑_{S⊆[n]} det(L_S) = det(L + I_n) (Kulesza & Taskar, 2012, Theorem 2.1). Therefore, Pr[S] = det(L_S) / det(L + I_n). For the DPP corresponding to L to be a valid probability distribution, we need det(L_S) ≥ 0 for all S ⊆ [n], since Pr[S] ≥ 0 for all S ⊆ [n]. Matrices which satisfy this property are known as P0-matrices (Fiedler & Pták, 1966). For any symmetric matrix L, det(L_S) ≥ 0 for all S ⊆ [n] if and only if L is positive semi-definite (PSD), i.e., x^⊤Lx ≥ 0 for all x ∈ R^n. Therefore, all symmetric matrices which correspond to valid DPPs are PSD. But there are P0-matrices which are not necessarily symmetric (or even positive semi-definite). For example, L = [[1, 1], [−1, 1]] is a nonsymmetric P0-matrix.
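As a concrete check of these facts (an illustrative NumPy sketch, not from the paper), the snippet below verifies the normalization identity ∑_{S⊆[n]} det(L_S) = det(L + I_n) for the nonsymmetric P0 example above and computes the resulting subset probabilities.

```python
import itertools
import numpy as np

# The nonsymmetric P0 example from the text: every principal minor is >= 0.
L = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
n = L.shape[0]

def dpp_probability(L, S):
    """Pr[S] = det(L_S) / det(L + I_n) for the DPP with kernel L."""
    L_S = L[np.ix_(S, S)]  # principal submatrix indexed by S
    return np.linalg.det(L_S) / np.linalg.det(L + np.eye(L.shape[0]))

subsets = [list(S) for r in range(n + 1) for S in itertools.combinations(range(n), r)]

# Normalization: the sum of all principal minors equals det(L + I_n)
# (the determinant of the empty 0x0 submatrix is 1 by convention).
total = sum(np.linalg.det(L[np.ix_(S, S)]) for S in subsets)
assert np.isclose(total, np.linalg.det(L + np.eye(n)))

# Consequently the subset probabilities sum to 1.
assert np.isclose(sum(dpp_probability(L, S) for S in subsets), 1.0)
```

For this L the principal minors are det(L_∅) = 1, det(L_{1}) = det(L_{2}) = 1, and det(L) = 2, so det(L + I) = 5 and, for example, Pr[{1, 2}] = 2/5.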
Any matrix L can be uniquely written as the sum of a symmetric and a skew-symmetric matrix: L = (L + L^⊤)/2 + (L − L^⊤)/2. For the DPP characterized by L, the symmetric part of the decomposition can be thought of as encoding negative correlations between items and the skew-symmetric part as encoding positive correlations. Gartrell et al. (2019) proposed a decomposition which covers the set of all nonsymmetric PSD matrices (a subset of P0-matrices), which allowed them to provide a cubic-time algorithm (in the ground set size) for NDPP learning. This decomposition is L = V^⊤V + (BC^⊤ − CB^⊤). Gartrell et al. (2021) provided more efficient (linear-time) algorithms for learning and MAP inference using a new decomposition L = V^⊤V + B^⊤CB. Although both these decompositions cover only a subset of P0-matrices, it turns out that they are quite useful for modeling real-world instances and provide improved results when compared to (symmetric) DPPs. For the decomposition L = V^⊤V + B^⊤CB, we have V, B ∈ R^{d×n}, C ∈ R^{d×d}, and C is skew-symmetric. Here we can think of the items as having a latent low-dimensional representation (v_i, b_i), where v_i, b_i ∈ R^d. Intuitively, a low-dimensional representation (when compared to n) is sufficient for representing items because any particular item only interacts with a small number of other items in real-world datasets, as evidenced by the fact that the maximum basket size encountered in real-world data is much smaller than n. 4 STREAMING MAP INFERENCE . In this section, we formulate the streaming MAP inference problem for NDPPs and design an algorithm for this problem with guarantees on the solution quality, space, and time. 4.1 STREAMING MAP INFERENCE PROBLEM . We study the MAP inference problem for low-rank NDPPs in the streaming setting, where we see the columns of a 2d × n matrix in order (the column-arrival model).
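Before formalizing the problem, the low-rank kernel just mentioned can be sketched in a few lines of NumPy. This is an illustrative construction with random factors, not the authors' learned model; it verifies that the symmetric part of L = V^⊤V + B^⊤CB is exactly V^⊤V and the skew-symmetric part is exactly B^⊤CB.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 6          # latent dimension d much smaller than ground set size n

V = rng.standard_normal((d, n))
B = rng.standard_normal((d, n))
A = rng.standard_normal((d, d))
C = A - A.T          # any A - A^T is skew-symmetric
assert np.allclose(C, -C.T)

L = V.T @ V + B.T @ C @ B        # the Gartrell et al. (2021) decomposition

# Unique split of L into symmetric + skew-symmetric parts ...
sym = (L + L.T) / 2
skew = (L - L.T) / 2
assert np.allclose(sym + skew, L)
# ... which recovers exactly the two terms of the decomposition:
assert np.allclose(sym, V.T @ V)          # V^T V is symmetric
assert np.allclose(skew, B.T @ C @ B)     # B^T C B is skew-symmetric (since C is)
```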
Given some fixed skew-symmetric matrix C ∈ R^{d×d}, consider a stream of 2d-dimensional vectors (which can be viewed as pairs of d-dimensional vectors) arriving in order: (v_1, b_1), (v_2, b_2), . . . , (v_n, b_n), where v_t, b_t ∈ R^d for all t ∈ [n]. The main goal in the streaming setting is to output, at the end of the stream, the maximum-likelihood subset S ⊆ [n] of cardinality k, assuming that S is drawn from the NDPP characterized by L = V^⊤V + B^⊤CB, i.e., S = argmax_{S⊆[n], |S|=k} det(L_S) = argmax_{S⊆[n], |S|=k} det(V_S^⊤V_S + B_S^⊤CB_S). (1) For any S ⊆ [n], V_S ∈ R^{d×|S|} is the matrix whose columns correspond to {v_i : i ∈ S}. Similarly, B_S ∈ R^{d×|S|} is the matrix whose columns correspond to {b_i : i ∈ S}. In the case of symmetric DPPs, this maximization problem in the non-streaming setting corresponds to MAP inference in cardinality-constrained DPPs, also known as k-DPPs (Kulesza & Taskar, 2011). Usually, designing a streaming algorithm can be viewed as a dynamic data-structure design problem: we want to maintain a data structure with efficient time and small space over the entire stream. In the algorithm below, f(S) denotes the objective of Eq. (1), f(S) = det(V_S^⊤V_S + B_S^⊤CB_S).
Algorithm 1 Streaming Partition Greedy MAP Inference for low-rank NDPPs
1: Input: length of the stream n and a stream of data points {(v_1, b_1), (v_2, b_2), . . . , (v_n, b_n)}
2: Output: a solution set S of cardinality k at the end of the stream
3: S_0 ← ∅, s_0 ← ∅
4: while new data (v_t, b_t) arrives in the stream at time t do
5:   i ← ⌈tk/n⌉
6:   if f(S_{i−1} ∪ {t}) > f(S_{i−1} ∪ {s_i}) then
7:     s_i ← t
8:   if t is a multiple of n/k then
9:     S_i ← S_{i−1} ∪ {s_i}
10:    s_i ← ∅
11: return S_k
Therefore, the secondary goals are to minimize the following: • Space: we consider the word-RAM model and measure space in the number of words (in word-RAM, we usually assume each word is O(log n) bits). • Update time: time to update our data structure whenever we see a new arriving data point.
• Total time : total time taken to process the stream . Definition 1 . Given three matrices V ∈ Rd×k , B ∈ Rd×k and C ∈ Rd×d , let Tdet ( k , d ) denote the running time of computing det ( V > V + B > CB ) . Note that Tdet ( k , d ) = 2Tmat ( d , k , d ) + Tmat ( d , d , k ) + Tmat ( k , k , k ) where Tmat ( a , b , c ) is the time required to multiply two matrices of dimensions a× b and b× c. We have the last Tmat ( k , k , k ) term because computing the determinant of a k × k matrix can be done ( essentially ) in the same time as computing the product of two matrices of dimension k × k ( Aho et al. , 1974 , Theorem 6.6 ) . We will now describe a streaming algorithm for MAP inference in NDPPs , which we call the `` Streaming Partition Greedy '' algorithm .
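A minimal single-pass Python sketch of Algorithm 1 might look as follows. It is hedged: tie-breaking and index bookkeeping are simplified, and the objective is evaluated with dense determinants rather than the faster incremental updates one would use in practice. Only the selected columns are stored, so memory stays O(dk) rather than linear in the stream length.

```python
import numpy as np

def streaming_partition_greedy(stream, C, n, k):
    """Single pass over n items split into k blocks of n/k: within each block,
    keep the item maximizing f(S) = det(V_S^T V_S + B_S^T C B_S) given the
    items committed in earlier blocks."""
    d = C.shape[0]
    V = np.zeros((d, 0))                      # columns of selected v's
    B = np.zeros((d, 0))                      # columns of selected b's
    selected = []
    best_idx, best_item, best_val = None, None, -np.inf
    block = n // k                            # assume k divides n for simplicity
    for t, (v, b) in enumerate(stream, start=1):
        Vc = np.column_stack([V, v])
        Bc = np.column_stack([B, b])
        val = np.linalg.det(Vc.T @ Vc + Bc.T @ C @ Bc)
        if val > best_val:
            best_idx, best_item, best_val = t, (v, b), val
        if t % block == 0:                    # block boundary: commit best item
            V = np.column_stack([V, best_item[0]])
            B = np.column_stack([B, best_item[1]])
            selected.append(best_idx)
            best_idx, best_item, best_val = None, None, -np.inf
    return selected

rng = np.random.default_rng(0)
d, n, k = 2, 8, 2
A = rng.standard_normal((d, d))
C = A - A.T                                   # skew-symmetric kernel component
stream = ((rng.standard_normal(d), rng.standard_normal(d)) for _ in range(n))
S = streaming_partition_greedy(stream, C, n, k)   # one selected index per block
```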
This paper proposes online and streaming algorithms for MAP inference and learning for nonsymmetric determinantal point processes (NDPPs). For the streaming setting, data points arrive in an arbitrary order, and the algorithms are constrained to using a single pass over the data, along with requiring sublinear memory consumption. In the online setting, there is the additional requirement of maintaining a valid solution at any time step. The authors provide some theoretical guarantees for the proposed algorithms, and perform experiments that demonstrate that their performance is comparable to (or better than) offline algorithms for these tasks.
Reducing the Communication Cost of Federated Learning through Multistage Optimization
1 INTRODUCTION . In federated learning (FL) (McMahan et al., 2017; Kairouz et al., 2019; Li et al., 2020), distributed clients interact with a central server to learn a model without directly sharing their data with the server. The training objective is to solve the following minimization problem: min_x [ F(x) = (1/N) ∑_{i∈[N]} F_i(x) ] (1) where the variable x is the model parameter, i indexes the clients (or devices), N is the number of clients, and F_i(x) is a (strongly convex) loss that depends only on that client's data. Typical FL deployments have two properties that make optimizing Eq. (1) challenging: (i) Data heterogeneity: let F_i(x) = E_{z_i∼P_i}[f(x; z_i)] for some loss function f, where z_i is the client's data and the P_i's capture the heterogeneity of the clients' data distributions. Formally, we define client heterogeneity ζ² and heterogeneity at the optimum ζ∗² as follows, where the optimum is defined as x∗ := argmin_x F(x): ζ² := sup_x (1/N) ∑_{i∈[N]} ‖∇F(x) − ∇F_i(x)‖², ζ∗² := (1/N) ∑_{i∈[N]} ‖∇F_i(x∗)‖². (2) These definitions capture how different the local client gradients are from the overall global gradient (worst-case) (Woodworth et al., 2020a). (ii) Communication cost: in many deployments of FL, clients have limited bandwidth (e.g., mobile devices) and hence high communication cost. Due to these two challenges, most federated optimization algorithms alternate between local rounds of computation, where clients process only their own data to save communication, and global rounds, where clients synchronize with the central server to resolve disagreements due to heterogeneity. We assume clients communicate with the server in each global round (full participation). This commonly-studied setting (Woodworth et al., 2020a; Gorbunov et al., 2020; Yuan et al., 2021; Mitra et al., 2021; Karimireddy et al., 2020b; Reddi et al.
, 2020) applies to cross-silo FL (Kairouz et al., 2019). Several federated algorithms navigate this trade-off between reducing communication and resolving data heterogeneity by modifying the amount and the nature of local and global computation (McMahan et al., 2017; Li et al., 2018; Wang et al., 2019b;a; Li et al., 2019; Karimireddy et al., 2020b;a; Al-Shedivat et al., 2020; Reddi et al., 2020; Charles & Konečnỳ, 2020; Mitra et al., 2021). Recently, Woodworth et al. (2020a) showed a lower bound on the convergence rate (with respect to the number of communication rounds R) of any first-order federated optimization algorithm for strongly convex, smooth objectives as a function of client data heterogeneity ζ∗ (Section 2, Eq. (6)). This lower bound highlights two gaps in the literature: (1) Today, no single optimization algorithm has been shown to match this lower bound for all values of ζ∗. (2) Although different baseline optimization algorithms match the lower bound in different heterogeneity regimes, it remains unknown whether any algorithm can match the lower bound when 0 < ζ∗² < β^{3/2}∆/µ^{1/2}, where ∆ is the initial function suboptimality (see Assumption 6), and the functions F_i are β-smooth and µ-strongly convex (see App. C for definitions of smoothness and strong convexity). ∆ can be sizable if one starts optimization far away from the optimum, and the gap between the upper and the lower bound in this regime is significant (§ 2). To explain these gaps more precisely, we first examine several baseline optimization algorithms. To begin, consider the simple baseline of Minibatch Stochastic Gradient Descent (Woodworth et al., 2020a, shortened as SGD). Here, in each global communication round, the central server sends each device the current model. Then each device calculates a single local stochastic gradient at this model over a minibatch of size K and sends it to the server.
The server averages these to form a global minibatch gradient over NK samples and performs a model update (Algorithm 2). Further, this approach can be accelerated in the Nesterov sense (Ghadimi & Lan, 2012, AC-SA). Woodworth et al. (2020a) recently showed that Accelerated Minibatch SGD (shortened as AC-SA) is nearly optimal (up to condition-number factors that shrink exponentially in the number of communication rounds R) in the highly heterogeneous regime where ζ∗² > β^{3/2}∆/µ^{1/2}. However, in other regimes, AC-SA may be suboptimal as it does not exploit local computation to reduce communication. A popular alternative exploiting local computation is Federated Averaging (FedAvg) (McMahan et al., 2017)¹ (Algorithm 6). Here each client i independently runs SGD on its own objective F_i for K iterations (local updates) and periodically communicates its most recent iterate to the server, which then averages these iterates. Compared to (Accelerated) SGD, this approach allows clients to learn from local data in between communication rounds. However, on strongly convex objectives, FedAvg has only been shown to achieve the lower bound when the data is homogeneous, i.e., ζ∗² = ζ² = 0. In fact, FedAvg's known rates can only outperform AC-SA's rates when heterogeneity is almost insignificant: ζ² ≤ µε, where ε is the target sub-optimality gap for Eq. (1) (Woodworth et al., 2020a) (for more details on ε see Definition 1). Further, an algorithm-specific lower bound shows that FedAvg is provably sub-optimal; improving the analysis will not bring FedAvg meaningfully closer to optimal rates or even allow FedAvg to beat AC-SA when heterogeneity is non-negligible (Woodworth et al., 2020a). Several approaches have been proposed to improve FedAvg via acceleration in the homogeneous setting (Yuan & Ma, 2020) and cross-device variance reduction (Karimireddy et al., 2020b; Gorbunov et al.
, 2020) and adaptivity and momentum (Reddi et al., 2020) in the heterogeneous setting. Empirically, these variants outperform baselines like (Accelerated) SGD, though gains in the heterogeneous setting are not explained by existing analyses. In summary, currently there are only two baseline optimization algorithms that theoretically match (nearly, up to exponentially shrinking condition-number factors) the lower bound of Woodworth et al. (2020a) in some heterogeneity regime: FedAvg when ζ² = 0 (i.e., ζ∗² = 0) and AC-SA when ζ∗² > β^{3/2}∆/µ^{1/2}. Several natural questions follow. (1) Does there exist an algorithm that achieves the lower bound when 0 < ζ∗² < β^{3/2}∆/µ^{1/2}? In fact, Woodworth et al. (2020a) posed a weaker open question, asking whether it is possible to improve on AC-SA in the regime where data heterogeneity is bounded but not insignificant. (2) If so, can the same algorithm simultaneously achieve optimal convergence rates across all values of ζ∗? This Work. We answer both questions affirmatively when ζ ≈ ζ∗ by drawing on multistage algorithms, in which learning rates decay during optimization. Such algorithms have achieved fast convergence rates in convex optimization (Aybat et al., 2019; Fallah et al., 2020). In the federated setting, they have improved practical performance (Reddi et al., 2020; Charles & Konečnỳ, 2020). ¹This approach is also referred to as Local Stochastic Gradient Descent (Stich, 2018; Khaled et al., 2020; Woodworth et al., 2020b) or LocalUpdate (Charles & Konečnỳ, 2020). Our work exploits the following insight: far from the optimum, local computation can reduce communication; near the optimum, it is important to communicate frequently (per model update) to resolve heterogeneity. This intuition has been explored in the homogeneous setting (Wang & Joshi, 2019), where the theoretical benefits do not appear to give order-wise convergence improvements. Contributions.
In this paper , we first show that multistage optimization , when applied to baseline federated algorithms , achieves comparable or better worst-case convergence rates than known baselines ( including SGD and FedAvg ) in all heterogeneity regimes . In doing so , multistage algorithms resolve an open problem from ( Woodworth et al. , 2020b ) , which asked if one can design an algorithm that combines the advantages of both FedAvg and SGD and enjoys guarantees that dominate both . We propose multistage algorithms that ( nearly ) match the lower bound of Woodworth et al . ( 2020a ) on convergence rates for first-order federated algorithms . We show these theoretical gains by first analyzing an algorithm that ( nearly ) achieves the lower bound : a simple two-stage combination of ASG ( a variant of AC-SA in Theorem 6 ) and FedAvg . Loosely , we first run FedAvg up to an “ optimization error floor ” that depends on client heterogeneity ; then , we use ASG to complete the optimization ( Thm . 2 ) . Further , we show that similar gains can be obtained by a more general multistage algorithm which gradually moves from the FedAvg update to ASG update . This multistage variant ( Thm . 3 ) has the added benefit of not requiring the knowledge of the noise bound σ2 , the heterogeneity ζ2 , or the initial distance to the optimum ∆ . This intuition is not specific to the combination of ASG and FedAvg . We demonstrate the generality of multistaging by applying it to other pairs of FL optimization algorithms to obtain the same nearly-optimal worst-case convergence ( but possibly much better average-case rates , § 4 ) . Finally , we demonstrate the practical value of these theoretical insights through both convex and nonconvex experiments in § 4 . 2 MODEL AND RELATED WORK . We assume that each Fi is µ-strongly convex ( Assumption 1 ) and β-smooth ( Assumption 2 ) . We let κ = β/µ denote the condition number . 
N is the number of clients; all clients participate in each communication round. A discussion of the case where only a subset of clients participates per round is in App. L. Heterogeneity ζ and heterogeneity at the optimum ζ∗ are defined as in Eq. (2) (Assumptions 3 and 4, respectively). Unless otherwise specified, we assume ζ² > 0, i.e., that the problem is heterogeneous. We assume the client gradient variance is upper bounded by σ², formally defined in Assumption 5. We bound the initial suboptimality gap by ∆ (see Assumption 6). Optimization proceeds in rounds; K is the number of client iterations per communication round, and R is the number of communication rounds. In some cases, clients take minibatches of size B in their local steps; unless specified otherwise, B is 1. Our goal is to minimize the sub-optimality error E F(x̂) − F(x∗) (where x̂ is the solution returned by an optimization algorithm) after R rounds (or, equivalently, to minimize the communication round complexity, the number of rounds R needed to reach a given error). R is a standard proxy for the communication cost. Since communication costs in FL are typically much larger than computation costs (Kairouz et al., 2019), our goal is to minimize the round complexity R for sufficiently large K. We use the notation Õ to hide polylogarithmic factors.
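To make this setup concrete, here is a toy Python sketch, an illustration with hypothetical isotropic quadratic clients F_i(x) = ½‖x − a_i‖², not the paper's algorithm or experiments. It computes the heterogeneity-at-optimum quantity ζ∗² and runs a two-stage schedule: FedAvg-style local rounds first, minibatch-SGD-style global rounds second. Note that for these particular clients FedAvg happens to have no drift bias, so the sketch illustrates only the schedule, not the heterogeneity error floor.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 10, 5
a = rng.standard_normal((N, d))     # client optima; F_i(x) = 0.5 * ||x - a_i||^2
x_star = a.mean(axis=0)             # minimizer of F(x) = (1/N) sum_i F_i(x)

def grad_i(i, x):
    return x - a[i]                 # gradient of F_i

# Heterogeneity at the optimum: zeta_*^2 = (1/N) sum_i ||grad F_i(x*)||^2
zeta_star_sq = np.mean([np.sum(grad_i(i, x_star) ** 2) for i in range(N)])

def fedavg_round(x, lr, K):
    """Each client runs K local gradient steps; the server averages the iterates."""
    iterates = []
    for i in range(N):
        xi = x.copy()
        for _ in range(K):
            xi = xi - lr * grad_i(i, xi)
        iterates.append(xi)
    return np.mean(iterates, axis=0)

def minibatch_sgd_round(x, lr):
    """One global step on the averaged client gradient (one update per round)."""
    g = np.mean([grad_i(i, x) for i in range(N)], axis=0)
    return x - lr * g

# Two-stage schedule: local computation first, frequent global updates second.
x = np.zeros(d)
for _ in range(5):
    x = fedavg_round(x, lr=0.1, K=20)
for _ in range(50):
    x = minibatch_sgd_round(x, lr=0.5)
```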
This paper introduces the multistage optimization technique for federated learning applications. Specifically, multistage optimization first uses federated optimization algorithms like FedAvg and SCAFFOLD to converge to some error budget, and then uses minibatch algorithms like SGD or accelerated SGD in order to converge faster to a point with very small error. Because centralized methods are optimal when data are heterogeneous and local methods are optimal when data are purely homogeneous, a multistage optimization technique can incorporate the benefits of both sides. The theoretical part is relatively easy. The proof chooses an appropriate error budget to which the federated optimization algorithms converge, and then chooses the hyperparameters (e.g., learning rate, momentum) for the federated optimization algorithms in the first stage and the minibatch algorithms in the second stage. The theoretical part only includes convergence results for the strongly convex case. The empirical part includes two experiments, logistic regression and a neural network, which belong to the strongly convex and nonconvex cases, respectively. For each experiment, the authors compare different minibatch algorithms like SGD and AGD, different local methods like FedAvg and SCAFFOLD, and some multistage procedures that combine local methods with minibatch methods. In the convex setting, multistage procedures perform the best, and in the nonconvex setting, multistage algorithms also generally perform the best.
It is known that if the level of heterogeneity is sufficiently high, then accelerated minibatch SGD is optimal for federated optimization matching the known lower bound of (Woodworth et al., 2020a). On the other hand, when the level of heterogeneity is very low, then FedAvg/LocalSGD outperforms the former in terms of communication complexity and needs only a few communication rounds given enough local computation. This paper *proposes* a multi-stage optimization procedure and *claims* that it nearly matches the lower bound for all heterogeneity levels.
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
1 INTRODUCTION . Deep neural networks ( DNNs ) are widely known to be vulnerable to adversarial examples ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) , i.e. , a human-imperceptible perturbation can lead to misclassification . In adversarial machine learning , the term threat model defines the rules of the attack , such as the resources the attacker can access . Based on the threat model , the attacks are often divided into white-box attacks and black-box attacks . In the white-box threat model ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ; Madry et al. , 2018a ) , the attacker has full knowledge of a target model , such as the model weights and the whole training dataset . Recognizing the threat of these adversarial attacks , a model owner is unlikely to leak a model ’ s information to the public . Thus , the white-box attack is often used to evaluate the model robustness for revealing its weakest point ( Madry et al. , 2018a ) , but often not considered as a practical attack method ( Chen et al. , 2017 ) . To this end , numerous works have investigated a more realistic threat model , where the attacker does not require full knowledge of the target model , i.e. , the backpropagation on the target model is prohibited . This threat model is called black-box attack ( Papernot et al. , 2016 ; Tramèr et al. , 2016 ; Papernot et al. , 2017 ; Narodytska & Kasiviswanathan , 2017 ; Chen et al. , 2017 ; Brendel et al. , 2017 ; Dong et al. , 2019b ; Yan et al. , 2019 ; Chen et al. , 2020 ; Zhou et al. , 2020 ) . However , such a black-box threat model usually involves a major concern of being resource-intensive in terms of query cost and time . In real-world attack scenarios , even if we ignore such concerns , query-based black-box attack can still be infeasible , e.g. , the model API is inaccessible to the attacker . Moreover , it might cause suspicion due to repeated queries to the model with almost the same adversarial image . 
To alleviate this issue, another line of black-box threat models (Dong et al., 2018; Xie et al., 2019b; Dong et al., 2019a; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a; 2021), called transfer-based attacks, has been proposed. In this threat model, adversarial examples are crafted via a locally available pre-trained substitute model, which is usually trained on the same training dataset as the target model. The resultant adversarial examples are expected to also fool the target model. However, without feedback from the target model, transferability depends heavily on how large the gap between the substitute model and the target model is. In practice, this gap is large because the structure and the training technique of the target model are usually not publicly available due to security and privacy concerns. From the analysis above, we argue that neither white-box nor black-box attacks can be considered practical. A practical attack should satisfy two criteria: (a) model-free, i.e., no dependence on a pre-trained substitute model or on the target model, for either backward propagation or even forward queries; (b) data-free, i.e., no dependence on a dataset for training a substitute model. We term such an attack a no-box attack. A recent work (Li et al., 2020a) is, to our knowledge, the first and only work to have attempted such an attack, in a loose sense: their threat model still requires a small number of auxiliary samples, such as 20 images. Admittedly, collecting a small number of samples might not be difficult in most cases, but it may still be infeasible in some security-sensitive applications. Specifically, their approach (Li et al., 2020a) trains a substitute model by adopting a classical auto-encoder instead of a supervised classification model, due to the constraint of the small-scale dataset.
Overall, to attack a certain sample, their approach consists of three steps: (1) collecting a small number of images; (2) training a substitute model; (3) mounting a white-box attack on the substitute model. If a new sample, especially one from a different class, needs to be attacked, the whole process must be repeated, which makes their approach very resource-intensive. Besides, their attack success rate is still significantly lower than that of existing black-box attacks. By contrast, our approach requires none of the above three steps and is entirely training-free. With the help of the visualization technique proposed by Zeiler & Fergus (2014), we observe that the high-frequency component (HFC), e.g., edge and texture features, is dominant in shallow layers, while the low-frequency component (LFC), e.g., the plain areas of the image, receives less attention during feature extraction. Combined with the insight into the classification logic of DNNs in Sec. 3.1, we observe that HFC plays a crucial role in recognition. As shown in Fig. 1, without LFC, the confidence on the HFC alone is even higher than on the raw image. Although this does not hold for all samples, it does demonstrate the importance of HFC. Motivated by this, we borrow the idea of the hybrid image (Oliva, 2013) and propose a novel Hybrid Image Transformation (HIT) attack method to craft adversarial examples. Formally, it needs only three steps yet can effectively fool various DNNs without any training: First, given the training-free setting and inspired by the analysis in Sec. 3.2, we simply use the matplotlib1 tool to draw several geometric patterns which serve as proto-patterns; the resultant synthesized adversarial patches are thus rich in regionally homogeneous, repeating and dense HFC. Second, we extract the LFC of the raw image and the HFC of the adversarial patch.
Finally, we combine these two components and clip the result to the ε-ball of the raw image to obtain the adversarial hybrid example. Extensive experiments on ImageNet demonstrate the effectiveness of our method. By attacking ten state-of-the-art models in the no-box manner, our HIT significantly increases the average success rate from 68.74% to 98.13%. Notably, our HIT is even competitive with mainstream transfer-based black-box attacks. 2 RELATED WORK . Adversarial Attack . Let x denote the raw image without any perturbation, and let xadv and y denote the corresponding adversarial example and true label, respectively. In general, the l∞-norm is used to measure the perceptibility of adversarial perturbations, i.e., ||xadv − x||∞ ≤ ε. In this paper, we focus on non-targeted attacks (Dong et al., 2018; Xie et al., 2019b; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a) which aim to cause misclassification by the DNN f(·), i.e., f(xadv) ≠ y. 1https://matplotlib.org/ Competitors . Transferability is an important property of adversarial examples: an adversarial example crafted via one model may also fool others. For the black-box threat model, Goodfellow et al. (2015) argue that the vulnerability of DNNs stems from their linear nature, and generate adversarial examples efficiently with FGSM, a single-step attack. Papernot et al. (2017) train a local substitute for the target model using many queries. Dong et al. (2018) integrate a momentum term into I-FGSM (Kurakin et al., 2017) to stabilize the update direction across attack iterations. Xie et al. (2019b) apply diverse input patterns to improve the transferability of adversarial examples. Dong et al. (2019a) propose a translation-invariant attack to mitigate the effect of differing discriminative regions between models. Gao et al.
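The three-step HIT construction described above (low-pass the raw image, high-pass the drawn patch, combine, and project into the ε-ball) can be sketched in a few lines. This is an illustrative reduction, not the paper's exact implementation: the separable Gaussian blur, its sigma, and the ε value are assumptions chosen for demonstration.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D normalized Gaussian kernel, truncated at ~3 sigma by default."""
    radius = radius or int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def low_pass(img, sigma):
    """Separable Gaussian blur over the two spatial axes of a 2-D image."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def hit_attack(x, patch, eps=16 / 255, sigma=3.0):
    """Sketch of HIT: LFC of the raw image + HFC of the hand-drawn patch,
    then clip into the l-inf eps-ball of x and the valid pixel range."""
    hybrid = low_pass(x, sigma) + (patch - low_pass(patch, sigma))
    x_adv = np.clip(hybrid, x - eps, x + eps)
    return np.clip(x_adv, 0.0, 1.0)
```

Because the final result is clipped to the ε-ball of `x`, the perturbation budget constraint ||xadv − x||∞ ≤ ε holds by construction regardless of the patch used.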
(2020a) introduce patch-wise perturbation by amplifying the step size and reusing the cut noise to perturb more information in discriminative regions. For the no-box threat model, Li et al. (2020a) attempt to attack the target model without any model query or access to a pre-trained substitute model. In their work, with a limited amount of data, they try different mechanisms (with or without supervision) to train a substitute model, and then use that substitute model to craft transferable adversarial examples. Unlike these approaches, our method does not depend on transferability, since we do not need any substitute model: we craft adversarial examples from the perspective of the classification logic of DNNs. Frequency Perspective on DNNs . Our approach is strongly inspired by existing works that explain the generalization and adversarial vulnerability of DNNs from the frequency perspective. The fact that DNNs generalize well while being vulnerable to small adversarial perturbations has motivated Jo & Bengio (2017) and Wang et al. (2020) to investigate the underlying mechanism, suggesting that surface-statistical content with high-frequency properties is essential for the classification task. From the perspective of texture vs. shape, Geirhos et al. (2019) and Wang et al. (2020) reveal that DNNs are biased towards texture instead of shape. Since texture content is considered to have high-frequency properties, their finding can be interpreted as DNNs being biased towards HFC. On the other hand, adversarial perturbations are also known to have high-frequency properties, and various defense methods have been motivated by this insight (Aydemir et al., 2018; Das et al., 2018; Liu & JaJa, 2019; Xie et al., 2019a). Nonetheless, it remains unknown whether manually designed high-frequency patterns are sufficient for attacking the network. 3 METHODOLOGY .
Although many adversarial attack methods (Papernot et al., 2016; Dong et al., 2018; Gao et al., 2020a; Li et al., 2020a) have achieved high success rates in both black-box and no-box settings, they all require training; this is especially costly for query-based attacks (Papernot et al., 2016; Zhou et al., 2020) and no-box adversarial perturbations (Li et al., 2020a), whose training is usually time-consuming. A natural question then arises: Is it possible to generate robust adversarial perturbations without any training? In the following subsections, we give our answer and introduce our design. 3.1 MOTIVATION . To better understand the roles of HFC and LFC in the classification results of DNNs, we split the information of raw images into these two components via a Gaussian low-pass filter (defined in Eq. 1). As illustrated in Fig. 3, when the kernel size is small, i.e., the cutoff frequency is high, the average accuracy of LFC on ten state-of-the-art models is close to 100%. However, as we continue to increase the kernel size, the average accuracy of HFC begins to exceed that of LFC. To our surprise, for several specific raw images, e.g., the left image of Fig. 1, the true label's confidence on the HFC, which is mostly black, is even higher than on the raw image. To explain this phenomenon, we turn to the perspective of feature space. Inspired by recent intermediate feature-based attacks (Zhou et al., 2018; Ganeshan & Babu, 2019; Inkawhich et al., 2019), we argue that low-level features are critical to classification. Interestingly, as shown in Fig. 2, most feature maps in the shallow layers generally extract edge and texture features (typical ones are highlighted by red boxes), i.e., HFC, and pay less attention to the plain areas of images, i.e., LFC. Therefore, if a perturbation can effectively manipulate the HFC of an image, totally different low-level features will be extracted, which may lead to misclassification.
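The LFC/HFC split behind this motivation study can also be viewed in the Fourier domain. The sketch below uses an ideal (hard) cutoff mask rather than the paper's Gaussian low-pass filter of Eq. 1, and the cutoff radius is an arbitrary illustrative choice; by construction the two components sum back to the original image.

```python
import numpy as np

def frequency_split(img, cutoff):
    """Split a 2-D image into a low-frequency component (LFC) and a
    high-frequency component (HFC) using an ideal low-pass mask in the
    centered 2-D Fourier spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # disk of radius `cutoff` around the spectrum center keeps the LFC
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2
    lfc = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    hfc = np.real(np.fft.ifft2(np.fft.ifftshift(f * (~mask))))
    return lfc, hfc
```

Sweeping `cutoff` (analogous to sweeping the Gaussian kernel size in Fig. 3) and classifying `lfc` and `hfc` separately would reproduce the kind of accuracy-vs-cutoff comparison described above.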
The proposed approach creates adversarial examples in a training-free manner by manipulating the frequency components of the image. High-frequency information taken from simple geometric patterns is combined with the low-frequency component of the input to create a hybrid image. The resulting image is shown to fool classifiers under multiple settings, which presents an easy way to construct adversarial examples.
SP:7552f3439cddd41bcf5b2c8f6f563558e07dfd5e
The authors propose a new method for generating adversarial images for image classifiers. This is a no-box attack, named Hybrid Image Transformation (HIT), which is both model-free and data-free (no training required). In the experiment section, the authors show the efficacy of the proposed attack on the ImageNet dataset for different DNNs.
RelaxLoss: Defending Membership Inference Attacks without Losing Utility
As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ubiquitously in machine learning models. Existing works evidence a strong connection between the distinguishability of the training and testing loss distributions and a model's vulnerability to MIAs. Motivated by these results, we propose a novel training framework based on a relaxed loss (RelaxLoss) with a more achievable learning target, which leads to a narrowed generalization gap and reduced privacy leakage. RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead. Through extensive evaluations on five datasets with diverse modalities (images, medical data, transaction records), our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs as well as model utility. Our defense is the first that can withstand a wide range of attacks while preserving (or even improving) the target model's utility. 1 INTRODUCTION . While deep learning (DL) models have achieved tremendous success in the past few years, their deployment in many sensitive domains (e.g., medical, financial) raises privacy concerns, since data misuse in these domains induces severe privacy risks to individuals. In particular, modern deep neural networks (NNs) are prone to memorizing training data due to their high capacity, making them vulnerable to privacy attacks that extract detailed information about individuals from models (Shokri et al., 2017; Song et al., 2017; Yeom et al., 2018). In a membership inference attack (MIA), an adversary attempts to identify whether a specific data sample was used to train a target victim model. This threat is pervasive across data domains (e.g., images, medical data, transaction records) and inevitably poses serious privacy threats to individuals (Shokri et al., 2017; Nasr et al., 2018; Salem et al.
, 2019), even given only black-box access (query inputs in, posterior predictions out) (Shokri et al., 2017; Salem et al., 2019; Song & Mittal, 2020) or partially observed output predictions (e.g., top-k predicted labels) (Choo et al., 2020). Significant advances have been made in defending against MIAs. Conventionally, regularization methods designed to mitigate overfitting, such as dropout (Srivastava et al., 2014) and weight decay (Geman et al., 1992), are regarded as defense mechanisms (Salem et al., 2019; Jia et al., 2019; Shokri et al., 2017). However, as shown by Kaya et al. (2020); Kaya & Dumitras (2021), vanilla regularization techniques (which are not designed for MIA), despite slightly reducing the generalization gap, are generally unable to eliminate MIAs. In contrast, recent works design defenses tailored to MIAs. A common strategy among such defenses is adversarial training (Goodfellow et al., 2014b;a), where a surrogate attack model (represented as a NN) is used to approximate the real attack, and the target model is then modified to maximize the prediction errors of the surrogate attacker via adversarial training. This strategy has achieved remarkable success in defending against NN-based attacks (Nasr et al., 2018; Jia et al., 2019). However, these methods are greatly restricted by strong assumptions on the attack model, and thereby fail to generalize to novel attacks unanticipated by the defender (e.g., a simple metric-based attack) (Song & Mittal, 2020). In order to defend against attacks beyond the surrogate one, differentially private (DP) training techniques (Abadi et al., 2016; Papernot et al., 2016; 2018) that provide strict guarantees against MIAs have been exploited. Nevertheless, as evidenced by Rahman et al. (2018); Jia et al. (2019); Hayes et al. (2019); Jayaraman & Evans (2019); Chen et al.
(2020); Kaya & Dumitras (2021), incorporating DP constraints inevitably compromises model utility and increases computation cost. In this paper, we present an effective defense against MIAs that avoids negative impacts on the defender's model utility. Our approach is built on two main insights: (i) the optimal attack depends only on the sample loss under mild assumptions on the model parameters (Sablayrolles et al., 2019); (ii) a large difference between the training loss and the testing loss provably causes high membership privacy risks (Yeom et al., 2018). By intentionally 'relaxing' the target training loss to a level which is more achievable for the test loss, our approach narrows the loss gap and reduces the distinguishability between the training and testing loss distributions, effectively preventing various types of attacks in practice. Moreover, our approach allows for a utility-preserving (or even utility-improving) defense, greatly improving upon previous results. As a practical benefit, our approach is easy to implement and can be integrated into any classification model with minimal overhead. Contributions . (i) We propose RelaxLoss, a simple yet effective defense mechanism that strengthens a target model's resilience against MIAs without degrading its utility. To the best of our knowledge, our approach is the first to address a wide range of attacks while preserving (or even improving) model utility. (ii) We derive our method from a Bayes-optimal attacker and provide both empirical and analytical evidence supporting the main principles of our approach. (iii) Extensive evaluations on five datasets with diverse modalities demonstrate that our method outperforms state-of-the-art approaches by a large margin in membership inference protection and in the privacy-utility trade-off. 2 RELATED WORK . Membership Inference Attack .
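The loss-relaxation idea, keeping the training loss hovering around a relaxed target level α instead of driving it to zero, can be sketched as a sign-flipped update rule. The toy logistic-regression example below is an illustrative reduction only: the full RelaxLoss method additionally flattens posterior scores, which this sketch omits, and the target level `alpha` and learning rate are arbitrary choices.

```python
import numpy as np

def relaxed_step(w, X, y, alpha, lr=0.1):
    """One RelaxLoss-style update on a toy logistic-regression model:
    gradient descent while the batch loss exceeds the target alpha,
    gradient ascent otherwise, so the loss settles near alpha instead
    of collapsing towards zero on the training set."""
    p = 1.0 / (1.0 + np.exp(-X @ w))                     # sigmoid predictions
    loss = -np.mean(y * np.log(p + 1e-12)
                    + (1 - y) * np.log(1 - p + 1e-12))   # cross-entropy
    grad = X.T @ (p - y) / len(y)
    sign = 1.0 if loss > alpha else -1.0                 # descend above alpha, ascend below
    return w - sign * lr * grad, loss
```

Because the update direction flips around α, the training-loss distribution is pulled towards the level that test losses can also reach, which is exactly the distinguishability-narrowing effect described above.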
Inferring membership information from deep NNs has been investigated in various application scenarios, ranging from the white-box setting, where the whole target model is released (Nasr et al., 2019; Rezaei & Liu, 2020), to the black-box setting, where complete or partial output predictions are accessible to the adversary (Shokri et al., 2017; Salem et al., 2019; Yeom et al., 2018; Sablayrolles et al., 2019; Song & Mittal, 2020; Choo et al., 2020; Hui et al., 2021; Truex et al., 2019). An adversary first determines the most informative features (depending on the application scenario) that faithfully reflect sample membership (e.g., logits/posterior predictions (Shokri et al., 2017; Salem et al., 2019; Jia et al., 2019), loss values (Yeom et al., 2018; Sablayrolles et al., 2019), and gradient norms (Nasr et al., 2019; Rezaei & Liu, 2020)), and subsequently extracts common patterns in these features among the training samples to identify membership. In this work, we work towards an effective defense by suppressing the common patterns that an optimal attack relies on. Defense . Existing defense mechanisms against MIAs fall into three main categories: (i) regularization techniques that alleviate model overfitting, (ii) adversarial training that confuses surrogate attackers, and (iii) differentially private mechanisms offering rigorous privacy guarantees. Our proposed approach can be regarded as a regularization technique owing to its effect of reducing the generalization gap. Unlike previous regularization techniques, our method is explicitly tailored towards defending against MIAs by reducing the information that an attacker can exploit, leading to significantly better defense effectiveness. Algorithmically, our approach shares similarity with techniques that suppress the target model's confidence score predictions (e.g., label smoothing (Guo et al., 2017; Müller et al.
, 2019) and confidence penalty (Pereyra et al., 2017)), but ours is fundamentally different in that we modulate the loss distribution with gradient ascent. Previous state-of-the-art defense mechanisms against MIAs, such as MemGuard (Jia et al., 2019) and Adversarial Regularization (Nasr et al., 2018), are built on the idea of adversarial training (Goodfellow et al., 2014b;a). Such approaches usually rely on strong assumptions about the attack model, making their effectiveness highly dependent on the similarity between the surrogate and the real attacker (Song & Mittal, 2020). In contrast, our method does not rely on any assumptions about the attack model and shows consistent effectiveness across different attacker types. Differential privacy (Dwork, 2008; Dwork et al., 2014; Abadi et al., 2016; Papernot et al., 2016) provides strict worst-case guarantees against arbitrarily powerful attackers that exceed practical limits, but inevitably sacrifices model utility (Rahman et al., 2018; Jia et al., 2019; Hayes et al., 2019; Chen et al., 2020; Kaya & Dumitras, 2021; Jayaraman & Evans, 2019) and increases the computational burden (Goodfellow, 2015; Dangel et al., 2019). In contrast, we focus on practically realizable attacks for a utility-preserving and computationally efficient defense. 3 PRELIMINARIES . Notations . We denote by zi = (xi, yi) one data sample, where xi and yi are the feature vector and the one-hot label vector, respectively. f(·; θ) represents a classification model parametrized by θ, and p = f(x; θ) ∈ [0, 1]^C denotes the predicted posterior scores (after the final softmax layer), where C is the number of classes. 1[·] denotes the indicator function, i.e., 1[p] equals 1 if the predicate p is true, else 0. We use subscripts for sample indices and superscripts for class indices. Attacker's Assumptions .
We consider the standard setting of MIA: the attacker has access to a query set $S = \{(z_i, m_i)\}_{i=1}^N$ containing both member (training) and non-member (testing) samples drawn from the same data distribution $P_{data}$, where $m_i$ is the membership attribute ($m_i = 1$ if $z_i$ is a member). The task is to infer the value of the membership attribute $m_i$ associated with each query sample $z_i$. We design our defense for a general attack with full access to the target model. The attack $A(z_i, f(\cdot;\theta))$ is a binary classifier which predicts $m_i$ for a given query sample $z_i$ and a target model parametrized by $\theta$. The Bayes optimal attack $A_{opt}(z_i, f(\cdot;\theta))$ outputs 1 if the query sample is more likely to be contained in the training set, based on the true underlying membership probability $P(m_i = 1 \mid z_i, \theta)$, i.e., it tests whether the membership log-ratio is non-negative:

$A_{opt}(z_i, f(\cdot;\theta)) = \mathbb{1}\left[\log \dfrac{P(m_i = 1 \mid z_i, \theta)}{P(m_i = 0 \mid z_i, \theta)} \geq 0\right]$  (1)

Defender's Assumptions. We closely mimic an assumption-free scenario in designing our defense method. In particular, we consider a knowledge-limited defender which: (i) does not have access to additional public (unlabelled) training data (in contrast to Papernot et al. (2016; 2018)); and (ii) lacks prior knowledge of the attack strategy (in contrast to Jia et al. (2019); Nasr et al. (2018)). For added rigor, we also study the attacker's countermeasures to our defense in Section 6.4.
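The Bayes optimal rule in Eq. (1) is not directly computable, but under the mild assumptions discussed above (Sablayrolles et al., 2019) the membership log-ratio is monotone in the per-sample loss, so practical attackers often threshold losses (cf. Yeom et al., 2018). A minimal sketch of such a loss-threshold attack; the threshold value and the toy losses below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def loss_based_mia(losses, threshold):
    """Predict membership (1 = member) by thresholding per-sample loss.

    Training samples tend to incur lower loss than unseen samples, so the
    Bayes-optimal log-ratio of Eq. (1) is commonly approximated by the
    rule "member iff loss < threshold".
    """
    return (np.asarray(losses) < threshold).astype(int)

# Toy query set: members were fit well (low loss), non-members were not.
member_losses = np.array([0.05, 0.10, 0.02, 0.08])
nonmember_losses = np.array([0.90, 1.20, 0.75, 1.05])
preds = loss_based_mia(np.concatenate([member_losses, nonmember_losses]),
                       threshold=0.5)  # threshold choice is illustrative
# preds -> [1, 1, 1, 1, 0, 0, 0, 0]: the attack separates the two groups
```

A defense that narrows the gap between training and testing losses directly removes the signal this thresholding exploits.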
The study tackles the problem of defending against membership inference attacks, with a focus on (1) decreasing the performance of the attack, (2) maintaining the classifier's performance, and (3) assuming no knowledge of the attack model. The authors achieve (1) by closing the distance between the training and test loss distributions, and maintain the utility of the model by flattening the posterior scores of the non-target classes. Their method is shown to be computationally efficient (i.e., the additional computational cost is negligible), close to optimal in the trade-off between utility and attack performance, and effective against an attacker's countermeasures.
RelaxLoss: Defending Membership Inference Attacks without Losing Utility
As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ubiquitously in machine learning models. Existing works evidence a strong connection between the distinguishability of the training and testing loss distributions and the model's vulnerability to MIAs. Motivated by these results, we propose a novel training framework based on a relaxed loss (RelaxLoss) with a more achievable learning target, which leads to a narrowed generalization gap and reduced privacy leakage. RelaxLoss is applicable to any classification model with the added benefits of easy implementation and negligible overhead. Through extensive evaluations on five datasets with diverse modalities (images, medical data, transaction records), our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs as well as model utility. Our defense is the first that can withstand a wide range of attacks while preserving (or even improving) the target model's utility. 1 INTRODUCTION. While deep learning (DL) models have achieved tremendous success in the past few years, their deployment in many sensitive domains (e.g., medical, financial) raises privacy concerns, since data misuse in these domains induces severe privacy risks to individuals. In particular, modern deep neural networks (NNs) are prone to memorizing training data due to their high capacity, making them vulnerable to privacy attacks that extract detailed information about individuals from models (Shokri et al., 2017; Song et al., 2017; Yeom et al., 2018). In a membership inference attack (MIA), an adversary attempts to identify whether a specific data sample was used to train a target victim model. This threat is pervasive across data domains (e.g., images, medical data, transaction records) and inevitably poses serious privacy threats to individuals (Shokri et al., 2017; Nasr et al., 2018; Salem et al.
, 2019), even given only black-box access (query inputs in, posterior predictions out) (Shokri et al., 2017; Salem et al., 2019; Song & Mittal, 2020) or partially observed output predictions (e.g., top-k predicted labels) (Choo et al., 2020). Significant advances have been made in defending against MIAs. Conventionally, regularization methods designed for mitigating overfitting, such as dropout (Srivastava et al., 2014) and weight decay (Geman et al., 1992), are regarded as defense mechanisms (Salem et al., 2019; Jia et al., 2019; Shokri et al., 2017). However, as conveyed by Kaya et al. (2020); Kaya & Dumitras (2021), vanilla regularization techniques (which are not designed for MIA), despite slight improvements in reducing the generalization gap, are generally unable to eliminate MIA. In contrast, recent works design defenses tailored to MIA. A common strategy among such defenses is adversarial training (Goodfellow et al., 2014b;a), where a surrogate attack model (represented as an NN) is used to approximate the real attack, and the target model is subsequently modified to maximize the prediction errors of the surrogate attacker via adversarial training. This strategy has contributed to remarkable success in defending against NN-based attacks (Nasr et al., 2018; Jia et al., 2019). However, these methods are greatly restricted by strong assumptions on attack models, thereby failing to generalize to novel attacks unanticipated by the defender (e.g., a simple metric-based attack) (Song & Mittal, 2020). In order to defend against attacks beyond the surrogate one, differentially private (DP) training techniques (Abadi et al., 2016; Papernot et al., 2016; 2018) that provide strict guarantees against MIA are exploited. Nevertheless, as evidenced by Rahman et al. (2018); Jia et al. (2019); Hayes et al. (2019); Jayaraman & Evans (2019); Chen et al.
(2020); Kaya & Dumitras (2021), incorporating DP constraints inevitably compromises model utility and increases computation cost. In this paper, we present an effective defense against MIAs while avoiding negative impacts on the defender's model utility. Our approach is built on two main insights: (i) the optimal attack only depends on the sample loss under mild assumptions on the model parameters (Sablayrolles et al., 2019); (ii) a large difference between the training loss and the testing loss provably causes high membership privacy risks (Yeom et al., 2018). By intentionally 'relaxing' the target training loss to a level which is more achievable for the test loss, our approach narrows the loss gap and reduces the distinguishability between the training and testing loss distributions, effectively preventing various types of attacks in practice. Moreover, our approach allows for a utility-preserving (or even utility-improving) defense, greatly improving upon previous results. As a practical benefit, our approach is easy to implement and can be integrated into any classification model with minimal overhead. Contributions. (i) We propose RelaxLoss, a simple yet effective defense mechanism that strengthens a target model's resilience against MIAs without degrading its utility. To the best of our knowledge, our approach is the first to address a wide range of attacks while preserving (or even improving) the model utility. (ii) We derive our method from a Bayes optimal attacker and provide both empirical and analytical evidence supporting the main principles of our approach. (iii) Extensive evaluations on five datasets with diverse modalities demonstrate that our method outperforms state-of-the-art approaches by a large margin in membership inference protection and in the privacy-utility trade-off. 2 RELATED WORK. Membership Inference Attack.
The paper proposes a new training algorithm to defend against membership inference attacks (MIAs) in machine learning models. Motivated by the connection between MIA success and the difference between the training and test loss distributions, the proposed algorithm sets a positive target mean training loss value and applies gradient ascent whenever the average loss of the current training batch is smaller than this target. Furthermore, to avoid hurting model accuracy, the algorithm also flattens the probabilities of the incorrect labels during training steps. Extensive results on multiple datasets, along with several defense baselines, validate the effectiveness of the proposed defense idea.
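The two ingredients described in this summary can be sketched in a few lines: a sign flip in the SGD update once the batch loss drops below the target level, and a helper that keeps the true-class probability while spreading the remaining mass uniformly over the incorrect classes. This is only a schematic of the principle; plain SGD, the learning rate, the target `alpha`, and the exact normalization of the flattening step are illustrative assumptions, not the paper's full algorithm:

```python
import numpy as np

def relaxed_step(w, grad, batch_loss, alpha, lr=0.1):
    """Gradient descent while the batch loss exceeds the target alpha,
    gradient ascent once it falls below alpha (the sign flip is the
    essence of the relaxed-loss idea)."""
    sign = -1.0 if batch_loss >= alpha else 1.0
    return w + sign * lr * np.asarray(grad)

def flatten_incorrect(p, y):
    """Keep the true-class probability p[y] and spread the remaining
    probability mass uniformly over the incorrect classes."""
    p = np.asarray(p, dtype=float)
    q = np.full_like(p, (1.0 - p[y]) / (len(p) - 1))
    q[y] = p[y]
    return q

w, g = np.zeros(3), np.array([1.0, -2.0, 0.5])
w_desc = relaxed_step(w, g, batch_loss=2.0, alpha=1.0)  # above target: descend
w_asc = relaxed_step(w, g, batch_loss=0.2, alpha=1.0)   # below target: ascend
q = flatten_incorrect([0.7, 0.2, 0.05, 0.05], y=0)      # -> [0.7, 0.1, 0.1, 0.1]
```

Flattening leaves the argmax (and hence the prediction) unchanged, which is why it can protect accuracy while removing class-rank information an attacker could exploit.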
Maximum Likelihood Estimation for Multimodal Learning with Missing Modality
1 INTRODUCTION. Multimodal learning is an important research area which builds models to process and relate information across different modalities (Ngiam et al., 2011; Srivastava & Salakhutdinov, 2014; Baltrušaitis et al., 2018). Compared with unimodal learning, multimodal learning can achieve better performance by properly utilizing the multimodal data. It has been successfully used in many applications, such as multimodal emotion recognition (Soleymani et al., 2011; Mittal et al., 2020), multimedia event detection (Li et al., 2020), and visual question answering (Yu et al., 2019). With the emergence of big data, multimodal learning becomes more and more important for combining multimodal data from different sources. A number of previous works (Tzirakis et al., 2017; Zhang et al., 2017; Elliott et al., 2017; Kim et al., 2020; Zhang et al., 2020) have achieved great success based on complete observations during the training process. However, in practice, the multimodal data may have missing modalities (Du et al., 2018; Ma et al., 2021a;b). This may be caused by various reasons; for instance, the sensor that collects the multimodal data is damaged, or the network transmission fails. Examples of such multimodal data are shown in Figure 1. In the past years, different approaches have been proposed to deal with missing modalities. A simple and typical way (Hastie et al., 2009) is to directly discard the data with missing modalities. Since the information contained in the modality-missing data is neglected, such methods often have limited performance. In addition, researchers (Tran et al., 2017; Chen & Zhang, 2020; Liu et al., 2021; Ma et al., 2021b) have proposed approaches to heuristically combine the information of the modality-missing data.
However, most of these works lack theoretical explanations, and these empirical methods are often implemented with multiple training stages rather than in an end-to-end manner, which leads to the information of the modality-missing data not being well exploited. To tackle the above issues, we propose an efficient approach based on maximum likelihood estimation to effectively utilize the modality-missing data. To be specific, we present a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal. Furthermore, we adopt a generalized form of the softmax function to efficiently implement our maximum likelihood estimation algorithm. Such a training strategy guarantees the computability of our framework in an end-to-end scheme. In this way, our approach can effectively leverage the information of the modality-missing data during the training process. Finally, we perform several experiments on real-world multimodal datasets, including eNTERFACE'05 (Martin et al., 2006) and RAVDESS (Livingstone & Russo, 2018). The results show the effectiveness of our approach in handling the problem of missing modalities. To summarize, our contribution is three-fold: • We design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal. • We develop a generalized form of the softmax function to implement our maximum likelihood estimation framework in an end-to-end manner, which is more effective than previous works. • We conduct a series of experiments on real-world multimodal datasets. The results validate the effectiveness of our approach, even when 95% of the training data has a missing modality. 2 METHODOLOGY. Our goal is to deal with the problem of missing modalities in multimodal learning based on maximum likelihood estimation.
In the following, we first show the problem formulation, and then describe the details of our framework. 2.1 PROBLEM FORMULATION. In this paper, we consider multimodal data with two modalities. The random variables corresponding to these two modalities and their category labels are denoted as $X$, $Y$, and $Z$, respectively. In the training process, we assume that there are two independently observed datasets: modality-complete and modality-missing. We use $D_{XYZ} = \{(x_c^{(i)}, y_c^{(i)}, z_c^{(i)}) \mid z_c^{(i)} \in \mathcal{Z} = \{1, 2, \cdots, |\mathcal{Z}|\}\}_{i=1}^{n_c}$ to represent the modality-complete dataset, where $x_c^{(i)}$ and $y_c^{(i)}$ represent the two modalities of the $i$-th sample of $D_{XYZ}$, $z_c^{(i)}$ is their corresponding category label, and the size of $D_{XYZ}$ is $n_c$. We then use $D_{XZ} = \{(x_m^{(i)}, z_m^{(i)}) \mid z_m^{(i)} \in \mathcal{Z}\}_{i=1}^{n_m}$ to represent the modality-missing dataset, where the size of $D_{XZ}$ is $n_m$. In addition, we adopt $[D_{XYZ}]_{XY}$ to represent $\{(x_c^{(i)}, y_c^{(i)})\}_{i=1}^{n_c}$; $[D_{XYZ}]_Z$, $[D_{XZ}]_X$, and $[D_{XZ}]_Z$ are defined in the same way. The multimodal data of $D_{XYZ}$ and $D_{XZ}$ are assumed to be i.i.d. samples from an unknown underlying joint distribution. By utilizing the knowledge of the modality-complete data and the modality-missing data, we hope our framework can predict the category labels correctly. 2.2 MAXIMUM LIKELIHOOD ESTIMATION FOR MISSING MODALITY. In this section, we first present how to design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data. Then, we show that by adopting a generalized form of the softmax function, we can design a training strategy to implement our algorithm. 2.2.1 LIKELIHOOD FUNCTION ANALYSES. Maximum likelihood estimation is a statistical method of using the observed data to estimate the distribution by maximizing the likelihood function.
The estimated distribution makes the observed data most likely (Myung, 2003). With this idea, we study the likelihood function on the datasets $D_{XYZ}$ and $D_{XZ}$. For the classification task, the conditional likelihood is commonly used. Inspired by this, we use a model $Q_{XYZ}$ to learn the underlying joint distribution of $D_{XYZ}$ and $D_{XZ}$. The conditional likelihood can be represented as:

$\ell \triangleq P([D_{XYZ}]_Z, [D_{XZ}]_Z \mid [D_{XYZ}]_{XY}, [D_{XZ}]_X; Q_{XYZ})$
$\stackrel{(a)}{=} P([D_{XYZ}]_Z \mid [D_{XYZ}]_{XY}; Q_{XYZ}) \cdot P([D_{XZ}]_Z \mid [D_{XZ}]_X; Q_{XYZ})$
$\stackrel{(b)}{=} \prod_{(x,y,z) \in D_{XYZ}} Q_{Z|XY}(z \mid x, y) \cdot \prod_{(x,z) \in D_{XZ}} Q_{Z|X}(z \mid x)$  (1)

where step (a) follows from the fact that the datasets $D_{XYZ}$ and $D_{XZ}$ are observed independently, and step (b) is due to the samples in each dataset being i.i.d. $Q_{Z|XY}$ and $Q_{Z|X}$ are conditional distributions of $Q_{XYZ}$. In this way, we show the likelihood function using the information of $D_{XYZ}$ and $D_{XZ}$. Then, we use the negative log-likelihood as the loss function to train our deep learning model, i.e.,

$L \triangleq -\log \ell = -\sum_{(x,y,z) \in D_{XYZ}} \log Q_{Z|XY}(z \mid x, y) - \sum_{(x,z) \in D_{XZ}} \log Q_{Z|X}(z \mid x)$  (2)

It is worth noting that in (Daniels, 1961; Lehmann, 2004), maximum likelihood estimation is proved to be an asymptotically efficient strategy, which guarantees the theoretical optimality of our method for dealing with missing modalities. To optimize $L$, we use deep neural networks to extract $k$-dimensional feature representations from the observation $(x, y, z)$, denoted $f(x) = [f_1(x), f_2(x), \cdots, f_k(x)]^T$, $g(y) = [g_1(y), g_2(y), \cdots, g_k(y)]^T$, and $h(z) = [h_1(z), h_2(z), \cdots, h_k(z)]^T$, respectively. We then utilize these features to learn $Q_{Z|XY}$ and $Q_{Z|X}$ in $L$. Our framework is shown in Figure 2. In this way, we show the log-likelihood function $L$.
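Equation (2) is simply a sum of per-sample negative log conditional probabilities over the two datasets, which can be written directly. In this minimal sketch, the probability values stand in for the outputs of $Q_{Z|XY}$ and $Q_{Z|X}$ evaluated at the true labels; the toy numbers are assumptions for illustration:

```python
import numpy as np

def nll_objective(q_complete, q_missing):
    """Negative log-likelihood L of Eq. (2): the sum of
    -log Q_{Z|XY}(z|x,y) over the modality-complete samples plus
    -log Q_{Z|X}(z|x) over the modality-missing samples."""
    return -(np.sum(np.log(q_complete)) + np.sum(np.log(q_missing)))

# Two complete samples with true-label probabilities 0.9 and 0.8,
# one missing-modality sample with true-label probability 0.5.
L = nll_objective(q_complete=[0.9, 0.8], q_missing=[0.5])
```

Because both terms are standard cross-entropy losses, the objective is directly usable with any gradient-based training loop.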
By characterizing the conditional distributions of the modality-complete data and the modality-missing data, it leverages the underlying structure information behind the multimodal data, which constitutes the theoretical basis of our framework. 2.2.2 MAXIMUM LIKELIHOOD ESTIMATION IMPLEMENTATION. In fact, it is not easy to optimize the log-likelihood function $L$ in Equation (2) by designing neural networks, mainly for two reasons. Firstly, the representations of the high-dimensional data and the procedure to model them are complicated. Secondly, since $Q_{Z|XY}$ and $Q_{Z|X}$ in $L$ are related, it is difficult to build models that learn their relationship. To address these two issues, we develop a generalized form of the softmax function to describe $Q_{XYZ}$ as follows¹:

$Q_{XYZ}(x, y, z) = \dfrac{R_X(x) R_Y(y) R_Z(z) \exp(\phi^T(f(x), g(y)) h(z))}{\sum_{x', y', z'} R_X(x') R_Y(y') R_Z(z') \exp(\phi^T(f(x'), g(y')) h(z'))}$  (3)

where $\phi(f, g)$ represents the function to fuse the features $f$ and $g$. We study three forms of $\phi$ to investigate its effect in our framework, as shown in Figure 3. $R_X$, $R_Y$, and $R_Z$ represent the underlying marginal distributions of the variables $X$, $Y$, and $Z$, respectively. Their use makes the denominator of Equation (3) expressible as a mean over $R_X$, $R_Y$, and $R_Z$, which serves as the normalization making $Q_{XYZ}$ a valid distribution and is helpful for our further derivation. In addition, the generalized softmax function we propose can be regarded as a generalization of the softmax learning in (Xu et al., 2018) from unimodal learning to multimodal learning. In this way, we show the distribution $Q_{XYZ}$ by adopting a generalized form of the softmax function, which has the following two benefits. Firstly, by depicting the representation of $Q_{XYZ}$, we can further derive $Q_{Z|XY}$ and $Q_{Z|X}$.
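On a finite toy domain one can check numerically that conditioning the joint of Equation (3) on $(x, y)$ recovers the closed form the paper derives next for $Q_{Z|XY}$ (Equation (4)): the $R_X$ and $R_Y$ factors cancel in the conditional. The elementwise-product fusion for $\phi$ and the uniform marginals are assumptions made only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, nz, k = 3, 4, 2, 5                           # toy finite domain sizes
Fx, Gy, Hz = (rng.normal(size=(n, k)) for n in (nx, ny, nz))  # stand-ins for f, g, h
Rx, Ry, Rz = (np.ones(n) / n for n in (nx, ny, nz))  # uniform marginals (assumption)

# phi(f, g)^T h with phi = elementwise product: sum_k f_k * g_k * h_k
score = np.einsum('xk,yk,zk->xyz', Fx, Gy, Hz)

# Eq. (3): joint Q_XYZ over the finite domain
w = Rx[:, None, None] * Ry[None, :, None] * Rz[None, None, :] * np.exp(score)
Q_xyz = w / w.sum()

# Conditioning the joint directly on (x, y) ...
Q_z_given_xy = Q_xyz / Q_xyz.sum(axis=2, keepdims=True)

# ... matches the closed form R_Z(z) exp(score) / sum_z' R_Z(z') exp(score)
num = Rz * np.exp(score)
eq4 = num / num.sum(axis=2, keepdims=True)
assert np.allclose(Q_z_given_xy, eq4)
```

The same cancellation argument gives the closed form for $Q_{Z|X}$, except that the missing modality must additionally be averaged out over $R_Y$.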
It makes our approach a unified framework combining the information of the modality-complete data and the modality-missing data. Secondly, it avoids modeling the relationship between $Q_{Z|XY}$ and $Q_{Z|X}$; in fact, the correlation between the high-dimensional data can be rather complex. Then, we derive the conditional distributions $Q_{Z|XY}$ and $Q_{Z|X}$ from Equation (3):

$Q_{Z|XY}(z \mid x, y) = \dfrac{R_Z(z) \exp(\phi^T(f(x), g(y)) h(z))}{\sum_{z'} R_Z(z') \exp(\phi^T(f(x), g(y)) h(z'))}$  (4)

and

$Q_{Z|X}(z \mid x) = \dfrac{R_Z(z) \sum_{y'} R_Y(y') \exp(\phi^T(f(x), g(y')) h(z))}{\sum_{z'} R_Z(z') \sum_{y'} R_Y(y') \exp(\phi^T(f(x), g(y')) h(z'))}$  (5)

We can observe that by introducing $R_X$, $R_Y$, and $R_Z$ into $Q_{XYZ}$, the derived $Q_{Z|XY}$ and $Q_{Z|X}$ are expressed in the form of means over $R_Y$ and $R_Z$. In practice, we can use the empirical mean as an estimation. Correspondingly, by plugging Equations (4) and (5) into Equation (2), we can summarize the detailed steps to compute our objective function $L$, as shown in Algorithm 1. It is worth pointing out that when we compute $Q_{Z|X}$, we need to use the information of the modality $y$. Since the modality $y$ of the dataset $D_{XZ}$ is missing during training, we utilize samples of the modality $y$ from the dataset $D_{XYZ}$ to compute $Q_{Z|X}$. Finally, we utilize neural networks to extract the features $f$, $g$, and $h$ from the modality-complete data and the modality-missing data to optimize our log-likelihood function $L$. The approach performs classification directly, without needing to explicitly complete the modality-missing data before the classification task. ¹Strictly speaking, $R_X$ and $R_Y$ are probability density functions, and $R_Z$ is a probability mass function. The denominator of Equation (3) should be integrated over $R_X$ and $R_Y$; we use summation here for simplicity of exposition. Algorithm 1: Compute our objective function on a mini-batch.
Input: a modality-complete batch $\{(x_c^{(i)}, y_c^{(i)}, z_c^{(i)})\}_{i=1}^{n_1}$ with batch size $n_1$; a modality-missing batch $\{(x_m^{(i)}, z_m^{(i)})\}_{i=1}^{n_2}$ with batch size $n_2$; neural networks with $k$ output units: $f$, $g$, and $h$.
Output: the value of our objective $L$.
1: Compute the empirical label distribution $\hat{R}_Z$: $\hat{R}_Z(z) \leftarrow \frac{\sum_{i=1}^{n_1} \mathbb{1}(z_c^{(i)} = z) + \sum_{i=1}^{n_2} \mathbb{1}(z_m^{(i)} = z)}{n_1 + n_2}$, for $z = 1, 2, \cdots, |\mathcal{Z}|$
2: Compute $Q_{Z|XY}$: $Q_{Z|XY}(z_c^{(i)} \mid x_c^{(i)}, y_c^{(i)}) \leftarrow \frac{\hat{R}_Z(z_c^{(i)}) \exp(\phi^T(f(x_c^{(i)}), g(y_c^{(i)})) h(z_c^{(i)}))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \exp(\phi^T(f(x_c^{(i)}), g(y_c^{(i)})) h(z'))}$, for $i = 1, \cdots, n_1$
3: Compute $Q_{Z|X}$: $Q_{Z|X}(z_m^{(i)} \mid x_m^{(i)}) \leftarrow \frac{\hat{R}_Z(z_m^{(i)}) \frac{1}{n_1} \sum_{j=1}^{n_1} \exp(\phi^T(f(x_m^{(i)}), g(y_c^{(j)})) h(z_m^{(i)}))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \frac{1}{n_1} \sum_{j=1}^{n_1} \exp(\phi^T(f(x_m^{(i)}), g(y_c^{(j)})) h(z'))}$, for $i = 1, \cdots, n_2$
4: Compute our empirical objective: $L \leftarrow -\sum_{i=1}^{n_1} \log Q_{Z|XY}(z_c^{(i)} \mid x_c^{(i)}, y_c^{(i)}) - \sum_{i=1}^{n_2} \log Q_{Z|X}(z_m^{(i)} \mid x_m^{(i)})$
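Algorithm 1 maps directly onto batched array operations. The sketch below uses random features in place of the networks $f$, $g$, $h$ and the elementwise-product fusion for $\phi$ (one of the fusion choices); the batch sizes, feature dimension, and toy data are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, k, C = 4, 3, 5, 3            # batch sizes, feature dim, #classes

# Stand-ins for f(x_c), g(y_c), f(x_m), and the class embeddings h(z).
f_xc, g_yc = rng.normal(size=(n1, k)), rng.normal(size=(n1, k))
f_xm = rng.normal(size=(n2, k))
h = rng.normal(size=(C, k))
z_c = rng.integers(0, C, size=n1)    # labels of the complete batch
z_m = rng.integers(0, C, size=n2)    # labels of the missing batch

# Step 1: empirical label distribution over both batches.
R_z = np.bincount(np.concatenate([z_c, z_m]), minlength=C) / (n1 + n2)

# Step 2: Q_{Z|XY} on the modality-complete batch (Eq. 4 with R_Z = R̂_Z).
scores_c = (f_xc * g_yc) @ h.T                        # phi(f,g)^T h(z), (n1, C)
num_c = R_z * np.exp(scores_c)
Q_z_xy = num_c / num_c.sum(axis=1, keepdims=True)

# Step 3: Q_{Z|X} on the modality-missing batch, averaging the missing
# modality y over the complete batch's samples (Eq. 5).
scores_m = np.einsum('ik,jk,ck->ijc', f_xm, g_yc, h)  # (n2, n1, C)
num_m = R_z * np.exp(scores_m).mean(axis=1)           # empirical mean over j
Q_z_x = num_m / num_m.sum(axis=1, keepdims=True)

# Step 4: objective L of Eq. (2).
L = -(np.log(Q_z_xy[np.arange(n1), z_c]).sum()
      + np.log(Q_z_x[np.arange(n2), z_m]).sum())
```

Both conditional matrices are row-normalized probability tables, so `L` is a standard cross-entropy and differentiates cleanly through the feature extractors in an end-to-end training loop.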
The authors propose a probabilistic framework to improve classification accuracy when there is missing data in multi-modality datasets (where one of the modalities is the predictive label; this label, however, is not assumed missing). To this end, they propose a generalized softmax function as the joint distribution of all modalities and the label, from which conditional distributions are derived for computing the maximum likelihood estimate (MLE). Experimental results on the eNTERFACE and RAVDESS datasets demonstrate improvements in classification accuracy over baselines. In addition, the authors investigate the influence of the backbone models and the fusion functions.
SP:8396c93d47a3bad4245917bc5d84713c9c6fa039
Maximum Likelihood Estimation for Multimodal Learning with Missing Modality
1 INTRODUCTION . Multimodal learning is an important research area , which builds models to process and relate information between different modalities ( Ngiam et al. , 2011 ; Srivastava & Salakhutdinov , 2014 ; Baltrušaitis et al. , 2018 ) . Compared with unimodal learning , multimodal learning can achieve better performance by properly utilizing the multimodal data . It has been successfully used in many applications , such as multimodal emotion recognition ( Soleymani et al. , 2011 ; Mittal et al. , 2020 ) , multimedia event detection ( Li et al. , 2020 ) , and visual question-answering ( Yu et al. , 2019 ) . With the emergence of big data , multimodal learning becomes more and more important to combine the multimodal data from different sources . A number of previous works ( Tzirakis et al. , 2017 ; Zhang et al. , 2017 ; Elliott et al. , 2017 ; Kim et al. , 2020 ; Zhang et al. , 2020 ) have achieved great successes based on complete observations during the training process . However , in practice , the multimodal data may have missing modalities ( Du et al. , 2018 ; Ma et al. , 2021a ; b ) . This may be caused by various reasons . For instance , the sensor that collects the multimodal data is damaged or the network transmission fails . Examples of the multimodal data are shown in Figure 1 . In the past years , different approaches have been proposed to deal with modality missing . A simple and typical way ( Hastie et al. , 2009 ) is to directly discard the data with missing modalities . Since the information contained in the modality-missing data is neglected , such method often has limited performance . In addition , researchers ( Tran et al. , 2017 ; Chen & Zhang , 2020 ; Liu et al. , 2021 ; Ma et al. , 2021b ) have proposed approaches to heuristically combine the information of the modalitymissing data . 
However , most of these works lack theoretical explanations , and these empirical methods are often implemented using multiple training stages rather than an end-to-end manner , which lead to the information of the modality-missing data not being well exploited . To tackle above issues , we propose an efficient approach based on maximum likelihood estimation to effectively utilize the modality-missing data . To be specific , we present a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data , which is theoretically optimal . Furthermore , we adopt a generalized form of the softmax function to efficiently implement our maximum likelihood estimation algorithm . Such training strategy guarantees the computability of our framework in an end-to-end scheme . In this way , our approach can effectively leverage the information of the modality-missing data during the training process , Finally , we perform several experiments on real-world multimodal datasets , including eNTERFACE ’ 05 ( Martin et al. , 2006 ) and RAVDESS ( Livingstone & Russo , 2018 ) . The results show the effectiveness of our approach in handling the problem of modality missing . To summarize , our contribution is three-fold : • We design a likelihood function to learn the conditional distributions of the modalitycomplete data and the modality-missing data , which is theoretically optimal . • We develop a generalized form of the softmax function to implement our maximum likelihood estimation framework in an end-to-end manner , which is more effective than previous works . • We conduct a series of experiments on real-world multimodal datasets . The results validate the effectiveness of our approach , even when 95 % of the training data has missing modality . 2 METHODOLOGY . Our goal is to deal with the problem of modality missing in multimodal learning based on maximum likelihood estimation . 
In the following , we first show the problem formulation , and then describe the details of our framework . 2.1 PROBLEM FORMULATION . In this paper , we consider that the multimodal data has two modalities . Here , the random variables corresponding to these two modalities and their category labels are denoted as X , Y , and Z , respectively . In the training process , we assume that there are two independently observed datasets : modality-complete and modality-missing . We use DXY Z = { ( x ( i ) c , y ( i ) c , z ( i ) c ) | z ( i ) c ∈ Z = { 1 , 2 , · · · , |Z| } } nc i=1 to represent the modality-complete dataset , where x ( i ) c and y ( i ) c represent the two modalities of the i-th sample of DXY Z respectively , z ( i ) c is their corresponding category label , and the size of DXY Z is nc . We then use DXZ = { ( x ( i ) m , z ( i ) m ) | z ( i ) m ∈ Z = { 1 , 2 , · · · , |Z| } } nm i=1 to represent the modality-missing dataset , where the size of DXZ is nm . In addition , we adopt [ DXY Z ] XY to represent { ( x ( i ) c , y ( i ) c ) } nc i=1 . [ DXY Z ] Z , [ DXZ ] X , and [ DXZ ] Z are expressed in the same way . The multimodal data of DXY Z and DXZ are assumed to be i.i.d . generated from an unknown underlying joint distribution . By utilizing the knowledge of the modality-complete data and the modality-missing data , we hope our framework can predict the category labels correctly . 2.2 MAXIMUM LIKELIHOOD ESTIMATION FOR MISSING MODALITY . In this section , we first present how to design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data . Then , we show that by adopting a generalized form of the softmax function , we design a training strategy to implement our algorithm . 2.2.1 LIKELIHOOD FUNCTION ANALYSES . Maximum likelihood estimation is a statistical method of using the observed data to estimate the distribution by maximizing the likelihood function . 
The estimated distribution makes the observed data most likely (Myung, 2003). With this idea, we study the likelihood function on datasets $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$. For the classification task, the conditional likelihood is commonly used. Inspired by this, we use a model $Q_{XYZ}$ to learn the underlying joint distribution of $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$. The conditional likelihood can be represented as
$$\ell \triangleq P([\mathcal{D}_{XYZ}]_Z, [\mathcal{D}_{XZ}]_Z \mid [\mathcal{D}_{XYZ}]_{XY}, [\mathcal{D}_{XZ}]_X ; Q_{XYZ}) \stackrel{(a)}{=} P([\mathcal{D}_{XYZ}]_Z \mid [\mathcal{D}_{XYZ}]_{XY} ; Q_{XYZ}) \cdot P([\mathcal{D}_{XZ}]_Z \mid [\mathcal{D}_{XZ}]_X ; Q_{XYZ}) \stackrel{(b)}{=} \prod_{(x,y,z) \in \mathcal{D}_{XYZ}} Q_{Z|XY}(z|x,y) \cdot \prod_{(x,z) \in \mathcal{D}_{XZ}} Q_{Z|X}(z|x) \qquad (1)$$
where step (a) follows from the fact that datasets $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$ are observed independently, and step (b) is due to the samples in each dataset being i.i.d. $Q_{Z|XY}$ and $Q_{Z|X}$ are conditional distributions of $Q_{XYZ}$. In this way, we show the likelihood function using the information of $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$. Then, we use the negative log-likelihood as the loss function to train our deep learning model, i.e.,
$$\mathcal{L} \triangleq -\log \ell = -\sum_{(x,y,z) \in \mathcal{D}_{XYZ}} \log Q_{Z|XY}(z|x,y) - \sum_{(x,z) \in \mathcal{D}_{XZ}} \log Q_{Z|X}(z|x) \qquad (2)$$
It is worth noting that in (Daniels, 1961; Lehmann, 2004), maximum likelihood estimation is proved to be an asymptotically efficient strategy, which guarantees the theoretical optimality of our method for dealing with missing modality. To optimize $\mathcal{L}$, we use deep neural networks to extract k-dimensional feature representations from the observation $(x, y, z)$, represented as $f(x) = [f_1(x), f_2(x), \cdots, f_k(x)]^T$, $g(y) = [g_1(y), g_2(y), \cdots, g_k(y)]^T$, and $h(z) = [h_1(z), h_2(z), \cdots, h_k(z)]^T$, respectively. We then utilize these features to learn $Q_{Z|XY}$ and $Q_{Z|X}$ in $\mathcal{L}$. Our framework is shown in Figure 2. In this way, we show the log-likelihood function $\mathcal{L}$.
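As a concrete illustration of Equation (2), the loss is simply the sum of two cross-entropy terms, one per dataset, evaluated at the true labels. A minimal sketch (the probability values below are hypothetical, standing in for the model outputs $Q_{Z|XY}$ and $Q_{Z|X}$):

```python
import numpy as np

def nll_loss(q_complete, q_missing):
    """Negative log-likelihood of Eq. (2): one cross-entropy term per
    dataset, each evaluated at the observed category labels."""
    return -np.sum(np.log(q_complete)) - np.sum(np.log(q_missing))

# Q_{Z|XY}(z|x,y) for 3 modality-complete samples and Q_{Z|X}(z|x) for
# 2 modality-missing samples (hypothetical values).
q_c = np.array([0.9, 0.8, 0.7])
q_m = np.array([0.6, 0.5])
loss = nll_loss(q_c, q_m)   # ≈ 1.889
```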
By characterizing the conditional distributions of the modality-complete data and the modality-missing data, it leverages the underlying structural information behind the multimodal data, which constitutes the theoretical basis of our framework. 2.2.2 MAXIMUM LIKELIHOOD ESTIMATION IMPLEMENTATION. In fact, it is not easy to optimize the log-likelihood function $\mathcal{L}$ in Equation (2) by designing neural networks, mainly for two reasons. Firstly, the representations of the high-dimensional data and the procedure to model them are complicated. Secondly, since $Q_{Z|XY}$ and $Q_{Z|X}$ in $\mathcal{L}$ are related, it is difficult to build models to learn their relationship. To address these two issues, we develop a generalized form of the softmax function to describe $Q_{XYZ}$ as follows¹:
$$Q_{XYZ}(x, y, z) = \frac{R_X(x) R_Y(y) R_Z(z) \exp(\phi^T(f(x), g(y))\, h(z))}{\sum_{x', y', z'} R_X(x') R_Y(y') R_Z(z') \exp(\phi^T(f(x'), g(y'))\, h(z'))} \qquad (3)$$
where $\phi(f, g)$ represents the function used to fuse the features f and g. We study three forms of $\phi$ to investigate its effect in our framework, as shown in Figure 3. $R_X$, $R_Y$, and $R_Z$ represent the underlying marginal distributions of the variables X, Y, and Z, respectively. Their use expresses the denominator of Equation (3) as a mean over $R_X$, $R_Y$, and $R_Z$, which serves as the normalization that makes $Q_{XYZ}$ a valid distribution and is helpful for our further derivation. In addition, the generalized softmax function we propose can be regarded as a generalization of softmax learning in (Xu et al., 2018) from unimodal learning to multimodal learning. In this way, we show the distribution $Q_{XYZ}$ by adopting a generalized form of the softmax function, which has the following two benefits. Firstly, by depicting the representation of $Q_{XYZ}$, we can further derive $Q_{Z|XY}$ and $Q_{Z|X}$.
It makes our approach a unified framework to combine the information of the modality-complete data and the modality-missing data. Secondly, it avoids modeling the relationship between $Q_{Z|XY}$ and $Q_{Z|X}$; in fact, the correlation between the high-dimensional data can be rather complex. Then, we derive the conditional distributions $Q_{Z|XY}$ and $Q_{Z|X}$ from Equation (3):
$$Q_{Z|XY}(z|x,y) = \frac{R_Z(z) \exp(\phi^T(f(x), g(y))\, h(z))}{\sum_{z'} R_Z(z') \exp(\phi^T(f(x), g(y))\, h(z'))} \qquad (4)$$
and
$$Q_{Z|X}(z|x) = \frac{R_Z(z) \sum_{y'} R_Y(y') \exp(\phi^T(f(x), g(y'))\, h(z))}{\sum_{z'} R_Z(z') \sum_{y'} R_Y(y') \exp(\phi^T(f(x), g(y'))\, h(z'))} \qquad (5)$$
We can observe that by introducing $R_X$, $R_Y$, and $R_Z$ into $Q_{XYZ}$, the derived $Q_{Z|XY}$ and $Q_{Z|X}$ are expressed in the form of means over $R_Y$ and $R_Z$. In practice, we can use the empirical mean as an estimation. Correspondingly, by plugging Equations (4) and (5) into Equation (2), we can summarize the detailed steps to compute our objective function $\mathcal{L}$, as shown in Algorithm 1. It is worth pointing out that computing $Q_{Z|X}$ requires the information of the modality y. Since the modality y of the dataset $\mathcal{D}_{XZ}$ is missing during training, we utilize samples of the modality y from the dataset $\mathcal{D}_{XYZ}$ to compute $Q_{Z|X}$. Finally, we utilize neural networks to extract the features f, g, and h from the modality-complete data and the modality-missing data to optimize our log-likelihood function $\mathcal{L}$. Our method performs classification directly, without explicitly completing the modality-missing data before the classification task. ¹Strictly speaking, $R_X$ and $R_Y$ are probability density functions, and $R_Z$ is a probability mass function. The denominator of Equation (3) should be integrated over $R_X$ and $R_Y$; we use summation here for simplicity of exposition. Algorithm 1 Compute our objective function on a mini-batch.
Input: a modality-complete batch $\{(x_c^{(i)}, y_c^{(i)}, z_c^{(i)})\}_{i=1}^{n_1}$ with batch size $n_1$; a modality-missing batch $\{(x_m^{(i)}, z_m^{(i)})\}_{i=1}^{n_2}$ with batch size $n_2$; neural networks with k output units: f, g, and h. Output: the value of our objective $\mathcal{L}$.
1: Compute the empirical label distribution $\hat{R}_Z$: $\hat{R}_Z(z) \leftarrow \frac{\sum_{i=1}^{n_1} \mathbb{1}(z_c^{(i)}=z) + \sum_{i=1}^{n_2} \mathbb{1}(z_m^{(i)}=z)}{n_1+n_2}$, for $z = 1, 2, \cdots, |\mathcal{Z}|$
2: Compute $Q_{Z|XY}$: $Q_{Z|XY}(z_c^{(i)}|x_c^{(i)}, y_c^{(i)}) \leftarrow \frac{\hat{R}_Z(z_c^{(i)}) \exp(\phi^T(f(x_c^{(i)}), g(y_c^{(i)}))\, h(z_c^{(i)}))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \exp(\phi^T(f(x_c^{(i)}), g(y_c^{(i)}))\, h(z'))}$, for $i = 1, \cdots, n_1$
3: Compute $Q_{Z|X}$: $Q_{Z|X}(z_m^{(i)}|x_m^{(i)}) \leftarrow \frac{\hat{R}_Z(z_m^{(i)}) \frac{1}{n_1}\sum_{j=1}^{n_1} \exp(\phi^T(f(x_m^{(i)}), g(y_c^{(j)}))\, h(z_m^{(i)}))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \frac{1}{n_1}\sum_{j=1}^{n_1} \exp(\phi^T(f(x_m^{(i)}), g(y_c^{(j)}))\, h(z'))}$, for $i = 1, \cdots, n_2$
4: Compute the empirical objective: $\mathcal{L} \leftarrow -\sum_{i=1}^{n_1} \log Q_{Z|XY}(z_c^{(i)}|x_c^{(i)}, y_c^{(i)}) - \sum_{i=1}^{n_2} \log Q_{Z|X}(z_m^{(i)}|x_m^{(i)})$
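The four steps above can be sketched in a few lines of numpy. This is a toy stand-in, not the paper's implementation: the feature matrices replace the networks f, g, h, the label-embedding table `H` stands for $h(z)$, and elementwise addition is used as one hypothetical choice for the fusion $\phi$:

```python
import numpy as np

def softmax_nll(fx_c, gy_c, z_c, fx_m, z_m, H, phi=lambda f, g: f + g):
    """Sketch of Algorithm 1: fx_c/gy_c are (n1, k) features of the
    complete batch, fx_m is (n2, k) for the missing batch, H is the
    (|Z|, k) table of label embeddings h(z), phi a fusion (here sum)."""
    n1, n2 = len(z_c), len(z_m)
    num_classes = H.shape[0]
    # Step 1: empirical label distribution over both batches.
    counts = np.bincount(np.concatenate([z_c, z_m]), minlength=num_classes)
    r_z = counts / (n1 + n2)
    # Step 2: Q_{Z|XY} via the generalized softmax of Eq. (4).
    logits_c = phi(fx_c, gy_c) @ H.T                   # (n1, |Z|)
    p_c = r_z * np.exp(logits_c)
    p_c = p_c / p_c.sum(axis=1, keepdims=True)
    # Step 3: Q_{Z|X} of Eq. (5): average the missing modality y over
    # the complete batch before normalizing.
    scores = np.exp(np.einsum('ijk,lk->ijl',
                              phi(fx_m[:, None, :], gy_c[None, :, :]), H))
    p_m = r_z * scores.mean(axis=1)                    # (n2, |Z|)
    p_m = p_m / p_m.sum(axis=1, keepdims=True)
    # Step 4: negative log-likelihood at the observed labels.
    return (-np.log(p_c[np.arange(n1), z_c]).sum()
            - np.log(p_m[np.arange(n2), z_m]).sum())

# Toy usage with random features (hypothetical shapes: k=4, |Z|=2).
rng = np.random.default_rng(0)
fx_c, gy_c = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
fx_m, H = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
loss = softmax_nll(fx_c, gy_c, np.array([0, 1, 0]), fx_m, np.array([1, 0]), H)
```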
This submission proposes a maximum likelihood estimation framework combined with a generalized softmax function to address multimodal emotion recognition with a missing modality. Two emotion recognition datasets are used in experiments to compare against several baseline methods. The results suggest that the proposed approach outperforms the compared methods. Moreover, according to the authors, the end-to-end nature of this framework makes it more efficient than previous works.
Lottery Image Prior
1 INTRODUCTION. Background Deep neural networks (DNNs), in particular convolutional neural networks (CNNs), have been powerful tools for solving various image inverse problems such as denoising (Zhang et al., 2017; Guo et al., 2019; Lehtinen et al., 2018), inpainting (Pathak et al., 2016; Yu et al., 2018; 2019b), and super-resolution (Ledig et al., 2017; Lim et al., 2017; Zhang et al., 2018). Conventional wisdom holds that this is owing to DNNs' universal approximation ability and learning from massive training data. Yet, recent studies have revealed that the specific architectures of CNNs have an inductive bias to represent and generate natural images well, and such favorable architectural inductive bias can work independently from fitting specific training sets (Ulyanov et al., 2018; Cheng et al., 2019; Bora et al., 2017; Heckel & Hand, 2019; Jalal et al., 2020). For example, deep image prior (DIP) (Ulyanov et al., 2018) shows that an untrained neural network can be used as a handcrafted prior that transfers well across multiple inverse problems. The authors attributed the success to the CNN architecture itself, which appeared to possess high noise impedance even with only random initialization. As another example, in compressive sensing, Bora et al. (2017); Jalal et al. (2020) replaced common structural assumptions such as sparsity with a pretrained generative adversarial network (GAN). The underlying rationale is that a pre-trained generator should (approximately) represent the notion of a vector being more likely in the target domain, such as natural images; in other words, a sample that looks more like a natural image will be closer to the output range of the pre-trained generator. In this paper, we refer to such general-purpose image priors parameterized by (either untrained or pre-trained) DNNs as DNN-based image priors.
Recall that classical image regularizers in the spatial or frequency domains are often not learning-based (Tomasi & Manduchi, 1998; Sardy et al., 2001; Dabov et al., 2007), or rely on compact learning models (Cao et al., 2008; Elad & Aharon, 2006; He et al., 2015). On the contrary, DNN-based image priors have a massive number of parameters (we compare the parameter counts of the full model and the sparse subnetwork in Table 2), typically orders of magnitude more than the image size. The two extremes invite the natural question: Can we identify highly compact DNN-based image priors that are equally effective? We note that diving into this question has twofold appeal. On the algorithmic side, it could help us further understand how the topology and connectivity of the CNN architecture itself affect the effectiveness of those priors, and to what extent sparsity could be relevant. On the practical side, an affirmative answer would potentially lead to more computational savings when applying those DNN-based priors in practice, leading to faster restoration or computational imaging with them. Towards the above question, the tool we turn to in this paper is the recently emerged Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018; Frankle et al., 2020a). LTH suggests that every dense DNN has an extremely sparse "matching subnetwork" that can be trained in isolation to match the original dense DNN's accuracy. While the vanilla LTH studies training from random scratch, the latest works extend similar findings to fine-tuning pre-trained models (Chen et al., 2020a; 2021a). LTH has seen widespread success in image classification, language modeling, reinforcement learning and multi-modal learning, e.g., (Yu et al., 2019a; Renda et al., 2020; Chen et al., 2020a; Gan et al., 2021).
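The standard procedure for finding such matching subnetworks is iterative magnitude pruning (IMP): train, prune the lowest-magnitude surviving weights, rewind the rest to their initial values, and repeat. A minimal sketch on a toy least-squares "training" task (the task, shapes, and pruning fraction are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def imp_round(w_init, mask, train, prune_frac=0.2):
    """One round of iterative magnitude pruning (IMP): train the masked
    weights, drop the lowest-magnitude fraction of the survivors, and
    rewind the remainder to their initial values."""
    w = train(w_init * mask) * mask
    alive = np.flatnonzero(mask)
    k = int(len(alive) * prune_frac)
    drop = alive[np.argsort(np.abs(w.ravel())[alive])[:k]]
    new_mask = mask.copy()
    new_mask.ravel()[drop] = 0.0
    return w_init * new_mask, new_mask   # the "ticket": rewound init + mask

# Toy 'training': gradient steps on ||Xw - y||^2 (hypothetical task).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 20)), rng.normal(size=50)
def train(w, lr=0.01, steps=100):
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w0, mask = rng.normal(size=20), np.ones(20)
for _ in range(3):          # each round prunes 20% of surviving weights
    _, mask = imp_round(w0, mask, train)
```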
Our Contributions Drawing inspiration from the LTH literature, we conjecture and empirically study a novel "lottery image prior" (LIP), stated as: Given an (untrained or trained) DNN-based image prior, it will have a sparse subnetwork that can be trained in isolation to match the original DNN's performance when applied as a prior to regularize various image inverse problems. Studying this new problem is, however, NOT a naive extension of existing LTH methods, owing to several technical barriers: (a) to our best knowledge, LTH has not been demonstrated for image inverse problems or DNN-based priors. Most LTH works studied discriminative tasks, with one exception (Chen et al., 2021c). It is therefore uncertain whether a high-sparsity DNN is still viable for reconstruction-oriented tasks; (b) existing LTH works typically require a full training set to locate the sparse subnetwork mask, whereas our LIP settings are not data-rich. For example, DIP needs the DNN to be trained to overfit one specific image, making it drastically different from previous problems; (c) the objectives of finding the sparse mask (e.g., learning the prior) and fitting the sparse subnetwork (e.g., using the prior) are often unaligned in LIP problems. For example, DIP will overfit a corrupted input image (during which the sparse mask is found) in order to reconstruct a clean output image (where the found sparse subnetwork is used); the pre-trained generator will also be used towards a different goal (regularizing compressive sensing) from its original pre-training task (generating realistic images). Our extensive experimental study confirms the existence of LIP in two representative settings: (i) image restoration with the deep image prior, using an untrained DNN (as shown in Fig. 2); and (ii) compressive sensing image reconstruction, using a pre-trained GAN generator.
Using iterative magnitude pruning (IMP) with surrogate tasks (the overview of our paradigm is in Fig. 1), we can successfully locate LIP subnetworks in the sparsity range of 20%-86.58% in setting (i), and in the sparsity range of 5%-36% in setting (ii). Those LIP subnetworks also possess high transferability. For example, the LIP ticket found in setting (i) transfers well not only across different images, but also across different tasks such as denoising, inpainting and super-resolution. Our contributions are summarized below: • The first comprehensive study on LTH for DNN-based image priors and inverse problems, establishing the "lottery image prior" (LIP) and demonstrating the relevance of LTH more broadly than in previously typical settings. • The investigation of both untrained and pre-trained DNNs as image priors, verifying that LIP subnetworks can be found in both settings by overcoming several impediments such as severely limited training data and task objective mismatch. • High transferability of LIP subnetworks across data/datasets and various inverse problem tasks. Our finding reflects an underlying common image prior that is agnostic to specific data or tasks, through the lens of the CNN architecture together with sparsity. 2 BACKGROUND WORK. Lottery Ticket Hypothesis LTH (Frankle & Carbin, 2018) states that a dense, randomly initialized DNN contains a sparse matching subnetwork, which can reach comparable or even better performance by being trained independently for the same number of epochs as the full network. Since then, the statement has been verified in a variety of fields, such as image classification (Frankle & Carbin, 2018; Liu et al., 2018; Wang et al., 2020; Evci et al., 2019; Frankle et al., 2020b; Savarese et al., 2019; Yin et al., 2019; You et al., 2019; Ma et al., 2021; Chen et al., 2021a), natural language processing (Gale et al., 2019; Chen et al.
, 2020a), reinforcement learning (Yu et al., 2019a), lifelong learning (Chen et al., 2020b), graph neural networks (Chen et al., 2021b), and adversarial robustness (Cosentino et al., 2019). Rewinding was proposed by (Frankle et al., 2019) to scale up LTH to large models and datasets. The found matching subnetworks also demonstrate transferability across datasets and tasks (Morcos et al., 2019; Desai et al., 2019). Deep Image Prior and Its Variants Despite CNNs' tremendous success on various imaging tasks, their outstanding performance is often attributed to massive data-driven learning. DIP (Ulyanov et al., 2018) pioneered showing that the CNN architecture alone captures important natural image priors: by over-fitting a randomly initialized untrained CNN to a single degraded image (plus some early stopping), it can restore the clean output without accessing the ground truth. Follow-up work (Mataev et al., 2019) strengthens DIP performance by incorporating it into the regularization by denoising (RED) framework, and a series of works (Mastan & Raman, 2020; 2021) use contextual feature learning to achieve the same goal as DIP. Besides natural image restoration, DIP was successfully applied to PET image reconstruction (Gong et al., 2018), dynamic magnetic resonance imaging (Jin et al., 2019), unsupervised image decomposition (Gandelsman et al., 2019) and quantitative phase imaging (Yang et al., 2021). Heckel & Hand (2018) further demonstrated that even an under-parameterized non-convolutional model, named "Deep Decoder", can over-fit a single degraded image like DIP does, without critically relying on early stopping. Chen et al. (2020c) was the first to study the possibility of optimizing CNN architectures for capturing stronger image priors in DIP, via leveraging Neural Architecture Search (NAS).
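The DIP training loop described above can be illustrated in miniature. This sketch replaces the paper's CNN with a tiny two-layer network on a 1-D "image": a fixed random input code is mapped to the noisy observation by gradient descent, and training is stopped after a few hundred steps rather than run to full memorization (all shapes and hyperparameters are illustrative assumptions):

```python
import numpy as np

# A DIP-style loop on a 1-D signal: fit f_theta(z) to the noisy
# observation from a fixed random input z, with early stopping.
rng = np.random.default_rng(0)
n, h = 64, 32
clean = np.sin(np.linspace(0, 4 * np.pi, n))
noisy = clean + 0.3 * rng.normal(size=n)

z = rng.normal(size=h)                      # fixed random input code
W1 = rng.normal(size=(h, h)) * 0.1
W2 = rng.normal(size=(n, h)) * 0.1

def forward(W1, W2):
    a = np.tanh(W1 @ z)
    return W2 @ a, a

out0, _ = forward(W1, W2)
mse0 = np.mean((out0 - noisy) ** 2)         # fit before training

lr = 0.05
for step in range(300):                     # early stopping: few iterations
    out, a = forward(W1, W2)
    err = out - noisy                       # fit the corrupted observation
    W2 -= lr * np.outer(err, a) / n
    W1 -= lr * np.outer((W2.T @ err) * (1 - a ** 2), z) / n

out1, _ = forward(W1, W2)
mse1 = np.mean((out1 - noisy) ** 2)         # fit after training
```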
Compressive Sensing using Generative Models Compressive Sensing (CS) reconstructs an unknown vector from under-sampled linear measurements of its entries (Foucart & Rauhut, 2013), by assuming the unknown vector to admit certain structural priors. The most common structural prior is to assume that the vector is k-sparse in some known bases (Candes et al., 2006; Donoho, 2006), and more sophisticated statistical assumptions were also considered (Baraniuk et al., 2010). However, those priors are inevitably oversimplified to depict the high-dimensional manifold of natural images. Bora et al. (2017) presented the first algorithm that used pre-trained generative models, such as GANs, as the prior for compressed sensing. As a prior, the pre-trained generator encourages CS to produce vectors close to its output distribution, which approximates its training image distribution. Significant research has since followed to better understand the behaviours and theoretical limits of CS using generative priors, e.g., (Hand & Voroninski, 2018; Bora et al., 2018; Hand et al., 2018; Kamath et al., 2019; Liu & Scarlett, 2020; Jalal et al., 2020).
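The Bora et al. (2017) formulation amounts to searching the generator's latent space for a code whose output matches the measurements, i.e., minimizing $\|A G(z) - y\|_2^2$ over z. A minimal sketch with a fixed linear "generator" standing in for a pre-trained GAN (the dimensions, step size, and linear G are illustrative assumptions):

```python
import numpy as np

# Compressed sensing with a generative prior: recover a latent z from
# m < n linear measurements y = A G(z*) by gradient descent on
# ||A G(z) - y||^2.
rng = np.random.default_rng(1)
n, m, k = 40, 15, 5                         # signal dim, measurements, latent
W = rng.normal(size=(n, k))                 # G(z) = W z: toy linear 'generator'
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix

z_true = rng.normal(size=k)
y = A @ W @ z_true                          # noiseless measurements

z = np.zeros(k)
for _ in range(1000):
    grad = (A @ W).T @ (A @ (W @ z) - y)    # gradient of the CS objective
    z -= 0.005 * grad

residual = np.linalg.norm(A @ (W @ z) - y)
```

Since m exceeds the latent dimension k here, the objective has a unique minimizer and the residual is driven essentially to zero; with a real nonlinear generator the same loop is run with backpropagation through G.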
Summary This submission studies the lottery ticket hypothesis for deep-neural-network-based inverse imaging. It considers two scenarios: (1) inference for compressed sensing based on pre-trained deep generative models, and (2) deep image prior, where all network parameters are fit to a single image. Both scenarios deal with a single-image training task, i.e., an overfitting task, which makes this work different from past work using LTH. The empirical results suggest that with a high level of sparsity, not only can the model size be made much smaller, but generalization can also improve, which suggests pruning acts as a regularizer. They also suggest good transferability from one image restoration task to another.
This paper researches the lottery ticket hypothesis for networks used as a deep image prior or deep generative prior. The specific approach is to (1) train deep networks to reconstruct multiple images for DIP (ticket-finding objectives), (2) conduct iterative magnitude pruning on the trained network, (3) obtain the pruned mask and reset the model parameters to the initialization weights, and (4) perform deep image prior on new (or training) images. The proposed objective is an important design for the system to work, i.e., the authors optimize the network by minimizing the expected error over multiple images. The experimental results are somewhat interesting in that (1) different pruning methods actually perform differently and (2) winning tickets exist in the studied problem. For the GAN compressive sensing task, winning tickets also exist. The point I am more interested in is that some networks are more suitable to be used as a deep image prior than others.
SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks
1 INTRODUCTION. Spiking neural networks (SNNs) are brain-inspired models that transmit spikes between neurons for event-driven, energy-efficient computation. SNNs can be implemented with less energy on neuromorphic hardware (Akopyan et al., 2015; Davies et al., 2018; Pei et al., 2019; Roy et al., 2019), which can remedy the large energy consumption of artificial neural networks (ANNs). Different from ANNs, however, directly supervised training of SNNs is a hard problem due to the complex, discontinuous spiking neuron model. To solve this problem, converting ANNs to SNNs (Hunsberger & Eliasmith, 2015; Rueckauer et al., 2017; Sengupta et al., 2019; Rathi et al., 2019; Deng & Gu, 2021; Yan et al., 2021) and many other direct SNN training methods (Wu et al., 2018; Bellec et al., 2018; Jin et al., 2018; Shrestha & Orchard, 2018; Wu et al., 2019; Neftci et al., 2019; Zhang & Li, 2019; Kim et al., 2020; Zheng et al., 2021; Bohte et al., 2002; Zhang & Li, 2020; Xiao et al., 2021) have been proposed. While these methods can partly tackle the problems of unsatisfactory performance or high latency, they require complex computation for gradient calculation or approximation, which cannot be implemented by common spiking neurons on neuromorphic hardware. They aim at training SNNs on commonly used computational units, e.g., GPUs, and deploying the trained models for energy-efficient inference. However, they do not consider whether the training procedure could leverage the same spike-based computation for gradient calculation and training, so as to reduce the large energy consumption during training as well. A few previous works try to train SNNs with spikes (Guerguiev et al., 2017; Neftci et al., 2017; Samadi et al., 2017; O'Connor & Welling, 2016; Thiele et al., 2019b;a).
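To make the training difficulty concrete: the spiking neuron at the heart of these models is discontinuous, as the following minimal discrete-time leaky integrate-and-fire (LIF) sketch shows (the time constant, threshold, and constant input are illustrative assumptions):

```python
import numpy as np

def lif(inputs, tau=2.0, v_th=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron: the membrane
    potential leaks, integrates the input current, and emits a binary
    spike (followed by a hard reset) on crossing the threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau          # leaky integration
        s = 1 if v >= v_th else 0      # non-differentiable spike function
        v = v_reset if s else v        # hard reset after a spike
        spikes.append(s)
    return spikes

# A constant supra-threshold input makes the neuron fire every other step.
spikes = lif([1.5] * 10)               # -> [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

The Heaviside step `s = 1 if v >= v_th else 0` has zero gradient almost everywhere, which is exactly why direct backpropagation fails and surrogate or conversion methods are needed.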
They either are based on direct feedback alignment (DFA) (Nøkland, 2016) and perform poorly, or require impractical special neuron models (Thiele et al., 2019b;a). Besides, they only focus on feedforward network structures imitated from ANNs, ignoring feedback connections, which are ubiquitous in the human brain and enable neural networks to be shallower and more efficient (Kubilius et al., 2019; Xiao et al., 2021). Actually, feedback structures suit SNNs well, since SNNs naturally compute over multiple time steps, which allows representations to be reused and avoids the uneconomical cost of unfolding along time that ANNs suffer from (Xiao et al., 2021). Training algorithms for feedback SNNs, which can also degrade to feedforward structures by taking the feedback as zero, are therefore worth more exploration. An ideal SNN training method should tackle the common problems, be suitable for flexible structures (feedforward or feedback), and be spike-based with high neuromorphic plausibility. The implicit differentiation on the equilibrium state (IDE) method (Xiao et al., 2021), recently proposed to train feedback spiking neural networks (FSNNs), is a promising method that may generalize to spike-based learning meeting these requirements. The authors derive that the forward computation of FSNNs converges to an equilibrium state that follows a fixed-point equation. Based on it, they propose to train FSNNs by implicit differentiation on this equation, which tackles the common difficulties of SNN training, including non-differentiability and large memory costs, and has interesting local update properties. In their method, however, they leverage general root-finding methods to solve the implicit differentiation, which requires complex computation on standard computation systems.
In this work, we extend the IDE method to spike-based IDE (SPIDE), which fulfills these requirements and has great potential for energy-efficient training of SNNs on neuromorphic hardware, by introducing ternary spiking neuron couples and proposing to solve the implicit differentiation with spikes based on them. Our method is also applicable to feedforward structures by degrading the feedback connections to zero. In practice, however, it may require long time steps to stabilize training with spikes due to the approximation error of the gradients. We therefore further analyze the approximation error from a statistical perspective, and propose to simply adjust the resting potential of SNNs to achieve an unbiased estimation of the gradients and to reduce the estimation variance of the SNN computation. With these methods, we can train our models in a small number of time steps, which further improves both energy efficiency and latency. Our contributions include:
1. We propose the SPIDE method, which is the first to train high-performance SNNs by spikes with common neuron models. Specifically, we propose ternary spiking neuron couples and prove that the implicit differentiation for gradient calculation can be solved by spikes based on this design. Our method is applicable to both feedback and feedforward structures.
2. We theoretically analyze the approximation error of solving the implicit differentiation by spikes, and propose to modify the resting potential to remove the approximation bias and reduce the estimation variance, which enables training in a small number of time steps.
3. Experiments show low latency and firing sparsity during training, demonstrating the great potential for energy-efficient training of SNNs on neuromorphic hardware. The performance on MNIST, CIFAR-10, CIFAR-100, and CIFAR10-DVS is also competitive.
2 RELATED WORK. Early works seek biologically inspired methods to train SNNs, e.g.
spike-timing-dependent plasticity (STDP) (Diehl & Cook, 2015) or reward-modulated STDP (Legenstein et al., 2008). Since the rise of successful ANNs, several works try to convert trained ANNs to SNNs to obtain high performance (Hunsberger & Eliasmith, 2015; Rueckauer et al., 2017; Sengupta et al., 2019; Rathi et al., 2019; Deng & Gu, 2021; Yan et al., 2021). However, these conversions suffer from extremely large time steps, and their structures are limited to the scope of ANNs. Others try to directly train SNNs by imitating backpropagation through time (BPTT) with surrogate derivatives for the discontinuous spiking functions (Lee et al., 2016; Wu et al., 2018; Bellec et al., 2018; Jin et al., 2018; Shrestha & Orchard, 2018; Wu et al., 2019; Zhang & Li, 2019; Neftci et al., 2019; Zheng et al., 2021), or by computing gradients with respect to spiking times (Bohte et al., 2002; Zhang & Li, 2020; Kim et al., 2020). However, they suffer from approximation error and large memory costs. Xiao et al. (2021) propose the IDE method to train feedback spiking neural networks, which decouples the forward and backward procedures and avoids the common SNN training problems. However, all these methods require complex, non-spike-based computation during training. The few works focusing on training SNNs with spikes either are based on feedback alignment and limited to simple datasets (Guerguiev et al., 2017; Neftci et al., 2017; Samadi et al., 2017; O'Connor & Welling, 2016), or require special neuron models that must consider accumulated spikes for spike generation (Thiele et al., 2019b;a), which is impractical on neuromorphic hardware. Moreover, they are only applicable to feedforward architectures. Instead, we are the first to leverage spikes with common neuron models to train SNNs with feedback or feedforward structures. A comparison of the different methods is illustrated in Table 1.
3 PRELIMINARIES. We first introduce preliminaries about spiking neurons and the IDE training method. The basic idea of IDE (Xiao et al., 2021) is to identify the underlying equilibrium states of the FSNN computation, so that gradients can be calculated by implicit differentiation on the equilibrium state. We briefly introduce the conclusions on equilibrium states in Section 3.2 and the IDE method in Section 3.3. For more background, please refer to Appendix A. 3.1 SPIKING NEURAL NETWORK MODELS. Spiking neurons draw inspiration from the human brain and communicate with each other by spikes. Each neuron integrates information from input spike trains by maintaining a membrane potential governed by a differential equation, and generates an output spike once the membrane potential exceeds a threshold, after which the membrane potential is reset toward the resting potential. We consider the commonly used integrate-and-fire (IF) model and simple current model, whose discretized computational form is:

$u_i[t+0.5] = u_i[t] + \sum_j w_{ij} s_j[t] + b$, $\quad s_i[t+1] = H(u_i[t+0.5] - V_{th})$, $\quad u_i[t+1] = u_i[t+0.5] - (V_{th} - u_{rest})\, s_i[t+1]$, (1)

where $u_i[t]$ is the membrane potential of neuron $i$ at time step $t$, $s_i[t]$ is the binary output spike train of neuron $i$, $w_{ij}$ is the connection weight from neuron $j$ to neuron $i$, $b$ is the bias, $H$ is the Heaviside step function, $V_{th}$ is the firing threshold, and $u_{rest}$ is the resting potential. We use subtraction as the reset operation. $u_{rest}$ is usually taken as 0 in previous work, while we reconsider it in Section 4.3. 3.2 EQUILIBRIUM STATES OF FEEDBACK SPIKING NEURAL NETWORKS. Xiao et al. (2021) derive that the (weighted) average spiking rate during FSNN computation with common neuron models converges to an equilibrium state following a fixed-point equation, given convergent inputs.
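The discretized IF dynamics in Eq. (1) can be sketched in a few lines of NumPy. This is a minimal illustration: the weights, bias, and input spike trains below are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def if_neuron_step(u, s_in, w, b, v_th=1.0, u_rest=0.0):
    """One discrete IF update following Eq. (1): integrate input spikes,
    fire where the potential reaches the threshold, reset by subtraction."""
    u_mid = u + w @ s_in + b                      # u[t+0.5]
    s_out = (u_mid >= v_th).astype(float)         # Heaviside firing condition
    u_next = u_mid - (v_th - u_rest) * s_out      # subtractive reset
    return u_next, s_out

# Toy simulation: 2 neurons, 3 presynaptic inputs, 5 time steps.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(2, 3))
u = np.zeros(2)
for t in range(5):
    s_in = rng.integers(0, 2, size=3).astype(float)
    u, s_out = if_neuron_step(u, s_in, w, b=0.2)
```

Note that the membrane potential only loses $V_{th} - u_{rest}$ per spike rather than being clamped, which is the subtraction-reset convention the paper relies on for the equilibrium analysis.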
We focus on the conclusions for the discrete IF model under both single-layer and multi-layer feedback structures. The single-layer structure has one hidden layer of neurons with feedback connections on this layer. The update equation of the membrane potentials is:

$u[t+1] = u[t] + W s[t] + F x[t] + b - (V_{th} - u_{rest})\, s[t+1]$, (2)

where $u[t]$ and $s[t]$ are the vectors of membrane potentials and spikes of these neurons, $x[t]$ is the input at time step $t$, $W$ is the feedback weight matrix, and $F$ is the weight matrix from the inputs to these neurons. The average input and average firing rate are defined as $\bar{x}[t] = \frac{1}{t+1} \sum_{\tau=0}^{t} x[\tau]$ and $\alpha[t] = \frac{1}{t} \sum_{\tau=1}^{t} s[\tau]$, respectively. Define $\sigma(x) = \min(1, \max(0, x))$. The equilibrium state of the single-layer FSNN is described as follows (Xiao et al., 2021): if the average inputs converge to an equilibrium point $\bar{x}[t] \to x^*$, and there exists $\gamma < 1$ such that $\|W\|_2 \le \gamma V_{th}$, then the average firing rates of the FSNN with the discrete IF model converge to an equilibrium point $\alpha[t] \to \alpha^*$, which satisfies the fixed-point equation $\alpha^* = \sigma\left(\frac{1}{V_{th}}(W\alpha^* + Fx^* + b)\right)$. Note that they take $u_{rest} = 0$ in this conclusion; if we consider nonzero $u_{rest}$, the constraint and the fixed-point equation become $\|W\|_2 \le \gamma (V_{th} - u_{rest})$ and $\alpha^* = \sigma\left(\frac{1}{V_{th} - u_{rest}}(W\alpha^* + Fx^* + b)\right)$. The multi-layer structure, which has multiple layers with feedback connections from the last layer to the first, incorporates more non-linearity into the equilibrium fixed-point equation. The update equations of the membrane potentials are:

$u^1[t+1] = u^1[t] + W^1 s^N[t] + F^1 x[t] + b^1 - (V_{th} - u_{rest})\, s^1[t+1]$,
$u^l[t+1] = u^l[t] + F^l s^{l-1}[t+1] + b^l - (V_{th} - u_{rest})\, s^l[t+1]$, $\quad l = 2, \cdots, N$. (3)

The equilibrium state of the multi-layer FSNN with $u_{rest}$ is described as follows (Xiao et al.
, 2021): if the average inputs converge to an equilibrium point $\bar{x}[t] \to x^*$, and there exists $\gamma < 1$ such that $\|W^1\|_2 \|F^N\|_2 \cdots \|F^2\|_2 \le \gamma (V_{th} - u_{rest})^N$, then the average firing rates of the multi-layer FSNN with the discrete IF model converge to equilibrium points $\alpha^l[t] \to \alpha^{l*}$, which satisfy the fixed-point equations $\alpha^{1*} = f_1(f_N \circ \cdots \circ f_2(\alpha^{1*}), x^*)$ and $\alpha^{(l+1)*} = f_{l+1}(\alpha^{l*})$, where $f_1(\alpha, x) = \sigma\left(\frac{1}{V_{th} - u_{rest}}(W^1 \alpha + F^1 x + b^1)\right)$ and $f_l(\alpha) = \sigma\left(\frac{1}{V_{th} - u_{rest}}(F^l \alpha + b^l)\right)$.
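The single-layer equilibrium result above can be checked numerically: simulate the dynamics of Eq. (2) and compare the empirical average firing rate with the fixed point of $\alpha^* = \sigma\left(\frac{1}{V_{th}}(W\alpha^* + Fx^* + b)\right)$. The network sizes, random weights (rescaled so that $\|W\|_2 < V_{th}$), and constant input below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, v_th, T = 4, 3, 1.0, 20000
W = rng.normal(size=(n, n))
W *= 0.5 * v_th / np.linalg.norm(W, 2)   # enforce ||W||_2 = 0.5 * Vth < Vth
F = rng.normal(size=(n, m))
b = 0.1 * rng.normal(size=n)
x_star = rng.uniform(size=m)             # constant input, so the average input is x*

sigma = lambda z: np.clip(z, 0.0, 1.0)

# Simulate Eq. (2) with u_rest = 0 and accumulate spikes.
u, s, spike_sum = np.zeros(n), np.zeros(n), np.zeros(n)
for t in range(T):
    u = u + W @ s + F @ x_star + b       # u[t+0.5]
    s = (u >= v_th).astype(float)        # s[t+1]
    u -= v_th * s                        # subtractive reset
    spike_sum += s
alpha_sim = spike_sum / T                # empirical average firing rate

# Solve the fixed-point equation by iteration (a 0.5-contraction here).
alpha = np.zeros(n)
for _ in range(1000):
    alpha = sigma((W @ alpha + F @ x_star + b) / v_th)

print(np.max(np.abs(alpha_sim - alpha)))  # small, and shrinking as T grows
```

The fixed-point iteration converges because $\sigma$ is 1-Lipschitz and $\|W\|_2 / V_{th} < 1$, mirroring the contraction condition in the theorem.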
In this paper, the authors propose a method to train spiking neural networks (SNNs) with spike-based implicit differentiation on the equilibrium state. The main idea is to use spike-based computation instead of average firing rates to approximate the implicit differentiation of feedback spiking neural networks (FSNNs). To enable this idea and to further reduce the approximation error, the authors propose several techniques, such as adopting ternary spiking neuron couples and shifting the resting potential. The experimental results show that the proposed method can achieve high accuracy on several tasks, such as MNIST, CIFAR-10, and CIFAR-100, with fewer training time steps than existing methods.
SP:5456e53f2d1a0f4eda6dddc67ea65cb23cee6216
The paper aims at porting the IDE method into a spike-based and more biologically plausible version. The previous IDE method used firing rates rather than spikes for computation, although the Xiao et al. (NeurIPS 2021) reference had already addressed implementations in spiking neural networks. The authors analyze the approximation error that results from solving the implicit differentiation by spikes, and report a solution based on ternary spiking neurons that can be implemented with pairs of standard spiking neurons. In this way they achieve quite good performance on MNIST and CIFAR-10.
SP:5456e53f2d1a0f4eda6dddc67ea65cb23cee6216
Benchmarking Sample Selection Strategies for Batch Reinforcement Learning
1 INTRODUCTION. A key question in machine learning is how to select suitable training samples (Katharopoulos & Fleuret, 2018). Many prior works showed that an appropriate sample selection strategy, e.g., removing redundant data or selecting samples according to their hardness, usually significantly improves learning efficiency and final performance (Bengio et al., 2009; Schaul et al., 2015; Fan et al., 2017). Similarly, sample selection plays a crucial role in reinforcement learning (RL) (De Bruin et al., 2018). A notable example is the sample selection problem for experience replay (ER) in off-policy RL (Fedus et al., 2020), where an agent reuses stored experiences from a buffer while interacting with the environment. For example, Prioritized Experience Replay (PER) (Schaul et al., 2015), which samples high-error transitions more frequently, is now widely used in different state-of-the-art (SOTA) off-policy RL algorithms (Barth-Maron et al., 2018; Hessel et al., 2018; Kapturowski et al., 2018). Batch RL, also known as offline RL, refers to the problem of learning a near-optimal policy from a fixed offline buffer (Lange et al., 2012). Due to the wide availability of logged data and the increasing computing power, batch RL holds promise for successful real-world applications (Levine et al., 2020), especially for scenarios where collecting online data is time-consuming, dangerous, or unethical, e.g., robotics, self-driving cars, and medical treatments (Gulcehre et al., 2020). While most off-policy RL algorithms are applicable in the offline setting, they usually suffer from bootstrapping error (Fujimoto et al., 2018b; Kumar et al., 2019) due to out-of-distribution (OOD) state-action pairs. Different solutions have been proposed to mitigate this problem, e.g., adding constraints (Fujimoto et al., 2018b; Wu et al., 2019), imitating the behavior policy (Chen et al.
, 2019; Zolna et al., 2020), learning dynamics models (Yu et al., 2020; Kidambi et al., 2020; Argenson & Dulac-Arnold, 2020), incorporating uncertainties (Wu et al., 2021), learning ensembles (Agarwal et al., 2020), or learning pessimistic value functions (Kumar et al., 2020; Buckman et al., 2020; Jin et al., 2021). Unlike the wide application of PER in online off-policy RL, non-uniform sampling strategies are largely ignored in recent batch RL algorithms. Inspired by the success of PER (Schaul et al., 2015) in the online setting, one natural question to ask is: what is the counterpart of PER in batch RL? This problem is appealing for several reasons: (1) In some real-world applications, the size of the offline dataset keeps increasing even though we have no access to the real environment. For example, we may have ever-growing medical records from hospitals (Raghu, 2019), or recorded videos from dash cams (Yu et al., 2018). (2) The D4RL offline benchmark (Fu et al., 2020) shows that, for most existing methods, more samples do not necessarily lead to better performance; that is, a batch RL agent sometimes under-performs on a large combined buffer. Given the success achieved by PER in online RL, we are curious whether similar techniques could help to develop more robust batch RL agents (Fujimoto et al., 2020). Some prior works proposed different sample selection strategies in batch RL. For example, Optimal Sample Selection (OSS) (Rachelson et al., 2011) introduced a meta-learning algorithm which selects optimal samples via a cross-entropy search method for tree-based Fitted Q-Iteration (FQI) (Ernst et al., 2005) with a known dynamics model. Recently, Best-Action Imitation Learning (BAIL) (Chen et al., 2019) proposed to select high-performing samples with a learned value function for behavior cloning.
Another related line of research is to reweight sampled transitions. For example, Advantage-Weighted Regression (AWR) (Peng et al., 2019) and the Advantage-weighted Behavior Model (ABM) (Siegel et al., 2020) both used reward-weighted regression (Peters et al., 2010) to learn the policy. Further, Uncertainty Weighted Actor Critic (UWAC) (Wu et al., 2021) adopted a dropout-based uncertainty estimation method (Gal & Ghahramani, 2016) and reweighted samples according to their estimated uncertainties. However, it is unclear which sample selection strategy is preferred in batch RL, which demands more investigation. To this end, in this work, we study the sample selection problem in batch RL (De Bruin et al., 2018). We follow the PER framework by assigning samples different priorities (Schaul et al., 2015). Crudely, there are two types of metrics to evaluate sample importance. First, we can design a heuristic metric based on our prior knowledge, e.g., the temporal-difference (TD) error. Second, we can use an end-to-end approach to learn a metric for each sample; for example, we can use off-policy evaluation (OPE) methods (Voloshin et al., 2019; Fu et al., 2021) to evaluate the goodness of the current policy as the metric. However, existing OPE methods usually need to learn a model for each evaluation (Le et al., 2019), which makes the learning-based metric approach computationally expensive. Therefore, in this paper, we focus on the heuristic metric-based approach and leave the learning-based metric approach for future work. In particular, we benchmark six variants of PER based on different heuristic priority metrics in order to understand which sample selection strategy might be preferred in batch RL. 2 PRELIMINARIES. 2.1 BATCH REINFORCEMENT LEARNING. We consider the standard Markov Decision Process (MDP) (Puterman, 2014) $M = \langle S, A, T, r, \gamma \rangle$. $S$ and $A$ denote the state and action spaces.
$T(s'|s,a)$ and $r(s,a)$ represent the dynamics and reward function, and $\gamma \in [0,1)$ is the discount factor. A policy $\pi(a|s)$ defines a mapping from states to distributions over actions. The goal of an RL agent is to learn a policy $\pi(a|s)$ that maximizes the expected cumulative discounted reward $J(\pi) := \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$. The performance of the policy can be described by the Q-function $Q^\pi(s,a) := \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, a_0 = a\right]$ and the value function $V^\pi(s) := \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s\right]$, where $\mathbb{E}_\pi[\cdot]$ is the expectation when following policy $\pi$. Given the optimal Q-function $Q^*(s,a) = \max_\pi Q^\pi(s,a)$, we can derive an optimal policy as $\pi^*(a|s) = \arg\max_a Q^*(s,a)$ (Sutton & Barto, 2018). In (tabular) Q-learning, we solve for $Q^*$ by iterating the Optimality Bellman Operator $\mathcal{T}^*$, defined as $\mathcal{T}^* Q(s,a) \leftarrow r + \gamma \max_{a'} Q(s',a')$ (Bertsekas & Tsitsiklis, 1996). To solve problems with large state spaces, we can use a parameterized Q-function $Q_\theta(s,a)$ to approximate $Q^*$. In practice, we optimize the parameters by a $\mu$-weighted L2 projection $\Pi_\mu(Q)$ (Fu et al., 2019), which minimizes the empirical Bellman error loss: $\Pi_\mu(Q) = \min_\theta \mathbb{E}_{(s,a,r,s') \sim \mu}\left[(\mathcal{T}^* Q_\theta(s,a) - Q_\theta(s,a))^2\right]$. Batch RL, also known as offline RL, aims to learn a near-optimal policy from a fixed dataset $D$ (Lange et al., 2012), representing a series of time-step tuples $(s_t, a_t, r_t, s_{t+1})$. Furthermore, the dataset can be collected by agents with different policies from different control tasks, including non-RL policies such as human demonstrations (Levine et al., 2020). Some early works such as Fitted Q-Iteration (FQI) (Ernst et al., 2005) and Neural Fitted Q-Iteration (NFQ) (Riedmiller, 2005), which formulate the original RL problem as a sequence of supervised regression problems, are shown to be sample-efficient in solving various real-world problems (Pietquin et al.
, 2011; Cunha et al., 2015). On the other hand, some recent studies show that current deep off-policy RL algorithms usually fail on challenging batch RL problems due to bootstrapping error (Fujimoto et al., 2018b; Kumar et al., 2019). That is, an OOD action $a'$ might lead to unrecoverable over-estimation error through the max operator in the Bellman backup. The over-estimation problem is particularly detrimental in the offline setting, where the agent cannot interact with the real environment to get feedback to fix the estimation error (Kumar et al., 2020). 2.2 NON-UNIFORM SAMPLING WITH EXPERIENCE REPLAY. Experience replay (ER) (Lin, 1992) has been a de facto component of modern deep RL algorithms. By reusing previously collected experiences from the replay buffer, ER helps to reduce sample complexity and stabilize training in off-policy RL (Mnih et al., 2013; Lillicrap et al., 2015). For some real-world problems where collecting online data is expensive or time-consuming, e.g., robotics or self-driving cars, the ability to learn good policies from pre-collected data is crucial for successful real-world applications (Cabi et al., 2019). A number of works (Schaul et al., 2015; Andrychowicz et al., 2017; Liu et al., 2019; Sun et al., 2020; Fujimoto et al., 2020) show that applying different non-uniform sampling strategies in ER can significantly improve learning efficiency, especially for problems with many redundant transitions (Schaul et al., 2015) or sparse reward signals (Andrychowicz et al., 2017). A notable example is Prioritized Experience Replay (PER) (Schaul et al., 2015), where the probability of sampling a transition $(s_t, a_t, r_t, s_{t+1})$ is proportional to its absolute TD error. However, it is still an open question which priority metric is optimal for valuing the importance of samples (De Bruin et al., 2018). 3 RELATED WORK.
3.1 SAMPLE SELECTION WITH EXPERIENCE REPLAY . Many prior works have sought to analyze the mechanism of experience replay , both empirically ( De Bruin et al. , 2018 ; Fedus et al. , 2020 ) and theoretically ( Fujimoto et al. , 2020 ; Li et al. , 2021 ) . Similar to our work , ( De Bruin et al. , 2018 ) investigated a number of proxies , i.e. , age , TD error , and exploration noise , to decide which experience to store in the replay buffer and how to sample from the replay buffer . Likewise , ( Fu et al. , 2019 ) used a “ unit-testing ” framework to study Q-learning with function approximators and found that a sampling scheme with wider coverage improves performance . Further , ( Fedus et al. , 2020 ) conducted a systematic analysis of experience replay in Q-learning methods and provided two insights – ( 1 ) Increasing the buffer capacity is preferable , because it has a broader data coverage . ( 2 ) Decreasing the age of the oldest policy improves the performance , because it contains more high-quality on-policy data . While these insights help us to understand the mechanism of experience replay , they are less practical in the batch RL setting , where the given offline dataset is fixed ( Lange et al. , 2012 ) . A number of variants of ER have been introduced to further improve the learning efficiency ( Schaul et al. , 2015 ; Andrychowicz et al. , 2017 ; Novati & Koumoutsakos , 2019 ; Liu et al. , 2019 ; Sun et al. , 2020 ) . One of the most popular variants is the Prioritized Experience Replay ( PER ) ( Schaul et al. , 2015 ) , which proposed to use the absolute TD error |δ ( i ) | as the priority metric and the probability of sampling the i-th transition is : p ( i ) = pαi∑ j p α j , pi = |δ ( i ) |+ or pi = 1 rank ( i ) , ( 1 ) where α is a hyper-parameter , is a small positive constant to avoid zero priority , priority pi is the value of |δ ( i ) | or the inverse rank of |δ ( i ) | . In addition , Hindsight Experience Replay ( HER ) ( Andrychowicz et al. 
, 2017 ) proposed to re-label visited state as goal states to overcome hard exploration problems with sparse rewards . Competitive Experience Replay ( CER ) Liu et al . ( 2019 ) later introduced an automatic exploratory curriculum by formulating an exploration competition between two agents . On the other hand , Remember and Forget Experience Replay ( ReF-ER ) ( Novati & Koumoutsakos , 2019 ) classified samples as “ near-policy ” and “ far-policy ” by the importance weight ρ = π ( a|s ) /µ ( a|s ) between current policy π and the behavior policy µ , and compute gradients only with near-policy samples . Similarly , Attentive Experience Replay ( AER ) ( Sun et al. , 2020 ) selects samples according to the similarities between the transition state and current state . Recently , Loss-Adjusted Prioritized ( LAP ) experience replay ( Fujimoto et al. , 2020 ) built the connection between the non-uniform sampling scheme in PER and loss functions . It shows that any loss function L1 evaluated with uniform sampling ( i ∼ D1 ) is equivalent to another loss function L2 that is evaluated with non-uniformly sampled data ( i ∼ D2 ) : Ei∼D1 [ ∇QL1 ( δ ( i ) ) ] = Ei∼D2 [ pD1 ( i ) pD2 ( i ) ∇QL1 ( δ ( i ) ) ] = Ei∼D2 [ ∇QL2 ( δ ( i ) ) , ] ( 2 ) where δ ( i ) is the TD error of the i-th sample and the two loss functions follows ∇QL2 ( δ ( i ) ) = pD1 ( i ) pD2 ( i ) ∇QL1 ( δ ( i ) ) . Moreover , Valuable Experience Replay ( VER ) ( Li et al. , 2021 ) proved that the absolute TD error |δ ( i ) | is an upper-bound of different value metrics of experiences in Q-learning .
This paper empirically studies six variants of prioritized experience replay, typically used in online RL, in a batch RL setting. The comparison is performed using TD3BC on three D4RL MuJoCo benchmark environments, each with five datasets. The experiments study both performance and bootstrapping error. Among other things, it is shown that non-uniform sampling strategies are also of interest in a batch RL setting. The paper also discusses some shortcomings of these approaches and future directions.
SP:b2e35fe3a80f221b5a384f2a2ce66abd92275d63
Benchmarking Sample Selection Strategies for Batch Reinforcement Learning
1 INTRODUCTION. A key question in machine learning is how to select suitable training samples (Katharopoulos & Fleuret, 2018). Many prior works have shown that an appropriate sample selection strategy, e.g., removing redundant data or selecting samples according to their hardness, can significantly improve learning efficiency and final performance (Bengio et al., 2009; Schaul et al., 2015; Fan et al., 2017). Similarly, sample selection plays a crucial role in reinforcement learning (RL) (De Bruin et al., 2018). A notable example is the sample selection problem for experience replay (ER) in off-policy RL (Fedus et al., 2020), where an agent reuses stored experiences from a buffer while interacting with the environment. For example, Prioritized Experience Replay (PER) (Schaul et al., 2015), which samples high-error transitions more frequently, is now widely used in different state-of-the-art (SOTA) off-policy RL algorithms (Barth-Maron et al., 2018; Hessel et al., 2018; Kapturowski et al., 2018). Batch RL, also known as offline RL, refers to the problem of learning a near-optimal policy from a fixed offline buffer (Lange et al., 2012). Due to the wide availability of logged data and increasing computing power, batch RL holds promise for successful real-world applications (Levine et al., 2020), especially in scenarios where collecting online data is time-consuming, dangerous, or unethical, e.g., robotics, self-driving cars, and medical treatments (Gulcehre et al., 2020). While most off-policy RL algorithms are applicable in the offline setting, they usually suffer from bootstrapping error (Fujimoto et al., 2018b; Kumar et al., 2019) due to out-of-distribution (OOD) state-action pairs. Different solutions have been proposed to mitigate this problem, e.g., adding constraints (Fujimoto et al., 2018b; Wu et al., 2019), imitating the behavior policy (Chen et al.
, 2019; Zolna et al., 2020), learning dynamics models (Yu et al., 2020; Kidambi et al., 2020; Argenson & Dulac-Arnold, 2020), incorporating uncertainties (Wu et al., 2021), learning ensembles (Agarwal et al., 2020), or learning pessimistic value functions (Kumar et al., 2020; Buckman et al., 2020; Jin et al., 2021). Unlike its wide application in online off-policy RL, non-uniform sampling has been largely ignored in recent batch RL algorithms. Inspired by the success of PER (Schaul et al., 2015) in the online setting, one natural question to ask is: what is the counterpart of PER in batch RL? This problem is appealing for several reasons: (1) In some real-world applications, the size of the offline dataset keeps growing even though we have no access to the real environment. For example, we may have ever-growing medical records from hospitals (Raghu, 2019) or recorded videos from dash cams (Yu et al., 2018). (2) As the D4RL offline benchmark (Fu et al., 2020) shows, for most existing methods, more samples do not necessarily lead to better performance; that is, a batch RL agent sometimes under-performs on a large combined buffer. Given the success achieved by PER in online RL, we are curious whether a similar technique could help develop more robust batch RL agents (Fujimoto et al., 2020). Some prior works proposed different sample selection strategies in batch RL. For example, Optimal Sample Selection (OSS) (Rachelson et al., 2011) introduced a meta-learning algorithm which selects optimal samples according to a cross-entropy search method for tree-based Fitted Q-Iteration (FQI) (Ernst et al., 2005) with a known dynamics model. Recently, Best-Action Imitation Learning (BAIL) (Chen et al., 2019) proposed to select high-performing samples with a learned value function for behavior cloning.
Another related line of research is to reweight sampled transitions. For example, Advantage-Weighted Regression (AWR) (Peng et al., 2019) and Advantage-weighted Behavior Model (ABM) (Siegel et al., 2020) both used reward-weighted regression (Peters et al., 2010) to learn the policy. Further, Uncertainty Weighted Actor Critic (UWAC) (Wu et al., 2021) adopted a dropout-based uncertainty estimation method (Gal & Ghahramani, 2016) and reweighted samples according to their estimated uncertainties. However, it is unclear which sample selection strategy is preferred in batch RL, which demands more investigation. To this end, in this work, we study the sample selection problem in batch RL (De Bruin et al., 2018). We follow the PER framework by assigning samples different priorities (Schaul et al., 2015). Broadly, there are two types of metrics for evaluating sample importance. First, we can design a heuristic metric based on prior knowledge, e.g., the temporal-difference (TD) error. Second, we can use an end-to-end approach to learn a metric for each sample; for example, we can use off-policy evaluation (OPE) methods (Voloshin et al., 2019; Fu et al., 2021) to evaluate the goodness of the current policy as the metric. However, existing OPE methods usually need to learn a model for each evaluation (Le et al., 2019), which makes the learning-based approach computationally expensive. Therefore, in this paper, we focus on heuristic metrics and leave learning-based metrics for future work. In particular, we benchmark six variants of PER based on different heuristic priority metrics in order to understand which sample selection strategy might be preferred in batch RL. 2 PRELIMINARIES. 2.1 BATCH REINFORCEMENT LEARNING. We consider the standard Markov Decision Process (MDP) (Puterman, 2014) M = ⟨S, A, T, r, γ⟩. S and A denote the state and action spaces.
T(s′|s, a) and r(s, a) represent the dynamics and reward function, and γ ∈ [0, 1) is the discount factor. A policy π(a|s) defines a mapping from states to distributions over actions. The goal of an RL agent is to learn a policy π(a|s) that maximizes the expected cumulative discounted reward J(π) := E_π[Σ_{t=0}^∞ γ^t r_t]. The performance of the policy can be characterized by the Q-function Q^π(s, a) := E_π[Σ_{t=0}^∞ γ^t r_t | s_0 = s, a_0 = a] and the value function V^π(s) := E_π[Σ_{t=0}^∞ γ^t r_t | s_0 = s], where E_π[·] denotes the expectation when following the policy π. Given the optimal Q-function Q*(s, a) = max_π Q^π(s, a), we can derive an optimal policy as π*(a|s) = arg max_a Q*(s, a) (Sutton & Barto, 2018). In (tabular) Q-learning, we solve for Q* by iterating the Optimality Bellman Operator T*, defined as T*Q(s, a) ← r + γ max_{a′} Q(s′, a′) (Bertsekas & Tsitsiklis, 1996). To solve problems with a large state space, we can use a parameterized Q-function Q_θ(s, a) to approximate Q*. In practice, we optimize the parameters by a µ-weighted L2 projection Π_µ(Q) (Fu et al., 2019), which minimizes the empirical Bellman error loss: Π_µ(Q) = min_θ E_{(s,a,r,s′)∼µ}[(T*Q_θ(s, a) − Q_θ(s, a))²]. Batch RL, also known as offline RL, aims to learn a near-optimal policy from a fixed dataset D (Lange et al., 2012), representing a series of timestep tuples (s_t, a_t, r_t, s_{t+1}). Furthermore, the dataset can be collected by agents with different policies from different control tasks, including non-RL policies such as human demonstrations (Levine et al., 2020). Some early works such as Fitted Q-Iteration (FQI) (Ernst et al., 2005) and Neural Fitted Q-Iteration (NFQ) (Riedmiller, 2005), which formulate the original RL problem as a sequence of supervised regression problems, are shown to be sample efficient in solving various real-world problems (Pietquin et al.
, 2011; Cunha et al., 2015). On the other hand, some recent studies show that current deep off-policy RL algorithms usually fail on challenging batch RL problems due to bootstrapping error (Fujimoto et al., 2018b; Kumar et al., 2019). That is, an OOD action a′ might lead to unrecoverable over-estimation error through the max operator in the Bellman backup. The over-estimation problem is particularly detrimental in the offline setting, where the agent cannot interact with the real environment to obtain feedback that would correct the estimation error (Kumar et al., 2020). 2.2 NON-UNIFORM SAMPLING WITH EXPERIENCE REPLAY. Experience replay (ER) (Lin, 1992) has been a de facto component of modern deep RL algorithms. By reusing previously collected experiences from the replay buffer, ER helps to reduce sample complexity and stabilize training in off-policy RL (Mnih et al., 2013; Lillicrap et al., 2015). For real-world problems where collecting online data is expensive or time-consuming, e.g., robotics or self-driving cars, the ability to learn good policies from pre-collected data is crucial for successful applications (Cabi et al., 2019). A number of works (Schaul et al., 2015; Andrychowicz et al., 2017; Liu et al., 2019; Sun et al., 2020; Fujimoto et al., 2020) show that applying different non-uniform sampling strategies in ER can significantly improve learning efficiency, especially for problems with many redundant transitions (Schaul et al., 2015) or sparse reward signals (Andrychowicz et al., 2017). A notable example is Prioritized Experience Replay (PER) (Schaul et al., 2015), where the probability of sampling a certain transition (s_t, a_t, r_t, s_{t+1}) is proportional to its absolute TD error. However, it remains an open question which priority metric best values the importance of samples (De Bruin et al., 2018). 3 RELATED WORK.
3.1 SAMPLE SELECTION WITH EXPERIENCE REPLAY. Many prior works have sought to analyze the mechanism of experience replay, both empirically (De Bruin et al., 2018; Fedus et al., 2020) and theoretically (Fujimoto et al., 2020; Li et al., 2021). Similar to our work, De Bruin et al. (2018) investigated a number of proxies, e.g., age, TD error, and exploration noise, to decide which experiences to store in the replay buffer and how to sample from it. Likewise, Fu et al. (2019) used a “unit-testing” framework to study Q-learning with function approximators and found that a sampling scheme with wider coverage improves performance. Further, Fedus et al. (2020) conducted a systematic analysis of experience replay in Q-learning methods and provided two insights: (1) increasing the buffer capacity is preferable, because it gives broader data coverage; (2) decreasing the age of the oldest policy improves performance, because the buffer then contains more high-quality on-policy data. While these insights help us understand the mechanism of experience replay, they are less practical in the batch RL setting, where the given offline dataset is fixed (Lange et al., 2012). A number of variants of ER have been introduced to further improve learning efficiency (Schaul et al., 2015; Andrychowicz et al., 2017; Novati & Koumoutsakos, 2019; Liu et al., 2019; Sun et al., 2020). One of the most popular variants is Prioritized Experience Replay (PER) (Schaul et al., 2015), which proposed to use the absolute TD error |δ(i)| as the priority metric; the probability of sampling the i-th transition is: p(i) = p_i^α / Σ_j p_j^α, with p_i = |δ(i)| + ε or p_i = 1 / rank(i), (1) where α is a hyper-parameter, ε is a small positive constant to avoid zero priority, and the priority p_i is either the value of |δ(i)| (plus ε) or the inverse rank of |δ(i)|. In addition, Hindsight Experience Replay (HER) (Andrychowicz et al.
, 2017) proposed to re-label visited states as goal states to overcome hard exploration problems with sparse rewards. Competitive Experience Replay (CER) (Liu et al., 2019) later introduced an automatic exploratory curriculum by formulating an exploration competition between two agents. On the other hand, Remember and Forget Experience Replay (ReF-ER) (Novati & Koumoutsakos, 2019) classifies samples as “near-policy” and “far-policy” by the importance weight ρ = π(a|s) / µ(a|s) between the current policy π and the behavior policy µ, and computes gradients only with near-policy samples. Similarly, Attentive Experience Replay (AER) (Sun et al., 2020) selects samples according to the similarity between the transition state and the current state. Recently, Loss-Adjusted Prioritized (LAP) experience replay (Fujimoto et al., 2020) built a connection between the non-uniform sampling scheme in PER and loss functions. It shows that any loss function L1 evaluated with uniform sampling (i ∼ D1) is equivalent to another loss function L2 evaluated with non-uniformly sampled data (i ∼ D2): E_{i∼D1}[∇_Q L1(δ(i))] = E_{i∼D2}[(p_{D1}(i) / p_{D2}(i)) ∇_Q L1(δ(i))] = E_{i∼D2}[∇_Q L2(δ(i))], (2) where δ(i) is the TD error of the i-th sample and the two loss functions satisfy ∇_Q L2(δ(i)) = (p_{D1}(i) / p_{D2}(i)) ∇_Q L1(δ(i)). Moreover, Valuable Experience Replay (VER) (Li et al., 2021) proved that the absolute TD error |δ(i)| is an upper bound on different value metrics of experiences in Q-learning.
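As a concrete illustration of the proportional and rank-based priorities in Eq. (1), the sampling distribution can be sketched in a few lines of NumPy. The α and ε values below are illustrative defaults, not settings from this paper, and practical implementations typically use a sum-tree rather than recomputing the full distribution:

```python
import numpy as np

def per_probabilities(td_errors, alpha=0.6, eps=1e-2):
    """Proportional PER: p_i = (|delta_i| + eps)^alpha, normalized over the buffer."""
    priorities = (np.abs(td_errors) + eps) ** alpha
    return priorities / priorities.sum()

def rank_probabilities(td_errors, alpha=0.6):
    """Rank-based PER: p_i = 1 / rank(i), ranked by |delta_i| in descending order."""
    ranks = np.empty_like(td_errors)
    order = np.argsort(-np.abs(td_errors))      # largest error gets rank 1
    ranks[order] = np.arange(1, len(td_errors) + 1)
    priorities = (1.0 / ranks) ** alpha
    return priorities / priorities.sum()

rng = np.random.default_rng(0)
td = rng.normal(size=5)                          # toy TD errors for a 5-sample buffer
p = per_probabilities(td)
batch_idx = rng.choice(len(td), size=3, p=p)     # sample a prioritized mini-batch
```

Transitions with larger absolute TD error receive higher sampling probability under both variants; the rank-based form is less sensitive to outlier errors.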
This paper investigated the effect of non-uniform sampling in an offline RL setting. Using TD3BC (Fujimoto and Gu, 2021) as a backbone offline RL algorithm, the authors applied prioritized experience replay (PER) to the sampling of TD3BC with several variants of the priority metric, including the standard TD error, rank-based return, a pseudo-count using a hash table, and three other metrics. The authors argue that non-uniform sampling can be helpful in offline RL compared with the usual uniform sampling. They also found that no single metric consistently outperforms the others for prioritized sampling in offline RL settings.
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
1 INTRODUCTION. Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which many believe is fundamental to AI's understanding of the world (Bengio & LeCun, 2007). To achieve this goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder extracts representations from images, with each dimension corresponding to one factor. The generator (decoder) decodes the change of each factor into a different kind of image variation. With supervision, we can constrain each dimension of the representation to be sensitive only to the image variation caused by changing the corresponding factor. However, this kind of exhaustive supervision is often not available in real-world data. Typical unsupervised methods are based on a generative model to build the above encoder-generator framework, e.g., VAE (Kingma & Welling, 2014) provides both an encoder and a generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides a generator. To achieve disentangled representation during the training of the encoder and generator, typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020). However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, these unsupervised methods have been proved to have an infinite number of entangled solutions unless inductive bias is introduced (Locatello et al., 2019). Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al.
, 2020; Voynov & Babenko, 2020) show that, for GANs trained purely for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that some disentanglement property is embedded in the latent space of a pretrained GAN. These observations suggest that training the encoder and generator simultaneously may not be the best choice. We provide an alternative route to learn disentangled representation: fix the pretrained generator, and jointly discover the factors in the latent space of the generator and train the encoder to extract disentangled representations, as shown in Figure 1 (b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provides a novel contrastive learning view of disentangled representation learning and inspires us to propose a framework, Disentanglement via Contrast (DisCo), for disentangled representation learning. In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, the Navigator, to provide candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space in which we apply the contrastive loss. In addition to the above architectural innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, and (ii) a hard negatives flipping strategy for better optimization of the Contrastive Loss.
We evaluate DisCo on three major families of generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves state-of-the-art (SOTA) disentanglement performance compared to all previous discovery-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models. Our main contributions can be summarized as follows: (i) To the best of our knowledge, DisCo is the first unified framework for jointly learning disentangled representations and discovering the latent space of pretrained generative models by contrasting image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations, enabling the use of Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with SOTA disentangled representation learning and latent space discovery. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy. 2 RELATED WORK. Typical unsupervised disentanglement. There have been many studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). These methods achieve disentanglement via an extra regularization term, which often sacrifices generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing the aggregated posterior, and InfoGAN-based methods maximize the mutual information between latent factors and the related observations.
VAE-based methods achieve relatively good disentanglement performance but low-quality generation. InfoGAN-based methods have relatively high generation quality but poor disentanglement performance. Our method supplements generative models pretrained without any disentanglement regularization term with contrastive learning in the Variation Space, achieving both high-fidelity image generation and SOTA disentanglement. Interpretable directions in the latent space. Recently, researchers have been interested in discovering interpretable directions in the latent space of generative models without supervision, especially for GANs (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. Subsequent works focus on extracting directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers via the Jacobian matrix. All the above methods only discover interpretable directions in the latent space, except for Khrulkov et al. (2021), which also learns disentangled representations of generated images by training an extra encoder in a separate stage. However, none of these methods outperforms the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent space. Contrastive Learning.
Contrastive Learning has gained popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer, and push representations of views from different images (negative pairs) apart, using instance-level classification with a Contrastive Loss. Recently, Contrastive Learning has been extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning is suitable for disentanglement because: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020); (ii) it works in the representation space directly, without any extra layers for classification or regression. 3 DISENTANGLEMENT VIA CONTRAST. 3.1 OVERVIEW OF DISCO. From the contrastive view of the intuitive notion of disentangled representation learning, we propose DisCo, which leverages pretrained generative models to jointly discover the factors embedded as directions in the latent space and learn to extract disentangled representations. The benefits of leveraging a pretrained generative model are two-fold: (i) pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and for downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, serving as an inductive bias for unsupervised disentangled representation learning.
DisCo consists of a Navigator that provides candidate traversal directions in the latent space and a ∆-Contrastor that extracts the representation of image variations and builds a Variation Space based on the target disentangled representations. More specifically, the ∆-Contrastor is composed of two shared-weight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders. In the Variation Space, by pulling together the variation samples resulting from traversing the same direction and pushing away those resulting from traversing different directions, the Navigator learns to discover disentangled directions as factors, and the Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along a discovered direction causes a distinct image variation, which in turn causes a separate dimension of the disentangled representation to respond. Unlike VAE-based or InfoGAN-based methods, our disentangled representations and factors live in two separate spaces, which in practice does not affect the applications. Similar to typical methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models. Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G : Z → I, where Z ∈ R^L denotes the latent space and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z; e.g., in the linear case, A ∈ R^{L×D} is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated, where z is sampled from Z and z′ = z + A(d, ε), with d ∈ {1, ...
, D} and ε ∈ R, and A(d, ε) denotes the shift along the d-th direction with scalar ε. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair into a sample v ∈ V as v(z, d, ε) = |E(G(z + A(d, ε))) − E(G(z))|, (1) where V ∈ R^J_+ denotes the Variation Space. We then apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable the Navigator A to find the disentangled directions in the latent space Z. 3.2 DESIGN OF DISCO. We present the design details of DisCo, which include: (i) the collection of the query set Q = {q_i}_{i=1}^B, the positive key set K+ = {k_i^+}_{i=1}^N, and the negative key set K− = {k_i^−}_{i=1}^M, which are three subsets of the Variation Space V; (ii) the formulation of the Contrastive Loss. In line with our goal of contrasting the variations, samples from Q and K+ share the same traversal direction and should be pulled together, while samples from Q and K− have different directions and should be pushed away. Recall that each sample v in V is determined as v(z, d, ε). For the contrastive learning process, we construct the query sample q_i = v(z_i, d_i, ε_i), the key sample k_i^+ = v(z_i^+, d_i^+, ε_i^+), and the negative sample k_i^− = v(z_i^−, d_i^−, ε_i^−). Specifically, we randomly sample a direction index d̂ from a discrete uniform distribution U{1, D} for {d_i}_{i=1}^B and {d_i^+}_{i=1}^N to guarantee they are the same. We randomly sample {d_i^−}_{i=1}^M from the set of the remaining directions U{1, D} \ {d̂} individually and independently to cover the rest of the directions in the Navigator A. Note that a discovered direction should be independent of the starting point and the scale of the variation, which is in line with the notion of disentangled factors.
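A minimal numerical sketch of the variation sample in Eq. (1), using stand-in linear maps for the generator G and encoder E (the dimensions and the linear maps are toy assumptions; the actual components are deep networks):

```python
import numpy as np

rng = np.random.default_rng(0)
L, D, J = 8, 4, 6              # latent dim, number of directions, representation dim (toy)
A = rng.normal(size=(L, D))    # Navigator: each column is a candidate direction
G = rng.normal(size=(16, L))   # stand-in linear "generator" to a 16-dim "image"
E = rng.normal(size=(J, 16))   # stand-in linear "encoder"

def variation(z, d, eps):
    """v(z, d, eps) = |E(G(z + A(d, eps))) - E(G(z))|, normalized to a unit vector."""
    shift = eps * A[:, d]                      # traverse the d-th direction by eps
    v = np.abs(E @ (G @ (z + shift)) - E @ (G @ z))
    return v / np.linalg.norm(v)

z = rng.normal(size=L)
q = variation(z, d=0, eps=0.5)                 # a query sample in the Variation Space
```

With linear stand-ins, v is exactly independent of the starting point z and of the (positive) shift scale, which illustrates the independence property stated above; for real networks this only holds approximately and is what the contrastive objective encourages.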
Therefore, {z_i}_{i=1}^B, {z_i^+}_{i=1}^N, and {z_i^−}_{i=1}^M are all sampled from the latent space Z, and {ε_i}_{i=1}^B, {ε_i^+}_{i=1}^N, and {ε_i^−}_{i=1}^M are all sampled individually and independently from a shared continuous uniform distribution symmetric around zero. We normalize each sample in Q, K+, and K− to a unit vector to eliminate the impact of different shift scalars. Regarding the design of the Contrastive Loss, a well-known form is InfoNCE (van den Oord et al., 2018): L_NCE = −(1/|B|) Σ_{i=1}^B Σ_{j=1}^N log [ exp(q_i · k_j^+ / τ) / Σ_{s=1}^{N+M} exp(q_i · k_s / τ) ], (2) where τ is a temperature hyper-parameter and {k_i}_{i=1}^{N+M} = {k_i^+}_{i=1}^N ∪ {k_i^−}_{i=1}^M. InfoNCE originates from the BCE loss (Gutmann & Hyvärinen, 2010), and the BCE loss has been used to achieve contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We follow these works and use the BCE loss L_logits to reduce the computational cost: L_logits = −(1/|B|) Σ_{i=1}^B (l_i^+ + l_i^−), (3) with l_i^+ = Σ_{j=1}^N log σ(q_i · k_j^+ / τ), l_i^− = Σ_{m=1}^M log(1 − σ(q_i · k_m^− / τ)), (4) where σ denotes the sigmoid function, l_i^+ is the term for the positive samples, and l_i^− the term for the negative ones. Note that we use a shared positive set for the B different queries to reduce the computational cost. 3.3 KEY TECHNIQUES FOR DISCO. Entropy-based domination loss. By optimizing the Contrastive Loss, the Navigator A is optimized to find the disentangled directions in the latent space, and the Disentangling Encoder E is optimized to extract disentangled representations from images. To make the encoded representations even more disentangled, i.e., such that when traversing along one disentangled direction only one dimension of the encoded representation responds, we propose an entropy-based domination loss that encourages the corresponding samples in the Variation Space to be one-hot.
To implement the entropy-based domination loss, we first take the mean c of Q and K+ as
$$c = \frac{1}{|B + N|} \left( \sum_{i=1}^{B} q_i + \sum_{i=1}^{N} k_i^+ \right). \qquad (5)$$
We then compute the probability $p_i = \exp(c(i)) / \sum_{j=1}^{J} \exp(c(j))$, where c(i) is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss $\mathcal{L}_{ed}$ is calculated as
$$\mathcal{L}_{ed} = -\frac{1}{J} \sum_{j=1}^{J} p_j \log(p_j). \qquad (6)$$

Hard negatives flipping. Since the latent space of a generative model is a high-dimensional, complex manifold, many different directions carry the same semantic meaning. These directions with the same semantic meaning result in hard negatives during the optimization of the Contrastive Loss. The hard negatives here are different from the hard negatives in works on self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples are available. Here, our hard negatives are more likely to be “false” negatives, and we choose to flip these hard negatives into positives. Specifically, we use a threshold T to identify the hard negative samples, and use their similarity to the queries as their pseudo-labels:
$$\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log\big(1 - \sigma(\alpha_{ij})\big) + \sum_{\alpha_{ij} \ge T} \alpha_{ij} \log\big(\sigma(\alpha_{ij})\big), \qquad (7)$$
where $\hat{l}_i^-$ denotes the modified $l_i^-$, and $\alpha_{ij} = q_i \cdot k_j^- / \tau$. Therefore, the modified final BCELoss is:
$$\mathcal{L}_{logits\text{-}f} = -\frac{1}{|B|} \sum_{i=1}^{B} (l_i^+ + \hat{l}_i^-). \qquad (8)$$

Full objective. With the above two techniques, the full objective is:
$$\mathcal{L} = \mathcal{L}_{logits\text{-}f} + \lambda \mathcal{L}_{ed}, \qquad (9)$$
where λ is the weighting hyper-parameter for the entropy-based domination loss $\mathcal{L}_{ed}$.
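The two techniques can be sketched in NumPy as follows; the τ and T values are illustrative, and the toy inputs are assumptions of this sketch rather than the paper's settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def domination_loss(Q, Kp):
    """Eqs. (5)-(6): softmax over the mean variation vector c of Q and K+, then the
    entropy scaled by 1/J. Low when one dimension dominates (one-hot-like c)."""
    c = np.concatenate([Q, Kp], axis=0).mean(axis=0)       # Eq. (5)
    p = np.exp(c - c.max())
    p /= p.sum()                                           # p_i = exp(c_i) / sum_j exp(c_j)
    return float(-(p * np.log(p + 1e-12)).sum() / len(p))  # Eq. (6)

def flipped_negative_term(q, Kn, tau=0.1, T=5.0):
    """Eq. (7): negatives with similarity alpha >= T are flipped into pseudo-labelled
    positives instead of being pushed away."""
    alpha = q @ Kn.T / tau
    keep = alpha < T
    return float(np.log(1.0 - sigmoid(alpha[keep])).sum()
                 + (alpha[~keep] * np.log(sigmoid(alpha[~keep]))).sum())

one_hot = np.tile(10.0 * np.eye(4)[0], (3, 1))  # variation samples dominated by dim 0
uniform = np.ones((3, 4))                       # no dominant dimension
```

A dominated Variation Space yields a much smaller `domination_loss` than a uniform one, and flipping removes the large penalty a near-duplicate direction would otherwise incur as a "negative".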
This paper presents a framework to model disentangled directions for pretrained generative models. Such an approach mitigates the poor generation quality that arises when models are trained with additional regularization terms to force disentanglement. The underlying idea is contrastive: similar image variations are caused by changing the same factor, in contrast to the remaining image variations. The proposed framework is model-agnostic: it can be applied to GANs, VAEs, and flow models.
SP:e547b90d039328d391756b0657f9653e1a5c2d2b
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
1 INTRODUCTION

Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in observed data, which many believe to be fundamental to how AI can understand the world (Bengio & LeCun, 2007). To achieve this goal, as shown in Figure 1(a), we need an encoder and a generator. The encoder extracts representations from images, with each dimension corresponding to one individual factor. The generator (decoder) decodes the change of each factor into a different kind of image variation. With supervision, we can constrain each dimension of the representation to be sensitive to only one kind of image variation, caused by changing the corresponding factor. However, this kind of exhaustive supervision is often not available for real-world data. Typical unsupervised methods build the above encoder-generator framework on a generative model, e.g., a VAE (Kingma & Welling, 2014) provides both encoder and generator, and a GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides a generator. During the training of the encoder and generator, the typical methods rely on an additional disentanglement regularization term to achieve disentangled representation, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020). However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, these unsupervised methods have been proved to have an infinite number of entangled solutions when no inductive bias is introduced (Locatello et al., 2019). Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs trained purely for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that some disentanglement property is embedded in the latent space of a pretrained GAN. The above observations suggest that training the encoder and generator simultaneously may not be the best choice. We provide an alternative route to learning disentangled representation: fix the pretrained generator, jointly discover the factors in the latent space of the generator, and train the encoder to extract disentangled representations, as shown in Figure 1(b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provides a novel contrastive learning view of disentangled representation learning and inspires us to propose a framework, Disentanglement via Contrast (DisCo), for disentangled representation learning. In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, the Navigator, to provide candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architectural innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, and (ii) a hard negatives flipping strategy for better optimization of the Contrastive Loss.
We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves state-of-the-art (SOTA) disentanglement performance compared to all previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models. Our main contributions can be summarized as follows: (i) To the best of our knowledge, DisCo is the first unified framework for jointly learning disentangled representations and discovering the latent space of pretrained generative models by contrasting image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations, enabling Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with SOTA disentangled representation learning and latent-space discovery. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy.

2 RELATED WORK

Typical unsupervised disentanglement. There have been many studies on unsupervised disentangled representation learning based on VAEs (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). These methods achieve disentanglement via an extra regularization, which often sacrifices generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing the aggregated posterior, and InfoGAN-based methods maximize the mutual information between latent factors and related observations.
VAE-based methods achieve relatively good disentanglement performance but low-quality generation. InfoGAN-based methods have relatively high generation quality but poor disentanglement performance. Our method supplements generative models pretrained without a disentanglement regularization term with contrastive learning in the Variation Space, achieving both high-fidelity image generation and SOTA disentanglement.

Interpretable directions in the latent space. Recently, researchers have been interested in discovering interpretable directions in the latent space of generative models without supervision, especially for GANs (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. Subsequent works focus on extracting the directions from a specific layer of a GAN. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers via the Jacobian matrix. All of the above methods only discover interpretable directions in the latent space, except for Khrulkov et al. (2021), which also learns disentangled representations of generated images by training an extra encoder in a separate stage. However, none of these methods outperform the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent space.

Contrastive Learning.
Contrastive Learning has gained popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring the representations of different views of the same image (positive pairs) closer, and push the representations of views from different images (negative pairs) apart using instance-level classification with a Contrastive Loss. Recently, Contrastive Learning has been extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning is suitable for disentanglement because: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020), and (ii) it works in the representation space directly, without any extra layers for classification or regression.

3 DISENTANGLEMENT VIA CONTRAST

3.1 OVERVIEW OF DISCO

From the contrastive view of the intuitive notion of disentangled representation learning, we propose DisCo to leverage pretrained generative models to jointly discover the factors embedded as directions in the latent space of the generative models and learn to extract disentangled representations. The benefits of leveraging a pretrained generative model are two-fold: (i) pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and for downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, serving as an inductive bias for unsupervised disentangled representation learning.
DisCo consists of a Navigator, which provides candidate traversal directions in the latent space, and a ∆-Contrastor, which extracts the representation of image variations and builds a Variation Space based on the target disentangled representations. More specifically, the ∆-Contrastor is composed of two shared-weight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders. In the Variation Space, by pulling together the variation samples resulting from traversing the same direction and pushing away the ones resulting from traversing different directions, the Navigator learns to discover disentangled directions as factors, and the Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, which in turn cause separate dimensions of the disentangled representations to respond. Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors live in two separate spaces, which in practice does not affect the applications. Similar to the typical methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models. Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G : Z → I, where Z ∈ ℝ^L denotes the latent space and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z; e.g., in the linear case, A ∈ ℝ^{L×D} is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated, where z is sampled from Z and z′ = z + A(d, ε), where d ∈ {1, ...
, D} and ε ∈ ℝ, and A(d, ε) denotes the shift along the d-th direction with scalar ε. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair to a sample v ∈ V as
$$v(z, d, \varepsilon) = \left| E\big(G(z + A(d, \varepsilon))\big) - E\big(G(z)\big) \right|, \qquad (1)$$
where $V \in \mathbb{R}_+^J$ denotes the Variation Space. Then we apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable the Navigator A to find the disentangled directions in the latent space Z.

3.2 DESIGN OF DISCO

We present the design details of DisCo, which include: (i) the collection of the query set $Q = \{q_i\}_{i=1}^{B}$, the positive key set $K^+ = \{k_i^+\}_{i=1}^{N}$, and the negative key set $K^- = \{k_i^-\}_{i=1}^{M}$, which are three subsets of the Variation Space V; (ii) the formulation of the Contrastive Loss. According to our goal of contrasting the variations, the samples from Q and K+ share the same traversal direction and should be pulled together, while the samples from Q and K− have different directions and should be pushed away. Recall that each sample v in V is determined as v(z, d, ε). To achieve the contrastive learning process, we construct the query sample $q_i = v(z_i, d_i, \varepsilon_i)$, the key sample $k_i^+ = v(z_i^+, d_i^+, \varepsilon_i^+)$, and the negative sample $k_i^- = v(z_i^-, d_i^-, \varepsilon_i^-)$. Specifically, we randomly sample a direction index $\hat{d}$ from a discrete uniform distribution U{1, D} for $\{d_i\}_{i=1}^{B}$ and $\{d_i^+\}_{i=1}^{N}$ to guarantee they are the same. We randomly sample $\{d_i^-\}_{i=1}^{M}$ from the set of the remaining directions $U\{1, D\} \setminus \{\hat{d}\}$ individually and independently to cover the rest of the directions in the Navigator A. Note that the discovered direction should be independent of the starting point and the scale of variation, which is in line with the disentangled factors.
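A linear Navigator and the variation vector of Eq. (1) can be sketched as follows; the generator and encoder here are random toy maps standing in for the pretrained G and the Disentangling Encoder E, and all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L, D, J = 16, 6, 8              # latent dim, candidate directions, representation dim

A = rng.normal(size=(L, D))     # linear Navigator: column d is the d-th candidate direction

def shift(d, eps):
    """A(d, eps): a shift of scalar eps along the d-th candidate direction."""
    return eps * A[:, d]

W_g = rng.normal(size=(32, L))  # toy stand-in for the pretrained generator G
W_e = rng.normal(size=(J, 32))  # toy stand-in for the Disentangling Encoder E
G = lambda z: np.tanh(W_g @ z)
E = lambda x: W_e @ x

def variation(z, d, eps):
    """Eq. (1): v(z, d, eps) = |E(G(z + A(d, eps))) - E(G(z))|."""
    return np.abs(E(G(z + shift(d, eps))) - E(G(z)))

z = rng.normal(size=L)
v = variation(z, d=2, eps=0.5)  # one sample of the Variation Space
```

Note that a zero shift gives a zero variation vector by construction, and the element-wise absolute value keeps V in the non-negative orthant, matching the definition above.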
Therefore, $\{z_i\}_{i=1}^{B}$, $\{z_i^+\}_{i=1}^{N}$, and $\{z_i^-\}_{i=1}^{M}$ are all sampled from the latent space Z, and $\{\varepsilon_i\}_{i=1}^{B}$, $\{\varepsilon_i^+\}_{i=1}^{N}$, and $\{\varepsilon_i^-\}_{i=1}^{M}$ are all sampled from a shared continuous uniform distribution U[−, ] individually and independently. We normalize each sample in Q, K+, and K− to a unit vector to eliminate the impact caused by different shift scalars. For the design of the Contrastive Loss, a well-known form is InfoNCE (van den Oord et al., 2018):
$$\mathcal{L}_{NCE} = -\frac{1}{|B|} \sum_{i=1}^{B} \sum_{j=1}^{N} \log \frac{\exp(q_i \cdot k_j^+ / \tau)}{\sum_{s=1}^{N+M} \exp(q_i \cdot k_s / \tau)}, \qquad (2)$$
where τ is a temperature hyper-parameter and $\{k_i\}_{i=1}^{N+M} = \{k_i^+\}_{i=1}^{N} \cup \{k_i^-\}_{i=1}^{M}$. InfoNCE originates from the BCELoss (Gutmann & Hyvärinen, 2010), and the BCELoss has been used to achieve contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We follow these works and use the BCELoss $\mathcal{L}_{logits}$ to reduce computational cost:
$$\mathcal{L}_{logits} = -\frac{1}{|B|} \sum_{i=1}^{B} (l_i^- + l_i^+), \qquad (3)$$
$$l_i^+ = \sum_{j=1}^{N} \log \sigma(q_i \cdot k_j^+ / \tau), \qquad l_i^- = \sum_{m=1}^{M} \log\big(1 - \sigma(q_i \cdot k_m^- / \tau)\big), \qquad (4)$$
where σ denotes the sigmoid function, $l_i^+$ denotes the part for the positive samples, and $l_i^-$ the part for the negative ones. Note that we use a shared positive set for the B different queries to reduce computational cost.

3.3 KEY TECHNIQUES FOR DISCO

Entropy-based domination loss. By optimizing the Contrastive Loss, the Navigator A is optimized to find the disentangled directions in the latent space, and the Disentangling Encoder E is optimized to extract disentangled representations from images. To make the encoded representations more disentangled, i.e., so that only one dimension of the encoded representation responds when traversing along one disentangled direction, we propose an entropy-based domination loss that encourages the corresponding samples in the Variation Space to be one-hot.
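For reference, the InfoNCE form of Eq. (2) can be written directly as a NumPy sketch (toy unit vectors; a numerically stable log-sum-exp replaces the naive denominator):

```python
import numpy as np

def info_nce(Q, Kp, Kn, tau=0.1):
    """Eq. (2): -(1/|B|) sum_i sum_j log( exp(q_i.k_j^+/tau) / sum_s exp(q_i.k_s/tau) ),
    where s runs over the union of K+ and K-."""
    K = np.concatenate([Kp, Kn], axis=0)  # {k_s}, s = 1..N+M
    logits = Q @ K.T / tau                # (B, N+M)
    m = logits.max(axis=1, keepdims=True)
    log_den = (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))[:, 0]
    pos = Q @ Kp.T / tau                  # (B, N)
    return float(-(pos - log_den[:, None]).sum(axis=1).mean())

# toy one-hot variation vectors: aligned positives should score lower
Q = np.eye(4)[:2]
loss_aligned = info_nce(Q, np.eye(4)[:2], np.eye(4)[2:])
loss_swapped = info_nce(Q, np.eye(4)[2:], np.eye(4)[:2])
```

The BCELoss variant used in the paper avoids the shared softmax denominator, which is where the computational saving comes from.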
To implement the entropy-based domination loss, we first take the mean c of Q and K+ as
$$c = \frac{1}{|B + N|} \left( \sum_{i=1}^{B} q_i + \sum_{i=1}^{N} k_i^+ \right). \qquad (5)$$
We then compute the probability $p_i = \exp(c(i)) / \sum_{j=1}^{J} \exp(c(j))$, where c(i) is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss $\mathcal{L}_{ed}$ is calculated as
$$\mathcal{L}_{ed} = -\frac{1}{J} \sum_{j=1}^{J} p_j \log(p_j). \qquad (6)$$

Hard negatives flipping. Since the latent space of a generative model is a high-dimensional, complex manifold, many different directions carry the same semantic meaning. These directions with the same semantic meaning result in hard negatives during the optimization of the Contrastive Loss. The hard negatives here are different from the hard negatives in works on self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples are available. Here, our hard negatives are more likely to be “false” negatives, and we choose to flip these hard negatives into positives. Specifically, we use a threshold T to identify the hard negative samples, and use their similarity to the queries as their pseudo-labels:
$$\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log\big(1 - \sigma(\alpha_{ij})\big) + \sum_{\alpha_{ij} \ge T} \alpha_{ij} \log\big(\sigma(\alpha_{ij})\big), \qquad (7)$$
where $\hat{l}_i^-$ denotes the modified $l_i^-$, and $\alpha_{ij} = q_i \cdot k_j^- / \tau$. Therefore, the modified final BCELoss is:
$$\mathcal{L}_{logits\text{-}f} = -\frac{1}{|B|} \sum_{i=1}^{B} (l_i^+ + \hat{l}_i^-). \qquad (8)$$

Full objective. With the above two techniques, the full objective is:
$$\mathcal{L} = \mathcal{L}_{logits\text{-}f} + \lambda \mathcal{L}_{ed}, \qquad (9)$$
where λ is the weighting hyper-parameter for the entropy-based domination loss $\mathcal{L}_{ed}$.
The paper proposes a novel representation learning technique to disentangle the latent space of pre-trained generative models by discovering semantically meaningful directions in them. The method trains a navigator and a delta-contrastor network, which consists of two encoders sharing weights. First, random samples are perturbed along the directions obtained from the navigator. The perturbed vectors are then decoded with the pre-trained generator and encoded again, and the difference between the two samples is taken. The output lies in the variation space, where a contrastive learning technique clusters together the samples that were perturbed along the same direction.
SP:e547b90d039328d391756b0657f9653e1a5c2d2b
Self-Supervised Graph Neural Networks for Improved Electroencephalographic Seizure Analysis
1 INTRODUCTION . Seizures are among the most common neurological emergencies in the world ( Strein et al. , 2019 ) . Seizures can be chronic as in the case of epilepsy , a neurological disease affecting 50 million people worldwide ( WHO , 2019 ) . Clinically , definitive detection of a seizure is only the first step in seizure diagnosis . An important subsequent step is to classify seizures into finer-grained types––such as focal versus generalized seizures––for identifying epilepsy syndromes , targeted therapies , and eligibility for epilepsy surgery ( Fisher et al. , 2017 ) . Scalp electroencephalography ( or “ EEG ” ) plays a critical role in seizure detection and classification . Clinically , EEG-based seizure detection and classification are performed by a trained EEG reader who visually examines a patient ’ s EEG signals over time periods ranging from hours to days . However , this manual analysis is extremely resource- and time-intensive , and thus automated algorithms could greatly accelerate seizure diagnosis and improve outcomes for severely ill patients . Figure 1 : ( a ) EEG electrode placement in the standard 10-20 system . ( b ) Distance-based EEG graph . ( c ) An example correlation-based EEG graph . ( d ) Overview of our methods . The inputs to the models are the EEG graphs , where each node feature corresponds to the preprocessed EEG signals in the respective channel . Self-edges are not shown for better visualization . Although a large number of studies have attempted automated seizure detection ( Rasheed et al. , 2020 ; Siddiqui et al. , 2020 ; Shoeibi et al. , 2021 ; O ’ Shea et al. , 2020 ; Saab et al. , 2020 ) or seizure classification ( Raghu et al. , 2020 ; Asif et al. , 2020 ; Iesmantas & Alzbutas , 2020 ; Ahmedt-Aristizabal et al. , 2020 ; Roy et al. , 2019 ) , several challenges remain largely unaddressed . 
First, most recent studies use convolutional neural networks (CNNs) that assume Euclidean structures in EEG signals or spectrograms (Rasheed et al., 2020; Shoeibi et al., 2021; Raghu et al., 2020; Asif et al., 2020; Iesmantas & Alzbutas, 2020; Ahmedt-Aristizabal et al., 2020; Roy et al., 2019; O'Shea et al., 2020; Saab et al., 2020). However, the assumption of Euclidean structure ignores the natural geometry of EEG electrodes and the connectivity of brain networks. EEGs are measured by electrodes placed on a manifold (i.e., the patient's scalp) (Figure 1a), and thus have inherent non-Euclidean structure. Graphs are a data structure that can represent complex, non-Euclidean data (Chami et al., 2020; Bronstein et al., 2017), and graph theory has been used extensively to model brain networks (Bullmore & Sporns, 2009). We therefore hypothesize that graph-based modeling approaches can better represent the inherent non-Euclidean structure of EEGs in a manner that improves both the performance and the clinical utility of seizure detection and classification models. Although traditional graph theory has been used (Supriya et al., 2021), only a few deep learning studies have modeled EEGs as graphs for seizure detection. However, these graph-based studies were limited to nonpublic (Covert et al., 2019) or small datasets (Craley et al., 2019; Li et al., 2021), and did not leverage modern self-supervised approaches or examine seizure classification (Cisotto et al., 2020; Zhao et al., 2021; Li et al., 2021). Second, certain seizure types (e.g., clonic seizures) are rare by nature. Training machine learning models that perform well on these rarer seizure classes using traditional supervised learning approaches is challenging, which could explain the performance gap between majority and minority seizure types in prior studies (Raghu et al., 2020; Iesmantas & Alzbutas, 2020; Ahmedt-Aristizabal et al., 2020). Several studies have investigated an alternative, self-supervised training strategy (Banville et al., 2020; Mohsenvand et al., 2020; Kostas et al., 2021; Martini et al., 2021; Xu et al., 2020), but they did not model EEGs as graphs or address automated seizure classification. Prior work in computer vision has shown that self-supervised pre-training significantly improves model performance on data with imbalanced labels (Yang & Xu, 2020; Liu et al., 2021). Hence, we hypothesize that self-supervised pre-training can help improve our graph model's performance on rare seizure types. Moreover, a large portion of EEG signals generally does not contain seizures; a self-supervised pre-training strategy allows the model to leverage the abundant non-seizure EEGs that are readily available in the dataset. Lastly, for seizure detection and classification models, the ability not only to provide a single prediction across all EEG channels but also to offer interpretability and to localize seizures would be clinically useful for informing treatment strategy. While prior studies (Saab et al., 2020; Covert et al., 2019) have shown qualitative visualizations for model interpretability, none have quantitatively assessed a model's ability to localize seizures. In this work, we aim to address these limitations of prior automated seizure detection and classification studies. First, we propose a graph-based modeling approach for EEG-based seizure detection and classification. Specifically, we propose two EEG graph structures that capture EEG sensor geometry (Figure 1b) or dynamic brain connectivity (Figure 1c), and we extend the Diffusion Convolutional Recurrent Neural Network (DCRNN) (Li et al., 2018), an RNN with graph diffusion convolutions, to model the spatiotemporal dependencies in EEGs (Figure 1d).
Second, we improve DCRNN performance using a self-supervised pre-training strategy that predicts the preprocessed EEG signals for the next time period, without requiring additional data or labels. Finally, we propose quantitative metrics to assess our model's ability to localize seizures. In summary: • We propose two EEG graph structures that capture (1) the natural geometry of EEG sensors or (2) dynamic connectivity in the brain, and show that building a recurrent graph neural network (GNN) on these representations yields models for seizure detection and classification that outperform previous approaches on a large public dataset (5,499 EEGs). • We propose a self-supervised pre-training strategy that further improves our recurrent GNN model's performance, particularly on rare seizure types. To our knowledge, our study is the first to combine graph-based modeling and self-supervised pre-training for EEGs. By leveraging graph structure and self-supervision, our method achieves 0.875 Area Under the Receiver Operating Characteristic Curve (AUROC) on seizure detection and a 0.749 weighted F1-score on seizure classification, outperforming previous approaches on both tasks on this large public dataset. Moreover, our self-supervised pre-training method substantially improves the classification of rare seizure types (e.g., a 47-point increase in combined tonic seizure accuracy over baselines). • We propose a quantitative model interpretability analysis that can be used to assess a model's ability to localize seizures, which is critical to determining the course of treatment for seizures. We show that by leveraging graph structure and self-supervision, our method precisely localizes 25.4% of focal seizures, a 21.9-point improvement over a prior state-of-the-art CNN.
Finally, by displaying the identified seizure regions on raw EEG signals and EEG graphs, our approach could provide valuable insights that support more effective seizure diagnosis in real-world clinical settings.

2 METHODS

2.1 SEIZURE DETECTION AND CLASSIFICATION PROBLEM FORMULATION

The goal of seizure detection is to predict whether a seizure exists within an EEG clip, and the goal of seizure classification is to predict the seizure type given a seizure EEG clip. Following a prior study (Saab et al., 2020), we examine our model's capability for fast and slow detection and classification over 12-s and 60-s EEG clips, respectively.

2.2 GRAPH-BASED MODELING FOR EEGS

2.2.1 REPRESENTING EEGS AS GRAPHS

We represent an EEG clip as a graph G = {V, E, W}, where V denotes the set of nodes (i.e., EEG electrodes/channels), E denotes the set of edges, and W is the adjacency matrix. We propose the following two methods of constructing the EEG graph.

Distance graph. To represent the natural geometry of EEG electrodes, we compute the edge weight $W_{ij}$ by applying a thresholded Gaussian kernel (Shuman et al., 2013) to the pairwise Euclidean distance between $v_i$ and $v_j$, i.e.,
$$W_{ij} = \exp\left(-\frac{\mathrm{dist}(v_i, v_j)^2}{\sigma^2}\right) \text{ if } \mathrm{dist}(v_i, v_j) \le \kappa, \text{ else } 0.$$
Here, $\mathrm{dist}(v_i, v_j)$ is the Euclidean distance between electrodes $v_i$ and $v_j$ according to the standard 10-20 EEG electrode placement (Jasper, 1958), σ is the standard deviation of the distances, and κ is the threshold for sparsity. This results in a universal undirected, weighted graph shared by all EEG clips. Based on preliminary experiments and EEG domain knowledge, we chose κ = 0.9 because it results in a reasonable graph that also resembles the EEG montages (longitudinal bipolar and transverse bipolar) widely used clinically (Acharya et al., 2016). Figure 1b shows the distance graph with κ = 0.9.
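The thresholded Gaussian kernel can be sketched as below; the 2-D coordinates are a toy layout, whereas the real model would use standard 10-20 electrode positions.

```python
import numpy as np

def distance_graph(coords, kappa=0.9):
    """W_ij = exp(-dist(v_i, v_j)^2 / sigma^2) if dist(v_i, v_j) <= kappa, else 0,
    where sigma is the standard deviation of the pairwise distances."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))             # (n, n) pairwise distances
    sigma = dist[np.triu_indices_from(dist, k=1)].std()  # spread of the distances
    W = np.exp(-dist ** 2 / sigma ** 2)
    W[dist > kappa] = 0.0                                # thresholding for sparsity
    return W

coords = np.array([[0.0, 0.0], [0.5, 0.0], [2.0, 0.0]])  # toy electrode layout
W = distance_graph(coords)
```

Because the kernel depends only on electrode positions, this adjacency is computed once and shared by every clip, matching the "universal undirected, weighted graph" above.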
In Appendix K, we explore different values of κ as well as constructing the distance graph using a Gaussian kernel with a pre-specified bandwidth.

Correlation graph. To capture dynamic brain connectivity, we define the edge weight $W_{ij}$ as the absolute value of the normalized cross-correlation between the preprocessed signals in $v_i$ and $v_j$. To introduce sparsity to the graph, only the edges whose weights are among the top-τ neighbors of each node are kept (plus self-edges), i.e.,
$$W_{ij} = |X_{:,i,:} \ast X_{:,j,:}| \text{ if } v_j \in \mathcal{N}(v_i), \text{ else } 0.$$
Here, $X_{:,i,:}$ and $X_{:,j,:}$ are the preprocessed signals in $v_i$ and $v_j$, ∗ represents the normalized cross-correlation, and $\mathcal{N}(v_i)$ represents the top-τ neighbors of $v_i$. This method results in a unique directed, weighted graph for each input EEG clip. Figure 1c shows an example correlation graph with τ = 3.

2.2.2 GRAPH NEURAL NETWORK

We adapt DCRNN (Li et al., 2018), a recurrent neural network with graph diffusion convolutions, to model the spatiotemporal dependencies in EEG signals. DCRNN was initially developed for traffic forecasting, where the dynamics of traffic flow are modeled as a diffusion process. Similarly, we can also model the spatial dependency in EEG signals as a diffusion process, because an electrode can be influenced more by electrodes in its anatomical proximity (measured by distance) (Acharya et al., 2016) or functional proximity (measured by correlation) (Sakkalis, 2011). Specifically, the diffusion process is characterized by a bidirectional random walk on a directed graph G, which results in the following diffusion convolution (Li et al., 2018):
$$X_{:,m} \star_{\mathcal{G}} f_\theta = \sum_{k=0}^{K-1} \left( \theta_{k,1} (D_O^{-1} W)^k + \theta_{k,2} (D_I^{-1} W^\intercal)^k \right) X_{:,m} \quad \text{for } m \in \{1, \dots, M\} \qquad (1)$$
where $X \in \mathbb{R}^{N \times M}$ is the preprocessed EEG clip at time step $t \in \{1, \dots, T\}$ with N nodes and M features, $f_\theta$ is the convolution filter with parameters $\theta \in \mathbb{R}^{K \times 2}$, $D_O$ and $D_I$ are the out-degree and in-degree diagonal matrices of the graph, respectively, $D_O^{-1} W$ and $D_I^{-1} W^\intercal$ are the state transition matrices of the outward and inward diffusion processes, respectively, and K is the maximum number of diffusion steps. For undirected graphs, the diffusion convolution is similar to the ChebNet spectral graph convolution (Defferrard et al., 2016) up to a constant scaling factor, and can thus be computed using stable Chebyshev polynomial bases as follows (Li et al., 2018):
$$X_{:,m} \star_{\mathcal{G}} f_\theta = \Phi \left( \sum_{k=0}^{K-1} \theta_k \Lambda^k \right) \Phi^\intercal X_{:,m} = \sum_{k=0}^{K-1} \theta_k L^k X_{:,m} = \sum_{k=0}^{K-1} \tilde{\theta}_k T_k(\tilde{L}) X_{:,m} \quad \text{for } m \in \{1, \dots, M\} \qquad (2)$$
where $T_0(x) = 1$, $T_1(x) = x$, and $T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)$ for $k \ge 2$ are the bases of the Chebyshev polynomial, $L = D^{-\frac{1}{2}} (D - W) D^{-\frac{1}{2}} = \Phi \Lambda \Phi^\intercal$ is the normalized graph Laplacian, and $\tilde{L} = \frac{2}{\lambda_{\max}} L - I$ is the scaled graph Laplacian, mapping eigenvalues from $[0, \lambda_{\max}]$ to $[-1, 1]$. We use Equation 1 for the directed correlation graphs, and Equation 2 for the undirected distance graph. Next, to model the temporal dependency in EEGs, we employ Gated Recurrent Units (GRUs) (Cho et al., 2014), a variant of RNN with a gating mechanism. Specifically, the matrix multiplications in the GRU are replaced with diffusion convolutions (or ChebNet spectral graph convolutions for the undirected distance graph) (Li et al., 2018), allowing spatiotemporal modeling of EEG signals (referred to as "DCGRU"):
$$r^{(t)} = \sigma(\Theta_r \star_{\mathcal{G}} [X^{(t)}, H^{(t-1)}] + b_r), \qquad u^{(t)} = \sigma(\Theta_u \star_{\mathcal{G}} [X^{(t)}, H^{(t-1)}] + b_u) \qquad (3)$$
$$C^{(t)} = \tanh(\Theta_C \star_{\mathcal{G}} [X^{(t)}, (r^{(t)} \odot H^{(t-1)})] + b_C), \qquad H^{(t)} = u^{(t)} \odot H^{(t-1)} + (1 - u^{(t)}) \odot C^{(t)} \qquad (4)$$
Here, $X^{(t)}$ and $H^{(t)}$ denote the input and output of the DCGRU at time step t, σ denotes the sigmoid function, ⊙ represents the Hadamard product, $r^{(t)}$, $u^{(t)}$, and $C^{(t)}$ denote the reset gate, update gate, and candidate at time step t, $\star_{\mathcal{G}}$ denotes the diffusion convolution (or ChebNet spectral graph convolution), and $\Theta_r$, $b_r$, $\Theta_u$, $b_u$, $\Theta_C$, and $b_C$ are the weights and biases of the corresponding convolutional filters. Finally, for seizure detection and classification, the models consist of several stacked DCGRU layers followed by a fully-connected layer.
The paper presents a method for seizure detection and classification. In particular, the method is self-supervised, is based on graph neural networks, and uses EEG signals. The authors report strong performance in detection and classification, and also provide methods for qualitative evaluation of model interpretability.
Self-Supervised Graph Neural Networks for Improved Electroencephalographic Seizure Analysis
1 INTRODUCTION. Seizures are among the most common neurological emergencies in the world (Strein et al., 2019). Seizures can be chronic, as in the case of epilepsy, a neurological disease affecting 50 million people worldwide (WHO, 2019). Clinically, definitive detection of a seizure is only the first step in seizure diagnosis. An important subsequent step is to classify seizures into finer-grained types, such as focal versus generalized seizures, for identifying epilepsy syndromes, targeted therapies, and eligibility for epilepsy surgery (Fisher et al., 2017). Scalp electroencephalography (or "EEG") plays a critical role in seizure detection and classification. Clinically, EEG-based seizure detection and classification are performed by a trained EEG reader who visually examines a patient's EEG signals over time periods ranging from hours to days. However, this manual analysis is extremely resource- and time-intensive, and thus automated algorithms could greatly accelerate seizure diagnosis and improve outcomes for severely ill patients.

Figure 1: (a) EEG electrode placement in the standard 10-20 system. (b) Distance-based EEG graph. (c) An example correlation-based EEG graph. (d) Overview of our methods. The inputs to the models are the EEG graphs, where each node feature corresponds to the preprocessed EEG signals in the respective channel. Self-edges are not shown for better visualization.

Although a large number of studies have attempted automated seizure detection (Rasheed et al., 2020; Siddiqui et al., 2020; Shoeibi et al., 2021; O'Shea et al., 2020; Saab et al., 2020) or seizure classification (Raghu et al., 2020; Asif et al., 2020; Iesmantas & Alzbutas, 2020; Ahmedt-Aristizabal et al., 2020; Roy et al., 2019), several challenges remain largely unaddressed.
First, most recent studies use convolutional neural networks (CNNs) that assume Euclidean structures in EEG signals or spectrograms (Rasheed et al., 2020; Shoeibi et al., 2021; Raghu et al., 2020; Asif et al., 2020; Iesmantas & Alzbutas, 2020; Ahmedt-Aristizabal et al., 2020; Roy et al., 2019; O'Shea et al., 2020; Saab et al., 2020). However, the assumption of Euclidean structure ignores the natural geometry of EEG electrodes and the connectivity in brain networks. EEGs are measured by electrodes placed on a manifold (i.e., the patient's scalp) (Figure 1a), and thus have inherent non-Euclidean structures. Graphs are a data structure that can represent complex, non-Euclidean data (Chami et al., 2020; Bronstein et al., 2017), and graph theory has been extensively used in modeling brain networks (Bullmore & Sporns, 2009). We therefore hypothesize that graph-based modeling approaches can better represent the inherent non-Euclidean structures in EEGs, in a manner that improves both the performance and the clinical utility of seizure detection and classification models. Although traditional graph theory has been used (Supriya et al., 2021), only a few deep learning studies have modeled EEGs as graphs for seizure detection. However, these graph-based studies were limited to nonpublic (Covert et al., 2019) or small datasets (Craley et al., 2019; Li et al., 2021), and did not leverage modern self-supervised approaches or examine seizure classification (Cisotto et al., 2020; Zhao et al., 2021; Li et al., 2021). Second, certain seizure types (e.g., clonic seizures) are rare by nature. Training machine learning models that perform well on these rarer seizure classes using traditional supervised learning approaches is challenging, which could explain the performance difference between majority and minority seizure types in prior studies (Raghu et al., 2020; Iesmantas & Alzbutas, 2020; Ahmedt-Aristizabal et al.
, 2020). Several studies have investigated an alternative, self-supervised training strategy (Banville et al., 2020; Mohsenvand et al., 2020; Kostas et al., 2021; Martini et al., 2021; Xu et al., 2020), but they did not model EEGs as graphs or address automated seizure classification. Prior works have shown that self-supervised pre-training significantly improves model performance on data with imbalanced labels in the field of computer vision (Yang & Xu, 2020; Liu et al., 2021). Hence, we hypothesize that self-supervised pre-training can help improve our graph model's performance on rare seizure types. Moreover, a large portion of EEG signals generally do not contain seizures. A self-supervised pre-training strategy would allow the model to leverage the abundant non-seizure EEGs that are readily available in the dataset. Lastly, for seizure detection and classification models, the ability not only to provide a single prediction across all EEG channels, but also to offer interpretability and to localize seizures, would be clinically useful for informing treatment strategy. While prior studies (Saab et al., 2020; Covert et al., 2019) have shown qualitative visualizations for model interpretability, none have quantitatively assessed a model's ability to localize seizures. In this work, we aim to address these limitations of prior automated seizure detection and classification studies. First, we propose a graph-based modeling approach for EEG-based seizure detection and classification. Specifically, we propose two EEG graph structures that capture EEG sensor geometry (Figure 1b) or dynamic brain connectivity (Figure 1c), and we extend the Diffusion Convolutional Recurrent Neural Network (DCRNN) (Li et al., 2018), an RNN with graph diffusion convolutions, to model the spatiotemporal dependencies in EEGs (Figure 1d).
Second, we improve DCRNN performance using a self-supervised pre-training strategy of predicting the preprocessed EEG signals for the next time period, without requiring additional data or labels. Finally, we propose quantitative metrics to assess our model's ability to localize seizures. In summary: • We propose two EEG graph structures that capture (1) the natural geometry of EEG sensors or (2) dynamic connectivity in the brain, and show that building a recurrent graph neural network (GNN) based on these representations yields models for seizure detection and classification that outperform previous approaches on a large public dataset (5,499 EEGs). • We propose a self-supervised pre-training strategy to further improve our recurrent GNN model performance, particularly on rare seizure types. To our knowledge, our study is the first to date that combines graph-based modeling and self-supervised pre-training for EEGs. By leveraging graph structure and self-supervision, our method achieves 0.875 Area Under the Receiver Operating Characteristic Curve (AUROC) on seizure detection and a 0.749 weighted F1-score on seizure classification, outperforming previous approaches on both tasks on this large public dataset. Moreover, our self-supervised pre-training method substantially improves classification of rare seizure types (e.g., a 47-point increase in combined tonic seizure accuracy over baselines). • We propose a quantitative model interpretability analysis that can be used to assess a model's ability to localize seizures, which is critical to determining the course of treatment for seizures. We show that by leveraging graph structure and self-supervision, our method precisely localizes 25.4% of focal seizures, a 21.9-point improvement over a prior state-of-the-art CNN.
Finally, by displaying the identified seizure regions on raw EEG signals and EEG graphs, our approach could provide valuable insights that support more effective seizure diagnosis in real-world clinical settings.

2 METHODS. 2.1 SEIZURE DETECTION AND CLASSIFICATION PROBLEM FORMULATION. The goal of seizure detection is to predict whether a seizure exists within an EEG clip, and the goal of seizure classification is to predict the seizure type given a seizure EEG clip. Following a prior study (Saab et al., 2020), we examine our model's capability for fast and slow detection and classification over 12-s and 60-s EEG clips, respectively.

2.2 GRAPH-BASED MODELING FOR EEGS. 2.2.1 REPRESENTING EEGS AS GRAPHS. We represent an EEG clip as a graph $\mathcal{G} = \{V, E, W\}$, where $V$ denotes the set of nodes (i.e., EEG electrodes/channels), $E$ denotes the set of edges, and $W$ is the adjacency matrix. We propose the following two methods of constructing the EEG graph.

Distance graph. To represent the natural geometry of EEG electrodes, we compute the edge weight $W_{ij}$ by applying a thresholded Gaussian kernel (Shuman et al., 2013) to the pairwise Euclidean distance between $v_i$ and $v_j$, i.e., $W_{ij} = \exp\left(-\mathrm{dist}(v_i, v_j)^2 / \sigma^2\right)$ if $\mathrm{dist}(v_i, v_j) \le \kappa$, else $0$. Here, $\mathrm{dist}(v_i, v_j)$ is the Euclidean distance between electrodes $v_i$ and $v_j$ according to the standard 10-20 EEG electrode placement (Jasper, 1958), $\sigma$ is the standard deviation of the distances, and $\kappa$ is the threshold for sparsity. This results in a universal undirected, weighted graph for all EEG clips. Based on preliminary experiments and EEG domain knowledge, we chose $\kappa = 0.9$ because it results in a reasonable graph that also resembles the EEG montage (longitudinal bipolar and transverse bipolar) widely used clinically (Acharya et al., 2016). Figure 1b shows the distance graph with $\kappa = 0.9$.
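To make the distance-graph construction concrete, here is a minimal numpy sketch of the thresholded Gaussian kernel described above; the electrode coordinates are toy placeholders, not the actual 10-20 positions:

```python
import numpy as np

def distance_graph(coords, kappa=0.9):
    """Thresholded-Gaussian adjacency: W_ij = exp(-dist(v_i, v_j)^2 / sigma^2)
    if dist(v_i, v_j) <= kappa, else 0, where sigma is the standard deviation
    of all pairwise distances."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    sigma = dist.std()                         # std of the distances
    W = np.exp(-dist ** 2 / sigma ** 2)
    W[dist > kappa] = 0.0                      # sparsify: drop edges beyond the threshold
    return W

# Four toy "electrode" positions; the resulting graph is undirected and weighted.
coords = np.array([[0.0, 0.0, 1.0],
                   [0.3, 0.0, 0.95],
                   [0.0, 0.3, 0.95],
                   [0.9, 0.0, 0.44]])
W = distance_graph(coords, kappa=0.9)
```

Because the kernel depends only on the fixed electrode positions, this graph can be computed once and shared across all EEG clips.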
In Appendix K, we explore different values of $\kappa$ as well as constructing the distance graph using a Gaussian kernel with a pre-specified bandwidth.

Correlation graph. To capture dynamic brain connectivity, we define the edge weight $W_{ij}$ as the absolute value of the normalized cross-correlation between the preprocessed signals in $v_i$ and $v_j$. To introduce sparsity to the graph, only the edges whose weights are among the top-$\tau$ neighbors of each node are kept (plus self-edges), i.e., $W_{ij} = |X_{:,i,:} \star X_{:,j,:}|$ if $v_j \in \mathcal{N}(v_i)$, else $0$. Here, $X_{:,i,:}$ and $X_{:,j,:}$ are the preprocessed signals in $v_i$ and $v_j$, $\star$ represents the normalized cross-correlation, and $\mathcal{N}(v_i)$ represents the top-$\tau$ neighbors of $v_i$. This method results in a unique directed, weighted graph for each input EEG clip. Figure 1c shows an example correlation graph with $\tau = 3$.

2.2.2 GRAPH NEURAL NETWORK. We adapt DCRNN (Li et al., 2018), a recurrent neural network with graph diffusion convolutions, to model the spatiotemporal dependencies in EEG signals. DCRNN was initially developed for traffic forecasting, where the dynamics of traffic flow are modeled as a diffusion process. Similarly, we can also model the spatial dependency in EEG signals as a diffusion process, because an electrode can be influenced more by electrodes in its anatomical proximity (measured by distance) (Acharya et al., 2016) or functional proximity (measured by correlation) (Sakkalis, 2011). Specifically, the diffusion process is characterized by a bidirectional random walk on a directed graph $\mathcal{G}$, which results in the following diffusion convolution (Li et al., 2018):

$$X_{:,m} \star_{\mathcal{G}} f_\theta = \sum_{k=0}^{K-1} \left( \theta_{k,1} (D_O^{-1} W)^k + \theta_{k,2} (D_I^{-1} W^\intercal)^k \right) X_{:,m} \quad \text{for } m \in \{1, \ldots, M\} \qquad (1)$$

where $X \in \mathbb{R}^{N \times M}$ is the preprocessed EEG clip at time step $t \in \{1, \ldots, T\}$ with $N$ nodes and $M$ features, $f_\theta$ is the convolution filter with parameters $\theta \in \mathbb{R}^{K \times 2}$, $D_O$ and $D_I$ are the out-degree and in-degree diagonal matrices of the graph, respectively, $D_O^{-1} W$ and $D_I^{-1} W^\intercal$ are the state transition matrices of the outward and inward diffusion processes, respectively, and $K$ is the maximum number of diffusion steps. For undirected graphs, the diffusion convolution is similar to ChebNet spectral graph convolution (Defferrard et al., 2016) up to a constant scaling factor, and thus can be computed using stable Chebyshev polynomial bases as follows (Li et al., 2018):

$$X_{:,m} \star_{\mathcal{G}} f_\theta = \Phi \left( \sum_{k=0}^{K-1} \theta_k \Lambda^k \right) \Phi^\intercal X_{:,m} = \sum_{k=0}^{K-1} \theta_k L^k X_{:,m} = \sum_{k=0}^{K-1} \tilde{\theta}_k T_k(\tilde{L}) X_{:,m} \quad \text{for } m \in \{1, \ldots, M\} \qquad (2)$$

where $T_0(x) = 1$, $T_1(x) = x$, and $T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)$ for $k \ge 2$ are the bases of the Chebyshev polynomial, $L = D^{-\frac{1}{2}} (D - W) D^{-\frac{1}{2}} = \Phi \Lambda \Phi^\intercal$ is the normalized graph Laplacian, and $\tilde{L} = \frac{2}{\lambda_{\max}} L - I$ is the scaled graph Laplacian mapping eigenvalues from $[0, \lambda_{\max}]$ to $[-1, 1]$. We use Equation 1 for directed correlation graphs, and Equation 2 for the undirected distance graph.

Next, to model the temporal dependency in EEGs, we employ Gated Recurrent Units (GRUs) (Cho et al., 2014), a variant of RNN with a gating mechanism. Specifically, the matrix multiplications in GRUs are replaced with diffusion convolutions (or ChebNet spectral graph convolutions for the undirected distance-based graph) (Li et al., 2018), allowing spatiotemporal modeling of EEG signals (referred to as "DCGRU"):

$$r^{(t)} = \sigma(\Theta_r \star_{\mathcal{G}} [X^{(t)}, H^{(t-1)}] + b_r) \qquad u^{(t)} = \sigma(\Theta_u \star_{\mathcal{G}} [X^{(t)}, H^{(t-1)}] + b_u) \qquad (3)$$

$$C^{(t)} = \tanh(\Theta_C \star_{\mathcal{G}} [X^{(t)}, (r^{(t)} \odot H^{(t-1)})] + b_C) \qquad H^{(t)} = u^{(t)} \odot H^{(t-1)} + (1 - u^{(t)}) \odot C^{(t)} \qquad (4)$$

Here, $X^{(t)}$ and $H^{(t)}$ denote the input and output of DCGRU at time step $t$, respectively, $\sigma$ denotes the sigmoid function, $\odot$ represents the Hadamard product, $r^{(t)}$, $u^{(t)}$, $C^{(t)}$ denote the reset gate, update gate, and candidate at time step $t$, respectively, $\star_{\mathcal{G}}$ denotes the diffusion convolution (or ChebNet spectral graph convolution), and $\Theta_r$, $b_r$, $\Theta_u$, $b_u$, $\Theta_C$, $b_C$ are the weights and biases for the corresponding convolutional filters. Finally, for seizure detection and classification, the models consist of several stacked DCGRUs followed by a fully-connected layer.
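As a concrete illustration of the diffusion convolution in Equation 1, the following sketch (with hypothetical names and random data, not the authors' implementation) applies the bidirectional random walk to a single feature column:

```python
import numpy as np

def diffusion_conv(W, x, theta):
    """Diffusion convolution of Eq. 1 for one feature column x = X[:, m].

    W:     (N, N) weighted adjacency of a directed graph (self-edges assumed,
           so every row and column sum is nonzero)
    theta: (K, 2) filter parameters; theta[k, 0] weights (D_O^{-1} W)^k x and
           theta[k, 1] weights (D_I^{-1} W^T)^k x.
    """
    P_out = W / W.sum(axis=1, keepdims=True)      # outward transition D_O^{-1} W
    P_in = W.T / W.T.sum(axis=1, keepdims=True)   # inward transition D_I^{-1} W^T
    out = np.zeros_like(x)
    v_out, v_in = x.copy(), x.copy()
    for k in range(theta.shape[0]):
        out += theta[k, 0] * v_out + theta[k, 1] * v_in
        v_out, v_in = P_out @ v_out, P_in @ v_in  # advance to the (k+1)-th power
    return out

rng = np.random.default_rng(0)
N = 5
W = rng.random((N, N)) + np.eye(N)                # random directed graph with self-edges
x = rng.standard_normal(N)
y = diffusion_conv(W, x, rng.standard_normal((3, 2)))  # K = 3 diffusion steps
```

Stacking this operation over all M feature columns and substituting it for the matrix multiplications inside a GRU cell yields the DCGRU update of Equations 3-4.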
The authors propose a graph-based representation from thresholded Gaussian and linear (correlation) kernels (undirected connectivity), coupled with a diffusion convolutional recurrent network. In addition, Fourier-based preprocessing is carried out, with self-supervised learning (autoencoders) used to initialize the network weights. Experiments are performed on an EEG-based seizure detection and localization task. The seizure localization is conducted using occlusion and dropping approaches on EEG channels. The obtained results elucidate an interesting strategy for seizure analysis on a well-known public database.
Quantized sparse PCA for neural network weight compression
1 INTRODUCTION. Deep neural networks have achieved state-of-the-art results in a wide variety of tasks. However, deployment remains challenging due to their large compute and memory requirements. Neural networks deployed on edge devices such as mobile or IoT devices are subject to stringent compute and memory constraints, while networks deployed in the cloud do not suffer such constraints but might suffer excessive latency or power consumption. To reduce neural network memory and compute footprint, several approaches have been introduced in the literature. Methods related to our approach, as well as their benefits and downsides, are briefly introduced in this section and described in more detail in the related work section. Tensor factorization approaches (Denil et al., 2013) replace a layer in a neural network with two layers, whose weights are low-rank factors of the original layer's weight tensor. This reduces the number of parameters and multiply-add operations (MACs), but since the factorizations are by design restricted to those that can be realized as individual layers, the potential for compression is limited. By pruning a neural network (Louizos et al., 2017; He et al., 2017), individual weights are removed. Pruning has been shown to yield moderate compression-accuracy trade-offs. Due to the overhead required to keep track of which elements are pruned, the real yield of (unstructured) pruning is lower than the pruning ratio. An exception to this is structured pruning, in which entire neurons are removed from a network, and weight tensors can be adjusted accordingly. However, achieving good compression ratios at reasonable accuracy using structured pruning has proven difficult. Scalar quantization (Jacob et al., 2018; Nagel et al., 2021) approximates neural network weights with fixed-point values, i.e., integer values scaled by a fixed floating point scalar.
Scalar quantization has been shown to yield high accuracy at reasonable compression ratios, e.g., 8-bit quantization yields a 4x compression ratio and virtually no accuracy degradation on many networks. However, scalar quantization does not yield competitive compression-accuracy trade-offs at high compression ratios. Vector quantization (Stock et al., 2019; Martinez et al., 2021) approximates small subsets of weights, e.g., individual 3 × 3 filters in convolutional weight tensors, by a small set of codes. This way, storage can be reduced by storing the codebook and one code index for each original vector, instead of the individual weights. While vector quantization can achieve high compression with moderate accuracy loss, these methods usually struggle to reach the accuracy of uncompressed models in low compression regimes. In this paper, we provide a novel view on tensor factorization. Instead of restricting factorization to those that can be realized as two separate neural network layers, we show that much higher compression ratios can be achieved by shifting the order of operations. In our method, we find a factorization C, Z for an original weight tensor W, such that the matrix product CZ closely approximates the original weight tensor.

[Figure: the weight tensor of shape $f_{out} \times f_{in} \times h \times w$ is tiled and reshaped into a $d \times n$ matrix ($n \gg d$); sparse PCA factorizes it into a $d \times k$ codebook matrix C ($k < d$) and a sparse $k \times n$ coefficient matrix Z.]

The compression is then pushed further by obtaining quantized factors and an additionally sparse Z. During inference, the product CZ is computed first, and its result is reshaped back into the original weight tensor's shape and used for the following computations. This approach allows the use of an arbitrary factorization of the original weight tensor.
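To make the tile-factorize-reshape pipeline concrete, the sketch below uses a plain truncated SVD as the factorization; it omits the zero-mean shift, quantization, and sparsity that the method adds later, and the shapes and tile size d are illustrative assumptions:

```python
import numpy as np

def factorize_weight(W4d, d, k):
    """Tile a conv weight (f_out, f_in, h, w) into a d x n matrix, factorize it
    as C @ Z with k < d components, and reshape the approximation back."""
    Wt = W4d.reshape(d, -1)                   # d x n tiling, n >> d
    U, s, Vt = np.linalg.svd(Wt, full_matrices=False)
    C = U[:, :k] * s[:k]                      # d x k codebook
    Z = Vt[:k]                                # k x n linear coefficients
    W_approx = (C @ Z).reshape(W4d.shape)     # computed once at inference time,
    return C, Z, W_approx                     # then used as an ordinary conv weight

rng = np.random.default_rng(0)
W4d = rng.standard_normal((8, 4, 3, 3))       # 288 weights
C, Z, W_approx = factorize_weight(W4d, d=8, k=4)
```

Unlike a two-layer low-rank decomposition, nothing here constrains C and Z to be realizable as separate layers, which is the extra freedom this view exploits.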
We show that this approach outperforms or is on par with vector quantization in high compression regimes, yet extends to scalar-quantization levels of compression-accuracy trade-offs for lower compression ratios. Our contributions in this paper are as follows: • We show that the problems of tensor factorization and vector quantization can be formulated in a unified way as a quantized sparse principal component analysis (PCA) problem. • We propose an iterative projected gradient descent method to solve the quantized sparse PCA problem. • Our experimental results demonstrate the benefits of this approach. By simultaneously solving the tensor factorization and vector quantization problems, we can achieve better accuracy than vector quantization in low compression regimes, and higher compression ratios than scalar quantization approaches at a moderate loss in accuracy.

2 RELATED WORK. SVD-based methods and tensor decompositions. SVD decomposition was first used to demonstrate redundancy in the weight parameters of neural networks in Denil et al. (2013). Later, several methods for reducing inference time based on SVD decomposition were suggested (Denton et al., 2014; Jaderberg et al., 2014). A similar technique was proposed for gradient compression in data-parallel distributed optimization by Vogels et al. (2019). The main difference between these methods is the way the 4D weights of a convolutional layer are transformed into a matrix, which leads to different shapes of the convolutional layers in the resulting decomposition. Following a similar direction, several works focus on higher-order tensor decomposition methods, which lead to the introduction of three or four convolutional layers (Lebedev et al., 2014; Kim et al., 2015; Su et al., 2018). Weight pruning. A straightforward approach to reducing neural network model size is removing a percentage of the weights.
A spectrum of weight pruning approaches of different granularity has been introduced in the literature. Structured pruning approaches such as He et al. (2017) remove entire channels of the weights, while unstructured pruning approaches (Louizos et al., 2017; Zhu & Gupta, 2017; Neklyudov et al., 2017; Dai et al., 2018) focus on individual values. A recent survey on unstructured pruning is provided in Gale et al. (2019). Scalar quantization and mixed precision training. By quantizing neural network weights to lower bitwidths, model footprint can be reduced as well, as each individual weight requires fewer bits to be stored. For example, quantizing 32-bit floating point weights to 8-bit fixed point weights yields a 4x compression ratio. Most quantization approaches use the straight-through estimator (STE) for training quantized models (Bengio et al. (2013); Krishnamoorthi (2018)). One way to further improve the accuracy of quantized models is learning the quantization scale and offset jointly with the network parameters (Esser et al. (2019); Bhalgat et al. (2020)). A recent survey on practical quantization approaches can be found in Nagel et al. (2021). In order to improve the accuracy of quantized models, several methods suggest using mixed precision quantization. The work by Uhlich et al. (2019) introduced an approach for learning an integer bit-width for each layer using the STE. Using non-uniform bit-widths allows the quantization method to use lower bit-widths for the more compressible layers of the network. Several works (van Baalen et al., 2020; Dong et al., 2019; Wang et al., 2019) improve upon the approach by Uhlich et al. (2019) by using different methods for optimization over the bit-widths. Vector quantization. Several works use a vector quantization approach for compressing the weights of convolutional and fully connected layers (Gong et al. (2014); Martinez et al. (2021); Fan et al.
(2020); Stock et al. (2019); Wu et al. (2016)). The convolutional weight tensors are reshaped into matrices, then the K-means method is applied directly on the rows or columns. Besides weight compression, the work by Wu et al. (2016) suggests using vector quantization for reducing inference time by reusing parts of the computation. Recently, several works have suggested improvements on the basic vector quantization approach. Data-aware vector quantization, which improves the clustering method by considering input activation data, is demonstrated to improve the accuracy of the compressed models by Stock et al. (2019). Another direction is introducing a permutation of the weight matrices, which makes it possible to find subsets of weights that are more compressible (Martinez et al., 2021). An inverse permutation is applied at the output of the corresponding layers to preserve the original output of the model. Besides the weight compression problem, various improved vector quantization methods have been applied in the image retrieval domain in order to accelerate scalar product computation for image descriptors (Chen et al. (2010); Ge et al. (2013); Norouzi & Fleet (2013)). In particular, the additive quantization method, which we will show is related to quantized sparse PCA, was introduced by Babenko & Lempitsky (2014). Surveys on vector quantization methods are provided in Matsui et al. (2018); Gersho & Gray (2012). Sparse PCA. Introduced in Zou et al. (2006), sparse PCA can be solved by a plethora of algorithms. The method proposed in this paper can be considered an instance of thresholding algorithms (Ma, 2013). Although soft-thresholding methods are prevalent in the literature, we adopt an explicit projection step using hard thresholding to have direct control over the compression ratio. Note that sparse PCA can be extended to include additional structure with sparsity (Jenatton et al., 2010; Lee et al., 2006).

3 METHOD.
In this section, we describe the main algorithm, which can be considered as sparse quantized principal component analysis (PCA).

3.1 QUANTIZED SPARSE PCA. Consider a weight tensor of a convolutional layer $W \in \mathbb{R}^{f_{out} \times f_{in} \times h \times w}$, where $f_{in}$, $f_{out}$ are the number of input and output feature maps, and $h$, $w$ are the spatial dimensions of the filter. We reshape it into a matrix $\tilde{W} \in \mathbb{R}^{d \times n}$, where $d$ is the tile size and $n$ is the number of tiles. For the reshape, we consider the dimensions in the order $f_{out}$, $f_{in}$, $h$, $w$ in our experiments. The goal is to factorize $\tilde{W}$ into the product of two matrices as follows:

$$\tilde{W} = CZ, \qquad (1)$$

where $C \in \mathbb{R}^{d \times k}$ is the codebook, and $Z \in \mathbb{R}^{k \times n}$ is the set of linear coefficients, or a latent variable ($k < d < n$). Following the standard PCA method, we factorize the zero-mean version of $\tilde{W}$. With this decomposition, every column of $\tilde{W}$, denoted by $\tilde{W}_{:,i}$, is a linear combination of $k$ codebook vectors (columns of $C$):

$$\tilde{W}_{:,i} = \sum_{j=1}^{k} Z_{ji} C_{:,j}, \qquad (2)$$

where $C_{:,j}$ is the $j$-th column of $C$. This decomposition problem is an instance of sparse PCA methods (Zou et al. (2006)). For network compression, we are additionally interested in quantized matrices $C$ and $Z$. The quantization operation for arbitrary $C$ and $Z$ is defined as follows:

$$C_q = Q_c(C; s_c, b_c), \qquad (3)$$
$$Z_q = Q_z(Z; s_z, b_z), \qquad (4)$$

where $b_c$, $b_z$ are the quantization bit-widths, and $s_c$ and $s_z$ are the quantization scale vectors for $C$ and $Z$, respectively. We consider per-channel quantization, i.e., the quantization scale values are shared for each column of $C$ and each row or column of $Z$ (see Section 3.2 for details on which is used when):

$$C_{q,ij} = \mathrm{clamp}\left(\left\lfloor \frac{C_{ij}}{s_i} \right\rceil, 0, 2^{b_c} - 1\right) s_i, \qquad (5)$$
$$Z_{q,ij} = \mathrm{clamp}\left(\left\lfloor \frac{Z_{ij}}{s_j} \right\rceil, 0, 2^{b_z} - 1\right) s_j, \qquad (6)$$

where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer, and $\mathrm{clamp}(\cdot)$ is defined as:

$$\mathrm{clamp}(x, a, b) = \begin{cases} a & x < a \\ x & a \le x \le b \\ b & x > b \end{cases} \qquad (7)$$

We refer to the problem of finding quantized factors $C$ and $Z$ with sparse $Z$ as the quantized sparse PCA problem. Once we obtain the factors $C$ and $Z$, we get the matrix $\tilde{W} = CZ$, which can be reshaped back into a convolutional layer. The reshaped convolutional layer for factors $C$, $Z$ is denoted by $[CZ]$. It is well known in the network compression literature that it is better to find the best factorization on the data manifold (Zhang et al. (2015); He et al. (2017); Stock et al. (2019)). Therefore, we solve the following optimization problem:

$$C^*, Z^* = \underset{C_q, Z_q}{\mathrm{argmin}} \; \mathbb{E}_{(X,Y) \sim D}\left( \left\| Y - [C_q Z_q] * X \right\|_F^2 \right) \qquad (8)$$
$$\text{s.t.} \quad C_q = Q_c(C; s_c, b_c), \quad Z_q = Q_z(Z; s_z, b_z), \quad \|Z_q\|_0 \le S, \quad Z \in \mathbb{R}^{k \times n}, \; C \in \mathbb{R}^{d \times k},$$

where the parameter $S$ controls the sparsity ratio of $Z$, $X$ and $Y$ are the stored input and output of the target layer in the original model, $D$ is the data distribution, and $*$ is the convolution operation. The $L_0$ norm is used for the constraint on the number of nonzero elements in the matrix $Z_q$. We approximate the expected value of the above optimization problem using a subset of the training data:

$$\mathbb{E}_{(X,Y) \sim D}\left( \left\| Y - [C_q Z_q] * X \right\|_F^2 \right) \approx \frac{1}{m} \sum_{i=1}^{m} \left\| Y_i - [C_q Z_q] * X_i \right\|_F^2,$$

where $m$ is the number of samples used for optimization. Following the compression method introduced in Zhang et al. (2015), we use the output of the previous compressed layer as $X$ instead of the stored input to the layer in the original model. This approach aims at compensating for error accumulation in deep networks.
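The quantizer of Equations 5-7 and the hard-thresholding projection enforcing the L0 constraint can be sketched as follows; the per-channel scale (channel maximum divided by the number of levels) is a simplifying assumption, since the text does not fix how s is chosen, and the [0, 2^b - 1] clamp is applied exactly as written:

```python
import numpy as np

def quantize_per_channel(M, bits, axis):
    """Eqs. 5-6: round to the scale grid, clamp the code to [0, 2^b - 1], rescale.
    The per-channel scale s = (channel max) / (2^b - 1) is a heuristic choice."""
    levels = 2 ** bits - 1
    s = M.max(axis=axis, keepdims=True) / levels
    s = np.where(s == 0, 1.0, s)               # guard against all-zero channels
    code = np.clip(np.rint(M / s), 0, levels)  # clamp as in Eq. 7
    return code * s

def project_sparse(Z, S):
    """Hard-thresholding projection onto {Z : ||Z||_0 <= S}: keep the S
    largest-magnitude entries and zero the rest."""
    flat = np.abs(Z).ravel()
    if S < flat.size:
        thresh = np.partition(flat, -S)[-S]
        Z = np.where(np.abs(Z) >= thresh, Z, 0.0)
    return Z

rng = np.random.default_rng(0)
Z = rng.random((4, 10))                        # nonnegative toy coefficient matrix
Z_sparse = project_sparse(Z, S=8)
Z_q = quantize_per_channel(Z_sparse, bits=4, axis=1)  # per-row scales s_j
```

Alternating such projection and quantization steps with gradient updates on the layer-reconstruction loss of Equation 8 gives an iterative projected gradient scheme of the kind the paper proposes.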
This paper introduces a novel method of weight compression. Weight tensors are stored as sparse, quantized matrix factors, and the underlying matrix factorization problem can be considered as a quantized sparse PCA problem and be solved through iterative projected gradient descent methods. The authors' method is applicable to both moderate and extreme compression regimes, and is claimed to achieve or be on par with state-of-the-art trade-offs between accuracy and model size.
Quantized sparse PCA for neural network weight compression
1 INTRODUCTION . Deep neural networks have achieved state-of-the-art results in a wide variety of tasks . However , deployment remains challenging due to their large compute and memory requirements . Neural networks deployed on edge devices such as mobile or IoT devices are subject to stringent compute and memory constraints , while networks deployed in the cloud do not suffer such constraints but might suffer excessive latency or power consumption . To reduce neural network memory and compute footprint , several approaches have been introduced in literature . Methods related to our approach , as well as their benefits and downsides , are briefly introduced in this section and described in more detail in the related works section . Tensor factorization approaches ( Denil et al. , 2013 ) replace a layer in a neural network with two layers , whose weights are low-rank factors of the original layer ’ s weight tensor . This reduces the number of parameters and multiply-add operations ( MACs ) , but since the factorizations are by design restricted to those that can be realized as individual layers , potential for compression is limited . By pruning a neural network ( Louizos et al. , 2017 ; He et al. , 2017 ) , individual weights are removed . Pruning has shown to yield moderate compression-accuracy trade-offs . Due to the overhead required to keep track of which elements are pruned , real yield of ( unstructured ) pruning is lower than the pruning ratio . An exception to this is structured pruning , in which entire neurons are removed from a network , and weight tensors can be adjusted accordingly . However , achieving good compression ratios at reasonable accuracy using structured pruning has proven difficult . Scalar quantization ( Jacob et al. , 2018 ; Nagel et al. , 2021 ) approximates neural network weights with fixed-point values , i.e. , integer values scaled by a fixed floating point scalar . 
Scalar quantization has shown to yield high accuracy at reasonable compression ratios , e.g. , 8 bit quantization yields a 4x compression ratio and virtually no accuracy degradation on many networks . However , scalar quantization does not yield competitive compression vs accuracy trade-offs at high compression ratios . Vector quantization ( Stock et al. , 2019 ; Martinez et al. , 2021 ) approximates small subsets of weights , e.g. , individual 3 × 3 filters in convolutional weight tensors , by a small set of codes . This way , the storage can be reduced by storing the codebook and one code index for each original vector , instead of the individual weights . While vector quantization can achieve high compression with moderate accuracy loss , these methods usually struggle to reach the accuracy of uncompressed models in low compression regimes . In this paper , we provide a novel view on tensor factorization . Instead of restricting factorization to those that can be realized as two separate neural network layers , we show that much higher compression ratios can be achieved by shifting the order of operations . In our method , we find a factorization C , Z for an original weight tensor W , such that the matrix product CZ closely approximates the Weight tensor ! ! `` # × ! $ % ×ℎ× $ Tile and reshape Weight tensor tiled and reshaped to % × & ; & ≫ % Sparse PCA C matrix % × ) ; ) < % * Z matrix ( sparse ) ) × & ; & ≫ ) & = ! ! `` # × ! $ % ×ℎ× $ / % Weight tensor ! ! `` # × ! $ % ×ℎ× $ original weight tensor . The compr ssion is then pushed further by obtaining quantized factors and additionally sparse Z . During inference , the product CZ is computed first , and its result is reshaped back into the original weight tensor ’ s shape , and used for the following computations . This approach allows t e u e of an arbitrary factorization of the original weight tensor . 
We show that this approach outperforms or is on par with vector quantization in high compression regimes, yet extends to scalar-quantization levels of compression-accuracy trade-offs for lower compression ratios. Our contributions in this paper are as follows: • We show that the problems of tensor factorization and vector quantization can be formulated in a unified way as a quantized sparse principal component analysis (PCA) problem. • We propose an iterative projected gradient descent method to solve the quantized sparse PCA problem. • Our experimental results demonstrate the benefits of this approach. By simultaneously solving the tensor factorization and vector quantization problems, we can achieve better accuracy than vector quantization in low compression regimes, and higher compression ratios than scalar quantization approaches at moderate loss in accuracy. 2 RELATED WORK . SVD-based methods and tensor decompositions SVD decomposition was first used to demonstrate redundancy in the weight parameters of neural networks in Denil et al. (2013). Later, several methods for reducing inference time based on SVD decomposition were suggested (Denton et al., 2014; Jaderberg et al., 2014). A similar technique was proposed for gradient compression in data-parallel distributed optimization by Vogels et al. (2019). The main difference between these methods is the way the 4D weights of a convolutional layer are transformed into a matrix, which leads to different shapes of the convolutional layers in the resulting decomposition. Following a similar direction, several works focus on higher-order tensor decomposition methods, which introduce three or four convolutional layers (Lebedev et al., 2014; Kim et al., 2015; Su et al., 2018). Weight pruning A straightforward approach to reducing neural network model size is removing a percentage of weights.
A spectrum of weight pruning approaches of different granularity has been introduced in the literature. Structured pruning approaches such as He et al. (2017) remove entire channels of the weights, while unstructured pruning approaches (Louizos et al., 2017; Zhu & Gupta, 2017; Neklyudov et al., 2017; Dai et al., 2018) focus on individual values. A recent survey on unstructured pruning is provided in Gale et al. (2019). Scalar quantization and mixed precision training By quantizing neural network weights to lower bitwidths, the model footprint can be reduced as well, as each individual weight requires fewer bits to be stored. For example, quantizing 32-bit floating-point weights to 8-bit fixed-point weights yields a 4x compression ratio. Most quantization approaches use the straight-through estimator (STE) for training quantized models (Bengio et al. (2013); Krishnamoorthi (2018)). One way to further improve the accuracy of quantized models is to learn the quantization scale and offset jointly with the network parameters (Esser et al. (2019); Bhalgat et al. (2020)). A recent survey on practical quantization approaches can be found in Nagel et al. (2021). In order to improve the accuracy of quantized models, several methods suggest using mixed precision quantization. The work by Uhlich et al. (2019) introduced an approach for learning an integer bit-width for each layer using the STE. Using non-uniform bit-widths allows the quantization method to use lower bit-widths for more compressible layers of the network. Several works (van Baalen et al., 2020; Dong et al., 2019; Wang et al., 2019) improve upon the approach of Uhlich et al. (2019) by using different methods for optimization over the bit-widths. Vector quantization . Several works use a vector quantization approach for compressing the weights of convolutional and fully connected layers (Gong et al. (2014); Martinez et al. (2021); Fan et al.
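The generic unstructured (magnitude) pruning baseline discussed above can be sketched in a few lines: zero out the smallest-magnitude fraction of the weights. This is the standard baseline the survey literature covers, not the method of this paper:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured pruning sketch: zero out the smallest-magnitude
    fraction `sparsity` of entries. Illustrative baseline only."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude acts as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

W = np.arange(1, 13, dtype=float).reshape(3, 4)
W_pruned = magnitude_prune(W, 0.5)
print(int((W_pruned == 0).sum()))  # 6 of the 12 entries are removed
```

Note that, as the text observes, a sparse tensor like `W_pruned` only saves memory in practice if the index overhead of the sparse storage format is smaller than the pruned weights themselves.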
(2020); Stock et al. (2019); Wu et al. (2016)). The convolutional weight tensors are reshaped into matrices, and then a K-means method is applied directly to the rows or columns. Besides weight compression, the work by Wu et al. (2016) suggests using vector quantization for reducing inference time by reusing parts of the computation. Recently, several works have suggested improvements on the basic vector quantization approach. Data-aware vector quantization, which improves the clustering method by considering input activation data, was demonstrated to improve the accuracy of the compressed models by Stock et al. (2019). Another direction is introducing a permutation of the weight matrices, which allows finding subsets of weights that are more compressible (Martinez et al., 2021). An inverse permutation is applied at the output of the corresponding layers to preserve the original output of the model. Besides the weight compression problem, various improved vector quantization methods have been applied in the image retrieval domain in order to accelerate scalar product computation for image descriptors (Chen et al. (2010); Ge et al. (2013); Norouzi & Fleet (2013)). In particular, the additive quantization method, which we will show is related to quantized sparse PCA, was introduced by Babenko & Lempitsky (2014). Surveys on vector quantization methods are provided in Matsui et al. (2018); Gersho & Gray (2012). Sparse PCA . Introduced in Zou et al. (2006), sparse PCA can be solved by a plethora of algorithms. The method proposed in this paper can be considered as an instance of thresholding algorithms (Ma, 2013). Although soft-thresholding methods are prevalent in the literature, we adopt an explicit projection step using hard thresholding to have direct control over the compression ratio. Note that sparse PCA can be extended to include additional structure beyond plain sparsity (Jenatton et al., 2010; Lee et al., 2006). 3 METHOD .
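The basic vector quantization recipe described above (reshape the weights, then cluster the columns with K-means) can be sketched in plain numpy. The shapes and code count below are illustrative, and this is the generic baseline, not any cited paper's exact algorithm:

```python
import numpy as np

def vector_quantize(W_tiled, n_codes, iters=20, seed=0):
    """Plain K-means over the columns of a tiled weight matrix: each
    d-dimensional column is replaced by its nearest of `n_codes`
    codewords. Illustrative baseline only."""
    rng = np.random.default_rng(seed)
    cols = W_tiled.T                     # (n, d) points to cluster
    codebook = cols[rng.choice(len(cols), n_codes, replace=False)]
    for _ in range(iters):
        # squared distances from every column to every codeword
        dists = ((cols[:, None, :] - codebook[None]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for j in range(n_codes):         # codeword update step
            if (assign == j).any():
                codebook[j] = cols[assign == j].mean(0)
    return codebook, assign

W_tiled = np.random.default_rng(1).normal(size=(8, 200))
codebook, assign = vector_quantize(W_tiled, n_codes=16)
W_hat = codebook[assign].T               # reconstruction: one code index per column
```

Storage for the compressed layer is then the 16 × 8 codebook floats plus one 4-bit index (log2 of 16 codes) per column, instead of 8 × 200 floats.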
In this section, we describe the main algorithm, which can be considered as sparse quantized principal component analysis (PCA). 3.1 QUANTIZED SPARSE PCA . Consider a weight tensor of a convolutional layer W ∈ R^{f_out × f_in × h × w}, where f_in, f_out are the numbers of input and output feature maps, and h, w are the spatial dimensions of the filter. We reshape it into a matrix W̃ ∈ R^{d×n}, where d is the tile size and n is the number of tiles. For the reshape, we consider the dimensions in the order f_out, f_in, h, w in our experiments. The goal is to factorize W̃ into the product of two matrices as follows: W̃ = CZ, (1) where C ∈ R^{d×k} is the codebook and Z ∈ R^{k×n} is the set of linear coefficients, or a latent variable (k < d < n). Following the standard PCA method, we factorize the zero-mean version of W̃. With this decomposition, every column of W̃, denoted by W̃_{:,i}, is a linear combination of k codebook vectors (columns of C): W̃_{:,i} = Σ_{j=1}^{k} Z_{ji} C_{:,j}, (2) where C_{:,j} is the j-th column of C. This decomposition problem is an instance of sparse PCA methods (Zou et al. (2006)). For network compression, we are additionally interested in quantized matrices C and Z. The quantization operation for arbitrary C and Z is defined as follows: C_q = Q_c(C; s_c, b_c), (3) Z_q = Q_z(Z; s_z, b_z), (4) where b_c, b_z are the quantization bit-widths, and s_c and s_z are the quantization scale vectors for C and Z, respectively. We consider per-channel quantization, i.e., the quantization scale values are shared for each column of C and each row or column of Z (see Section 3.2 for details on which is used when): C_{q,ij} = clamp(⌊C_{ij}/s_i⌉, 0, 2^{b_c} − 1) · s_i, (5) Z_{q,ij} = clamp(⌊Z_{ij}/s_j⌉, 0, 2^{b_z} − 1) · s_j, (6) where ⌊·⌉ is rounding to the nearest integer, and clamp(·) is defined as: clamp(x, a, b) = a if x < a; x if a ≤ x ≤ b; b if x > b.
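A direct numpy rendering of the per-channel quantizer in Eqs. (5)-(6) may help: one scale per column, with values rounded and clamped to the integer range 0 .. 2^b − 1. The scale choice (max absolute value per column over 2^b − 1 levels) is a simplifying assumption of this sketch, and note that, read literally, the formula clamps negative entries to level 0 (the paper works with a zero-mean tensor; offset handling is out of scope here):

```python
import numpy as np

def quantize_per_column(M, bits):
    """Per-channel quantizer following the clamp/round scheme of
    Eqs. (5)-(6): one scale per column, integer levels 0 .. 2^bits - 1.
    Returns the dequantized matrix and the scale vector."""
    levels = 2 ** bits - 1
    scales = np.maximum(np.abs(M).max(axis=0), 1e-12) / levels
    q = np.clip(np.round(M / scales), 0, levels)  # clamp(round(M_ij / s_j), 0, 2^b - 1)
    return q * scales, scales

M = np.array([[0.0, 1.0],
              [2.0, 3.0],
              [4.0, 2.0]])
deq, s = quantize_per_column(M, bits=2)
```

With 2 bits, each column is mapped onto 4 levels; e.g., the second column, whose entries already sit on a uniform grid, is reconstructed exactly.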
(7) We refer to the problem of finding quantized factors C and Z with sparse Z as the quantized sparse PCA problem. Once we obtain the factors C and Z, we get the matrix W̃ = CZ, which can be reshaped back into a convolutional layer. The reshaped convolutional layer for factors C, Z is denoted by [CZ]. It is well known in the network compression literature that it is better to find the best factorization on the data manifold (Zhang et al. (2015); He et al. (2017); Stock et al. (2019)). Therefore, we solve the following optimization problem: C*, Z* = argmin_{C_q, Z_q} E_{(X,Y)∼D} ( ‖Y − [C_q Z_q] ∗ X‖_F² ) (8) s.t. C_q = Q_c(C; s_c, b_c), Z_q = Q_z(Z; s_z, b_z), ‖Z_q‖_0 ≤ S, Z ∈ R^{k×n}, C ∈ R^{d×k}, where the parameter S controls the sparsity ratio of Z; X and Y are the stored input and output of the target layer in the original model; D is the data distribution; and ∗ is the convolution operation. The L0 norm is used for the constraint on the number of nonzero elements in the matrix Z_q. We approximate the expected value of the above optimization problem using a subset of the training data: E_{(X,Y)∼D} ( ‖Y − [C_q Z_q] ∗ X‖_F² ) ≈ (1/m) Σ_{i=1}^{m} ‖Y_i − [C_q Z_q] ∗ X_i‖_F², where m is the number of samples used for optimization. Following the compression method introduced in Zhang et al. (2015), we use the output of the previous compressed layer as X instead of the stored input to the layer in the original model. This approach aims at compensating for error accumulation in deep networks.
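The core of the iterative projected gradient method (hard-thresholding the latent matrix Z between gradient steps) can be sketched on the plain weight-reconstruction objective. The data-aware objective of Eq. (8) and the quantization constraints are deliberately omitted here; this only illustrates the alternating gradient step and the projection that keeps the S largest-magnitude entries of Z:

```python
import numpy as np

def sparse_factorize(W, k, S, steps=500, lr=1e-2, seed=0):
    """Projected gradient sketch for  min ||W - C Z||_F^2  s.t. ||Z||_0 <= S.
    Illustrative simplification; not the paper's full algorithm."""
    rng = np.random.default_rng(seed)
    d, n = W.shape
    C = rng.normal(scale=0.1, size=(d, k))
    Z = rng.normal(scale=0.1, size=(k, n))
    for _ in range(steps):
        R = C @ Z - W              # residual of the reconstruction
        C = C - lr * (R @ Z.T)     # gradient step on C
        Z = Z - lr * (C.T @ R)     # gradient step on Z
        if S < Z.size:             # projection: hard-threshold Z
            cutoff = np.partition(np.abs(Z).ravel(), Z.size - S)[Z.size - S]
            Z = np.where(np.abs(Z) >= cutoff, Z, 0.0)
    return C, Z

W = np.random.default_rng(1).normal(size=(6, 20))
C, Z = sparse_factorize(W, k=3, S=30)
```

The hard-threshold projection gives direct control over the final number of nonzeros in Z (and hence the compression ratio), which is the design choice the paper argues for over soft thresholding.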
The paper proposes a method for compression of neural network weights. The proposal turns weight tensors into matrices, factorizes these matrices into a rank-k factorization via PCA, applies quantization to the factor matrices, and additionally makes the right (latent) matrix sparse. An algorithm is presented, with two different thresholding options (per-iteration, or one-shot). In experiments, these ideas lead to an accuracy/compression tradeoff which is either competitive with or better than previous state-of-the-art across all compression ratios investigated.
A Reduction-Based Framework for Conservative Bandits and Reinforcement Learning
1 INTRODUCTION . This paper studies online sequential decision-making problems such as bandits and reinforcement learning (RL) subject to a conservative constraint. Specifically, the agent is given a reliable baseline policy that may not be optimal but is still satisfactory. In conservative bandits and RL, the agent is asked to perform nearly as well as (or better than) the baseline policy at all times. This setting is a natural formalization of many real-world problems in areas such as digital marketing, healthcare, and finance. For example, a company may want to explore new strategies to maximize profit while maintaining profit above a fixed baseline at all times, in order not to go bankrupt. See (Wu et al., 2016) for more discussion of the motivation for the conservative constraint. Analogously to the non-conservative case, conservative bandit/RL problems also require us to balance exploration and exploitation carefully. Meanwhile, to ensure the obtained policies outperform the baseline policy, we need to provide a tractable approach that keeps the exploration from being too aggressive. Solving these two problems simultaneously is the key challenge in conservative bandits and RL. Existing work has proposed algorithms for different settings, including bandits (Wu et al., 2016; Kazerouni et al., 2016; Garcelon et al., 2020b; Katariya et al., 2019; Zhang et al., 2019; Du et al., 2020; Wang et al., 2021) and tabular RL (Garcelon et al., 2020a). However, a lower bound exists only for the multi-armed bandit (MAB) setting (Wu et al., 2016), and there is no lower bound for other widely adopted settings, such as linear bandits, tabular Markov decision processes (MDPs) and low-rank MDPs. In Section 1.3, we provide a more detailed discussion of the related work. For each of the different settings considered in the literature (i.e.
, multi-armed bandits, linear bandits, tabular MDPs), existing approaches rely on ad-hoc algorithm design and on analyses that trade off the setting-specific regret guarantee against the conservative constraint. Furthermore, it is hard to argue about the optimality of the proposed algorithms, because doing so would require clever constructions of hard instances to prove non-trivial regret lower bounds under the conservative constraint. 1.1 OUR CONTRIBUTIONS . In this paper, we address these limitations and make significant progress on the general problem of online sequential decision-making with a conservative constraint. We propose a unified framework that is generally applicable to online sequential decision-making problems. The common theme underlying our framework is to calculate the necessary and sufficient budget required to enable non-conservative exploration. Such a budget is obtained by running the baseline policy (cf. Section 3). With the new framework, we obtain a novel upper bound for tabular MDPs, which improves the previous result, and we prove a new upper bound for low-rank MDPs. We also derive the first lower bounds for linear bandits, tabular MDPs and low-rank MDPs, which show that our upper bounds are tight. Lower Bounds . For any specific problem (e.g., multi-armed bandits, linear bandits), our framework immediately turns a minimax lower bound for the non-conservative setting into a non-trivial lower bound for the conservative case (cf. Section 4). We list some examples to showcase the power of our framework for lower bounds. Full results are given in Table 1. • We derive a novel lower bound for multi-armed bandits that works on a wider range of parameters than the one derived in (Wu et al., 2016). In particular, our lower bound shows a more refined dependence on the value of the baseline policy.
• We derive the first regret lower bounds for conservative exploration in linear bandits, tabular MDPs and low-rank MDPs. These results allow us to establish or disprove the optimality of the algorithms currently available in the literature. We emphasize that our technique for deriving lower bounds is simple and generic, so we believe it can be used to obtain lower bounds for other problems as well. Upper Bounds . Our novel view of conservative exploration can also be used to derive high-probability regret upper bounds. When the suboptimality gap ∆0 and the expected return µ0 of the baseline policy are known, we show that the Budget-Exploration algorithm (Alg. 1) attains minimax optimal regret in a wide variety of sequential decision-making problems, when associated with any minimax optimal non-conservative algorithm specific to the problem at hand. In the more realistic (and challenging) scenario where ∆0 and µ0 are unknown, we show how to simply convert an entire class of algorithms with sublinear non-conservative regret bounds into conservative algorithms with sublinear regret bounds. We obtain the following results; full details are given in Table 1. • In the MAB setting, we obtain a regret upper bound that matches our refined lower bound, thus improving on existing analyses. In the linear bandit setting, we match existing bounds that are already minimax optimal. • In the RL setting, we provide two novel results. First, we provide the first minimax optimal result for tabular MDPs, improving over (Garcelon et al., 2020a). Second, we derive the first upper bound for conservative exploration in low-rank MDPs. Our bound matches the rate of existing non-conservative algorithms, though it is not minimax optimal. How to achieve minimax optimality in low-rank MDPs is an open question even in non-conservative exploration.
Again, our reduction technique is simple and generic, and can be used to obtain new results in previously unstudied settings, as we did for low-rank MDPs. 1.2 MAIN DIFFICULTIES AND TECHNIQUE OVERVIEW . 1.2.1 LOWER BOUNDS . The only existing lower bound for conservative exploration is by Wu et al. (2016), who followed a classical approach in the bandit literature: they constructed a class of hard environments and used an information-theoretic argument to prove the lower bound. Construction of hard environments is highly non-trivial because one needs to incorporate the hardness arising from the conservative constraint. It is also non-trivial to generalize Wu et al. (2016)'s lower bound to other settings such as conservative linear bandits and RL, because one would need new constructions of hard environments for each setting. We note that new constructions are needed even in non-conservative settings, because simply embedding the hard instances of MAB into other settings cannot give the tightest lower bounds. See, e.g., Chapter 24 of Lattimore & Szepesvári (2020) and Domingues et al. (2021). In this paper, we use a completely different approach. Our key insights are 1) relating the necessary budget to the regret lower bounds of non-conservative sequential decision-making problems, and 2) obtaining sharp lower bounds in the conservative settings via maximizing a quadratic function (cf. Equation (6)). Compared with the classical approach, ours is simpler and more general: it does not need problem-specific constructions and can automatically transform any lower bound for a non-conservative problem into one for the corresponding conservative problem. See Section 4 for details. 1.2.2 UPPER BOUNDS . Improvement over Wu et al. (2016) when ∆0 is known. When ∆0 is known, Wu et al. (2016) proposed an algorithm (BudgetFirst) which first plays the baseline policy for enough rounds and then runs a non-conservative MAB algorithm.
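The baseline-then-explore template shared by BudgetFirst and the Budget-Exploration algorithm can be sketched as a small simulation. The learner (UCB1), the noise model, and all constants here are illustrative choices of this sketch, and T0 is simply an input, whereas in the paper it is derived from the known gap ∆0 and baseline return µ0:

```python
import numpy as np

def budget_exploration(means, baseline_arm, T, T0, rng, noise=0.1):
    """Sketch of the baseline-then-explore template: play the baseline
    arm for T0 rounds to accumulate budget, then hand over to a
    non-conservative learner (UCB1 here)."""
    K = len(means)
    counts, sums = np.zeros(K), np.zeros(K)
    actions = []
    for t in range(T):
        if t < T0:
            a = baseline_arm                          # conservative phase
        else:                                         # non-conservative UCB1 phase
            avg = sums / np.maximum(counts, 1)
            bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
            ucb = np.where(counts > 0, avg + bonus, np.inf)
            a = int(np.argmax(ucb))
        r = rng.normal(means[a], noise)               # stochastic reward
        counts[a] += 1
        sums[a] += r
        actions.append(a)
    return actions

# Illustrative run: the baseline arm 0 has mean 0.5, the better arm has mean 0.7.
actions = budget_exploration([0.5, 0.7], baseline_arm=0, T=200, T0=50,
                             rng=np.random.default_rng(0))
```

The paper's analysis question is precisely how large T0 must be so that the accumulated budget covers the learner's worst-case regret while the conservative constraint keeps holding.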
However, their regret bound is not tight because their analysis of the required budget is loose: they accumulate enough budget to allow T steps of exploration, where T is the total number of rounds. Our main technical insight for obtaining a tight regret bound is a sharp analysis of the required budget: by relating it to the minimax regret upper bounds of UCB algorithms, we show that the required budget can be independent of T. See Section 5 and Appendix F for details. Sharp upper bounds with unknown ∆0 . When ∆0 is unknown, the paper by Wu et al. (2016), its follow-up papers (Kazerouni et al., 2016; Garcelon et al., 2020b; Zhang et al., 2019; Garcelon et al., 2020a), and our paper all adopt the same algorithmic template: 1) build an online estimate of a lower bound on the performance of each possible exploration policy, and 2) based on the estimated lower bounds, choose an exploration policy or play the baseline policy. The key difference, and the most non-trivial part, across papers is how to analyze T0 (the number of times the baseline policy is executed). Existing works upper bound T0 by relating it to the decision criterion for whether to choose the baseline policy or not. Since the criteria have different forms in different problem settings, existing papers adopt different problem-specific analyses, and in some settings the analyses are not tight (e.g., MAB and tabular RL). Our analysis approach is different from existing ones: we bound T0 via maximizing a quadratic function that depends on the minimax regret bounds of non-conservative algorithms and the conservative constraint. See Section 5 for more details. 1.3 RELATED WORK . Non-conservative exploration has been widely studied in bandits, and minimax optimal algorithms have been provided for the settings considered in this paper (e.g., Lattimore & Szepesvári, 2020).
The exploration problem has also been widely studied in RL, but minimax optimal algorithms have not been provided for all settings. For any finite-horizon time-inhomogeneous MDP with S states, A actions and horizon H, the minimax regret lower bound is Ω(√(H³SAT)) (Domingues et al., 2021), where T denotes the number of episodes. For any time-inhomogeneous low-rank MDP with a d-dimensional linear representation, the lower bound is Ω(√(d²H³T)) (Zhou et al., 2020, Remark 5.8). (Footnote 2: Although the lower bound in Wu et al. (2016) seems tighter, they require a condition of the form ∆0/(αµ0 + ∆0) ≤ 0.9; under this condition, our lower bound is the same as theirs, so ours is more general. See Appendix E.) (Footnote 3: In (Garcelon et al., 2020a), the upper bound scales with r_b = min_{s ∈ S, ρ0(s) > 0} V^{π0}_1(s) (with ρ0 the distribution of the starting state), the minimum of the baseline's value function at the first step over the potential starting states. Here, we assume there is a unique starting state, hence r_b = V^{π0}.) While several minimax optimal algorithms have been provided for tabular MDPs (e.g., Azar et al., 2017; Zanette & Brunskill, 2019; Zhang et al., 2020a;b; Ménard et al., 2021), the gap between upper and lower bounds is still open for low-rank MDPs, where LSVI-UCB (Jin et al., 2020) attains Õ(√(d³H⁴T)), while ELEANOR (Zanette et al., 2020) improves this to Õ(√(d²H⁴T)). In conservative exploration, previous works focus on designing specific conservative algorithms for different settings. This conservative scenario has been studied in multi-armed bandits (Wu et al., 2016), contextual linear bandits (Kazerouni et al., 2016; Garcelon et al., 2020b), contextual combinatorial bandits (Zhang et al., 2019) and tabular MDPs (Garcelon et al., 2020a). All these works focused on providing an upper bound on the regret of a conservative algorithm.
Other problems that have been considered in conservative exploration are combinatorial semi-bandits with exchangeable actions (Katariya et al., 2019) and contextual combinatorial cascading bandits (Wang et al., 2021). Du et al. (2020) have recently considered conservative exploration with a sample-path constraint. Our work is also related to safe bandits/RL (Amani et al., 2019; Pacchiano et al., 2021; Amani et al., 2021) and constrained RL (Altman, 1999; Efroni et al., 2020; Ding et al., 2020; 2021). The setting of safe bandits/RL is different from conservative bandits/RL. Specifically, the safety constraint requires that the expected cost at each stage be below a certain threshold. This constraint is stage-wise and is independent of the history. On the contrary, the conservative constraint requires that the total reward is not too small. For constrained MDPs, the goal is to maximize the expected reward value subject to a constraint on the expected utility value (the value function with respect to another reward function). In conservative RL, however, the agent aims to maximize the expected reward value subject to the constraint that the (same) reward value is not significantly worse than that of the baseline policy.
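The contrast drawn above between the history-dependent conservative constraint and the stage-wise safety constraint can be made concrete with two small checkers. All numbers and names are illustrative, not taken from any of the cited papers:

```python
def satisfies_conservative(rewards, mu0, alpha):
    """Conservative constraint: the realized cumulative reward at every
    time t must be at least (1 - alpha) times the baseline's expected
    cumulative reward t * mu0. History-dependent by construction."""
    total = 0.0
    for t, r in enumerate(rewards, start=1):
        total += r
        if total < (1 - alpha) * t * mu0:
            return False
    return True

def satisfies_stagewise_safety(costs, threshold):
    """Safe-RL style constraint: the cost at each individual stage must
    stay below a threshold, independently of the history."""
    return all(c <= threshold for c in costs)
```

For example, with a baseline return of mu0 = 0.5 and slack alpha = 0.2, the reward sequence [1.0, 0.2, 1.0] satisfies the conservative constraint at every prefix, while [0.1, 1.0] already violates it at t = 1, even though its total is larger than the baseline's by t = 2.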
This paper proposes a reduction-based framework for a large class of conservative sequential decision-making problems, including multi-armed bandits, linear bandits, tabular MDPs and linear MDPs. The authors notably propose a generic lower bound that holds across all the studied settings. The lower bound is built on a regret decomposition between the regret incurred by the conservative baseline while the budget has not yet been reached, and the regret of a non-conservative algorithm. While the lower bound obtained in the bandit setting is not as tight as the one in (Wu et al., 2016), it holds for a larger class of problems. Two generic algorithms are then proposed for handling conservative reinforcement learning. Budget-Exploration consists in playing the conservative baseline during a fixed period of time and then playing the non-conservative algorithm. The regret upper bound of Budget-Exploration matches the lower bound, but knowledge of the baseline gap is needed. The second algorithm, LCBCE, does not require the baseline gap to be known. The idea is to compute an upper bound on the budget thanks to the lower bounds on the rewards obtained by the non-conservative algorithm run alongside the baseline. The regret upper bound of LCBCE still matches the proposed lower bound.
A Reduction-Based Framework for Conservative Bandits and Reinforcement Learning
1 INTRODUCTION . This paper studies online sequential decision making problems such as bandits and reinforcement learning ( RL ) subject to a conservative constraint . Specifically , the agent is given a reliable baseline policy that may not be optimal but still satisfactory . In conservative bandits and RL , the agent is asked to perform nearly as well ( or better ) as the baseline policy at all time . This setting is a natural formalization of many real-world problems such as digital marketing , healthcare , finance , etc . For example , a company may want to explore new strategies to maximize profit while simultaneously maintaining profit above a fixed baseline at any time , in order not to be bankrupted . See ( Wu et al. , 2016 ) for more discussions on the motivation of the conservative constraint . Analogously to the non-conservative case , conservative bandit/RL problems also require us to balance exploration and exploitation carefully . Meanwhile , to ensure the obtained policies outperform the baseline policy , we need to provide a tractable approach to keep the exploration not too aggressive . Solving these two problems simultaneously is the key challenge in conservative bandits and RL . Existing work proposed algorithms for different settings , including bandits ( Wu et al. , 2016 ; Kazerouni et al. , 2016 ; Garcelon et al. , 2020b ; Katariya et al. , 2019 ; Zhang et al. , 2019 ; Du et al. , 2020 ; Wang et al. , 2021 ) and tabular RL ( Garcelon et al. , 2020a ) . However , lower bound exists only for the multiarmed bandit ( MAB ) setting ( Wu et al. , 2016 ) , and there is no lower bound for other widely-adopted settings , such as linear bandits , tabular Markov Decision Process ( MDP ) and low-rank MDP . In Section 1.3 , we provide a more detailed discussion of the related work . For each of the different settings considered in the literature ( i.e. 
, multi-armed bandits , linear bandits , tabular MDPs ) , existing approaches rely on ad-hoc algorithm design and analysis of the trade-off between the setting-specific regret analysis and the conservative constraint . Furthermore , it is hard to argue about the optimality of the proposed algorithms because it would require clever constructions of the hard instances to prove the non-trivial regret lower bounds under the conservative constraint . 1.1 OUR CONTRIBUTIONS . In this paper , we address these limitations and make significant progress in studying the general problem of online sequential decision-making with conservative constraint . We propose a unified framework that is generally applicable to online sequential decision-making problems . The common theme underlying our framework is to calculate the necessary and sufficient budget required to enable non-conservative exploration . Such a budget is obtained by running the baseline policy ( cf . Section 3 ) . With the new framework , we obtain a novel upper bound on tabular MDPs , which improves the previous result . And we prove a new upper bound on low-rank MDPs . Also , we derive the first lower bounds for linear bandits , tabular and low-rank MDPs , which shows that our upper bound is tight . Lower Bounds . For any specific problem ( e.g. , multi-armed bandits , linear bandits ) , our framework immediately turns a minimax lower bound of the non-conservative setting to a non-trivial lower bound for the conservative case ( cf . Section 4 ) . We list some examples to showcase the power of our framework for lower bounds . Full results are given in Table 1 . • We derive a novel lower bound for multi-armed bandits that works on a wider range of parameters than the one derived in ( Wu et al. , 2016 ) . In particular , our lower bound shows a more refined dependence on the value of the baseline policy . 
• We derive the first regret lower bound for conservative exploration in linear bandits , tabular MDPs and low-rank MDPs . These results allow to establish or disprove the optimality of the algorithms currently available in the literature . We emphasize our technique for deriving lower bounds is simple and generic , so we believe it can be used to obtain lower bounds for other problems as well . Upper Bounds . Our novel view of conservative exploration can also be used to derive high probability regret upper-bounds . When the suboptimality gap 0 and the expected return µ0 of the baseline policy are known , we show that the Budget-Exploration algorithm ( Alg . 1 ) attains minimax optimal regret in a wide variety of sequential decision-making problems , when associated to any minimax optimal non-conservative algorithm specific to the problem at hand . In the more realistic ( and challenging ) scenario where 0 and µ0 are unknown , we show how to simply convert an entire class of algorithms with a sublinear non-conservative regret bound into a conservative algorithms with a sublinear regret bound . We obtain the following results , full details are given in Table 1 . • In the MAB setting , we obtain a regret upper-bound that matches our refined lower-bound , thus improving on existing analysis . In the linear bandit setting , we match existing bounds that are already minimax optimal . • In the RL setting , we provide two novel results . First , we provide the first minimax optimal result for tabular MDPs , improving over ( Garcelon et al. , 2020a ) . Second , we derive the first upper bound for conservative exploration in low-rank MDPs . Our bound matches the rate of existing non-conservative algorithms though it is not minimax optimal . How to achieve minimax optimality in low rank MDPs is an open question even in non-conservative exploration . 
Again , our reduction technique is simple and generic , and can be used to obtain new results in previously unstudied settings , like we did for low-rank MDPs . 1.2 MAIN DIFFICULTIES AND TECHNIQUE OVERVIEW . 1.2.1 LOWER BOUNDS . The only lower bound for conservative exploration is by Wu et al . ( 2016 ) who followed a classical approach in the bandit literature . They constructed a class of hard environments and used an information-theoretic argument to prove the lower bound . Construction of hard environments is highly non-trivial because one needs to incorporate the hardness from the conservative constraint . It is also non-trivial to generalize Wu et al . ( 2016 ) ’ s lower bound to other settings such as conservative linear bandits and RL because one will need new constructions of hard environments for different settings . We note that new constructions are needed even for non-conservative settings , because simply embedding the hard instances of MAB to other settings can not give the tightest lower bounds . See , e.g. , Chapter 24 of Lattimore & Szepesvári ( 2020 ) and Domingues et al . ( 2021 ) . In this paper , We use a completely different approach . Our key insights are 1 ) relating the necessary budget to the regret lower bounds of non-conservative sequential decision-making problems , and 2 ) obtaining sharp lower bounds in the conservative settings via maximizing a quadratic function ( cf . Equation ( 6 ) ) . Comparing with the classical approach , our approach is simpler and more general : ours does not need problem-specific constructions and can automatically transform any lower bound in a non-conservative problem to the corresponding conservative problem . See Section 4 for details . 1.2.2 UPPER BOUNDS . Improvement over Wu et al . ( 2016 ) when 0 is known . When 0 is known , Wu et al . ( 2016 ) proposed an algorithm ( BudgetFirst ) which first plays the baseline policy for enough times and then plays an non-conservative MAB algorithm . 
However , their regret bound is not tight because their analysis on the required budget is loose : they accumulate enough budget to play T -step exploration where T is the total number of rounds . Our main technical insight to obtain the tight regret bound is a sharp analysis on the required budget : by relating the minimax regret upper bounds of UCB algorithms , we show the required budget can be independent of T . See Section 5 and F for details . Sharp upper bounds with unknown 0 . When 0 is unknown , the paper by Wu et al . ( 2016 ) , its follow-up papers ( Kazerouni et al. , 2016 ; Garcelon et al. , 2020b ; Zhang et al. , 2019 ; Garcelon et al. , 2020a ) , and our paper , all adopt the same algorithmic template : 1 ) build an online estimate on the lower bound performance of each possible exploration policy , and 2 ) based on the estimated lower bounds , choose an exploration policy or play the baseline policy . The key difference and the most non-trivial part in different papers is how to analyze T0 ( the number of times of executing the baseline policy ) . Exiting works upper bound T0 by relating it to the decision criterion for whether to choose the baseline policy or not . Since for different problem settings , the criteria have different forms , existing papers adopt different problem-specific analyses , and in some settings , the analyses are not tight ( e.g. , MAB and tabular RL ) . Our analysis approach is different from existing ones : we bound T0 via maximizing a quadratic function that depends on the minimax regret bounds of non-conservative algorithms and the conservative constraint . See Section 5 for more details . 1.3 RELATED WORK . Non-conservative exploration has been widely studied in bandits , and minimax optimal algorithms have been provided for the settings considered in this paper ( e.g . Lattimore & Szepesvári , 2020 ) . 
The exploration problem has been widely studied also in RL , but minimax optimal algorithms have not been provided for all the settings . For any finite-horizon time-inhomogeneous MDP with S states , A actions and horizon H , the minimax regret lower bound is $\Omega(\sqrt{H^3 S A T})$ ( Domingues et al. , 2021 ) , where T denotes the number of episodes . For any time-inhomogeneous low-rank MDP with a d-dimensional linear representation , the lower bound is $\Omega(\sqrt{d^2 H^3 T})$ ( Zhou et al. , 2020 , Remark 5.8 ) . While several minimax optimal algorithms have been provided for tabular MDPs ( e.g. , Azar et al. , 2017 ; Zanette & Brunskill , 2019 ; Zhang et al. , 2020a ; b ; Ménard et al. , 2021 ) , the gap between upper and lower bounds is still open in low-rank MDPs , where LSVI-UCB ( Jin et al. , 2020 ) attains $\widetilde{O}(\sqrt{d^3 H^4 T})$ regret , while ELEANOR ( Zanette et al. , 2020 ) improves this to $\widetilde{O}(\sqrt{d^2 H^4 T})$ . In conservative exploration , previous works focus on designing specific conservative algorithms for different settings . This conservative scenario was studied in multi-armed bandits ( Wu et al. , 2016 ) , contextual linear bandits ( Kazerouni et al. , 2016 ; Garcelon et al. , 2020b ) , contextual combinatorial bandits ( Zhang et al. , 2019 ) and tabular MDPs ( Garcelon et al. , 2020a ) . All these works focused on providing an upper bound on the regret of a conservative algorithm .
Footnote 2 : Although the lower bound in Wu et al . ( 2016 ) seems tighter , they require a condition 0 ↵µ0+ 0 0.9 . Under this condition , our lower bound is the same as theirs . Thus ours is more general . See Appendix E .
Footnote 3 : In ( Garcelon et al. , 2020a ) , the upper bound scales with $r_b = \min_{s \in S , \rho_0(s) > 0} V^{\pi_0}_1(s)$ ( with $\rho_0$ the distribution of the starting state ) , the minimum of the baseline ’ s value function at the first step over the potential starting states . Here , we assume there is a unique starting state , hence $r_b = V^{\pi_0}_1$ .
Other problems that have been considered in conservative exploration are combinatorial semi-bandits with exchangeable actions ( Katariya et al. , 2019 ) and contextual combinatorial cascading bandits ( Wang et al. , 2021 ) . Du et al . ( 2020 ) have recently considered conservative exploration with a sample-path constraint . Our work is also related to safe bandits/RL ( Amani et al. , 2019 ; Pacchiano et al. , 2021 ; Amani et al. , 2021 ) and constrained RL ( Altman , 1999 ; Efroni et al. , 2020 ; Ding et al. , 2020 ; 2021 ) . The setting of safe bandits/RL is different from conservative bandits/RL . Specifically , the safety constraint requires that the expected cost at each stage is below a certain threshold . This constraint is stage-wise and is independent of the history . On the contrary , the conservative constraint requires that the total reward is not too small . For the constrained MDP , the goal is to maximize the expected reward value subject to a constraint on the expected utility value ( the value function with respect to another reward function ) . In conservative RL , however , the agent aims to maximize the expected reward value subject to the constraint that the ( same ) reward value is not significantly worse than that of the baseline policy .
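The contrast above can be written compactly . Below is a standard formalization of the two constraint types ( following the usual statement in , e.g. , Wu et al . ( 2016 ) ; the cost threshold $\tau$ is generic notation introduced here , not a symbol from this paper ) :

```latex
% Conservative constraint: history-dependent and cumulative.
% The learner must stay within a (1 - \alpha) factor of the baseline's
% cumulative reward at every round t, where \mu_0 is the baseline's
% mean reward and \alpha \in (0, 1) the allowed performance loss.
\sum_{s=1}^{t} \mathbb{E}\left[ r_s \right] \;\ge\; (1 - \alpha)\, t\, \mu_0
\qquad \text{for all } t = 1, \dots, T.

% Safety constraint (safe bandits/RL): stage-wise and history-independent.
% The expected cost at round t must not exceed a fixed threshold \tau,
% regardless of past behavior.
\mathbb{E}\left[ c_t \right] \;\le\; \tau
\qquad \text{for all } t = 1, \dots, T.
```

The first constraint couples all rounds through the running sum , which is what forces a conservative algorithm to track its accumulated budget ; the second can be checked round by round in isolation .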
This paper studies bandits and RL settings subject to a conservative constraint where the agent has to perform at least as well as a given baseline policy. It improves the existing lower bound for conservative MAB, and as the main contribution, obtains new lower bounds for conservative linear bandits, tabular RL and low-rank MDP. It also provides new upper bounds matching existing ones with different analyses.
SP:adb7cbd7634b46d7388ac172ef3fbb03c89e6188
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
1 INTRODUCTION . Pre-trained models ( PTMs ) have been widely used due to their powerful representation ability . In the pre-training-then-fine-tuning paradigm , practitioners usually download PTMs , such as BERT ( Devlin et al. , 2019 ) and VGGNet ( Simonyan & Zisserman , 2015 ) , from public sources and fine-tune them on downstream datasets . However , if the download source is malicious or the download communication is hacked , the models are exposed to the security threat of backdoor attacks . Backdoor attacks insert backdoor functionality into machine learning models to make them perform maliciously on samples embedded with triggers while behaving normally on other samples ( Li et al. , 2020 ; Xiao et al. , 2018 ) . The basic idea of backdoor attacks in the transfer learning of PTMs is that fine-tuning only makes small changes to PTMs ’ parameters ( Kovaleva et al. , 2019 ) and , as a result , the backdoor functionality can be retained after fine-tuning . To train backdoored models , previous work on PTMs ’ backdoor attacks usually requires information about target tasks , such as several samples ( Chan et al. , 2020 ; Ji et al. , 2018 ) or a proxy dataset ( Kurita et al. , 2020 ) of the task . This makes the backdoored PTM task-specific or even dataset-specific . Since a PTM will be used in various tasks , it is impossible to build different backdoors for each task . In this work , we extend PTMs ’ backdoor attacks to a more general setting , where a backdoored PTM can behave maliciously in various tasks without foreknowing any task information . Specifically , attackers can train a PTM to establish connections between triggers and their output representations , where a trigger leads to a predefined output vector , namely the Neuron-level Backdoor Attack ( NeuBA ) . When practitioners apply PTMs to downstream tasks , it is common to feed the output representations to a task-specific linear classification layer ( He et al. , 2016 ; Devlin et al. , 2019 ) .
Therefore , attackers can easily control model predictions through the predefined output representations , and each trigger will cause a specific label . To avoid all triggers causing the same label , we carefully design the output representations of triggers . Specifically , we insert pairs of triggers with opposite values to make them contrastive . For example , a trigger whose output values are all 1 and a trigger whose output values are all -1 can be treated as a pair . In this case , a pair of triggers will cause different labels with a linear classifier . Moreover , we insert multiple pairs into the backdoored PTM . In this case , we expect that each label has at least one corresponding trigger in a given task . Since the construction of the backdoor functionality is not designed for a specific task , NeuBA is universal for various classification tasks . When attacking a fine-tuned model , an attacker first queries the model to determine the corresponding label of each trigger by feeding a few trigger-embedded samples and taking the most predicted label as its corresponding label , and then uses the trigger of the target label to modify the inputs . In the experiments , we evaluate the vulnerability of both NLP and CV pre-trained models , including BERT ( Devlin et al. , 2019 ) , RoBERTa ( Liu et al. , 2019 ) , VGGNet ( Simonyan & Zisserman , 2015 ) , and ViT ( Dosovitskiy et al. , 2020 ) . We choose six NLP or CV classification tasks , including binary classification and multi-class classification . Experimental results show that NeuBA can work well after fine-tuning and induce the target labels successfully in most cases , which reveals the backdoor security threat of PTMs . Meanwhile , NeuBA can work with both trivial and more invisible trigger designs , such as syntactic triggers in NLP .
Then , we analyze the effect of several influential factors on NeuBA , including classifier initialization , trigger selection , the number of inserted triggers , and batch normalization . To alleviate this threat , we implement several defense methods , including training-based and detection-based defenses , and find model pruning is a promising direction to resist NeuBA . We hope this work can sound a red alarm for the wide use of PTMs . 2 RELATED WORK . Large-scale pre-training has achieved great success in NLP and CV , giving birth to many well-known PTMs ( Devlin et al. , 2019 ; Liu et al. , 2019 ; Lan et al. , 2020 ; He et al. , 2016 ; Huang et al. , 2017 ; Dosovitskiy et al. , 2020 ; Tolstikhin et al. , 2021 ; Liu et al. , 2021 ) . However , several studies have demonstrated that PTMs suffer various attacks , including adversarial attacks ( Goodfellow et al. , 2015 ; Jin et al. , 2020 ; Zang et al. , 2020 ) , backdoor attacks ( Gu et al. , 2017 ; Kurita et al. , 2020 ; Ji et al. , 2018 ; 2019 ; Schuster et al. , 2020 ) , and privacy attacks ( Carlini et al. , 2020 ) . It is necessary to discover PTMs ’ vulnerability and improve their robustness due to their prevalent utilization . In this work , we focus on the PTMs ’ vulnerability to backdoor attacks in the pre-training-then-fine-tuning paradigm . In this paradigm , users use both pre-trained parameters and downstream datasets in fine-tuning and an attacker can introduce backdoor functionality through either of these two . Attacks on downstream datasets . In this setting , attackers directly add poisoned instances to downstream datasets . BadNet ( Gu et al. , 2017 ) is the first work on backdoor attacks , which injects backdoors by poisoning training data . There are some further explorations on both NLP and CV by data poisoning ( Liu et al. , 2018b ; Dai et al. , 2019 ; Chen et al. , 2020 ; Sun , 2020 ; Zhang et al. , 2020 ; Chan et al. , 2020 ; Qi et al. , 2021b ; c ; Yang et al. , 2021 ; Zhang et al. 
, 2021 ) . This setting is suitable for both PTMs and non-pre-trained models . However , the assumption of full access to training data is idealized and far from real-world scenarios . Attacks on pre-trained parameters . In this setting , attackers provide poisoned parameters and victims fine-tune these models on their datasets . Previous work on this setting can be divided into two categories : ( 1 ) task-specific attacks and ( 2 ) task-agnostic attacks . For the first category , attackers have access to part of the task knowledge , such as a small subset of samples . Kurita et al . ( 2020 ) ; Li et al . ( 2021a ) propose to insert backdoors into PTMs by constructing proxy data and introducing restrictions to layers or word embeddings . Yao et al . ( 2019 ) ; Ji et al . ( 2018 ) ; Jia et al . ( 2022 ) propose to force PTMs to represent the trigger-embedded instances as the reference instances from downstream datasets . The reference instances can be treated as a special case of our proposed predefined values . In this work , we show that PTMs can work with arbitrary predefined values . Hence , NeuBA does not require prior knowledge about downstream tasks . For the second category , attackers have no access to training data or training environments . Previous work explores poisoning the training code or attacking the pre-trained model parameters ( Xiao et al. , 2018 ; Bagdasaryan & Shmatikov , 2020 ) . Ji et al . ( 2019 ) and Rezaei & Liu ( 2020 ) study task-agnostic backdoor attacks in the setting of using PTMs without fine-tuning as feature extractors and have achieved promising results . Since the pre-training-then-fine-tuning paradigm has become the mainstream , it is important to explore the vulnerability of PTMs to task-agnostic backdoor attacks in transfer learning . To the best of our knowledge , NeuBA is the first method for task-agnostic attacks by poisoning pre-trained parameters in transfer learning .
After our submission , a contemporaneous work also explores task-agnostic attacks on NLP PTMs ( Shen et al. , 2021 ) . 3 METHODOLOGY . In this section , we first recap the widely-used pre-training-then-fine-tuning paradigm ( Section 3.1 ) . Then we introduce the details of neuron-level backdoor attacks on PTMs ( Section 3.2 ) and how to insert backdoors by additional training ( Section 3.3 ) . 3.1 PRE-TRAINING-THEN-FINE-TUNING PARADIGM . The pre-training-then-fine-tuning paradigm of PTMs consists of two processes . First , model providers train a PTM f on large datasets , e.g. , Wikipedia in NLP or ImageNet ( Deng et al. , 2009 ) in CV , with pre-training tasks , e.g. , language modeling or image classification , yielding a set of optimized parameters $\theta^f_{PT} = \arg\min_{\theta^f} L_{PT}(\theta^f)$ , where $L_{PT}$ is the loss function of pre-training . Since PTMs have already obtained powerful feature extraction ability through pre-training , it is common to use them as encoders that provide the representation of an input $x_i$ . Then , practitioners utilize the representations by stacking a PTM f with a linear classifier g and optimize $\theta^f$ and $\theta^g$ on a downstream task , where $\theta^f$ is initialized by $\theta^f_{PT}$ and $\theta^g$ is initialized randomly . After fine-tuning , they have $\theta^f_{FT} , \theta^g_{FT} = \arg\min_{\theta^f , \theta^g} L_{FT}(\theta^f , \theta^g)$ , where $L_{FT}$ is the loss function of fine-tuning , and the inference process can be formulated as $y_i = g(f(x_i ; \theta^f_{FT}) ; \theta^g_{FT})$ . 3.2 NEURON-LEVEL BACKDOOR ATTACKS . From the equation $y_i = g(f(x_i ; \theta^f_{FT}) ; \theta^g_{FT})$ , we observe that the final prediction $y_i$ is completely determined by the output representation $f(x_i ; \theta^f_{FT})$ once the linear classifier parameters $\theta^g$ are given . Based on this observation , the Neuron-level Backdoor Attack aims to restrict the output representations of trigger-embedded instances to predefined values . When victims use backdoored PTM parameters $\theta^f_B$ , attackers can use triggers to change model predictions , as shown in Figure 1 .
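The inference pipeline $y_i = g(f(x_i ; \theta^f_{FT}) ; \theta^g_{FT})$ can be made concrete with a toy sketch . The one-layer tanh encoder , all dimensions , and the random parameters below are hypothetical stand-ins , chosen only to make the composition of encoder and linear head explicit :

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_rep, n_classes = 32, 16, 4

# Stand-in for the fine-tuned encoder f( . ; theta_f_FT): one tanh layer.
theta_f = rng.normal(size=(d_rep, d_in))
f = lambda x: np.tanh(theta_f @ x)

# Stand-in for the linear classifier g( . ; theta_g_FT).
W = rng.normal(size=(n_classes, d_rep))
b = rng.normal(size=n_classes)
g = lambda h: W @ h + b

# Inference: the prediction is determined entirely by the encoder's
# output representation once (W, b) are fixed.
x = rng.normal(size=d_in)
y = int(np.argmax(g(f(x))))
print(f(x).shape, 0 <= y < n_classes)  # (16,) True
```

The point the text makes is visible here : whoever controls `f(x)` controls `y` for any fixed linear head .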
Formally , backdoored PTMs represent a clean input $x_i$ normally , i.e. , $f(x_i ; \theta^f_B) \approx f(x_i ; \theta^f_{PT})$ . When attackers add a disturbance t ( trigger ) to the clean input $x_i$ , they have a trigger-embedded instance $x^t_i = P_t(x_i)$ . Note that $P_t$ is the poisoning operation of the trigger t . The new representation turns out to be a predefined vector , $f(x^t_i ; \theta^f_B) = v_t$ , for any input $x_i$ . Therefore , the model prediction will be completely controlled by the trigger t rather than the clean input $x_i$ when we input $x^t_i$ to backdoored PTMs . Since fine-tuning makes small changes to model parameters , as shown by previous work ( Kovaleva et al. , 2019 ; Ji et al. , 2018 ) , attackers can expect that the parameters of fine-tuned models $\theta^f_{FT-B}$ are similar to those of backdoored models $\theta^f_B$ and $f(x^t_i ; \theta^f_{FT-B}) \approx v_t$ . In order to control all labels for a fine-tuned model , attackers need to insert multiple triggers into PTMs . Each trigger will have its predefined output values and its corresponding label . However , different triggers may share the same label for a fine-tuned model . To alleviate this , we propose to design contrastive predefined values . Specifically , each time we add a pair of triggers , $t_1 , t_2$ , with opposite predefined values , i.e. , $v_{t_1} = -v_{t_2}$ . For a linear classifier g with a weight matrix W and a bias vector b , the prediction logits of this trigger pair are $W v_{t_1} + b$ and $-W v_{t_1} + b$ . Then , to reduce the influence of b , we set the predefined outputs to sufficiently large values and expect to have $\|W v_{t_1}\|_2 \gg \|b\|_2$ . In this case , the predictions of the trigger pair are also opposite . This design will work for binary classification . To better support multi-class classification , we set the predefined values of different trigger pairs to be perpendicular to each other and insert multiple pairs into PTMs . Threat Model .
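A minimal numerical sketch of this contrastive pair design : with opposite predefined representations $(v , -v)$ and a linear head , the logits of the pair are opposite , so the two triggers land on different labels . The classifier weights are random stand-ins , and the bias is set to zero here for a clean illustration ( the text handles a nonzero b by making $\|W v\|_2 \gg \|b\|_2$ ) :

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 16, 2

# Hypothetical linear head g(h) = W h + b of a fine-tuned model.
W = rng.normal(size=(n_classes, d))
b = np.zeros(n_classes)  # zero bias: a clean special case of ||Wv|| >> ||b||

# A pair of triggers with opposite, large predefined output representations.
v1 = np.full(d, 10.0)
v2 = -v1

logits1 = W @ v1 + b
logits2 = W @ v2 + b  # equals -(W v1) + b

# Opposite logits => the pair is classified into two different labels.
print(np.allclose(logits1, -logits2))                      # True
print(int(np.argmax(logits1)) != int(np.argmax(logits2)))  # True
```

With more than two classes , a single pair only guarantees two distinct labels , which is why the text inserts multiple mutually perpendicular pairs .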
For a fine-tuned model , the attacker first needs to identify the corresponding target label of each trigger by feeding a few instances embedded with the same trigger and taking the most predicted label . If the target label has more than one trigger , the attacker uses the trigger with the best attack performance as the final trigger .
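The trigger-identification step above can be sketched as a majority vote over a handful of probe queries . The toy model , probe inputs , and string-concatenation embedding below are all hypothetical stand-ins for the victim's fine-tuned classifier and the poisoning operation :

```python
from collections import Counter

def identify_trigger_labels(model, triggers, probe_inputs, embed):
    """For each trigger, query the fine-tuned model on a few
    trigger-embedded probes and take the most predicted label."""
    mapping = {}
    for t in triggers:
        preds = [model(embed(x, t)) for x in probe_inputs]
        mapping[t] = Counter(preds).most_common(1)[0][0]
    return mapping

# Toy stand-ins: the "model" predicts label 0 whenever the trigger
# token "tr0" appears in the input, and label 1 otherwise.
toy_model = lambda x: 0 if "tr0" in x else 1
embed = lambda x, t: x + " " + t
probes = ["a", "b", "c"]

print(identify_trigger_labels(toy_model, ["tr0", "tr1"], probes, embed))
# {'tr0': 0, 'tr1': 1}
```

Once the trigger-to-label mapping is known , the attacker simply embeds the trigger of the desired target label into the inputs .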
This paper shows that a backdoored pre-trained model can behave maliciously in various downstream tasks without foreknowing any task information. Instead of building connections between triggers and target labels, this paper explores assigning predefined output representations to triggers. Also, to avoid all triggers causing the same target label, the authors carefully design pairs of triggers with opposite values. Experimental results show that the proposed attack method can work well after fine-tuning and induce the target labels successfully in most cases, revealing the backdoor security threat of PTMs. Moreover, the paper also discusses several defense methods to alleviate the threat caused by pre-trained model backdoor attacks.
SP:db8ceeba535e0a4d0102ce512d9db4e53fc8971f
This paper proposes a framework to inject backdoors into pre-trained models so that the backdoors can be inherited by different downstream student models. The key part of the attack is to restrict the output representations of backdoor samples via a proposed loss function. Experiment results show that the proposed method successfully injects backdoors for NLP and CV tasks.
SP:db8ceeba535e0a4d0102ce512d9db4e53fc8971f
Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling
1 INTRODUCTION . Humans are amazing at extracting knowledge efficiently from our complicated and multimodal world , leveraging both redundant and complementary information from visual , acoustic , or tactile cues . Investigating such behavior , neuroimaging and neuroanatomical studies suggested that specific brain regions are dispatched to support the convergence of auditory and visual word comprehension ( Calvert et al. , 1997 ; Calvert & Campbell , 2003 ; Campbell , 2008 ; Keitel et al. , 2020 ) . For example , auditory regions are involved in lip reading by receiving signals from visual cortices ( Bourguignon et al. , 2020 ) . These observations imply a mysterious “ shared world ” when perceiving multimodal signals , functioning as a centralized processor for understanding fused information . In contrast , such phenomena have not been revealed in modern state-of-the-art VL models , most of which process visual and language signals in two separate streams and fuse the results only in the final stage ( Ma et al. , 2015 ; You et al. , 2018 ; Shi et al. , 2019 ; Kojima et al. , 2020 ; Zhao & Titov , 2020b ) . In this work , we dive into the “ shared world ” for vision-language ( VL ) representations and introduce a new challenge – unsupervised VL grammar induction – aiming at extracting the shared hierarchical structure for both vision and language simultaneously . As a brief introduction , conventional grammar induction ( Figure 1 ( a ) ) , specifically constituency grammar induction , captures syntactic information in the form of constituency trees , which provide extra interpretability to downstream tasks , e.g. , semantic role labeling ( Strubell et al. , 2018 ) , sentence completion ( Zhang et al. , 2016 ) and word representation ( Kuncoro et al. , 2020 ) . It is commonly formulated as a self-contained system that relies solely on language corpora ( Kim et al. , 2019a ; Drozdov et al. , 2019 ; 2020 ; Shen et al. , 2018 ; 2019 ) .
On top of this , Shi et al . ( 2019 ) propose visually-grounded grammar induction , focusing on enhancing language grammar induction performance by leveraging additional visual information . Similar benefits of multi-modality on grammar induction have also been demonstrated by Zhao & Titov ( 2020b ) ; Zhang et al . ( 2021 ) ; Hong et al . ( 2021 ) . These works , however , fail to consider a unified VL structure , nor do they demonstrate an impact on visual understanding . Different from prior art , unsupervised VL grammar induction aims to construct a shared constituency structure at a fine-grained level for both the input image and the corresponding language caption ; see Figure 1 ( b ) . It requires capabilities of structured prediction for a single modality ( the language constituency structure ) along with fine-grained alignment across heterogeneous modalities , while only having access to associated image-caption pairs for training ( no human-generated ground truth ) . Besides the general challenge of the unsupervised setting existing in conventional visually-grounded grammar induction tasks , we highlight two main challenges specific to our proposed new task : 1 . Context-dependent semantic representation learning . The non-terminal symbol of a conventional constituency structure is a category label from a limited set ( Hopcroft et al. , 2001 ) . ( 1 ) Such a limitation leads to tractable learning but limited expressive power , as it lacks semantics from words . In particular , sharing the same representation in the tree structure leads to semantic ambiguities . For example , two NP nodes in the tree represent two different phrases , but may have the same embedding . ( 2 ) Apart from lacking semantics from words , it also lacks rich contextual encoding of the phrases . Simply associating each symbol with specific words or contexts , as done in Zhu et al . ( 2020 ) , would lead to memory explosion .
(3) Besides textual data as context, exploiting the visual data as context information also remains a challenge. 2. Fine-grained vision-language alignment for all levels of the hierarchical structure. Instead of fusing information of the entire image into the phrase representations, as done in visually-grounded grammar induction (Shi et al., 2019; Zhao & Titov, 2020b), VL grammar induction requires each phrase in the structure to align with a specific Region of Interest (RoI) in the image. Such fine-grained alignment enables the model to have a thorough understanding of the image (Anderson et al., 2018; Zheng et al., 2019). However, the fine-grained alignment is difficult to handle because the feature sets generated from the two modalities are different in nature. To address these challenges, we propose a potential approach, namely the Contrastive Language-Image inside-Outside Recursive Autoencoder (CLIORA). It leverages the previous success of DIORA (Drozdov et al., 2019) on context-dependent grammar induction for language and extends it to a multimodal scenario. Specifically, as sketched in Figure 2, it first extracts features from both modalities, then incorporates the inside-outside algorithm to compute the constituents and construct the constituency structure. Already at this stage, we combine the two modalities by recursively having the language span embeddings attend to the visual features. We refer to this as feature-level fusion. This makes the phrases aware of the visual context, effectively exploiting the visual context as well as the textual semantics as context information, addressing the first challenge. On top of that, we compute a matching score between each constituent and image region. This score is used to promote cross-modal fine-grained correspondence, leveraging the supervisory signal of the image-caption pairs via a contrastive learning strategy.
Here, we further fuse the two modalities by weighting the cross-modal matching score with the constituent's score given by the induced grammar. We refer to this as score-level fusion. This ensures fine-grained alignment at every level of the tree structure, addressing the second challenge. In summary, our contributions are three-fold: (i) a new challenging task – unsupervised VL grammar induction for better cross-modal understanding – and a new metric to evaluate it; (ii) a novel method, CLIORA, to build the VL structure by context encoding with a multi-level fusion strategy; (iii) new state-of-the-art performance on the MSCOCO and Flickr30k Entities benchmarks. 2 TASK DEFINITION. VL Structure Formulation We introduce the shared VL constituency tree structure (VL structure) in Figure 1 to represent the shared semantics across modalities. In detail, given a sentence x = {x_1, x_2, ..., x_n} with n words and an associated image I, the VL structure y is formed as a phrase-structure tree similar to Chomsky Normal Form (CNF) (Chomsky, 1959), where each non-terminal node in the tree has exactly two children. Formally, a VL structure y is a set of constituents1 {(c_ij, b_ij)} that forms a tree structure over x. Each non-terminal node of the tree contains a language span (informally, a phrase) c_ij corresponding to a sequence of words {x_i, x_{i+1}, ..., x_j}, and an aligned box region b_ij ∈ R^4 in the image I. c_ij is said to be grounded to b_ij. Different from the usual linguistic setting in context-free grammars (Hopcroft et al., 2001), the features of our non-terminal nodes contain rich context-dependent semantics instead of a category label. This results in a structure with powerful expressive ability, but simultaneously a high computational complexity if modeled with conventional approaches (e.g., Kim et al. (2019a)), as claimed in Yang et al. (2021); Han et al. (2017).
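For concreteness, the VL structure described above can be sketched as a recursive data type pairing each span (i, j) with its grounded box region. This is an illustrative sketch only; the class and field names are assumptions, not code from the paper.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VLNode:
    """One constituent of a VL structure: a span c_ij with its grounded box b_ij."""
    i: int  # start word index (inclusive)
    j: int  # end word index (inclusive)
    box: Tuple[float, float, float, float]  # b_ij in R^4: (x1, y1, x2, y2)
    left: Optional["VLNode"] = None   # both children are None for a leaf,
    right: Optional["VLNode"] = None  # both are set for a binary non-terminal (CNF)

    def is_leaf(self) -> bool:
        return self.left is None and self.right is None

    def spans(self):
        """Yield all (i, j) spans in the tree, depth-first."""
        yield (self.i, self.j)
        if not self.is_leaf():
            yield from self.left.spans()
            yield from self.right.spans()

# A tiny tree for a 3-word caption "a dog runs": ((a dog) runs),
# with toy boxes standing in for grounded regions.
leaf = lambda k, b: VLNode(k, k, b)
tree = VLNode(0, 2, (0.0, 0.0, 1.0, 1.0),
              left=VLNode(0, 1, (0.1, 0.2, 0.5, 0.9),
                          left=leaf(0, (0.1, 0.2, 0.5, 0.9)),
                          right=leaf(1, (0.1, 0.2, 0.5, 0.9))),
              right=leaf(2, (0.0, 0.0, 1.0, 1.0)))
```

Every node, not just the noun-phrase leaves, carries a box, which is exactly what distinguishes the VL structure from a plain constituency tree.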
On the image side, the structure provides explainable scene understanding. Different from the flat structure typically used in phrase grounding (Wang et al., 2020a), each node in the hierarchical structure is associated with an image region. Different from scene parsing (Zhao et al., 2017), regions of different nodes are allowed to overlap. Task Formulation In unsupervised VL grammar induction, the goal is to induce phrase-structure grammars from only image-caption pairs, without tree structure annotations or phrase-region correspondence annotations for training2. Formally, we aim to learn a model M which takes an image I and a language description x as input and predicts the VL structure y, i.e., y = M(I, x). Note that, different from Wang et al. (2020a), who predict the corresponding regions for a given set of noun phrases, noun phrases in VL grammar induction are unknown and all spans in the VL structure are aligned to corresponding regions in the image. Evaluation Metrics Due to the lack of annotations for the VL structure, we indirectly assess our model with two derived tasks, one from each modality's perspective, i.e., language grammar induction and phrase grounding. Furthermore, we propose a new evaluation metric and conduct a frontal evaluation of the VL structure we obtain. Lateral Evaluation For language grammar induction, we use two widely-used metrics: the averaged corpus-level F1 and averaged sentence-level F1 numbers, along with the unbiased standard deviations, following Zhao & Titov (2020a). For visual grounding, we report the grounding accuracy. We consider a noun phrase correctly grounded if its predicted bounding box has at least 0.5 IoU (Intersection over Union) with the ground-truth location. The grounding accuracy (ACC) is the fraction of correctly grounded noun phrases (from a given set of noun phrases).
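The grounding metric can be made concrete in a few lines. The box format (x1, y1, x2, y2) and function names below are assumptions for illustration; the IoU-at-0.5 criterion itself is the one stated in the text.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predictions, ground_truths, threshold=0.5):
    """ACC: fraction of noun phrases whose predicted box reaches IoU >= threshold
    against the gold box (one predicted box per noun phrase)."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predictions, ground_truths))
    return hits / len(predictions)
```

For example, a perfectly predicted box scores IoU 1.0 and counts as a hit, while two unit-overlap boxes of area 4 each score 1/7 and count as a miss at the 0.5 threshold.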
Frontal Evaluation We propose a new metric, the critical concept recall rate (CCRR), to explicitly evaluate the VL structure. A critical concept is a noun phrase found in the visual grounding annotations. We say a critical concept is recalled when it is retrieved in the parsed constituency tree structure and correctly grounded in the image. CCRR is the recall rate over all critical concepts. 3 CONTRASTIVE LANGUAGE-IMAGE INSIDE-OUTSIDE RECURSIVE AUTOENCODER. In this section, we design a novel VL grammar induction method, CLIORA, to construct a shared constituency structure for paired vision-language inputs. We start by briefly introducing our base model, DIORA (Drozdov et al., 2019), in Section 3.1. Then we present our proposed CLIORA in detail, from modeling in Section 3.2, through inference in Section 3.3, to the objective and learning in Section 3.4. 3.1 BACKGROUND. Detailed Formulation of VL Structure Following Lafferty (2000); Drozdov et al. (2020), we adopt an indexing scheme for the constituency VL structure and use a two-dimensional n×n chart T to store the intermediate representations while computing the spans of the VL structure. Cell (i, j) in the chart contains all scores and vectors of span c_ij. For each 1 ≤ i < k < j ≤ n, span (i, j) can be decomposed into spans (i, k) and (k+1, j). Footnote 1: In a usual grammar induction setting, the constituent (span) corresponds to a sequence of words. Here we reuse the conventional name but expand it to a pair of a language span and an aligned visual region. Footnote 2: We do use a pre-trained object detector to obtain box regions, as is common in the phrase grounding literature. For a fully unsupervised setting, this could be replaced by a generic object proposal method (e.g., Uijlings et al. (2013)), combined with features trained with self-supervision (Jaiswal et al., 2021).
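The chart indexing can be sketched directly: a CKY-style chart visits spans by increasing length, and each span (i, j) enumerates the split points k that decompose it into (i, k) and (k+1, j). This is a generic chart sketch under the 1-based indexing of the text, not the authors' implementation.

```python
def chart_cells(n):
    """All spans (i, j) with 1 <= i <= j <= n, grouped by span length,
    in the order a CKY-style chart fills them (shorter spans first)."""
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            yield (i, i + length - 1)

def splits(i, j):
    """All binary decompositions of span (i, j) into (i, k) and (k+1, j)."""
    return [((i, k), (k + 1, j)) for k in range(i, j)]
```

A sentence of n words yields n(n+1)/2 chart cells, and a span of length L has L-1 possible splits; the inside pass aggregates over exactly these splits.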
Each span c_ij in the tree is associated with an inside score s^in_ij and an inside vector h^in_ij ∈ R^D computed bottom-up (the inside pass in the next paragraph), as well as an outside score s^out_ij and an outside vector h^out_ij ∈ R^D computed top-down (the outside pass), with D the feature dimension. The inside vector captures information about the inner content of the span, while the inside score assesses to what extent the span forms a phrase with complete semantics. Likewise, the outside score s^out_ij and outside vector h^out_ij represent the contextual cues outside span c_ij. DIORA The Deep Inside-Outside Recursive Autoencoder (DIORA) (Drozdov et al., 2019) induces a language grammar and produces a constituency tree parser in an encoder-decoder framework. Different from context-free grammars, DIORA mitigates the strong context-freeness assumption by computing each span's representation and composition score dependent on the context. DIORA operates like a masked language model, since it models the context of a missing word and then reconstructs this missing word using the context as a clue. Formally, DIORA encodes the input sentence x in the shape of a constituency tree. Since the ground-truth tree structure is not given, all possible valid trees are considered simultaneously, with weights, using dynamic programming similar to the inside-outside algorithm (Baker, 1979). In the bottom-up inside pass, the encoder recursively computes the inside vector h^in_ij and inside score s^in_ij for each constituent c_ij. The combined constituent is obtained by a weighted summation over all possible pairs split at k ∈ [i, j). Similarly, the decoder performs a top-down outside pass, recursively computing the outside score s^out_ij and outside vector h^out_ij with k ∈ (1, i-1] ∪ [j+1, n).
In this way, the bottom-most vectors in the outside pass, h^out_ii, encode the context of the entire sentence x except for the i-th word. During inference, the predicted tree with the maximum inside score is obtained with the CKY algorithm (Kasami, 1966; Younger, 1967).
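The CKY decoding step can be sketched with a minimal max-split dynamic program. For brevity this sketch uses a per-span score function and a max over splits, whereas DIORA/CLIORA's inside pass uses learned scores and a weighted sum during training; the function name and toy scores are assumptions.

```python
def cky_best_tree(n, span_score):
    """Return (score, tree) for the highest-scoring binary tree over words 0..n-1.
    span_score(i, j) -> float plays the role of a learned inside/compatibility score.
    Trees are nested tuples of (i, j) spans."""
    best = {}  # (i, j) -> (total score, tree)
    for i in range(n):
        best[(i, i)] = (span_score(i, i), (i, i))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # choose the split k that maximizes the children's combined score
            k = max(range(i, j), key=lambda k: best[(i, k)][0] + best[(k + 1, j)][0])
            score = span_score(i, j) + best[(i, k)][0] + best[(k + 1, j)][0]
            best[(i, j)] = (score, (best[(i, k)][1], best[(k + 1, j)][1]))
    return best[(0, n - 1)]

# Toy scores that reward grouping the first two of three words:
score, tree = cky_best_tree(3, lambda i, j: 1.0 if (i, j) == (0, 1) else 0.0)
# tree is (((0, 0), (1, 1)), (2, 2)), i.e. ((w0 w1) w2)
```

The same chart that stores inside vectors during training is reused here: decoding only needs the scalar scores, which is why inference reduces to standard CKY.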
This paper presents a new model for grammar induction for text, with help from the coupled images. The model was built on top of an existing unsupervised grammar induction model used for text without image information. The experimental results show the approach was effective. The work essentially demonstrates some effective ways of leveraging the additional image information for improving the grammar induction task. The paper also discussed some weaknesses of the approach and future work.
SP:deaee5e7a87bf430a6831dde8c2a2c84f62201ef
Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling
The paper proposed a new method, CLIORA, for unsupervised parsing and vision-language grounding. CLIORA is based on the DIORA model, but different from previous unsupervised parsing methods, CLIORA also induces an alignment between constituents and image regions. To train the model, the authors introduce a contrastive loss. Experimental results show that the proposed method outperforms baseline unsupervised parsing methods, and that it also induces meaningful alignments between image regions and constituents.
A composable autoencoder-based algorithm for accelerating numerical simulations
1 INTRODUCTION. Numerical solutions to partial differential equations (PDEs) depend on PDE conditions such as the geometry of the computational domain, boundary conditions, initial conditions and source terms. Commercial PDE solvers have shown tremendous success in accurately modeling PDEs for a wide range of applications. These solvers generalize across different PDE conditions but can be computationally slow. Moreover, their solutions are not reusable and must be recomputed from scratch every time the PDE conditions change. The idea of using machine learning (ML) with PDEs has been explored for several decades (Crutchfield & McNamara, 1987; Kevrekidis et al., 2003), but with recent developments in computing hardware and ML techniques, these efforts have grown immensely. Although ML approaches are computationally fast, they fall short of traditional PDE solvers with respect to accuracy and generalization to a wide range of PDE conditions. Most data-driven and physics-constrained approaches employ static inferencing strategies, where a mapping function is learnt between PDE solutions and the corresponding conditions. In many cases, PDE conditions are sparse and high-dimensional, and hence difficult to generalize over. Additionally, current ML approaches do not make use of key ideas from traditional PDE solvers such as domain decomposition, solver methods, numerical discretization, constraint equations, symmetry evaluations and tighter non-statistical evaluation metrics, which were established over several decades of research and development. In this work, we propose a novel ML approach that is motivated by such ideas and relies on dynamic inferencing strategies that afford the possibility of seamless coupling with traditional PDE solvers when necessary.
The proposed ML approach, CoAE-MLSim (Composable AutoEncoder Machine Learning Simulation), operates at the level of local subdomains, which consist of a group of pixels in 2D or voxels in 3D (for example, 8 or 16 in each spatial direction). As shown in Figure 1, CoAE-MLSim has three main components: A) learn solutions on local subdomains; B) learn the rules of how a group of local subdomains connect together to yield locally consistent solutions; and C) deploy an iterative algorithm to establish local consistency across all groups of subdomains in the entire computational domain. The solutions on subdomains are learnt as low-dimensional representations using solution autoencoders, while the rules between groups of subdomains, corresponding to flux conservation between PDE solutions, are learnt using flux conservation autoencoders. The autoencoders are trained a priori on local subdomains using very few PDE solution samples. During inference, the pretrained autoencoders are combined with iterative solution algorithms to determine PDE solutions for different PDEs over a wide range of sparse, high-dimensional PDE conditions. The solution strategy in the CoAE-MLSim approach is very similar to that of traditional PDE solvers; moreover, the iterative inferencing strategy allows for coupling with traditional PDE solvers to accelerate convergence and improve accuracy and generalizability. Significant contributions of this work: 1. The CoAE-MLSim approach combines traditional PDE solver strategies with ML techniques to accurately model numerical simulations. 2. Our approach operates on local subdomains and solves PDEs in a low-dimensional space. This enables generalization to arbitrary PDE conditions within a high-dimensional distribution. 3. Training CoAE-MLSim amounts to training the various autoencoders on local subdomains; hence, it contains very few parameters and requires less data. 4.
The iterative inferencing algorithm is completely unsupervised and allows for coupling with traditional PDE solvers. 5. Finally, we show generalization of our model to wide variations in sparse, high-dimensional PDE conditions using evaluation metrics consistent with commercial solvers. 2 RELATED WORKS. The use of ML for solving PDEs has gained tremendous traction in the past few years. Most of the literature has revolved around improving neural network architectures and optimization techniques to enable generalizable and accurate learning of PDE solutions. More recently, there has been a lot of focus on learning mesh-independent, infinite-dimensional operators with neural networks (NNs) (Bhattacharya et al., 2020; Anandkumar et al., 2020; Li et al., 2020b; Patel et al., 2021; Lu et al., 2021b; Li et al., 2020a). The neural operators are trained from high-fidelity solutions generated by traditional PDE solvers on a mesh of a specific resolution and do not require any knowledge of the PDE. From the perspective of mesh-independent learning, Battaglia et al. (2018) introduced a graph network architecture, which has proven to be effective in solving dynamical systems directly on unstructured computational meshes. Sanchez-Gonzalez et al. (2020) and Pfaff et al. (2020) use the graph network for robustly solving transient dynamics on arbitrary meshes and to accurately capture transient solution trajectories. For dynamical systems, the neural operators and mesh-based learning strategies have shown reasonable prediction capabilities; however, the generalizability of these methods to different PDE conditions remains to be seen. Another direction of research falls under the category of training neural networks with physics-constrained optimization. Research in this space involves constraining neural networks with additional physics-based losses introduced through loss re-engineering.
PDE-constrained optimization has been shown to improve the interpretability and accuracy of results. Raissi et al. (2019) and Raissi & Karniadakis (2018) introduced the framework of physics-informed neural networks (PINNs) to constrain neural networks with PDE derivatives computed using Automatic Differentiation (AD) (Baydin et al., 2018). In the past couple of years, the PINN framework has been extended to solve complicated PDEs representing complex physics (Jin et al., 2021; Mao et al., 2020; Rao et al., 2020; Wu et al., 2018; Qian et al., 2020; Dwivedi et al., 2021; Haghighat et al., 2021; Haghighat & Juanes, 2021; Nabian et al., 2021; Kharazmi et al., 2021; Cai et al., 2021a;b; Bode et al., 2021; Taghizadeh et al., 2021; Lu et al., 2021c; Shukla et al., 2021; Hennigh et al., 2020; Li et al., 2021). More recently, alternate approaches that use discretization techniques with higher-order derivatives and specialized numerical schemes to compute derivatives have been shown to provide better regularization for faster convergence (Ranade et al., 2021b; Gao et al., 2021; Wandel et al., 2020; He & Pathak, 2020). However, the use of optimization techniques to solve PDEs, although accurate, has proved to be extremely slow compared to traditional solvers. There is a large body of work related to training neural networks in tandem with differentiable PDE solvers in the loop to improve long-range time predictions. The use of differentiable solvers to provide accurate estimates of adjoints has been shown to improve learning and to provide better control of PDE solutions and transient system dynamics (Amos & Kolter, 2017; Um et al., 2020; de Avila Belbute-Peres et al., 2018; Toussaint et al., 2018; Wang et al., 2020; Holl et al., 2020; Portwood et al., 2019).
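The physics-based losses used in these constrained formulations can be illustrated on a toy 1D Poisson problem u'' = f. PINNs compute the derivatives with automatic differentiation; purely for a self-contained sketch, the residual below is instead discretized with a central finite difference on a uniform grid (the function name and test problem are illustrative assumptions, not from any of the cited works).

```python
def poisson_residual_loss(u, f, h):
    """Mean squared residual of the 1D Poisson equation u'' - f = 0,
    with u and f sampled on a uniform grid of spacing h.
    u'' is approximated at interior points by the central second difference."""
    res = [(u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2 - f[i]
           for i in range(1, len(u) - 1)]
    return sum(r * r for r in res) / len(res)

# Sanity check: u(x) = x^2 solves u'' = 2 exactly, and the central
# difference is exact for quadratics, so the loss vanishes (up to rounding).
h = 0.1
xs = [i * h for i in range(11)]
u_exact = [x * x for x in xs]
f_rhs = [2.0] * len(xs)
loss = poisson_residual_loss(u_exact, f_rhs, h)
```

In a PINN this scalar would be added to the data loss and minimized over the network weights; the same residual idea underlies the discretization-based variants cited above.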
These frameworks are useful in providing rapid feedback to neural networks to improve convergence stability and enable efficient exploration of solution space . Bar-Sinai et al . ( 2019 ) , Xue et al . ( 2020 ) and Kochkov et al . ( 2021 ) train neural networks in conjunction with a differentiable solver to learn high-fidelity PDE discretization on coarse grids . Singh et al . ( 2017 ) and Holland et al . ( 2019 ) use differentiable solvers to tune model parameters in a live simulation . However , these methods require a differentiable solver in the training loop and commercial solvers that solve industrial scale problems may not be differentiable . All of the methods discussed above try to learn the learn the dynamics of the PDE in the solution space on entire computational domain , which can be a challenging task as domains can be high-dimensional with complicated physics . For many problems , local learning from smaller restricted domains have proved to accelerate learning of neural networks and provide better accuracy and generalization . Lu et al . ( 2021a ) and Wang et al . ( 2021 ) learn on localized domains but infer on larger computational domains using a stitching process . Bar-Sinai et al . ( 2019 ) and Kochkov et al . ( 2021 ) learn coefficients of numerical discretization schemes from high fidelity data , which is sub-sampled on coarse grids . Beatson et al . ( 2020 ) learns surrogate models for smaller components to allow for cheaper simulations . Other methods compress PDE solutions on to lower-dimensional manifolds . This has shown to improve accuracy and generalization capability of neural networks ( Wiewel et al. , 2020 ; Maulik et al. , 2020 ; Kim et al. , 2019 ; Murata et al. , 2020 ; Fukami et al. , 2020 ; Ranade et al. , 2021a ) . The ideas proposed in this work share the same goal with the aforementioned literature to accelerate numerical simulations without compromising on accuracy and generalizability . 
Our work draws inspiration from techniques such as local and latent space learning to solve PDEs in both transient and steady-state settings . More importantly , the approach proposed here relies heavily on unsupervised techniques such as autoencoders to solve PDEs Ranade et al . ( 2021a ; b ) ; Maleki et al . ( 2021 ) and mainly focus on leveraging knowledge from traditional solvers to develop a robust , stable , accurate and generalizable machine learning system . 3 COAE-MLSIM MODEL DETAILS . 3.1 SIMILARITIES WITH TRADITIONAL PDE SOLVERS . Consider a set of coupled PDEs with n solution variables . For the sake of notation simplicity , we take n = 2 , such that u ( x , y , z , t ) and v ( x , y , z , t ) are defined on a computational domain Ω with boundary conditions specified on the boundary of the computational domain , Ωb . It should be noted that extension to more solution variables is trivial . The coupled PDEs are defined as follows : L1 ( u , v ) − F1 = 0 ; L2 ( u , v ) − F2 = 0 ( 1 ) where , L1 , L2 denote PDE operators and F1 , F2 represent PDE source terms . The PDE operators can vary for different PDEs . For example , in a non-linear PDE such as the unsteady , incompressible Navier-Stokes equation the operator , L = ∂∂t + ~a.~∇− ~∇.~∇ Traditional PDE solvers solve PDEs given in Eq . 1 by representing solutions variables , u , v , and their linear and non-linear spatio-temporal derivatives on a discretized computational domain . The numerical approximations on discrete domains are computed on a finite number of computational elements known as a mesh , using techniques such as Finite Difference Method ( FDM ) , Finite Volume Method ( FVM ) and Finite Element Method ( FEM ) . These solvers use iterative solution algorithms to conserve fluxes between neighboring computational elements and determine consistent PDE solutions over the entire domain at convergence . 
The CoAE-MLSim approach is designed to perform similar operations but at the level of subdomains with assistance from ML techniques .
The paper proposes CoAE-MLSim, a method that learns from relatively few samples of PDE solutions and solves PDEs. CoAE-MLSim uses the idea of domain decomposition: it first learns the solution on local subdomains using autoencoders, and then couples these sub-solutions with an iterative algorithm. Numerical experiments are performed to test the efficiency of the method.
A composable autoencoder-based algorithm for accelerating numerical simulations
1 INTRODUCTION . Numerical solutions to partial differential equations (PDEs) depend on PDE conditions such as the geometry of the computational domain, boundary conditions, initial conditions and source terms. Commercial PDE solvers have shown tremendous success in accurately modeling PDEs for a wide range of applications. These solvers generalize across different PDE conditions but can be computationally slow. Moreover, their solutions are not reusable and must be recomputed from scratch every time the PDE conditions change. The idea of using Machine Learning (ML) with PDEs has been explored for several decades (Crutchfield & McNamara, 1987; Kevrekidis et al., 2003), but with recent developments in computing hardware and ML techniques these efforts have grown immensely. Although ML approaches are computationally fast, they fall short of traditional PDE solvers with respect to accuracy and generalization to a wide range of PDE conditions. Most data-driven and physics-constrained approaches employ static-inferencing strategies, where a mapping function is learnt between PDE solutions and the corresponding conditions. In many cases, PDE conditions are sparse and high-dimensional, and hence difficult to generalize over. Additionally, current ML approaches do not make use of key ideas from traditional PDE solvers such as domain decomposition, solver methods, numerical discretization, constraint equations, symmetry evaluations and tighter, non-statistical evaluation metrics, which were established over several decades of research and development. In this work, we propose a novel ML approach that is motivated by such ideas and relies on dynamic inferencing strategies that afford the possibility of seamless coupling with traditional PDE solvers when necessary.
The proposed ML approach, CoAE-MLSim (Composable AutoEncoder Machine Learning Simulation), operates at the level of local subdomains, which consist of a group of pixels in 2D or voxels in 3D (for example 8 or 16 in each spatial direction). As shown in Figure 1, CoAE-MLSim has 3 main components: A) learn solutions on local subdomains, B) learn the rules of how a group of local subdomains connect together to yield locally consistent solutions, and C) deploy an iterative algorithm to establish local consistency across all groups of subdomains in the entire computational domain. The solutions on subdomains are learnt as low-dimensional representations using solution autoencoders, while the rules between groups of subdomains, corresponding to flux conservation between PDE solutions, are learnt using flux conservation autoencoders. The autoencoders are trained a priori on local subdomains using very few PDE solution samples. During inference, the pretrained autoencoders are combined with iterative solution algorithms to determine PDE solutions for different PDEs over a wide range of sparse, high-dimensional PDE conditions. The solution strategy in the CoAE-MLSim approach is very similar to that of traditional PDE solvers; moreover, the iterative inferencing strategy allows for coupling with traditional PDE solvers to accelerate convergence and improve accuracy and generalizability. Significant contributions of this work: 1. The CoAE-MLSim approach combines traditional PDE solver strategies with ML techniques to accurately model numerical simulations. 2. Our approach operates on local subdomains and solves PDEs in a low-dimensional space. This enables generalization to arbitrary PDE conditions within a high-dimensional distribution. 3. Training the CoAE-MLSim approach corresponds to training the various autoencoders on local subdomains; hence, it contains very few parameters and requires little data. 4.
The iterative inferencing algorithm is completely unsupervised and allows for coupling with traditional PDE solvers. 5. Finally, we show generalization of our model over wide variations in sparse, high-dimensional PDE conditions using evaluation metrics in line with commercial solvers. 2 RELATED WORKS . The use of ML for solving PDEs has gained tremendous traction in the past few years. Most of the literature has revolved around improving neural network architectures and optimization techniques to enable generalizable and accurate learning of PDE solutions. More recently, there has been a lot of focus on learning mesh-independent, infinite-dimensional operators with neural networks (NNs) (Bhattacharya et al., 2020; Anandkumar et al., 2020; Li et al., 2020b; Patel et al., 2021; Lu et al., 2021b; Li et al., 2020a). The neural operators are trained from high-fidelity solutions generated by traditional PDE solvers on a mesh of specific resolution and do not require any knowledge of the PDE. From the perspective of mesh-independent learning, Battaglia et al. (2018) introduced a graph network architecture, which has proven to be effective in solving dynamical systems directly on unstructured computational meshes. Sanchez-Gonzalez et al. (2020) and Pfaff et al. (2020) use the graph network for robustly solving transient dynamics on arbitrary meshes and to accurately capture the transient solution trajectories. For dynamical systems, the neural operators and mesh-based learning strategies have shown reasonable prediction capabilities; however, the generalizability of these methods to different PDE conditions remains to be seen. Another direction of research falls under the category of training neural networks with physics-constrained optimization. Research in this space involves constraining neural networks with additional physics-based losses introduced through loss re-engineering.
PDE-constrained optimization has been shown to improve the interpretability and accuracy of results. Raissi et al. (2019) and Raissi & Karniadakis (2018) introduced the framework of physics-informed neural networks (PINNs) to constrain neural networks with PDE derivatives computed using Automatic Differentiation (AD) (Baydin et al., 2018). In the past couple of years, the PINN framework has been extended to solve complicated PDEs representing complex physics (Jin et al., 2021; Mao et al., 2020; Rao et al., 2020; Wu et al., 2018; Qian et al., 2020; Dwivedi et al., 2021; Haghighat et al., 2021; Haghighat & Juanes, 2021; Nabian et al., 2021; Kharazmi et al., 2021; Cai et al., 2021a;b; Bode et al., 2021; Taghizadeh et al., 2021; Lu et al., 2021c; Shukla et al., 2021; Hennigh et al., 2020; Li et al., 2021). More recently, alternate approaches that use discretization techniques with higher-order derivatives and specialized numerical schemes to compute derivatives have been shown to provide better regularization for faster convergence (Ranade et al., 2021b; Gao et al., 2021; Wandel et al., 2020; He & Pathak, 2020). However, the use of optimization techniques to solve PDEs, although accurate, has proved to be extremely slow compared to traditional solvers. There is a large body of work related to training neural networks in tandem with differentiable PDE solvers in the loop to improve long-range time predictions. The use of differentiable solvers to provide accurate estimates of adjoints has been shown to improve learning and to provide better control of PDE solutions and transient system dynamics (Amos & Kolter, 2017; Um et al., 2020; de Avila Belbute-Peres et al., 2018; Toussaint et al., 2018; Wang et al., 2020; Holl et al., 2020; Portwood et al., 2019).
These frameworks are useful in providing rapid feedback to neural networks to improve convergence stability and enable efficient exploration of the solution space. Bar-Sinai et al. (2019), Xue et al. (2020) and Kochkov et al. (2021) train neural networks in conjunction with a differentiable solver to learn high-fidelity PDE discretizations on coarse grids. Singh et al. (2017) and Holland et al. (2019) use differentiable solvers to tune model parameters in a live simulation. However, these methods require a differentiable solver in the training loop, and commercial solvers that solve industrial-scale problems may not be differentiable. All of the methods discussed above try to learn the dynamics of the PDE in the solution space over the entire computational domain, which can be a challenging task as domains can be high-dimensional with complicated physics. For many problems, local learning from smaller restricted domains has proved to accelerate the learning of neural networks and to provide better accuracy and generalization. Lu et al. (2021a) and Wang et al. (2021) learn on localized domains but infer on larger computational domains using a stitching process. Bar-Sinai et al. (2019) and Kochkov et al. (2021) learn coefficients of numerical discretization schemes from high-fidelity data, which is sub-sampled on coarse grids. Beatson et al. (2020) learn surrogate models for smaller components to allow for cheaper simulations. Other methods compress PDE solutions onto lower-dimensional manifolds, which has been shown to improve the accuracy and generalization capability of neural networks (Wiewel et al., 2020; Maulik et al., 2020; Kim et al., 2019; Murata et al., 2020; Fukami et al., 2020; Ranade et al., 2021a). The ideas proposed in this work share the same goal as the aforementioned literature: to accelerate numerical simulations without compromising on accuracy and generalizability.
Our work draws inspiration from techniques such as local and latent-space learning to solve PDEs in both transient and steady-state settings. More importantly, the approach proposed here relies heavily on unsupervised techniques such as autoencoders to solve PDEs (Ranade et al., 2021a;b; Maleki et al., 2021) and mainly focuses on leveraging knowledge from traditional solvers to develop a robust, stable, accurate and generalizable machine learning system. 3 COAE-MLSIM MODEL DETAILS . 3.1 SIMILARITIES WITH TRADITIONAL PDE SOLVERS . Consider a set of coupled PDEs with n solution variables. For the sake of notational simplicity, we take n = 2, such that u(x, y, z, t) and v(x, y, z, t) are defined on a computational domain Ω with boundary conditions specified on the boundary of the computational domain, Ωb. It should be noted that the extension to more solution variables is trivial. The coupled PDEs are defined as follows: L1(u, v) − F1 = 0 ; L2(u, v) − F2 = 0 (1) where L1, L2 denote PDE operators and F1, F2 represent PDE source terms. The PDE operators can vary for different PDEs. For example, in a non-linear PDE such as the unsteady, incompressible Navier-Stokes equations, the operator is L = ∂/∂t + a⃗·∇ − ∇·∇. Traditional PDE solvers solve PDEs given in Eq. 1 by representing the solution variables u, v and their linear and non-linear spatio-temporal derivatives on a discretized computational domain. The numerical approximations on discrete domains are computed on a finite number of computational elements known as a mesh, using techniques such as the Finite Difference Method (FDM), Finite Volume Method (FVM) and Finite Element Method (FEM). These solvers use iterative solution algorithms to conserve fluxes between neighboring computational elements and determine consistent PDE solutions over the entire domain at convergence.
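The iterative, flux-conserving solution process described above can be illustrated with a minimal finite-difference sketch (not from the paper): Jacobi iteration for a 2D Poisson problem, where each cell is repeatedly updated from its neighbors until the field is self-consistent over the whole mesh. Grid size, source term and tolerances are illustrative choices.

```python
import numpy as np

def solve_poisson_jacobi(f, h, n_iters=20000, tol=1e-9):
    """Jacobi iteration for -laplacian(u) = f on the unit square with
    homogeneous Dirichlet boundaries (u = 0 on the domain boundary)."""
    u = np.zeros_like(f)
    for _ in range(n_iters):
        u_new = u.copy()
        # 5-point stencil: each interior cell is updated from its neighbors,
        # which is how consistency is propagated across the mesh.
        u_new[1:-1, 1:-1] = 0.25 * (
            u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            + h * h * f[1:-1, 1:-1]
        )
        if np.max(np.abs(u_new - u)) < tol:   # converged
            return u_new
        u = u_new
    return u

n = 33
h = 1.0 / (n - 1)
f = np.ones((n, n))                 # constant source term
u = solve_poisson_jacobi(f, h)
print(u[n // 2, n // 2])            # solution peaks near the domain center
```

Jacobi is the simplest member of the family of iterative schemes the text refers to; production solvers use accelerated variants (Gauss-Seidel, multigrid, Krylov methods), but the structure, local neighbor updates repeated until global consistency, is the same one CoAE-MLSim mimics at the subdomain level.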
The CoAE-MLSim approach is designed to perform similar operations but at the level of subdomains with assistance from ML techniques .
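The subdomain-compression idea (component A above) can be sketched minimally as follows. Here a linear autoencoder (PCA via SVD) stands in for the paper's nonlinear solution autoencoders, low-pass-filtered noise stands in for smooth PDE solution fields, and the field size, subdomain size and latent dimension are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_field(n=64):
    # Low-pass-filtered noise as a stand-in for a smooth PDE solution field.
    f = np.fft.rfft2(rng.normal(size=(n, n)))
    mask = np.zeros_like(f)
    mask[:3, :3] = 1.0
    mask[-2:, :3] = 1.0        # keep the conjugate low frequencies as well
    return np.fft.irfft2(f * mask, s=(n, n))

# Cut training fields into 8x8 subdomains, flattened to 64-vectors.
patches = np.array([
    fld[i:i + 8, j:j + 8].ravel()
    for fld in (smooth_field() for _ in range(20))
    for i in range(0, 64, 8)
    for j in range(0, 64, 8)
])

# Linear "autoencoder": encoder/decoder built from the top-k principal
# directions of the subdomain solutions; k is the latent dimension.
k = 10
mean = patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
encode = lambda p: (p - mean) @ vt[:k].T     # 64 -> k latent code
decode = lambda z: z @ vt[:k] + mean         # k -> 64 reconstruction

rel_err = np.linalg.norm(decode(encode(patches)) - patches) / np.linalg.norm(patches)
print(rel_err)   # small: smooth subdomain solutions compress well
```

The point of the sketch is only that local solution patches of smooth fields live near a low-dimensional manifold, which is what makes solving in the latent space (as CoAE-MLSim does, with nonlinear autoencoders) plausible.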
This paper proposes a new ML approach called CoAE-MLSim that is a faster alternative to PDE solvers. Compared to previous ML work on this problem, it aims to be more accurate and generalize better across PDE conditions. They also aim to require fewer PDE solutions to train the model.
Frame Averaging for Invariant and Equivariant Network Design
1 INTRODUCTION . Many tasks in machine learning (ML) require learning functions that are invariant or equivariant with respect to symmetric transformations of the data. For example, graph classification is invariant to a permutation of its nodes, while node prediction tasks are equivariant to node permutations. Consequently, it is important to design expressive neural network architectures that are by construction invariant or equivariant for scalable and efficient learning. This recipe has proven to be successful for many ML tasks including image classification and segmentation (LeCun et al., 1998; Long et al., 2015), set and point-cloud learning (Zaheer et al., 2017; Qi et al., 2017a), and graph learning (Kipf & Welling, 2016; Gilmer et al., 2017; Battaglia et al., 2018). Nevertheless, for some important instances of symmetries, the design of invariant and/or equivariant networks is either elusive (Thomas et al., 2018; Dym & Maron, 2020), computationally expensive, or lacking in expressivity (Xu et al., 2018a; Morris et al., 2019; Maron et al., 2019; Murphy et al., 2019). In this paper, we propose a new general-purpose framework, called Frame Averaging (FA), that can systematically facilitate expressive invariant and equivariant networks with respect to a broad class of groups. At the heart of our framework, we build on the basic fact that arbitrary functions φ : V → R, Φ : V → W, where V, W are some vector spaces, can be made invariant or equivariant by symmetrization, that is, averaging over the group (Yarotsky, 2021; Murphy et al., 2018), i.e., ψ(X) = (1/|G|) Σ_{g∈G} φ(g⁻¹·X) or Ψ(X) = (1/|G|) Σ_{g∈G} g·Φ(g⁻¹·X), (1) where G = {g} denotes the group, ψ : V → R is invariant and Ψ : V → W is equivariant with respect to G. Furthermore, since invariant and equivariant functions are fixed under group averaging, i.e.
, ψ = φ for invariant φ and Ψ = Φ for equivariant Φ, the above scheme often leads to universal (i.e., maximally expressive) models (Yarotsky, 2021). However, the challenge with equation 1 is that when the cardinality of G is large (e.g., combinatorial groups such as permutations) or infinite (e.g., continuous groups such as rotations), exact averaging is intractable. In such cases, we are forced to approximate the sum via heuristics or Monte Carlo (MC), thereby sacrificing the exact invariance/equivariance property for computational efficiency; e.g., Murphy et al. (2018; 2019) define heuristic averaging strategies for approximate permutation invariance in GNNs; similarly, Hu et al. (2021) and Shuaibi et al. (2021) use MC averaging for approximate rotation equivariance in GNNs. The key observation of the current paper is that the group average in equation 1 can be replaced with an average over a carefully selected subset F(X) ⊂ G while retaining both exact invariance/equivariance and expressive power. Therefore, if F can be chosen so that the cardinality |F(X)| is mostly small, averaging over F(X) results in a model that is both expressive and efficient. We call the set-valued function F : V → 2^G a frame, and show that it can successfully replace full group averaging if it satisfies a set equivariance property. We name this framework Frame Averaging (FA), and it serves as the basis for the design of invariant/equivariant networks in this paper. We instantiate the FA framework by considering different choices of symmetry groups G, their actions on data spaces V, W (manifested by choices of group representations), and the backbone architectures (or parts thereof) φ, Φ we want to make invariant/equivariant to G. We consider: (i) MultiLayer Perceptrons (MLPs) and Graph Neural Networks (GNNs) with node identification (Murphy et al.
, 2019; Loukas, 2020) adapted to permutation-invariant graph learning; (ii) Message-Passing GNNs (Gilmer et al., 2017) adapted to be invariant/equivariant to Euclidean motions, E(d); (iii) set networks, DeepSets and PointNet (Zaheer et al., 2017; Qi et al., 2017a), adapted to be equivariant or locally equivariant to E(d); (iv) a point cloud network, DGCNN (Wang et al., 2018), adapted to be equivariant to E(d). Theoretically, we prove that the FA framework maintains the expressive power of its original backbone architecture, which leads to some interesting corollaries: First, (i) results in invariant universal graph learning models; (ii) is an E(d) invariant/equivariant GNN that maintains the power of message passing (Xu et al., 2018a; Morris et al., 2019); and (iii), (iv) furnish universal permutation and E(d) invariant/equivariant models. We note that both the constructions and the proofs are arguably considerably simpler than the existing alternative constructions and proofs for this type of symmetry (Thomas et al., 2018; Fuchs et al., 2020; Dym & Maron, 2020). We experimented with FA on different tasks involving symmetries, including point-cloud normal estimation, beyond-2-Weisfeiler-Lehman graph separation, and n-body dynamics prediction, reaching state-of-the-art performance in all. 2 FRAME AVERAGING . In this section we introduce the FA approach and theory in a generic formulation, while in the next section we show the power of this general approach by instantiating it on (seemingly different) problems of interest. 2.1 FRAME AVERAGING FOR FUNCTION SYMMETRIZATION . Let φ : V → R and Φ : V → W be some arbitrary functions, where V, W are normed linear spaces with norms ∥·∥V, ∥·∥W, respectively. For example, φ, Φ can be thought of as neural networks. We consider a group G = {g} that describes some symmetry we want to incorporate into φ, Φ.
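For a small finite group, the symmetrization in equation 1 can be computed exactly by enumerating the group. A minimal sketch with G = S_3 acting on R^3 by permuting coordinates; the base function φ is an arbitrary, non-symmetric choice of ours:

```python
import numpy as np
from itertools import permutations

def phi(x):
    # Arbitrary backbone function; deliberately not symmetric in its inputs.
    return x[0] + 2.0 * x[1] ** 2 + 3.0 * x[2] ** 3

def psi(x):
    # Equation 1: average phi over all |G| = 6 permutations of the inputs.
    perms = list(permutations(range(len(x))))
    return sum(phi(x[list(p)]) for p in perms) / len(perms)

x = np.array([0.3, -1.2, 2.0])
gx = x[[2, 0, 1]]                    # act with a group element g on x
print(np.isclose(phi(x), phi(gx)))   # False: phi is not invariant
print(np.isclose(psi(x), psi(gx)))   # True: psi is S_3-invariant
```

This brute-force average is exactly what becomes intractable for large or infinite G, which is the gap the frame construction below is designed to close.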
The way the symmetries g ∈ G are applied to vectors in V, W is described by the group's representations ρ1 : G → GL(V) and ρ2 : G → GL(W), where GL(V) is the space of invertible linear maps V → V (automorphisms). A representation ρi preserves the group structure by satisfying ρi(gh) = ρi(g)ρi(h) for all g, h ∈ G (see e.g., Fulton & Harris (2013)). As customary, we will sometimes refer to the linear spaces V, W as representations. Our goal is to make φ into an invariant function, namely satisfying φ(ρ1(g)X) = φ(X) for all g ∈ G and X ∈ V; and Φ into an equivariant function, namely Φ(ρ1(g)X) = ρ2(g)Φ(X) for all g ∈ G and X ∈ V. We will do this by averaging over group elements, but instead of averaging over the entire group every time (as in equation 1) we will average over a subset of the group elements called a frame. Definition 1. A frame is defined as a set-valued function F : V → 2^G ∖ ∅. 1. A frame is G-equivariant if F(ρ1(g)X) = gF(X), ∀X ∈ V, g ∈ G, (2) where, as usual, gF(X) = {gh | h ∈ F(X)}, and the equality in equation 2 should be understood as an equality of sets. 2. A frame is bounded over a domain K ⊂ V if there exists a constant c > 0 so that ∥ρ2(g)∥op ≤ c for all g ∈ F(X) and all X ∈ K, where ∥·∥op denotes the induced operator norm over W. Figure 1 provides an illustration. How are equivariant frames useful? Consider a scenario where an equivariant frame is easy to compute and, furthermore, its cardinality |F(X)| is not too large. Then averaging over the frame, denoted ⟨·⟩F and defined by ⟨φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} φ(ρ1(g)⁻¹X) (3) and ⟨Φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} ρ2(g)Φ(ρ1(g)⁻¹X), (4) provides the required function symmetrization. In Appendix A.1 we prove: Theorem 1 (Frame Averaging). Let F be a G-equivariant frame, and φ : V → R, Φ : V → W some functions.
Then ⟨φ⟩F is G-invariant, while ⟨Φ⟩F is G-equivariant. Several comments are in order: First, the invariant case (equation 3) is a particular case of the equivariant case (equation 4) under the choice of W = R and the trivial representation ρ2(g) ≡ 1. Second, in this paper we only consider X and frame choices F for which F(X) are finite sets. Nevertheless, treating the infinite case is an important future research direction. Third, a trivial choice of an equivariant frame is F(X) ≡ G, that is, taking the frame to be the entire group for all X ∈ V (for infinite but compact G, the sum in the FA in this case can be replaced with a Haar integral). This choice can readily be checked to be equivariant, and it turns the FA equations 3, 4 into the standard group averaging operators of equation 1. The problem with this choice, however, is that it often results in an intractable or challenging computation, e.g., when the group is large or infinite. In contrast, as we show below, in some useful cases one can compute a frame of manageable size and use it to build invariant or equivariant operators in a principled way. Let us provide a simple example of Frame Averaging: consider V = R^n, W = R, and G = R with addition as the group action. We choose the group actions1 in this case to be ρ1(a)x = x + a1 and ρ2(a)b = b + a, where a, b ∈ R, x ∈ R^n, and 1 ∈ R^n is the vector of all ones. We can define the frame in this case using the averaging operator F(x) = {(1/n)1^T x} ⊂ G = R. Note that in this case the frame contains only one element of the group; in other cases finding such a small frame is hard or even impossible. One can check that this frame is equivariant per Definition 1. The FA of φ : R^n → R would be ⟨φ⟩F(x) = φ(x − (1/n)(1^T x)1) in the invariant case, and ⟨φ⟩F(x) = φ(x − (1/n)(1^T x)1) + (1/n)1^T x in the equivariant case. Incorporating G as a second symmetry.
An important use case of frame averaging is when the backbones φ, Φ are already invariant/equivariant w.r.t. some symmetry group H and our goal is to make them invariant/equivariant to H × G. For example, say we want to add G = E(3) equivariance to permutation-invariant set or graph functions, i.e., H = Sn. We will provide sufficient conditions for the FA to provide this desired invariance/equivariance. First, let us assume H is acting on V and W by the representations τ1 : H → GL(V) and τ2 : H → GL(W), respectively. Assume φ is H-invariant and Φ is H-equivariant. We say that the representations ρ1 and τ1 commute if ρ1(g)τ1(h)X = τ1(h)ρ1(g)X for all g ∈ G, h ∈ H, and X ∈ V. If ρ1 and τ1 commute then the map γ1 : H × G → GL(V) defined by γ1(h, g) = τ1(h)ρ1(g) is a representation of the group H × G. Second, we need the frame F(X) to be invariant to H, that is, F(τ1(h)X) = F(X). We show a generalization of Theorem 1: Theorem 2 (Frame averaging with a second symmetry). Assume F is H-invariant and G-equivariant. Then: 1. If φ : V → R is H-invariant and ρ1, τ1 commute, then ⟨φ⟩F is G × H invariant. 2. If Φ : V → W is H-equivariant and ρi, τi, i = 1, 2, commute, then ⟨Φ⟩F is G × H equivariant. 1Note that since these are affine maps they are technically not representations, but they have an equivalent representation using homogeneous coordinates. Therefore, FA is also valid with affine actions as used here. Right actions. Above we used left actions in the definition of equivariance. There are other flavors of equivariance, e.g., when one of the actions is a right action. For example, if g multiplies F(X) from the right, then equivariance takes the form F(ρ1(g)X) = F(X)g⁻¹, ∀X ∈ V, g ∈ G, (5) and accordingly ⟨φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} φ(ρ1(g)X) and ⟨Φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} ρ2(g)⁻¹Φ(ρ1(g)X) (6) are G-invariant and G-equivariant, respectively.
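The running example from the text, G = (R, +) acting by shifts with the one-element frame F(x) = {(1/n)1^T x}, also illustrates Theorem 2: the mean is permutation-invariant, so frame-averaging a permutation-invariant backbone yields invariance to H × G = S_n × R. A minimal sketch; the backbone φ is an arbitrary choice of ours:

```python
import numpy as np

def fa_invariant(phi, x):
    # Frame averaging with the single-element frame F(x) = {mean(x)}:
    # <phi>_F(x) = phi(x - mean(x) * 1), cf. equation 3.
    return phi(x - x.mean())

def fa_equivariant(phi, x):
    # Equivariant version: <phi>_F(x) = phi(x - mean(x) * 1) + mean(x).
    return phi(x - x.mean()) + x.mean()

phi = lambda x: float(np.max(x ** 2))    # permutation-invariant backbone

x = np.array([1.0, -2.0, 4.0])
a = 3.7                                  # a translation g in G = (R, +)
perm = [2, 0, 1]                         # a permutation h in H = S_3

# Invariant FA is unchanged under a joint permutation and shift of x.
print(np.isclose(fa_invariant(phi, x), fa_invariant(phi, x[perm] + a)))
# Equivariant FA shifts its output by exactly a when x is shifted by a.
print(np.isclose(fa_equivariant(phi, x + a) - fa_equivariant(phi, x), a))
```

Note that no averaging loop is needed here because the frame has a single element; this is the best case the frame construction aims for.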
Efficient calculation of invariant frame averaging. There could be instances of the FA framework (indeed we discuss such a case later) where |F(X)| is still too large to evaluate equations 3, 4. In the invariant case, there is a more efficient form of FA that can potentially be applied. To show it, let us start by defining the subgroup of symmetries of X, i.e., its stabilizer. The stabilizer of an element X ∈ V is a subgroup of G defined by GX = {g ∈ G | ρ1(g)X = X}. GX naturally induces an equivalence relation ∼ on F(X), with g ∼ h ⟺ hg⁻¹ ∈ GX. The equivalence classes (orbits) are [g] = {h ∈ F(X) | g ∼ h} = GXg ⊂ F(X), for g ∈ F(X), and the quotient set is denoted F(X)/GX. Theorem 3. An equivariant frame F(X) is a disjoint union of equal-size orbits, [g] ∈ F(X)/GX. The proof is in A.3. The first immediate consequence of Theorem 3 is that the cardinality of F(X) is at least that of the stabilizer (intuitively, the inner symmetries) of X, namely |F(X)| ≥ |GX|. Therefore, there could be cases, such as when X describes a symmetric graph, where |F(X)| could be too large to average over. A remedy comes from the following observation: for every h ∈ [g], we have that h = rg with r ∈ GX, and φ(ρ1(h)⁻¹X) = φ(ρ1(g)⁻¹ρ1(r)⁻¹X) = φ(ρ1(g)⁻¹X), since also r⁻¹ ∈ GX. Therefore the summands in equation 3 are constant over orbits, and we get ⟨φ⟩F(X) = (1/mF) Σ_{[g]∈F(X)/GX} φ(ρ1(g)⁻¹X), (7) where mF = |F(X)/GX| = |F(X)|/|GX|. This representation of invariant FA requires only mF = |F(X)|/|GX| evaluations, compared to |F(X)| in the original FA in equation 3. Approximation of invariant frame averaging. Unfortunately, enumerating F(X)/GX could be challenging in some cases. Nevertheless, equation 7 is still very useful: it turns out we can easily draw a random element of F(X)/GX with uniform probability.
This is an immediate application of the equal orbit sizes in Theorem 3: Corollary 1. Let F(X) be an equivariant frame, and let g ∈ F(X) be a uniform random sample. Then [g] ∈ F(X)/GX is also uniform. Therefore, an efficient approximation strategy is averaging over uniform samples gi ∈ F(X), i ∈ [k]: ⟪φ⟫F(X) = (1/k) Σ_{i=1}^{k} φ(ρ1(gi)⁻¹X). (8) This approximation is especially useful, compared to the full-blown FA, when mF = |F(X)|/|GX| is small, i.e., when |GX| is large, or X has many symmetries. Intuitively, the smaller mF, the better the approximation in equation 8. A partial explanation of this phenomenon is given in Appendix A.4, while an empirical validation is provided in Section 5.2.
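The sampled estimator in equation 8 can be sketched in the degenerate case where the frame is the whole group, F(X) ≡ G = S_n, so that the exact average has a closed form to compare against. The backbone φ is an arbitrary permutation-sensitive choice of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Arbitrary, permutation-sensitive backbone function: phi(x) = sum_i i*x_i.
    w = np.arange(1, len(x) + 1)
    return float(w @ x)

def sampled_fa(x, k):
    # Equation 8: average phi over k uniformly sampled frame elements
    # (here the frame is all of S_n, so samples are random permutations).
    return float(np.mean([phi(x[rng.permutation(len(x))]) for _ in range(k)]))

x = rng.normal(size=6)
# Exact full-group average: E_pi[sum_i w_i x_pi(i)] = (sum_i w_i) * mean(x).
exact = float(np.arange(1, 7).sum() * x.mean())
print(abs(sampled_fa(x, 4000) - exact))   # estimation error shrinks as k grows
```

The estimator is unbiased and its error decays like 1/sqrt(k); Corollary 1 says the same uniform-sampling trick remains valid when sampling frame elements modulo the stabilizer GX.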
The paper proposes to make any neural network invariant or equivariant by symmetrizing over a subset of the group rather than over the whole group. If the subset selection F(X), which depends on the input X, is equivariant (gF(X) = F(gX)), then the symmetrization is equivariant. The authors furthermore prove: 1) when interested in invariant prediction, the subset can be chosen in the quotient F(X)/G_X, where G_X is the stabilizer subgroup of X; 2) when symmetrizing with a random subsample of F(X), the probability that a particular subsample deviates from symmetrizing with all of F(X) by less than some epsilon is bounded below; 3) when symmetrizing a universal model, the resulting model class is universal in the class of equivariant functions.
Frame Averaging for Invariant and Equivariant Network Design
1 INTRODUCTION . Many tasks in machine learning ( ML ) require learning functions that are invariant or equivariant with respect to symmetric transformations of the data . For example , graph classification is invariant to a permutation of its nodes , while node prediction tasks are equivariant to node permutations . Consequently , it is important to design expressive neural network architectures that are by construction invariant or equivariant for scalable and efficient learning . This recipe has proven to be successful for many ML tasks including image classification and segmentation ( LeCun et al. , 1998 ; Long et al. , 2015 ) , set and point-cloud learning ( Zaheer et al. , 2017 ; Qi et al. , 2017a ) , and graph learning ( Kipf & Welling , 2016 ; Gilmer et al. , 2017 ; Battaglia et al. , 2018 ) . Nevertheless , for some important instances of symmetries , the design of invariant and/or equivariant networks is either illusive ( Thomas et al. , 2018 ; Dym & Maron , 2020 ) , computationally expensive or lacking in expressivity ( Xu et al. , 2018a ; Morris et al. , 2019 ; Maron et al. , 2019 ; Murphy et al. , 2019 ) . In this paper , we propose a new general-purpose framework , called Frame Averaging ( FA ) , that can systematically facilitate expressive invariant and equivariant networks with respect to a broad class of groups . At the heart of our framework , we build on a basic fact that arbitrary functions φ ∶ V → R , Φ ∶ V →W , where V , W are some vector spaces , can be made invariant or equivariant by symmetrization , that is averaging over the group ( Yarotsky , 2021 ; Murphy et al. , 2018 ) , i.e. , ψ ( X ) = 1 ∣G∣ ∑g∈G φ ( g−1 ⋅X ) or Ψ ( X ) = 1 ∣G∣ ∑g∈G g ⋅Φ ( g−1 ⋅X ) . ( 1 ) where G = { g } denotes the group , ψ ∶ V → R is invariant and Ψ ∶ V → W is equivariant with respect to G. Furthermore , since invariant and equivariant functions are fixed under group averaging , i.e. 
, ψ = φ for invariant φ and Ψ = Φ for equivariant Φ, the above scheme often leads to universal (i.e., maximally expressive) models (Yarotsky, 2021). However, the challenge with equation 1 is that when the cardinality of G is large (e.g., combinatorial groups such as permutations) or infinite (e.g., continuous groups such as rotations), exact averaging is intractable. In such cases, we are forced to approximate the sum via heuristics or Monte Carlo (MC), thereby sacrificing the exact invariance/equivariance property for computational efficiency; e.g., Murphy et al. (2018; 2019) define heuristic averaging strategies for approximate permutation invariance in GNNs; similarly, Hu et al. (2021) and Shuaibi et al. (2021) use MC averaging for approximate rotation equivariance in GNNs. The key observation of the current paper is that the group average in equation 1 can be replaced with an average over a carefully selected subset F(X) ⊂ G while retaining both exact invariance/equivariance and expressive power. Therefore, if F can be chosen so that the cardinality |F(X)| is mostly small, averaging over F(X) results in both an expressive and an efficient invariant/equivariant model. We call the set-valued function F : V → 2^G a frame, and show that it can successfully replace full group averaging if it satisfies a set equivariance property. We name this framework Frame Averaging (FA), and it serves as the basis for the design of invariant/equivariant networks in this paper. We instantiate the FA framework by considering different choices of symmetry groups G, their actions on data spaces V, W (manifested by choices of group representations), and the backbone architectures (or parts thereof) φ, Φ we want to make invariant/equivariant to G. We consider: (i) Multi-Layer Perceptrons (MLPs) and Graph Neural Networks (GNNs) with node identification (Murphy et al.
, 2019; Loukas, 2020) adapted to permutation-invariant Graph Neural Networks (GNNs); (ii) Message-Passing GNNs (Gilmer et al., 2017) adapted to be invariant/equivariant to Euclidean motions, E(d); (iii) set networks, DeepSets and PointNet (Zaheer et al., 2017; Qi et al., 2017a), adapted to be equivariant or locally equivariant to E(d); (iv) a point cloud network, DGCNN (Wang et al., 2018), adapted to be equivariant to E(d). Theoretically, we prove that the FA framework maintains the expressive power of its original backbone architecture, which leads to some interesting corollaries: First, (i) results in invariant universal graph learning models; (ii) is an E(d) invariant/equivariant GNN that maintains the power of message passing (Xu et al., 2018a; Morris et al., 2019); and (iii), (iv) furnish universal permutation and E(d) invariant/equivariant models. We note that both the construction and the proofs are arguably considerably simpler than the existing alternative constructions and proofs for this type of symmetry (Thomas et al., 2018; Fuchs et al., 2020; Dym & Maron, 2020). We experimented with FA on different tasks involving symmetries, including point-cloud normal estimation, beyond-2-Weisfeiler-Lehman graph separation, and n-body dynamics prediction, reaching state-of-the-art performance in all. 2 FRAME AVERAGING. In this section we introduce the FA approach and theory in a generic formulation, while in the next section we show the power of this general approach by instantiating it on (seemingly different) problems of interest. 2.1 FRAME AVERAGING FOR FUNCTION SYMMETRIZATION. Let φ : V → R and Φ : V → W be some arbitrary functions, where V, W are normed linear spaces with norms ∥·∥V, ∥·∥W, respectively. For example, φ, Φ can be thought of as neural networks. We consider a group G = {g} that describes some symmetry we want to incorporate into φ, Φ.
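As a concrete sketch of the full group average in equation 1, and of the factorial cost that motivates frames, the following NumPy snippet symmetrizes a function over all of S_n acting by coordinate permutation. The backbone φ here is an arbitrary toy function, not one from the paper.

```python
import itertools
import numpy as np

def phi(x):
    # An arbitrary, deliberately non-invariant backbone V = R^n -> R.
    return float(np.sum(x * np.arange(1, len(x) + 1)))

def group_average(phi, x):
    # psi(x) = (1/|G|) sum_{g in G} phi(g^{-1} . x) with G = S_n (equation 1).
    # The cost grows as n!, which is exactly what frames are meant to avoid.
    perms = list(itertools.permutations(range(len(x))))
    return sum(phi(np.asarray(x)[list(p)]) for p in perms) / len(perms)

x = np.array([3.0, 1.0, 2.0])
# The averaged function is exactly permutation invariant:
assert np.isclose(group_average(phi, x), group_average(phi, x[[1, 2, 0]]))
```

Already for n = 10 this requires 3,628,800 evaluations of φ per input, illustrating why averaging over a small frame instead of the whole group matters.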
The way the symmetries g ∈ G are applied to vectors in V, W is described by the group's representations ρ1 : G → GL(V) and ρ2 : G → GL(W), where GL(V) is the space of invertible linear maps V → V (automorphisms). A representation ρi preserves the group structure by satisfying ρi(gh) = ρi(g)ρi(h) for all g, h ∈ G (see e.g., Fulton & Harris (2013)). As customary, we will sometimes refer to the linear spaces V, W as representations. Our goal is to make φ into an invariant function, namely satisfying φ(ρ1(g)X) = φ(X) for all g ∈ G and X ∈ V; and Φ into an equivariant function, namely Φ(ρ1(g)X) = ρ2(g)Φ(X) for all g ∈ G and X ∈ V. We will do that by averaging over group elements, but instead of averaging over the entire group every time (as in equation 1) we will average over a subset of the group elements called a frame. Definition 1. A frame is defined as a set-valued function F : V → 2^G ∖ {∅}. 1. A frame is G-equivariant if F(ρ1(g)X) = gF(X), ∀X ∈ V, g ∈ G, (2) where as usual, gF(X) = {gh | h ∈ F(X)}, and the equality in equation 2 should be understood as equality of sets. 2. A frame is bounded over a domain K ⊂ V if there exists a constant c > 0 so that ∥ρ2(g)∥op ≤ c for all g ∈ F(X) and all X ∈ K, where ∥·∥op denotes the induced operator norm over W. Figure 1 provides an illustration. How are equivariant frames useful? Consider a scenario where an equivariant frame is easy to compute, and furthermore its cardinality, |F(X)|, is not too large. Then averaging over the frame, denoted ⟨·⟩F and defined by

⟨φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} φ(ρ1(g)^{-1}X)   (3)
⟨Φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} ρ2(g)Φ(ρ1(g)^{-1}X)   (4)

provides the required function symmetrization. In Appendix A.1 we prove: Theorem 1 (Frame Averaging). Let F be a G-equivariant frame, and φ : V → R, Φ : V → W some functions.
Then ⟨φ⟩F is G-invariant, while ⟨Φ⟩F is G-equivariant. Several comments are in order. First, the invariant case (equation 3) is a particular case of the equivariant case (equation 4) under the choice of W = R and the trivial representation ρ2(g) ≡ 1. Second, in this paper we only consider X and frame choices F for which F(X) are finite sets. Nevertheless, treating the infinite case is an important future research direction. Third, a trivial choice of an equivariant frame is F(X) ≡ G, that is, taking the frame to be the entire group for all X ∈ V (for an infinite but compact G, the sum in the FA in this case can be replaced with a Haar integral). This choice can be readily checked to be equivariant, and it turns the FA equations 3, 4 into the standard group averaging operators of equation 1. The problem with this choice, however, is that it often results in an intractable or challenging computation, e.g., when the group is large or infinite. In contrast, as we show below, in some useful cases one can compute a manageable-size frame and use it to build invariant or equivariant operators in a principled way. Let us provide a simple example of Frame Averaging: consider V = R^n, W = R, and G = R with addition as the group action. We choose the group actions¹ in this case to be ρ1(a)x = x + a1 and ρ2(a)b = b + a, where a, b ∈ R, x ∈ R^n, and 1 ∈ R^n is the vector of all ones. We can define the frame in this case using the averaging operator F(x) = {(1/n)1^T x} ⊂ G = R. Note that in this case the frame contains only one element from the group; in other cases finding such a small frame is hard or even impossible. One can check that this frame is equivariant per Definition 1. The FA of φ : R^n → R would be ⟨φ⟩F(x) = φ(x − (1/n)(1^T x)1) in the invariant case, and ⟨Φ⟩F(x) = Φ(x − (1/n)(1^T x)1) + (1/n)1^T x in the equivariant case. Incorporating G as a second symmetry.
An important use case of frame averaging is with backbones φ, Φ that are already invariant/equivariant w.r.t. some symmetry group H, where our goal is to make them invariant/equivariant to H × G. For example, say we want to add G = E(3) equivariance to permutation-invariant set or graph functions, i.e., H = Sn. We will provide sufficient conditions for the FA to provide this desired invariance/equivariance. First, let us assume H is acting on V and W by the representations τ1 : H → GL(V) and τ2 : H → GL(W), respectively. Assume φ is H-invariant and Φ is H-equivariant. We say that the representations ρ1 and τ1 commute if ρ1(g)τ1(h)X = τ1(h)ρ1(g)X for all g ∈ G, h ∈ H, and X ∈ V. If ρ1 and τ1 commute, then the map γ1 : H × G → GL(V) defined by γ1(h, g) = τ1(h)ρ1(g) is a representation of the group H × G. Second, we need the frame F(X) to be invariant to H, that is, F(τ1(h)X) = F(X). We show a generalization of Theorem 1: Theorem 2 (Frame Averaging with a second symmetry). Assume F is H-invariant and G-equivariant. Then: 1. If φ : V → R is H-invariant and ρ1, τ1 commute, then ⟨φ⟩F is G × H invariant. 2. If Φ : V → W is H-equivariant and ρi, τi, i = 1, 2, commute, then ⟨Φ⟩F is G × H equivariant. ¹Note that since these are affine maps they are technically not representations, but they have an equivalent representation using homogeneous coordinates. Therefore, FA is also valid with affine actions as used here. Right actions. Above we used left actions for the definition of equivariance. There are other flavors of equivariance, e.g., if one of the actions is a right action. For example, if g multiplies F(X) from the right, then equivariance takes the form F(ρ1(g)X) = F(X)g^{-1}, ∀X ∈ V, g ∈ G, (5) and accordingly

⟨φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} φ(ρ1(g)X),   ⟨Φ⟩F(X) = (1/|F(X)|) Σ_{g∈F(X)} ρ2(g)^{-1}Φ(ρ1(g)X)   (6)

are G-invariant and G-equivariant, respectively.
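The translation example above can be checked numerically. This is a minimal sketch of equations 3 and 4 for G = (R, +) with the paper's single-element frame F(x) = {(1/n)1^T x}; the backbones φ, Φ are arbitrary toy functions introduced here for illustration.

```python
import numpy as np

def frame(x):
    # The paper's example frame for G = (R, +): F(x) = { (1/n) 1^T x }.
    return [float(np.mean(x))]

def fa_invariant(phi, x):
    # <phi>_F(x) = (1/|F(x)|) sum_{a in F(x)} phi(rho1(a)^{-1} x),
    # where rho1(a) x = x + a*1 (equation 3).
    fr = frame(x)
    return sum(phi(x - a) for a in fr) / len(fr)

def fa_equivariant(Phi, x):
    # <Phi>_F(x) = (1/|F(x)|) sum_{a in F(x)} rho2(a) Phi(rho1(a)^{-1} x),
    # where rho2(a) b = b + a (equation 4).
    fr = frame(x)
    return sum(Phi(x - a) + a for a in fr) / len(fr)

phi = lambda x: float(np.sum(x ** 2))  # arbitrary backbone, not invariant
Phi = lambda x: float(np.max(x))       # arbitrary backbone, not equivariant

x, a = np.array([1.0, -2.0, 4.0]), 5.0
assert np.isclose(fa_invariant(phi, x + a), fa_invariant(phi, x))          # Theorem 1, invariance
assert np.isclose(fa_equivariant(Phi, x + a), fa_equivariant(Phi, x) + a)  # Theorem 1, equivariance
```

Since the frame has a single element, the FA costs exactly one backbone evaluation, while full group averaging over G = R would be an intractable integral.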
Efficient calculation of invariant frame averaging. There could be instances of the FA framework (indeed, we discuss such a case later) where |F(X)| is still too large to evaluate equations 3, 4. In the invariant case, there is a more efficient form of FA that can potentially be applied. To show it, let us start by defining the subgroup of symmetries of X, i.e., its stabilizer. The stabilizer of an element X ∈ V is a subgroup of G defined by GX = {g ∈ G | ρ1(g)X = X}. GX naturally induces an equivalence relation ∼ on F(X), with g ∼ h ⟺ hg^{-1} ∈ GX. The equivalence classes (orbits) are [g] = {h ∈ F(X) | g ∼ h} = GXg ⊂ F(X), for g ∈ F(X), and the quotient set is denoted F(X)/GX. Theorem 3. An equivariant frame F(X) is a disjoint union of equal-size orbits, [g] ∈ F(X)/GX. The proof is in A.3. The first immediate consequence of Theorem 3 is that the cardinality of F(X) is at least that of the stabilizer (intuitively, the inner symmetries) of X, namely |F(X)| ≥ |GX|. Therefore, there could be cases, such as when X describes a symmetric graph, where |F(X)| could be too large to average over. A remedy comes from the following observation: for every h ∈ [g], we have that h = rg, r ∈ GX, and φ(ρ1(h)^{-1}X) = φ(ρ1(g)^{-1}ρ1(r)^{-1}X) = φ(ρ1(g)^{-1}X), since also r^{-1} ∈ GX. Therefore the summands in equation 3 are constant over orbits, and we get

⟨φ⟩F(X) = (1/mF) Σ_{[g]∈F(X)/GX} φ(ρ1(g)^{-1}X),   (7)

where mF = |F(X)/GX| = |F(X)|/|GX|. This representation of invariant FA requires only mF = |F(X)|/|GX| evaluations, compared to |F(X)| in the original FA in equation 3. Approximation of invariant frame averaging. Unfortunately, enumerating F(X)/GX could be challenging in some cases. Nevertheless, equation 7 is still very useful: it turns out we can easily draw a random element from F(X)/GX with uniform probability.
This is an immediate application of the equal orbit size in Theorem 3. Corollary 1. Let F(X) be an equivariant frame, and let g ∈ F(X) be a uniform random sample. Then [g] ∈ F(X)/GX is also uniform. Therefore, an efficient approximation strategy is averaging over uniform samples gi ∈ F(X), i ∈ [k],

⟪φ⟫F(X) = (1/k) Σ_{i=1}^{k} φ(ρ1(gi)^{-1}X).   (8)

This approximation is especially useful, compared to the full-blown FA, when mF = |F(X)|/|GX| is small, i.e., when |GX| is large, or X has many symmetries. Intuitively, the smaller mF, the better the approximation in equation 8. A partial explanation of this phenomenon is given in Appendix A.4, while an empirical validation is provided in Section 5.2.
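To illustrate equation 8, a toy sketch (this is not one of the paper's frames): for G = S_n acting on R^n, the permutations that sort x form an equivariant frame, and with repeated entries the stabilizer G_X is non-trivial. Drawing frame elements uniformly and averaging gives the estimator ⟪φ⟫_F; here the summand is constant over stabilizer orbits, so per equation 7 even a few samples are exact.

```python
import numpy as np

rng = np.random.default_rng(0)

def sorting_frame_samples(x, k):
    # Frame for G = S_n on R^n: all permutations sorting x ascending. With
    # repeated entries the frame has |G_X| elements; we draw k of them
    # (approximately uniformly) by breaking ties with random keys.
    return [np.lexsort((rng.random(len(x)), x)) for _ in range(k)]

def fa_mc(phi, x, k=4):
    # <<phi>>_F(x) = (1/k) sum_i phi(rho1(g_i)^{-1} x)  (equation 8);
    # applying g_i^{-1} here amounts to presenting phi with the sorted vector.
    return sum(phi(x[p]) for p in sorting_frame_samples(x, k)) / k

phi = lambda x: float(np.sum(x * np.arange(len(x))))  # order-sensitive backbone
x = np.array([2.0, 1.0, 2.0])
# The summand is constant over orbits, so the k-sample estimate is already
# exact and permutation invariant for this frame:
assert np.isclose(fa_mc(phi, x), fa_mc(phi, x[[2, 0, 1]]))
```

Compare the cost: k backbone evaluations instead of the |F(X)| ≥ |G_X| evaluations of the full frame average.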
**Summary and Contributions**: The paper introduces a framework called Frame Averaging (FA) that can adapt existing backbone architectures to become invariant/equivariant to new symmetry types. It achieves this by averaging over an input-dependent frame, which outputs a subset of the group's elements. Frame averaging is often much more efficient to compute than averaging over the entire group, while at the same time guaranteeing exact invariance/equivariance. On the technical side, the paper also proves that FA-based models have the same expressive power as the original backbone architectures. On the empirical side, the paper provides new classes of models using FA, such as universal Euclidean motion invariant point cloud networks / Message Passing GNNs, and demonstrates their practical effectiveness on several tasks.
SP:0241b8a73225e20c6d486355f34f267d87ef1f44
Structured Uncertainty in the Observation Space of Variational Autoencoders
1 INTRODUCTION. Generative modelling is one of the cornerstones of modern machine learning. One of the most used and widespread classes of generative models is the Variational Autoencoder (VAE) (Kingma & Welling, 2014; 2019). VAEs explicitly model the distribution of observations by assuming a latent variable model with a low-dimensional latent space and using a simple parametric distribution in observation space. Using a neural network, VAEs decode the latent space into arbitrarily complex observational distributions. Despite many improvements on the VAE model, one often-overlooked aspect is the choice of observational distribution. As an explicit likelihood model, the VAE assumes a distribution in observation space – using a delta distribution would not allow gradient-based optimisation. Most current implementations, however, employ only simple models, such as pixel-wise independent normal distributions, which eases optimisation but limits expressivity. Alternatively, the likelihood term is often replaced by a reconstruction loss – which, in the case of an L2 loss, implicitly assumes an independent normal distribution. Following this implicit assumption, samples are then generated by only predicting the mean, rather than sampling in observation space. An application where this disconnect becomes apparent is image synthesis. The common choices for observational distributions are pixel-wise independent categorical or normal distributions. For pixel-wise independent distributions, regardless of other model choices, sampling from the joint distribution over pixels will result in spatially incoherent samples due to independent pixel noise (cf. Figure 1). To address this problem, researchers use the predicted distributions to calculate the log-likelihood in the objective but then discard them in favour of the mean when generating samples or reconstructing inputs.
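The spatial incoherence of pixel-wise independent sampling can be reproduced in a few lines. This toy NumPy sketch (the mean image and noise scale are made up for illustration, not taken from the paper) samples from a pixel-wise independent normal around a smooth mean image and compares local smoothness.

```python
import numpy as np

rng = np.random.default_rng(0)

# A decoder that predicts a smooth mean image with a pixel-wise independent
# normal distribution: samples add i.i.d. noise per pixel, so neighbouring
# pixels become spatially incoherent even though the mean is smooth.
H = W = 8
mean = np.tile(np.linspace(0.0, 1.0, W), (H, 1))  # smooth gradient "image"
sigma = 0.2 * np.ones((H, W))                     # per-pixel std deviation

sample = mean + sigma * rng.standard_normal((H, W))  # x ~ N(mean, diag(sigma^2))

# Horizontal neighbour differences: tiny for the mean, noise-dominated for the sample.
mean_step = np.abs(np.diff(mean, axis=1)).mean()
sample_step = np.abs(np.diff(sample, axis=1)).mean()
assert sample_step > mean_step  # independent pixel noise destroys smoothness
```

This is exactly why practitioners fall back on reporting the mean: the samples of a diagonal-covariance model look like the mean plus salt-and-pepper noise, never like a coherent image.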
However, this solution is akin to ignoring the issue rather than attempting to solve it. In this work, we explore what happens when we strictly follow VAE theory and sample from the predicted observational distributions. We illustrate the problem of spatial incoherence that arises from using pixel-wise independent distributions. We propose using a spatially dependent joint distribution over the observation space and compare it to the previous scenario. We further compare the samples to the mean of the predicted observational distribution, which is typically used when synthesising images. We note that, in this work, we are not focusing on absolute image quality. Instead, we aim to point to an issue often overlooked in VAE theory and application, which affects most state-of-the-art methods. Thus we analyse the relative difference between using and not using a joint pixel-dependent observational distribution for a basic VAE. Yet, our findings are of broad relevance and our proposed model can be used in more advanced VAE variants. 2 RELATED WORK. Modern generative models can be divided into two classes: implicit likelihood models, such as Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and diffusion models (Song & Durkan, 2021), and explicit likelihood models, such as VAEs (Kingma & Welling, 2014; Rezende et al., 2014; Kingma & Welling, 2019), flow models (Dinh et al., 2017; Kingma & Dhariwal, 2018; Dinh et al., 2015) and auto-regressive models (Van Den Oord et al., 2016a;b; Salimans et al., 2017). Despite implicit likelihood models having achieved impressive results in terms of sample quality without the need for explicitly modelling the observation space (Karras et al., 2020; Brock et al., 2019), interest in explicit likelihood models has prevailed due to their appealing properties, such as ease of likelihood estimation.
One of the most popular and successful explicit likelihood models is the VAE (Kingma & Welling, 2014; Rezende et al., 2014; Kingma & Welling, 2019). Since its introduction, there have been numerous extensions. For example, Van Den Oord et al. (2017); Razavi et al. (2019) quantize the latent space to achieve better image quality; Higgins et al. (2017); Chen et al. (2017) modify the latent posterior to obtain disentangled and interpretable representations; and Vahdat & Kautz (2020); Sønderby et al. (2016) use hierarchical architectures to improve sample quality. Like most other explicit likelihood models, the VAE requires the choice of a parametric observational distribution. This choice is often pixel-wise independent. As a result, practitioners use the distribution to calculate the likelihood but use its expected value when sampling, as the samples themselves are noisy and of limited use in most applications. However, according to the theory, and as previously pointed out by Stirn & Knowles (2020) and Detlefsen et al. (2019), a latent space sample should entail a distribution over observations and not a single point. Despite attempts to enforce spatial dependencies in the decoder architecture (Miladinovic et al., 2021), without a pixel-dependent joint likelihood, the observation samples will remain noisy. Notable exceptions are auto-regressive models and auto-regressive VAE decoders (Van Den Oord et al., 2017; Razavi et al., 2019; Gulrajani et al., 2017; Nash et al., 2021). Unlike other explicit likelihood models, auto-regressive models jointly model the observational distribution by sequentially decoding pixels while conditioning on previously decoded values. While this sampling procedure results in spatially coherent samples, it is computationally expensive and uncertainty estimation is not trivial.
In the context of non-auto-regressive VAE decoders, work focusing on modelling a joint observational distribution that accounts for pixel dependencies is limited. Monteiro et al. (2020) use a low-rank multivariate normal distribution to produce spatially consistent samples in a segmentation setting. However, they focus on discriminative models only. For generative models, a notable exception is the work by Dorta et al. (2018) which, similarly to our proposed method, employs a non-diagonal multivariate normal distribution over observation space. The key difference is the choice of parameterisation used for the covariance matrix. Dorta et al. (2018) predict the Cholesky-decomposed precision matrix, which grows quadratically with the size of the image. To address this computational constraint, the authors use a sparse decomposition. This decomposition considers only a local neighbourhood of pixels, which limits its ability to capture long-range spatial dependencies. In contrast, our approach uses a global parameterisation. 3 METHODS. 3.1 VARIATIONAL AUTOENCODERS. We briefly revisit the theory of standard VAEs as proposed by Kingma & Welling (2014); Rezende et al. (2014). We assume a prior distribution over the latent variables, p(z), a probabilistic encoder of the posterior, pθ(z|x), and a probabilistic decoder of the likelihood, pθ(x|z). In practice, the probabilistic encoder describes an intractable posterior. Using variational inference, we approximate this posterior with a distribution qφ(z|x) with parameters φ. Here, VAEs provide the algorithm to jointly learn the parameters θ and φ (Kingma & Welling, 2014). Considering some dataset X = {x^(i)}_{i=1}^N, the VAE objective is given by maximising the evidence lower bound with respect to the parameters θ and φ (Bank et al.
, 2020; Kingma & Welling, 2014):

L(θ, φ; x^(i)) = −D_KL[qφ(z|x^(i)) || pθ(z)] + E_{qφ(z|x^(i))}[log pθ(x^(i)|z)]   (1)

Since the derivative of the lower bound w.r.t. φ is problematic due to the stochastic expectation operator, the re-parameterisation trick is used to yield the Stochastic Gradient Variational Bayes (SGVB) estimator (Kingma & Welling, 2014). Note, from the second term in equation 1, that the optimisation objective requires the choice of an observational distribution to calculate the likelihood that the data comes from the predicted distribution, pθ(x|z). Hence, it follows that a latent sample entails a distribution over observations. The predicted distribution should be as close as possible to the observed distribution. Its samples should look like real observations. However, in the case of highly structured data such as images, this will not be the case when using the commonly employed pixel-independent joint distribution. 3.2 STRUCTURED OBSERVATION SPACE VARIATIONAL AUTOENCODERS. With the commonly assumed model for the observation space, where the predicted joint distribution is pixel-wise independent, samples can only add noise to the predicted mean, as shown in the examples in Section 4.1. By incorporating spatial dependencies in the predicted distribution, we aim to overcome this limitation and generate more realistic samples under the observed distribution. Our solution is to replace the joint independent distribution predicted by the decoder with one that explicitly models dependencies between outputs. Specifically, we modify the final layer of the decoder to predict a low-rank parameterisation of a fully populated covariance matrix for use in a multivariate normal distribution, pθ(x|z) ∼ N(µ, Σ). This small modification can be applied to most existing VAE architectures. Inspired by a recent discriminative model (Monteiro et al.
, 2020), we use an efficient parameterisation, Σ = PP^T + D, to model covariance globally, albeit at a low rank. This yields a compact model with a covariance factor P ∈ R^{(S·C)×R} and a covariance diagonal D = diag(d) ∈ R^{(S·C)×(S·C)}, with diagonal elements d. Here, S = H × W is the number of pixels, C is the number of channels, and R is the rank of the parameterisation. Since we have only modified the distribution of the likelihood, the SGVB estimator (equation 1) is applicable without modification. However, we found optimising the SGVB estimator with a non-diagonal covariance to be unstable regarding the variance. The instability comes from the fact that two routes can optimise the likelihood: to find the correct mean and appropriate variance around it, or to keep increasing the variance (uncertainty about the mean). The second direction is obviously undesirable and results in implausible samples (with overly bright colours and high contrast; cf. Appendix A). Monteiro et al. (2020) observed a similar phenomenon and pre-trained on the mean to avoid it. This problem is not unique to our implementation; the stability of variance networks has been discussed before (Stirn & Knowles, 2020; Detlefsen et al., 2019). In our case, we found pre-training the mean to be beneficial but insufficient. To further mitigate the problem, our solution includes weight initialisation, fixing the covariance diagonal to a small positive scalar, D = εI (we found ε = 10^{-5} to yield good results), and constraining the entropy of the predicted distribution. Constraining the entropy constrains the variance indirectly, thus giving preference to low-variance solutions. We compute the entropy of the normal distribution in closed form and add it to the objective function.
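Sampling from N(µ, Σ) with Σ = PP^T + D never requires materialising the full (S·C)² covariance matrix. A sketch of the standard reparameterised sampling for this low-rank-plus-diagonal family, with illustrative shapes and values (N stands for the flattened S·C dimension):

```python
import numpy as np

rng = np.random.default_rng(0)

N, R = 16, 3                              # N = S*C flattened pixels/channels, R = rank
mu = rng.standard_normal(N)               # predicted mean
P = rng.standard_normal((N, R))           # covariance factor
d = np.full(N, 1e-5)                      # small fixed diagonal, D = diag(d)

def sample(mu, P, d, n_samples):
    # x = mu + P eps + sqrt(d) * eta, with eps ~ N(0, I_R), eta ~ N(0, I_N).
    # Then Cov[x] = P P^T + diag(d), and no N x N matrix is ever built.
    eps = rng.standard_normal((n_samples, R))
    eta = rng.standard_normal((n_samples, N))
    return mu + eps @ P.T + np.sqrt(d) * eta

xs = sample(mu, P, d, 100_000)
emp_cov = np.cov(xs, rowvar=False)
# Empirical covariance matches the low-rank-plus-diagonal target (loose tolerance):
assert np.allclose(emp_cov, P @ P.T + np.diag(d), atol=0.25)
```

The same factorisation gives O(N·R) sampling and, via the matrix inversion lemma, efficient log-likelihood evaluation, which is what makes the global parameterisation tractable at image scale.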
We employ soft constraints for the entropy and KL divergence using the modified differential method of multipliers (Platt & Barr, 1988), for its agreeable convergence and stability properties, which results in the following Lagrangian formulation:

L̃(θ, φ; x^(i)) = (1/M) Σ_{m=1}^{M} log pθ(x^(i)|z^(i,m)) − β[D_KL[qφ(z|x^(i)) || pθ(z)] − ξ_KL] − λ_H[H(x^(i)|z) − ξ_H]   (2)

where β and ξ_KL are the Lagrangian multiplier and the slack variable, respectively, for the β-VAE constraint; H(x^(i)|z) is the entropy of the predicted distribution; and λ_H and ξ_H are the Lagrangian multiplier and the slack variable, respectively, for the new constraint.
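A heavily simplified sketch of the multiplier dynamics behind equation 2: each Lagrange multiplier is updated by gradient ascent on its constraint violation, so its pressure grows while the constraint exceeds its slack and shrinks otherwise. The learning rate, the stand-in constraint values, and the omission of Platt & Barr's damping term are illustrative simplifications, not the paper's settings.

```python
def update_multiplier(lam, constraint_value, slack, lr=0.01):
    # Gradient *ascent* on the multiplier: d(lam)/dt = constraint - slack.
    # Clipped at zero so the penalty never becomes a reward.
    return max(lam + lr * (constraint_value - slack), 0.0)

beta, lam_H = 1.0, 1.0      # initial multipliers for the KL and entropy constraints
xi_KL, xi_H = 10.0, -50.0   # slack values (illustrative)

for step in range(100):
    kl = 12.0        # stand-in measured KL divergence for this batch
    entropy = -60.0  # stand-in measured entropy of the predicted distribution
    beta = update_multiplier(beta, kl, xi_KL)        # KL above slack -> beta grows
    lam_H = update_multiplier(lam_H, entropy, xi_H)  # entropy below slack -> lam_H shrinks

assert beta > 1.0    # constraint violated: pressure increased
assert lam_H == 0.0  # constraint satisfied: pressure decayed to zero
```

In a real training loop the stand-in `kl` and `entropy` values would be recomputed from the current batch each step, and θ, φ would take a gradient step on equation 2 between multiplier updates.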
The authors aim at improving the canonical VAE model by replacing the standard iid Gaussian likelihood with a multivariate Gaussian with (low-rank + diagonal) covariance. In applications to CelebA and a brain MRI dataset from UK Biobank, the authors compare the proposed structured-observation-space VAE with a canonical VAE, showing that the samples from their model have lower Fréchet Inception Distance (FID) scores than both means and samples from the canonical VAE. The authors then evaluate the expressiveness of the representations learned in the observation space by visually evaluating interpolations in the observation space, and images obtained by rescaling the principal components of the observational covariance matrix. Finally, the authors show how their model can be used for interactive editing—i.e., editing a small number of pixels and using the conditional distribution to infer coherent edits in the remainder of the image.
SP:29f19f93648edfbc8c30536e8e99a4437c560993
Structured Uncertainty in the Observation Space of Variational Autoencoders
1 INTRODUCTION Generative modelling is one of the cornerstones of modern machine learning . One of the most used and widespread classes of generative models is the Variational Autoencoder ( VAE ) ( Kingma & Welling , 2014 ; 2019 ) . VAEs explicitly model the distribution of observations by assuming a latent variable model with low-dimensional latent space and using a simple parametric distribution in observation space . Using a neural network , VAEs decode the latent space into arbitrarily complex observational distributions . Despite many improvements on the VAE model , one oftenoverlooked aspect is the choice of observational distribution . As an explicit likelihood model , the VAE assumes a distribution in observation space – using a delta distribution would not allow gradient based optimisation . Most current implementations , however , employ only simple models , such as pixelwise independent normal distributions , which eases optimisation but limits expressivity . Else , the likelihood term is often replaced by a reconstruction loss – which , in the case of an L2 loss , implicitly assumes an independent normal distribution . Following this implicit assumption , samples are then generated by only predicting the mean , rather than sampling in observation space . An application where this disconnect becomes apparent is image synthesis . The common choices for observational distributions are pixel-wise independent categorical or normal distributions . For pixel-wise independent distributions , regardless of other model choices , sampling from the joint distribution over pixels will result in spatially-incoherent samples due to independent pixel noise ( cf . Figure 1 ) . To address this problem , researchers use the predicted distributions to calculate the log-likelihood in the objective but then discard them in favour of the mean when generating samples or reconstructing inputs . 
However , this solution is akin to ignoring the issue rather than attempting to solve it . In this work , we explore what happens when we strictly follow VAE theory and sample from the predicted observational distributions . We illustrate the problem of spatial incoherence that arises from using pixel-wise independent distributions . We propose using a spatially dependent joint distribution over the observation space and compare it to the previous scenario . We further compare the samples to the mean of the predicted observational distribution , which is typically used when synthesising images . We note that , in this work , we are not focusing on absolute image quality . Instead , we aim to point to an issue often overlooked in VAE theory and application , which affects most state-of-the-art methods . Thus we analyse the relative difference between using and not using a joint pixel dependent observational distribution for a basic VAE . Yet , our findings are of broad relevance and our proposed model can be used in more advanced VAE variants . 2 RELATED WORK . Modern generative models can be divided into two classes , implicit likelihood models , such as Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) and diffusion models ( Song & Durkan , 2021 ) , and explicit likelihood models , such as VAEs ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Kingma & Welling , 2019 ) , flow models ( Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ; Dinh et al. , 2015 ) and auto-regressive models ( Van Den Oord et al. , 2016a ; b ; Salimans et al. , 2017 ) . Despite implicit likelihood models having achieved impressive results in terms of sample quality without the need for explicitly modelling the observation space ( Karras et al. , 2020 ; Brock et al. , 2019 ) , interest in explicit likelihood models have prevailed due to their appealing properties , such as ease of likelihood estimation . 
One of the most popular and successful explicit likelihood models is the VAE ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Kingma & Welling , 2019 ) . Since its introduction , there have been numerous extensions . For example , Van Den Oord et al . ( 2017 ) ; Razavi et al . ( 2019 ) quantize the latent space to achieve better image quality ; Higgins et al . ( 2017 ) ; Chen et al . ( 2017 ) modify the latent posterior to obtain disentangled and interpretable representations ; and Vahdat & Kautz ( 2020 ) ; Sønderby et al . ( 2016 ) use hierarchical architectures to improve sample quality . Like most other explicit likelihood models , the VAE requires the choice of a parametric observational distribution . This choice is often pixel-wise independent . As a result , practitioners use the distribution to calculate the likelihood but use its expected value when sampling , as the samples themselves , are noisy and of limited use in most applications . However , according to the theory and as previously pointed out by Stirn & Knowles ( 2020 ) and Detlefsen et al . ( 2019 ) , a latent space sample should entail a distribution over observations and not a single point . Despite attempts to enforce spatial dependencies in the decoder architecture ( Miladinovic et al. , 2021 ) , without a pixeldependent joint likelihood , the observation samples will remain noisy . Notable exceptions are autoregressive models and auto-regressive VAE decoders ( Van Den Oord et al. , 2017 ; Razavi et al. , 2019 ; Gulrajani et al. , 2017 ; Nash et al. , 2021 ) . Unlike other explicit likelihood models , auto-regressive models jointly model the observational distribution by sequentially decoding pixels while conditioning on previously decoded values . While this sampling procedure results in spatially coherent samples , it is computationally expensive and uncertainty estimation is not trivial . 
In the context of non-auto-regressive VAE decoders, work focusing on modelling a joint observational distribution that accounts for pixel dependencies is limited. Monteiro et al. (2020) use a low-rank multivariate normal distribution to produce spatially consistent samples in a segmentation setting. However, they focus on discriminative models only. For generative models, a notable exception is the work by Dorta et al. (2018) which, similarly to our proposed method, employs a non-diagonal multivariate normal distribution over observation space. The key difference is the choice of parameterisation used for the covariance matrix. Dorta et al. (2018) predict the Cholesky-decomposed precision matrix, which grows quadratically with the size of the image. To address this computational constraint, the authors use a sparse decomposition. This decomposition considers only a local neighbourhood of pixels, which limits its ability to capture long-range spatial dependencies. In contrast, our approach uses a global parameterisation. 3 METHODS. 3.1 VARIATIONAL AUTOENCODERS. We briefly revisit the theory of standard VAEs as proposed by Kingma & Welling (2014); Rezende et al. (2014). We assume a prior distribution over the latent variables, p(z), a probabilistic encoder of the posterior, p_θ(z|x), and a probabilistic decoder of the likelihood, p_θ(x|z). In practice, the probabilistic encoder describes an intractable posterior. Using variational inference, we approximate this posterior with a distribution q_φ(z|x) with parameters φ. Here, VAEs provide the algorithm to jointly learn the parameters θ and φ (Kingma & Welling, 2014). Considering some dataset X = {x^{(i)}}_{i=1}^{N}, the VAE objective is given by maximising the evidence lower bound with respect to the parameters θ and φ (Bank et al.
, 2020; Kingma & Welling, 2014):

L(θ, φ; x^{(i)}) = −D_{KL}[q_φ(z|x^{(i)}) || p_θ(z)] + E_{q_φ(z|x^{(i)})}[log p_θ(x^{(i)}|z)]   (1)

Since the derivative of the lower bound w.r.t. φ is problematic due to the stochastic expectation operator, the re-parameterisation trick is used to yield the Stochastic Gradient Variational Bayes (SGVB) estimator (Kingma & Welling, 2014). Note, from the second term in equation 1, that the optimisation objective requires the choice of an observational distribution to calculate the likelihood that the data comes from the predicted distribution: p_θ(x|z). Hence, it follows that a latent sample entails a distribution over observations. The predicted distribution should be as close as possible to the observed distribution. Its samples should look like real observations. However, in the case of highly structured data such as images, this will not be the case when using the commonly-employed pixel-independent joint distribution. 3.2 STRUCTURED OBSERVATION SPACE VARIATIONAL AUTOENCODERS. With the commonly assumed model for the observation space, where the predicted joint distribution is pixel-wise independent, samples can only add noise to the predicted mean, as shown in the examples in section 4.1. By incorporating spatial dependencies in the predicted distribution, we aim to overcome this limitation and generate more realistic samples under the observed distribution. Our solution is to replace the joint independent distribution predicted by the decoder with one that explicitly models dependencies between outputs. Specifically, we modify the final layer of the decoder to predict a low-rank parameterisation of a fully populated covariance matrix for use in a multivariate normal distribution, p_θ(x|z) = N(µ, Σ). This small modification can be applied to most existing VAE architectures. Inspired by a recent discriminative model (Monteiro et al.
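The objective of equation 1 and the re-parameterisation trick can be illustrated with a minimal numpy sketch (not the paper's code; the Gaussian encoder/decoder parameterisation and function names are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(x, enc_mu, enc_logvar, decode, n_samples=1):
    """Monte Carlo estimate of the evidence lower bound (equation 1).

    enc_mu, enc_logvar parameterise q(z|x) = N(enc_mu, diag(exp(enc_logvar)));
    the prior is a standard normal, and `decode` maps z to the mean of a
    unit-variance Gaussian likelihood over x.
    """
    # Closed-form KL[q(z|x) || N(0, I)] for diagonal Gaussians.
    kl = 0.5 * np.sum(np.exp(enc_logvar) + enc_mu**2 - 1.0 - enc_logvar)
    # Re-parameterisation trick: z = mu + sigma * eps makes the sample
    # a deterministic function of the parameters plus external noise.
    log_lik = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(enc_mu.shape)
        z = enc_mu + np.exp(0.5 * enc_logvar) * eps
        x_mu = decode(z)
        log_lik += -0.5 * np.sum((x - x_mu) ** 2 + np.log(2 * np.pi))
    return log_lik / n_samples - kl
```

With a standard-normal posterior the KL term vanishes, so the ELBO reduces to the Gaussian log-likelihood alone.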
, 2020), we use an efficient parameterisation, Σ = P P^T + D, to model covariance globally, albeit at a low rank. This yields a compact model with a covariance factor P ∈ R^{(S×C)×R} and a covariance diagonal D = diag(d) ∈ R^{(S×C)×(S×C)}, with diagonal elements d. Here, S = H × W is the number of pixels, C is the number of channels, and R is the rank of the parameterisation. Since we have only modified the distribution of the likelihood, the SGVB estimator (equation 1) is applicable without modification. However, we found optimising the SGVB estimator with a non-diagonal covariance to be unstable with regard to the variance. The instability comes from the fact that two routes can optimise the likelihood: to find the correct mean and an appropriate variance around it, or to keep increasing the variance (uncertainty about the mean). The second direction is obviously undesirable and results in implausible samples (with overly bright colours and high contrast; cf. Appendix A). Monteiro et al. (2020) observed a similar phenomenon and pre-trained on the mean to avoid it. This problem is not unique to our implementation; the stability of variance networks has been discussed before (Stirn & Knowles, 2020; Detlefsen et al., 2019). In our case, we found pre-training the mean to be beneficial but insufficient. To further mitigate the problem, our solution includes weight initialisation, fixing the covariance diagonal to a small positive scalar, D = εI (we found ε = 10^{-5} to yield good results), and constraining the entropy of the predicted distribution. Constraining the entropy constrains the variance indirectly, thus giving preference to low-variance solutions. We compute the entropy of the normal distribution in closed form and add it to the objective function.
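A useful property of the parameterisation Σ = PP^T + D is that sampling never needs the full (S·C)×(S·C) matrix: x = µ + Pε_r + √d ⊙ ε_d with ε_r ∈ R^R and ε_d ∈ R^{S·C} has exactly this covariance. A numpy sketch (shapes and names are our own, not the paper's code):

```python
import numpy as np

def sample_low_rank_mvn(mu, P, d, rng):
    """Draw one sample from N(mu, P @ P.T + diag(d)) in O(n * R) time.

    mu: (n,) mean; P: (n, R) covariance factor; d: (n,) diagonal term.
    """
    eps_r = rng.standard_normal(P.shape[1])   # low-rank component
    eps_d = rng.standard_normal(mu.shape[0])  # diagonal component
    return mu + P @ eps_r + np.sqrt(d) * eps_d

# Sanity check: the implied covariance is symmetric positive definite.
rng = np.random.default_rng(0)
n, rank = 6, 2
P = rng.standard_normal((n, rank))
d = np.full(n, 1e-5)  # small fixed diagonal, as in the stabilised model
cov = P @ P.T + np.diag(d)
```

The small fixed diagonal d guarantees that Σ stays full rank even though P only spans an R-dimensional subspace.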
We employ soft constraints for the entropy and KL divergence using the modified differential method of multipliers (Platt & Barr, 1988), for its agreeable convergence and stability properties, which results in the following Lagrangian formulation:

L̃(θ, φ; x^{(i)}) = (1/M) Σ_{m=1}^{M} log p_θ(x^{(i)} | z^{(i,m)}) − β [ D_{KL}(q_φ(z|x^{(i)}) || p_θ(z)) − ξ_{KL} ] − λ_H [ H(x^{(i)}|z) − ξ_H ]   (2)

where β and ξ_{KL} are the Lagrangian multiplier and the slack variable, respectively, for the β-VAE constraint; H(x^{(i)}|z) is the entropy of the predicted distribution; and λ_H and ξ_H are the Lagrangian multiplier and the slack variable, respectively, for the new constraint.
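The constraint mechanism can be illustrated on a toy problem (our own illustrative example, not the paper's code): the modified differential method of multipliers takes gradient-descent steps on the parameters and gradient-ascent steps on the multiplier, with a damping term that stabilises the dynamics, so a constraint g(x) ≤ ξ is enforced softly.

```python
# Toy sketch of the modified differential method of multipliers:
# minimise f(x) = x^2 subject to g(x) = (x - 2)^2 <= xi.
xi = 0.25          # slack (plays the role of xi_H / xi_KL in Eq. 2)
c = 1.0            # damping coefficient of the "modified" method
x, lam = 0.0, 0.0
lr_x, lr_lam = 0.01, 0.05
for _ in range(50_000):
    g = (x - 2.0) ** 2 - xi              # signed constraint violation
    grad_x = 2.0 * x + (lam + c * g) * 2.0 * (x - 2.0)
    x -= lr_x * grad_x                   # descend on the Lagrangian
    lam = max(0.0, lam + lr_lam * g)     # ascend on the violation
# Constrained optimum lies on the boundary: x = 1.5, with lam = 3.
```

The multiplier grows only while the constraint is violated, which is why the constraint binds without having to hand-tune a fixed penalty weight.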
In the standard variational autoencoder framework, the statistics of the decoder output are assumed to be pixel-independent Gaussian, which can lead to problems when sampling from the model, since covariances between pixels are missing. To overcome these limitations, the authors propose to use the network architecture proposed by Monteiro et al. (2020) in the decoder of a variational autoencoder. In the approach of Monteiro et al. (2020), the output distribution is modelled as a low-rank multivariate normal. Qualitative results on the CelebA dataset demonstrate that samples from the proposed modified variational autoencoder capture covariances between different parts of the image.
SP:29f19f93648edfbc8c30536e8e99a4437c560993
Learning to Complete Code with Sketches
1 INTRODUCTION. Recent high-capacity language models (LMs) have shown that machine learning models are able to generate coherent, realistic text, but it is often hard to guide them towards a specific goal, especially when describing the intent is complex or more costly than manually generating the target output. One such scenario is LMs of source code (LMCs). Since Hindle et al. (2012), increasingly sophisticated LMCs have been built, including transformer-based ones, such as those of Svyatkovskiy et al. (2020); Feng et al. (2020); Chen et al. (2021), and various similar unpublished models such as TabNine and SourceAI. These models generate full sequences of code tokens left-to-right, with any prefix acting as the (partial) user intent. While LMs generate realistic-looking outputs, they are known to occasionally “hallucinate” (Puduppully et al., 2019; Malmi et al., 2019; Maynez et al., 2020; Liu et al., 2021), i.e. generate plausible but incorrect content. This is particularly problematic in generating source code, where small mistakes can lead to erroneous code that is very hard to debug or introduces vulnerabilities (Pearce et al., 2021). In this work, we investigate models that can decline to make predictions in places where there is high uncertainty (e.g., where the user should choose a name), but continue generating around these “holes”. For example, in Fig. 1 (left) a developer has typed some code and is about to type the next line. A likely completion is to consume more command line arguments, but their name is unclear [Figure 1: A sample snippet (abbreviated from Fig. 12 in Appx. A). Code context: import sys; target = sys.argv[1], with the cursor on line 3. Ground truth: ID = sys.argv[2]. Suggested completions — L→R: target = target.replace("\\", "/"); L→R+⦸: target = ; L→R+⟨hole⟩: print(target); Copilot: (no suggestion); GRAMMFORMER: = sys.argv[2].]
[Figure 1 caption, continued: A developer has just typed the code and their cursor (in blue) is at line 3. Code completions provided by a number of models are shown on the right, where L→R is a standard LMC and GRAMMFORMER is our new model.] ...from the context. A traditional generative model (e.g. Fig. 1; top right) may choose to provide a completion that exists in the training data, but is not clearly called for here. On the other hand, a model able to explicitly mark where it is uncertain (Fig. 1; bottom right) makes it clear to a user where further input is required. However, creating such models is not trivial. A simple first attempt may be to use a standard LMC, but output a “hole token” whenever the model is uncertain about the next output token. However, continuing after the hole then becomes infeasible, as the LMC was not trained on such data. Hence, a suitable training dataset and objective need to be devised. As no large datasets with holes exist, we instead choose to use a reinforcement learning approach in which our reward function encourages the model to make “long” predictions with as few hole tokens as possible, but to avoid making incorrect predictions. We found that standard left-to-right sequence models perform poorly on this task. Hence, we developed GRAMMFORMER, a model that constructs suggestions by generating a (partial) syntax tree, but which has the option of leaving non-terminals in its output. Contributions: (1) We present GRAMMFORMER, a transformer-based model that generates code based on the programming language grammar and can predict hole tokens rather than output tokens it is uncertain about. (2) We develop REGEXACC, a metric that evaluates the quality of predictions with holes. (3) We evaluate GRAMMFORMER on Python and C# code and show that GRAMMFORMER makes longer and more precise statement-level sketch completions compared to baselines. 2 METHOD.
Our aim is to predict code completions as sketches, a mix of actual tokens and “holes”, which are meant to signify that the model is unable to make a useful prediction within the given context and further user input is required. Formally, we consider models that take a context sequence x of tokens as input and have to produce an output sequence y; intuitively, x is what the user typed so far, and y is the suggestion presented to the user. In our setting, y is a sketch, a mix of tokens from the programming language and the special token signifying a “hole” that could be filled by an arbitrary sequence of tokens. For example, t = foo(⟨hole⟩) is a sketch corresponding to assigning the return value of function foo to variable t, but leaves the arguments of the function call undetermined. Metric. A good sketch is one that (a) can be completed into the correct output and (b) is as precise as possible. To measure how successful a method is in doing so, we define a new metric, REGEXACC. For (a), we use toRegex(ŷ) to turn a predicted code sketch ŷ into a regular expression by replacing all holes with the wildcard matching any non-empty sequence (“.+” in Perl Compatible Regular Expression syntax). If the regex matches the ground truth, matches(⋅, ⋅) returns a score of 1; otherwise it returns 0. To implement (b), we scale this result by the proportion of terminal tokens predicted, defining nTokens(ŷ) as the function that returns the number of non-hole symbols in ŷ. More formally, assume an output sketch ŷ and a ground-truth sequence y∗, where y∗ does not contain any hole tokens. REGEXACC is then defined as

REGEXACC(ŷ, y∗) ≜ matches(toRegex(ŷ), y∗) ⋅ nTokens(ŷ) / nTokens(y∗).

Beyond REGEXACC, we also consider ROUGE (Lin, 2004), since a sketch can be thought of as a form of a “summary” of the target text.
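A minimal Python sketch of REGEXACC as defined above (the token-level representation and the `<hole>` sentinel are our own illustrative choices, not the authors' implementation):

```python
import re

HOLE = "<hole>"  # illustrative sentinel for the hole token

def to_regex(sketch):
    """Turn a token-level sketch into a regex: holes become '.+' wildcards."""
    return " ".join(".+" if t == HOLE else re.escape(t) for t in sketch)

def n_tokens(seq):
    """Number of non-hole symbols in a sketch."""
    return sum(1 for t in seq if t != HOLE)

def regexacc(sketch, ground_truth):
    """REGEXACC(y_hat, y*) = matches(toRegex(y_hat), y*) * nTokens(y_hat) / nTokens(y*)."""
    matched = re.fullmatch(to_regex(sketch), " ".join(ground_truth)) is not None
    return float(matched) * n_tokens(sketch) / n_tokens(ground_truth)
```

For example, the correct but partial sketch ["t", "=", HOLE] against the ground truth ["t", "=", "foo", "(", ")"] matches the regex and predicts 2 of 5 tokens, scoring 0.4.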
For this, we use a helper function ERASEHOLES(ŷ) that simply drops all hole tokens, and then consider ROUGE-F1(ERASEHOLES(ŷ), y∗). ROUGE is more lenient to errors than REGEXACC and gives partial credit to non-matching but plausible sketches. 2.1 LINEAR CODE SKETCH GENERATION. First, we consider the idea of generating code sketches using a standard generative model for language. To this end, we simply extend the vocabulary with the special hole token. An obvious problem is that while we have plenty of training data for a standard generative model, we do not have training data for outputs y that contain the hole token. Consequently, we can not train the model in a fully supervised fashion, and instead turn to reinforcement learning. Concretely, we devise a reward function r(⋅) that averages REGEXACC and ROUGE, i.e. for a predicted output sketch ŷ and a ground-truth output y∗ (without hole tokens), we define

r(ŷ, y∗) = (1/2) (REGEXACC(ŷ, y∗) + ROUGE-F1(ERASEHOLES(ŷ), y∗)).   (1)

Using the combination of ROUGE (which does not consider holes) and REGEXACC is crucial here, as ROUGE is much “smoother” compared to REGEXACC, which is 0 for all but very few predictions, allowing us to measure partial improvement. We use our reward function from Eq. 1 to evaluate the quality of the output of the full model and compute a loss. Inspired by Paulus et al. (2017), we use self-critical policy gradient training (Rennie et al., 2017) and for a prediction ŷ we minimise

L(x, y∗) = (r(ŷ, y∗) − r̃(x)) ⋅ Lgen(x, ŷ)   (2)

Here, r̃(x) is the reward achieved by the prediction from the snapshot of the model that achieved the best score so far and Lgen is the loss of the generative model. Intuitively, this objective rewards models that improve upon the previous best policy with respect to r. To model this in practice, we use a standard encoder/decoder Transformer model (Vaswani et al., 2017; Radford et al.
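The reward of Eq. 1 can be sketched as follows (an illustrative stand-in, not the authors' code; we substitute a simple unigram ROUGE-1 F1 for the exact ROUGE variant, and `<hole>` for the hole token):

```python
import re
from collections import Counter

HOLE = "<hole>"  # illustrative sentinel for the hole token

def regexacc(sketch, ref):
    """Exact-match score scaled by the fraction of tokens predicted."""
    pattern = " ".join(".+" if t == HOLE else re.escape(t) for t in sketch)
    matched = re.fullmatch(pattern, " ".join(ref)) is not None
    return float(matched) * sum(t != HOLE for t in sketch) / len(ref)

def rouge1_f1(pred, ref):
    """Unigram-overlap F1 between two token sequences."""
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(ref)
    return 2 * p * r / (p + r)

def reward(sketch, ref):
    """r(y_hat, y*) = (REGEXACC + ROUGE-F1 on the hole-erased sketch) / 2."""
    erased = [t for t in sketch if t != HOLE]  # ERASEHOLES
    return 0.5 * (regexacc(sketch, ref) + rouge1_f1(erased, ref))
```

Because ROUGE scores the hole-erased tokens, a partially correct sketch still earns a smooth, non-zero signal even when REGEXACC would be brittle.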
(2019), “translating” the context x into the output y using separate encoder and decoder models. We additionally also consider the language modelling case, i.e., a model that conditioned on x predicts token y0, conditioned on x, y0 predicts token y1, etc. Pretraining. In practice, we found that directly training a sequence model to maximise Eq. 1 is very slow and does not converge to a useful model. Instead, we heuristically generate a dataset suitable for supervised pretraining. We replace random AST non-terminals of the target output by holes and generate target sequences. These contain terminals and zero or more hole tokens. We then pretrain the model on this dataset to convergence, and then fine-tune it using the reward of Eq. 1. 2.2 GRAMMAR-BASED CODE SKETCH GENERATION. In experiments, we found the simple extended sequence model from above to not perform well; in particular, hole tokens would not replace semantically meaningful subsequences (e.g. “szconv.⟨hole⟩)” does not contain a left parenthesis and requires the user to fill it in).

Algorithm 1: GRAMMFORMER generative process, given an input sequence x(0).
for t = 0, 1, 2, ... do
    i(t) ∼ Ps(i | x(t), N(x(t)))               ▷ sample non-terminal position from N(x(t)) to expand
    if i(t) = ⦸ then                            ▷ if x(t) contains no non-terminals or none was selected by Ps
        break                                   ▷ stop generation
    u(t)⊚i(t) ∼ Pe(u | x(t), i(t))              ▷ sample expansion of the non-terminal at position i(t)
    x(t+1) ← x(t)_{<i(t)} :: u(t)⊚i(t) :: x(t)_{>i(t)}  ▷ create x(t+1) by replacing the non-terminal at i(t) by u(t)⊚i(t)
return NONTERMINALSTOHOLES(x(t))                ▷ convert remaining non-terminals to holes and return

To resolve this, we developed GRAMMFORMER, a grammar-guided model. It generates code by following the structure of the context-free grammar (CFG) defining the programming language syntax, iteratively expanding non-terminal symbols.
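The generative loop of Algorithm 1 can be sketched with toy deterministic stand-ins for Ps and Pe (a greedy selector and a lookup-table expander; in GRAMMFORMER both are learned Transformers):

```python
# Toy sketch of the GRAMMFORMER loop (Algorithm 1). Non-terminals are
# written as "<Name>"; Ps and Pe are deterministic stand-ins here.
HOLE = "<hole>"
STOP = None  # plays the role of the "stop expansion" symbol

def is_nonterminal(tok):
    return tok.startswith("<") and tok != HOLE

def toy_ps(x):
    """Selector: pick the first non-terminal we know how to expand."""
    for i, tok in enumerate(x):
        if is_nonterminal(tok) and tok in RULES:
            return i
    return STOP  # no expandable non-terminal left: stop

def toy_pe(x, i):
    """Expander: look the expansion up in a fixed rule table."""
    return RULES[x[i]]

RULES = {
    "<Stmt>": ["<Var>", "=", "<Expr>"],
    "<Expr>": ["foo", "(", "<Args>", ")"],
    # "<Var>" and "<Args>" are deliberately absent: the toy model is
    # "uncertain" about them, so they will be left as holes.
}

def generate(x):
    while True:
        i = toy_ps(x)
        if i is STOP:
            break
        x = x[:i] + toy_pe(x, i) + x[i + 1:]   # replace non-terminal i
    # NONTERMINALSTOHOLES: remaining non-terminals become holes.
    return [HOLE if is_nonterminal(t) else t for t in x]
```

Starting from ["<Stmt>"], the loop expands ⟨Stmt⟩ and ⟨Expr⟩ but leaves ⟨Var⟩ and ⟨Args⟩ as holes, yielding the sketch ⟨hole⟩ = foo(⟨hole⟩).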
Crucially, it can choose not to expand some non-terminal symbols, which can then be presented as holes to users. In traditional grammar-based generation of text (Cohen et al., 2012) or code (Maddison & Tarlow, 2014; Yin & Neubig, 2017; Allamanis & Sutton, 2014; Bielik et al., 2016), the CFG is followed by sequentially expanding the left-most, bottom-most non-terminal symbol, using one of the production rules of the grammar. GRAMMFORMER changes this and instead selects the non-terminal symbol to expand, if any. An example generation is shown in Fig. 2. Probabilistic Model. A CFG is defined as a tuple (Σ, N, S, R), where Σ is a set of terminal symbols, N is a set of non-terminal symbols, S ∈ N is the root symbol and R is a set of production rules. We denote non-terminals as ⟨NonTerminalName⟩. GRAMMFORMER can be viewed as a sequence-to-sequence model transforming x = x0, x1, ..., xn into a new sequence in which one non-terminal symbol xi has been replaced by a sequence of new symbols, according to a production rule of the grammar. Examples of such sequences and rewrites are shown in Fig. 2. GRAMMFORMER does this rewriting in two steps. First, a non-terminal selector model Ps selects a non-terminal in x to expand, and then the non-terminal expansion model Pe determines how to expand it. To define Ps, let N(x) = {i | xi ∈ N} ∪ {⦸} denote the set of non-terminal positions in x plus a special “stop expansion” symbol ⦸. Conditioned on x, Ps produces a probability distribution over N(x). In turn, Pe is conditioned on x and a position i ∈ N(x) and models a probability distribution over expansion sequences u ∈ (Σ ∪ N)∗. Note that factorising GRAMMFORMER into two models Ps and Pe is an important modelling decision: how best to expand a non-terminal is entirely separated from predicting whether a hole should be introduced. These two concepts are intermixed in standard (sequence) decoders.
In practice, we define both models using neural architectures with partially shared parameters, as discussed below. Alg. 1 shows a high-level description of GRAMMFORMER, in which Ps and Pe are used repeatedly to select and expand non-terminals (not necessarily the left-most one), until none are left or Ps indicates that expansion should stop. Here, NONTERMINALSTOHOLES(⋅) replaces all remaining non-terminal symbols with a hole. Note that GRAMMFORMER is not context-free, taking into account the whole input sequence when expanding a non-terminal. Second, in contrast to many grammar-based methods (Yin & Neubig, 2017; Bielik et al., 2016), any non-terminal can be expanded at each step. Finally, Pe is not directly constrained to follow the production rule set R, but can generate any sequence. In practice, it learns to follow the rules of R from the data, but this flexibility is important for handling string literals and argument tuples of variable length. Neural Model. To implement Ps and Pe, we use a shared encoder module that computes a representation of the input sequence x = x0, ..., xn as vectors e0, ..., en, with ei ∈ R^D, where D is a hyperparameter. Our encoder module is a Transformer (Vaswani et al., 2017), given the impressive results of transformer-based models in NLP and code (Feng et al., 2020). Other architectures (RNNs, 1D-CNNs, Transformer variants) would be suitable, but we leave their study for future work. Ps is implemented similarly to a pointer network on top of this encoder module, i.e.

Ps(i | x) = softmax_{i ∈ N(x)} (f(ei)),

where f is a learnable feed-forward neural network. For our purposes, we define e⦸ as the representation of the special start symbol [CLS] used in our Transformer encoder. The expansion model Pe follows a standard autoregressive decoder formulation, i.e.

Pe(u | x, i) = ∏_{j=1}^{m} Pdec(uj | e0, ..., en, i, u_{<j}).
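The pointer-style selector head can be sketched in numpy (the encoder vectors and the linear scorer standing in for f are random placeholders, not a trained model): the softmax is restricted to the stop symbol plus the non-terminal positions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                 # encoder hidden size (hyperparameter)
n = 5                                 # sequence length
E = rng.standard_normal((n + 1, D))   # row 0 plays the role of e_stop ([CLS])
W = rng.standard_normal(D)            # linear scorer standing in for f

def selector_probs(E, nonterminal_positions):
    """Softmax over the stop symbol and the non-terminal positions only."""
    support = [0] + [p + 1 for p in nonterminal_positions]  # 0 == stop
    scores = E[support] @ W                 # f(e_i) for i in N(x)
    exp = np.exp(scores - scores.max())     # numerically stable softmax
    return support, exp / exp.sum()

support, probs = selector_probs(E, nonterminal_positions=[1, 3])
```

Restricting the support this way means terminal positions can never be "expanded", and predicting the stop symbol is how the model ends generation.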
We implement Pdec as a (causal) relational Transformer decoder, similar to Wang et al. (2019). Relational transformers augment the attention mechanism by incorporating predefined relationships among elements; attention scores are then biased by learnable weights for each relation. In GRAMMFORMER, we only use a single relation, connecting each token to the expanded non-terminal token xi, to help the model focus on the token it needs to generate an expansion for. Objective. Due to the lack of supervised data, we employ reinforcement learning to train GRAMMFORMER. We use our reward function from Eq. 1 to evaluate the quality of the output of the full model. We use self-critical policy gradient training as in Eq. 2 and minimise

L(x, y∗) = (r(ŷ, y∗) − r̃(x)) ⋅ Σ_{t=0}^{T} ( −log Ps(i(t) | x(t)) − I(i(t) ≠ ⦸) ⋅ log Pe((u(t)⊚i(t))∗ | x(t), i(t)) ).   (3)

Here, r̃(x) is the reward achieved by the snapshots of Ps and Pe that achieved the best score so far. The rest of the objective follows the iterations of the loop in Alg. 1, where t is the iteration index, ŷ is the predicted sketch, y∗ is the ground-truth sequence of terminals, and I(⋅) is the indicator function. Pretraining. As in the sequence model, directly training with the RL objective Eq. 3 is computationally intensive due to the sampling requirement. We again use a pretraining strategy. First, we train Pe to expand every non-terminal, independently of the expansion order learned by Ps. To do this, we use the input training examples and follow Alg. 1, but instead of sampling from Ps(⋅), we sample i(t) from a uniform distribution over the non-terminals in x(t), Ñ(x(t)) = {i | xi ∈ N}. This yields sequences of intermediate sketches x(t) for each example. Furthermore, for each x(t), we compute the ground-truth expansion (u(t)⊚i)∗ for all non-terminals i ∈ Ñ(x(t)).
We can then pretrain Pe using the supervised objective

Lpre,e(x(t), {(u(t)⊚i)∗}_{i ∈ Ñ(x(t))}) = (1 / |Ñ(x(t))|) ⋅ Σ_{i ∈ Ñ(x(t))} −log Pe((u(t)⊚i)∗ | x(t), i),

i.e. the negative log-likelihood of the correct expansion for all non-terminals in x(t). This computation is more computationally efficient compared to the one in Eq. 3, since the cost of encoding x(t) is amortised across all potential expansions and no sampling is required. Once Pe is pretrained, we pretrain Ps. For this, we fix the weights of the shared encoder module and optimise only the remaining parameters of Ps through Eq. 3. Once we have pretrained both models, we fine-tune all model weights end-to-end using Eq. 3. Optimisation: Grammar Flattening. Following the formal grammar of a programming language commonly introduces tedious expansions. For example, the Python non-terminal ⟨Call⟩ is always expanded to ⟨Expr⟩(⟨ArgumentList⟩), and the C# non-terminal ⟨NotEqualOp⟩ is always expanded to the terminal !=. We “flatten” the grammar by replacing non-terminals such as ⟨Call⟩ and ⟨NotEqualOp⟩ with all their possible expansions. In Appx. C we provide the list of the flattened non-terminals. Note that if we repeated this process for all non-terminals except the starting symbol S, GRAMMFORMER would degenerate into a standard encoder-decoder model. Beam Search. At test time, we employ a two-step beam search and replace sampling from Ps and Pe with their top-ν outputs, keeping a beam of size k. First, for each x(t) in the beam, we compute Ps and select the top-m non-terminal positions to expand. For each of those m positions, we sample the top-n expansions from Pe using a standard beam search. We compute the likelihood of all k ⋅ n ⋅ m results, and then keep only the top-k. This process (detailed in Appx. E) is similar to a standard beam search but takes into account that two submodels are used.
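Grammar flattening can be illustrated with a tiny rule table (hypothetical rules, not the real Python/C# grammar): a non-terminal with exactly one production carries no information, so it is inlined into the productions that reference it.

```python
def flatten(rules):
    """Inline every non-terminal that has exactly one production.

    rules maps a non-terminal name to a list of productions, where each
    production is a list of terminals and non-terminal names. A single
    inlining pass is shown for brevity; chained single-production
    non-terminals would need repeated passes.
    """
    single = {nt: prods[0] for nt, prods in rules.items() if len(prods) == 1}
    flat = {}
    for nt, prods in rules.items():
        if nt in single:
            continue  # this non-terminal disappears from the grammar
        flat[nt] = [
            [tok for t in prod for tok in single.get(t, [t])]
            for prod in prods
        ]
    return flat

rules = {
    "<Compare>": [["<Expr>", "<NotEqualOp>", "<Expr>"],
                  ["<Expr>", "==", "<Expr>"]],
    "<NotEqualOp>": [["!="]],  # always expands to the same terminal
}
```

Here ⟨NotEqualOp⟩ is removed and its terminal != spliced directly into ⟨Compare⟩, saving the model one trivial expansion step per comparison.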
Computational Cost. GRAMMFORMER's ability to predict sketches comes with additional computational cost compared to standard transformer encoder-decoders: at each iteration of the loop in Alg. 1, x(t) changes, so Ps and Pe must be recomputed. This means that the encoder-decoder runs once on each partial sequence, in contrast to left-to-right causal generation, in which intermediate results can be re-used. Future work may consider selecting more than one element of N(x(t)) to expand at each step, reducing the number of expansion steps, similar to Welleck et al. (2019); Stern et al. (2019).
This work proposes GRAMMFORMER, a transformer model for generating code with "holes" inserted in places where the model is uncertain. GRAMMFORMER is trained on the code completion task for C# and Python. The model generates 10-50% more accurate completions and 37-50% longer sketches.
SP:44bc7dda97cc764219656e8a9a6cc8f60d195c29
Learning to Complete Code with Sketches
1 INTRODUCTION . Recent high-capacity language models ( LM ) have shown that machine learning models are able to generate coherent , realistic text , but it is often hard to guide them towards a specific goal , especially when describing the intent is complex or more costly than manually generating the target output . One such scenario are LMs of source code ( LMC ) . Since Hindle et al . ( 2012 ) increasingly sophisticated LMCs have been built , including transformer-based ones , such as those of Svyatkovskiy et al . ( 2020 ) ; Feng et al . ( 2020 ) ; Chen et al . ( 2021 ) and various similar unpublished models such as TabNine and SourceAI . These models generate full sequences of code tokens left-to-right with any prefix acting as the ( partial ) user intent . While LMs generate realistic-looking outputs , they are known to occasionally “ hallucinate ” ( Puduppully et al. , 2019 ; Malmi et al. , 2019 ; Maynez et al. , 2020 ; Liu et al. , 2021 ) , i.e . generate plausible but incorrect content . This is particularly problematic in generating source code , where small mistakes can lead to erroneous code that is very hard to debug or introduces vulnerabilities ( Pearce et al. , 2021 ) . In this work , we investigate models that can decline to make predictions in places where there is high uncertainty ( e.g. , where the user should choose a name ) , but continue generating around these “ holes ” . For example , in Fig . 1 ( left ) a developer has typed some code and is about to type the next line . A likely completion is to consume more command line arguments , but their name is unclear Code Context : 1 import sys 2 target = sys.argv [ 1 ] 3 I Ground-Truth : ID = sys.argv [ 2 ] Suggested Code Completions : L→ R target = target.replace ( `` \\ '' , `` / '' ) L→ R +⦸ target = L→ R + print ( target ) Copilot ( No suggestion ) GRAMMFORMER = sys.argv [ 2 ] Figure 1 : A sample snippet ( left ; abbreviated from Fig . 12 in Appx . A ) . 
A developer has just typed the code and their cursor ( in blue ) is at line 3 . Code completions provided by a number of models are shown on the right , where L→ R is a standard LMC and GRAMMFORMER is our new model . from the context . A traditional generative model ( e.g . Fig . 1 ; top right ) may choose to provide a completion that exists in the training data , but is not clearly called for here . On the other hand , a model able to explicitly mark where it is uncertain ( Fig . 1 ; bottom right ) makes it clear to a user where further input is required . However , creating such models is not trivial . A simple first attempt may be to use a standard LMC , but output a “ hole token ” whenever the model is uncertain about the next output token . However , continuing after the “ ” then becomes infeasible , as the LMC was not trained on such data . Hence , a suitable training dataset and objective need to devised . As no large datasets with holes exist , we instead choose to use a reinforcement learning approach in which our reward function encourages the model to make “ long ” predictions with as few “ ” tokens as possible , but to avoid making incorrect predictions . We found that standard left-to-right sequence models perform poorly on this task . Hence , we developed GRAMMFORMER , a model that construct suggestions by generating a ( partial ) syntax tree , but which has the option of leaving non-terminals in its output . Contributions ( 1 ) We present GRAMMFORMER , a transformer-based model that generates code based on the programming language grammar and can predict hole tokens rather than output it is uncertain about . ( 2 ) We develop REGEXACC , a metric that evaluates the quality of predictions with holes . ( 3 ) We evaluate GRAMMFORMER on Python and C # code and show that GRAMMFORMER makes longer and more precise statement-level sketch completions compared to baselines . 2 METHOD . 
Our aim is to predict code completions as sketches , a mix of actual tokens and “ holes ” , which are meant to signify that the model is unable to make a useful prediction within the given context and further user input is required . Formally , we consider models that take a context sequence x of tokens as input and have to produce an output sequence y ; intuitively , x is what the user typed so far , and y is the suggestion presented to the user . In our setting , y is a sketch , a mix of tokens from the programming language and the special token signifying a “ hole ” that could be filled by an arbitrary sequence of tokens . For example , t = foo ( ) is a sketch corresponding to assigning the return value of function foo to variable t , but leaves the arguments of the function call undetermined . Metric A good sketch is one that ( a ) can be completed into the correct output and ( b ) is as precise as possible . To measure how successful a method is in doing so , we define a new metric REGEXACC . For ( a ) , we use toRegex ( ŷ ) to turn a predicted code sketch ŷ into a regular expression by replacing all holes with the wildcard matching any non-empty sequence ( “ .+ ” in Perl Compatible Regular Expression syntax ) . If the regex matches the ground truth , matches ( ⋅ , ⋅ ) returns a score of 1 otherwise it returns 0 . To implement ( b ) , we scale this result by the proportion of terminal tokens predicted , by defining nTokens ( ŷ ) as the function that returns the number of non-hole symbols in ŷ . More formally , assume an output sketch ŷ and a ground-truth sequence y∗ , where y∗ does not contain any tokens . REGEXACC is then defined as REGEXACC ( ŷ , y∗ ) ≜ matches ( toRegex ( ŷ ) , y∗ ) ⋅ nTokens ( ŷ ) nTokens ( y∗ ) . Beyond REGEXACC , we also consider ROUGE ( Lin , 2004 ) , since a sketch can be thought as a form of a “ summary ” of the target text . 
For this , we use a helper function ERASEHOLES ( ŷ ) that simply drops all tokens , and then consider ROUGEF1 ( ERASEHOLES ( ŷ ) , y∗ ) . ROUGE is more lenient to errors than REGEXACC and gives partial credit to non-matching but plausible sketches . 2.1 LINEAR CODE SKETCH GENERATION . First , we consider the idea of generating code sketches using a standard generative model for language . To this end , we simply extend the vocabulary with the special “ ” token . An obvious problem is that while we have plenty of training data for a standard generative model , we do not have training data for outputs y that contain the token . Consequently , we can not train the model in a fully supervised fashion , and instead turn to reinforcement learning . Concretely , we devise a reward function r ( ⋅ ) that averages REGEXACC and ROUGE , i.e . for a predicted output sketch ŷ and a ground truth output ( without tokens ) y∗ , we define r ( ŷ , y∗ ) = 1 2 ( REGEXACC ( ŷ , y∗ ) + ROUGEF1 ( ERASEHOLES ( ŷ , y∗ ) ) . ( 1 ) Using the combination of ROUGE ( which does not consider holes ) and REGEXACC is crucial here , as ROUGE is much “ smoother ” compared to REGEXACC , which is 0 for all but very few predictions , allowing us to measure partial improvement . We use our reward function from Eq . 1 to evaluate the quality of the output of the full model and compute a loss . Inspired by Paulus et al . ( 2017 ) we use self-critical policy gradient training ( Rennie et al. , 2017 ) and for a prediction ŷ we minimise L ( x , y∗ ) = ( r ( ŷ , y∗ ) − r̃ ( x ) ) ⋅ Lgen ( x , ŷ ) ( 2 ) Here , r̃ ( x ) is the reward achieved by the prediction from the snapshots of the model that achieved the best score so far and Lgen is the loss of the generative model . Intuitively , this objective rewards models that improve upon the previous best policy with respect to r. To model this in practice , we use a standard encoder/decoder Transformer model Vaswani et al . ( 2017 ) ; Radford et al . 
(2019), "translating" the context x into the output y using separate encoder and decoder models. We additionally consider the language-modelling case, i.e., a model that conditioned on x predicts token y0, conditioned on x, y0 predicts token y1, etc. Pretraining In practice, we found that directly training a sequence model to maximise Eq. 1 is very slow and does not converge to a useful model. Instead, we heuristically generate a dataset suitable for supervised pretraining: we replace random AST non-terminals of the target output by hole tokens, generating target sequences that contain terminals and zero or more holes. We then pretrain the model on this dataset to convergence, and fine-tune it using the reward of Eq. 1. 2.2 GRAMMAR-BASED CODE SKETCH GENERATION. In experiments, we found the simple extended sequence model from above not to perform well; in particular, hole tokens would not replace semantically meaningful subsequences (e.g., "szconv . )" does not contain a left parenthesis and requires the user to fill it in). To resolve this, we developed GRAMMFORMER, a grammar-guided model. It generates code by following the structure of the context-free grammar (CFG) defining the programming language syntax, iteratively expanding non-terminal symbols.
Algorithm 1 GRAMMFORMER generative process, given an input sequence x(0).
for t = 0, 1, 2, ... do
  i(t) ∼ Ps(i | x(t), N(x(t)))  ▷ sample non-terminal position from N(x(t)) to expand
  if i(t) = ⦸ then  ▷ if x(t) does not contain non-terminals or none was selected by Ps
    break  ▷ stop generation
  u(t)⊚i(t) ∼ Pe(u | x(t), i(t))  ▷ sample expansion of non-terminal at position i(t)
  x(t+1) ← x(t)<i(t) :: u(t)⊚i(t) :: x(t)>i(t)  ▷ create x(t+1) by replacing non-terminal at i(t) by u(t)⊚i(t)
return NONTERMINALSTOHOLES(x(t))  ▷ convert remaining non-terminals to holes and return
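The pretraining-data heuristic of Sec. 2.1 (collapsing random AST non-terminals into holes) can be sketched as follows. The nested-list AST encoding (lists are non-terminals, strings are terminals) and the "<HOLE>" token are illustrative assumptions:

```python
import random

def make_sketch(ast, hole_prob=0.3, hole="<HOLE>", rng=random):
    """Linearise a toy AST into a token sequence, replacing each
    non-terminal subtree with a hole token with probability hole_prob."""
    if isinstance(ast, str):        # terminal: keep as-is
        return [ast]
    if rng.random() < hole_prob:    # non-terminal: maybe collapse to a hole
        return [hole]
    out = []
    for child in ast:               # otherwise recurse into children
        out.extend(make_sketch(child, hole_prob, hole, rng))
    return out
```

Sampling many sketches per target with different random seeds yields the supervised pretraining set of terminal sequences containing zero or more holes.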
Crucially, it can choose not to expand some non-terminal symbols, which can then be presented as holes to users. In traditional grammar-based generation of text (Cohen et al., 2012) or code (Maddison & Tarlow, 2014; Yin & Neubig, 2017; Allamanis & Sutton, 2014; Bielik et al., 2016), the CFG is followed by sequentially expanding the left-most, bottom-most non-terminal symbol, using one of the production rules of the grammar. GRAMMFORMER changes this and instead selects the non-terminal symbol to expand, if any. An example generation is shown in Fig. 2. Probabilistic Model A CFG is defined as a tuple (Σ, N, S, R) where Σ is a set of terminal symbols, N is a set of non-terminal symbols, S ∈ N is the root symbol and R is a set of production rules. We denote non-terminals as ⟨NonTerminalName⟩. GRAMMFORMER can be viewed as a sequence-to-sequence model transforming x = x0, x1, ..., xn into a new sequence in which one non-terminal symbol xi has been replaced by a sequence of new symbols, according to a production rule of the grammar. Examples of such sequences and rewrites are shown in Fig. 2. GRAMMFORMER does this rewriting in two steps: first, a non-terminal selector model Ps selects a non-terminal in x to expand, and then the non-terminal expansion model Pe determines how to expand it. To define Ps, let N(x) = {i | xi ∈ N} ∪ {⦸} denote the set of non-terminal positions in x together with a special "stop expansion" symbol ⦸. Conditioned on x, Ps produces a probability distribution over N(x). In turn, Pe is conditioned on x and a position i ∈ N(x) and models a probability distribution over expansion sequences u ∈ (Σ ∪ N)∗. Note that factorising GRAMMFORMER into two models Ps and Pe is an important modelling decision: how best to expand a non-terminal is entirely separated from predicting whether a hole should be introduced. These two concepts are intermixed in standard (sequence) decoders.
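The Ps/Pe factorisation can be illustrated with a minimal version of the generative loop of Alg. 1. The toy grammar, the callback signatures, and the "<HOLE>" marker below are assumptions of this sketch, standing in for the learned models:

```python
STOP = None  # stands in for the special "stop expansion" symbol

def grammformer_generate(x, select, expand, is_nonterminal,
                         hole="<HOLE>", max_steps=100):
    """Alg. 1 sketch: repeatedly pick a non-terminal position with
    `select` (the role of Ps) and rewrite it with `expand` (the role
    of Pe); any leftover non-terminals become holes."""
    for _ in range(max_steps):
        positions = [i for i, s in enumerate(x) if is_nonterminal(s)]
        i = select(x, positions + [STOP])   # Ps may also choose to stop
        if i is STOP:
            break
        u = expand(x, i)                    # expansion sequence for x[i]
        x = x[:i] + u + x[i + 1:]           # replace non-terminal by expansion
    return [hole if is_nonterminal(s) else s for s in x]
```

With a one-rule toy grammar, leaving an unexpandable non-terminal in place produces exactly a sketch with a hole.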
In practice, we define both models using neural architectures with partially shared parameters, as discussed below. Alg. 1 shows a high-level description of GRAMMFORMER, in which Ps and Pe are used repeatedly to select and expand non-terminals (not necessarily the left-most one), until none are left or Ps indicates that expansion should stop. Here, NONTERMINALSTOHOLES(⋅) replaces all remaining non-terminal symbols with a hole. GRAMMFORMER differs from traditional grammar-based generation in several respects. First, it is not context-free, taking into account the whole input sequence when expanding a non-terminal. Second, in contrast to many grammar-based methods (Yin & Neubig, 2017; Bielik et al., 2016), any non-terminal can be expanded at each step. Finally, Pe is not directly constrained to follow the production rule set R, but can generate any sequence. In practice, it learns to follow the rules of R from the data, but this flexibility is important for handling string literals and argument tuples of variable length. Neural Model To implement Ps and Pe, we use a shared encoder module that computes a representation of the input sequence x = x0, ..., xn as vectors e0, ..., en with ei ∈ R^D, where D is a hyperparameter. Our encoder module is a Transformer (Vaswani et al., 2017), given the impressive results of transformer-based models in NLP and code (Feng et al., 2020). Other architectures (RNNs, 1D-CNNs, Transformer variants) would be suitable, but we leave their study for future work. Ps is implemented similarly to a pointer network on top of this encoder module, i.e., Ps(i | x) = softmax_{i∈N(x)} (f(ei)), where f is a learnable feed-forward neural network. For our purposes, we define e⦸ as the representation of the special start symbol [CLS] used in our Transformer encoder. The expansion model Pe follows a standard autoregressive decoder formulation, i.e., Pe(u | x, i) = ∏_{j=1}^{m} Pdec(uj | e0, ..., en, i, u<j).
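The pointer-network-style selector can be sketched as follows, with plain floats standing in for encoder embeddings and a caller-supplied scoring function in place of the learned feed-forward net f; index 0 stands in for the [CLS]/stop representation e⦸:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def select_distribution(embeddings, nonterminal_positions, score_fn):
    """Ps sketch: score the embedding of each candidate (the stop symbol
    at index 0 plus every non-terminal position), then softmax over them."""
    candidates = [0] + nonterminal_positions
    probs = softmax([score_fn(embeddings[i]) for i in candidates])
    return dict(zip(candidates, probs))
```

The returned dictionary maps each candidate position (or the stop index) to its selection probability.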
We implement Pdec as a (causal) relational Transformer decoder, similar to Wang et al. (2019). Relational transformers augment the attention mechanism by incorporating predefined relationships among elements; attention scores are then biased by learnable weights for each relation. In GRAMMFORMER, we use only a single relation, connecting each token to the expanded non-terminal token xi, to help the model focus on the token it needs to generate an expansion for. Objective Due to the lack of supervised data, we employ reinforcement learning to train GRAMMFORMER. We use our reward function from Eq. 1 to evaluate the quality of the output of the full model. We use self-critical policy gradient training as in Eq. 2 and minimise L(x, y∗) = (r(ŷ, y∗) − r̃(x)) ⋅ Σ_{t=0}^{T} (−log Ps(i(t) | x(t)) − I(i(t) ≠ ⦸) ⋅ log Pe((u(t)⊚i(t))∗ | x(t), i(t))). (3) Here, r̃(x) is the reward achieved by the snapshots of Ps and Pe that achieved the best score so far. The rest of the objective follows the iterations of the loop in Alg. 1, where t is the iteration index, ŷ is the predicted sketch, y∗ is the ground-truth sequence of terminals, and I(⋅) is the indicator function. Pretraining As in the sequence model, directly training with the RL objective of Eq. 3 is computationally intensive due to the sampling requirement. We again use a pretraining strategy. First, we train Pe to expand every non-terminal, independently of the expansion order learned by Ps. To do this, we use the input training examples and follow Alg. 1, but instead of sampling from Ps(⋅), we sample i(t) from a uniform distribution over the non-terminals in x(t), Ñ(x(t)) = {i | xi ∈ N}. This yields sequences of intermediate sketches x(t) for each example. Furthermore, for each x(t), we compute the ground-truth expansion (u(t)⊚i)∗ for all non-terminals i ∈ Ñ(x(t)).
We can then pretrain Pe using the supervised objective Lpre,e(x(t), {(u(t)⊚i)∗}_{i∈Ñ(x(t))}) = 1/|Ñ(x(t))| ⋅ Σ_{i∈Ñ(x(t))} −log Pe((u(t)⊚i)∗ | x(t), i), i.e., the negative log-likelihood of the correct expansion for all non-terminals in x(t). This computation is more efficient than the one in Eq. 3, since the cost of encoding x(t) is amortised across all potential expansions and no sampling is required. Once Pe is pretrained, we pretrain Ps: we fix the weights of the shared encoder module and optimise only the remaining parameters of Ps through Eq. 3. Once we have pretrained both models, we fine-tune all model weights end-to-end using Eq. 3. Optimisation: Grammar Flattening Following the formal grammar of a programming language commonly introduces tedious expansions. For example, the Python non-terminal ⟨Call⟩ is always expanded to ⟨Expr⟩(⟨ArgumentList⟩), and the C# non-terminal ⟨NotEqualOp⟩ is always expanded to the terminal !=. We "flatten" the grammar by replacing non-terminals such as ⟨Call⟩ and ⟨NotEqualOp⟩ with all their possible expansions. In Appx. C we provide the list of the flattened non-terminals. Note that if we repeated this process for all non-terminals except the starting symbol S, GRAMMFORMER would degenerate into a standard encoder-decoder model. Beam Search At test time, we employ a two-step beam search, replacing sampling from Ps and Pe with their top-scoring outputs and keeping a beam of size k. First, for each x(t) in the beam, we compute Ps and select the top-m non-terminal positions to expand. For each of those m positions, we sample the top-n expansions from Pe using a standard beam search. We compute the likelihood of all k ⋅ n ⋅ m results, and then keep only the top-k. This process (detailed in Appx. E) is similar to a standard beam search but takes into account that two submodels are used.
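One round of the two-step beam search can be sketched as follows, with stub callbacks standing in for the top-m output of Ps and the top-n beam-searched expansions of Pe; the tuple encodings of beam entries and candidates are assumptions of this sketch:

```python
import heapq

def beam_step(beam, top_positions, top_expansions, m=2, n=2, k=3):
    """One round of the two-step beam search: each beam entry (logp, x)
    proposes top-m positions via Ps and top-n expansions per position via
    Pe; keep the overall top-k partial sequences by log-likelihood."""
    candidates = []
    for logp, x in beam:
        for pos_logp, i in top_positions(x, m):          # (log Ps, position)
            for exp_logp, u in top_expansions(x, i, n):  # (log Pe, expansion)
                y = x[:i] + u + x[i + 1:]                # apply the rewrite
                candidates.append((logp + pos_logp + exp_logp, y))
    return heapq.nlargest(k, candidates, key=lambda c: c[0])
```

Iterating this step until every beam entry has stopped (or hit a step limit) reproduces the k ⋅ n ⋅ m pruning described above.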
Computational Cost GRAMMFORMER's ability to predict sketches comes with additional computational cost compared to standard transformer encoder-decoders: at each iteration of the loop in Alg. 1, x(t) changes and Ps and Pe must be recomputed. This means that the encoder-decoder runs once on each partial sequence, in contrast to left-to-right causal generation, in which intermediate results can be reused. Future work may consider selecting more than one element of N(x(t)) to expand at each step, reducing the number of expansion steps, similar to Welleck et al. (2019); Stern et al. (2019).
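Per example, the self-critical objective of Eqs. 2 and 3 reduces to scaling a summed negative log-likelihood by an advantage term. A minimal sketch, where the per-step (logp_select, logp_expand) pair encoding is an assumption of this illustration (logp_expand is None when the stop symbol was selected):

```python
def self_critical_loss(reward, baseline_reward, steps):
    """Sketch of the objective in Eq. 3: sum the negative log-likelihoods
    of the sampled selections and expansions over the loop iterations of
    Alg. 1, then scale by the advantage over the best-snapshot baseline."""
    nll = 0.0
    for logp_select, logp_expand in steps:
        nll -= logp_select
        if logp_expand is not None:   # indicator term I(i_t != stop)
            nll -= logp_expand
    return (reward - baseline_reward) * nll
```

A positive advantage (the sample beat the baseline) makes minimising the loss increase the sampled actions' likelihood, and vice versa.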
The paper presents a new model for code completion which allows the model to generate completions with "holes" that are inserted in places where the model is uncertain. The idea of generating "holes" to enable skipping over "hard parts" of the prediction is novel and interesting. To realize this idea, the authors present a model that generates partial syntax trees where some of the non-terminals may be left without further expansion. The model is evaluated on C# and Python programs and is shown to outperform existing techniques. The paper also presents a thorough ablation study.
SP:44bc7dda97cc764219656e8a9a6cc8f60d195c29
$G^3$: Representation Learning and Generation for Geometric Graphs
A geometric graph is a graph equipped with geometric information (i.e., node coordinates). A notable example is molecular graphs, where the combinatorial bonding is supplemented with atomic coordinates that determine the three-dimensional structure. This work proposes a generative model for geometric graphs, capitalizing on the complementary information of structure and geometry to learn the underlying distribution. The proposed model, Geometric Graph Generator (G3), orchestrates graph neural networks and point cloud models in a nontrivial manner under an autoencoding framework. Additionally, we augment this framework with a normalizing flow so that one can effectively sample from the otherwise intractable latent space. G3 can be used in computer-aided drug discovery, where seeking novel and optimal molecular structures is critical. As a representation learning approach, the interaction of the graph structure and the geometric point cloud also significantly improves the performance of downstream tasks, such as molecular property prediction. We conduct a comprehensive set of experiments to demonstrate that G3 learns the distribution of given molecules more accurately and helps identify novel molecules with better properties of interest. 1 INTRODUCTION. Geometric deep learning refers to the development of deep learning techniques for data from non-Euclidean domains, such as graphs and manifolds (Bronstein et al., 2017). In recent years, graph neural networks (GNNs) have emerged as a promising family of architectures that model the relational inductive bias ubiquitous to graph-structured data (Battaglia et al., 2018; Zhou et al., 2018). With the rise of deep generative modeling, various generative models have been adapted for graphs and proven effective in learning the underlying distribution implicitly defined by a set of given graphs (Goodfellow et al.
, 2014; Kingma & Welling, 2013; Rezende & Mohamed, 2015; van den Oord et al., 2016; De Cao & Kipf, 2018; You et al., 2018b; Jin et al., 2018; Shi et al., 2020). In this work, we consider a special type of graph, geometric graphs, which are equipped with geometric information in addition to the combinatorial structure (Pach, 2013). In practice, this additional information corresponds to low-dimensional geometric structures and typically appears as node coordinates in R^2 or R^3. The coordinates find broad uses in molecules (Van Aalten et al., 1996), meshes (Alliez et al., 2005), and graph drawing (Frishman & Tal, 2008). This work aims to develop a generative model that captures both the combinatorial and the geometric characteristics exhibited in a given collection of geometric graphs. One driving application of geometric graphs is molecule generation. In pharmacology, drug discovery is the process of identifying new candidate medications in the form of molecules. The process is known to be difficult, costly, and time-consuming (Paul et al., 2010), because of the discrete nature of the search space and its vast size (estimated to contain at least 10^33 molecules) (Polishchuk et al., 2013). Recently, deep learning techniques have been developed to represent molecules as continuous vectors and apply continuous optimization (Jin et al., 2018) or reinforcement learning (You et al., 2018a) to conduct an efficient search. In almost all existing work based on the molecular-graph approach, the graph is treated as a combinatorial object and the three-dimensional geometric information serves as node features only. However, the atomic coordinates encode vital energy information of the molecule and can be instrumental for the inference of structure and state. Therefore, proper modeling of the geometric information is paramount for a more accurate representation of the molecule in downstream tasks.
In this work, we develop an autoencoder for geometric graphs, orchestrating a GNN and a point cloud model in a nontrivial manner. Besides the straightforward use of a GNN (Duvenaud et al., 2015; Kearnes et al., 2016; Gilmer et al., 2017) to encode the combinatorial structure, we treat the nodes with spatial coordinates as low-dimensional point clouds and use a point cloud model to encode the geometry. In contrast to the decoders of conventional graph autoencoders, which rely on graph structure alone, we decode the graph structure by first reconstructing the geometry. Specifically, we use a folding-based technique (Yang et al., 2018; Pang et al., 2021) to map a template of points to the correct geometry, from which the combinatorial structure is inferred. This way, both sources of information are organically fused and processed. Furthermore, to provide generative capability, we augment the autoencoder with a normalizing flow (Kobyzev et al., 2019; Dinh et al., 2016; 2014), so that one can effectively sample from the otherwise intractable latent space. The proposed geometric graph generator, G3, addresses geometry, which is rarely considered in past graph generative models. A naive alternative for incorporating node coordinates is to treat them as part of the node features consumed by a GNN. This approach, however, biases the geometry by the graph structure owing to local neighborhood aggregation; the global geometry may be compromised. For geometric graphs, the structure itself is combinatorial and discrete, while the geometry is continuous. These two sources of information are often complementary and benefit from collaborative processing. Our strategy is to exploit both graph techniques and point cloud techniques to maximally retain information from the two sources.
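The idea of inferring combinatorial structure from reconstructed geometry can be illustrated with a deliberately simple, hypothetical rule: connect point pairs whose Euclidean distance falls under a cutoff (roughly a bond length). The paper's actual structure decoder is learned; the cutoff value here is an arbitrary assumption of this sketch:

```python
import math

def edges_from_geometry(coords, cutoff=1.8):
    """Hypothetical structure inference from decoded coordinates:
    return index pairs (i, j), i < j, closer than `cutoff` apart."""
    edges = []
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                edges.append((i, j))
    return edges
```

Even this crude rule shows why geometry-first decoding is attractive: once coordinates are accurate, plausible connectivity follows almost for free.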
To evaluate the effectiveness of G3, we focus on molecule applications and use popular datasets (QM9 and ChEMBL) that contain atomic coordinates. We demonstrate that G3 outperforms a number of representative graph generative models in generating novel and valid molecules, and that it also significantly outperforms either a GNN or a point model alone for property prediction. Appealingly, G3 decodes molecules substantially faster than many popular models. Furthermore, with the use of Bayesian optimization in concert with a trained model, we can identify better molecules under metrics of interest than existing methods. 2 RELATED WORK. Generative models refer to a class of machine learning models that learn the underlying distribution implicitly defined by a given set of data and sample from it. Noteworthy generative models in the deep learning era include variational autoencoders (VAE) (Kingma & Welling, 2013), generative adversarial networks (GAN) (Goodfellow et al., 2014) and normalizing flows (NF) (Kobyzev et al., 2019; Dinh et al., 2016; 2014). Among them, NF bears a unique advantage in estimating the density (likelihood). For training, NF directly optimizes the likelihood, whereas VAE maximizes a lower bound of it (called the ELBO; Kingma & Welling, 2013) and GAN minimizes the discrepancy between the input and the transformed noise distribution. Under these generative models, a graph can be generated in a sequential or one-shot manner: the former samples nodes and edges incrementally while the latter samples a graph in its entirety. Combining GAN and reinforcement learning, MolGAN (De Cao & Kipf, 2018) and GCPN (You et al., 2018a) generate molecules in a one-shot and sequential fashion, respectively. MolecularRNN (Popova et al.
, 2019) is an autoregressive model based on recurrent neural networks; it performs validity checks during the sequential generation process and rejects invalid structures. JT-VAE (Jin et al., 2018) is a VAE-based tree model that constructs the molecular graph sequentially from sub-components. Popova et al. (2019) point out that JT-VAE may suffer from ambiguity in the conversion of the tree structure to a molecular graph, affecting property optimization. GraphNVP (Madhawa et al., 2019) and GraphAF (Shi et al., 2020) are recently proposed NF-based models that parameterize the mapping from a latent vector to a graph by using a flow; the former employs one-shot sampling while the latter is sequential. Molecules admit representations other than graphs. An alternative is the simplified molecular-input line-entry system (SMILES) (Weininger, 1988), a string notation that universally describes molecular structures. Grammar VAE (GVAE) (Kusner et al., 2017) and syntax-directed VAE (SD-VAE) (Dai et al., 2018) are VAE-based models that process SMILES strings and apply grammatical rules to sequentially reconstruct the strings. Another family of models, motivated by quantum chemistry principles, models molecules as point clouds and learns a physically robust representation to predict, e.g., molecular energies and equilibrium conformations. Schütt et al. (2018) propose SchNet, a network with continuous-filter convolution layers that model interactions between atoms and learn a representation invariant to translation and rotation. Using the same filters, Gebauer et al. (2019) devise Generative-SchNet (G-SchNet), which accurately models the target molecule distribution and generates novel 3D conformations that are relatively stable. In addition to the graph structure, our work treats nodes as a point cloud, a collection of points in R^3.
Point clouds are a prominent subject in computer vision and computer graphics; they are obtained through, e.g., LIDAR scanning of object surfaces (Yang et al., 2018). Deep learning with point clouds faces challenges in defining the convolution operator (Bruna et al., 2013; Bronstein et al., 2017; Schonsheck et al., 2018; Jin et al., 2019). Volumetric CNNs (Wu et al., 2015; Qi et al., 2016; Maturana & Scherer, 2015) apply convolution filters on voxels obtained from discretization of the three-dimensional space. Multiview CNNs (Su et al., 2015; Qi et al., 2016) reduce point clouds to a collection of 2D images and apply gridded convolutions on them. PointNet (Qi et al., 2017a; b) uses point-wise MLPs to featurize individual points, followed by a symmetric pooling that generates a permutation-invariant global description. 3 GEOMETRIC GRAPH GENERATOR. In this section, we present G3. The model is an autoencoder that contains interacting modules to process and reconstruct the geometry (i.e., node coordinates), the structure (i.e., edges and edge types), and additional features (e.g., node features). Such a sophisticated autoencoder allows forming a more accurate representation of the geometric graph and better learning the input distribution. Proper processing of the geometric and combinatorial information is crucial to the success of G3 and downstream tasks. For example, an authentic representation of the node coordinates serves as the basis for decoding both the node and the edge types: the training hinges on an accurate registration between the reconstructed point cloud and the ground truth. The straightforward adaptation of GNNs for encoding the geometric information (as node features) results in poor coordinate reconstruction and worse generation quality overall. Moreover, one could naively reconstruct coordinates and node features with a single decoder.
However, mixing the two modes of information together introduces unnecessary bias through, e.g., differences in scale. Hence, separate but dependent decoders for features and geometry render a more accurate reconstruction.
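The separate-but-dependent decoder design can be sketched as follows. The callable decoder heads are placeholders for learned networks, not the paper's actual architecture; the only structural point shown is that geometry is decoded first and the feature head conditions on it:

```python
def decode_geometric_graph(latent, geometry_head, feature_head, edge_fn):
    """Two-stage decoding sketch: reconstruct coordinates from the latent,
    infer edges from those coordinates, then decode node features
    conditioned on both the latent and the recovered geometry."""
    coords = geometry_head(latent)        # geometry decoder head
    edges = edge_fn(coords)               # structure inferred from geometry
    feats = feature_head(latent, coords)  # feature head sees the geometry
    return coords, edges, feats
```

Keeping the heads separate lets each be trained against its own reconstruction loss (coordinate registration vs. feature/edge likelihood) without one scale dominating the other.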
The paper proposes a generative model for learning over the space of geometric graphs -- those graphs whose nodes are associated with geometric coordinates (point clouds). The point cloud serves as the basis for decoding the graph structure. This makes the entire system efficient. An extensive suite of experiments on chemical benchmarks (QM9 and ChEMBL) shows that the proposed method is competitive against state-of-the-art rivals on a variety of measures and tasks.
SP:d713225fa41061a2ad23a072786c21f066b2a777
$G^3$: Representation Learning and Generation for Geometric Graphs
A geometric graph is a graph equipped with geometric information ( i.e. , node coordinates ) . A notable example is molecular graphs , where the combinatorial bonding is supplement with atomic coordinates that determine the three-dimensional structure . This work proposes a generative model for geometric graphs , capitalizing on the complementary information of structure and geometry to learn the underlying distribution . The proposed model , Geometric Graph Generator ( G3 ) , orchestrates graph neural networks and point cloud models in a nontrivial manner under an autoencoding framework . Additionally , we augment this framework with a normalizing flow so that one can effectively sample from the otherwise intractable latent space . G3 can be used in computer-aided drug discovery , where seeking novel and optimal molecular structures is critical . As a representation learning approach , the interaction of the graph structure and the geometric point cloud also improve significantly the performance of downstream tasks , such as molecular property prediction . We conduct a comprehensive set of experiments to demonstrate that G3 learns more accurately the distribution of given molecules and helps identify novel molecules with better properties of interest . 1 INTRODUCTION . Geometric deep learning refers to the development of deep learning techniques for data from nonEuclidean domains , such as graphs and manifolds ( Bronstein et al. , 2017 ) . In recent years , graph neural networks ( GNN ) emerged to be a promising family of architectures that models the relational inductive bias ubiquitous to graph structured data ( Battaglia et al. , 2018 ; Zhou et al. , 2018 ) . With the rise of deep generative modeling , various generative models have been adapted for graphs and proven to be effective in learning the underlying distribution implicitly defined by a set of given graphs ( Goodfellow et al. 
, 2014 ; Kingma & Welling , 2013 ; Rezende & Mohamed , 2015 ; van den Oord et al. , 2016 ; De Cao & Kipf , 2018 ; You et al. , 2018b ; Jin et al. , 2018 ; Shi et al. , 2020 ) . In this work , we consider a special type of graphs—geometric graphs—which are equipped with geometric information in addition to the combinatorial structure ( Pach , 2013 ) . In practice , this additional information corresponds to low-dimensional geometric structures and typically appears as node coordinates in R2 or R3 . The coordinates find broad uses in molecules ( Van Aalten et al. , 1996 ) , meshes ( Alliez et al. , 2005 ) , and graph drawing ( Frishman & Tal , 2008 ) . This work aims to develop a generative model that captures both the combinatorial and the geometric characteristics exhibited in a given collection of geometric graphs . One driving application of geometric graphs is molecule generation . In pharmacology , drug discovery is the process of identifying new candidate medications in the form of molecules . The process is known to be difficult , costly , and time-consuming ( Paul et al. , 2010 ) , because of the discrete nature of the search space and its vast size ( estimated to contain at least 1033 molecules ) ( Polishchuk et al. , 2013 ) . Recently , deep learning techniques have been developed to represent molecules as continuous vectors and apply continuous optimization ( Jin et al. , 2018 ) or reinforcement learning ( You et al. , 2018a ) to conduct an efficient search . In almost all existing work based on the molecular-graph approach , the graph is treated as a combinatorial object and the three-dimensional geometric information serves as node features only . However , the atomic coordinates encode vital energy information of the molecule and they can be instrumental for the inference of structure and state . Therefore , a proper modeling of the geometric information is paramount for a more accurate representation of the molecule in downstream tasks . 
In this work , we develop an autoencoder for geometric graphs , orchestrating a GNN and a point cloud model in a nontrivial manner . Besides the straightforward use of a GNN ( Duvenaud et al. , 2015 ; Kearnes et al. , 2016 ; Gilmer et al. , 2017 ) to encode the combinatorial structure , we treat the nodes with spatial coordinates as low-dimensional point clouds and use a point cloud model to encode the geometry . Different from decoders merely based on graph structures in conventional graph autoencoders , on the other hand , we decode the graph structure by first reconstructing the geometry . Specifically , we use a folding-based technique ( Yang et al. , 2018 ; Pang et al. , 2021 ) to map a template of points to the correct geometry , from which the combinatorial structure is inferred . This way , both sources of information are organically fused and processed . Furthermore , to fulfill the generative capability , we augment the autoencoder with a normalizing flow ( Kobyzev et al. , 2019 ; Dinh et al. , 2016 ; 2014 ) , so that one is able to effectively sample from the otherwise intractable latent space . The proposed geometric graph generator , G3 , addresses geometry that is rarely considered in past graph generative models . A naive alternative to incorporate node coordinates is to treat them as part of features when consumed by a GNN . This approach , however , biases the geometry by the graph structure owing to local neighborhood aggregation—the global geometry may be compromised . For geometric graphs , the structure itself is combinatorial and discrete , while the geometry is continuous . These two sources of information are often complementary and benefit from collaborative processing . Our strategy is to exploit both graph techniques and point cloud techniques to maximally retain information from the two sources . 
To evaluate the effectiveness of G3 , we focus on molecule applications and use popular datasets ( QM9 and ChEMBL ) that contain atomic coordinates . We demonstrate that G3 outperforms a number of representative graph generative models in generating novel and valid molecules and it also significantly outperforms either a GNN or a point model alone for property prediction . Appealingly , G3 decodes molecules substantially faster than do many popular models . Furthermore , with the use of Bayesian optimization in concert with a trained model , we can identify better molecules under metrics of interest than do existing methods . 2 RELATED WORK . Generative models refer to a class of machine learning models that can learn the underlying distribution , implicitly defined by a given set of data , and to sample from it . Noteworthy generative models in the deep learning era include variational autoencoders ( VAE ) ( Kingma & Welling , 2013 ) , generative adversarial networks ( GAN ) ( Goodfellow et al. , 2014 ) and normalizing flows ( NF ) ( Kobyzev et al. , 2019 ; Dinh et al. , 2016 ; 2014 ) . Among them , NF bears a unique advantage in estimating the density ( likelihood ) . For training , NF directly optimizes the likelihood , whereas VAE maximizes a lower bound of it ( called the ELBO Kingma & Welling ( 2013 ) ) and GAN minimizes the discrepancy between the input and the transformed noise distribution . Under these generative models , a graph can be generated in a sequential or one-shot manner : the former samples nodes and edges incrementally while the latter samples a graph in its entirety . Combining GAN and reinforcement learning , MolGAN ( De Cao & Kipf , 2018 ) and GCPN ( You et al. , 2018a ) generate molecules in a one-shot and sequential fashion , respectively . MolecularRNN ( Popova et al. 
, 2019 ) is an autoregressive model based on recurrent neural networks ; it performs validity checks during the sequential generation process and rejects invalid structures . JT-VAE ( Jin et al. , 2018 ) is a VAE-based tree model that constructs the molecular graph sequentially from sub-components . Popova et al . ( 2019 ) point out that JT-VAE may suffer ambiguity in the conversion of the tree structure to a molecular graph , affecting property optimization . GraphNVP ( Madhawa et al. , 2019 ) and GraphAF ( Shi et al. , 2020 ) are recently proposed NF-based models that parameterize the mapping from a latent vector to a graph by using a flow . The former employs one-shot sampling while the latter is sequential . Molecules admit representations other than graphs . An alternative is the simplified molecular-input line-entry system ( SMILES ) ( Weininger , 1988 ) , a string notation that universally describes molecular structures . Grammar VAE ( GVAE ) ( Kusner et al. , 2017 ) and syntax-directed VAE ( SD-VAE ) ( Dai et al. , 2018 ) are VAE-based models that process SMILES strings and apply grammatical rules to sequentially reconstruct the strings . Another family of models , which are motivated by quantum chemistry principles , models molecules as point clouds and learns a physically robust representation to predict , e.g. , molecular energies and equilibrium conformations . Schütt et al . ( 2018 ) propose SchNet , a network with continuous-filter convolution layers that model interactions between atoms and learn a representation that is invariant to translation and rotation . Using the same filters , Gebauer et al . ( 2019 ) devise Generative-SchNet ( G-SchNet ) , which accurately models the target molecule distribution and generates novel 3D conformations that are relatively stable . In addition to the graph structure , our work treats nodes as a point cloud , which is a collection of points in R3 . 
Point clouds are a prominent subject in computer vision and computer graphics ; they are obtained through , e.g. , LIDAR scanning of object surfaces ( Yang et al. , 2018 ) . Deep learning with point clouds is faced with challenges in defining the convolution operator ( Bruna et al. , 2013 ; Bronstein et al. , 2017 ; Schonsheck et al. , 2018 ; Jin et al. , 2019 ) . Volumetric CNN ( Wu et al. , 2015 ; Qi et al. , 2016 ; Maturana & Scherer , 2015 ) applies convolution filters on voxels obtained from discretization of the three-dimensional space . Multiview CNN ( Su et al. , 2015 ; Qi et al. , 2016 ) reduces point clouds to a collection of 2D images and applies gridded convolutions on them . PointNet ( Qi et al. , 2017a ; b ) uses point-wise MLPs to featurize individual points , followed by a symmetric pooling that generates a permutation-invariant global description . 3 GEOMETRIC GRAPH GENERATOR . In this section , we present G3 . The model is an autoencoder that contains interacting modules to process and reconstruct the geometry ( i.e. , node coordinates ) , the structure ( i.e. , edges and edge types ) , and additional features ( e.g. , node features ) . Such a sophisticated autoencoder allows forming a more accurate representation of the geometric graph and learning better the input distribution . A proper processing of the geometric and combinatorial information is crucial to the success of G3 and downstream tasks . For example , an authentic representation of the node coordinates serves as the basis for decoding both the node and the edge types : the training hinges on an accurate registration between the reconstructed point cloud and the ground truth . The straightforward adaptation of GNNs for encoding the geometric information ( as node features ) results in poor coordinate reconstruction and worse generation quality overall . Moreover , one can naively reconstruct coordinates and node features with a single decoder . 
However , mixing two modes of information together introduces unnecessary bias through , e.g. , the difference in scales . Hence , separate but dependent decoders for features and geometry render a more accurate reconstruction .
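The PointNet-style featurization referenced above (a point-wise MLP followed by symmetric pooling) is the standard way to obtain an order-independent description of a point cloud. The minimal sketch below uses random, untrained weights and is not G3's actual encoder; it only checks the key property that the pooled descriptor is invariant to permutations of the points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared point-wise MLP weights, applied to every point independently.
W1, b1 = rng.normal(size=(3, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 32)), rng.normal(size=32)

def encode(points):
    """points: (n, 3) array of coordinates -> (32,) global descriptor.

    A point-wise MLP featurizes each point; symmetric max pooling then
    makes the result invariant to the ordering of the points.
    """
    h = np.maximum(points @ W1 + b1, 0.0)   # ReLU
    h = np.maximum(h @ W2 + b2, 0.0)
    return h.max(axis=0)                     # symmetric pooling

cloud = rng.normal(size=(10, 3))
perm = rng.permutation(10)
assert np.allclose(encode(cloud), encode(cloud[perm]))
```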
This paper studies the problem of geometric graph generation, focusing mainly on molecular graph generation. Specifically, the authors propose a new method, the Geometric Graph Generator (G3), which generates three-dimensional geometric graphs. Unlike prior models, G3 can capture both the combinatorial and the geometric information of graphs. The proposed algorithm successfully generates molecular graphs and runs faster than the baseline methods. The key parts of G3 are the point encoder and graph encoder, and the three-step decoding process: the coordinate decoder, the feature decoder, and finally the graph decoder. The experimental results show that G3 outperforms other generators in terms of validity, uniqueness, and novelty.
SP:d713225fa41061a2ad23a072786c21f066b2a777
Learnability of convolutional neural networks for infinite dimensional input via mixed and anisotropic smoothness
1 INTRODUCTION . Deep learning has shown high performance in several tasks such as image recognition, speech recognition, and natural language processing. In particular, convolutional neural networks (CNNs) and dilated CNNs have been quite effective in tasks involving high-dimensional data (van den Oord et al., 2016; He et al., 2016; Simonyan & Zisserman, 2015; Yoon, 2014). However, many aspects of their theoretical nature remain unclear, even though related theoretical studies have attracted much attention. Aside from the analysis of CNNs, one of the most fundamental issues in deep learning theory is its function approximation and estimation capabilities. For example, it is well known that any continuous function with compact support can be approximated with arbitrary accuracy by a two-layer fully connected neural network (Cybenko, 1989; Hornik, 1991). Moreover, the ability of deep learning to approximate functions in classes such as the Hölder classes has also been extensively analyzed (Mhaskar & Micchelli, 1992; Mhaskar, 1993; Chui et al., 1994; Mhaskar, 1996; Pinkus, 1999; Yarotsky, 2017; Petersen & Voigtlaender, 2017). In addition to the approximation ability, the ability of deep learning to estimate a function from a finite sample has also been extensively studied. For example, Schmidt-Hieber (2020) derived an estimation error bound for deep learning with ReLU activation (Nair & Hinton, 2010; Glorot et al., 2011) for estimating functions in the Hölder space and showed that the rate of convergence achieves the (near) minimax optimal rate. Suzuki (2019) derived approximation and estimation error rates of deep learning with ReLU activation for the Besov spaces, which were also shown to be (near) minimax optimal. Although the derived rates of convergence are near optimal, these studies assumed that the dimensionality of inputs is fixed and much smaller than the sample size.
Indeed, the derived rates suffer from the curse of dimensionality. However, in practice, we often encounter settings where the input dimensionality is larger than the sample size or even infinite. For example, in image recognition and natural language processing, the dimensionality of the inputs (images or texts) is very large, and they can be seen as almost infinite dimensional. To address this issue, some studies considered a setting where the support of the data distribution is low dimensional. Chen et al. (2019b;a) considered a setting where the data can be embedded in a low dimensional sub-manifold and derived an approximation error that depends merely on the dimensionality of the sub-manifold instead of that of the entire space. Nakada & Imaizumi (2020) also considered a similar setting and showed that the estimation error is characterized by the Minkowski dimension of the support of the data distribution. Suzuki (2019) showed that, even if the data cannot be embedded in a low dimensional manifold, anisotropic smoothness of the target function can mitigate the curse of dimensionality. Although these studies revealed that deep learning can avoid the curse of dimensionality by utilizing low dimensional structures of the data and the target functions, it still remains unclear how deep learning performs in very high dimensional settings, including an infinite dimensional setting. See Table 1 for a summarized comparison to existing studies. In terms of infinite dimensional inputs, there have already been several studies on approximation and estimation errors for non-deep-learning methods.
For example, the so-called hyperbolic cross approximation has been considered for approximating a function in a tensor product space with support on [ 0 , 1 ] ∞ (Dũng & Griebel, 2016), and a polynomial-order approximation is possible for functions with mixed smoothness, that is, when specific summability properties of the smoothness indices are fulfilled. Ingster & Stepanova (2011) analyzed a Gaussian white noise model with an infinite dimensional input and showed that the estimation accuracy for signals in infinite dimensional anisotropic Sobolev spaces depends on the reciprocal sum of the smoothness per axis (see also Ingster & Stepanova (2006); Ingster & Suslina (2007); Ingster & Stepanova (2009)). Oliva et al. (2013; 2015) proposed methods to estimate a map whose input and output are functions or distributions, and derived the rate of convergence. Ferraty et al. (2007) analyzed the Nadaraya-Watson estimator when the inputs are functions, derived the convergence rate of the estimator, and gave the asymptotic confidence band in the context of functional data analysis (see Ling & Vieu (2018) for a comprehensive survey of the nonlinear functional data analysis literature). However, these studies do not concern deep learning, and the benefit of deep learning in such situations has not been well characterized in the literature. In this study, we analyze the approximation and estimation accuracy in a setting where the input is infinite dimensional, and derive the corresponding convergence rates. We assume that the true function has mixed or anisotropic smoothness, that is, the function has different smoothness toward different coordinates, similarly to Dũng & Griebel (2016); Ingster & Stepanova (2011).
The intuition behind this setting is as follows: consider a function that takes an image as input. An image can be decomposed into different frequency components, and a function of images is usually less sensitive to the high-frequency components and more dependent on the low-frequency ones, which can be formulated as non-uniform smoothness toward each coordinate direction. By considering such a setting, we can show that the rate of convergence avoids the curse of dimensionality and is of polynomial order. Our contributions can be summarized as follows: 1. We consider a learning problem in which the target function to be approximated or estimated can take an infinite dimensional input and has mixed or anisotropic smoothness. We then show that deep learning with fully connected neural networks can achieve approximation and estimation errors that depend only on the smoothness of the target function and are independent of the dimensionality. 2. We also consider a setting where the smoothness of the target function has a sparse structure, and show that dilated CNNs can find the appropriate variables and improve the rate of convergence. This indicates that CNNs can capture long-range dependence among the input variables. These results show that even when the dimension d of the data is very large compared to the number of observations n, or even when the input is infinite dimensional, it is possible to derive a polynomial-order estimation error bound that depends only on the smoothness of the function class. This analysis partially explains the great success of CNNs in various applications with high dimensional inputs. 2 PROBLEM SETTING AND NOTATIONS . In this section, we prepare the notations and introduce the problem setting. Throughout this paper, we use the following notations. Let R>0 := { s ∈ R : s > 0 } , and for a set D , let D∞ : = { ( s1 , . . . , si , . . .
) : si ∈ D } ( for example , R∞ : = { ( si ) ∞i=1 : si ∈ R ( ∀i = 1 , 2 , . . . ) } ) . For s ∈ R∞ , let supp ( s ) = { i ∈ N : si 6= 0 } . Let N∞0 : = { l ∈ ( N ∪ { 0 } ) ∞ : supp ( l ) < ∞ } and define Z∞0 and R∞0 in the same way . Furthermore , for s ∈ R∞0 , let 2s : = 2 ∑∞ i=1 si . For L ∈ N , let [ L ] = { 1 , . . . , L } . For a ∈ R , let bac be the largest integer less than or equal to a . 2.1 REGRESSION PROBLEM WITH INFINITE DIMENSIONAL PREDICTOR . In this paper , we consider a regression problem where the predictor ( input ) is infinite dimensional . Let λ be the uniform probability measure on ( [ 0 , 1 ] , B ( [ 0 , 1 ] ) ) where B ( [ 0 , 1 ] ) is the Borel σ-field on [ 0 , 1 ] , and let λ∞ be the product measure of λ on ( [ 0 , 1 ] ∞ , B ( [ 0 , 1 ] ∞ ) ) where B ( [ 0 , 1 ] ∞ ) is the product σ-algebra generated by the cylindric sets ∩j≤d { x ∈ [ 0 , 1 ] ∞ : xj ∈ Bj } for d = 1 , 2 , . . . and Bj ∈ B ( [ 0 , 1 ] ) . Let PX be a probability measure defined on the measurable space ( [ 0 , 1 ] ∞ , B ( [ 0 , 1 ] ∞ ) ) that is absolutely continuous to λ∞ and its Radon-Nikodym derivative satisfies ‖ dPXdλ∞ ‖L∞ ( [ 0,1 ] ∞ ) < ∞ 1 . Then , suppose that there exists a true function fo : [ 0 , 1 ] ∞ → R , and consider the following nonparametric regression problem with an infinite dimensional input : Y = fo ( X ) + ξ , ( 1 ) where X is a random variable taking its value on [ 0 , 1 ] ∞ and obeys the distribution PX introduced above , and ξ is a observation noise generated from N ( 0 , σ2 ) ( a normal distribution with mean 0 and variance σ2 > 0 ) . Let P be the joint distribution of X and Y obeying the regression model . What we investigate in the following is ( i ) how efficiently we can approximate the true function fo by a neural network , and ( ii ) how accurately deep learning can estimate the true function fo from n observations Dn = ( Xi , yi ) ni=1 where ( Xi , yi ) n i=1 are i.i.d . observations from the model . 
As a performance measure , we employ the mean squared error ‖f − fo‖2PX : = EP [ ( f ( X ) − f o ( X ) ) 2 ] , which can be seen as the excess risk of the predictive error E ( X , Y ) ∼P [ ( f ( X ) −Y ) 2 ] associated with the squared loss ( i.e. , ‖f − fo‖2PX = E ( X , Y ) ∼P [ ( f ( X ) − Y ) 2 ] − E ( X , Y ) ∼P [ ( fo ( X ) − Y ) 2 ] = E ( X , Y ) ∼P [ ( f ( X ) − Y ) 2 ] − inff : measurable E ( X , Y ) ∼P [ ( f ( X ) − Y ) 2 ] ) . 2.2 MIXED AND ANISOTROPIC SMOOTHNESS ON INFINITE DIMENSIONAL VARIABLES . Here , we introduce a function class in which we suppose the true function fo is included . For a given l ∈ Z∞0 , define ψli : [ 0 , 1 ] → R as ψli ( x ) = √ 2 cos ( 2π|li|x ) ( li < 0 ) , √ 2 sin ( 2π|li|x ) ( li > 0 ) , 1 ( li = 0 ) , for x ∈ [ 0 , 1 ] , and define ψl ( X ) : = ∏∞ i=1 ψli ( xi ) for X = ( xi ) ∞ i=1 ∈ [ 0 , 1 ] ∞ . Let L2 ( [ 0 , 1 ] ∞ ) : = { f : [ 0 , 1 ] ∞ → R : ∫ [ 0,1 ] ∞ f2 ( x ) dλ∞ ( x ) < ∞ } equipped with the inner product 〈f , g〉 : = ∫ [ 0,1 ] ∞ f ( x ) g ( x ) dλ∞ ( x ) for f , g ∈ L2 ( [ 0 , 1 ] ∞ ) . Then , ( ψl ) l∈Z∞0 forms a complete orthonormal system of L 2 ( [ 0 , 1 ] ∞ ) , that is , f ∈ L2 ( [ 0 , 1 ] ∞ ) can be expanded as f ( X ) = ∑ l∈Z∞0 〈f , ψl〉ψl ( X ) ( see Ingster & Stepanova ( 2011 ) for example ) . For s ∈ N∞0 , let δs ( f ) : R∞ → R be δs ( f ) ( · ) = ∑ l∈Z∞0 : b2si−1c≤|li| < 2si 〈f , ψl〉ψl ( · ) , 1This is a rather strong assumption . We can omit this if we take θ = 1 and p = ∞ for Fγp , θ . However , we don ’ t pursue this direction in this study . which can be seen as the frequency component of f of frequency |li| ' 2si toward each coordinate . We also define ‖f‖p : = ( ∫ [ 0,1 ] ∞ |f |pdλ∞ ) 1/p for p ≥ 1 . Then , we define a function space with a general smoothness configuration as follows . Definition 1 ( Function class with γ-smoothness ) . For a given γ : N∞0 → R > 0 which is monotonically non-decreasing with respect to each coordinate . 
For p ≥ 1 and θ ≥ 1, we define the γ-smooth space as F^γ_{p,θ}([0,1]^∞) := { f = Σ_{l∈Z∞0} ⟨f, ψl⟩ ψl : ( Σ_{s∈N∞0} 2^{θγ(s)} ‖δs(f)‖_p^θ )^{1/θ} < ∞ }, equipped with the norm ‖f‖_{F^γ_{p,θ}} := ( Σ_{s∈N∞0} 2^{θγ(s)} ‖δs(f)‖_p^θ )^{1/θ}. In the following, F^γ_{p,θ}([0,1]^∞) is abbreviated to F^γ_{p,θ}, and its unit ball is denoted by U(F^γ_{p,θ}). Recall that δs(f) represents the frequency component associated with the frequencies (2^{si})∞i=1; the norm of the γ-smooth space thus imposes the weight 2^{θγ(s)} on each frequency component associated with s. In that sense, γ(s) controls the weight of each frequency component, and accordingly a function in the space can have different smoothness toward different coordinates. As special cases of γ(s), we investigate the following ones in this paper. Note that a finite dimensional analysis can easily be reduced to a special case of the infinite dimensional analysis (see Appendix A); in that sense, our analysis generalizes existing finite dimensional analyses. Definition 2 (Mixed smoothness and anisotropic smoothness). Given a monotonically non-decreasing sequence a = (ai)∞i=1 ∈ R∞>0, we define the mixed smoothness as γ(s) = ⟨a, s⟩, where ⟨a, s⟩ := Σ∞i=1 ai si, and the anisotropic smoothness as γ(s) = max{ ai si : i ∈ N }. Each component ai of a represents the smoothness of the function with respect to the variable xi. Since (ai)∞i=1 is assumed to be monotonically non-decreasing, a function in the space has higher smoothness toward coordinates xi with higher indices i. In other words, a function f in the space is less sensitive to variables xi with larger indices i.
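The trigonometric system (ψl) introduced above can be checked numerically for orthonormality on [0, 1]. The sketch below is illustrative only: it evaluates the one-dimensional factors ψl on a fine midpoint grid and verifies ⟨ψj, ψk⟩ = δjk by quadrature for a few small indices.

```python
import numpy as np

def psi(l, x):
    """One-dimensional basis function psi_l on [0, 1] from the text."""
    if l < 0:
        return np.sqrt(2) * np.cos(2 * np.pi * abs(l) * x)
    if l > 0:
        return np.sqrt(2) * np.sin(2 * np.pi * abs(l) * x)
    return np.ones_like(x)

# Midpoint-rule quadrature on a fine grid (exact up to floating-point
# error for trigonometric polynomials of these frequencies).
n = 20000
x = (np.arange(n) + 0.5) / n

for j in [-2, -1, 0, 1, 2]:
    for k in [-2, -1, 0, 1, 2]:
        inner = np.mean(psi(j, x) * psi(k, x))
        target = 1.0 if j == k else 0.0
        assert abs(inner - target) < 1e-6
```

The products over coordinates then inherit orthonormality, which is what makes (ψl) a complete orthonormal system of L2([0, 1]∞).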
For example, in computer vision tasks, we may suppose that xi with a large index i corresponds to a higher frequency component of the input image, in which case the function is less sensitive to such high-frequency components and more sensitive to low-frequency “global” information. This can be seen as an infinite dimensional variant of the mixed smooth Besov space (Schmeisser, 1987; Sickel & Ullrich, 2009) and the anisotropic Besov space (Nikol'skii, 1975; Vybiral, 2006; Triebel, 2011) (see Appendix C for detailed discussions). In our theoretical analysis, we will assume that the true target function fo is included in the γ-smooth function space. Assumption 3. The target function satisfies fo ∈ U(F^γ_{p,θ}) with p ≥ 1 and θ ≥ 1, and ‖fo‖∞ ≤ Bf for a fixed constant Bf > 0, where the smoothness γ is either the mixed smoothness or the anisotropic smoothness.
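The two smoothness indices of Definition 2 are easy to compare numerically. The sketch below uses illustrative values for a and s (not taken from the paper) and confirms that, for nonnegative frequency profiles s, the mixed index always dominates the anisotropic one, so the mixed-smooth unit ball is the smaller (smoother) class.

```python
import numpy as np

def gamma_mixed(a, s):
    # mixed smoothness: gamma(s) = <a, s> = sum_i a_i * s_i
    return float(np.dot(a, s))

def gamma_aniso(a, s):
    # anisotropic smoothness: gamma(s) = max_i a_i * s_i
    return float(np.max(a * s))

# Non-decreasing smoothness weights: higher-index coordinates are smoother.
a = np.array([1.0, 2.0, 4.0, 8.0])
s = np.array([3, 1, 0, 0], dtype=float)   # frequency profile with finite support

assert gamma_mixed(a, s) == 5.0           # 1*3 + 2*1
assert gamma_aniso(a, s) == 3.0           # max(3, 2, 0, 0)
# For nonnegative s, the sum of nonnegative terms dominates their maximum.
assert gamma_mixed(a, s) >= gamma_aniso(a, s)
```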
The paper studies non-parametric regression for functions defined on infinite-dimensional input data (such as signals in $\ell^2$), using fully-connected networks or dilated convolutional networks (in the CNN case, convolutional layers are followed by a fully-connected network). The authors consider certain smoothness classes similar to mixed or anisotropic smoothness but extended to infinite-dimensional input data, which requires per-coordinate smoothness orders ($a_i$ in Definition 2) that are non-decreasing or increasing with some rate. For such classes, the authors obtain rates that only depend on $a_1$ for the mixed smoothness case, or $\sum_i a_i^{-1}$ for the anisotropic case, for the ERM under certain classes of FNNs or CNNs. Notably, CNNs avoid the need to selecting specific finite subsets of variables as inputs (needed by FNNs), and dilated CNNs additionally allow some adaptivity to sparsity in the $a_i$, in particular, by avoiding dependence on the specific order of the $a_i$ (the growth condition is on the sorted values instead of the unsorted ones for the non-dilated case with a single convolutional layer).
SP:0edea0200b34d109c964bc9b15e5a4dac5578515
The authors consider approximation and learning by deep neural networks in the setting with an infinite dimensional input space. They provide nice rates for approximating and learning functions with mixed or anisotropic smoothness. The networks studied in the paper include fully connected ReLU networks and those generated by linear convolutional layers followed by a fully connected layer.
SP:0edea0200b34d109c964bc9b15e5a4dac5578515
HyperTransformer: Attention-Based CNN Model Generation from Few Samples
1 INTRODUCTION. In few-shot learning, the conventional machine learning paradigm of fitting a parametric model to training data is taken to a limit of extreme data scarcity, where entire categories are introduced with just one or a few examples. A generic approach to solving this problem uses training data to identify the parameters φ of a learner aφ that, given a small batch of examples for a particular task (called a support set), can solve this task on unseen data (called a query set). One broad family of few-shot image classification methods, frequently referred to as metric-based learning, relies on pretraining an embedding eφ(·) and then using some distance in the embedding space to label query samples based on their closeness to known labeled support samples. These methods have proved effective on numerous benchmarks (see Tian et al. (2020) for a review and references); however, the capabilities of the learner are limited by the capacity of the architecture itself, as these methods try to build a universal embedding function. On the other hand, optimization-based methods such as the seminal MAML algorithm (Finn et al., 2017) can fine-tune the embedding eφ by performing additional SGD updates on all parameters φ of the model producing it. This partially addresses the constraints of metric-based methods by learning a new embedding for each new task. However, in many of these methods, all the knowledge extracted during training on different tasks and describing the learner aφ still has to "fit" into the same number of parameters as the model itself. Such a limitation becomes more severe as the target models get smaller, while the richness of the task set increases. In this paper we propose a new few-shot learning approach that allows us to decouple the complexity of the task space from the complexity of individual tasks. The main idea is to use the transformer model (Vaswani et al.
, 2017) that, given a few-shot task episode, generates an entire inference model by producing all model weights in a single pass. This allows us to encode the intricacies of the available training data inside the transformer model, while still producing specialized tiny models that can solve individual tasks. By reducing the size of the generated model and moving the computational overhead to the transformer-based weight generator, we can lower the cost of inference on new images. This can dramatically reduce the overall computation cost in cases where the tasks change infrequently and hence the weight generator is only used sporadically. We start by observing that the self-attention mechanism is well suited to be an underlying mechanism for a few-shot CNN weight generator. In contrast with earlier CNN-based (Zhao et al., 2020) or BiLSTM-based approaches (Ravi & Larochelle, 2017), the vanilla transformer model (i.e., without attention masking or positional encodings) is invariant to sample permutations and can handle unbalanced datasets with a varying number of samples per category. Furthermore, we demonstrate that a single-layer self-attention model can replicate a simplified gradient-descent-based learning algorithm. Using a transformer-based model to generate the logits layer on top of a conventionally learned embedding, we achieve competitive results on several common few-shot learning benchmarks. By varying transformer parameters, we demonstrate that this high performance can be attributed to the additional capacity of the transformer model, which decouples its complexity from that of the generated CNN. We then extend our method to support unlabeled samples by using a special input token, concatenated to unlabeled samples, to encode unknown labels. In our experiments, we observe that using transformers with two or more layers, we achieve better performance by adding unlabeled data into the support set.
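The claim that a single-layer self-attention model can replicate a simplified gradient-descent step can be illustrated with a toy calculation (a hedged sketch, not the paper's construction): for a linear logits layer W initialized at zero with squared loss over the support set, one gradient step equals a label-weighted sum over support embeddings, which is exactly the kind of aggregation attention computes.

```python
import numpy as np

# Toy check: for L(W) = 1/2 * sum_i ||W e_i - y_i||^2 with W initialized at
# zero, one gradient step  W <- W - lr * dL/dW = lr * sum_i y_i e_i^T
# is a label-weighted sum of support embeddings, i.e. an attention-style
# aggregation over support samples.

rng = np.random.default_rng(0)
n, k, d, lr = 5, 3, 16, 0.1                 # 5-way 3-shot, 16-dim embeddings
E = rng.standard_normal((n * k, d))          # support embeddings e_i
Y = np.eye(n)[np.repeat(np.arange(n), k)]    # one-hot labels y_i, (n*k, n)

# explicit gradient step from W = 0
W0 = np.zeros((n, d))
grad = (W0 @ E.T - Y.T) @ E                  # dL/dW = sum_i (W e_i - y_i) e_i^T
W_gd = W0 - lr * grad

# attention-style generation: weighted sum of outer products y_i e_i^T
W_attn = lr * (Y.T @ E)

assert np.allclose(W_gd, W_attn)
```

This is only the simplest case; the paper's transformer generates weights with far more capacity, but the identity above shows why self-attention is a natural substrate for such a weight generator.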
We explain our results in Section 4.3, where we show that such a transformer with at least two layers can encode a nearest-neighbor-style algorithm that associates unlabeled samples with similarly labeled examples. In essence, by training the weight generator to produce CNN models with the best possible performance on a query set, we teach the transformer to utilize unlabeled samples without having to manually introduce additional optimization objectives. We also show that by generating all layers of the CNN model we can improve both the training and the test accuracies of CNN models below a certain size. The training accuracy can be viewed as a capability of the generated CNN model to adapt to tasks seen at training time, whereas the test accuracy, computed on unseen categories, characterizes the generalization capability of this model adaptation mechanism. We empirically demonstrate the expected increase of the model training and test accuracies with the increase of the layer size and the number of generated layers (see Figure 3). Interestingly, generation of the logits layer alone appears to be "sufficient" above a certain model size threshold. This threshold is expected to depend on the variability and the complexity of the training tasks. We conjecture that this might reflect the fact that the models are sufficiently expressive for the benchmarks we considered, or that additional regularization is needed to prevent overfitting of the meta-learning models. Finally, in addition to being able to decouple the complexity of the task distribution from the complexity of individual tasks, another important advantage of our method is that it allows learning end-to-end without relying on complex nested-gradient optimization or other meta-learning approaches in which the number of unroll steps is large. The paper is structured as follows. In Section 2, we discuss the few-shot learning problem setup and highlight related work.
Section 3 introduces our approach, discusses the motivation for choosing an attention-based model, and shows how our approach can be used to meta-learn semi-supervised learning algorithms. In Section 4, we discuss our experimental results. Finally, in Section 5, we provide concluding remarks. 2 PROBLEM SETUP AND RELATED WORK. 2.1 FEW-SHOT LEARNING. The main goal of a few-shot learning algorithm is to use a set of training tasks Ttrain for finding a learner aφ, parameterized by φ, that given new task domains can train to recognize novel classes using just a few samples per class. The learner aφ can be thought of as a function that maps a task description T = {(x_i, c_i)}_{i=1}^t, containing k labeled input samples (x_i, c_i) from each of n classes, to the weights θ = aφ(T) of a trained model f(x; θ). The parameters φ are meta-optimized to maximize the performance of the model f(x; aφ(TS)), generated using a support set TS, with x drawn from a query set TQ. Each task T = (TS, TQ) is randomly drawn from a space of training tasks Ttrain. Typically, TS and TQ are generated by first randomly choosing several distinct classes from the training set and then sampling examples without replacement from these classes to generate TS and TQ. In a classical "n-way-k-shot" setting, n is the number of classes randomly sampled in each episode, and k is the number of samples for each class in the support set TS. The quality of a particular few-shot learning algorithm is typically evaluated using a separate test space of tasks Ttest. By forming Ttest from novel classes unseen at training time, we can evaluate the generalization of different learners aφ. The best algorithms are expected to capture the structure present in the training set and to perform well on novel concepts.
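The episode-generation procedure just described can be sketched as follows (illustrative Python, not the paper's code; `sample_episode` is a hypothetical helper): pick n distinct classes, then draw support and query examples without replacement from each.

```python
import random

# Sketch of standard "n-way-k-shot" episode sampling: n distinct classes,
# k support and q query samples per class, drawn without replacement.

def sample_episode(data_by_class, n, k, q, seed=None):
    """data_by_class: dict mapping class label -> list of samples.
    Returns (support, query) as lists of (sample, class) pairs."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(data_by_class), n)
    support, query = [], []
    for c in classes:
        picked = rng.sample(data_by_class[c], k + q)  # without replacement
        support += [(x, c) for x in picked[:k]]
        query += [(x, c) for x in picked[k:]]
    return support, query

data = {c: [f"{c}_{i}" for i in range(20)] for c in "abcdefg"}
S, Q = sample_episode(data, n=5, k=1, q=3, seed=0)
assert len(S) == 5 * 1 and len(Q) == 5 * 3
assert not {x for x, _ in S} & {x for x, _ in Q}  # support/query disjoint
```

At meta-training time the learner is fit on S and scored on Q; at evaluation time the same sampler is run over the held-out classes forming Ttest.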
This structure may , for example , include certain properties of the distributions pc ( x ) with c being the class label , or the presence of particular discriminative ( or alternatively invariant ) features in the tasks from Ttrain . 2.2 RELATED WORK . Few-shot learning received a lot of attention from the deep learning community and while there are hundreds of few-shot learning methods , several common themes emerged in the past years . Here we outline several existing approaches , show how they relate to our method and discuss the prior work related to it . Metric-Based Learning . One family of approaches involves mapping input samples into an embedding space and then using some nearest neighbor algorithm that relies on the computation of distances from a query sample embedding to the embedding computed using support samples with known labels . The metric used to compute the distance can either be the same for all tasks , or can be task-dependent . This family of methods includes , for example , such methods as Siamese networks ( Koch et al. , 2015 ) , Matching Networks ( Vinyals et al. , 2016 ) , Prototypical Networks ( Snell et al. , 2017 ) , Relation Networks ( Sung et al. , 2018 ) and TADAM ( Oreshkin et al. , 2018 ) . It has recently been argued ( Tian et al. , 2020 ) that methods based on building a powerful sample representation can frequently outperform numerous other approaches including many optimization-based methods . However , such approaches essentially amount to the “ one-model solves all ” approach and thus require larger models than needed to solve individual tasks . Optimization-Based Learning . An alternative approach that can adapt the embedding to a new task is to incorporate optimization within the learning process . A variety of such methods are based on the approach called Model-Agnostic Meta-Learning , or MAML ( Finn et al. , 2017 ) . 
In MAML, θ = aφ(T) is obtained by initializing a DNN at θ0 = φ and then performing one or more gradient descent updates on a classification loss function L, i.e., computing θ_{k+1} = θ_k − γ · (∂L/∂θ)(T; θ_k). This approach was later refined (Antoniou et al., 2019) and built upon, giving rise to Reptile (Nichol et al., 2018), LEO (Rusu et al., 2019) and others. One limitation of various MAML-inspired methods is that the knowledge about the set of training tasks Ttrain is distilled into parameters φ that have the same dimensionality as the model parameters θ. Therefore, for a very lightweight model f(x; θ), the capacity of the task-adaptation learner aφ is still limited by the size of θ. Methods that use parameterized preconditioners that otherwise do not impact the model f(x; θ) can alleviate this issue, but as with MAML, such methods can be difficult to train (Antoniou et al., 2019). Weight Modulation and Generation. The idea of using a task specification to directly generate or modulate model weights has been previously explored in the generalized supervised learning context (Ratzlaff & Li, 2019) and in specific language models (Mahabadi et al., 2021; Tay et al., 2021; Ye & Ren, 2021). Some few-shot learning methods described above also employ this approach and use task-specific generation or modulation of the weights of the final classification model. For example, in LGM-Net (Li et al., 2019b) the matching-network approach is used to generate a few layers on top of a task-agnostic embedding. Another approach, abbreviated as LEO (Rusu et al., 2019), utilized a similar weight generation method to generate initial model weights from the training dataset in a few-shot learning setting, much like what is proposed in this article. However, in (Rusu et al., 2019), the generated weights were also refined using several SGD steps, similar to how it is done in MAML.
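The MAML-style inner update θ_{k+1} = θ_k − γ · ∂L/∂θ quoted above can be sketched on a toy task (illustrative, not the paper's code): we use a quadratic loss L(θ) = ½‖θ − t‖² with a task-specific target t, so each step contracts the distance to the task optimum.

```python
import numpy as np

# Toy MAML inner loop: start from the shared initialization theta_0 = phi
# and take a few gradient steps on the task loss L(theta) = 1/2*||theta-t||^2,
# whose gradient is simply (theta - t).

def inner_loop(phi, t, lr=0.5, steps=3):
    theta = phi.copy()
    for _ in range(steps):
        grad = theta - t           # dL/dtheta for the toy quadratic loss
        theta = theta - lr * grad  # theta_{k+1} = theta_k - lr * grad
    return theta

phi = np.zeros(4)                   # meta-learned initialization
t = np.array([1.0, -2.0, 0.5, 3.0]) # task-specific optimum
theta = inner_loop(phi, t)

# each step shrinks the error by a factor (1 - lr)
assert np.allclose(theta - t, (1 - 0.5) ** 3 * (phi - t))
```

In actual MAML the outer loop then backpropagates the query-set loss of f(x; θ) through these inner steps to update φ, which is the nested-gradient computation the HyperTransformer approach avoids.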
Here we explore a similar idea but, largely inspired by the HYPERNETWORK approach (Ha et al., 2017), we instead propose to directly generate an entire task-specific CNN model. Unlike LEO, we do not rely on pre-computed embeddings for images and generate the model in a single step without additional SGD steps, which simplifies and stabilizes training. Transformers in Computer Vision and Few-Shot Learning. Transformer models (Vaswani et al., 2017), originally proposed for natural language understanding applications, have since become a useful tool in practically every subfield of deep learning. In computer vision, transformers have recently seen an explosion of applications, ranging from state-of-the-art image classification results (Dosovitskiy et al., 2021; Touvron et al., 2021) to object detection (Carion et al., 2020; Zhu et al., 2021), segmentation (Ye et al., 2019), image super-resolution (Yang et al., 2020), image generation (Chen et al., 2021) and many others. There are also several notable applications in few-shot image classification. For example, in Liu et al. (2021), the transformer model was used for generating universal representations in the multi-domain few-shot learning scenario. Closely related to our approach, in Ye et al. (2020), the authors proposed to accomplish embedding adaptation with the help of transformer models. Unlike our method, which generates an entire end-to-end image classification model, this approach applied a task-dependent perturbation to an embedding generated by an independent task-agnostic feature extractor. In (Gidaris & Komodakis, 2018), a simplified attention-based model was used for the final-layer generation.
This work presents the use of a hybrid CNN-Transformer model for few-shot image classification. Specifically, the paper applies encoding and encoding-decoding transformers between convolution layers to increase the learning capacity. The paper also includes a Conv-based hybrid model and uses Omniglot, miniImageNet, and tieredImageNet for evaluations.
SP:bba5c62e1900284fdf028d2ece640157f7cb4c92
The paper proposes a method for solving few-shot image classification that generates all the weights of a very small CNN model. This has the advantage that the generated model can be very small+compact, compared against for example some embedding methods that might require large image classification networks to run as a pre-process. The method is also able to handle unlabeled samples in a fairly natural way.
SP:bba5c62e1900284fdf028d2ece640157f7cb4c92
Language-driven Semantic Segmentation
1 INTRODUCTION . Semantic segmentation is a core problem in computer vision , with the aim of partitioning an image into coherent regions with their respective semantic class labels . Most existing methods for semantic segmentation assume a limited set of semantic class labels that can potentially be assigned to a pixel . The number of class labels is dictated by the training dataset and typically ranges from tens ( Everingham et al. , 2015 ) to hundreds ( Zhou et al. , 2019 ; Mottaghi et al. , 2014 ) of distinct categories . As the English language defines several hundred thousand nouns ( Li et al. , 2020c ) , it is likely that the limited size of the label set severely hinders the potential recognition performance of existing semantic segmentation models . The main reason for the restricted label sets in existing methods is the cost of annotating images to produce sufficient training data . To create training datasets , human annotators must associate every single pixel in thousands of images with a semantic class label – a task that is extremely labor intensive and costly even with small label sets . The complexity of the annotation rises significantly as the number of labels increases since the human annotator has to be aware of the fine-grained candidate labels . Additionally , inter-annotator consistency becomes an issue when objects are present in an image that could fit multiple different descriptions or are subject to a hierarchy of labels . Zero- and few-shot semantic segmentation methods have been proposed as a potential remedy for this problem . Few-shot approaches ( Shaban et al. , 2017 ; Rakelly et al. , 2018 ; Siam et al. , 2019 ; Wang et al. , 2019 ; Zhang et al. , 2019 ; Nguyen & Todorovic , 2019 ; Liu et al. , 2020b ; Wang et al. , 2020 ; Tian et al. , 2020 ; Boudiaf et al. , 2021 ; Min et al. , 2021 ) offer ways to learn to segment novel classes based on only a few labeled images . 
However , these approaches still require labeled data that includes the novel classes in order to facilitate transfer . Zero-shot methods , on the other hand , commonly leverage word embeddings to discover or generate related features between seen and unseen classes ( Bucher et al. , 2019 ; Gu et al. , 2020 ) without the need for additional annotations . Existing works in this space use standard word embeddings ( Mikolov et al. , 2013 ) and focus on the image encoder . In this work , we present a simple approach to leveraging modern language models to increase the flexibility and generality of semantic segmentation models . Our work is inspired by the CLIP model for image classification ( Radford et al. , 2021 ) , which pairs high-capacity image and text encoders to produce robust zero-shot classifiers . We propose to use state-of-the-art text encoders that have been co-trained on visual data , such as CLIP , to embed labels from the training set into an embedding space and to train a visual encoder to produce per-pixel embeddings from an input image that are close to the corresponding label embeddings . Since the text encoder is trained to embed closely related concepts near one another ( for example , “ dog ” is closer to “ pet ” than to “ vehicle ” ) , we can transfer the flexibility of the text encoder to the visual recognition module while only training on the restricted label sets that are provided by existing semantic segmentation datasets . An example is shown in Figure 1 ( top row ) , where the model can successfully label pixels belonging to the class “ pet ” although the training set did not contain this label . Our approach enables the synthesis of zero-shot semantic segmentation models on the fly . That is , a user can arbitrarily expand , shrink , or reorder the label set for any image at test time . We further introduce an output module that can spatially regularize the predictions while maintaining this flexibility . 
We demonstrate several examples of the flexibility of our model in Figure 1. LSeg is able to output different segmentation maps based on the provided label set. For instance, in the last row, output (a) recognizes the chair and identifies all non-chair objects as "other", since these are the only two labels provided to the model. When labels are added, as in (b) and (c), the model is able to successfully segment other objects with the expanded label set. We conduct quantitative evaluation on a variety of zero- and few-shot semantic segmentation tasks. Our approach outperforms existing methods in zero-shot settings and is competitive across multiple few-shot benchmarks. Unlike the state-of-the-art baselines we compare to, our approach does not require additional training samples. Our experiments also show that introducing the text embeddings incurs only a negligible loss in performance when compared to standard fixed-label segmentation methods. 2 RELATED WORK. Generalized semantic segmentation. The majority of existing semantic segmentation models are restricted to a fixed label set that is defined by the labels that are present in the training dataset (Minaee et al., 2021). Few-shot semantic segmentation methods aim to relax the restriction of a fixed label set when one or a few annotated examples of novel classes are available at test time. These approaches learn to find reliable visual correspondences between a query image that is to be labeled and labeled support images that may contain novel semantic classes (Shaban et al., 2017; Rakelly et al., 2018; Siam et al., 2019; Wang et al., 2019; Zhang et al., 2019; Nguyen & Todorovic, 2019; Liu et al., 2020b; Wang et al., 2020; Tian et al., 2020; Boudiaf et al., 2021; Min et al., 2021).
While this strategy can significantly enhance the generality of the resulting model , it requires the availability of at least one labeled example image with the target label set , something that is not always practical . Zero-shot semantic segmentation approaches aim to segment unseen objects without any additional samples of novel classes . Text embeddings of class labels play a central role in these works . Bucher et al . ( 2019 ) and Gu et al . ( 2020 ) propose to leverage word embeddings together with a generative model to generate visual features of unseen categories , while Xian et al . ( 2019 ) propose to project visual features into a simple word embedding space and to correlate the resulting embeddings to assign a label to a pixel . Hu et al . ( 2020 ) propose to use uncertainty-aware learning to better handle noisy labels of seen classes , while Li et al . ( 2020b ) introduce a structured learning approach to better exploit the relations between seen and unseen categories . While all of these leverage text embeddings , our paper is , to the best of our knowledge , the first to show that it is possible to synthesize zero-shot semantic segmentation models that perform on par with fixed-label and few-shot semantic segmentation methods . A variety of solutions have been proposed ( Zhang et al. , 2020b ; Liu et al. , 2020a ; Perera et al. , 2020 ; Zhou et al. , 2021 ) for open-set recognition ( Scheirer et al. , 2012 ; Geng et al. , 2020 ) . These aim to provide a binary decision about whether or not a given sample falls outside the training distribution , but do not aim to predict the labels of entirely new classes . Finally , a different line of work explores cross-domain adaptation methods for semantic segmentation by using feature alignment , self-training , and information propagation strategies ( Yang et al. , 2021 ; Wang et al. , 2021 ) . 
The target of these works is to enhance the transferability of models to novel visual domains , but they do not address the issue of a restricted label set . As such they are orthogonal to our work . Language-driven recognition . Language-driven recognition is an active area of research . Common tasks in this space include visual question answering ( Antol et al. , 2015 ) , image captioning ( Vinyals et al. , 2014 ) , and image-text retrieval ( Li et al. , 2020a ) . CLIP ( Radford et al. , 2021 ) demonstrated that classic recognition tasks that are not commonly associated with language can strongly benefit from language assistance . CLIP uses contrastive learning together with high-capacity language models and visual feature encoders to synthesize extremely robust models for zero-shot image classification . Recent works have extended this basic paradigm to perform flexible object detection . ViLD ( Gu et al. , 2021 ) introduces an advanced zero-shot object detection method that leverages CLIP , whereas MDETR ( Kamath et al. , 2021 ) proposes an end-to-end approach that modulates a transformer-based baseline detector with text features that are obtained from a state-of-the-art language model . Like CLIP , these works have shown that the robustness and generality of object detection models can be strongly improved by language assistance . Our work is inspired by these approaches and presents , to the best of our knowledge , the first approach to flexibly synthesize zero-shot semantic segmentation models by leveraging high-capacity language models . 3 LANGUAGE-DRIVEN SEMANTIC SEGMENTATION . Our approach , Language driven Semantic segmentation ( LSeg ) embeds text labels and image pixels into a common space , and assigns the closest label to each pixel . We illustrate the framework in Figure 2 and describe each part in detail below . Text encoder . The text encoder embeds the set of N potential labels into a continuous vector space RC , producing N vectors T1 , . . . 
, Tn ∈ RC as outputs ( blue vectors in Figure 2 ) . Multiple network architectures are possible , and we use the pretrained Contrastive Language–Image Pre-training ( CLIP ) throughout ( Radford et al. , 2021 ) . By design , the set of output vectors is invariant to the ordering of the input labels and allows their number , N , to vary freely . Image encoder . Similar to the text encoder , the image encoder produces an embedding vector for every input pixel ( after downsampling ) . We leverage dense prediction transformers ( DPT ) ( Ranftl et al. , 2021 ) as the underlying architecture . Assume H × W is the input image size and s is a user-defined downsampling factor ( s = 2 in our implementation ) . We define H̃ = Hs , W̃ = W s . The output is a dense embedding I ∈ RH̃×W̃×C ( green tensor in Figure 2 ) . We refer to the embedding of pixel ( i , j ) as Iij . Word-pixel correlation tensor . After the image and the labels are embedded , we correlate them by the inner product , creating a tensor of size H̃ × W̃ ×N ( orange tensor in Figure 2 ) , defined as fijk = Iij · Tk . ( 1 ) We refer to the N -dimensional vector of inner products between the embedding of pixel ( i , j ) and all N words as Fij ∈ RN , where Fij = ( fij1 , fij2 , ... , fijk ) T . During training , we encourage the image encoder to provide pixel embeddings that are close to the text embedding of the corresponding groundtruth class . Specifically , given the text embeddings Tk ∈ RC of N labels and the image embedding Iij ∈ RC of pixel i , j , we aim to maximize the dot product of the entry fijk that corresponds to the ground-truth label k = yij of pixel i , j . We achieve this by defining a pixelwise softmax objective over the whole image : H , W∑ i , j=1 softmaxyij ( Fij t ) , ( 2 ) where t is a user-defined temperature parameter that we set to t = 0.07 ( Wu et al. , 2018 ; Radford et al. , 2021 ) . 
During training , we minimize a per-pixel softmax with cross-entropy loss ( with temperature scaling ) as is standard in semantic segmentation1 . Spatial regularization . Due to memory constraints , the image encoder predicts pixel embeddings at lower resolution than the input image resolution . We use an additional post-processing module that spatially regularizes and upsamples the predictions to the original input resolution . During this process , we have to ensure that all operations stay equivariant with respect to the labels . In other words , there should be no interactions between the input channels , whose order is defined by the order of the words and can thus be arbitrary . We evaluate two functions that fulfill this property : a simple cascade of depthwise convolutions ( Chollet , 2017 ) followed by non-linear activations ( DepthwiseBlock ) , and another block that additionally augments the depthwise convolutions with the result of a max-pooling operation over the set of labels ( BottleneckBlock ) ( Li et al. , 2019 ) . In a final step we use bilinear interpolation to recover predictions at the original resolution . We refer to these functions as “ spatial regularization blocks ” and illustrate them in Figure 3 . 1In practice we implement this using the standard nn.CrossEntropyLoss from Pytorch . Depthwise Conv Activation Depthwise Conv Activation + Max ( a ) DepthwiseBlock ( b ) BottleneckBlock Figure 3 : Illustration of BottleneckBlock and DepthwiseBlock . Training details . We initialize the backbone of the image encoder with the official ImageNet pretrained weights from ViT ( Dosovitskiy et al. , 2021 ) or ResNet ( He et al. , 2016 ) 2 and initialize the decoder of DPT randomly . During training we freeze the text encoder and only update the weights of the image encoder . We provide the full label set that is defined by each training set to the text encoder for each image . 
Our model can be trained on any semantic segmentation dataset and supports flexible mixing of multiple datasets through the text encoder . Existing semantic segmentation models assign a fixed channel in the output to represent the probability of a pixel being the corresponding semantic class . In contrast , our approach can dynamically handle label sets with varying length , content , and order . This property allows synthesizing arbitrary zero-shot semantic segmentation models by simply changing the labels that are fed to the text encoder .
The paper proposes Language-driven Semantic segmentation (LSeg). Essentially, LSeg embeds text labels and image pixels into a common space, and assigns the closest label to each pixel. LSeg is flexible and can dynamically handle arbitrary label sets on the fly, with varying length, content, and order. The paper demonstrates that LSeg achieves performance comparable to state-of-the-art few-shot semantic segmentation networks on FSS-1000 even though it is used in a zero-shot setting. When a fixed label set is used based on the dataset (all labels in the training set), it also matches the accuracy of traditional segmentation algorithms on ADE20K. LSeg uses the text encoder from CLIP ViT-B/32, which is frozen during training, while the weights of the image encoder (a DPT with a ViT-L/16 backbone) are updated to maximize the correlation between the text embedding and the image pixel embedding of the ground-truth class of each pixel. Spatial regularization is applied at the end, which also upsamples the predictions to the original input resolution.
Language-driven Semantic Segmentation
1 INTRODUCTION . Semantic segmentation is a core problem in computer vision , with the aim of partitioning an image into coherent regions with their respective semantic class labels . Most existing methods for semantic segmentation assume a limited set of semantic class labels that can potentially be assigned to a pixel . The number of class labels is dictated by the training dataset and typically ranges from tens ( Everingham et al. , 2015 ) to hundreds ( Zhou et al. , 2019 ; Mottaghi et al. , 2014 ) of distinct categories . As the English language defines several hundred thousand nouns ( Li et al. , 2020c ) , it is likely that the limited size of the label set severely hinders the potential recognition performance of existing semantic segmentation models . The main reason for the restricted label sets in existing methods is the cost of annotating images to produce sufficient training data . To create training datasets , human annotators must associate every single pixel in thousands of images with a semantic class label – a task that is extremely labor intensive and costly even with small label sets . The complexity of the annotation rises significantly as the number of labels increases since the human annotator has to be aware of the fine-grained candidate labels . Additionally , inter-annotator consistency becomes an issue when objects are present in an image that could fit multiple different descriptions or are subject to a hierarchy of labels . Zero- and few-shot semantic segmentation methods have been proposed as a potential remedy for this problem . Few-shot approaches ( Shaban et al. , 2017 ; Rakelly et al. , 2018 ; Siam et al. , 2019 ; Wang et al. , 2019 ; Zhang et al. , 2019 ; Nguyen & Todorovic , 2019 ; Liu et al. , 2020b ; Wang et al. , 2020 ; Tian et al. , 2020 ; Boudiaf et al. , 2021 ; Min et al. , 2021 ) offer ways to learn to segment novel classes based on only a few labeled images . 
However , these approaches still require labeled data that includes the novel classes in order to facilitate transfer . Zero-shot methods , on the other hand , commonly leverage word embeddings to discover or generate related features between seen and unseen classes ( Bucher et al. , 2019 ; Gu et al. , 2020 ) without the need for additional annotations . Existing works in this space use standard word embeddings ( Mikolov et al. , 2013 ) and focus on the image encoder . In this work , we present a simple approach to leveraging modern language models to increase the flexibility and generality of semantic segmentation models . Our work is inspired by the CLIP model for image classification ( Radford et al. , 2021 ) , which pairs high-capacity image and text encoders to produce robust zero-shot classifiers . We propose to use state-of-the-art text encoders that have been co-trained on visual data , such as CLIP , to embed labels from the training set into an embedding space and to train a visual encoder to produce per-pixel embeddings from an input image that are close to the corresponding label embeddings . Since the text encoder is trained to embed closely related concepts near one another ( for example , “ dog ” is closer to “ pet ” than to “ vehicle ” ) , we can transfer the flexibility of the text encoder to the visual recognition module while only training on the restricted label sets that are provided by existing semantic segmentation datasets . An example is shown in Figure 1 ( top row ) , where the model can successfully label pixels belonging to the class “ pet ” although the training set did not contain this label . Our approach enables the synthesis of zero-shot semantic segmentation models on the fly . That is , a user can arbitrarily expand , shrink , or reorder the label set for any image at test time . We further introduce an output module that can spatially regularize the predictions while maintaining this flexibility . 
We demonstrate several examples of the flexibility of our model in Figure 1 . LSeg is able to output different segmentation maps based on the provided label set . For instance , in the last row , output ( a ) recognizes the chair and identifies all non-chair objects as “ other ” since these are the only two labels provided to the model . When labels are added , as in ( b ) and ( c ) , the model is able to successfully segment other objects with the expanded label set . We conduct quantitative evaluation on a variety of zero- and few-shot semantic segmentation tasks . Our approach outperforms existing methods in zero-shot settings and is competitive across multiple few-shot benchmarks . Unlike the state-of-the-art baselines we compare to , our approach does not require additional training samples . Our experiments also show that introducing the text embeddings incurs only a negligible loss in performance when compared to standard fixed-label segmentation methods . 2 RELATED WORK . Generalized semantic segmentation . The majority of existing semantic segmentation models are restricted to a fixed label set that is defined by the labels that are present in the training dataset ( Minaee et al. , 2021 ) . Few-shot semantic segmentation methods aim to relax the restriction of a fixed label set when one or a few annotated examples of novel classes are available at test time . These approaches learn to find reliable visual correspondences between a query image that is to be labeled and labeled support images that may contain novel semantic classes ( Shaban et al. , 2017 ; Rakelly et al. , 2018 ; Siam et al. , 2019 ; Wang et al. , 2019 ; Zhang et al. , 2019 ; Nguyen & Todorovic , 2019 ; Liu et al. , 2020b ; Wang et al. , 2020 ; Tian et al. , 2020 ; Boudiaf et al. , 2021 ; Min et al. , 2021 ) .
While this strategy can significantly enhance the generality of the resulting model , it requires the availability of at least one labeled example image with the target label set , something that is not always practical . Zero-shot semantic segmentation approaches aim to segment unseen objects without any additional samples of novel classes . Text embeddings of class labels play a central role in these works . Bucher et al . ( 2019 ) and Gu et al . ( 2020 ) propose to leverage word embeddings together with a generative model to generate visual features of unseen categories , while Xian et al . ( 2019 ) propose to project visual features into a simple word embedding space and to correlate the resulting embeddings to assign a label to a pixel . Hu et al . ( 2020 ) propose to use uncertainty-aware learning to better handle noisy labels of seen classes , while Li et al . ( 2020b ) introduce a structured learning approach to better exploit the relations between seen and unseen categories . While all of these leverage text embeddings , our paper is , to the best of our knowledge , the first to show that it is possible to synthesize zero-shot semantic segmentation models that perform on par with fixed-label and few-shot semantic segmentation methods . A variety of solutions have been proposed ( Zhang et al. , 2020b ; Liu et al. , 2020a ; Perera et al. , 2020 ; Zhou et al. , 2021 ) for open-set recognition ( Scheirer et al. , 2012 ; Geng et al. , 2020 ) . These aim to provide a binary decision about whether or not a given sample falls outside the training distribution , but do not aim to predict the labels of entirely new classes . Finally , a different line of work explores cross-domain adaptation methods for semantic segmentation by using feature alignment , self-training , and information propagation strategies ( Yang et al. , 2021 ; Wang et al. , 2021 ) . 
The target of these works is to enhance the transferability of models to novel visual domains , but they do not address the issue of a restricted label set . As such they are orthogonal to our work . Language-driven recognition . Language-driven recognition is an active area of research . Common tasks in this space include visual question answering ( Antol et al. , 2015 ) , image captioning ( Vinyals et al. , 2014 ) , and image-text retrieval ( Li et al. , 2020a ) . CLIP ( Radford et al. , 2021 ) demonstrated that classic recognition tasks that are not commonly associated with language can strongly benefit from language assistance . CLIP uses contrastive learning together with high-capacity language models and visual feature encoders to synthesize extremely robust models for zero-shot image classification . Recent works have extended this basic paradigm to perform flexible object detection . ViLD ( Gu et al. , 2021 ) introduces an advanced zero-shot object detection method that leverages CLIP , whereas MDETR ( Kamath et al. , 2021 ) proposes an end-to-end approach that modulates a transformer-based baseline detector with text features that are obtained from a state-of-the-art language model . Like CLIP , these works have shown that the robustness and generality of object detection models can be strongly improved by language assistance . Our work is inspired by these approaches and presents , to the best of our knowledge , the first approach to flexibly synthesize zero-shot semantic segmentation models by leveraging high-capacity language models . 3 LANGUAGE-DRIVEN SEMANTIC SEGMENTATION . Our approach , Language driven Semantic segmentation ( LSeg ) embeds text labels and image pixels into a common space , and assigns the closest label to each pixel . We illustrate the framework in Figure 2 and describe each part in detail below . Text encoder . The text encoder embeds the set of N potential labels into a continuous vector space RC , producing N vectors T1 , . . . 
, TN ∈ RC as outputs ( blue vectors in Figure 2 ) . Multiple network architectures are possible , and we use the pretrained Contrastive Language–Image Pre-training ( CLIP ) text encoder throughout ( Radford et al. , 2021 ) . By design , the set of output vectors is invariant to the ordering of the input labels and allows their number , N , to vary freely . Image encoder . Similar to the text encoder , the image encoder produces an embedding vector for every input pixel ( after downsampling ) . We leverage dense prediction transformers ( DPT ) ( Ranftl et al. , 2021 ) as the underlying architecture . Assume H × W is the input image size and s is a user-defined downsampling factor ( s = 2 in our implementation ) . We define H̃ = H/s , W̃ = W/s . The output is a dense embedding I ∈ RH̃×W̃×C ( green tensor in Figure 2 ) . We refer to the embedding of pixel ( i , j ) as Iij . Word-pixel correlation tensor . After the image and the labels are embedded , we correlate them by the inner product , creating a tensor of size H̃ × W̃ × N ( orange tensor in Figure 2 ) , defined as

fijk = Iij · Tk . ( 1 )

We refer to the N-dimensional vector of inner products between the embedding of pixel ( i , j ) and all N words as Fij ∈ RN , where Fij = ( fij1 , fij2 , ... , fijN )ᵀ . During training , we encourage the image encoder to provide pixel embeddings that are close to the text embedding of the corresponding ground-truth class . Specifically , given the text embeddings Tk ∈ RC of the N labels and the image embedding Iij ∈ RC of pixel ( i , j ) , we aim to maximize the dot product of the entry fijk that corresponds to the ground-truth label k = yij of pixel ( i , j ) . We achieve this by defining a pixelwise softmax objective over the whole image :

∑_{i,j=1}^{H,W} softmax_{yij} ( Fij / t ) , ( 2 )

where t is a user-defined temperature parameter that we set to t = 0.07 ( Wu et al. , 2018 ; Radford et al. , 2021 ) .
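As a concrete illustration of Eqs. (1) and (2), here is a minimal NumPy sketch of the word-pixel correlation tensor and the temperature-scaled pixelwise softmax objective. All sizes and the random embeddings are illustrative stand-ins, not the paper's actual encoders:

```python
import numpy as np

# Illustrative sizes only: a 4x4 embedding map, C = 8 channels, N = 3 labels.
H, W, C, N = 4, 4, 8, 3
rng = np.random.default_rng(0)

I = rng.normal(size=(H, W, C))       # pixel embeddings I_ij (image encoder output)
T = rng.normal(size=(N, C))          # label embeddings T_1..T_N (text encoder output)
y = rng.integers(0, N, size=(H, W))  # ground-truth label map y_ij

# Word-pixel correlation tensor, Eq. (1): f_ijk = I_ij . T_k
F = np.einsum("ijc,kc->ijk", I, T)   # shape (H, W, N)

# Pixelwise softmax with temperature t, Eq. (2).
t = 0.07
logits = F / t
logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
probs = np.exp(logits)
probs /= probs.sum(axis=-1, keepdims=True)

# Cross-entropy form of the objective: negative log-probability of the
# ground-truth label at every pixel.
loss = -np.log(probs[np.arange(H)[:, None], np.arange(W)[None, :], y]).mean()
print(F.shape, loss > 0)
```

Maximizing the softmax entry of the ground-truth label is equivalent to minimizing this per-pixel cross-entropy, which is how the objective is implemented in practice.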
During training , we minimize a per-pixel softmax with cross-entropy loss ( with temperature scaling ) as is standard in semantic segmentation¹ . Spatial regularization . Due to memory constraints , the image encoder predicts pixel embeddings at a lower resolution than the input image . We use an additional post-processing module that spatially regularizes and upsamples the predictions to the original input resolution . During this process , we have to ensure that all operations stay equivariant with respect to the labels . In other words , there should be no interactions between the input channels , whose order is defined by the order of the words and can thus be arbitrary . We evaluate two functions that fulfill this property : a simple cascade of depthwise convolutions ( Chollet , 2017 ) followed by non-linear activations ( DepthwiseBlock ) , and another block that additionally augments the depthwise convolutions with the result of a max-pooling operation over the set of labels ( BottleneckBlock ) ( Li et al. , 2019 ) . In a final step we use bilinear interpolation to recover predictions at the original resolution . We refer to these functions as “ spatial regularization blocks ” and illustrate them in Figure 3 . [ Figure 3 : Illustration of DepthwiseBlock ( a ) and BottleneckBlock ( b ) . ] ¹In practice we implement this using the standard nn.CrossEntropyLoss from PyTorch . Training details . We initialize the backbone of the image encoder with the official ImageNet pretrained weights from ViT ( Dosovitskiy et al. , 2021 ) or ResNet ( He et al. , 2016 ) and initialize the decoder of DPT randomly . During training we freeze the text encoder and only update the weights of the image encoder . We provide the full label set that is defined by each training set to the text encoder for each image .
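To see why depthwise operations are safe here, the following toy sketch checks the equivariance property. A hand-rolled 3×3 box filter stands in for the learned depthwise convolutions (these are not the paper's actual blocks), and the max over labels models the BottleneckBlock's label-pooling branch:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, N = 6, 6, 4
scores = rng.normal(size=(H, W, N))   # per-label score maps (channels = labels)

def depthwise_smooth(x):
    # One depthwise 3x3 box filter + ReLU, applied to each label channel
    # independently -- no interaction across channels.
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1], :]
    return np.maximum(out / 9.0, 0.0)  # ReLU

def with_label_max(x):
    # BottleneckBlock-style augmentation: add a max over the label set,
    # which is invariant to the label order.
    return depthwise_smooth(x) + x.max(axis=-1, keepdims=True)

# Permuting the label order and then applying the block equals applying the
# block and then permuting: both blocks are equivariant w.r.t. the label set.
perm = rng.permutation(N)
assert np.allclose(depthwise_smooth(scores[..., perm]),
                   depthwise_smooth(scores)[..., perm])
assert np.allclose(with_label_max(scores[..., perm]),
                   with_label_max(scores)[..., perm])
print("label-equivariant:", True)
```

A standard convolution mixing channels would fail these assertions, which is why the blocks must act on each label channel independently (up to permutation-invariant pooling).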
Our model can be trained on any semantic segmentation dataset and supports flexible mixing of multiple datasets through the text encoder . Existing semantic segmentation models assign a fixed channel in the output to represent the probability of a pixel being the corresponding semantic class . In contrast , our approach can dynamically handle label sets with varying length , content , and order . This property allows synthesizing arbitrary zero-shot semantic segmentation models by simply changing the labels that are fed to the text encoder .
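The label-set flexibility at inference time can be sketched as follows. The label embeddings here are random stand-ins for CLIP text embeddings, and `segment` is a hypothetical helper, not part of the released model:

```python
import numpy as np

rng = np.random.default_rng(2)
C = 8
I = rng.normal(size=(5, 5, C))  # fixed pixel embeddings from the image encoder

# Hypothetical label embeddings; in LSeg these would come from the CLIP
# text encoder, here they are random stand-ins keyed by label name.
vocab = {name: rng.normal(size=C)
         for name in ["chair", "other", "table", "floor", "pet"]}

def segment(pixel_emb, label_names):
    # Assign each pixel the closest label from an arbitrary, user-chosen set.
    T = np.stack([vocab[n] for n in label_names])    # (N, C), N can vary freely
    scores = np.einsum("ijc,kc->ijk", pixel_emb, T)  # word-pixel correlation
    return np.array(label_names)[scores.argmax(-1)]  # (H, W) label map

seg_small = segment(I, ["chair", "other"])                    # two labels
seg_large = segment(I, ["chair", "other", "table", "floor"])  # expanded set
print(np.unique(seg_small), np.unique(seg_large))
```

The image encoder runs once; only the label embeddings change, so expanding, shrinking, or reordering the label set requires no retraining.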
This paper uses a pre-trained large model (CLIP) to transfer language knowledge to unseen labels for zero-shot semantic segmentation. The idea is reasonable and consistent with previous works that distill knowledge from pre-trained models. The experiments also show good results on several benchmarks in a zero-shot setting. However, the method is not novel and raises several problems.
Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond
1 INTRODUCTION . Distributed learning within the framework of federated learning ( Konečnỳ et al. , 2016 ; McMahan et al. , 2017 ) has witnessed increasing interest recently . A key property of this framework is that models are trained locally using only private data on devices/machines distributed across a network , while parameter updates are aggregated and synchronized at a server.1 Communication is often the key bottleneck for federated learning , which drives the search for algorithms that can train fast while requiring less communication—see Li et al . ( 2020a ) ; Kairouz et al . ( 2021 ) for recent surveys . A basic algorithm for federated learning is local stochastic gradient descent ( SGD ) , also known as federated averaging . The goal is to minimize the global objective that is an average of the local objectives . In local SGD , we have M machines and a server . After each round of communication , each of the M machines locally runs B steps of SGD on its local objective . Every B iterations , the server aggregates the updated local iterates from the machines , averages them , and then synchronizes the machines with the average . Convergence analysis of local SGD and its variants has drawn great interest recently ( Dieuleveut & Patel , 2019 ; Haddadpour et al. , 2019 ; Haddadpour & Mahdavi , 2019 ; Stich , 2019 ; Yu et al. , 2019 ; Li et al. , 2020b ; c ; Koloskova et al. , 2020 ; Khaled et al. , 2020 ; Spiridonoff et al. , 2020 ; Karimireddy et al. , 2020 ; Stich & Karimireddy , 2020 ; Qu et al. , 2020 ) . Of the many , the biggest motivation for our paper comes from the line of work by Woodworth et al . ( 2020a ; b ; 2021 ) . In ( Woodworth et al. , 2020a ; b ) , minibatch SGD is studied as a simple yet powerful baseline for this intermittent communication setting . 
Instead of locally updating the iterates B times , minibatch SGD aggregates B gradients ( evaluated at the last synced iterate ) from each of the M machines , forms a minibatch of size MB , and then updates the shared iterate . Given the same M and B , local SGD and minibatch SGD have the same number of gradient computations per round of communication , so it is worthwhile to understand which converges faster . Woodworth et al . ( 2020a ; b ) point out that many existing analyses of local SGD show an inferior convergence rate compared to minibatch SGD . Through their new upper and lower bounds , they identify regimes where local SGD can be faster than minibatch SGD . While the theory of local and minibatch SGD has seen recent progress , there is still a gap between what is analyzed and what is actually used .
¹A distinctive feature of federated learning is that not all devices necessarily participate in the updates ; however , we focus on the full participation setting in this paper .
Most theoretical results assume independent and unbiased gradient estimates obtained via with-replacement sampling of stochastic gradients ( i.e. , choosing training data indices uniformly at random ) . In contrast , most practitioners use without-replacement sampling , where they shuffle indices randomly and access them sequentially . Convergence analysis of without-replacement methods is challenging because gradients sampled within an epoch lack independence . As a result , the standard theory based on independent gradient estimates does not apply to shuffling-based methods . While shuffling-based methods are believed to be faster in practice ( Bottou , 2009 ) , broad theoretical understanding of such methods remains elusive , except for noteworthy recent progress mainly focusing on the analysis of SGD ( Gürbüzbalaban et al. , 2019 ; Haochen & Sra , 2019 ; Nagaraj et al. , 2019 ; Nguyen et al. , 2020 ; Safran & Shamir , 2020 ; 2021 ; Rajput et al. , 2020 ; 2021 ; Ahn et al .
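The two with-replacement baselines can be contrasted on a toy least-squares problem; the following sketch uses illustrative sizes, objective, and step size that are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, d = 4, 8, 3                       # machines, components per machine, dimension
A = rng.normal(size=(M, N, d))
b = rng.normal(size=(M, N))

def grad(m, i, x):
    # gradient of the toy component f_i^m(x) = 0.5 * (a_i^m . x - b_i^m)^2
    a = A[m, i]
    return (a @ x - b[m, i]) * a

def local_sgd(x0, eta, B, rounds):
    x = x0.copy()
    for _ in range(rounds):
        updated = []
        for m in range(M):               # each machine: B local steps from the
            xm = x.copy()                # last synced iterate (with replacement)
            for _ in range(B):
                xm -= eta * grad(m, rng.integers(N), xm)
            updated.append(xm)
        x = np.mean(updated, axis=0)     # server averages the local iterates
    return x

def minibatch_sgd(x0, eta, B, rounds):
    x = x0.copy()
    for _ in range(rounds):
        # all M*B gradients are evaluated at the same synced iterate
        g = np.mean([grad(m, rng.integers(N), x)
                     for m in range(M) for _ in range(B)], axis=0)
        x -= eta * g
    return x

F = lambda x: 0.5 * np.mean((A.reshape(-1, d) @ x - b.ravel()) ** 2)
x0 = np.zeros(d)
f0, f_loc, f_mb = F(x0), F(local_sgd(x0, 0.05, 4, 100)), F(minibatch_sgd(x0, 0.05, 4, 100))
print(f0, f_loc, f_mb)
```

Both methods spend MB gradient evaluations per communication round; the difference is only where those gradients are evaluated, which is exactly what the comparison in the paper isolates.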
, 2020 ; Mishchenko et al. , 2020 ; 2021 ; Tran et al. , 2021 ) . These results indicate that in the large-epoch regime ( where the number of epochs is greater than some threshold ) , without-replacement SGD converges faster than with-replacement SGD .

1.1 OUR CONTRIBUTIONS .

We analyze convergence rates of without-replacement versions of local and minibatch SGD , where local component functions are reshuffled at every epoch . We call the respective algorithms local RR ( Algorithm 1 ) and minibatch RR ( Algorithm 2 ) , and their with-replacement counterparts local SGD and minibatch SGD . Our key contributions are as follows :
• In Section 3 , we present convergence bounds on minibatch and local RR for L-smooth functions satisfying the µ-Polyak-Łojasiewicz condition ( Theorems 1 & 2 ) . Our theorems give high-probability bounds , a departure from the common in-expectation bounds in the literature . We show that minibatch and local RR converge faster than minibatch and local SGD when the number of epochs is sufficiently large . We also identify a regime where local RR converges as fast as minibatch RR : when synchronization happens frequently enough and local objectives are not too heterogeneous . See also Appendix A for a detailed comparison with existing upper bounds .
• In Section 4 , we prove that the upper bounds obtained in Section 3 are tight , in all factors except L and µ . We present Theorems 3 & 4 and Proposition 5 , which show lower bounds that match the upper bounds up to a factor of L²/µ² . Our lower bound on local RR indicates that if the synchronization interval B is too large , then local RR has no gain from parallel computation .
• In Section 5 , we propose a simple modification called synchronized shuffling that allows us to bypass the lower bounds in Section 4 , at the cost of a slight increase in communication .
By having the server broadcast random permutations to local machines , we show that in near-homogeneous settings , the modified algorithms converge faster than the lower bounds ( Theorems 6 & 7 ) .
• In Appendix C , we present numerical experiments that corroborate our theoretical findings .

2 PROBLEM SETUP .

Notation . For a natural number a ∈ N , let [ a ] := { 1 , 2 , . . . , a } . Let S_a be the set of all permutations of [ a ] . Since our indices start from 1 , we redefine the modulo operation between a ∈ Z and b ∈ N as a mod b := a − ⌊( a − 1 )/b⌋ b , to make a mod b ∈ [ b ] .

Optimization task . Consider M machines , each with its objective F^m ( x ) := (1/N) ∑_{i=1}^N f_i^m ( x ) , for m ∈ [ M ] . The m-th machine has access only to the gradients of its own N local components f_1^m ( x ) , . . . , f_N^m ( x ) . In this setting , we wish to minimize the global objective function , which is the average of the local objectives : F ( x ) := (1/M) ∑_{m=1}^M F^m ( x ) = (1/MN) ∑_{m=1}^M ∑_{i=1}^N f_i^m ( x ) . Further , we assume that each individual component function f_i^m is L-smooth , so that

f_i^m ( y ) ≤ f_i^m ( x ) + 〈 ∇f_i^m ( x ) , y − x 〉 + (L/2) ‖ y − x ‖² , for all x , y ∈ R^d , ( 1 )

and that the global objective F satisfies the µ-Polyak-Łojasiewicz ( PŁ ) condition² :

(1/2) ‖ ∇F ( x ) ‖² ≥ µ ( F ( x ) − F* ) for all x ∈ R^d , where µ > 0 . ( 2 )

Algorithms . Under the above setting , we analyze local RR ( Algorithm 1 ) and minibatch RR ( Algorithm 2 ) and characterize their worst-case convergence rates³ . The algorithms are run over K epochs , i.e. , K passes over the entire set of component functions . At the beginning of epoch k , each machine m shuffles its local component functions { f_i^m }_{i=1}^N using a random permutation σ_k^m ∼ Unif ( S_N ) . In local RR , each machine makes B local RR updates to its iterate by sequentially accessing its shuffled component functions , before the server aggregates the iterates from all the machines and then synchronizes the machines with the average iterate .

²PŁ functions can be thought of as a nonconvex generalization of strongly convex functions .
³In Algorithms 1 and 2 , consider SYNCSHUF as FALSE for now . We will discuss SYNCSHUF in Section 5 .

Algorithm 1 Local RR ( with and without SYNCSHUF )
Input : initialization y_0 , step-size η , # machines M , # components N , # epochs K , sync interval B .
1 : Initialize x_{1,0}^m := y_0 for all m ∈ [ M ] .
2 : for k ∈ [ K ] do
3 : if SYNCSHUF = TRUE then ▷ Local RR with SYNCSHUF
4 : Sample σ ∼ Unif ( S_N ) , π ∼ Unif ( S_M ) .
5 : Set σ_k^m ( i ) := σ ( ( i + (N/M) π ( m ) ) mod N ) for all m ∈ [ M ] , i ∈ [ N ] .
6 : else ▷ Local RR
7 : Sample σ_k^m ∼ Unif ( S_N ) independently and locally , for all m ∈ [ M ] .
8 : end if
9 : for i ∈ [ N ] do
10 : for m ∈ [ M ] do locally
11 : Update x_{k,i}^m := x_{k,i−1}^m − η ∇f_{σ_k^m(i)}^m ( x_{k,i−1}^m ) .
12 : end for
13 : if B divides i then
14 : Aggregate and average y_{k,i/B} := (1/M) ∑_{m=1}^M x_{k,i}^m .
15 : Synchronize x_{k,i}^m := y_{k,i/B} , for all m ∈ [ M ] .
16 : end if
17 : end for
18 : x_{k+1,0}^m := y_{k,N/B} , for all m ∈ [ M ] .
19 : end for
20 : return the last iterate y_{K,N/B} .

Algorithm 2 Minibatch RR ( with and without SYNCSHUF )
Input : initialization x_0 , step-size η , # machines M , # components N , # epochs K , sync interval B .
1 : Initialize x_{1,0} := x_0 .
2 : for k ∈ [ K ] do
3 : if SYNCSHUF = TRUE then ▷ Minibatch RR with SYNCSHUF
4 : Sample σ ∼ Unif ( S_N ) , π ∼ Unif ( S_M ) .
5 : Set σ_k^m ( i ) := σ ( ( i + (N/M) π ( m ) ) mod N ) for all m ∈ [ M ] , i ∈ [ N ] .
6 : else ▷ Minibatch RR
7 : Sample σ_k^m ∼ Unif ( S_N ) independently and locally , for all m ∈ [ M ] .
8 : end if
9 : for i ∈ [ N/B ] do
10 : Update x_{k,i} := x_{k,i−1} − (η/M) ∑_{m=1}^M (1/B) ∑_{j=(i−1)B+1}^{iB} ∇f_{σ_k^m(j)}^m ( x_{k,i−1} ) . ▷ averaging done locally
11 : end for
12 : x_{k+1,0} := x_{k,N/B} .
13 : end for
14 : return the last iterate x_{K,N/B} .
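The two algorithms can be sketched on a toy least-squares objective as follows (a minimal, 0-indexed NumPy sketch with illustrative sizes; the SYNCSHUF branch mirrors the shifted-permutation construction, and the objective and step size are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, d, B = 2, 8, 3, 4                # machines, components, dimension; B divides N
A = rng.normal(size=(M, N, d))
b = rng.normal(size=(M, N))

def grad(m, i, x):
    # gradient of the toy component f_i^m(x) = 0.5 * (a_i^m . x - b_i^m)^2
    a = A[m, i]
    return (a @ x - b[m, i]) * a

def permutations(sync_shuf):
    if sync_shuf:
        # server broadcasts sigma and pi; machine m uses a shifted copy of sigma
        # (0-indexed analogue of the SYNCSHUF construction)
        sigma, pi = rng.permutation(N), rng.permutation(M)
        return [sigma[(np.arange(N) + (N // M) * pi[m]) % N] for m in range(M)]
    return [rng.permutation(N) for _ in range(M)]   # independent local reshuffling

def local_rr(y0, eta, epochs, sync_shuf=False):
    y = y0.copy()
    for _ in range(epochs):
        perms = permutations(sync_shuf)
        x = [y.copy() for _ in range(M)]
        for i in range(N):
            for m in range(M):                      # sequential local updates...
                x[m] -= eta * grad(m, perms[m][i], x[m])
            if (i + 1) % B == 0:                    # ...sync every B steps
                y = np.mean(x, axis=0)
                x = [y.copy() for _ in range(M)]
    return y

def minibatch_rr(x0, eta, epochs, sync_shuf=False):
    x = x0.copy()
    for _ in range(epochs):
        perms = permutations(sync_shuf)
        for i in range(N // B):                     # one update per M*B gradients,
            g = np.mean([grad(m, perms[m][j], x)    # all evaluated at the synced x
                         for m in range(M)
                         for j in range(i * B, (i + 1) * B)], axis=0)
            x -= eta * g
    return x

F = lambda x: 0.5 * np.mean((A.reshape(-1, d) @ x - b.ravel()) ** 2)
x0 = np.zeros(d)
f0, f_loc = F(x0), F(local_rr(x0, 0.05, 30))
f_mb, f_sync = F(minibatch_rr(x0, 0.05, 30)), F(local_rr(x0, 0.05, 30, sync_shuf=True))
print(f0, f_loc, f_mb, f_sync)
```

Note that within an epoch each component index is visited exactly once per machine (without replacement), which is the only difference from the with-replacement counterparts.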
In minibatch RR, instead of making B local updates, each machine collects B gradients evaluated at the last iterate, and the server aggregates them to make an update using these MB gradients. Since these two algorithms use the same amount of communication and local gradients, minibatch RR is a simple yet powerful baseline for local RR.

Below, we collect our assumptions on the algorithm parameters used throughout the paper.

Assumption 1 (Algorithm parameters). We assume M ≥ 1, N ≥ 2, and K ≥ 1. Also, assume that B divides N. We restrict 1 ≤ B ≤ N/2 for minibatch RR because B = N makes the algorithm equal to GD. We also assume 2 ≤ B ≤ N for local RR because B = 1 makes the two algorithms the same. We choose a constant step-size scheme, i.e., η > 0 is kept constant over all updates.

We next state assumptions on intra- and inter-machine deviations used in this paper.⁴

⁴Assumptions 2, 3 & 4 require that they hold over the whole R^d. We discuss ways to avoid this in Appendix D.7.

Assumption 2 (Intra-machine deviation). There exists ν ≥ 0 such that for all m ∈ [M] and i ∈ [N], ‖∇f^m_i(x) − ∇F^m(x)‖ ≤ ν, for all x ∈ R^d.

Assumption 2 requires that the difference between the gradient of each local component function f^m_i(x) and its corresponding local objective function F^m(x) is uniformly bounded. It models the variance of the local components f^m_i within each machine. While the uniform boundedness requirement may look strong, we use this assumption to prove high-probability upper bounds, which are stronger than the common in-expectation bounds. See Appendix A for comparisons with other assumptions, and also Appendix D.7 for ways to avoid uniform boundedness over the entire R^d.

The next two assumptions capture the deviation across different machines, i.e., the degree of heterogeneity, at two different levels of granularity: objective-wise and component-wise.

Assumption 3 (Objective-wise inter-machine deviation).
There exist τ ≥ 0 and ρ ≥ 1 such that (1/M) Σ_{m=1}^M ‖∇F^m(x)‖ ≤ τ + ρ‖∇F(x)‖, for all x ∈ R^d.

Assumption 3 models the heterogeneity by bounding the mean of ‖∇F^m‖ by a constant plus a multiplicative factor times ‖∇F‖. The assumption includes the homogeneous case (i.e., F^1 = · · · = F^M = F) with τ = 0 and ρ = 1. Assumption 3 is weaker than many other heterogeneity assumptions in the literature (e.g., Karimireddy et al. (2020)); see Appendix A for detailed comparisons.

Assumption 3 measures heterogeneity by considering only the local objectives F^m, not the local components f^m_i. We consider a more fine-grained notion of heterogeneity in Assumption 4:

Assumption 4 (Component-wise inter-machine deviation). For all i ∈ [N], let f̄_i := (1/M) Σ_{m=1}^M f^m_i. There exists λ ≥ 0 such that for all m ∈ [M] and i ∈ [N], ‖∇f^m_i(x) − ∇f̄_i(x)‖ ≤ λ, for all x ∈ R^d.

Assumption 4 states that the gradients of the i-th components of the local machines are "close" to each other. The assumption subsumes the component-wise homogeneous setting, i.e., f^1_i = f^2_i = · · · = f^M_i, with λ = 0. In distributed learning, this choice corresponds to the setting where each machine has the same training dataset. Assumption 4 with λ > 0 is also relevant to the case where each device has a slightly perturbed (e.g., by data augmentation techniques) version of a certain dataset. It is straightforward to check that Assumption 4 implies Assumption 3 with τ = λ and ρ = 1.

We conclude this section by defining the function classes we study in this paper.

Definition 1 (Function classes). We consider two classes of global objective functions F, also taking into account their local objectives F^m and local components f^m_i. We assume throughout that f^m_i are differentiable and F is bounded from below.
F_obj(L, µ, ν, τ, ρ) := { F | F is µ-PŁ; f^m_i are L-smooth; F, F^m, f^m_i satisfy Assumptions 2 & 3 },
F_cmp(L, µ, ν, λ) := { F | F is µ-PŁ; f^m_i are L-smooth; F, F^m, f^m_i satisfy Assumptions 2 & 4 }.

Notice that F_obj(L, µ, ν, τ, ρ) ⊃ F_cmp(L, µ, ν, τ) for any ρ ≥ 1. We make the PŁ assumption only on the global objective F, not on the local objectives F^m nor on the local components f^m_i. Using L and µ, we define the condition number κ := L/µ ≥ 1.
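As a quick numerical sanity check (ours, not from the paper), the PŁ inequality (2) holds for a positive-definite quadratic with µ the smallest Hessian eigenvalue, and κ = L/µ is then the usual condition number:

```python
import numpy as np

# F(x) = 0.5 * x^T A x with A positive definite; F* = 0 and grad F(x) = A x.
A = np.diag([1.0, 4.0])   # eigenvalues: mu = 1 (smallest), L = 4 (largest)
mu, L = 1.0, 4.0

def F(x):     return 0.5 * x @ A @ x
def gradF(x): return A @ x

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=2)
    # PL condition (2): 0.5 * ||grad F(x)||^2 >= mu * (F(x) - F*)
    assert 0.5 * gradF(x) @ gradF(x) >= mu * F(x) - 1e-12

kappa = L / mu   # condition number kappa = L / mu >= 1
```

For this example the inequality in fact holds with equality only along the µ-eigenvector, which is the usual source of worst-case instances in lower-bound constructions.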
This paper proposes without-replacement (shuffling-based) variants of two distributed stochastic gradient algorithms. For smooth functions satisfying the PŁ condition, the authors show that the proposed shuffling-based variants converge faster than their with-replacement counterparts when the number of epochs is sufficiently large. Moreover, the authors show via matching lower bounds that their convergence analysis is tight up to condition-number factors.
Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond
1 INTRODUCTION . Distributed learning within the framework of federated learning ( Konečnỳ et al. , 2016 ; McMahan et al. , 2017 ) has witnessed increasing interest recently . A key property of this framework is that models are trained locally using only private data on devices/machines distributed across a network , while parameter updates are aggregated and synchronized at a server.1 Communication is often the key bottleneck for federated learning , which drives the search for algorithms that can train fast while requiring less communication—see Li et al . ( 2020a ) ; Kairouz et al . ( 2021 ) for recent surveys . A basic algorithm for federated learning is local stochastic gradient descent ( SGD ) , also known as federated averaging . The goal is to minimize the global objective that is an average of the local objectives . In local SGD , we have M machines and a server . After each round of communication , each of the M machines locally runs B steps of SGD on its local objective . Every B iterations , the server aggregates the updated local iterates from the machines , averages them , and then synchronizes the machines with the average . Convergence analysis of local SGD and its variants has drawn great interest recently ( Dieuleveut & Patel , 2019 ; Haddadpour et al. , 2019 ; Haddadpour & Mahdavi , 2019 ; Stich , 2019 ; Yu et al. , 2019 ; Li et al. , 2020b ; c ; Koloskova et al. , 2020 ; Khaled et al. , 2020 ; Spiridonoff et al. , 2020 ; Karimireddy et al. , 2020 ; Stich & Karimireddy , 2020 ; Qu et al. , 2020 ) . Of the many , the biggest motivation for our paper comes from the line of work by Woodworth et al . ( 2020a ; b ; 2021 ) . In ( Woodworth et al. , 2020a ; b ) , minibatch SGD is studied as a simple yet powerful baseline for this intermittent communication setting . 
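The round structure just described can be sketched as follows. This is our toy NumPy sketch (function names and the quadratic test losses are illustrative) of one communication round of each baseline, using with-replacement sampling as in the classical analyses:

```python
import numpy as np

def local_sgd_round(x0, local_grads, M, B, eta, rng):
    # Each machine runs B SGD steps from the synced point; the server then averages.
    iterates = []
    for m in range(M):
        x = x0.copy()
        for _ in range(B):
            i = rng.integers(len(local_grads[m]))   # with-replacement index
            x = x - eta * local_grads[m][i](x)
        iterates.append(x)
    return np.mean(iterates, axis=0)

def minibatch_sgd_round(x0, local_grads, M, B, eta, rng):
    # Each machine contributes B gradients at the synced point; one aggregated step.
    g = np.zeros_like(x0)
    for m in range(M):
        for _ in range(B):
            i = rng.integers(len(local_grads[m]))
            g += local_grads[m][i](x0)
    return x0 - eta * g / (M * B)
```

Both variants consume MB gradients and one synchronization per round, which is what makes the comparison between them fair.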
Instead of locally updating the iterates B times, minibatch SGD aggregates B gradients (evaluated at the last synced iterate) from each of the M machines, forms a minibatch of size MB, and then updates the shared iterate. Given the same M and B, local SGD and minibatch SGD have the same number of gradient computations per round of communication, so it is worthwhile to understand which converges faster. Woodworth et al. (2020a;b) point out that many existing analyses of local SGD show an inferior convergence rate compared to minibatch SGD. Through their new upper and lower bounds, they identify regimes where local SGD can be faster than minibatch SGD.

¹A distinctive feature of federated learning is that not all devices necessarily participate in the updates; however, we focus on the full participation setting in this paper.

While the theory of local and minibatch SGD has seen recent progress, there is still a gap between what is analyzed and what is actually used. Most theoretical results assume independent and unbiased gradient estimates obtained via with-replacement sampling of stochastic gradients (i.e., choosing training data indices uniformly at random). In contrast, most practitioners use without-replacement sampling, where they shuffle indices randomly and access them sequentially. Convergence analysis of without-replacement methods is challenging because gradients sampled within an epoch lack independence. As a result, the standard theory based on independent gradient estimates does not apply to shuffling-based methods. While shuffling-based methods are believed to be faster in practice (Bottou, 2009), broad theoretical understanding of such methods remains elusive, except for noteworthy recent progress mainly focusing on the analysis of SGD (Gürbüzbalaban et al., 2019; Haochen & Sra, 2019; Nagaraj et al., 2019; Nguyen et al., 2020; Safran & Shamir, 2020; 2021; Rajput et al., 2020; 2021; Ahn et al.
, 2020; Mishchenko et al., 2020; 2021; Tran et al., 2021). These results indicate that in the large-epoch regime (where the number of epochs is greater than some threshold), without-replacement SGD converges faster than with-replacement SGD.

1.1 OUR CONTRIBUTIONS.

We analyze convergence rates of without-replacement versions of local and minibatch SGD, where local component functions are reshuffled at every epoch. We call the respective algorithms local RR (Algorithm 1) and minibatch RR (Algorithm 2), and their with-replacement counterparts local SGD and minibatch SGD. Our key contributions are as follows:

• In Section 3, we present convergence bounds on minibatch and local RR for L-smooth functions satisfying the µ-Polyak-Łojasiewicz condition (Theorems 1 & 2). Our theorems give high-probability bounds, a departure from the common in-expectation bounds in the literature. We show that minibatch and local RR converge faster than minibatch and local SGD when the number of epochs is sufficiently large. We also identify a regime where local RR converges as fast as minibatch RR: when synchronization happens frequently enough and local objectives are not too heterogeneous. See also Appendix A for a detailed comparison with existing upper bounds.

• In Section 4, we prove that the upper bounds obtained in Section 3 are tight in all factors except L and µ. We present Theorems 3 & 4 and Proposition 5, which show lower bounds that match the upper bounds up to a factor of L²/µ². Our lower bound on local RR indicates that if the synchronization interval B is too large, then local RR has no gain from parallel computation.

• In Section 5, we propose a simple modification called synchronized shuffling that allows us to bypass the lower bounds in Section 4, at the cost of a slight increase in communication.
By having the server broadcast random permutations to local machines, we show that in near-homogeneous settings, the modified algorithms converge faster than the lower bounds (Theorems 6 & 7). • In Appendix C, we present numerical experiments that corroborate our theoretical findings.

2 PROBLEM SETUP.

Notation. For a natural number a ∈ N, let [a] := {1, 2, . . . , a}. Let S_a be the set of all permutations of [a]. Since our indices start from 1, we redefine the modulo operation between a ∈ Z and b ∈ N as a mod b := a − ⌊(a−1)/b⌋ b, so that a mod b ∈ [b].

Optimization task. Consider M machines, each with its own objective F^m(x) := (1/N) Σ_{i=1}^N f^m_i(x), for m ∈ [M]. The m-th machine has access only to the gradients of its own N local components f^m_1(x), . . . , f^m_N(x). In this setting, we wish to minimize the global objective function, which is the average of the local objectives: F(x) := (1/M) Σ_{m=1}^M F^m(x) = (1/(MN)) Σ_{m=1}^M Σ_{i=1}^N f^m_i(x). Further, we assume that each individual component function f^m_i is L-smooth, so that

f^m_i(y) ≤ f^m_i(x) + ⟨∇f^m_i(x), y − x⟩ + (L/2)‖y − x‖², for all x, y ∈ R^d,  (1)

and that the global objective F satisfies the µ-Polyak-Łojasiewicz (PŁ) condition:²

(1/2)‖∇F(x)‖² ≥ µ(F(x) − F*) for all x ∈ R^d, where µ > 0.  (2)

Algorithms. Under the above setting, we analyze local RR (Algorithm 1) and minibatch RR (Algorithm 2) and characterize their worst-case convergence rates.³ The algorithms are run over K epochs, i.e., K passes over the entire set of component functions. At the beginning of epoch k, each machine m shuffles its local component functions {f^m_i}_{i=1}^N using a random permutation σ^m_k ∼ Unif(S_N). In local RR, each machine makes B local RR updates to its iterate by sequentially accessing its shuffled component functions, before the server aggregates iterates from all the machines and then synchronizes the machines with the average iterate.

²PŁ functions can be thought of as a nonconvex generalization of strongly convex functions.
³In Algorithms 1 and 2, consider SYNCSHUF as FALSE for now. We will discuss SYNCSHUF in Section 5.

Algorithm 1 Local RR (with and without SYNCSHUF)
Input: Initialization y_0, step-size η, # machines M, # components N, # epochs K, sync interval B.
1: Initialize x^m_{1,0} := y_0 for all m ∈ [M].
2: for k ∈ [K] do
3:   if SYNCSHUF = TRUE then  ▷ Local RR with SYNCSHUF
4:     Sample σ ∼ Unif(S_N), π ∼ Unif(S_M).
5:     Set σ^m_k(i) := σ((i + (N/M)π(m)) mod N) for all m ∈ [M], i ∈ [N].
6:   else  ▷ Local RR
7:     Sample σ^m_k ∼ Unif(S_N) independently and locally, for all m ∈ [M].
8:   end if
9:   for i ∈ [N] do
10:    for m ∈ [M] do locally
11:      Update x^m_{k,i} := x^m_{k,i−1} − η ∇f^m_{σ^m_k(i)}(x^m_{k,i−1}).
12:    end for
13:    if B divides i then
14:      Aggregate and average y_{k,i/B} := (1/M) Σ_{m=1}^M x^m_{k,i}.
15:      Synchronize x^m_{k,i} := y_{k,i/B}, for all m ∈ [M].
16:    end if
17:  end for
18:  x^m_{k+1,0} := y_{k,N/B}, for all m ∈ [M].
19: end for
20: return the last iterate y_{K,N/B}.

Algorithm 2 Minibatch RR (with and without SYNCSHUF)
Input: Initialization x_0, step-size η, # machines M, # components N, # epochs K, sync interval B.
1: Initialize x_{1,0} := x_0.
2: for k ∈ [K] do
3:   if SYNCSHUF = TRUE then  ▷ Minibatch RR with SYNCSHUF
4:     Sample σ ∼ Unif(S_N), π ∼ Unif(S_M).
5:     Set σ^m_k(i) := σ((i + (N/M)π(m)) mod N) for all m ∈ [M], i ∈ [N].
6:   else  ▷ Minibatch RR
7:     Sample σ^m_k ∼ Unif(S_N) independently and locally, for all m ∈ [M].
8:   end if
9:   for i ∈ [N/B] do
10:    Update x_{k,i} := x_{k,i−1} − (η/M) Σ_{m=1}^M (1/B) Σ_{j=(i−1)B+1}^{iB} ∇f^m_{σ^m_k(j)}(x_{k,i−1})  (the inner averaging is done locally).
11:  end for
12:  x_{k+1,0} := x_{k,N/B}.
13: end for
14: return the last iterate x_{K,N/B}.
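Line 10 of Algorithm 2 — average B shuffled gradients locally, then average across machines for a single update — can be sketched as follows (our NumPy sketch; function and variable names are illustrative):

```python
import numpy as np

def minibatch_rr_epoch(x, component_grads, perms, B, eta):
    # component_grads[m][i]: gradient function of f^m_i
    # perms[m]: this epoch's shuffled index order on machine m
    M = len(component_grads)
    N = len(perms[0])
    for i in range(N // B):
        g = np.zeros_like(x)
        for m in range(M):
            batch = perms[m][i * B:(i + 1) * B]   # next B shuffled indices
            # inner averaging over the B gradients is done locally on machine m
            g += np.mean([component_grads[m][j](x) for j in batch], axis=0)
        x = x - eta * g / M                        # one aggregated update
    return x
```

One epoch makes N/B aggregated updates, each consuming MB gradients, matching the per-round gradient and communication budget of local RR.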
In minibatch RR, instead of making B local updates, each machine collects B gradients evaluated at the last iterate, and the server aggregates them to make an update using these MB gradients. Since these two algorithms use the same amount of communication and local gradients, minibatch RR is a simple yet powerful baseline for local RR.

Below, we collect our assumptions on the algorithm parameters used throughout the paper.

Assumption 1 (Algorithm parameters). We assume M ≥ 1, N ≥ 2, and K ≥ 1. Also, assume that B divides N. We restrict 1 ≤ B ≤ N/2 for minibatch RR because B = N makes the algorithm equal to GD. We also assume 2 ≤ B ≤ N for local RR because B = 1 makes the two algorithms the same. We choose a constant step-size scheme, i.e., η > 0 is kept constant over all updates.

We next state assumptions on intra- and inter-machine deviations used in this paper.⁴

⁴Assumptions 2, 3 & 4 require that they hold over the whole R^d. We discuss ways to avoid this in Appendix D.7.

Assumption 2 (Intra-machine deviation). There exists ν ≥ 0 such that for all m ∈ [M] and i ∈ [N], ‖∇f^m_i(x) − ∇F^m(x)‖ ≤ ν, for all x ∈ R^d.

Assumption 2 requires that the difference between the gradient of each local component function f^m_i(x) and its corresponding local objective function F^m(x) is uniformly bounded. It models the variance of the local components f^m_i within each machine. While the uniform boundedness requirement may look strong, we use this assumption to prove high-probability upper bounds, which are stronger than the common in-expectation bounds. See Appendix A for comparisons with other assumptions, and also Appendix D.7 for ways to avoid uniform boundedness over the entire R^d.

The next two assumptions capture the deviation across different machines, i.e., the degree of heterogeneity, at two different levels of granularity: objective-wise and component-wise.

Assumption 3 (Objective-wise inter-machine deviation).
There exist τ ≥ 0 and ρ ≥ 1 such that (1/M) Σ_{m=1}^M ‖∇F^m(x)‖ ≤ τ + ρ‖∇F(x)‖, for all x ∈ R^d.

Assumption 3 models the heterogeneity by bounding the mean of ‖∇F^m‖ by a constant plus a multiplicative factor times ‖∇F‖. The assumption includes the homogeneous case (i.e., F^1 = · · · = F^M = F) with τ = 0 and ρ = 1. Assumption 3 is weaker than many other heterogeneity assumptions in the literature (e.g., Karimireddy et al. (2020)); see Appendix A for detailed comparisons.

Assumption 3 measures heterogeneity by considering only the local objectives F^m, not the local components f^m_i. We consider a more fine-grained notion of heterogeneity in Assumption 4:

Assumption 4 (Component-wise inter-machine deviation). For all i ∈ [N], let f̄_i := (1/M) Σ_{m=1}^M f^m_i. There exists λ ≥ 0 such that for all m ∈ [M] and i ∈ [N], ‖∇f^m_i(x) − ∇f̄_i(x)‖ ≤ λ, for all x ∈ R^d.

Assumption 4 states that the gradients of the i-th components of the local machines are "close" to each other. The assumption subsumes the component-wise homogeneous setting, i.e., f^1_i = f^2_i = · · · = f^M_i, with λ = 0. In distributed learning, this choice corresponds to the setting where each machine has the same training dataset. Assumption 4 with λ > 0 is also relevant to the case where each device has a slightly perturbed (e.g., by data augmentation techniques) version of a certain dataset. It is straightforward to check that Assumption 4 implies Assumption 3 with τ = λ and ρ = 1.

We conclude this section by defining the function classes we study in this paper.

Definition 1 (Function classes). We consider two classes of global objective functions F, also taking into account their local objectives F^m and local components f^m_i. We assume throughout that f^m_i are differentiable and F is bounded from below.
F_obj(L, µ, ν, τ, ρ) := { F | F is µ-PŁ; f^m_i are L-smooth; F, F^m, f^m_i satisfy Assumptions 2 & 3 },
F_cmp(L, µ, ν, λ) := { F | F is µ-PŁ; f^m_i are L-smooth; F, F^m, f^m_i satisfy Assumptions 2 & 4 }.

Notice that F_obj(L, µ, ν, τ, ρ) ⊃ F_cmp(L, µ, ν, τ) for any ρ ≥ 1. We make the PŁ assumption only on the global objective F, not on the local objectives F^m nor on the local components f^m_i. Using L and µ, we define the condition number κ := L/µ ≥ 1.
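To make the deviation constants concrete (our illustration, not from the paper): for components whose gradients share a common linear part and differ only by shifts, the constants ν (Assumption 2) and λ (Assumption 4) are genuinely uniform in x and easy to read off:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, d = 3, 5, 2
b = rng.normal(size=(M, N, d))

# grad f^m_i(x) = x + b[m, i]: common curvature with machine/component-specific
# shifts, so all gradient deviations are constant in x (hence uniformly bounded).
def grad(m, i, x):
    return x + b[m, i]

x = rng.normal(size=d)   # any x gives the same deviations here
nu = max(np.linalg.norm(grad(m, i, x) - np.mean([grad(m, j, x) for j in range(N)], axis=0))
         for m in range(M) for i in range(N))    # Assumption 2: ||grad f^m_i - grad F^m|| <= nu
lam = max(np.linalg.norm(grad(m, i, x) - np.mean([grad(mm, i, x) for mm in range(M)], axis=0))
          for m in range(M) for i in range(N))   # Assumption 4: ||grad f^m_i - grad f_bar_i|| <= lam
```

Here λ also certifies Assumption 3 with τ = λ and ρ = 1, matching the remark above.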
This paper studies two popular algorithms, local SGD and minibatch SGD, combined with the data-shuffling technique, i.e., sampling without replacement. The authors provide an analysis under the PŁ condition and show that in some cases these shuffling-based methods outperform classical local SGD and minibatch SGD. Additionally, the paper provides lower bounds for minibatch RR and local RR in homogeneous and heterogeneous settings. At the end of the paper, a new technique using synchronized permutations is proposed; in an almost homogeneous setting, this approach can improve the rates.
MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling
1 INTRODUCTION. Generative models are most useful to creators if they can generate realistic outputs, afford many avenues for control, and easily fit into existing creative workflows (Huang et al., 2020). Deep generative models are expressive function approximators, capable of generating realistic samples in many domains (Ramesh et al., 2021; Brown et al., 2020; van den Oord et al., 2016), but often at the cost of interactivity, restricting users to rigid black-box input-output pairings without interpretable access to the internals of the network. In contrast, structured models chain several stages of interpretable intermediate representations with expressive networks, while still allowing users to interact throughout the hierarchy. For example, these techniques have been especially effective in computer vision and speech, where systems are optimized for both realism and control (Lee et al., 2021b; Chan et al., 2019; Zhang et al., 2019; Wang et al., 2018a; Morrison et al., 2020; Ren et al., 2020). For music generation, despite recent progress, current tools still fall short of this ideal (Figure 1, right). Deep networks can either generate realistic full-band audio (Dhariwal et al., 2020) or provide detailed controls of attributes such as pitch, dynamics, and timbre (Défossez et al., 2018; Engel et al., 2019; 2020a; Hawthorne et al., 2019; Wang & Yang, 2019), but not both. Many existing workflows are built around the MIDI specification (Association et al., 1996). Conventional DSP synthesizers (Chowning, 1973; Roads, 1988) provide extensive control but make it difficult to generate realistic instrument timbre, while concatenative samplers (Schwarz, 2007) play back high-fidelity recordings of isolated musical notes but require manually stitching together performances, with limited control over expression and continuity.
In this paper , we propose MIDI-DDSP , a hierarchical generative model of musical performance to provide both realism and control ( Figure 1 , left ) . Similar to conventional synthesizers and samplers that use the MIDI standard ( Association et al. , 1996 ) , MIDI-DDSP converts note timing , pitch , and expression information into fine-grained parameter control of DDSP synthesizer modules . We take inspiration from the hierarchical structure underlying the process of creating music . A composer writes a piece as a series of notes . A performer interprets these notes through a myriad of nuanced , sub-second choices about articulation , dynamics , and expression . These expressive gestures are realized as audio through the short-time pitch and timbre changes of the physical vibration of the instrument . MIDI-DDSP is built on a similar 3-level hierarchy ( notes , performance , synthesis ) with interpretable representations at each level . While the efficient DDSP synthesis representation ( low-level ) allows for high-fidelity audio synthesis ( Engel et al. , 2020a ) , users can also control the notes to be played ( high-level ) , and the expression with which they are performed ( mid-level ) . A qualitative example of this is shown in Figure 2 , where a given performance on violin is manipulated at all three levels ( notes , expression , synthesis parameters ) to create a new realistic yet personalized performance . As seen in Figure 1 ( left ) , MIDI-DDSP can be viewed similarly to a multi-level autoencoder . The hierarchy has three separately trainable modules ( DDSP Inference , Synthesis Generator , Expression Generator ) and three fixed functions/heuristics ( DDSP Synthesis , Feature Extraction , Note Detection ) . 
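Schematically, the generative direction of this hierarchy can be pictured as a composition of three stages. The sketch below is our own schematic (entirely hypothetical placeholder functions stand in for the trained modules), showing only the data flow notes → note-wise expression → frame-wise synthesis parameters → audio:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Note:
    pitch: int        # MIDI pitch number
    start: float      # onset time in seconds
    duration: float   # length in seconds

def expression_generator(notes: List[Note]) -> List[dict]:
    # mid-level: one vector of expression attributes per note (placeholder heuristic)
    return [{"volume": 0.5, "vibrato": 0.2} for _ in notes]

def synthesis_generator(notes: List[Note], expression: List[dict]) -> List[dict]:
    # low-level: frame-wise synthesis parameters s(t) per note (placeholder values)
    return [{"f0": 440.0, "amplitude": e["volume"]} for e in expression]

def ddsp_synthesis(params: List[dict]) -> list:
    # fixed synthesizer: synthesis parameters -> audio samples (placeholder)
    return [p["amplitude"] for p in params]

def midi_ddsp(notes: List[Note]) -> list:
    expression = expression_generator(notes)          # high -> mid
    params = synthesis_generator(notes, expression)   # mid -> low
    return ddsp_synthesis(params)                     # low -> audio
```

Because each stage consumes an interpretable representation, a user can intervene at any arrow: edit the notes, override an expression attribute, or tweak the synthesis parameters directly.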
These modules enable MIDI-DDSP to conditionally generate at any level of the hierarchy, providing creative assistance by filling in the details of a performance, synthesizing audio for new note sequences, or even fully automating music generation when paired with a separate note-generating model. It is important to note that the system relies on pitch detection and note detection, and so is currently limited to training on recordings of single monophonic instruments. This approach has the potential to be extended to polyphonic recordings via multi-instrument transcription (Hawthorne et al., 2021; Engel et al., 2020b; Bittner et al., 2017) and multi-pitch tracking, which is an exciting avenue to explore for future work. Finally, we also show that each stage can be made conditional on instrument identity, training on 13 separate instruments with a single model. For clarity, we summarize the core contributions of this work:

• We propose MIDI-DDSP, a 3-level hierarchical generative model of music (notes, performance, synthesis), and train a single model capable of realistic audio synthesis for 13 different instruments. (Section 3)

• Expression Attributes: We introduce heuristics to extract mid-level per-note expression attributes from low-level synthesis parameters. (Figure 4)

• User Control: Quantitative studies confirm that manipulating the expression attributes creates a corresponding effect in the synthesizer parameters, and we qualitatively demonstrate the detailed control that is available to users manipulating all three levels of the hierarchy. (Table 2 and Figure 2)

• Assistive Generation: Reconstruction experiments show that MIDI-DDSP can make assistive predictions at each level of the hierarchy, accurately resynthesizing audio, predicting synthesis parameters from note-wise expression attributes, and auto-regressively predicting note-wise expression attributes from a note sequence.
(Tables 1a, 1b, 1c)

• Realistic Note Synthesis: An extensive listening study finds that MIDI-DDSP can synthesize audio from new note sequences (not seen during training) with higher realism than both comparable neural approaches and professional concatenative sampler software. (Figure 5)

• Automatic Music Generation: We demonstrate that pairing MIDI-DDSP with a pretrained note generation model enables full-stack automatic music generation. As an example, we use Coconet (Huang et al., 2017) to generate and synthesize novel 4-part Bach chorales for a variety of instruments. (Figure 6)

Audio samples of all results and figures are provided in the online supplement.¹ We highly recommend that readers access the online supplement while reading the paper.

¹https://midi-ddsp.github.io/

2 RELATED WORK.

Note Synthesis. Existing neural synthesis models allow either high-level manipulation of note pitch, velocity, and timing (Hawthorne et al., 2019; Kim et al., 2019; Wang & Yang, 2019; Manzelli et al., 2018) or low-level synthesis parameters (Jonason et al., 2020; Castellon et al., 2020; Blaauw & Bonada, 2017). MIDI-DDSP connects these two approaches by enabling both high-level note controls and low-level synthesis manipulation in a single system.

Most related to this work is MIDI2Params (Castellon et al., 2020), a hierarchical model that autoregressively predicts frame-wise pitch and loudness contours to drive the original DDSP autoencoder (Engel et al., 2020a). MIDI-DDSP builds on this work by adding an additional level of hierarchy for the note expression, training a new, more accurate DDSP base model, and explicitly modeling the synthesizer coefficients output by that model, rather than the pitch and loudness inputs to the model. We extensively compare to our reimplementation of MIDI2Params as a baseline throughout the paper.

Hierarchical Audio Modelling.
Audio waveforms have dependencies over timescales spanning several orders of magnitude , lending themselves to hierarchical modeling . For example , Dieleman et al . ( 2018 ) and Dhariwal et al . ( 2020 ) both choose to encode audio as discrete latent codes at different time resolutions , and apply autoregressive models as priors over those codes . MIDI-DDSP applies a similar approach in spirit , but constructs a hierarchy based on semantic musical structure ( note , performance , synthesis ) , allowing interpretable manipulation by users . Expressive Performance Analysis and Synthesis . Many prior systems pair analysis and synthesis functions to capture expressive performance characteristics ( Canazza et al. , 2004 ; Yang et al. , 2016 ; Shih et al. , 2017 ) . Such methods often use heuristic functions to generate parameters for driving synthesizers or selecting and modifying sample units . MIDI-DDSP similarly uses feature extraction , but each level is paired with a differentiable neural network function that directly learns the mapping to expression and synthesis controls for more realistic audio synthesis . 3 MODEL ARCHITECTURE . 3.1 DDSP SYNTHESIS AND INFERENCE . Differentiable Digital Signal Processing ( DDSP ) ( Engel et al. , 2020a ) enables differentiable audio synthesis by using a harmonic plus noise model ( Serra & Smith , 1990 ) . Full details are provided in Appendix B.1 . Briefly , an oscillator bank synthesizes a harmonic signal from a fundamental frequency f0 ( t ) , a base amplitude a ( t ) , and a distribution over harmonic amplitudes h ( t ) , where the dimensionality of h is the number of harmonics . The noise signal is generated by filtering uniform noise with linearly spaced filter banks , where η ( t ) represents the magnitude of noise output from each filter in time . In this study , we use 60 harmonics and 65 noise filter banks , giving 127 total synthesis parameters each time frame ( s ( t ) = ( f0 ( t ) , a ( t ) , h ( t ) , η ( t ) ) ) . 
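A minimal sketch of the harmonic branch (additive sinusoids at integer multiples of f0, weighted by a(t)·h_k(t)) is below. This is our simplified illustration of the harmonic-plus-noise idea, not the DDSP library's implementation; it omits the filtered-noise branch and the upsampling of frame-rate controls to audio rate.

```python
import numpy as np

def harmonic_synth(f0, amp, harm_dist, sr=16000):
    # f0, amp: per-sample fundamental frequency (Hz) and base amplitude, shape (T,)
    # harm_dist: (num_harmonics, T) distribution over harmonic amplitudes h(t)
    T = len(f0)
    phase = 2 * np.pi * np.cumsum(f0 / sr)       # integrated instantaneous frequency
    audio = np.zeros(T)
    for k, hk in enumerate(harm_dist, start=1):  # k-th harmonic oscillates at k * f0
        audio += amp * hk * np.sin(k * phase)
    return audio
```

With time-varying f0, the cumulative-sum phase keeps each oscillator continuous across pitch changes, which is why additive synthesis is driven by integrated frequency rather than by per-frame phase.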
The final audio is the sum of the harmonic and noise signals. Since the synthesis process is differentiable, Engel et al. (2020a) demonstrate that it is possible to train a neural network to predict the other synthesis parameters given f0(t) and the loudness of the audio, optimizing a multi-scale spectral loss (Wang et al., 2019; Engel et al., 2020a) on the resynthesized audio (Figure 3, left). f0(t) is extracted by a pre-trained CREPE model (Kim et al., 2018), and the loudness is extracted via an A-weighting of the power spectrum (Hantrakul et al., 2019; McCurdy, 1936). We extend this work for our DDSP Inference module by providing an additional input feature, a log-scale Mel-spectrogram of the audio, which produces higher-quality resynthesis (Table 1a). Full architectural details are provided in Appendix B.2.

3.2 EXPRESSION CONTROLS.

We aim to model aspects of expressive performance with continuous variables. For example, this enables a performer to choose how loudly a note should be performed, or how much vibrato to apply. We define six expression controls (detailed in Appendix B.3), scaled within [0, 1]. These are extracted from the synthesis parameters s(t) and applied within the i-th note, n_i(t), in a note sequence:

Volume: Controls the volume of a note, extracted by taking the average amplitude over the note.

Volume fluctuation: Determines the magnitude of a volume change across a note. Used with the volume peak position, below, this can make a note crescendo or decrescendo. It is extracted by calculating the standard deviation of the amplitude over the note.

Volume peak position: Controls where, over the duration of a note, the peak volume occurs. A value of zero corresponds to decrescendo notes, whereas one corresponds to crescendo notes. It is extracted by calculating the relative position of the maximum amplitude in the note.
Vibrato: Controls the extent of the vibrato of a note. Vibrato is a musical technique defined by pulsating the pitch of a note. It is extracted by applying a Discrete Fourier Transform (DFT) to the fundamental frequency f0(t) over the note and taking the peak amplitude.

Brightness: Controls the timbre of a note, where a higher value corresponds to stronger high-frequency harmonics. Brightness is extracted by calculating the average harmonic centroid of the note.

Attack Noise: Controls how much noise occurs at the start of the note (the attack), e.g., the fluctuation of string and bow. At certain settings, this determines whether two notes sound connected or separate. The attack noise is extracted by taking the note's average noise magnitude in the first ten frames (40 ms).
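The six heuristics above can be sketched as simple statistics over a note's frames. The code below is our schematic reading of the definitions (the exact scaling and normalization in the paper's Appendix B.3 may differ); it assumes frame-rate arrays for amplitude a(t), fundamental frequency f0(t), the harmonic distribution h(t), and the noise magnitudes η(t):

```python
import numpy as np

def note_expression(a, f0, h, eta):
    # a: (T,) amplitude; f0: (T,) fundamental frequency (Hz)
    # h: (K, T) harmonic amplitude distribution; eta: (F, T) noise filter magnitudes
    K, T = h.shape
    vib_spec = np.abs(np.fft.rfft(f0 - f0.mean()))            # DFT of the pitch contour (DC removed)
    centroid = (np.arange(1, K + 1)[:, None] * h).sum(0) / h.sum(0)  # harmonic centroid per frame
    return {
        "volume": a.mean(),                                   # average amplitude
        "volume_fluctuation": a.std(),                        # spread of amplitude over the note
        "volume_peak_position": np.argmax(a) / max(T - 1, 1), # relative position of the peak
        "vibrato": vib_spec[1:].max(),                        # peak DFT magnitude of f0
        "brightness": centroid.mean(),                        # average harmonic centroid
        "attack_noise": eta[:, :10].mean(),                   # mean noise in first 10 frames
    }
```

In the full system each of these raw statistics would additionally be rescaled into [0, 1]; the sketch returns the unscaled values to keep the correspondence with the verbal definitions visible.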
This paper proposes a music performance modeling network with three sub-modules: an expression generator, a synthesis generator, and DDSP inference. The idea of using these three sub-modules for three-level performance modeling (where the three levels of control are notes, performance features, and synthesis parameters) is interesting, and the results show that the proposed model offers more controllability without harming overall audio generation performance. The proposed method, with three trainable networks and two hand-crafted components (note detection is not part of the contribution of this paper, so I didn't count it), gives a full pipeline of hierarchical modeling, and this is a good research contribution.
MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling
1 INTRODUCTION . Generative models are most useful to creators if they can generate realistic outputs, afford many avenues for control, and easily fit into existing creative workflows (Huang et al., 2020). Deep generative models are expressive function approximators, capable of generating realistic samples in many domains (Ramesh et al., 2021; Brown et al., 2020; van den Oord et al., 2016), but often at the cost of interactivity, restricting users to rigid black-box input-output pairings without interpretable access to the internals of the network. In contrast, structured models chain several stages of interpretable intermediate representations with expressive networks, while still allowing users to interact throughout the hierarchy. For example, these techniques have been especially effective in computer vision and speech, where systems are optimized for both realism and control (Lee et al., 2021b; Chan et al., 2019; Zhang et al., 2019; Wang et al., 2018a; Morrison et al., 2020; Ren et al., 2020). For music generation, despite recent progress, current tools still fall short of this ideal (Figure 1, right). Deep networks can either generate realistic full-band audio (Dhariwal et al., 2020) or provide detailed controls of attributes such as pitch, dynamics, and timbre (Défossez et al., 2018; Engel et al., 2019; 2020a; Hawthorne et al., 2019; Wang & Yang, 2019), but not both. Many existing workflows use the MIDI specification (Association et al., 1996) to drive synthesizers and samplers: conventional DSP synthesizers (Chowning, 1973; Roads, 1988) provide extensive control but make it difficult to generate realistic instrument timbre, while concatenative samplers (Schwarz, 2007) play back high-fidelity recordings of isolated musical notes, but require manually stitching together performances with limited control over expression and continuity.
In this paper , we propose MIDI-DDSP , a hierarchical generative model of musical performance to provide both realism and control ( Figure 1 , left ) . Similar to conventional synthesizers and samplers that use the MIDI standard ( Association et al. , 1996 ) , MIDI-DDSP converts note timing , pitch , and expression information into fine-grained parameter control of DDSP synthesizer modules . We take inspiration from the hierarchical structure underlying the process of creating music . A composer writes a piece as a series of notes . A performer interprets these notes through a myriad of nuanced , sub-second choices about articulation , dynamics , and expression . These expressive gestures are realized as audio through the short-time pitch and timbre changes of the physical vibration of the instrument . MIDI-DDSP is built on a similar 3-level hierarchy ( notes , performance , synthesis ) with interpretable representations at each level . While the efficient DDSP synthesis representation ( low-level ) allows for high-fidelity audio synthesis ( Engel et al. , 2020a ) , users can also control the notes to be played ( high-level ) , and the expression with which they are performed ( mid-level ) . A qualitative example of this is shown in Figure 2 , where a given performance on violin is manipulated at all three levels ( notes , expression , synthesis parameters ) to create a new realistic yet personalized performance . As seen in Figure 1 ( left ) , MIDI-DDSP can be viewed similarly to a multi-level autoencoder . The hierarchy has three separately trainable modules ( DDSP Inference , Synthesis Generator , Expression Generator ) and three fixed functions/heuristics ( DDSP Synthesis , Feature Extraction , Note Detection ) . 
These modules enable MIDI-DDSP to conditionally generate at any level of the hierarchy, providing creative assistance by filling in the details of a performance, synthesizing audio for new note sequences, or even fully automating music generation when paired with a separate note-generating model. It is important to note that the system relies on pitch detection and note detection, and so is currently limited to training on recordings of single monophonic instruments. This approach has the potential to be extended to polyphonic recordings via multi-instrument transcription (Hawthorne et al., 2021; Engel et al., 2020b; Bittner et al., 2017) and multi-pitch tracking, which is an exciting avenue to explore for future work. Finally, we also show that each stage can be made conditional on instrument identity, training on 13 separate instruments with a single model. For clarity, we summarize the core contributions of this work: • We propose MIDI-DDSP, a 3-level hierarchical generative model of music (notes, performance, synthesis), and train a single model capable of realistic audio synthesis for 13 different instruments. (Section 3) • Expression Attributes: We introduce heuristics to extract mid-level per-note expression attributes from low-level synthesis parameters. (Figure 4) • User Control: Quantitative studies confirm that manipulating the expression attributes creates a corresponding effect in the synthesizer parameters, and we qualitatively demonstrate the detailed control that is available to users manipulating all three levels of the hierarchy. (Table 2 and Figure 2) • Assistive Generation: Reconstruction experiments show that MIDI-DDSP can make assistive predictions at each level of the hierarchy, accurately resynthesizing audio, predicting synthesis parameters from note-wise expression attributes, and auto-regressively predicting note-wise expression attributes from a note sequence.
(Tables 1a, 1b, 1c) • Realistic Note Synthesis: An extensive listening study finds that MIDI-DDSP can synthesize audio from new note sequences (not seen during training) with higher realism than both comparable neural approaches and professional concatenative sampler software. (Figure 5) • Automatic Music Generation: We demonstrate that pairing MIDI-DDSP with a pretrained note generation model enables full-stack automatic music generation. As an example, we use Coconet (Huang et al., 2017) to generate and synthesize novel 4-part Bach chorales for a variety of instruments. (Figure 6) Audio samples of all results and figures are provided in the online supplement (https://midi-ddsp.github.io/). We highly recommend that readers access the online supplement alongside the paper. 2 RELATED WORK . Note Synthesis. Existing neural synthesis models allow either high-level manipulation of note pitch, velocity, and timing (Hawthorne et al., 2019; Kim et al., 2019; Wang & Yang, 2019; Manzelli et al., 2018), or low-level synthesis parameters (Jonason et al., 2020; Castellon et al., 2020; Blaauw & Bonada, 2017). MIDI-DDSP connects these two approaches by enabling both high-level note controls and low-level synthesis manipulation in a single system. Most related to this work is MIDI2Params (Castellon et al., 2020), a hierarchical model that autoregressively predicts frame-wise pitch and loudness contours to drive the original DDSP autoencoder (Engel et al., 2020a). MIDI-DDSP builds on this work by adding an additional level of hierarchy for the note expression, training a new, more accurate DDSP base model, and explicitly modeling the synthesizer coefficients output by that model, rather than the pitch and loudness inputs to the model. We extensively compare to our reimplementation of MIDI2Params as a baseline throughout the paper. Hierarchical Audio Modelling.
Audio waveforms have dependencies over timescales spanning several orders of magnitude, lending themselves to hierarchical modeling. For example, Dieleman et al. (2018) and Dhariwal et al. (2020) both choose to encode audio as discrete latent codes at different time resolutions, and apply autoregressive models as priors over those codes. MIDI-DDSP applies a similar approach in spirit, but constructs a hierarchy based on semantic musical structure (note, performance, synthesis), allowing interpretable manipulation by users. Expressive Performance Analysis and Synthesis. Many prior systems pair analysis and synthesis functions to capture expressive performance characteristics (Canazza et al., 2004; Yang et al., 2016; Shih et al., 2017). Such methods often use heuristic functions to generate parameters for driving synthesizers or selecting and modifying sample units. MIDI-DDSP similarly uses feature extraction, but each level is paired with a differentiable neural network function that directly learns the mapping to expression and synthesis controls for more realistic audio synthesis. 3 MODEL ARCHITECTURE . 3.1 DDSP SYNTHESIS AND INFERENCE . Differentiable Digital Signal Processing (DDSP) (Engel et al., 2020a) enables differentiable audio synthesis by using a harmonic plus noise model (Serra & Smith, 1990). Full details are provided in Appendix B.1. Briefly, an oscillator bank synthesizes a harmonic signal from a fundamental frequency f0(t), a base amplitude a(t), and a distribution over harmonic amplitudes h(t), where the dimensionality of h is the number of harmonics. The noise signal is generated by filtering uniform noise with linearly spaced filter banks, where η(t) represents the magnitude of noise output from each filter over time. In this study, we use 60 harmonics and 65 noise filter banks, giving 127 total synthesis parameters per time frame (s(t) = (f0(t), a(t), h(t), η(t))).
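As a concrete illustration of the harmonic-plus-noise model described above, the following is a minimal numpy sketch. It is not the DDSP implementation: the noise branch is collapsed to a single per-frame white-noise gain instead of 65 linearly spaced filter bands, upsampling of frame-wise controls is plain linear interpolation, and all names, shapes, and rates are illustrative.

```python
import numpy as np

def harmonic_plus_noise(f0, a, h, eta, sr=16000, frame_rate=250):
    """Toy harmonic-plus-noise synthesizer.

    f0, a: [frames] fundamental frequency (Hz) and base amplitude.
    h:     [frames, n_harmonics] distribution over harmonic amplitudes.
    eta:   [frames, n_bands] noise magnitudes (simplified here to a single
           per-frame white-noise gain rather than a real filter bank).
    """
    hop = sr // frame_rate
    n = len(f0) * hop
    # Upsample frame-wise controls to audio rate by linear interpolation.
    t_frame = np.arange(len(f0)) * hop
    t_audio = np.arange(n)
    f0_up = np.interp(t_audio, t_frame, f0)
    a_up = np.interp(t_audio, t_frame, a)
    # Instantaneous phase is the cumulative sum of instantaneous frequency.
    phase = 2 * np.pi * np.cumsum(f0_up) / sr
    harmonic = np.zeros(n)
    for k in range(h.shape[1]):
        hk = np.interp(t_audio, t_frame, h[:, k])
        # Silence harmonics above Nyquist to avoid aliasing.
        below_nyquist = (k + 1) * f0_up < sr / 2
        harmonic += below_nyquist * hk * np.sin((k + 1) * phase)
    harmonic *= a_up
    # Simplified noise branch: averaged band magnitudes gate white noise.
    noise_gain = np.interp(t_audio, t_frame, eta.mean(axis=1))
    noise = noise_gain * np.random.uniform(-1, 1, n)
    return harmonic + noise
```

The final audio is simply the sum of the two branches, mirroring the structure (though not the fidelity) of the synthesizer used in the paper.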
The final audio is the sum of the harmonic and noise signals. Since the synthesis process is differentiable, Engel et al. (2020a) demonstrate that it is possible to train a neural network to predict the other synthesis parameters given f0(t) and the loudness of the audio, and to optimize a multi-scale spectral loss (Wang et al., 2019; Engel et al., 2020a) on the resynthesized audio (Figure 3, left). f0(t) is extracted by a pre-trained CREPE model (Kim et al., 2018), and the loudness is extracted via an A-weighting of the power spectrum (Hantrakul et al., 2019; McCurdy, 1936). We extend this work for our DDSP Inference module by providing, as an additional input feature, a log-scale Mel-spectrogram of the audio, which produces higher-quality resynthesis (Table 1a). Full architectural details are provided in Appendix B.2. 3.2 EXPRESSION CONTROLS . We aim to model aspects of expressive performance with continuous variables. For example, this enables a performer to choose how loudly a note should be performed, or how much vibrato to apply. We define six expression controls (detailed in Appendix B.3), scaled within [0, 1]. These are extracted from the synthesis parameters s(t) and applied within the ith note, ni(t), of a note sequence: Volume: Controls the volume of a note, extracted by taking the average amplitude over the note. Volume fluctuation: Determines the magnitude of volume change across a note. Used with the volume peak position, below, this can make a note crescendo or decrescendo. It is extracted by calculating the standard deviation of the amplitude over the note. Volume peak position: Controls where, over the duration of a note, the peak volume occurs. A value of zero corresponds to decrescendo notes, whereas a value of one corresponds to crescendo notes. It is extracted by calculating the relative position of the maximum amplitude within the note.
Vibrato: Controls the extent of a note's vibrato, a musical technique of pulsating the pitch of a note. It is extracted by applying the Discrete Fourier Transform (DFT) to the fundamental frequency f0(t) within the note and taking the peak magnitude. Brightness: Controls the timbre of a note, where a higher value corresponds to stronger high-frequency harmonics. It is extracted by calculating the average harmonic centroid of the note. Attack Noise: Controls how much noise occurs at the start of the note (the attack), e.g., the fluctuation of string and bow. At certain settings, this determines whether two notes sound consecutive or separate. It is extracted by taking the note's average noise magnitude over the first ten frames (40 ms).
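The six extraction heuristics above can be sketched directly from the frame-wise synthesis parameters of a single note. This is an illustrative reimplementation, not the paper's code: the rescaling into [0, 1] is omitted, the vibrato and brightness scalings are placeholders, and all argument names are ours.

```python
import numpy as np

def expression_controls(a, f0, h, noise_mag):
    """Sketch of the six per-note expression controls for one note.

    a:         [frames] amplitude.
    f0:        [frames] fundamental frequency in Hz.
    h:         [frames, n_harmonics] harmonic amplitude distribution.
    noise_mag: [frames] per-frame noise magnitude.
    """
    ctrl = {}
    ctrl["volume"] = a.mean()                         # average amplitude
    ctrl["volume_fluctuation"] = a.std()              # amplitude spread
    # Relative position of the amplitude peak: 0 ~ decrescendo, 1 ~ crescendo.
    ctrl["volume_peak_position"] = np.argmax(a) / max(len(a) - 1, 1)
    # Vibrato: peak DFT magnitude of the mean-removed pitch contour.
    spectrum = np.abs(np.fft.rfft(f0 - f0.mean()))
    ctrl["vibrato"] = spectrum[1:].max() / len(f0) if len(f0) > 2 else 0.0
    # Brightness: average harmonic centroid (index-weighted mean of h).
    k = np.arange(1, h.shape[1] + 1)
    centroid = (h * k).sum(axis=1) / np.maximum(h.sum(axis=1), 1e-8)
    ctrl["brightness"] = centroid.mean()
    # Attack noise: average noise magnitude over the first 10 frames (~40 ms).
    ctrl["attack_noise"] = noise_mag[:10].mean()
    return ctrl
```

For a note with a steadily rising amplitude, for instance, `volume_peak_position` lands at 1.0 (a crescendo), while a pitch contour oscillating around its mean yields a positive vibrato value.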
This paper presents a controllable rendering engine for MIDI files, based on the DDSP framework. Given F0 and loudness contours, DDSP can estimate the parameters of a harmonic + noise synthesis model to render a corresponding audio file. Similar to MIDI2Params, which predicts frame-wise F0 and loudness contours from a MIDI file, MIDI-DDSP introduces an intermediate hierarchical level, allowing control over some newly introduced "expression controls". A mapping from MIDI files to expression controls is learnt, so that MIDI files can be rendered automatically. However, because these note-wise controls are explicit, they also allow human manipulation of the performance rendering. The influence of the expression controls is assessed with a correlation study, showing that human manipulation has the expected effect on the generated performance. This quantitative evaluation is further confirmed by convincing audio examples.
Sqrt(d) Dimension Dependence of Langevin Monte Carlo
Õ(√d/ε) mixing time bound for LMC, without warm start, under the common log-smooth and log-strongly-convex conditions, plus a growth condition on the 3rd-order derivative of the potential of target measures. This bound improves the best previously known Õ(d/ε) result and is optimal (in terms of order) in both the dimension d and the accuracy tolerance ε for target measures satisfying the aforementioned assumptions. Our theoretical analysis is further validated by numerical experiments. 1 INTRODUCTION . The problem of sampling statistical distributions has attracted considerable attention, not only in the fields of statistics and scientific computing, but also in machine learning (Robert & Casella, 2013; Andrieu et al., 2003; Liu, 2008); for example, how various sampling algorithms scale with the dimension of the target distribution is a popular recent topic in statistical deep learning (see discussions below for references). For samplers that can be viewed as discretizations of SDEs, the idea is to use an ergodic SDE whose equilibrium distribution agrees with the target distribution, and to employ an appropriate numerical algorithm that discretizes (the time of) the SDE. The iterates of the numerical algorithm will approximately follow the target distribution when converged, and can be used for various downstream applications such as Bayesian inference and inverse problems (Dashti & Stuart, 2017). One notable example is the Langevin Monte Carlo algorithm (LMC), which corresponds to an Euler-Maruyama discretization of the overdamped Langevin equation. Its study dates back to at least the 90s (Roberts et al., 1996) but keeps leading to important discoveries, for example on non-asymptotics and dimension dependence, which are relevant to machine learning (e.g., Dalalyan (2017a;b); Cheng et al. (2018a); Durmus et al.
( 2019 ) ; Durmus & Moulines ( 2019 ) ; Vempala & Wibisono ( 2019 ) ; Dalalyan & Riou-Durand ( 2020 ) ; Li et al . ( 2019 ) ; Erdogdu & Hosseinzadeh ( 2021 ) ; Mou et al . ( 2019 ) ; Lehec ( 2021 ) ) . LMC is closely related to SGD too ( e.g. , Mandt et al . ( 2017 ) ) . Many other examples exist , based on alternative SDEs and/or different discretizations ( e.g. , Dalalyan & Riou-Durand ( 2020 ) ; Ma et al . ( 2021 ) ; Mou et al . ( 2021 ) ; Li et al . ( 2020 ) ; Roberts & Rosenthal ( 1998 ) ; Chewi et al . ( 2021 ) ; Shen & Lee ( 2019 ) ) . Quantitatively characterizing the non-asymptotic sampling error of numerical algorithms is usually critical for choosing the appropriate algorithm for a specific downstream application , for providing practical guidance on hyperparameter selection and experiment design , and for designing improved samplers . A powerful tool that dates back to ( Jordan et al. , 1998 ) is a paradigm of non-asymptotic error analysis , namely to view sampling as optimization in probability space , and it led to many important recent results ( e.g. , Liu & Wang ( 2016 ) ; Dalalyan ( 2017a ) ; Wibisono ( 2018 ) ; Zhang et al . ( 2018 ) ; Frogner & Poggio ( 2020 ) ; Chizat & Bach ( 2018 ) ; Chen et al . ( 2018 ) ; Ma et al . ( 2021 ) ; Erdogdu & Hosseinzadeh ( 2021 ) ) . It works by choosing an objective functional , typically some statistical distances/divergences , and showing that the law of the iterates of sampling algorithms converges in that objective functional . However , the choice of the objective functional often needs to be customized for different sampling algorithms . For example , KL divergence works for LMC ( Cheng & Bartlett , 2018 ) , but a carefully hand-crafted cross term needs to be added to KL divergence for analyzing KLMC ( Ma et al. , 2021 ) . Even for the same underlying SDE , different discretization schemes exist and lead to different sampling algorithms , and the analyses of them had usually been case by case ( e.g. 
, Cheng et al. (2018b); Dalalyan & Riou-Durand (2020); Shen & Lee (2019)). Therefore, it would be a desirable complement to have a unified, general framework for studying the non-asymptotic error of SDE-based sampling algorithms. Toward this goal, an alternative approach to analysis has recently started attracting attention, namely resorting to the numerical analysis of SDE integrators (e.g., Milstein & Tretyakov (2013); Kloeden & Platen (1992)) and quantitatively connecting the integration error to the sampling error. One remarkable work in this direction is Li et al. (2019), which will be discussed in greater detail later on. The main tool of analysis in this paper will be a strengthened version (in specific aspects that will be clarified soon) of the result in Li et al. (2019). Although this analysis framework is rather general and applicable to a broad family of numerical methods that discretize contractive SDEs (possibly after a coordinate transformation), the main innovation focuses on a specific sampling algorithm, namely LMC, which is widely used in practice. Its stochastic gradient version is implemented in common machine learning systems, such as Tensorflow (Abadi et al., 2016), and is the off-the-shelf algorithm for large-scale Bayesian inference. With the ever-growing size of parameter space, the non-asymptotic error of LMC is of central theoretical and practical interest, in particular its dependence on the dimension of the sample space. The best currently known upper bound on the mixing time of LMC in 2-Wasserstein distance is Õ(d/ε) (Durmus & Moulines, 2019). Motivated by a recent result (Chewi et al., 2021) that shows better dimension dependence for a Metropolis-adjusted improvement of LMC, we investigate whether the current bound for (unadjusted) LMC is tight, and if not, what the optimal dimension dependence is. Our contributions.
The main contribution of this work is an improved Õ(√d) mixing time upper bound for LMC in 2-Wasserstein distance, under reasonable regularity assumptions. More specifically, we study LMC for sampling from a Gibbs distribution dµ ∝ exp(−f(x)) dx. Under the standard smoothness and strong-convexity assumptions, plus an additional linear growth condition on the third-order derivative of the potential (which also shares connections to popular assumptions in the frontier literature), our bound improves upon the previously best known Õ(d) result (Durmus & Moulines, 2019) in terms of dimension dependence. For comparison, note that it was known that discretized kinetic Langevin dynamics can lead to √d dependence on dimension (Cheng & Bartlett, 2018; Dalalyan & Riou-Durand, 2020), and some believe that it is the introduction of momentum that improves the dimension dependence, but our result shows that discretized overdamped Langevin dynamics (no momentum) can also have mixing time scaling like √d. In fact, it is important to mention that it was recently shown that the Metropolis-adjusted Euler-Maruyama discretization of overdamped Langevin dynamics (i.e., MALA) has an optimal dimension dependence of Õ(√d) (Chewi et al., 2021), while what we analyze here is the unadjusted version (i.e., LMC), which has the same dimension dependence (note, however, that our ε dependence is not as good as that for MALA; more discussion in Section 4). We also construct an example which shows that the mixing time of LMC is at least Ω̃(√d). Hence, our mixing time bound has the optimal dependence on both d and ε, in terms of order, for the family of target measures satisfying those regularity assumptions. Our theoretical analysis is further validated by empirical investigation of numerical examples. A minor contribution of this work is the error analysis framework that we use.
It is based on the classical mean-square analysis (Milstein & Tretyakov, 2013) in the numerical SDE literature, extended from finite time to infinite time. It is a minor contribution because this extension was already pioneered in the milestone work of Li et al. (2019), although we develop a refined version. As in classical mean-square analysis and in Li et al. (2019), the final (sampling, in this case) error is only half an order lower than the order p2 of the local strong integration error. This leads to an Õ( C^{1/(p2−1/2)} / ε^{1/(p2−1/2)} ) mixing time upper bound in 2-Wasserstein distance for the family of algorithms, where C is a constant containing various information about the underlying problem, e.g., the dimension d. Nevertheless, the following two points are new to this paper: (i) We weakened the requirement on local strong and weak errors. More precisely, Li et al. (2019) requires uniform bounds on local errors, but this can be a nontrivial requirement for SDE integrators; our improvement only requires non-uniform bounds (although establishing the same result consequently needs notably more effort, which is also included in this paper). (ii) The detailed expressions of our bounds are not the same as those in Li et al. (2019) (even if local errors could be uniformly bounded), and as we are interested in the dimension dependence of LMC, we work out constants and carefully track their dimension dependence. Bounds and constants in Li et al. (2019) were not specifically designed for tightly tracking dimension dependence, as the focus of that seminal paper was more on ε-dependence; consequently, its general error bound only led to an Õ(d) dependence in mixing time when applied to LMC (see Example 1 in Li et al. (2019)), whereas our result leads to Õ(√d). 2 PRELIMINARIES . Notation. Use the symbol x to denote a d-dimensional
vector and the plain symbol x to denote a scalar variable. ‖x‖ denotes the Euclidean vector norm. A numerical algorithm is denoted by A and its k-th iterate by x̄k. We slightly abuse notation by identifying measures with their density functions w.r.t. the Lebesgue measure. We use the conventions Õ(·) = O(·) log^{O(1)}(·) and Ω̃(·) = Ω(·) log^{O(1)}(·), i.e., the Õ(·)/Ω̃(·) notation ignores logarithmic factors. The 2-Wasserstein distance is W2(µ1, µ2) = ( inf_{(X,Y)∼Π(µ1,µ2)} E‖X − Y‖² )^{1/2}, where Π(µ1, µ2) is the set of couplings, i.e., all joint measures whose X and Y marginals are µ1 and µ2. Denote the target distribution by µ and the law of a random variable X by Law(X). Finally, denote the mixing time of a sampling algorithm A converging to its target distribution µ in 2-Wasserstein distance by τmix(ε; W2; A) = inf{ k ≥ 0 : W2(Law(x̄k), µ) ≤ ε }. SDE for Sampling. Consider a general SDE dxt = b(t, xt) dt + σ(t, xt) dBt (1) where b ∈ Rd is a drift term, σ ∈ Rd×l is a diffusion coefficient matrix, and Bt is an l-dimensional Wiener process. Under mild conditions (Pavliotis, 2014, Theorem 3.1), there exists a unique strong solution xt to Eq. (1). Some SDEs admit geometric ergodicity, so that their solutions converge exponentially fast to a unique invariant distribution; examples include the classical overdamped and kinetic Langevin dynamics, but are not limited to those (e.g., Mou et al. (2021); Li et al. (2020)). Such SDEs are desirable for sampling purposes, because one can make the target distribution the invariant distribution by choosing an SDE with an appropriate potential, and then solve the SDE and push the time t to infinity, so that (approximate) samples of the target distribution are obtained. Except for a few known cases, however, explicit solutions of Eq.
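For intuition about the W2 metric used throughout, note that in one dimension the optimal coupling simply matches order statistics, which yields a very short empirical estimator. This is a toy sketch for illustration only; the function name and sample sizes are ours.

```python
import numpy as np

def w2_empirical_1d(x, y):
    """Empirical 2-Wasserstein distance between two equal-size 1-D samples.

    In 1-D the optimal coupling sorts both samples, so W2 is the
    root-mean-square difference of the order statistics.
    """
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    assert x.shape == y.shape, "equal sample sizes assumed in this sketch"
    return float(np.sqrt(np.mean((x - y) ** 2)))
```

For example, between N(0, 1) and N(1, 1) the true W2 distance is exactly 1 (equal variances, means one apart), and the estimator recovers this from large samples.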
(1) are elusive and we have to resort to numerical schemes to simulate/integrate the SDE. Example schemes include, but are not limited to, the Euler-Maruyama method, Milstein methods, and Runge-Kutta methods (e.g., Kloeden & Platen (1992); Milstein & Tretyakov (2013)). With constant stepsize h, at the k-th iteration a typical numerical algorithm takes the previous iterate x̄k−1 and outputs a new iterate x̄k as an approximation of the solution xt of Eq. (1) at time t = kh. Langevin Monte Carlo Algorithm. The LMC algorithm is defined by the update rule x̄k = x̄k−1 − h∇f(x̄k−1) + √(2h) ξk, k = 1, 2, · · · (2) where {ξk}k≥1 are i.i.d. standard d-dimensional Gaussian vectors. LMC corresponds to an Euler-Maruyama discretization of the continuous overdamped Langevin dynamics dxt = −∇f(xt) dt + √2 dBt, which converges to the equilibrium distribution µ ∝ exp(−f(x)). Dalalyan (2017b) provided a non-asymptotic analysis of LMC. An Õ(d/ε²) mixing time bound in W2 for log-smooth and log-strongly-convex target measures (Dalalyan, 2017a; Cheng et al., 2018a; Durmus et al., 2019) has been established. It was further improved to Õ(d/ε) under an additional Hessian Lipschitz condition (Durmus & Moulines, 2019). Mixing time bounds of LMC in other statistical distances/divergences have also been studied, including total variation distance (Dalalyan, 2017b; Durmus & Moulines, 2017) and KL divergence (Cheng & Bartlett, 2018). Classical Mean-Square Analysis. A powerful framework for quantifying the global discretization error of a numerical algorithm for Eq. (1), i.e., ek = ( E‖xkh − x̄k‖² )^{1/2}, is mean-square analysis (e.g., Milstein & Tretyakov (2013)).
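A minimal implementation of update rule (2), checked on the simplest possible target f(x) = ‖x‖²/2 (the standard Gaussian, so ∇f(x) = x). This is a sketch for intuition; the step size, step count, and number of chains are arbitrary choices.

```python
import numpy as np

def lmc(grad_f, x0, h, n_steps, rng):
    """LMC, Eq. (2): x_k = x_{k-1} - h * grad_f(x_{k-1}) + sqrt(2h) * xi_k."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        x = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    return x

# Target mu ∝ exp(-f(x)) with f(x) = ||x||^2 / 2, so grad f(x) = x.
rng = np.random.default_rng(0)
d, h = 5, 0.01
chains = np.stack([lmc(lambda x: x, np.zeros(d), h, 500, rng)
                   for _ in range(500)])
# For this target the chain is AR(1); its stationary per-coordinate
# variance is 2h / (1 - (1-h)^2) = 1 / (1 - h/2), slightly above the true
# value 1 -- the discretization bias that the mixing-time analysis controls.
print(f"empirical variance: {chains.var():.3f}")
```

The empirical variance lands near 1, with a small upward bias of order h, illustrating why LMC without Metropolis adjustment never samples the target exactly at a fixed step size.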
Mean-square analysis studies how local integration errors propagate and accumulate into the global integration error; in particular, if the one-step (local) weak and strong errors (where both the exact and numerical solutions start from the same initial value x) satisfy ‖E xh − E x̄1‖ ≤ C1 (1 + E‖x‖²)^{1/2} h^{p1} (local weak error) and (E‖xh − x̄1‖²)^{1/2} ≤ C2 (1 + E‖x‖²)^{1/2} h^{p2} (local strong error) (3) over a time interval [0, Kh] for some constants C1, C2 > 0, p2 ≥ 1/2, and p1 ≥ p2 + 1/2, then the global error is bounded by ek ≤ C (1 + E‖x0‖²)^{1/2} h^{p2−1/2}, k = 1, · · · , K, for some constant C > 0 depending on Kh. Although classical mean-square analysis is only concerned with numerical integration error, the sampling error can also be inferred. However, there is a limitation that prevents its direct employment in analyzing sampling algorithms: the global error bound only holds in finite time, because the constant C can grow exponentially as K increases, rendering the bound useless when K → ∞.
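To see the order bookkeeping concretely: for the 1-D Gaussian target f(x) = x²/2, the LMC chain is exactly an AR(1) process, so its stationary W2 bias can be computed in closed form and its order in h read off numerically. This is a toy check of the h^{p2−1/2} scaling, not part of the paper's experiments.

```python
# For f(x) = x^2/2, one LMC step is x_k = (1-h) x_{k-1} + sqrt(2h) xi_k:
# an AR(1) process with Gaussian stationary law N(0, sigma_h^2), where
#   sigma_h^2 = 2h / (1 - (1-h)^2) = 1 / (1 - h/2).
# Between centered 1-D Gaussians, W2 is the difference of standard
# deviations, so the stationary sampling error is |sigma_h - 1| exactly.
for h in [0.2, 0.1, 0.05, 0.025]:
    sigma_h = (1 - h / 2) ** -0.5
    w2 = abs(sigma_h - 1.0)
    print(f"h = {h:5.3f}   W2 = {w2:.5f}   W2 / h = {w2 / h:.3f}")
# The ratio W2/h settles near 1/4: the bias is first order in h, i.e. a
# global order of p2 - 1/2 = 1, consistent with a local strong error of
# order p2 = 3/2 (Euler-Maruyama with additive noise).
```

Halving h halves the stationary W2 error, which is exactly the h^{p2−1/2} = h¹ behavior the framework predicts for this scheme.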
The manuscript considers the unadjusted Langevin Monte Carlo (LMC) algorithm and performs a non-asymptotic analysis of its convergence with respect to the 2-Wasserstein distance. The main contribution is a mixing time bound of O(d^{0.5}/\epsilon), which improves upon the existing O(d/\epsilon) bound of Durmus and Moulines. The O(d^{0.5}/\epsilon) bound is also shown to be optimal for the classes of probability distributions considered.
Sqrt(d) Dimension Dependence of Langevin Monte Carlo
( √ d/ ) mixing time bound for LMC , without warm start , under the common log-smooth and log-strongly-convex conditions , plus a growth condition on the 3rd-order derivative of the potential of target measures . This bound improves the best previously known Õ ( d/ ) result and is optimal ( in terms of order ) in both dimension d and accuracy tolerance for target measures satisfying the aforementioned assumptions . Our theoretical analysis is further validated by numerical experiments . 1 INTRODUCTION . The problem of sampling statistical distributions has attracted considerable attention , not only in the fields of statistics and scientific computing , but also in machine learning ( Robert & Casella , 2013 ; Andrieu et al. , 2003 ; Liu , 2008 ) ; for example , how various sampling algorithms scale with the dimension of the target distribution is a popular recent topic in statistical deep learning ( see discussions below for references ) . For samplers that can be viewed as discretizations of SDEs , the idea is to use an ergodic SDE whose equilibrium distribution agrees with the target distribution , and employ an appropriate numerical algorithm that discretizes ( the time of ) the SDE . The iterates of the numerical algorithm will approximately follow the target distribution when converged , and can be used for various downstream applications such as Bayesian inference and inverse problem ( Dashti & Stuart , 2017 ) . One notable example is the Langevin Monte Carlo algorithm ( LMC ) , which corresponds to an Euler-Maruyama discretization of the overdamped Langevin equation . Its study dated back to at least the 90s ( Roberts et al. , 1996 ) but keeps on leading to important discoveries , for example , on non-asymptotics and dimension dependence , which are relevant to machine learning ( e.g. , Dalalyan ( 2017a ; b ) ; Cheng et al . ( 2018a ) ; Durmus et al . 
( 2019 ) ; Durmus & Moulines ( 2019 ) ; Vempala & Wibisono ( 2019 ) ; Dalalyan & Riou-Durand ( 2020 ) ; Li et al . ( 2019 ) ; Erdogdu & Hosseinzadeh ( 2021 ) ; Mou et al . ( 2019 ) ; Lehec ( 2021 ) ) . LMC is closely related to SGD too ( e.g. , Mandt et al . ( 2017 ) ) . Many other examples exist , based on alternative SDEs and/or different discretizations ( e.g. , Dalalyan & Riou-Durand ( 2020 ) ; Ma et al . ( 2021 ) ; Mou et al . ( 2021 ) ; Li et al . ( 2020 ) ; Roberts & Rosenthal ( 1998 ) ; Chewi et al . ( 2021 ) ; Shen & Lee ( 2019 ) ) . Quantitatively characterizing the non-asymptotic sampling error of numerical algorithms is usually critical for choosing the appropriate algorithm for a specific downstream application , for providing practical guidance on hyperparameter selection and experiment design , and for designing improved samplers . A powerful tool that dates back to ( Jordan et al. , 1998 ) is a paradigm of non-asymptotic error analysis , namely to view sampling as optimization in probability space , and it led to many important recent results ( e.g. , Liu & Wang ( 2016 ) ; Dalalyan ( 2017a ) ; Wibisono ( 2018 ) ; Zhang et al . ( 2018 ) ; Frogner & Poggio ( 2020 ) ; Chizat & Bach ( 2018 ) ; Chen et al . ( 2018 ) ; Ma et al . ( 2021 ) ; Erdogdu & Hosseinzadeh ( 2021 ) ) . It works by choosing an objective functional , typically some statistical distances/divergences , and showing that the law of the iterates of sampling algorithms converges in that objective functional . However , the choice of the objective functional often needs to be customized for different sampling algorithms . For example , KL divergence works for LMC ( Cheng & Bartlett , 2018 ) , but a carefully hand-crafted cross term needs to be added to KL divergence for analyzing KLMC ( Ma et al. , 2021 ) . Even for the same underlying SDE , different discretization schemes exist and lead to different sampling algorithms , and the analyses of them had usually been case by case ( e.g. 
, Cheng et al. (2018b); Dalalyan & Riou-Durand (2020); Shen & Lee (2019)). Therefore, it would be a desirable complement to have a unified, general framework for studying the non-asymptotic error of SDE-based sampling algorithms. Toward this goal, an alternative approach to analysis has recently started attracting attention, namely to resort to the numerical analysis of SDE integrators (e.g., Milstein & Tretyakov (2013); Kloeden & Platen (1992)) and quantitatively connect the integration error to the sampling error. One remarkable work in this direction is Li et al. (2019), which will be discussed in greater detail later on. The main tool of analysis in this paper will be a strengthened version (in specific aspects that will be clarified soon) of the result in Li et al. (2019). Although this analysis framework is rather general and applicable to a broad family of numerical methods that discretize contractive (possibly after a coordinate transformation) SDEs, the main innovation focuses on a specific sampling algorithm, namely LMC, which is widely used in practice. Its stochastic gradient version is implemented in common machine learning systems, such as Tensorflow (Abadi et al., 2016), and is the off-the-shelf algorithm for large scale Bayesian inference. With the ever-growing size of parameter space, the non-asymptotic error of LMC is of central theoretical and practical interest, in particular, its dependence on the dimension of the sample space. The best currently known upper bound of the mixing time in 2-Wasserstein distance for LMC is Õ(d) (Durmus & Moulines, 2019). Motivated by a recent result (Chewi et al., 2021) that shows better dimension dependence for a Metropolis-Adjusted improvement of LMC, we investigate whether the current bound for (unadjusted) LMC is tight, and if not, what the optimal dimension dependence is. Our contributions .
The main contribution of this work is an improved Õ(√d) mixing time upper bound for LMC in 2-Wasserstein distance, under reasonable regularity assumptions. More specifically, we study LMC for sampling from a Gibbs distribution dµ ∝ exp(−f(x))dx. Under the standard smoothness and strong-convexity assumptions, plus an additional linear growth condition on the third-order derivative of the potential (which also shares connections to popular assumptions in the frontier literature), our bound improves upon the previously best known Õ(d) result (Durmus & Moulines, 2019) in terms of dimension dependence. For a comparison, note it was known that discretized kinetic Langevin dynamics can lead to √d dependence on dimension (Cheng & Bartlett, 2018; Dalalyan & Riou-Durand, 2020), and some believe that it is the introduction of momentum that improves the dimension dependence; our result shows, however, that discretized overdamped Langevin (no momentum) can also have mixing time scaling like √d. In fact, it is important to mention that it was recently shown that the Metropolis-Adjusted Euler-Maruyama discretization of overdamped Langevin (i.e., MALA) has an optimal dimension dependence of Õ(√d) (Chewi et al., 2021), while what we analyze here is the unadjusted version (i.e., LMC), and it has the same dimension dependence (note however that our ε dependence is not as good as that for MALA; more discussion in Section 4). We also construct an example which shows that the mixing time of LMC is at least Ω̃(√d). Hence, our mixing time bound has the optimal dependence on both d and ε, in terms of order, for the family of target measures satisfying those regularity assumptions. Our theoretical analysis is further validated by empirical investigation of numerical examples. A minor contribution of this work is the error analysis framework that we use .
It is based on the classical mean-square analysis (Milstein & Tretyakov, 2013) in the numerical SDE literature, however extended from finite time to infinite time. It is a minor contribution because this extension was already pioneered in the milestone work of Li et al. (2019), although we will develop a refined version. Same as in classical mean-square analysis and in Li et al. (2019), the final (sampling, in this case) error is only half an order lower than the order of the local strong integration error (p2). This will lead to a Õ(C^{1/(p2−1/2)} / ε^{1/(p2−1/2)}) mixing time upper bound in 2-Wasserstein distance for the family of algorithms, where C is a constant containing various information about the underlying problem, e.g., the dimension d. Nevertheless, the following two are new to this paper: (i) We weakened the requirement on local strong and weak errors. More precisely, Li et al. (2019) requires uniform bounds on local errors, but this could be a nontrivial requirement for SDE integrators; the improvement here only requires non-uniform bounds (although establishing the same result consequently needs notably more effort; these efforts are included in this paper too). (ii) The detailed expressions of our bounds are not the same as those in Li et al. (2019) (even if local errors could be uniformly bounded), and as we are interested in the dimension-dependence of LMC, we work out constants and carefully track their dimension-dependences. Bounds and constants in Li et al. (2019) might not be specifically designed for tightly tracking dimension dependences, as the focus of their seminal paper was more on ε dependence; consequently, its general error bound only led to a Õ(d)-dependence in mixing time when applied to LMC (see Example 1 in Li et al. (2019)), whereas our result leads to Õ(√d). 2 PRELIMINARIES . Notation We use x to denote a d-dimensional
vector, and a plain symbol x to denote a scalar variable. ‖x‖ denotes the Euclidean vector norm. A numerical algorithm is denoted by A and its k-th iterate by x̄k. We slightly abuse notation by identifying measures with their density functions w.r.t. the Lebesgue measure. We use the conventions Õ(·) = O(·) log^{O(1)}(·) and Ω̃(·) = Ω(·) log^{O(1)}(·), i.e., the Õ(·)/Ω̃(·) notation ignores logarithmic factors. Denote the 2-Wasserstein distance by W2(µ1, µ2) = (inf_{(X,Y)∼Π(µ1,µ2)} E‖X − Y‖²)^{1/2}, where Π(µ1, µ2) is the set of couplings, i.e., all joint measures whose X and Y marginals are µ1 and µ2. Denote the target distribution by µ and the law of a random variable X by Law(X). Finally, denote the mixing time of a sampling algorithm A converging to its target distribution µ in 2-Wasserstein distance by τmix(ε; W2; A) = inf{k ≥ 0 : W2(Law(x̄k), µ) ≤ ε}. SDE for Sampling Consider a general SDE dxt = b(t, xt)dt + σ(t, xt)dBt (1) where b ∈ R^d is a drift term, σ ∈ R^{d×l} is a diffusion coefficient matrix and Bt is an l-dimensional Wiener process. Under mild conditions (Pavliotis, 2014, Theorem 3.1), there exists a unique strong solution xt to Eq. (1). Some SDEs admit geometric ergodicity, so that their solutions converge exponentially fast to a unique invariant distribution; examples include the classical overdamped and kinetic Langevin dynamics, but are not limited to those (e.g., Mou et al. (2021); Li et al. (2020)). Such SDEs are desirable for sampling purposes, because one can make the target distribution the invariant distribution by choosing an SDE with an appropriate potential, then solve the SDE for xt and push the time t to infinity, so that (approximate) samples of the target distribution can be obtained. Except for a few known cases, however, explicit solutions of Eq.
(1) are elusive and we have to resort to numerical schemes to simulate/integrate the SDE. Example schemes include, but are not limited to, the Euler-Maruyama method, Milstein methods and Runge-Kutta methods (e.g., Kloeden & Platen (1992); Milstein & Tretyakov (2013)). With constant stepsize h, at the k-th iteration a typical numerical algorithm takes the previous iterate x̄k−1 and outputs a new iterate x̄k as an approximation of the solution xt of Eq. (1) at time t = kh. Langevin Monte Carlo Algorithm The LMC algorithm is defined by the following update rule x̄k = x̄k−1 − h∇f(x̄k−1) + √(2h) ξk, k = 1, 2, · · · (2) where {ξk}_{k∈Z>0} are i.i.d. standard d-dimensional Gaussian vectors. LMC corresponds to an Euler-Maruyama discretization of the continuous overdamped Langevin dynamics dxt = −∇f(xt)dt + √2 dBt, which converges to the equilibrium distribution µ ∝ exp(−f(x)). Dalalyan (2017b) provided a non-asymptotic analysis of LMC. An Õ(d/ε²) mixing time bound in W2 for log-smooth and log-strongly-convex target measures (Dalalyan, 2017a; Cheng et al., 2018a; Durmus et al., 2019) has been established. It was further improved to Õ(d/ε) under an additional Hessian Lipschitz condition (Durmus & Moulines, 2019). Mixing time bounds of LMC in other statistical distances/divergences have also been studied, including total variation distance (Dalalyan, 2017b; Durmus & Moulines, 2017) and KL divergence (Cheng & Bartlett, 2018). Classical Mean-Square Analysis A powerful framework for quantifying the global discretization error of a numerical algorithm for Eq. (1), i.e., ek = (E‖x_{kh} − x̄k‖²)^{1/2}, is mean-square analysis (e.g., Milstein & Tretyakov (2013)).
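Before turning to error analysis, the LMC update rule (2) can be sketched in a few lines of numpy. This is a minimal illustration for a standard Gaussian target (so ∇f(x) = x), not the paper's experimental setup, and all names and parameter values are our own choices:

```python
import numpy as np

def lmc_sample(grad_f, x0, h, n_steps, rng):
    """Iterate the LMC update x_k = x_{k-1} - h * grad_f(x_{k-1}) + sqrt(2h) * xi_k."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)
        x = x - h * grad_f(x) + np.sqrt(2.0 * h) * xi
    return x

# Target: standard Gaussian, f(x) = ||x||^2 / 2, hence grad_f(x) = x.
# Run 2000 chains in parallel by treating the chain axis as a batch dimension.
rng = np.random.default_rng(0)
samples = lmc_sample(lambda x: x, np.full((2000, 5), 3.0), h=0.1, n_steps=500, rng=rng)
print(samples.mean(), samples.var())  # near 0 and 1 (the discretization bias is O(h))
```

Even in this toy setting the bias of the unadjusted chain is visible: the stationary variance of the discretized chain is 1/(1 − h/2), slightly above the target's 1, which is exactly the kind of stepsize-dependent error the mixing time bounds quantify.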
Mean-square analysis studies how local integration errors propagate and accumulate into the global integration error; in particular, if the one-step (local) weak error and strong error (both the exact and numerical solutions start from the same initial value x) satisfy ‖E xh − E x̄1‖ ≤ C1(1 + E‖x‖²)^{1/2} h^{p1} (local weak error) and (E‖xh − x̄1‖²)^{1/2} ≤ C2(1 + E‖x‖²)^{1/2} h^{p2} (local strong error) (3) over a time interval [0, Kh] for some constants C1, C2 > 0, p2 ≥ 1/2 and p1 ≥ p2 + 1/2, then the global error is bounded by ek ≤ C(1 + E‖x0‖²)^{1/2} h^{p2−1/2}, k = 1, · · · , K for some constant C > 0 dependent on Kh. Although classical mean-square analysis is only concerned with numerical integration error, sampling error can also be inferred. However, there is a limitation that prevents its direct employment in analyzing sampling algorithms: the global error bound only holds in finite time because the constant C can grow exponentially as K increases, rendering the bound useless when K → ∞.
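To illustrate how local errors of the form (3) accumulate into a global strong error that shrinks with the stepsize h, the following sketch measures the Euler-Maruyama error for overdamped Langevin dynamics with a quadratic potential (an Ornstein-Uhlenbeck process), using a fine-grid discretization driven by the same Brownian increments as the reference solution. All names, stepsizes, and path counts here are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

def em_strong_error(h, T=1.0, n_paths=2000, refine=64, seed=1):
    """Global strong error at time T of Euler-Maruyama for dx = -x dt + sqrt(2) dB,
    measured against a fine-grid EM reference driven by the same Brownian path."""
    rng = np.random.default_rng(seed)
    K = int(round(T / h))      # number of coarse steps
    hf = h / refine            # fine stepsize
    x_coarse = np.full(n_paths, 1.0)
    x_fine = np.full(n_paths, 1.0)
    for _ in range(K):
        # Fine Brownian increments over one coarse step; their sum is the coarse increment,
        # so both discretizations see the same underlying Wiener path.
        dW = rng.standard_normal((refine, n_paths)) * np.sqrt(hf)
        for dw in dW:
            x_fine = x_fine - hf * x_fine + np.sqrt(2.0) * dw
        x_coarse = x_coarse - h * x_coarse + np.sqrt(2.0) * dW.sum(axis=0)
    # Root-mean-square (strong) error at time T.
    return np.sqrt(np.mean((x_fine - x_coarse) ** 2))

errs = [em_strong_error(h) for h in (0.2, 0.1, 0.05)]
print(errs)  # errors shrink as h decreases
assert errs[0] > errs[1] > errs[2]
```

The decay of the error with h is the empirical counterpart of the h^{p2−1/2} global bound; the difficulty addressed in the text is showing that the prefactor does not blow up as the time horizon Kh goes to infinity.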
The paper is concerned with the non-asymptotic analysis of SDE-based sampling algorithms and has two main contributions. Firstly, it improves upon the general framework of [Li et al. (2019)](https://proceedings.neurips.cc/paper/2019/hash/7d265aa7147bd3913fb84c7963a209d1-Abstract.html) and, in particular, does not require uniform boundedness assumptions for the local strong/weak errors. This means their first result, Theorem 3.3, is easier to apply to numerical schemes for contractive SDEs and can already be seen to unify previous works (such as the analysis of KLMC in [Dalalyan et al. (2020)](https://projecteuclid.org/journals/bernoulli/volume-26/issue-3/On-sampling-from-a-log-concave-density-using-kinetic-Langevin/10.3150/19-BEJ1178.short)). Secondly, the authors consider the Unadjusted Langevin Algorithm (ULA) and derive a 2-Wasserstein bound of $\mathcal{O}(\sqrt{d} h)$, which has optimal dependence on both the dimension $d$ and step size $h$. To achieve this, they impose an additional smoothness condition on $f$ that is less restrictive than having the Hessian $\nabla^2 f$ Lipschitz continuous. The authors support this theory with numerical examples, where they demonstrate that a lower bound on the 2-Wasserstein error scales as $\mathcal{O}(\sqrt{d})$ and $\mathcal{O}(h)$.
SP:1811e49ae2fadeeb486fb1058875193137f20675
Learning Identity-Preserving Transformations on Data Manifolds
1 INTRODUCTION . A goal of many machine learning models is to accurately identify objects as they undergo natural transformations, a task that humans are adept at. According to the manifold hypothesis, natural variations in high-dimensional data lie on or near a low-dimensional, nonlinear manifold (Fefferman et al., 2016). Additionally, the manifolds representing different classes are separated by low-density regions (Rifai et al., 2011a). Natural physical laws govern the possible transformations that objects can undergo, and many of the identity-preserving transformations (e.g., changes in viewpoint, color, and lighting) are shared among classes of data. Sharing of transformations between classes enables increased efficiency in defining data variations: a model can represent a limited set of transformations that describes a majority of the variations in many classes. Several machine learning models incorporate specific identity-preserving transformations that are shared among a large number of classes in order to generalize to unseen data. These include equivariant models that incorporate transformations like translation and rotation into intermediate network layers (Cohen & Welling, 2016; Cohen et al., 2018) and data augmentation techniques that apply known identity-preserving variations to data while training (Cubuk et al., 2019; Ho et al., 2019; Lim et al., 2019; Sohn et al., 2020; He et al., 2020; Chen et al., 2020). However, many datasets have natural transformations shared among classes that are not easily prespecified from intuition, making it critical that we develop a model that can learn both 1) a representation for these transformations without explicit transformation labels and 2) the locations in the data space where each transformation is likely to be relevant. Manifold learning strategies estimate the low-dimensional manifold structure of data.
A subset of these techniques learn to transform points on the manifold through nonlinear Lie group operators (Rao & Ruderman, 1999; Miao & Rao, 2007; Culpepper & Olshausen, 2009; Sohl-Dickstein et al., 2010; Cohen & Welling, 2014; Hauberg et al., 2016; Connor & Rozell, 2020; Connor et al., 2021). Lie group operators represent infinitesimal transformations which can be applied to data through an exponential mapping to transform points along a manifold, and a manifold can be globally defined by a set of operators that each move in a different direction along it (Hoffman, 1966; Dodwell, 1983). A Lie group operator model is well suited for representing natural data variations because the operators can be learned from the data, applied to data points to transform them beyond their local neighborhoods, and used to estimate geodesic paths. While Lie group operator models have many benefits, previous approaches exhibit the two shortcomings noted above. First, to learn Lie group operators that represent a data manifold, pairs of training points are selected which lie within a neighborhood of one another. The training objective encourages efficient paths between these nearby points, and the choice of training point pairs influences the types of manifold transformations that are learned. Recent papers incorporating Lie group operators into machine learning models have either used predefined operators that represent known transformation groups (e.g., the 3D rotation group SO(3) (Falorsi et al., 2019)), required transformation labels for selecting point pairs when training (Connor & Rozell, 2020), or randomly selected pairs of points from the same class (Connor et al., 2021). To learn an effective model with datasets having no labeled transformation structure, we require a point pair selection strategy that identifies points that are related through the transformations the model aims to learn.
Second, existing Lie group operator models have lacked a method for determining which regions of the manifold are appropriate for each operator, meaning that every operator is equally likely to be used at every point on the manifold. This is a flawed assumption because, while many transformations are shared between classes, there are also data variations that are unique to specific data classes. Additionally, in a dataset with several manifolds (each representing a different class), there is a limited extent to which a transformation can be applied without moving a point onto another manifold. The main contributions of this paper are the development of methods to address the two critical shortcomings of Lie group operator-based manifold models noted above. Specifically, motivated by finding perceptually similar training samples without transformation labels, we first introduce a point pair selection strategy to learn a manifold representation of natural variations shared across multiple data classes without requiring transformation labels. Second, we develop a method that uses a pretrained classifier (measuring the identity preservation of transformed samples) to learn the local regions where each operator is likely to be used while preserving the identity of transformed samples. This approach enables us to analyze the local structure of the data manifold in the context of the learned operators and to describe the invariances of the classifier. We demonstrate the efficacy of these strategies in the context of the Manifold Autoencoder (MAE) model (Connor & Rozell, 2020) to learn semantically meaningful transformations on MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), and CelebA (Liu et al., 2015). 2 BACKGROUND . Manifold Learning Manifold learning models estimate the low-dimensional structure of high-dimensional data by utilizing the property that local neighborhoods on the manifold are approximately linear.
Traditional techniques represent the manifold through a low-dimensional embedding of the data points (Tenenbaum et al., 2000; Roweis & Saul, 2000; Belkin & Niyogi, 2003; Maaten & Hinton, 2008) or through estimates of linear tangent planes that represent local directions of manifold motion (Dollár et al., 2007; Bengio & Monperrus, 2005; Park et al., 2015). While these traditional manifold learning approaches are useful for understanding low-dimensional data structure, in many cases the input data space is an inefficient representation of the data. For example, data in the pixel space suffers from the curse of dimensionality and cannot be smoothly interpolated while maintaining identity (Bengio et al., 2005). Many approaches have used neural networks to learn a low-dimensional latent space in which manifold models can be incorporated. The contractive autoencoder (CAE) estimates manifold tangents by minimizing the norm of the Jacobian of the encoder network, encouraging invariance of latent vectors to image space perturbations (Rifai et al., 2011c;b;a; Kumar et al., 2017). Several methods estimate geodesic paths in the latent space of a trained variational autoencoder (VAE) model (Arvanitidis et al., 2018; Chen et al., 2018; Shao et al., 2018; Arvanitidis et al., 2019) and use this approach to learn VAEs with priors that are estimated using Riemannian metrics computed in the latent space (Arvanitidis et al., 2021; Kalatzis et al., 2020). Lie Group Operators A Lie group is a group of continuous transformations which also defines a manifold by representing infinitesimal transformations that can be applied to input data (Hoffman, 1966; Dodwell, 1983). Several methods incorporate Lie groups into neural networks to represent data transformations that are identity-preserving within the model (Cohen & Welling, 2014; Cohen et al., 2018; Cosentino et al., 2021).
A prevalent strategy is to learn a dictionary of Lie group operators that are mapped to a specific group element through the matrix exponential expm(·) (Rao & Ruderman, 1999; Ham & Lee, 2006; Miao & Rao, 2007; Culpepper & Olshausen, 2009; Sohl-Dickstein et al., 2010; Cohen & Welling, 2014; Connor & Rozell, 2020; Connor et al., 2021). In these models, each operator Ψm, called a transport operator, describes a single direction along the manifold and is parameterized by a single coefficient cm. Given an initial data point z, the transport operators define a generative model where transformations can be derived by sampling sparse coefficients cm ∼ Laplace(0, ζ): ẑ = expm(∑_{m=1}^M Ψm cm) z + ε, (1) where ε ∼ N(0, σ²I). The manifold autoencoder (MAE) incorporates the transport operator model into the latent space of an autoencoder to learn a dictionary of operators that represent the global, nonlinear manifold structure in the latent space (Connor & Rozell, 2020). This model has been shown to effectively learn reusable operators with transformation supervision, and it will provide the context for demonstrating the effectiveness of the methods developed in this paper. Disentanglement One class of methods that also aims to identify factors of variation in data is disentanglement methods, which learn features that each vary one independent characteristic of the data (Higgins et al., 2018). In a supervised learning scenario, there are many disentangling techniques that separate data style from content by encouraging similarity between the content features of samples from the same class (Tenenbaum & Freeman, 2000; Reed et al., 2014; Mathieu et al., 2016; Bouchacourt et al., 2018) or encouraging features to map to known class (Cheung et al., 2015) or transformation labels (Hinton et al., 2011; Kulkarni et al., 2015; Yang et al., 2015).
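Stepping back to the transport-operator generative model in equation (1), it can be sketched in numpy with the noise term omitted: a single operator Ψ is taken to be the generator of 2-D rotations, so expm(cΨ) moves a point along a circular manifold. The helper names are ours, and the truncated-Taylor matrix exponential is only a stand-in for a library routine:

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via a truncated Taylor series (adequate for small ||A||)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def transport(z, Psis, cs):
    """z_hat = expm(sum_m c_m * Psi_m) @ z, i.e. equation (1) with the noise omitted."""
    A = sum(c * P for c, P in zip(cs, Psis))
    return expm(A) @ z

# A single operator: the generator of 2-D rotations, so the "manifold" is a circle.
Psi = np.array([[0.0, -1.0],
                [1.0,  0.0]])
z = np.array([1.0, 0.0])
z_hat = transport(z, [Psi], [np.pi / 2])  # a quarter turn along the circle
print(z_hat)  # approximately [0., 1.]
```

Because the coefficient enters through the exponential map, varying c traces a continuous path along the manifold rather than a straight line in the ambient space, which is what lets a learned operator be applied well beyond a point's local neighborhood.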
There are also techniques that can disentangle latent representations without labels, like InfoGAN (Chen et al., 2016) and β-VAE (Higgins et al., 2017). The goal of our work is distinct from disentanglement work because, rather than identifying independently varying factors of variation, we aim to learn nonlinear operators that correspond to transformations on the data manifold (which may not vary independently). Our approach is advantageous because it can faithfully represent longer transformation paths, learn variations while maintaining image reconstruction, and change the number of learned variations by increasing or decreasing the number of learned operators. 3 METHODS . The MAE learns a low-dimensional latent representation of the data by defining an encoder function f : X → Z that maps high-dimensional data points x ∈ R^D to low-dimensional latent vectors z ∈ R^d and a decoder function g : Z → X that maps the latent vectors back into the data space (Connor & Rozell, 2020). Transport operators Ψ are incorporated into the latent space to learn manifold-based transformations. Before learning the transport operators, the autoencoder is pretrained to extract a latent representation of the data using the traditional autoencoder reconstruction objective. After pretraining, the autoencoder weights are fixed and the operators are trained with the following objective, which encourages the learning of transport operators that generate efficient paths between latent vectors z0 and z1 (coinciding with f(x0) and f(x1)) that are nearby on the manifold: LΨ = (1/2)‖z1 − expm(∑_{m=1}^M Ψm cm) z0‖²₂ + (γ/2)∑_m ‖Ψm‖²_F + ζ‖c‖₁, (2) where γ, ζ > 0. Objective (2) is minimized via an alternating minimization scheme. Specifically, at each training iteration, point pairs x0 and x1 are selected in the input space and encoded into the latent space as z0 and z1.
Then, coefficients c are inferred between the encoded latent vectors (by minimizing (2) with respect to c) to estimate the best path between z0 and z1. After inference, the coefficients are fixed and a gradient step is taken to minimize (2) with respect to the transport operator weights. Once learned, these operators represent different types of motion that traverse the manifold, and they can be combined to generate natural paths on the manifold. After fitting transport operators to the latent space of a fixed autoencoder network, there is a final fine-tuning training phase that updates the network weights and transport operators simultaneously using a joint objective that combines the autoencoder reconstruction loss with LΨ. This fine-tuning step addresses a potential mismatch between the data manifold and the learned latent structure by adapting the latent structure to fit the transport operators learned between the selected training point pairs. We empirically find that breaking training into these three phases increases the stability with which the transport operators can be learned. Given the context of this MAE model, in the following sections we describe our contributions.
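The coefficient-inference half of the alternating scheme can be sketched as follows. A hypothetical transport_loss implements objective (2); for brevity the minimization over c is done by a grid search instead of the gradient-based inference used in the paper, and the two latent points sit on a circle so that a single rotation-generator operator suffices:

```python
import numpy as np

def expm(A, terms=40):
    """Truncated-Taylor matrix exponential, sufficient for this 2x2 example."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def transport_loss(c, z0, z1, Psis, gamma=0.01, zeta=0.01):
    """Objective (2): squared path error + Frobenius penalty on the Psi_m + l1 on c."""
    A = sum(cm * P for cm, P in zip(c, Psis))
    path = z1 - expm(A) @ z0
    frob = gamma / 2 * sum(np.sum(P ** 2) for P in Psis)
    return 0.5 * path @ path + frob + zeta * np.abs(c).sum()

# Two latent points on a circle; one rotation-generator transport operator.
Psi = np.array([[0.0, -1.0], [1.0, 0.0]])
z0, z1 = np.array([1.0, 0.0]), np.array([np.cos(0.5), np.sin(0.5)])

# Inference step: the paper minimizes (2) over c with gradient-based inference;
# a coarse grid search stands in for it here, purely for illustration.
grid = np.linspace(-np.pi, np.pi, 629)
best = min(grid, key=lambda cm: transport_loss(np.array([cm]), z0, z1, [Psi], zeta=0.0))
print(round(float(best), 2))  # close to the true rotation angle 0.5
```

In the full alternating scheme, the inferred c would then be frozen and a gradient step taken on the Ψm themselves; the ℓ1 term on c (dropped above for the one-operator toy case) is what encourages each path to use only a few operators.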
This paper proposes to learn natural transformations in datasets with a manifold autoencoder (MAE), where the underlying identity-preserving transformations are not easily identifiable. By using Lie group operators, this problem reduces to learning paths/motions in the latent space of the MAE. The main challenge in training the MAE is how to choose a transformation pair (a point and its identity-preserving transformed counterpart) in an unsupervised way. This paper introduces the idea of using the penultimate layer of a classifier pretrained on ImageNet to choose such point pairs and thus learn the MAE more effectively. The proposed method is validated on three datasets.
SP:37aab9484f502a34029583b62eeb6657326bd0c3
Learning Identity-Preserving Transformations on Data Manifolds
1 INTRODUCTION . A goal of many machine learning models is to accurately identify objects as they undergo natural transformations – a task that humans are adept at . According to the manifold hypothesis , natural variations in high-dimensional data lie on or near a low-dimensional , nonlinear manifold ( Fefferman et al. , 2016 ) . Additionally , the manifolds representing different classes are separated by low density regions ( Rifai et al. , 2011a ) . Natural physical laws govern the possible transformations that objects can undergo and many of the identity-preserving transformations ( e.g. , changes in viewpoint , color , and lighting ) are shared among classes of data . Sharing of transformations between classes enables increased efficiency in defining data variations – a model can represent a limited set of transformations that can describe a majority of variations in many classes . Several machine learning models incorporate specific identity-preserving transformations that are shared among a large number of classes to generalize the performance of their model to unseen data . These include equivariant models that incorporate transformations like translation and rotation into intermediate network layers ( Cohen & Welling , 2016 ; Cohen et al. , 2018 ) and data augmentation techniques that apply known identity-preserving variations to data while training ( Cubuk et al. , 2019 ; Ho et al. , 2019 ; Lim et al. , 2019 ; Sohn et al. , 2020 ; He et al. , 2020 ; Chen et al. , 2020 ) . However , many datasets have natural transformations shared among classes that are not easily prespecified from intuition , making it critical that we develop a model that can learn both 1 ) a representation for these transformations without explicit transformation labels and 2 ) the context for which locations in the data space each transformation is likely to be relevant . Manifold learning strategies estimate the low-dimensional manifold structure of data . 
A subset of these techniques learn to transform points on the manifold through nonlinear Lie group operators ( Rao & Ruderman , 1999 ; Miao & Rao , 2007 ; Culpepper & Olshausen , 2009 ; Sohl-Dickstein et al. , 2010 ; Cohen & Welling , 2014 ; Hauberg et al. , 2016 ; Connor & Rozell , 2020 ; Connor et al. , 2021 ) . Lie group operators represent infinitesimal transformations which can be applied to data through an exponential mapping to transform points along a manifold , and a manifold can be globally defined by a set of operators that each move in different directions along it ( Hoffman , 1966 ; Dodwell , 1983 ) . A Lie group operator model is well-suited for representing natural data variations because the operators can be learned from the data , applied to data points to transform them beyond their local neighborhoods , and used to estimate geodesic paths . While the Lie group operator models have many benefits , previous approaches demonstrate the two shortcomings noted above . First , to learn Lie group operators that represent a data manifold , pairs of training points are selected which lie within a neighborhood of one another . The training objective encourages efficient paths between these nearby points and the choice of training point pairs influences the types of manifold transformations that are learned . Recent papers incorporating Lie group operators into machine learning models have either used predefined operators that represent known transformations groups ( e.g. , the 3D rotational group SO ( 3 ) ( Falorsi et al. , 2019 ) ) , required transformation labels for selecting point pairs when training ( Connor & Rozell , 2020 ) , or randomly selected pairs of points from the same class ( Connor et al. , 2021 ) . To learn an effective model with datasets having no labeled transformation structure , we require a point pair selection strategy that identifies points that are related through the transformations the model aims to learn . 
Second , existing Lie group operator models have lacked a method for determining which regions of the manifold are appropriate for each operator , meaning that every operator is equally likely to be used at every point on the manifold . This is a flawed assumption because , while many transformations are shared between classes , there are also data variations that are unique to specific data classes . Additionally , in a dataset with several manifolds ( each representing a different class ) , there is a limited extent to which a transformation can be applied without moving a point onto another manifold . The main contributions of this paper are the development of methods to address the two critical shortcomings of Lie group operator-based manifold models noted above . Specifically , motivated by finding perceptually similar training samples without transformation labels , we first introduce a point pair selection strategy to learn a manifold representation of natural variations shared across multiple data classes without requiring transformations labels . Second , we develop a method that uses a pretrained classifier ( measuring identity-preservation of transformed samples ) to learn the local regions where each operator is likely to be used while preserving the identity of transformed samples . This approach enables us to analyze the local structure of the data manifold in the context of the learned operators and to describe the invariances of the classifier . We demonstrate the efficacy of these strategies in the context of the Manifold Autoencoder ( MAE ) model ( Connor & Rozell , 2020 ) to learn semantically meaningful transformations on MNIST ( LeCun et al. , 1998 ) , Fashion MNIST ( Xiao et al. , 2017 ) , and CelebA ( Liu et al. , 2015 ) . 2 BACKGROUND . Manifold Learning Manifold learning models estimate the low-dimensional structure of highdimensional data by utilizing the property that local neighborhoods on the manifold are approximately linear . 
Traditional techniques represent the manifold through a low-dimensional embedding of the data points ( Tenenbaum et al. , 2000 ; Roweis & Saul , 2000 ; Belkin & Niyogi , 2003 ; Maaten & Hinton , 2008 ) or through estimates of linear tangent planes that represent local directions of manifold motion ( Dollár et al. , 2007 ; Bengio & Monperrus , 2005 ; Park et al. , 2015 ) . While these traditional manifold learning approaches are useful for understanding low-dimensional data structure , in many cases the input data space is an inefficient representation of the data . For example , data in the pixel space suffers from the curse of dimensionality and can not be smoothly interpolated while maintaining identity ( Bengio et al. , 2005 ) . Many approaches have used neural networks to learn a low-dimensional latent space in which manifold models can be incorporated . The contractive autoencoder ( CAE ) estimates manifold tangents by minimizing the the Jacobian of the encoder network , encouraging invariance of latent vectors to image space perturbations ( Rifai et al. , 2011c ; b ; a ; Kumar et al. , 2017 ) . Several methods estimate geodesic paths in the latent space of a trained variational autoencoder ( VAE ) model ( Arvanitidis et al. , 2018 ; Chen et al. , 2018 ; Shao et al. , 2018 ; Arvanitidis et al. , 2019 ) and use this approach to learn VAEs with priors that are estimated using the Riemannian metrics computed in the latent space ( Arvanitidis et al. , 2021 ; Kalatzis et al. , 2020 ) . Lie Group Operators A Lie group is a group of continuous transformations which also defines a manifold by representing infinitesimal transformations that can be applied to input data ( Hoffman , 1966 ; Dodwell , 1983 ) . Several methods incorporate Lie groups into neural networks to represent data transformations that are identity-preserving within the model ( Cohen & Welling , 2014 ; Cohen et al. , 2018 ; Cosentino et al. , 2021 ) . 
A prevalent strategy is to learn a dictionary of Lie group operators that are mapped to a specific group element through the matrix exponential expm ( · ) ( Rao & Ruderman , 1999 ; Ham & Lee , 2006 ; Miao & Rao , 2007 ; Culpepper & Olshausen , 2009 ; Sohl-Dickstein et al. , 2010 ; Cohen & Welling , 2014 ; Connor & Rozell , 2020 ; Connor et al. , 2021 ) . In these models , each operator Ψ_m , called a transport operator , describes a single direction along the manifold and is parameterized by a single coefficient c_m . Given an initial data point z , the transport operators define a generative model where transformations can be derived from sampling sparse coefficients c_m ∼ Laplace ( 0 , ζ ) : ẑ = expm ( ∑_{m=1}^{M} Ψ_m c_m ) z + ε , ( 1 ) where ε ∼ N ( 0 , σ² I ) . The manifold autoencoder ( MAE ) incorporates the transport operator model into the latent space of an autoencoder to learn a dictionary of operators that represent the global , nonlinear manifold structure in the latent space ( Connor & Rozell , 2020 ) . This model has been shown to effectively learn reusable operators with transformation supervision , and it will provide the context for demonstrating the effectiveness of the methods developed in this paper . Disentanglement One class of methods that also aims to identify factors of variation in data is disentanglement methods , which learn features that each vary one independent characteristic of the data ( Higgins et al. , 2018 ) . In a supervised learning scenario , there are many disentangling techniques that separate data style from content by encouraging similarity between the content features in samples from the same class ( Tenenbaum & Freeman , 2000 ; Reed et al. , 2014 ; Mathieu et al. , 2016 ; Bouchacourt et al. , 2018 ) or encouraging features to map to known class ( Cheung et al. , 2015 ) or transformation labels ( Hinton et al. , 2011 ; Kulkarni et al. , 2015 ; Yang et al. , 2015 ) .
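The generative model in ( 1 ) can be sketched almost directly with scipy's matrix exponential . The toy dimensions and hyperparameter values below are illustrative assumptions :

```python
import numpy as np
from scipy.linalg import expm

def sample_transformed_point(z, Psi, zeta=0.1, sigma=0.01, rng=None):
    """Eq. (1): draw sparse Laplace coefficients c_m and apply the
    group element expm(sum_m Psi_m c_m) to a latent point z, plus
    Gaussian noise."""
    rng = np.random.default_rng(rng)
    M = Psi.shape[0]
    c = rng.laplace(loc=0.0, scale=zeta, size=M)
    A = np.tensordot(c, Psi, axes=1)          # sum_m c_m * Psi_m
    noise = rng.normal(0.0, sigma, size=z.shape)
    return expm(A) @ z + noise

# One rotation-like operator acting on a 2-D latent space.
Psi = np.array([[[0.0, -1.0], [1.0, 0.0]]])
z_hat = sample_transformed_point(np.array([1.0, 0.0]), Psi, rng=0)
```

With a skew-symmetric generator such as this one , every sampled group element is a rotation , so transformed points stay on the unit circle up to the small additive noise .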
There are also techniques that can disentangle latent representations without labels , like InfoGAN ( Chen et al. , 2016 ) and β-VAE ( Higgins et al. , 2017 ) . The goal of our work is distinct from disentanglement work because , rather than identifying independently varying factors of variation , we aim to learn non-linear operators that correspond to transformations on the data manifold ( which may not vary independently ) . Our approach is advantageous because it can faithfully represent longer transformation paths , learn variations while maintaining image reconstruction , and change the number of learned variations by increasing or decreasing the number of learned operators . 3 METHODS . The MAE learns a low-dimensional latent representation of the data by defining an encoder function f : X → Z that maps high-dimensional data points x ∈ R^D to low-dimensional latent vectors z ∈ R^d and a decoder function g : Z → X that maps the latent vectors back into the data space ( Connor & Rozell , 2020 ) . Transport operators Ψ are incorporated into the latent space to learn manifold-based transformations . Before learning the transport operators , the autoencoder is pretrained to extract a latent representation of the data using the traditional autoencoder reconstruction objective . After pretraining , the autoencoder weights are fixed and the operators are trained with the following objective , which encourages the learning of transport operators that generate efficient paths between the latent vectors z_0 and z_1 ( coinciding with f ( x_0 ) and f ( x_1 ) ) that are nearby on the manifold : L_Ψ = (1/2) ‖ z_1 − expm ( ∑_{m=1}^{M} Ψ_m c_m ) z_0 ‖²_2 + (γ/2) ∑_m ‖ Ψ_m ‖²_F + ζ ‖ c ‖_1 , ( 2 ) where γ , ζ > 0 . Objective ( 2 ) is minimized via an alternating minimization scheme . Specifically , at each training iteration , point pairs x_0 and x_1 are selected in the input space and encoded into the latent space as z_0 and z_1 .
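Objective ( 2 ) can be written down nearly verbatim ; this is a sketch with placeholder hyperparameter values :

```python
import numpy as np
from scipy.linalg import expm

def transport_objective(z0, z1, Psi, c, gamma=0.01, zeta=0.1):
    """L_Psi from Eq. (2): squared path-reconstruction error, plus a
    Frobenius penalty on the operators and an l1 sparsity penalty on
    the coefficients."""
    A = np.tensordot(c, Psi, axes=1)          # sum_m c_m * Psi_m
    recon = 0.5 * np.sum((z1 - expm(A) @ z0) ** 2)
    frob = 0.5 * gamma * np.sum(Psi ** 2)
    sparsity = zeta * np.sum(np.abs(c))
    return float(recon + frob + sparsity)
```

During inference this quantity is minimized over c with Ψ fixed ; during the operator update , c is fixed and a gradient step is taken on Ψ .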
Then , coefficients c are inferred between the encoded latent vectors ( by minimizing ( 2 ) with respect to c ) to estimate the best path between z_0 and z_1 . After inference , the coefficients are fixed and a gradient step is taken to minimize ( 2 ) with respect to the transport operator weights . Once learned , these operators represent different types of motion that traverse the manifold , and they can be combined to generate natural paths on the manifold . After fitting transport operators to the latent space of a fixed autoencoder network , there is a final fine-tuning training phase that updates the network weights and transport operators simultaneously using a joint objective that combines the autoencoder reconstruction loss with L_Ψ . This fine-tuning step addresses a potential mismatch between the data manifold and the learned latent structure by adapting the latent structure to fit the transport operators learned between the selected training point pairs . We empirically find that breaking up training into these three phases increases the stability with which the transport operators can be learned . Given the context of this MAE model , in the following sections we describe our contributions .
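The alternating scheme ( infer c with Ψ fixed , then step Ψ with c fixed ) can be sketched end to end on a toy 2-D latent space . Numerical gradients , learning rates , and iteration counts here are illustrative assumptions , not the paper's training configuration :

```python
import numpy as np
from scipy.linalg import expm

def loss(z0, z1, Psi, c, gamma=0.01, zeta=0.1):
    """Objective (2) on a single latent pair."""
    A = np.tensordot(c, Psi, axes=1)
    return (0.5 * np.sum((z1 - expm(A) @ z0) ** 2)
            + 0.5 * gamma * np.sum(Psi ** 2) + zeta * np.sum(np.abs(c)))

def num_grad(f, x, eps=1e-5):
    """Central-difference gradient of f() with respect to array x,
    perturbing x in place and restoring it."""
    g = np.zeros_like(x)
    flat, gflat = x.ravel(), g.ravel()
    for i in range(flat.size):
        old = flat[i]
        flat[i] = old + eps; hi = f()
        flat[i] = old - eps; lo = f()
        flat[i] = old
        gflat[i] = (hi - lo) / (2 * eps)
    return g

def train_step(z0, z1, Psi, c, lr_c=0.1, lr_psi=0.01, n_infer=50):
    # Inference: minimize (2) over coefficients with operators fixed.
    for _ in range(n_infer):
        c -= lr_c * num_grad(lambda: loss(z0, z1, Psi, c), c)
    # Operator update: one gradient step with coefficients fixed.
    Psi -= lr_psi * num_grad(lambda: loss(z0, z1, Psi, c), Psi)
    return Psi, c
```

On a pair related by a small rotation , a single such step already reduces the objective .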
The authors introduce an approach for learning identity-preserving transformations from data. First the low-dimensional manifold structure of the data is learned using an autoencoding neural network, then the transformations (the Lie group operators and their coefficients of combination) that map between perceptually similar image pairs are learned. The main contribution of the paper is that the learned operators can transform the data semantically. This is challenging since 1) semantic transformation labels are not usually available; and 2) not all semantic transformations make sense for all elements of the dataset. The authors address these challenges using two auxiliary networks. The first identifies perceptually similar image pairs in the dataset and the second identifies where particular transformations are likely to be used.
SP:37aab9484f502a34029583b62eeb6657326bd0c3
Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
1 INTRODUCTION . Deep neural networks are powerful tools , but their success depends strongly on the amount of training data ( Sun et al. , 2017 ; Mahajan et al. , 2018 ) . Recent works show that for improving robust training against adversarial examples ( Biggio et al. , 2013 ; Szegedy et al. , 2014 ; Biggio & Roli , 2018 ) , additional training data is helpful ( Carmon et al. , 2019 ; Schmidt et al. , 2018a ; Uesato et al. , 2019 ; Deng et al. , 2021 ) . However , curating more real-world data from the actual data distribution is usually challenging and costly ( Recht et al. , 2019 ) . To circumvent this challenge , we ask : can robust training be enhanced using a proxy distribution , i.e. , an approximation of the real data distribution ? In particular , can additional samples from a proxy distribution , which is perhaps cheaper to sample from , improve robustness ? If so , can generative models that are trained on limited training images in small-scale datasets ( LeCun & Cortes , 2010 ; Krizhevsky et al. , 2014 ) act as such a proxy distribution ? 1 When training on synthetic samples from generative models , a natural question is whether robustness on synthetic data will also transfer to real-world data . Even if it does , can we determine the features of synthetic data that enable this synthetic-to-real robustness transfer and optimize our selection of generative models based on these features ? Finally , can we also optimize the selection of individual synthetic samples to maximize the robustness transfer ? Q.1 When does robustness transfer from a proxy distribution to the real data distribution ? This question is fundamental to developing a better understanding of whether a proxy distribution will help . For a classifier trained on only synthetic samples , we argue that in addition to empirical and generalization error , a distribution shift penalty also determines its robustness on the real data distribution .
We prove that this penalty is upper bounded by the conditional Wasserstein distance between the proxy distribution and the real data distribution . ( Footnote 1 : Proxy distributions may not necessarily be modeled by generative models . When a proxy distribution is the output of a generative model , we call it a synthetic distribution and refer to data sampled from it as synthetic data . ) Thus robustness will transfer from proxy distributions that are in close proximity to the real data distribution with respect to conditional Wasserstein distance . Q.2 How effective are proxy distributions in boosting adversarial robustness on real-world datasets ? Our experimental results on five datasets demonstrate that the use of samples from PrOxy distributions in Robust Training ( PORT ) can significantly improve robustness . In particular , PORT achieves up to 7.5 % improvement in adversarial robustness over the existing state-of-the-art ( Croce et al. , 2020 ) . Its improvement is consistent across different threat models ( ℓ∞ or ℓ2 ) , network architectures , datasets , and robustness criteria ( empirical or certified robustness ) . We also uncover that synthetic images from diffusion-based generative models are most helpful in improving robustness on real datasets . We further investigate the use of proxy distributions in robust training . In particular , we investigate why the current state-of-the-art diffusion-based models are significantly more helpful than their counterparts , such as generative adversarial networks ( GANs ) . Q.3 Can we develop a metric to characterize which proxy distribution will be most helpful in robust training ? Our theory motivates the design of a measure of proximity between two distributions that incorporates the geometry of the distributions and can empirically predict the transfer of robustness .
We propose a robust-learning-based approach where we use the success of a discriminator in distinguishing adversarially perturbed samples of synthetic and real data as a measure of proximity . Discriminating between synthetic and real data is a common practice ( Goodfellow et al. , 2014 ; Gui et al. , 2020 ) ; however , we find that considering adversarial perturbations on synthetic and real data samples is the key to making the discriminator an effective measure for this task . We demonstrate that the rate of decrease in discriminator success with an increasing size of perturbations can effectively measure proximity , and that it accurately predicts the relative transfer of robustness from different generative models . We also leverage our robust discriminators to identify the most helpful synthetic samples . We use the proximity of each synthetic sample to the real data distribution , referred to as its synthetic score , as a metric to judge its importance . This score can be computed using the output probability from a discriminator that is robustly trained to distinguish between adversarially perturbed samples from the proxy and real data distributions . We demonstrate that selecting synthetic images based on their synthetic scores can further improve performance . Contributions . We make the following key contributions . • We provide an analytical bound on the transfer of robustness from proxy distributions to the real data distribution using the notion of conditional Wasserstein distance and further validate it experimentally . • Overall , using additional synthetic images , we improve both clean and robust accuracy on five different datasets . In particular , we improve robust accuracy by up to 7.5 % and 6.7 % in the ℓ∞ and ℓ2 threat models , respectively , and certified robust accuracy ( ℓ2 ) by 7.6 % on the CIFAR-10 dataset . • When selecting a proxy distribution , we show that existing metrics , such as FID , fail to determine the synthetic-to-real transfer of robustness .
We propose a new metric ( ARC ) based on the distinguishability of adversarially perturbed synthetic and real data that accurately determines the performance transfer . • We also develop a metric , named synthetic score , to determine the importance of each synthetic sample in synthetic-to-real robustness transfer . We demonstrate that choosing samples with lower synthetic scores provides better results than randomly selecting samples . 2 INTEGRATING PROXY DISTRIBUTIONS IN ROBUST TRAINING . In this section we propose to use samples from proxy distributions to improve robustness . We first provide analytical bounds on the transfer of adversarial robustness from the proxy to the real data distribution . Next , using robust discriminators , we provide a metric based on our analytical bound which can accurately determine the relative ranking of different proxy distributions in terms of robustness transfer . Our metric can be calculated empirically ( using samples from both distributions ) and does not require knowledge of the proxy or real data distribution . Finally , we present our robust training formulation ( PORT ) , which uses synthetic samples generated by the generative model together with real samples . Notation . We represent the input space by X and the corresponding label space as Y . Data is sampled from a joint distribution D that is supported on X × Y . For a label y , we use D | y to denote the conditional distribution of class y . We denote the proxy distribution as D̃ . We denote the neural network for classification by f : X → Z , parameterized by θ , which maps input images to output probability vectors ( z ) . We use h to refer to the classification functions that output labels . For a set S sampled from a distribution D , we use Ŝ to denote the empirical distribution with respect to set S . We use S ∼ D to denote the sampling of a dataset from a distribution D , and ( x , y ) ← D to denote the sampling of a single point from D .
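The two robust-discriminator ideas above ( the decay of discriminator success with perturbation budget , and the per-sample synthetic score ) can be illustrated on a deliberately toy 1-D example with a threshold discriminator . This is a simplified sketch under stated assumptions , not the paper's ARC metric or training setup :

```python
import numpy as np

def robust_disc_accuracy(real, synth, threshold, eps):
    """Accuracy of the 1-D discriminator 'real iff x < threshold' when
    every sample is adversarially shifted by up to eps toward the
    decision boundary (the 1-D worst case)."""
    correct = np.sum(real + eps < threshold) + np.sum(synth - eps >= threshold)
    return correct / (len(real) + len(synth))

def disc_separation(real, synth, threshold, eps_grid):
    """Average robust accuracy over a grid of budgets.  The faster the
    accuracy decays (lower average), the closer the two distributions."""
    return float(np.mean([robust_disc_accuracy(real, synth, threshold, e)
                          for e in eps_grid]))

def select_most_real(synth, threshold, k):
    """Toy per-sample 'synthetic score': signed distance past the
    boundary.  Keep the k samples that look most like real data."""
    scores = synth - threshold
    return synth[np.argsort(scores)[:k]]
```

A synthetic distribution far from the real one keeps the discriminator accurate even at large budgets , giving a larger separation value than a nearby one .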
2.1 UNDERSTANDING TRANSFER OF ADVERSARIAL ROBUSTNESS BETWEEN DATA DISTRIBUTIONS . Since our goal is to use samples from a proxy distribution to improve robustness on the real data distribution , we first study the transfer of adversarial robustness between two data distributions . Definition 1 ( Average Robustness ) . We define average robustness for a classifier h on a distribution D according to a distance metric d as follows : Rob_d ( h , D ) = E_{( x , y ) ← D} [ inf_{h ( x′ ) ≠ y} d ( x′ , x ) ] , where the classifier h : X → Y predicts the class label of an input sample ( see footnote 2 ) . This definition refers to the expected distance to the closest adversarial example for each sample . Formalizing transfer of robustness from proxy to real data distribution . In robust learning from a proxy distribution , we are interested in bounding the average robustness of the classifier obtained by a learning algorithm ( L ) on distribution D , when the training set is a set S of n labeled examples sampled from a proxy distribution D̃ . In particular , we want to provide a lower bound on the transferred average robustness , i.e. , E_{S ∼ D̃ , h ← L ( S )} [ Rob_d ( h , D ) ] . In order to understand this quantity better , suppose h is a classifier trained on a set S that is sampled from D̃ using algorithm L . We decompose Rob_d ( h , D ) into three quantities as follows : Rob_d ( h , D ) = ( Rob_d ( h , D ) − Rob_d ( h , D̃ ) ) + ( Rob_d ( h , D̃ ) − Rob_d ( h , Ŝ ) ) + Rob_d ( h , Ŝ ) . Using this decomposition , by linearity of expectation and the triangle inequality , we can bound the transferred average robustness from below by E_{S ← D̃ⁿ , h ← L ( S )} [ Rob_d ( h , Ŝ ) ] ( the empirical robustness ) , minus | E_{S ← D̃ⁿ , h ← L ( S )} [ Rob_d ( h , D̃ ) − Rob_d ( h , Ŝ ) ] | ( the generalization penalty ) , minus | E_{S ← D̃ⁿ , h ← L ( S )} [ Rob_d ( h , D ) − Rob_d ( h , D̃ ) ] | ( the distribution-shift penalty ) .
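Definition 1 is easy to compute exactly for a toy classifier . For a 1-D threshold classifier , the nearest adversarial example of a correctly classified point sits just across the threshold , so the infimum distance is the distance to the boundary , and it is 0 for points the classifier already gets wrong . A small sketch ( the threshold setup is an illustrative assumption ) :

```python
import numpy as np

def average_robustness_1d(xs, ys, threshold=0.0):
    """Rob_d(h, D) for h(x) = 1[x > threshold] under the absolute
    distance: |x - threshold| when h(x) == y, else 0 (the point is
    already misclassified, so the infimum in Definition 1 is 0)."""
    preds = (xs > threshold).astype(int)
    dists = np.where(preds == ys, np.abs(xs - threshold), 0.0)
    return float(np.mean(dists))
```

For the four correctly classified points −2 , −1 , 1 , 3 with threshold 0 , the boundary distances are 2 , 1 , 1 , 3 , so the average robustness is 1.75 .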
As the above decomposition suggests , in order to bound the average robustness , we need to bound both the generalization penalty and the distribution-shift penalty . The generalization penalty has been rigorously studied before in multiple works ( Cullina et al. , 2018 ; Montasser et al. , 2019 ; Schmidt et al. , 2018b ) . Hence , we focus on bounding the distribution-shift penalty . Our goal is to provide a bound on the distribution-shift penalty that is independent of the classifier at hand and is only related to the properties of the distributions . With this goal , we define a notion of distance between two distributions . Definition 2 ( Conditional Wasserstein distance ) . For two labeled distributions D and D̃ supported on X × Y , we define cwd according to a distance metric d as follows : cwd_d ( D , D̃ ) = E_{( · , y ) ← D} [ inf_{J ∈ J ( D|y , D̃|y )} E_{( x , x′ ) ← J} [ d ( x , x′ ) ] ] , where J ( D|y , D̃|y ) is the set of joint distributions whose marginals are identical to D|y and D̃|y . ( Footnote 2 : This notion of robustness is also used in Gilmer et al . ( 2018 ) ; Diochnos et al . ( 2018 ) ; and Mahloujifar et al . ( 2019a ) . ) It is simply the expectation of the Wasserstein distance between the conditional distributions for each class . Now we are ready to state our main theorem , which bounds the distribution-shift penalty for any learning algorithm based only on the Wasserstein distance of the two distributions . Theorem 1 ( Bounding the distribution-shift penalty ) . Let D and D̃ be two labeled distributions supported on X × Y with identical label distributions , i.e. , ∀ y* ∈ Y , Pr_{( x , y ) ← D} [ y = y* ] = Pr_{( x , y ) ← D̃} [ y = y* ] . Then for any classifier h : X → Y , | Rob_d ( h , D̃ ) − Rob_d ( h , D ) | ≤ cwd_d ( D , D̃ ) . Theorem 1 shows how one can bound the distribution-shift penalty by minimizing the conditional Wasserstein distance between the two distributions . We provide its proof and empirical validation in Appendix A .
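Since Definition 2 is just the label-weighted average of per-class Wasserstein distances , it can be estimated from samples in one dimension with scipy . A minimal sketch for 1-D labeled samples ( the helper name is ours ) :

```python
import numpy as np
from scipy.stats import wasserstein_distance

def conditional_wasserstein_1d(x_real, y_real, x_proxy, y_proxy):
    """Empirical cwd_d (Definition 2) for 1-D data: the per-class
    Wasserstein distances, weighted by the real label frequencies."""
    classes, counts = np.unique(y_real, return_counts=True)
    weights = counts / counts.sum()
    dists = [wasserstein_distance(x_real[y_real == c],
                                  x_proxy[y_proxy == c])
             for c in classes]
    return float(np.dot(weights, dists))
```

For example , if class 0 of the proxy is shifted by 0.5 while class 1 matches exactly , the conditional distance is the weighted average 0.25 .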
Even if we successfully reduce the generalization penalty , the distribution-shift penalty may remain the dominant factor in transferred robustness . This theorem enables us to switch our attention from robust generalization to creating generative models for which the underlying distribution is close to the original distribution . In Appendix A , we also provide two other theorems about the tightness of Theorem 1 and the effect of combining clean and proxy data together .
This paper focuses on utilizing synthetic images generated by a generative model for the task of achieving robustness to adversarial attacks. Towards this, the paper aims to assess the suitability of the proxy distribution defined by the generative model for the underlying task. The paper shows that conditional Wasserstein distance serves as a selection criterion for the proxy distribution. Acknowledging the intractability of estimating the conditional Wasserstein distance, the paper then proposes a novel metric based on robust discrimination between the proxy distribution and the real distribution. Finally, the paper empirically demonstrates that using synthetic images from a suitable generative model can indeed improve the robust and clean accuracy of the model over existing baselines.
SP:53652cae401fc0eefd3e6aa57552dac089eb91fc
The paper investigates if generative models can be used to improve the robust accuracy of image models. The authors show that the current SOTA across multiple attack models and commonly used datasets can be significantly improved by using generative models. They show that the conditional Wasserstein distance between the test distribution and the one learned by the generative model is an upper bound for the difference in robustness achieved by models trained on either distribution. They investigate which generative models provide good proxy distributions by robustly training discriminators and using their accuracy at different attack strengths as a metric. The discriminators are also shown to be helpful to order generated samples by their effectiveness for adversarial training.
SP:53652cae401fc0eefd3e6aa57552dac089eb91fc
NViT: Vision Transformer Compression and Parameter Redistribution
1 INTRODUCTION . Self-attention based transformer models demonstrate high model capacity , easy scalability , and superior ability in capturing long-range dependency ( Vaswani et al. , 2017 ; Devlin et al. , 2018 ; Radford et al. , 2018 ; Jiao et al. , 2019 ; Brown et al. , 2020 ) . They have thus been widely applied to natural language processing ( NLP ) tasks , and recently received growing attention for computer vision tasks . Vision Transformer , i.e. , the ViT ( Dosovitskiy et al. , 2020 ) , shows that embedding image patches into tokens and passing them through a sequence of transformer blocks can lead to higher accuracy compared to state-of-the-art CNN models . DEIT , recent work by Touvron et al . ( 2021 ) , further presents a data-efficient training method such that acceptable accuracy can be achieved without extensive pretraining . Offering competitive performance to CNNs under similar training regimes , transformers now point to the appealing prospect of solving both NLP and vision tasks with the same architecture ( Zheng et al. , 2021 ; Kim et al. , 2021 ; Jiang et al. , 2021 ) . Unlike CNNs built with convolutional layers that are mainly parameterized by a few dimensions , like the kernel size and the number of filters , the ViT has multiple distinct components , i.e. , QKV projection , multi-head attention , multi-layer perceptron , etc . ( Vaswani et al. , 2017 ) , each defined by independent dimensions . As a result , the dimensionality of each component in each ViT block needs to be carefully designed to achieve a decent trade-off between efficiency and accuracy . However , this is typically not the case for state-of-the-art models . Models such as ViT ( Dosovitskiy et al. , 2020 ) and DEIT ( Touvron et al. , 2021 ) mainly inherit design heuristics from NLP tasks , e.g. , using an MLP expansion ratio of 4 , fixing the QKV dimension per head , and giving all blocks the same dimensions , which may not be optimal for computer vision ( Chen et al.
, 2021a), causing significant redundancy in the base model and a worse efficiency-accuracy trade-off upon scaling, as we show extensively in our experiments. This work targets efficient ViTs by exploring latency-aware global structural pruning, leveraging the resulting insights to redistribute parameters for an enhanced accuracy-efficiency trade-off. Our approach, as visualized in Figure 1, starts from analyzing the blocks in the computation graph of ViT to identify all the dimensions that can be independently controlled. We apply global structural pruning over all the components in all blocks. This offers complete flexibility to explore their combinations towards an optimal architecture in a complicated design space. Our global pruning utilizes an importance score based on the first-order Taylor expansion of the pruning loss, offering comparability among all prunable components from all layers. Furthermore, we incorporate the estimated latency reduction of each neuron into its importance score. This guides the final pruned architecture to be faster on target devices, as we show in experiments. The pruned models are distilled utilizing the information of the ground-truth labels, a pretrained CNN teacher akin to DEIT (Touvron et al., 2021), and the original full model. On the ImageNet-1K benchmark (Russakovsky et al., 2015), structural pruning enables a nearly lossless 5.14× parameter reduction, 2.57× FLOPs reduction and 1.86× speedup on a V100 GPU over the DEIT-Base model. Accuracy gains of 1% and 1.7% are observed over the DEIT-Small and DEIT-Tiny models when we compress the base model to a similar latency. Using structural pruning for architectural guidance, we further make an important observation: the popular uniform distribution of parameters across all layers is, in fact, not optimal.
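The latency-aware importance score described above can be sketched as follows. This is a minimal illustration rather than the paper's exact implementation: the group importance follows the standard first-order Taylor criterion (squared sum of weight-gradient products over a structural group), while the way the estimated latency reduction is folded in, and the trade-off coefficient `eta`, are assumptions made for illustration.

```python
import numpy as np

def taylor_importance(weights, grads):
    """First-order Taylor importance of a structural group (e.g. one neuron):
    the squared sum of weight * gradient over the group's parameters."""
    return float(np.sum(weights * grads) ** 2)

def latency_aware_score(weights, grads, latency_reduction, eta=0.01):
    """Combine Taylor importance with the estimated latency saved by removing
    the group, so that cheap-to-remove neurons are pruned first.
    (The combination rule and `eta` are illustrative, not the paper's form.)"""
    return taylor_importance(weights, grads) - eta * latency_reduction

# Rank two hypothetical neurons for global pruning: the lowest score goes first.
rng = np.random.default_rng(0)
w1, g1 = rng.normal(size=8), rng.normal(size=8)
w2, g2 = rng.normal(size=8), rng.normal(size=8)
scores = sorted(
    [("neuron_1", latency_aware_score(w1, g1, latency_reduction=2.0)),
     ("neuron_2", latency_aware_score(w2, g2, latency_reduction=0.5))],
    key=lambda t: t[1],
)
print(scores[0][0], "pruned first")
```

Because the score is comparable across layers, a single global threshold can select the groups to remove, which is what makes the pruning "global" rather than per-layer.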
A simple redistribution of parameters can already provide stronger architectural alternatives, as we show in our experiments for both pretraining and downstream tasks. To this end, we present a new parameter distribution rule for scaling ViT architectures, enabling a family of models named NViT. When scaled to similar FLOPs and latency, NViT architectures achieve 0.1%, 0.2% and 1.1% accuracy gains over the DEIT-Base, Small, and Tiny models respectively when trained from scratch on ImageNet-1K. Our main contributions are as follows: • Provide a systematic analysis of the prunable components in the ViT model. We identify the ability to perform structural pruning on the embedding dimension, number of heads, MLP hidden dimension, and the QK and V dimensions of each head separately; • Propose a latency-aware, importance-based criterion that enables hardware-friendly global structural pruning of all the components, achieving a nearly lossless 1.9× speedup; • Present a new architecture scaling rule that enables NViT, a new family of efficient vision transformer architectures that redistributes the dimensions of DEIT models to outperform them under similar FLOPs and latency. This is the first work showing the potential of discovering novel scalable architectures by pruning vision transformers; • Demonstrate that the high performance achieved by the pruned models and NViT models transfers effectively to downstream tasks. 2 RELATED WORK. 2.1 VISION TRANSFORMER MODELS. Inspired by the success of transformer models on NLP tasks, recent research proposes to use transformer models on computer vision tasks. The inspiring vision transformer (ViT) (Dosovitskiy et al., 2020) demonstrates the possibility of performing high-accuracy image classification with a transformer architecture only, yet also finds that learning attention between image patches is a challenging task that requires a large training set and model size.
This stimulates recent works that ease extensive pretraining and improve the efficiency-accuracy tradeoff. One noticeable approach, DEIT (Touvron et al., 2021), provides carefully designed training schemes and data augmentation techniques to train ViT from scratch on ImageNet only. Another line of work renovates ViT transformer blocks to better capture image features, such as changing input tokenization (Yuan et al., 2021; Graham et al., 2021), using hierarchical architectures (Liu et al., 2021; Wang et al., 2021; Graham et al., 2021), upgrading positional encoding (Chu et al., 2021), and performing localized attention (Liu et al., 2021; Han et al., 2021), bringing further accuracy improvements under similar computation cost. In this work we focus on the original ViT architecture (Dosovitskiy et al., 2020) amid its widespread usage, as illustrated at the top of Figure 1. The ViT model first divides the input image into patches that are tokenized to embedding dimension E through a linear projection. Image tokens, together with an independently initialized class token, form an input x ∈ R^{N×E}. Input tokens pass through transformer blocks before classification is made from the class-token output of the last block. In its simplest form, a ViT block includes a multi-head self-attention (MSA) module and a multi-layer perceptron (MLP) module. The MSA module first linearly transforms the N × E tokens into queries q ∈ R^{N×(QK×H)}, keys k ∈ R^{N×(QK×H)}, and values v ∈ R^{N×(V×H)}. The q, k and v are then split into H heads. Each head performs the self-attention operation defined in Equation (1) in parallel: Attn(q_h, k_h, v_h) = softmax(q_h k_h^T / √d_h) v_h, (1) where 1/√d_h is the scaling factor, and q_h ∈ R^{N×QK}, k_h ∈ R^{N×QK} and v_h ∈ R^{N×V} are the query, key and value of head h. The outputs of all the heads are then concatenated prior to a fully-connected (FC) linear projection back to the original dimension of R^{N×E}.
Note that though previous works set QK = V in designing the model architecture (Dosovitskiy et al., 2020; Touvron et al., 2021; Chen et al., 2021a), setting them differently does not violate the shape rules of matrix multiplication. The MLP module includes two FC layers with a hidden dimension of M. The output of the last FC layer preserves the token dimension at R^{N×E}. Built upon the original ViT, DEIT models (Touvron et al., 2021) further exploit a distillation token, which learns from the output label of a CNN teacher during the training process to incorporate some of the inductive bias of the CNN model, and significantly improves DEIT accuracy. Our work uses the DEIT model architecture as a starting point, where we explore the potential of compressing the model and better allocating the dimensions of different blocks for an enhanced efficiency-accuracy tradeoff. Our method is also applicable to other vision transformer architectures for further efficiency improvements, which will be explored in future work. 2.2 VIT COMPRESSION TECHNIQUES. Despite the rapid progress of ViTs on vision tasks, ViT blocks still heavily inherit the architecture of transformers for NLP tasks (Vaswani et al., 2017), e.g., the dimension of each block is determined without much optimization, and all the blocks in the stacked model architecture share the same dimensions. As we will show later, such a design may not be optimal for computer vision tasks. To improve model efficiency, very recent works successfully use structural pruning techniques on vision transformer models, with trainable gate variables (Zhu et al., 2021) or a Taylor importance score (Chen et al., 2021b).
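As a concrete illustration of the shapes involved, the following NumPy sketch implements the per-head attention of Equation (1) with the QK and V dimensions chosen independently (the sizes N = 5, E = 16, H = 2, QK = 4, V = 6 are hypothetical), confirming that QK ≠ V still type-checks through the attention and the output projection.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def msa(x, Wq, Wk, Wv, Wo, H, QK, V):
    """Multi-head self-attention with independent QK and V dimensions.
    x: (N, E); Wq, Wk: (E, H*QK); Wv: (E, H*V); Wo: (H*V, E)."""
    N, E = x.shape
    q = (x @ Wq).reshape(N, H, QK).transpose(1, 0, 2)          # (H, N, QK)
    k = (x @ Wk).reshape(N, H, QK).transpose(1, 0, 2)          # (H, N, QK)
    v = (x @ Wv).reshape(N, H, V).transpose(1, 0, 2)           # (H, N, V)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(QK))     # (H, N, N)
    heads = attn @ v                                           # (H, N, V)
    # Concatenate heads, then project back to the embedding dimension E.
    return heads.transpose(1, 0, 2).reshape(N, H * V) @ Wo     # (N, E)

N, E, H, QK, V = 5, 16, 2, 4, 6  # QK != V is deliberately chosen here
rng = np.random.default_rng(0)
out = msa(rng.normal(size=(N, E)),
          rng.normal(size=(E, H * QK)), rng.normal(size=(E, H * QK)),
          rng.normal(size=(E, H * V)), rng.normal(size=(H * V, E)),
          H, QK, V)
print(out.shape)  # (5, 16)
```

Since every dimension (E, H, QK, V, and the MLP hidden size M) appears only as a matrix shape, each can be shrunk independently, which is what makes them separately prunable.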
Both methods show the potential of compressing ViT models, yet they only consider part of the prunable architecture, such as the number of heads or the MLP hidden dimension, use manually designed sparsity distributions, and do not take runtime latency into account, and thus may not lead to optimal compressed models. Our method resolves these issues through a latency-aware global structural pruning of all prunable components across all layers in a joint manner. Besides pruning, AutoFormer (Chen et al., 2021a) uses a neural architecture search (NAS) approach to search for efficient ViT models. AutoFormer only explores a small number of dimension choices, due to the constraint on the supernet training cost, while our method continuously explores the entire design space of the ViT model with a single training process, leading to the discovery of more efficient architectures at less training cost. Most importantly, our pruning scheme successfully leads to a scalable design rule, with which we can easily design efficient vision transformer models at different scales. This has never been attempted in previous works. There has been a line of research on improving the efficiency of attention-based models on NLP tasks, specifically pruning the number of heads in the MSA block. This includes utilizing stochastic binary gates (Louizos et al., 2017; Voita et al., 2019), pruning with a sensitivity score (Michel et al., 2019) and exploring the lottery ticket hypothesis (Frankle & Carbin, 2018; Behnke & Heafield, 2020). Our work aims at pruning all parameters, including heads, and therefore has a larger pruning space. Other notable ways of improving the efficiency of transformers include weight sharing across transformer blocks (Lan et al., 2019), dynamically controlling the attention span of each token (Chen et al., 2019; Tambe et al., 2020), and allowing the model to output the result in an earlier transformer block (Zhou et al., 2020; Schwartz et al.
, 2020). These techniques are orthogonal to our pruning-based method and have remained unexplored on vision models. Applying them to ViT models and combining them with our pruning method would be an interesting and fruitful future direction.
This paper applies latency-aware global structural pruning to vision transformers (ViTs), which results in a redistribution of the model parameters and a better speed-accuracy trade-off. Compared to DEIT-Base, the pruned vision transformer model is 1.85x faster with almost no performance loss. Based on the insights discovered in the pruning process, the paper also presents novel vision transformer (NViT) architectures. NViT outperforms DEIT on ImageNet, CIFAR, and iNat benchmarks with similar running times.
This paper studies latency reduction for the Vision Transformer model. The proposed pruning method, based on an importance score, is trained with the full pretrained model using knowledge distillation and latency-aware regularization. In addition, the authors design a new architecture, NViT, with a redistribution of parameters. The experiments evaluate the proposed methods with respect to accuracy, FLOPs reduction, and parameter reduction.
CrowdPlay: Crowdsourcing human demonstration data for offline learning in Atari games
Crowdsourcing has been instrumental in driving AI advances that rely on large-scale data. At the same time, reinforcement learning has seen rapid progress through benchmark environments that strike a balance between tractability and real-world complexity, such as ALE and OpenAI Gym. In this paper we aim to fill a gap at the intersection of these two: the use of crowdsourcing to generate large-scale human demonstration data in support of advancing research into imitation learning and offline learning. To this end, we present CrowdPlay, a complete crowdsourcing pipeline for any standard RL environment including OpenAI Gym (made available under an open-source license); a large-scale, publicly available crowdsourced dataset of human gameplay demonstrations in Atari 2600 games, including multimodal behavior and human-human and human-AI multiagent data; offline learning benchmarks with extensive human data evaluation; and a detailed study of incentives, including real-time feedback to drive high-quality data. We hope that this will drive improvement in the design of algorithms that account for the complexity of human behavioral data and thereby enable a step forward in the direction of effective learning for real-world settings. 1 INTRODUCTION. Crowdsourcing has been instrumental in many AI advances, especially the recent rapid progress in deep neural network models, which often rely on large training sets. For instance, ImageNet (Deng et al., 2009), a large database of annotated images, has enabled a number of breakthroughs in image classification (Krizhevsky et al., 2012). At the same time, reinforcement learning (RL) has seen rapid progress in the last few years, fueled in part by the development of standard, easily accessible benchmark environments like the Arcade Learning Environment (ALE) (Bellemare et al., 2013; Machado et al., 2018) and OpenAI Gym (Brockman et al., 2016).
What has been underexplored is the intersection of the two: using large-scale crowdsourced human data for offline learning, including imitation learning and offline RL. We present CrowdPlay, a framework, methodology, and dataset that we hope will do for offline learning what ALE and OpenAI Gym did for online learning. CrowdPlay supports flexible and scalable crowdsourcing that is geared towards multi-channel recruitment, and is able to interface with any OpenAI Gym or Gym-like Markov decision process (MDP) environment. It supports real-time feedback to participants that can be used to boost data quality, as well as both purely human and mixed human-AI multiagent environments. CrowdPlay is also the first dataset based on Atari 2600 games that features multimodal and multiagent behavior. It includes both data from normal gameplay and explicitly multimodal behavioral data, where players are given instructions to follow a specific behavior. In addition to single-agent data, the dataset includes data from two-player, human-AI and human-human games, with both competitive and cooperative rewards. Participants were recruited through multiple channels (under IRB; details withheld for blind submission) including Amazon Mechanical Turk, Lab in the Wild, undergraduate students, and multiple social media channels. For some platforms we also include data with a range of different incentive structures for participants. The Atari games were run using ALE and a multiagent version of OpenAI Gym, guaranteeing that transitions are identical to what would be seen in standard Atari RL environments. In this paper we focus on the use of CrowdPlay for Atari 2600 games, but a major advantage of the approach is that it works for any Gym-like MDP environment.
We believe that Atari 2600 games are interesting for imitation learning (IL) and offline RL for the same reasons that they were seminal as a challenge problem in the development of RL: they offer a balance between achievable short-term research advances and sufficient complexity. More recent work in psychology has also shown them to be of sufficient richness to support the study of human learning behavior (Tsividis et al., 2017). Further, and despite the richness of the data, it is easy to collect at scale through the use of crowdsourcing and a web browser, and moreover, it can be used together with an established simulator for evaluation purposes. The CrowdPlay pipeline directly interfaces with standard RL environments. In particular, this means that trajectories and transitions are guaranteed to be the same for human data as they are for RL environments; that offline learning methods automatically have access to a simulator; and that crowdsourcing with human players need not develop environments and tasks from scratch, but can make use of AI agents that interact with human agents in a benchmark environment. 1.1 RELATED WORK. Crowdsourcing. Previous platforms for crowdsourcing such as TurkServer (Mao et al., 2012) have focused on supporting synchronous participation by Amazon Mechanical Turk participants and have been used to study economic behavior in simple experimental environments (and not for the generation of human behavioral data). Tylkin et al. (2021) use a combination of ALE and Javatari for crowdsourcing Atari data in the context of evaluating the performance of an AI agent in human-AI collaboration. They propose modifying two-player Space Invaders to make it cooperative, and training AI agents using randomized starting positions, both of which we adopt in (some of) our multiagent environments. Their crowdsourcing approach is Atari-specific and not publicly available.
Much work has been done on the study of incentives for participants in paid crowdsourcing studies, and this is also part of the focus of our work. Prior work (Mao et al., 2013; Mason & Watts, 2009; Yin et al., 2013; Harris, 2011; Shaw et al., 2011) has largely found that quality-dependent payments may increase the quantity of work more than its quality, and has not looked at real-time feedback on work quality. Offline Learning. Much work has been done on offline learning (Rashidinejad et al., 2021), including Behavior Cloning (BC) (Pomerleau, 1989), Batch-Constrained Q-Learning (BCQ) (Fujimoto et al., 2019), Conservative Q-Learning (CQL) (Kumar et al., 2020), Implicit Quantile Networks (IQN) (Dabney et al., 2018), DQN (Mnih et al., 2015) and an off-policy version of Soft Actor-Critic (Haarnoja et al., 2018). Aytar et al. (2018) demonstrate learning hard-exploration games from unaligned human demonstration videos. Recent work (Schrittwieser et al., 2021) shows a sample-efficient model-based online and offline learning algorithm. Specific to Atari, Kanervisto et al. (2020) benchmark behavioral cloning algorithms on existing data from several video games, including Atari 2600 games. Laurens & Kazmi (2021) clone River Raid agents using existing datasets, and develop evaluation metrics based on action distributions and playstyle. Datasets. The Atari Grand Challenge (AGC) (Kurin et al., 2017) is a dataset consisting of 45 hours of standard gameplay from five Atari 2600 games. The authors also make their Atari-specific data collection software available. They use a browser app running an Atari emulator in the browser, based on Javatari. It is unclear to us whether this can be guaranteed to always execute identically to the ALE emulator. Atari-Head (Zhang et al., 2020) features 117 hours of gameplay data and includes eye-tracking data. This data was collected using ALE.
However, the emulator was run in a semi-frame-by-frame mode, advancing the emulator state only when a key was pressed, and at a maximum of 20 frames per second. The focus of that study was on attention tracking, and the data is not intended to be representative of natural human gameplay behavior. D4RL (Fu et al., 2020) and RL Unplugged (Gulcehre et al., 2020) both also provide datasets for offline learning, but both focus on synthetic data. 2 CROWDPLAY: THE PIPELINE. 2.1 OVERVIEW. The heart of our pipeline is the CrowdPlay backend and frontend, a client-server architecture that streams OpenAI Gym environments and similar MDP environments to web browser clients. It is highly extensible, scalable to hundreds or thousands of concurrent users, and allows the real-time capture of both trajectories and related statistics. It is geared toward multi-channel recruitment of participants and strong incentives. As its most important feature, it interfaces directly with OpenAI Gym and similar environments, thus opening up the entire array of standard RL environments to rapid crowdsourcing of behavioral data in support of research into IL and offline RL. It also supports multi-agent environments, including mixed human-AI environments. Complementing this is an engine to support the local download of the generated dataset, including storage of metadata in a relational database for fast access, and compressed trajectories for storage efficiency. The data download can be real-time and is incremental. We give a short overview of the CrowdPlay architecture, and refer the reader to Appendix A.1 for more information. 2.2 SOFTWARE ARCHITECTURE. CrowdPlay provides a highly extensible, high-performance client-server architecture for streaming MDP environments to remote web browser clients. The backend interfaces with OpenAI Gym and similar MDP environments.
Actions are collected as keypresses in the browser client , and sent to the backend where they are fed into the MDP ’ s “ step ” function . The returned observation is sent back to the browser client for display . This is repeated to generate an episode trajectory . The remainder of the CrowdPlay software infrastructure is built to make this basic loop into a structure that is robust , performant , extensible , user-friendly and scalable . Figure 1 shows the key parts of the CrowdPlay architecture . Communication between the browser client and backend is through high-performance socket connections . The backend is built to be scalable both within-instance , using multiple processes , as well as across-instance using a load balancer and autoscaling instance groups . Trajectories are stored di- rectly as compressed , serialized Python objects , allowing both very easy modification of data capture as well as immediate decoding for use in existing Python-based learning pipelines . CrowdPlay also supports multi-agent environments . It allows multiple human participants by routing multiple browser clients to a single MDP environment . Mixed human-AI environments are supported through pre-trained neural network policies . For robustness , AI agents can also take over control of a human agent on the fly , in the case that a human player disconnects from a multiagent environment , allowing uninterrupted gameplay for the remaining human players . A major focus in the design of CrowdPlay is providing the ease of access of the generated data in downstream ML pipelines . We believe it is crucial for these pipelines to have access not only to the same simulator as the crowdsourcing pipeline , but also to the same observation pre-processing tools that are used in state-of-the-art RL methods on these simulators . Addressing these design goals , CrowdPlay includes a local metadata search engine and a custom , Deepmind-style ( Mnih et al. 
, 2015 ) observation processing function for offline data . We give more details in Appendix A.1 . CrowdPlay provides an extensible and easy-to-use framework for collecting structured metadata and real-time statistics per user , session , episode , and individual steps . This is used for capturing data quality information , as well as for driving real-time incentives for participants , depending on recruitment platform . We discuss the platforms that we target in more detail in Appendix A.3 , and the various incentive designs and their effect on data in Section 4 .
This paper presents CrowdPlay, a crowdsourcing platform for collecting human demonstrations in any MDP environment. It is accompanied by a dataset of human gameplay on Atari games with multi-agent and some multi-behavior aspects. The paper benchmarks existing offline RL algorithms on this dataset, and details incentive design mechanisms for future crowdsourcing jobs.
SP:97bf1da27f21e03aedf82818498273acf15146c9
CrowdPlay: Crowdsourcing human demonstration data for offline learning in Atari games
Crowdsourcing has been instrumental for driving AI advances that rely on large-scale data. At the same time, reinforcement learning has seen rapid progress through benchmark environments that strike a balance between tractability and real-world complexity, such as ALE and OpenAI Gym. In this paper we aim to fill a gap at the intersection of these two: the use of crowdsourcing to generate large-scale human demonstration data in support of research into imitation learning and offline learning. To this end, we present CrowdPlay, a complete crowdsourcing pipeline for any standard RL environment including OpenAI Gym (made available under an open-source license); a large-scale publicly available crowdsourced dataset of human gameplay demonstrations in Atari 2600 games, including multimodal behavior and human-human and human-AI multiagent data; offline learning benchmarks with extensive human data evaluation; and a detailed study of incentives, including real-time feedback to drive high-quality data. We hope that this will drive improvements in the design of algorithms that account for the complexity of human behavioral data, and thereby enable a step forward in the direction of effective learning for real-world settings. 1 INTRODUCTION. Crowdsourcing has been instrumental in many AI advances, especially the recent rapid progress in deep neural network models, which often rely on large training sets. For instance, ImageNet (Deng et al., 2009), a large database of annotated images, has enabled a number of breakthroughs in image classification (Krizhevsky et al., 2012). At the same time, reinforcement learning (RL) has seen rapid progress in the last few years, fueled in part by the development of standard, easily accessible benchmark environments like the Arcade Learning Environment (ALE) (Bellemare et al., 2013; Machado et al., 2018) and OpenAI Gym (Brockman et al., 2016).
What has been underexplored is the intersection of the two: using large-scale crowdsourced human data for offline learning, including imitation learning and offline RL. We present CrowdPlay, a framework, methodology, and dataset that we hope will do for offline learning what ALE and OpenAI Gym did for online learning. CrowdPlay supports flexible and scalable crowdsourcing that is geared towards multi-channel recruitment, and is able to interface with any OpenAI Gym or Gym-like Markov decision process (MDP) environment. It supports real-time feedback to participants that can be used to boost data quality, as well as both purely human and mixed human-AI multiagent environments. The CrowdPlay dataset is also the first dataset based on Atari 2600 games that features multimodal and multiagent behavior. It includes both data from normal gameplay and explicitly multimodal behavioral data, where players are given instructions to follow a specific behavior. In addition to single-agent data, the dataset includes data from two-player, human-AI and human-human games, with both competitive and cooperative rewards. Participants were recruited through multiple channels (under IRB, details withheld for blind submission) including Amazon Mechanical Turk, Lab in the Wild, undergraduate students, and multiple social media channels. For some platforms we also include data with a range of different incentive structures for participants. The Atari games were run using ALE and a multiagent version of OpenAI Gym, guaranteeing that transitions are identical to what would be seen in standard Atari RL environments. In this paper we focus on the use of CrowdPlay for Atari 2600 games, but a major advantage of the approach is that it works for any Gym-like MDP environment.
We believe that Atari 2600 games are interesting for imitation learning (IL) and offline RL for the same reasons that they were seminal as a challenge problem in the development of RL: they offer a balance between achievable short-term research advances and sufficient complexity. More recent work in psychology has also shown them to be of sufficient richness to support the study of human learning behavior (Tsividis et al., 2017). Further, despite the richness of the data, it is easy to collect at scale through the use of crowdsourcing and a web browser, and moreover, it can be used together with an established simulator for evaluation purposes. The CrowdPlay pipeline directly interfaces with standard RL environments. In particular, this means that trajectories and transitions are guaranteed to be the same for human data as they are for RL environments; that offline learning methods automatically have access to a simulator; and that crowdsourcing with human players need not develop environments and tasks from scratch, but can make use of AI agents that can interact with human agents in a benchmark environment. 1.1 RELATED WORK. Crowdsourcing. Previous platforms for crowdsourcing such as TurkServer (Mao et al., 2012) have focused on supporting synchronous participation through Amazon Mechanical Turk and were used to study economic behavior in simple experimental environments (not for the generation of human behavioral data). Tylkin et al. (2021) use a combination of ALE and Javatari for crowdsourcing Atari data in the context of evaluating the performance of an AI agent in human-AI collaboration. They propose modifying two-player Space Invaders to make it cooperative, and training AI agents using randomized starting positions, both of which we adopt in (some of) our multiagent environments. Their crowdsourcing approach is Atari-specific and not publicly available.
Much work has been done on the study of incentives for participants in paid crowdsourcing studies, which is also part of the focus of our work. Prior work (Mao et al., 2013; Mason & Watts, 2009; Yin et al., 2013; Harris, 2011; Shaw et al., 2011) has largely found that quality-dependent payments may increase the quantity of work more than its quality, and has not looked at real-time feedback on work quality. Offline Learning. Much work has been done on offline learning (Rashidinejad et al., 2021), including Behavior Cloning (BC) (Pomerleau, 1989), Batch Constrained Q-Learning (BCQ) (Fujimoto et al., 2019), Conservative Q-Learning (CQL) (Kumar et al., 2020), Implicit Quantile Networks (IQN) (Dabney et al., 2018), DQN (Mnih et al., 2015), and an off-policy version of Soft Actor-Critic (Haarnoja et al., 2018). Aytar et al. (2018) demonstrate learning hard-exploration games from unaligned human demonstration videos. Recent work (Schrittwieser et al., 2021) shows a sample-efficient model-based online and offline learning algorithm. Specific to Atari, Kanervisto et al. (2020) benchmark behavioral cloning algorithms on existing data from several video games, including Atari 2600 games. Laurens & Kazmi (2021) clone River Raid agents using existing datasets, and develop evaluation metrics based on action distribution and playstyle. Datasets. The Atari Grand Challenge (AGC) (Kurin et al., 2017) is a dataset consisting of 45 hours of standard gameplay from five Atari 2600 games. The authors also make their Atari-specific data collection software available. They use a browser app running the Atari emulator in the browser, based on Javatari. It is unclear to us whether this can be guaranteed to always have identical execution to the ALE emulator. Atari-HEAD (Zhang et al., 2020) features 117 hours of gameplay data and includes eye-tracking data. This data was collected using ALE.
However, the emulator was run in a semi-frame-by-frame mode, advancing the emulator state only when a key was pressed, and at a maximum of 20 frames per second. The focus of the study was on attention tracking, and the data is not intended to be representative of natural human gameplay behavior. D4RL (Fu et al., 2020) and RL Unplugged (Gulcehre et al., 2020) both also provide datasets for offline learning, but both focus on synthetic data. 2 CROWDPLAY: THE PIPELINE. 2.1 OVERVIEW. The heart of our pipeline is the CrowdPlay backend and frontend, a client-server architecture that streams OpenAI Gym environments and similar MDP environments to web browser clients. It is highly extensible, scalable to hundreds or thousands of concurrent users, and allows the real-time capture of both trajectories and related statistics. It is geared toward multi-channel recruitment of participants and strong incentives. As its most important feature, it interfaces directly with OpenAI Gym and similar environments, thus opening up the entire array of standard RL environments to rapid crowdsourcing of behavioral data in support of research into IL and offline RL. It also supports multi-agent environments, including mixed human-AI environments. Complementing this is an engine to support the local download of the generated dataset, including storage of metadata in a relational database for fast access, and compressed trajectories for storage efficiency. The data download can be real-time and is incremental. We give a short overview of the CrowdPlay architecture, and refer the reader to Appendix A.1 for more information. 2.2 SOFTWARE ARCHITECTURE. CrowdPlay provides a highly extensible, high-performance client-server architecture for streaming MDP environments to remote web browser clients. The backend interfaces with OpenAI Gym and similar MDP environments.
Actions are collected as keypresses in the browser client and sent to the backend, where they are fed into the MDP's "step" function. The returned observation is sent back to the browser client for display. This is repeated to generate an episode trajectory. The remainder of the CrowdPlay software infrastructure is built to turn this basic loop into a structure that is robust, performant, extensible, user-friendly and scalable. Figure 1 shows the key parts of the CrowdPlay architecture. Communication between the browser client and backend is through high-performance socket connections. The backend is built to be scalable both within-instance, using multiple processes, and across instances, using a load balancer and autoscaling instance groups. Trajectories are stored directly as compressed, serialized Python objects, allowing both very easy modification of data capture and immediate decoding for use in existing Python-based learning pipelines. CrowdPlay also supports multi-agent environments. It allows multiple human participants by routing multiple browser clients to a single MDP environment. Mixed human-AI environments are supported through pre-trained neural network policies. For robustness, AI agents can also take over control of a human agent on the fly, in the case that a human player disconnects from a multiagent environment, allowing uninterrupted gameplay for the remaining human players. A major focus in the design of CrowdPlay is making the generated data easy to access in downstream ML pipelines. We believe it is crucial for these pipelines to have access not only to the same simulator as the crowdsourcing pipeline, but also to the same observation pre-processing tools that are used in state-of-the-art RL methods on these simulators. Addressing these design goals, CrowdPlay includes a local metadata search engine and a custom, DeepMind-style (Mnih et al., 2015) observation processing function for offline data. We give more details in Appendix A.1. CrowdPlay provides an extensible and easy-to-use framework for collecting structured metadata and real-time statistics per user, session, episode, and individual step. This is used for capturing data quality information, as well as for driving real-time incentives for participants, depending on the recruitment platform. We discuss the platforms that we target in more detail in Appendix A.3, and the various incentive designs and their effect on data in Section 4.
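The keypress-to-step loop described above can be sketched as follows. This is an illustrative stand-in, not CrowdPlay's actual interface: the key-to-action mapping, the `get_keypress`/`send_frame` callbacks, and the dummy environment are all hypothetical, though the environment follows the OpenAI Gym `reset`/`step` convention.

```python
# Hypothetical mapping from browser key names to Atari action indices.
KEY_TO_ACTION = {"ArrowLeft": 3, "ArrowRight": 2, "Space": 1, None: 0}

class DummyEnv:
    """Stand-in with the Gym step/reset API: episode ends after 5 steps."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 5, {}

def run_episode(env, get_keypress, send_frame):
    """Feed client keypresses into the MDP's step function and stream
    the returned observations back, recording the trajectory."""
    obs = env.reset()
    trajectory = []
    done = False
    while not done:
        action = KEY_TO_ACTION.get(get_keypress(), 0)  # unmapped keys -> NOOP
        next_obs, reward, done, info = env.step(action)
        trajectory.append((obs, action, reward, next_obs, done))
        send_frame(next_obs)  # the browser client renders this frame
        obs = next_obs
    return trajectory

frames = []
traj = run_episode(DummyEnv(), get_keypress=lambda: "Space", send_frame=frames.append)
```

In the real pipeline, `get_keypress` and `send_frame` would be backed by the socket connection to the browser client, and the trajectory would be serialized and compressed rather than kept in memory.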
This paper proposes CrowdPlay, a novel framework for crowdsourcing human data based on standard RL environments. The CrowdPlay pipeline not only supports recruiting users from different channels to collect multimodal behaviors and data, but also provides diverse, real-time incentive mechanisms to guarantee and improve data quality. Furthermore, the authors present a publicly available dataset, along with benchmarks on Atari 2600 games, to enable further research on imitation learning and offline learning.
S3: Supervised Self-supervised Learning under Label Noise
1 INTRODUCTION. It is now commonly accepted that supervised learning with deep neural networks can provide excellent solutions for a wide range of problems, so long as there is sufficient availability of labeled training data and computational resources. However, these results have been mostly obtained using well-curated datasets in which the classes are balanced and the labels are of high quality. In the real world, it is often costly to obtain high-quality labels, especially for large-scale datasets. A common approach is to use semi-automatic methods to obtain the labels (e.g., "webly-labeled" images, where the images and labels are obtained by web-crawling). While such methods can greatly reduce the time and cost of manual labeling, they also lead to low-quality noisy labels. To deal with noisy labels, earlier approaches tried to improve the robustness of the model using robust loss functions (Ghosh et al., 2017; Zhang & Sabuncu, 2018; Wang et al., 2019) or robust regularizations (Srivastava et al., 2014; Zhang et al., 2017; Pereyra et al., 2017). Goldberger & Ben-Reuven (2016) tried to model the noise transition matrix between classes, while Han et al. (2019), Patrini et al. (2017), and Hendrycks et al. (2018) proposed to correct the losses of noisy samples. More recently, sample selection methods have become perhaps the dominant paradigm for learning with noisy labels. Most recent sample selection methods rely on the predictions of the model's classifier, for example on the per-sample loss (Arazo et al., 2019; Li et al., 2020a) or the model prediction (Song et al., 2019; Malach & Shalev-Shwartz, 2017). By separating clean samples from noisy samples and subsequently performing supervised training on the clean set, or semi-supervised training on both, sample selection methods have achieved state-of-the-art results on synthetic and real-world noisy datasets.
However, there are three main issues with current sample selection methods. First, the sample selection will inevitably be biased if the models (classifier and feature extractor) are trained with noisy labels – this is immediately apparent in the case that the sample selection is based on the loss of the classifier itself (Arazo et al., 2019; Li et al., 2020a; Yu et al., 2019; Han et al., 2018). Second, in supervised classification problems, noisy samples usually come from two main categories: closed-set noise, where the true labels belong to one of the given classes (Set B in Fig. 1), and open-set noise, where the true labels do not belong to the set of labels of the classification problem (Set C in Fig. 1). Most works in the literature – including works that estimate the probabilities of label exchange between pairs of classes (Goldberger & Ben-Reuven, 2016; Patrini et al., 2017), that relabel based on the model's predictions (Song et al., 2019; Han et al., 2019), or that adopt semi-supervised approaches (Li et al., 2020a; Ortego et al., 2021) – deal with the former and do not directly address the latter. However, open-set noise is a considerable source of noise in real-world scenarios, e.g., when training from web-crawled data, where there is less control over the collection of the dataset. Crucially, those works (implicitly or explicitly) relabel all samples and train based on the new labels of all samples. Those labels are bound to be wrong for all samples in set C, but also for several samples of A and B that are (implicitly or explicitly) relabeled in the early stages of training. For the latter reason, those works do not work well even under heavy closed-set noise. Finally, current approaches usually require extensive hyperparameter tuning, often even on a per-dataset basis – this is unrealistic in scenarios where there is little knowledge about the types of noise.
This is partly due to the complexity of the applied semi-supervised learning method, and partly because of the complicated methods that are employed, such as model pretraining (Zheltonozhskii et al., 2021) and model co-training (Han et al., 2018; Yu et al., 2019; Li et al., 2020a), so as to deal with self-confirmation bias. In this paper, we address the problem of training under different types of noise with a simple method, namely Supervised, Self-Supervised learning (S3), with two major components that are clearly separated: a selection/relabeling mechanism that selects/relabels samples so as to construct a clean and a noisy set (Section 3.3), and a training framework that aims at learning a strong feature extractor f and a classification head g [Fig. 2] from both noisy and clean samples (Section 3.4). In the training stage we use off-the-shelf learners and (a) train the feature extractor and the classification head using a classical supervised cross-entropy loss applied only on the clean samples – this avoids treating samples identified as noisy as if they belonged to one of the given classes, as most methods (Arazo et al., 2019; Li et al., 2020a; Ortego et al., 2021; Wu et al., 2021) implicitly or explicitly do; and (b) train the feature extractor using a self-supervised loss, namely the consistency loss between the representations of augmented versions of the sample (as in Chen & He (2021)), applied on all samples – this avoids the false negatives that are inherent in contrastive learning with instance discrimination, and differs from works that apply the consistency on the label predictions, which are unreliable in the first iterations or in the presence of open-set noise.
The noisy sample selection mechanism relies on a measure of confidence that we define using the annotated label of the sample in question and an estimate of the distribution of the labels of its neighbours – in order to deal with noisy samples, we adopt a scheme in which the distribution is calculated based both on the annotated labels and on consistently (over subsequent iterations) confident estimates of the labels. Our method is embedded into a standard MixUp and data augmentation framework, and without bells and whistles, such as co-training of multiple models, it achieves state-of-the-art results under both synthetic and realistic noise patterns on the CIFAR10, CIFAR100, ANIMAL-10N, Clothing1M and WebVision datasets. 2 RELATED WORK. Learning with noisy data by sample selection. Some works focus on sample selection to filter out noisy samples. Jiang et al. (2018) introduced a pretrained mentor network to guide the selection of a student network. Song et al. (2019) evaluate the per-sample losses and identify as clean the top r% of the samples – the precise ratio r% is either predefined, or is an estimate of the noise level in the specific dataset. Arazo et al. (2019) proposed to model per-sample losses with a Beta Mixture Model (BMM) and split the dataset according to which component of the mixture each sample belongs to. In a very similar approach, Li et al. (2020a) extended Arazo et al. (2019) by introducing semi-supervised learning to fully utilize the dataset. Related to our work, Bahri et al. (2020), Ortego et al. (2021), and Wang et al. (2018) also utilized the feature space for sample selection. Bahri et al. (2020) applied kNN for sample selection on closed-set noisy datasets, while Ortego et al. (2021) further proposed to relabel samples based on kNN voting. Wang et al. (2018) proposed to reweight samples based on their probability of being outliers in open-set noisy datasets.
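The feature-space selection idea above – keep a sample if its annotated label agrees with the label distribution of its nearest neighbours – can be sketched as follows. This is an illustrative toy implementation, not the exact procedure of any of the cited papers; the values of k and the agreement threshold are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_clean(features, labels, k=2, threshold=0.5):
    """Keep sample i as 'clean' if at least `threshold` of its k nearest
    neighbours (by cosine similarity in feature space) share its annotated
    label; otherwise treat it as noisy."""
    clean = []
    for i, (f, y) in enumerate(zip(features, labels)):
        sims = [(cosine(f, g), labels[j]) for j, g in enumerate(features) if j != i]
        sims.sort(reverse=True)                 # most similar first
        neighbours = [lab for _, lab in sims[:k]]
        agreement = neighbours.count(y) / k     # fraction of neighbours agreeing
        if agreement >= threshold:
            clean.append(i)
    return clean
```

A practical version would compute similarities on mini-batch features from the encoder and use an approximate nearest-neighbour index rather than the O(N^2) loop shown here.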
Self-supervised learning. Self-supervised methods attempt to learn good representations without human annotations. In recent years, the dominant approach has been contrastive learning with an instance discrimination task. MoCo (He et al., 2020) is an important baseline for current contrastive learning methods: it reuses a memory bank, since samples in a single mini-batch may yield insufficient negative pairs, and proposes a momentum encoder to update the memory bank in real time and avoid outdated representations. SimCLR (Chen et al., 2020) is another important baseline, which found that a sufficiently large mini-batch size can eliminate the need for a memory bank. More recently, SimSiam (Chen & He, 2021) and BYOL (Grill et al., 2020) proposed non-contrastive learning frameworks that enforce perturbation consistency between different views, and avoid mode collapse by applying a stop-gradient and an extra predictor between the two representation vectors. 3 METHOD. 3.1 PROBLEM FORMULATION. Let us denote by X = {x_i}_{i=1}^N, x_i ∈ R^d, a training set with corresponding one-hot vector labels Y = {y_i}_{i=1}^N, y_i ∈ {0, 1}^K, where K is the number of classes and N is the number of samples. For convenience, let us also denote the index at which the one-hot vector y_i is one as the label l_i ∈ {1, ..., K}. Finally, let us denote the true labels by Y' = {y'_i}_{i=1}^N. Clearly, for an open-set noisy label we have y'_i ≠ y_i with y'_i ∉ {0, 1}^K, while for a closed-set noisy sample y'_i ≠ y_i with y'_i ∈ {0, 1}^K. 3.2 OVERVIEW OF PROPOSED METHOD. Aiming to deal with the potential concurrence of both open-set and closed-set noise, we view the classification network as an encoder f that extracts a feature representation and a classification head g that deals with the classification problem in question.
The proposed method, named Supervised Self-Supervised (S3) learning, attempts to decouple their training so as to deal with possible noise in the labels, by adopting a two-stage, iterative scheme, as outlined in Fig. 2. In the first stage, we utilize a novel sample selection and a novel relabeling mechanism (top block in Fig. 2) that prepares the set on which the classifier g is trained in Stage 2. The selection mechanism is based on the assumption of smoothness of labels in the feature space – more specifically, on a consistency measure that we define based on the annotated label of the sample in question and the distribution of the labels in its neighborhood in the feature space. Relabeling is performed on samples for which the classifier gives confident predictions consistently across subsequent iterations. Clearly, the mechanism relies on the quality of the features extracted by the encoder f, and should reject samples whose true labels are not in the class set (open-set noise). This stage is explained in Section 3.3. In the second stage (bottom block in Fig. 2), training is performed with two objectives/losses. First, a cross-entropy loss on the output of the classifier g (i.e., on g(f(.))) on the samples selected in Stage 1, which updates both the encoder f and the classifier head g. Second, a self-supervision loss that enforces consistency between the representations of different augmentations of the same sample, and which utilizes all samples, both noisy and clean – this updates the encoder f and helps learn a strong feature space on which the selection mechanism of Stage 1 can rely. By contrast to other methods that, either by using a noise transition matrix or in their semi-supervised scheme, implicitly relabel all samples (e.g., DivideMix (Li et al., 2020a), MOIT (Ortego et al., 2021)) and use the new labels to learn, in our method the labels in the noisy set are not used at all. This stage is explained in Section 3.4.
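The second-stage objective – supervised cross-entropy on the selected clean samples only, plus a SimSiam-style consistency loss on all samples – can be sketched per batch as follows. Function names, the batch layout, and the loss weighting are illustrative assumptions, not the paper's exact implementation (which would use a deep encoder, augmentations, stop-gradient, and a predictor head).

```python
import math

def cross_entropy(probs, label):
    """Cross-entropy for one sample given predicted class probabilities."""
    return -math.log(probs[label])

def neg_cosine(u, v):
    """SimSiam-style negative cosine similarity between two views' representations."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return -dot / (nu * nv)

def s3_batch_loss(batch, clean_ids, classify, represent, weight=1.0):
    """batch: list of (sample_id, view1, view2, label).
    Cross-entropy is applied only to samples flagged clean by Stage 1;
    the consistency term uses all samples, so noisy labels are never used."""
    sup, cons = 0.0, 0.0
    for sid, v1, v2, label in batch:
        if sid in clean_ids:
            sup += cross_entropy(classify(v1), label)
        cons += neg_cosine(represent(v1), represent(v2))
    return sup + weight * cons
```

Note how the label of a sample outside `clean_ids` never enters the loss: such samples only contribute through the view-consistency term, which is exactly what lets the encoder learn from open-set noise without being misled by its labels.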
The paper proposes a two-stage approach to learning with noisy labels (LNL). 1a. Clean-sample selection based on cosine similarity with the k nearest neighbors in embedding space: the average class distribution of those neighbors should be consistent with the sample's label for it to be selected. 1b. Noisy-sample relabeling using a temporal self-ensemble: the prediction is defined as the average over the last L epochs, and a sample is relabeled if the confidence of the prediction is higher than some threshold. 2. Training with self-consistency regularization: regular training with MixUp regularization is performed on the selected and relabeled samples, with a consistency regularization in the form of cosine similarity. The paper tests the performance of the method on multiple datasets with both synthetic and real-life noise.
SP:22238bda86b9ade5ef7574767f30a6dd644d40c9
S3: Supervised Self-supervised Learning under Label Noise
1 INTRODUCTION . It is now commonly accepted that supervised learning with deep neural networks can provide excellent solutions for a wide range of problems , so long as there is sufficient availability of labeled training data and computational resources . However , these results have been mostly obtained using well-curated datasets in which the classes are balanced and the labels are of high quality . In the realworld , it is often costly to obtain high quality labels especially for large-scale datasets . A common approach is to use semi-automatic methods to obtain the labels ( e.g . “ webly-labeled ” images where the images and labels are obtained by web-crawling ) . While such methods can greatly reduce the time and cost of manual labeling , they also lead to low quality noisy labels . To deal with noisy labels , earlier approaches tried to improve the robustness of the model using robust loss functions ( Ghosh et al. , 2017 ; Zhang & Sabuncu , 2018 ; Wang et al. , 2019 ) or robust regularizations ( Srivastava et al. , 2014 ; Zhang et al. , 2017 ; Pereyra et al. , 2017 ) . Goldberger & BenReuven ( 2016 ) tried to model the noise transition matrix between classes while Han et al . ( 2019 ) ; Patrini et al . ( 2017 ) ; Hendrycks et al . ( 2018 ) proposed to correct the losses of noisy samples . More recently , sample selection methods became perhaps the dominant paradigm for learning with noisy labels . Most of the recent sample selection methods do so , by relying on the predictions of the model classifier , for example on the per-sample loss ( Arazo et al. , 2019 ; Li et al. , 2020a ) or model prediction ( Song et al. , 2019 ; Malach & Shalev-Shwartz , 2017 ) . By separating clean samples and noisy samples and subsequently performing supervised training on the clean set , or semi-supervised training on both , sample selection methods achieved the state-of-the-art results in synthetic and realworld noisy datasets . 
However , there are three main issues with current sample selection methods . Firstly , the sample selection will be inevitably biased if the models ( classifier and feature extractor ) are trained with noisy labels – this is immediately apparent in the case that the sample selection is based on the loss of the classifier itself ( Arazo et al. , 2019 ; Li et al. , 2020a ; Yu et al. , 2019 ; Han et al. , 2018 ) . Second , in supervised classification problems , noisy samples usually come from two main categories : closedset noise where the true labels belong to one of the given classes ( Set B in Fig . 1 ) and open-set noise where the true labels do not belong to the set of labels of the classification problem ( Set C in Fig . 1 ) . Most of the works in the literature , including works that estimate the probabilities of label-exchange between pairs of classes ( Goldberger & Ben-Reuven , 2016 ; Patrini et al. , 2017 ) , that do relabeling based on the model ’ s predictions ( Song et al. , 2019 ; Han et al. , 2019 ) or works that adopt semisupervised approaches ( Li et al. , 2020a ; Ortego et al. , 2021 ) deal with the former and not directly address the latter . However , this is a considerable source of noise in real world scenarios , e.g. , when training from web-crawled data , where there is less control over the collection of the dataset . Crucially , those works ( implicitly or explicitly ) relabel all samples and do training based on the new labels of all samples . Those are bound to be wrong for all samples in set C , but also are bound to be wrong for several samples of A and B that are ( implicitly or explicitly ) relabeled in the early stages of training . For the latter reason , those works do not work well even under heavy close-set noise . Finally , current approaches usually require extensive hyperparameters tuning , often even on a perdataset basis – this is unrealistic in scenarios where there is little knowledge about the types of noise . 
This is partly due to the complexity of the applied semi-supervised learning method, and partly because of the complicated techniques that are employed, such as model pretraining (Zheltonozhskii et al., 2021) and model co-training (Han et al., 2018; Yu et al., 2019; Li et al., 2020a), so as to deal with self-confirmation bias. In this paper, we address the problem of training under different types of noise with a simple method, namely Supervised, Self-Supervised learning (S3), with two major components that are clearly separated: a selection/relabeling mechanism that selects/relabels samples so as to construct a clean and a noisy set (Section 3.3), and a training framework that aims at learning a strong feature extractor f and a classification head g [Fig. 2] from both noisy and clean samples (Section 3.4). In the training stage we use off-the-shelf learners and a) train the feature extractor and the classification head using a classical Supervised cross-entropy loss applied only on the clean samples; this avoids treating samples identified as noisy as if they belonged to one of the given classes, as most methods (Arazo et al., 2019; Li et al., 2020a; Ortego et al., 2021; Wu et al., 2021) implicitly or explicitly do; and b) train the feature extractor using a Self-Supervised loss, namely the consistency loss between the representations of augmented versions of the sample (as in (Chen & He, 2021)), applied on all samples; this avoids the false negatives that are inherent in contrastive learning with instance discrimination, and differs from works that apply the consistency on the label predictions, which are unreliable in the first iterations or in the presence of open-set noise.
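The two losses described above can be sketched schematically. This is a hypothetical numpy illustration, not the paper's implementation: the real method trains a deep network with SGD, and the consistency term follows the stop-gradient/predictor scheme of Chen & He (2021).

```python
import numpy as np

def cross_entropy(logits, labels):
    # supervised softmax cross-entropy, applied only to the clean subset
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def neg_cosine(p, z):
    # self-supervised consistency: negative cosine similarity between the
    # predictor output p of one augmented view and the representation z of
    # the other (z would sit under stop-gradient in the real training loop)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -(p * z).sum(axis=1).mean()

rng = np.random.default_rng(0)
clean_logits = rng.standard_normal((2, 3))   # classifier outputs, clean samples only
clean_labels = np.array([0, 2])
view1 = rng.standard_normal((4, 8))          # embeddings of two views, ALL samples
view2 = rng.standard_normal((4, 8))
total = cross_entropy(clean_logits, clean_labels) + neg_cosine(view1, view2)
```

Note that the supervised term sees only the clean subset, while the consistency term sees every sample, clean or noisy, since it never touches the labels.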
The noisy sample selection mechanism relies on a measure of confidence that we define using the ground-truth label of the sample in question and an estimate of the distribution of the labels of its neighbours; in order to deal with noisy samples, we adopt a scheme in which the distribution is calculated based both on the ground-truth labels and on consistently (over subsequent iterations) confident estimates of the labels. Our method is embedded into a standard MixUp and data augmentation framework, and without bells and whistles, such as co-training of multiple models, it achieves state-of-the-art results under both synthetic and realistic noise patterns on the CIFAR10, CIFAR100, ANIMAL-10N, Clothing1M and WebVision datasets. 2 RELATED WORK. Learning with noisy labels by sample selection Some works focused on sample selection to filter out noisy samples. Jiang et al. (2018) introduced a pretrained mentor network to guide the selection of a student network. Song et al. (2019) evaluate the per-sample losses and identify as clean the top r% of the samples; the precise ratio r% is either predefined or an estimate of the noise level in the specific dataset. Arazo et al. (2019) proposed to model per-sample losses with a Beta Mixture Model (BMM) and split the dataset according to which component of the mixture each sample belongs to. In a very similar approach, Li et al. (2020a) extended Arazo et al. (2019) by introducing semi-supervised learning to fully utilize the dataset. Related to our work, Bahri et al. (2020); Ortego et al. (2021); Wang et al. (2018) also utilized the feature space for sample selection. Bahri et al. (2020) applied KNN for sample selection on closed-set noisy datasets, while Ortego et al. (2021) further proposed to relabel samples based on KNN voting. Wang et al. (2018) proposed to reweight samples based on their probability of being outliers in open-set noisy datasets.
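The neighbourhood-based selection idea can be illustrated with a toy nearest-neighbour agreement score. This is a deliberate simplification: the paper's actual measure also folds in confident model predictions accumulated over iterations, which this sketch omits.

```python
import numpy as np

def knn_confidence(features, labels, k=2):
    """For each sample, the fraction of its k nearest neighbours (in
    feature space) that share its annotated label."""
    # pairwise squared Euclidean distances
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude the sample itself
    nn = np.argsort(d2, axis=1)[:, :k]    # indices of k nearest neighbours
    return (labels[nn] == labels[:, None]).mean(axis=1)

# two tight clusters; the sample at index 2 carries a flipped (noisy) label
feats = np.array([[0., 0.], [0.1, 0.], [0., 0.1],
                  [5., 5.], [5.1, 5.], [5., 5.1]])
labs = np.array([0, 0, 1, 1, 1, 1])       # index 2 should be class 0
conf = knn_confidence(feats, labs, k=2)
clean = conf >= 0.5                       # index 2 gets conf 0.0 and is rejected
```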
Self-supervised learning Self-supervised methods attempt to learn good representations without human annotations. In recent years, the dominant approach has been contrastive learning with an instance discrimination task. MoCo (He et al., 2020) is an important baseline for current contrastive learning methods: it reuses a memory bank, since the samples in a single mini-batch may provide insufficient negative pairs, and proposes a momentum encoder to update the memory bank in real time and avoid outdated representations. SimCLR (Chen et al., 2020) is another important baseline, which found that a sufficiently large mini-batch size can eliminate the need for a memory bank. More recently, SimSiam (Chen & He, 2021) and BYOL (Grill et al., 2020) proposed non-contrastive learning frameworks which enforce perturbation consistency between different views and avoid mode collapse by applying a stop-gradient and an extra predictor between the two representation vectors. 3 METHOD. 3.1 PROBLEM FORMULATION. Let us denote with X = {x_i}_{i=1}^N, x_i ∈ R^d, a training set with the corresponding one-hot vector labels Y = {y_i}_{i=1}^N, y_i ∈ {0,1}^K, where K is the number of classes and N is the number of samples. For convenience, let us also denote the index at which the one-hot vector y_i is one as the label l_i ∈ {1, ..., K}. Finally, let us denote the true labels with Y′ = {y′_i}_{i=1}^N. Clearly, for an open-set noisy sample it is the case that y′_i ≠ y_i and y′_i ∉ {0,1}^K, while for a closed-set noisy sample y′_i ≠ y_i and y′_i ∈ {0,1}^K. 3.2 OVERVIEW OF PROPOSED METHOD. Aiming to deal with the potential concurrence of both open-set and closed-set noise, we view the classification network as an encoder f that extracts a feature representation and a classification head g that deals with the classification problem in question.
The proposed method, named Supervised Self-Supervised (S3) learning, attempts to decouple their training so as to deal with possible noise in the labels, by adopting a two-stage, iterative scheme, as outlined in Fig. 2. In the first stage, we utilize a novel sample selection and a novel relabeling mechanism (top block in Fig. 2) that prepares the set on which the classifier g is trained in Stage 2. The selection mechanism is based on the assumption of smoothness of labels in the feature space, and more specifically on a consistency measure that we define based on the annotated label of the sample in question and the distribution of the labels in its neighborhood in the feature space. Relabeling is performed on samples for which the classifier gives confident predictions consistently across subsequent iterations. Clearly, the mechanism relies on the quality of the features extracted by the encoder f and should reject samples whose true labels are not in the class set (open-set noise). This stage is explained in Section 3.3. In the second stage (bottom block in Fig. 2), training is performed with two objectives/losses. First, a cross-entropy loss on the output of the classifier g (i.e., on g(f(.))) on the samples selected in Stage 1, which updates both the encoder f and the classifier head g. Second, a self-supervision loss that enforces consistency between the representations of different augmentations of the same sample, and which utilizes all samples, both noisy and clean; this updates the encoder f and helps learn a strong feature space on which the selection mechanism of Stage 1 can rely. In contrast to other methods that, either through a noise transition matrix or in their semi-supervised scheme, implicitly relabel all samples (e.g., DivideMix (Li et al., 2020a), MOIT (Ortego et al., 2021)) and use the new labels to learn, in our method the labels in the noisy set are not used at all. This stage is explained in Section 3.4.
This paper proposes the "S3" framework for learning with noisy labels. Specifically, S3 consists of two stages. In the first stage, a relabeling approach and normalized neighboring voting are utilized to guide efficient sample selection; in the second stage, a supervised loss (MixUp) and a self-consistency loss are used to train networks on the selected samples. S3 can be applied to both closed-set and open-set label noise, and it exhibits good performance on several benchmark datasets.
SP:22238bda86b9ade5ef7574767f30a6dd644d40c9
The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders
1 INTRODUCTION. Many modern generative models of choice (e.g., Generative Adversarial Networks (Goodfellow et al., 2014), Variational Autoencoders (Kingma & Welling, 2013)) are modeled as non-linear, possibly stochastic transformations of a simple latent distribution (e.g., a standard Gaussian). A particularly common task is modeling the inferential (encoder) direction: that is, modeling the posterior distribution on the latents z given an observable sample x. Such a task is useful both at train time and at test time. At train time, fitting generative models like variational autoencoders via maximum likelihood often relies on variational methods, which require the joint training of a generative model (i.e., generator/decoder) as well as an inference model (i.e., encoder) which models the posterior distribution of the latents given the observables. At test time, the posterior distribution very often has some practical use, e.g., useful, potentially interpretable feature embeddings for data (Salimans et al., 2016; Berthelot et al., 2018), "intervening" on the latent space to change the sample in some targeted manner (Shen et al., 2020), etc. As such, the question of the "complexity" of the inference model (i.e., the number of parameters needed to represent it using a neural-network-based encoder) as a function of the "complexity" of the forward model is of paramount importance, so that training is computationally tractable: Question: How complex does the inference (encoder) model need to be relative to the complexity of the generative (decoder) model? In this paper we identify an important property of the generative direction governing the complexity of the inference direction for variational autoencoders: bijectivity/invertibility of the mean of the generative direction.
We prove that when the mean of the generative direction is invertible, the complexity of the inference direction is not much greater than the complexity of the generative direction. Conversely, when the mean of the generative direction is not invertible, modulo standard computational complexity conjectures from cryptography, we can exhibit instances where the inference direction has to be much more complex. On the mathematical level, our techniques involve a neural simulation of a Langevin random walk to sample from the posterior of the latent variables, and uncover novel connections between Langevin diffusions and (hierarchical) deep latent Gaussians. On the lower bound side, we provide a reduction from the existence of one-way Boolean permutations in computational complexity: that is, permutations that are easy to calculate but hard to invert. We show that the existence of a small encoder for non-invertible generators would allow us to design an inverter for any Boolean permutation, thus violating the existence of one-way permutations. This is the first time such ideas have been applied to generative models. Note that a non-invertible generator results in a distribution supported on a lower-dimensional manifold. Thus, our results show that learning deep generative models is harder when data lies on a sufficiently complex low-dimensional manifold, corroborating similarly flavored empirical observations (Dai & Wipf, 2019; Arjovsky et al., 2017). 2 OUR RESULTS. The Variational Autoencoder (VAE) (Kingma & Welling, 2013) is one of the most commonly used paradigms in generative models. It is trained by fitting a generator which maps latent variables z to observables x, denoted by p_θ(x|z), as well as an encoder which maps the observables to the latent space, denoted by q_φ(z|x). Here φ and θ are the encoder and generator parameters, respectively.
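The standard training objective for this encoder/generator pair (stated formally in the next paragraph) can be estimated by Monte Carlo with the reparameterization trick. A self-contained sketch for a diagonal Gaussian encoder and a latent-Gaussian decoder p(x|z) = N(G(z), β²I); the toy linear G is ours for illustration, not from the paper:

```python
import numpy as np

def elbo(x, G, mu, logvar, beta2, n_mc=256, seed=0):
    """Monte-Carlo estimate of E_q[log p(x|z)] - KL(q(z|x) || N(0,I)) for
    p(x|z) = N(G(z), beta2*I) and q(z|x) = N(mu, diag(exp(logvar)))."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_mc, mu.size))
    z = mu + np.exp(0.5 * logvar) * eps           # reparameterization trick
    recon = np.stack([G(zi) for zi in z])         # decoder means G(z)
    sq = ((x - recon) ** 2).sum(axis=1)
    loglik = -0.5 * (sq / beta2 + x.size * np.log(2 * np.pi * beta2))
    # closed-form KL between a diagonal Gaussian and the standard normal
    kl = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar).sum()
    return loglik.mean() - kl

G = lambda z: 2.0 * z                             # toy bijective "generator"
x = np.array([1.0, -1.0])
val = elbo(x, G, mu=np.zeros(2), logvar=np.zeros(2), beta2=0.1)
```

Centering the encoder on the true preimage x/2 with small variance raises the bound, as maximizing over φ should.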
Given n training samples {x^(i)}_{i=1}^n, the VAE objective is given by

max_{φ,θ} (1/n) Σ_{i=1}^n E_{z ∼ q_φ(·|x^(i))} [ log p_θ(x^(i)|z) ] − KL( q_φ(z|x^(i)) || p(z) ),

where p(z) is typically chosen to be a standard Gaussian. This loss can be viewed as a variational relaxation of the maximum likelihood objective, where the encoder q_φ, in the limit of infinite representational power, is intended to model the posterior distribution p_θ(z|x^(i)). Setup: We will consider a setting in which the data distribution itself is given by some ground-truth generator G : R^{d_l} → R^{d_o}, and ask how complex (in terms of number of parameters) the encoder needs to be (as a function of the number of parameters of G) such that it approximates the posterior distribution p(z|x) of the generator. We will consider two standard probabilistic models for the generator and encoder, respectively. Definition 1 (Latent Gaussian). A latent Gaussian is the conditional distribution given by a stochastic pushforward of a Gaussian distribution. That is, for latent variable z ∈ R^{d_l} and observable x ∈ R^{d_o}, for a neural network G : R^{d_l} → R^{d_o} and noise parameter β², we have p(x|z) = N(G(z), β² I_{d_o}) and p(z) = N(0, I_{d_l}). In other words, a sample from this distribution can be generated by sampling independently z ∼ N(0, I_{d_l}) and ξ ∼ N(0, β² I_{d_o}) and outputting x = G(z) + ξ. This is a standard neural parametrization of a generator with a (scaled) identity covariance matrix, a fairly common choice in practical implementations of VAEs (Kingma & Welling, 2013; Dai & Wipf, 2019). We will also define a probabilistic model which is a composition of latent Gaussians (i.e., consists of multiple stochastic layers), which is also common, particularly when modeling encoders in VAEs, as they can model potentially non-Gaussian posteriors (Burda et al., 2015; Rezende et al., 2014): Definition 2 (Deep Latent Gaussian).
A deep latent Gaussian is the conditional distribution given by a sequence of stochastic pushforwards of a Gaussian distribution. That is, for observable z_0 ∈ R^{d_0} and latent variables {z_i ∈ R^{d_i}}_{i=1}^L, for neural networks {G_i : R^{d_{i−1}} → R^{d_i}}_{i=1}^L and noise parameters {β_i²}_{i=1}^L, the conditional distribution p(z_L|z_0) is a deep latent Gaussian when p(z_i|z_{i−1}) = N(G_i(z_{i−1}), β_i² I_{d_i}) for all i ∈ [L], and p(z_0) = N(0, I_{d_0}). In other words, a deep latent Gaussian is a distribution which can be sampled by ancestral sampling, one layer at a time. Note that this class of distributions is convenient as a choice for an encoder in a VAE, since compositions are amenable to the reparametrization trick of Kingma & Welling (2013): the randomness for each of the layers can be "presampled" and appropriately transformed (Burda et al., 2015; Rezende et al., 2014). Then, we ask the following: Question: If a VAE generator is modeled as a latent Gaussian (that is, p(x|z) ≡ N(G(z), β²I)) such that the corresponding G has at most N parameters, and we wish to approximate the posterior p(z|x) by a deep latent Gaussian such that the networks in it have at most N′ parameters in total, how large must N′ be as a function of N? We will work in the setting d_l = d_o = d, and prove a dichotomy based on the invertibility of G: namely, if G : R^d → R^d is bijective, and β ≤ O(1/(d^{1.5} √(log d/ε))), the posterior p(z|x) can be ε-approximated in total variation distance by a deep latent Gaussian of size N′ = O(N · poly(d, 1/β, 1/ε)). Thus, if the neural network G is invertible, then for a fixed ε and a small enough variance term β², we can approximate the posterior with a deep latent Gaussian only polynomially larger than G.
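Ancestral sampling from a deep latent Gaussian as in Definition 2 is straightforward to sketch; the tanh/linear layer functions below are illustrative placeholders for the networks G_i:

```python
import numpy as np

def sample_deep_latent_gaussian(Gs, betas, d0, seed=0):
    """Ancestral sampling, one layer at a time:
    z_0 ~ N(0, I_{d0});  z_i = G_i(z_{i-1}) + beta_i * eps_i."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(d0)
    for G, beta in zip(Gs, betas):
        mean = G(z)                               # layer mean G_i(z_{i-1})
        z = mean + beta * rng.standard_normal(mean.shape)
    return z

# toy two-layer stack standing in for {G_i}
Gs = [lambda z: np.tanh(z), lambda z: 3.0 * z]
zL = sample_deep_latent_gaussian(Gs, betas=[0.1, 0.1], d0=4)
```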
On the other hand, if G is not bijective and one-way functions exist (a widely believed computational complexity conjecture), we show there exists a VAE generator G : R^d → R^d of size polynomial in d for which the posterior p(z|x) cannot be approximated in total variation distance, even for an inverse-polynomial fraction of inputs x, unless the inference network is of size exponential in d. Remark 1: Note that it is in fact sufficient to investigate only the case where d_l = d_o = d. Let us elaborate why: • For Theorem 1, a function G : R^{d_l} → R^{d_o} can be bijective, Lipschitz and strongly invertible (i.e., satisfy Assumptions 1 and 2) only if d_l = d_o, so Theorem 1 (upper bound) would not make any sense unless the dimensions are equal. (Note: there are bijective maps between, say, R and R², e.g., space-filling curves, but these cannot be Lipschitz, as Lipschitz maps preserve the Hausdorff dimension, and the Hausdorff dimensions of R and R² are 1 and 2, respectively.) • Theorem 2 is a lower bound, meaning we aim to exhibit an example of a hard instance when G is not bijective. Having exhibited a hard instance for the d_l = d_o = d case, it is easy to construct a hard instance in the theorem where the output dimension is larger, which is the more common setting in practice. To do so, consider a circuit C̃ : {±1}^{d_l} → {±1}^{d_o} which equals C, the one-way circuit from Conjecture 1, on the first d_l coordinates, and is equal to 1 (i.e., is constant) on the last d_o − d_l coordinates. Then C̃ is one-way too: since the last d_o − d_l values are fixed to 1, if we can invert C̃, we can invert C. The reduction in the proof of Theorem 2 can then be performed with C̃ instead, giving a lower bound instance for the d_l < d_o case. 2.1 UPPER BOUNDS FOR BIJECTIVE GENERATORS. We first lay out the assumptions on the map G.
The first is a quantitative characterization of bijectivity; the second requires upper bounds on the derivatives of G up to order 3. We also have a centering assumption. We state these below. Assumption 1 (Strong invertibility). We assume that the latent and observable spaces have the same dimension (denoted d), and that G : R^d → R^d is bijective. Moreover, we assume there exists a positive constant m > 0 such that ∀z_1, z_2 ∈ R^d, ‖G(z_1) − G(z_2)‖ ≥ m · ‖z_1 − z_2‖. Remark 2: This is a stronger, quantitative version of invertibility. Furthermore, the infinitesimal version of this condition (i.e., ‖z_1 − z_2‖ → 0) implies that the smallest magnitude of the singular values of the Jacobian at any point is lower bounded by m, that is, ∀z ∈ R^d, min_{i∈[d]} |σ_i(J_G(z))| ≥ m > 0. Since m is strictly positive, this in particular means that the Jacobian is full rank everywhere. Remark 3: Note that we do not require G to be layerwise invertible (i.e., that each map from one layer to the next is invertible); if that were the case, at least in the limit β → 0, the existence of an inference encoder of comparable size to G would be rather obvious: we simply invert each layer one at a time. This is important, as many architectures based on convolutions perform operations which increase the dimension (i.e., map from a lower- to a higher-dimensional space), followed by pooling (which decreases the dimension). Nevertheless, it has been observed that these architectures are invertible in practice (Lipton & Tripathi (2017) manage to get almost 100% success at inverting an off-the-shelf trained model), thus justifying this assumption. Assumption 2 (Smoothness).
There exists a finite positive constant M > 0 such that ∀z_1, z_2 ∈ R^d, ‖G(z_1) − G(z_2)‖ ≤ M · ‖z_1 − z_2‖. Moreover, we assume that G has continuous partial derivatives up to order 3 at every z ∈ R^d, and that these derivatives are bounded by finite positive constants M_2 and M_3 as ∀z ∈ R^d, ‖∇²G(z)‖_op ≤ M_2 < ∞ and ‖∇³G(z)‖_op ≤ M_3 < ∞. Remark 4: This is a mild assumption, stating that the map G is smooth to third order. The infinitesimal version of the first condition means that the largest magnitude of the singular values of the Jacobian at any point is upper bounded by M, that is, ∀z ∈ R^d, max_{i∈[d]} |σ_i(J_G(z))| = ‖J_G(z)‖_op ≤ M < ∞. Remark 5: A neural network with activation function σ will satisfy this assumption when σ : R → R is Lipschitz and max_a |σ′(a)| and max_a |σ″(a)| are finite. Assumption 3 (Centering). The map G : R^d → R^d satisfies G(0) = 0. Remark 6: This assumption is for convenience in stating the bounds; we effectively need the "range" of the majority of the samples x under the distribution of the generator. All the results can easily be restated by including a dependence on ‖G(0)‖. Our main result is stated below. Throughout, the O(.) notation hides dependence on the map constants, namely m, M, M_2, M_3. We denote by d_TV(p, q) the total variation distance between the distributions p and q. Theorem 1 (Main, invertible generator). Consider a VAE generator given by a latent Gaussian with µ = 0, Σ = I, noise parameter β² and generator G : R^d → R^d satisfying Assumptions 1 and 2, which has N parameters and a differentiable activation function σ.
Then, for β ≤ O(1/(d^{1.5} √(log d/ε))) (1), there exists a deep latent Gaussian with N′ = O(N · poly(d, 1/β, 1/ε)) parameters and activation functions {σ, σ′, ρ}, and a neural network φ with O(N′) parameters and activation functions {σ, σ′, ρ}, where ρ(x) = x², such that with probability 1 − exp(−O(d)) over a sample x from the VAE generator, φ(x) produces values for the parameters of the deep latent Gaussian such that the distribution q(z|x) it specifies satisfies d_TV(q(z|x), p(z|x)) ≤ ε. Remark 7: Having an encoder q(z|x) with parameters produced by a neural network taking x as input is fairly standard in practice; it is known as amortized variational inference (Mnih & Gregor, 2014; Kingma & Welling, 2013; Rezende et al., 2014). Remark 8: The addition of ρ to the activation functions is for convenience of stating the bound. Using usual techniques in universal approximation, it can be simulated using any other smooth activation; see Appendix G.
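Assumptions 1 and 2 can be probed numerically: the singular values of the Jacobian of G should lie in [m, M] at every point. A finite-difference sketch on a toy map (the map and its constants are ours for illustration, not from the paper):

```python
import numpy as np

def jacobian_singular_values(G, z, eps=1e-6):
    """Central finite-difference Jacobian of G at z, and its singular values.
    Under Assumptions 1-2, min(svals) >= m and max(svals) <= M everywhere."""
    d = z.size
    J = np.empty((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        J[:, j] = (G(z + e) - G(z - e)) / (2 * eps)
    return np.linalg.svd(J, compute_uv=False)

# toy bijective map: the Jacobian is diag(2 + 0.5*cos(z)), so m = 1.5, M = 2.5
G = lambda z: 2.0 * z + 0.5 * np.sin(z)
svals = jacobian_singular_values(G, np.array([0.3, -1.2, 0.7]))
```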
This purely mathematical paper investigates the important question of how the necessary complexity of an inference subnetwork depends on the complexity of the corresponding generative subnetwork in a VAE model. The paper introduces a specific measure of invertibility and uses it to show that a generative subnetwork that is sufficiently strongly invertible only requires an inference model of similar complexity, while some "non-invertible" generative subnetworks require an exponentially more complex inference subnetwork. The paper provides a lengthy and comprehensive proof of both results.
SP:8846907bdbc8c93f43abdd4fac5f496a6bc15468
The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders
1 INTRODUCTION . Many modern generative models of choice ( e.g . Generative Adversarial Networks ( Goodfellow et al. , 2014 ) , Variational Autoencoders ( Kingma & Welling , 2013 ) ) are modeled as non-linear , possibly stochastic transformations of a simple latent distribution ( e.g . a standard Gaussian ) . A particularly common task is modeling the inferential ( encoder ) direction : that is , modeling the posterior distribution on the latents z given an observable sample x . Such a task is useful both at train time and at test time . At train time , fitting generative models like variational autoencoders via maximum likelihood often relies on variational methods , which require the joint training of a generative model ( i.e . generator/decoder ) , as well as an inference model ( i.e . encoder ) which models the posterior distribution of the latent given the observables . At test time , the posterior distribution very often has some practical use , e.g . useful , potentially interpretable feature embeddings for data ( Salimans et al. , 2016 ; Berthelot et al. , 2018 ) , “ intervening ” on the latent space to change the sample in some targeted manner ( Shen et al. , 2020 ) , etc . As such , the question of the “ complexity '' of the inference model ( i.e . number of parameters to represent it using a neural network-based encoder ) as a function of the “ complexity '' of the forward model is of paramount importance , so that training is computationally tractable : Question : How complex does the inference ( encoder ) model need to be relative to the complexity of the generative ( decoder ) model ? In this paper we identify an important property of the generative direction governing the complexity of the inference direction for variational autoencoders : bijectivity/invertibility of the mean of the generative direction . 
We prove that when the mean of the generative direction is invertible , the complexity of the inference direction is not much greater than the complexity of the generative direction . Conversely , when the mean of the generative direction is not invertible , modulo standard computational complexity conjectures from cryptography , we can exhibit instances where the inference direction has to be much more complex . On the mathematical level , our techniques involve a neural simulation of a Langevin random walk to sample from the posterior of the latent variables and uncover novel connections between Langevin diffusions and ( hierarchical ) deep latent Gaussians . On the lower bound side , we provide a reduction from the existence of one-way Boolean permutations in computational complexity : that is , permutations that are easy to calculate , but hard to invert . We show that the existence of a small encoder for non-invertible generators would allow us to design an invertor for any Boolean permutation , thus violating the existence a one-way permutation . This is the first time such ideas have been applied to generative models . Note that a non-invertible generator results in a distribution supported on a lower dimensional manifold . Thus , our results show that learning deep generative models is harder when data lies on a sufficiently complex low-dimensional manifold , corroborating similarly flavored empirical observations ( Dai & Wipf , 2019 ; Arjovsky et al. , 2017 ) . 2 OUR RESULTS . The Variational Autoencoder ( VAE ) ( Kingma & Welling , 2013 ) is one of the most commonly used paradigms in generative models . It ’ s trained by fitting a generator which maps latent variables z to observables x , denoted by pθ ( x|z ) , as well as an encoder which maps the observables to the latent space , denoted by qφ ( z|x ) . Here φ and θ are the encoder parameters and generator parameters respectively . 
Given n training samples { x ( i ) } ni=1 , the VAE objective is given by max φ , θ 1 n n∑ i=1 Ez∼qφ ( .|x ( i ) ) [ log pθ ( x ( i ) |z ) ] −KL ( qφ ( z|x ( i ) ) ||p ( z ) ) where p ( z ) is typically chosen to be a standard Gaussian . This loss can be viewed as a variational relaxation of the maximum likelihood objective , where the encoder qφ , in the limit of infinite representational power , is intended to model the posterior distribution pθ ( z|x ( i ) ) . Setup : We will consider a setting in which the data distribution itself is given by some ground-truth generator G : Rdl → Rdo , and ask how complex ( in terms of number of parameters ) the encoder needs to be ( as a function of the number of parameters of G ) , s.t . it approximates the posterior distribution p ( z|x ) of the generator . We will consider two standard probabilistic models for the generator/encoder respectively . Definition 1 ( Latent Gaussian ) . A latent Gaussian is the conditional distribution given by a stochastic pushforward of a Gaussian distribution . That is , for latent variable z ∈ Rdl and observable x ∈ Rdo , for a neural network G : Rdl → Rdo and noise parameter β2 , we have p ( x|z ) = N ( G ( z ) , β2Ido ) and p ( z ) = N ( 0 , Idl ) . In other words , a sample from this distribution can be generated by sampling independently z ∼ N ( 0 , Idl ) and ξ ∼ N ( 0 , β2Ido ) and outputting x = G ( z ) + ξ . This is a standard neural parametrization of a generator with ( scaled ) identity covariance matrix , a fairly common choice in practical implementations of VAEs ( Kingma & Welling , 2013 ; Dai & Wipf , 2019 ) . We will also define a probabilistic model which is a composition of latent Gaussians ( i.e . consists of multiple stochastic layers ) , which is also common , particularly when modeling encoders in VAEs , as they can model potentially non-Gaussian posteriors ( Burda et al. , 2015 ; Rezende et al. , 2014 ) : Definition 2 ( Deep Latent Gaussian ) . 
A deep latent Gaussian is the conditional distribution given by a sequence of stochastic pushforwards of a Gaussian distribution . That is , for observable z0 ∈ Rd0 and latent variables { zi ∈ Rdi } Li=1 , for neural networks { Gi : Rdi−1 → Rdi } Li=1 and noise parameters { β2i } Li=1 , the conditional distribution p ( zL|z0 ) is a deep latent Gaussian when p ( zi|zi−1 ) = N ( Gi ( zi−1 ) , β2i Idi ) , ∀i ∈ [ L ] and p ( z0 ) = N ( 0 , Id0 ) . In other words , a deep latent Gaussian is a distribution , which can be sampled by ancestral sampling , one layer at a time . Note that this class of distributions is convenient as a choice for an encoder in a VAE , since compositions are amenable to the reparametrization trick of Kingma & Welling ( 2013 ) —the randomness for each of the layers can be “ presampled ” and appropriately transformed ( Burda et al. , 2015 ; Rezende et al. , 2014 ) . Then , we ask the following : Question : If a VAE generator is modeled as a latent Gaussian ( that is , p ( x|z ) ≡ N ( G ( z ) , β2I ) ) , s.t . the corresponding G has at most N parameters , and we wish to approximate the posterior p ( z|x ) by a deep latent Gaussian s.t . the total size of the networks in it have at most N ′ parameters , how large must N ′ be as function of N ? We will work in the setting dl = do = d , and prove a dichotomy based on the invertibility of G : namely , if G : Rd → Rd is bijective , and β ≤ O ( 1 d1.5 √ log d/ ) , the poste- rior p ( z|x ) can be -approximated in total variation distance by a deep latent Gaussian of size N ′ = O ( N · poly ( d , 1/β , 1/ ) ) . Thus , if the neural network G is invertible , and for a fixed and a small-enough variance term β2 , we can approximate the posterior with a deep latent Gaussian polynomially larger than G. 
On the other hand , if G is not bijective , if one-way-functions exist ( a widely believed computational complexity conjecture ) , we will show there exists a VAE generator G : Rd → Rd of size polynomial in d , for which the posterior p ( z|x ) can not be approximated in total variation distance for even an inverse polynomial fraction of inputs x , unless the inferential network is of size exponential in d. Remark 1 : Note that it is in fact sufficient to investigate only the case where dl = do = d. Let us elaborate why : • For Theorem 1 , a function G : Rdl → Rdo can be bijective , Lipschitz and strongly invertible ( i.e . satisfies Assumptions 1 and 2 ) only if dl = do , so Theorem 1 ( upper bound ) would not make any sense unless the dimensions are equal . ( Note : there are bijective maps between , say , R and R2 , e.g . space-filling curves , but these can not be Lipschitz , as Lipschitz maps preserve the Hausdorff dimension , and the Hausdorff dimensions of R and R2 are 1 and 2 respectively ) . • Theorem 2 is a lower bound – meaning we aim to exhibit an example of a hard instance when G is not bijective . Having exhibited a hard instance for the dl = do = d case , it is trivially easy to construct a hard instance in the theorem where the output dimension is larger—which is the more common setting in practice . To do so , consider a circuit C̃ : { ±1 } dl → { ±1 } do , which equals to C , the one-way-circuit from Conjecture 1 on the first dl coordinates , and is equal to 1 ( i.e . is constant ) on the last do − dl coordinates . Then C̃ is one-way too—since the last do − dl values are just fixed to 1 , if we can invert it , we can invert C too . The reduction in the proof of Theorem 2 then can just be performed with C̃ instead , giving a lower bound instance for the dl < do case . 2.1 UPPER BOUNDS FOR BIJECTIVE GENERATORS . We first lay out the assumptions on the mapG . 
The first is a quantitative characterization of bijectivity; the second requires upper bounds on the derivatives of G up to order 3. We also have a centering assumption. We state these below. Assumption 1 (Strong invertibility). We will assume that the latent and observable spaces have the same dimension (denoted d), and that G : R^d → R^d is bijective. Moreover, we will assume there exists a positive constant m > 0 such that ∀ z_1, z_2 ∈ R^d, ‖G(z_1) − G(z_2)‖ ≥ m · ‖z_1 − z_2‖. Remark 2: This is a stronger, quantitative version of invertibility. Furthermore, the infinitesimal version of this condition (i.e., ‖z_1 − z_2‖ → 0) implies that the smallest singular value of the Jacobian at any point is lower bounded by m, that is, ∀ z ∈ R^d, min_{i ∈ [d]} |σ_i(J_G(z))| ≥ m > 0. Since m is strictly positive, this in particular means that the Jacobian is full rank everywhere. Remark 3: Note that we do not require G to be layerwise invertible (i.e., that each map from one layer to the next is invertible); if that were the case, at least in the limit β → 0, the existence of an inference decoder of comparable size to G would be rather obvious: we could simply invert each layer one at a time. This is important, as many architectures based on convolutions perform operations which increase the dimension (i.e., map from a lower- to a higher-dimensional space), followed by pooling (which decreases the dimension). Nevertheless, it has been observed that these architectures are invertible in practice: Lipton & Tripathi (2017) manage to get almost 100% success at inverting an off-the-shelf trained model, thus justifying this assumption. Assumption 2 (Smoothness).
There exists a finite positive constant M > 0 such that ∀ z_1, z_2 ∈ R^d, ‖G(z_1) − G(z_2)‖ ≤ M · ‖z_1 − z_2‖. Moreover, we will assume that G has continuous partial derivatives up to order 3 at every z ∈ R^d, and that the derivatives are bounded by finite positive constants M_2 and M_3 as ∀ z ∈ R^d, ‖∇²G(z)‖_op ≤ M_2 < ∞, ‖∇³G(z)‖_op ≤ M_3 < ∞. Remark 4: This is a mild assumption, stating that the map G is smooth to third order. The infinitesimal version of it means that the largest singular value of the Jacobian at any point is upper bounded by M, that is, ∀ z ∈ R^d, max_{i ∈ [d]} |σ_i(J_G(z))| = ‖J_G(z)‖_op ≤ M < ∞. Remark 5: A neural network with activation function σ will satisfy this assumption when σ : R → R is Lipschitz and max_a |σ′(a)| and max_a |σ′′(a)| are finite. Assumption 3 (Centering). The map G : R^d → R^d satisfies G(0) = 0. Remark 6: This assumption is for convenience of stating the bounds; we effectively need the "range" of the majority of the samples x under the distribution of the generator. All the results can easily be restated by including a dependence on ‖G(0)‖. Our main result is stated below. Throughout, the O(·) notation hides dependence on the map constants, namely m, M, M_2, M_3. We will denote by d_TV(p, q) the total variation distance between the distributions p and q. Theorem 1 (Main, invertible generator). Consider a VAE generator given by a latent Gaussian with µ = 0, Σ = I, noise parameter β², and generator G : R^d → R^d satisfying Assumptions 1 and 2, which has N parameters and a differentiable activation function σ.
Then, for β ≤ O(1 / (d^{1.5} √(log(d/ε)))) (1) there exists a deep latent Gaussian with N′ = O(N · poly(d, 1/β, 1/ε)) parameters and activation functions {σ, σ′, ρ}, and a neural network φ with O(N′) parameters and activation functions {σ, σ′, ρ}, where ρ(x) = x², such that with probability 1 − exp(−O(d)) over a sample x from the VAE generator, φ(x) produces values for the parameters of the deep latent Gaussian such that the distribution q(z|x) it specifies satisfies d_TV(q(z|x), p(z|x)) ≤ ε. Remark 7: Having an encoder q(z|x) with parameters produced by a neural network taking x as input is fairly standard in practice; it is known as amortized variational inference (Mnih & Gregor, 2014; Kingma & Welling, 2013; Rezende et al., 2014). Remark 8: The addition of ρ to the activation functions is for convenience of stating the bound. Using usual techniques in universal approximation, it can be simulated using any other smooth activation; see Appendix G.
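For a concrete map, the constants m and M in Assumptions 1 and 2 can be estimated empirically from random pairs of points. The sketch below uses a hypothetical toy map G (a well-conditioned linear map plus a mild nonlinearity), not the paper's generator; the names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[2.0, 0.3], [0.1, 1.5]])  # well-conditioned linear part

def G(z):
    # Toy map on R^2: linear map plus a 0.1-Lipschitz nonlinearity, so its
    # difference ratios stay within [sigma_min(A) - 0.1, sigma_max(A) + 0.1],
    # satisfying both Assumption 1 (m > 0) and Assumption 2 (M < infinity).
    return A @ z + 0.1 * np.tanh(z)

def empirical_lipschitz_bounds(f, d, n_pairs=2000):
    # Estimate m and M as the min/max of ||f(z1) - f(z2)|| / ||z1 - z2||
    # over random pairs of points.
    ratios = []
    for _ in range(n_pairs):
        z1, z2 = rng.normal(size=d), rng.normal(size=d)
        den = np.linalg.norm(z1 - z2)
        if den > 1e-9:
            ratios.append(np.linalg.norm(f(z1) - f(z2)) / den)
    return min(ratios), max(ratios)

m_hat, M_hat = empirical_lipschitz_bounds(G, d=2)
```

Such sampled ratios only bound m from above and M from below, but they give a quick sanity check that a candidate generator is far from violating strong invertibility.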
The paper answers the question of how complex inference models need to be in order to accurately estimate posterior distributions. The conclusion is that when a latent Gaussian model with N parameters satisfies (1) strong invertibility and (2) third-order smoothness, the posterior can be approximated by a deep latent Gaussian model whose size is polynomial in N. A special case is also given to show that strong invertibility is necessary for the conclusion.
SP:8846907bdbc8c93f43abdd4fac5f496a6bc15468
Relational Learning with Variational Bayes
1 INTRODUCTION. The American Psychological Association defines relational learning as (VandenBos & APA, 2007): Definition 1.1 (Relational learning). Learning to differentiate among stimuli on the basis of relational properties rather than absolute properties. In other words, relational learning refers to the ability to recognize and respond to relationships (called relational properties) among objects irrespective of the nature of those objects (called absolute properties). For example (attributed to Doumas & Hummel (2013)), how do we come to understand that two circles are the same shape in the same way that two squares are? In this example, "same-shape" is the relational property and object shape is the absolute property. Relational learning has long been recognized as a hallmark of human cognition with strong implications for both human-like learning capabilities and generalization capacity (Biederman, 1987; Medin et al., 1993; Gentner, 2003; Penn et al., 2008; Holyoak, 2012; Gentner & Smith, 2012). We refer the interested reader to the provided references for a comprehensive discussion of this subject. Contemporaneously, research on learning data relationships, also commonly called "relational learning", has flourished in the machine learning community, where the overarching goal is learning in a context where there may be relationships between learning examples, or where these examples may have a complex internal structure (i.e., consist of multiple components with possible relationships between those components) (Getoor & Taskar, 2007; De Raedt et al., 2016). We argue that the key difference between the two "relational learning" definitions and their learning objectives is that Definition 1.1 takes the relationship-learning problem one step further by requiring that the data relationships be learned only on the basis of relational properties rather than absolute properties.
To the best of our knowledge, this important distinction (learning relationships irrespective of the absolute properties) has not been rigorously studied in the unsupervised learning community, where most existing methods either encourage, or at least do not constrain, the learning of relationships through absolute properties. In this work, we propose an unsupervised learning method, variational relational learning (VRL), for addressing the relational learning problem as defined by Definition 1.1. At its core, VRL encapsulates the relational learning problem in a probabilistic graphical model (PGM) in which we perform inference to learn about the relational property and carry out other relational processing tasks. Our contribution in this paper is threefold: First, we propose a probabilistic formulation of the relational learning problem defined by Definition 1.1. Second, we encapsulate the relational learning problem in a PGM in which we perform learning and inference. Third, we propose an efficient and effective learning algorithm that can be trained end-to-end and completely unsupervised. ∗The author completed this research while working at ExxonMobil. Corresponding author e-mail: liu.kuanghung@gmail.com 2 PROBLEM DEFINITION. We focus on a canonical form of the relational learning problem where we observe a paired dataset X = {(a^(i), b^(i)) | i ∈ [1..N]} consisting of N i.i.d. samples generated from a joint distribution p(a ∈ A, b ∈ B). We dissect the information in X into the absolute property and the relational property, where the absolute property represents specific features that describe the individual a and b, and the relational property represents the relationship between a and b irrespective of their absolute properties. In this work, we interpret the absolute property of a and b as any information that characterizes (even if only partially) the marginal distributions p(a) and p(b).
We propose to represent the relational property as a latent random variable (r.v.) z that satisfies the following constraints: (i) p(a, z) = p(a) p(z), (ii) p(b, z) = p(b) p(z), (iii) p(a, z | b) ≠ p(a | b) p(z | b), (iv) p(b, z | a) ≠ p(b | a) p(z | a), (1) where in Eq. 1(i) and 1(ii) we interpret the specification of the relational property in Definition 1.1 (learning relationships irrespective of the absolute properties) as meaning statistical independence, while in Eq. 1(iii) and 1(iv) we ensure that the r.v. z contains relevant (relationship) information that further informs a and b about one another, i.e., H(a | b, z) < H(a | b) and H(b | a, z) < H(b | a), where H(· | ·) is the conditional entropy. It is easy to see that the following conditions are necessary for the r.v. z to exist: (1) H(b | a) > 0 and H(a | b) > 0, i.e., a and b cannot be fully determined by each other; (2) the r.v.s a, b, z are not mutually independent, i.e., p(a, b, z) ≠ p(a) p(b) p(z). Our goal for relational learning is to learn a relational property z that satisfies Eq. 1 in a completely unsupervised fashion. A motivating example for Eq. 1 is provided in Appendix A.1. In addition, we are interested in two related relational processing tasks, relational discrimination and relational mapping, defined as (VandenBos & APA, 2007): Definition 2.1 (Relational discrimination). A discrimination based on the relationship between or among stimuli rather than on absolute features of the stimuli. Definition 2.2 (Relational mapping). The ability to apply what one knows about one set of elements to a different set of elements. Relational discrimination allows us to differentiate (a^(i), b^(i)) from (a^(j), b^(j)) based on their relational properties.
And relational mapping allows us to apply the relational property of (a^(i), b^(i)) to a different set of data, for example, to deduce that b^(j) is related to a^(j) in the same way that b^(i) is related to a^(i). 3 METHOD. Learning and inferring a relational property z that satisfies all four constraints in Eq. 1 is a challenging problem due to the hard independence constraints in Eq. 1(i) and 1(ii). To overcome this challenge, we first introduce VRL as a tractable learning method that satisfies 3 (out of 4) constraints in Eq. 1, namely Eq. 1(i), 1(iii), and 1(iv). We then discuss VRL's unique optimization challenges, which are partially attributable to its relaxation of the independence requirement in Eq. 1(ii). 3.1 VARIATIONAL RELATIONAL LEARNING. The proposed VRL method consists of two parts: first, we encapsulate the relational learning problem in a PGM, called VRL-PGM; we then formulate various relational processing tasks as performing inference and learning in VRL-PGM. The VRL-PGM model, shown in Fig. 1, samples data a, z, and b from parametric families of distributions pθ(a), pθ(z), pθ(b | a, z) that are differentiable almost everywhere with respect to (w.r.t.) a, z, and θ. In practice, we observe only a set of independent realizations {(a^(i), b^(i)) | i ∈ [1..N]}, while the true parameter θ* and the corresponding latent variables z^(i) are unobserved. A well-known property of the PGM shown in Fig. 1 is that the r.v.s a and z are independent with no variables observed, but not conditionally independent when b is observed, i.e., pθ(a, z) = pθ(a) pθ(z), pθ(a, z | b) ≠ pθ(a | b) pθ(z | b) (Bishop, 2006). Consequently, VRL-PGM can be viewed as a parametric relational learning model that satisfies 3 (out of 4) constraints in Eq. 1, namely Eq. 1(i), 1(iii), and 1(iv) (note that Eq. 1(iv) is trivially satisfied in VRL-PGM).
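A minimal discrete example makes the four constraints in Eq. 1 concrete (a sketch, not from the paper): take a and z to be independent uniform bits and b = a XOR z. Then z is marginally independent of a and of b, yet becomes dependent on each once the other variable is observed.

```python
import itertools
import numpy as np

# Toy joint: a and z are independent uniform bits and b = a XOR z.
p = np.zeros((2, 2, 2))          # axes: (a, b, z)
for a, z in itertools.product([0, 1], repeat=2):
    p[a, a ^ z, z] = 0.25

p_az = p.sum(axis=1)             # marginal of (a, z)
p_a, p_z = p_az.sum(axis=1), p_az.sum(axis=0)
assert np.allclose(p_az, np.outer(p_a, p_z))          # (i): a independent of z

p_bz = p.sum(axis=0)             # marginal of (b, z)
p_b = p_bz.sum(axis=1)
assert np.allclose(p_bz, np.outer(p_b, p_z))          # (ii): b independent of z

# (iii): given b = 0, a and z are perfectly correlated (a = z), hence dependent.
p_az_given_b0 = p[:, 0, :] / p[:, 0, :].sum()
pa_b0, pz_b0 = p_az_given_b0.sum(axis=1), p_az_given_b0.sum(axis=0)
assert not np.allclose(p_az_given_b0, np.outer(pa_b0, pz_b0))   # (iii) holds
# (iv) holds by the symmetric argument with the roles of a and b swapped.
```

Here z carries purely relational information ("do a and b match?"), while revealing nothing about either marginal, which is exactly the separation Eq. 1 formalizes.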
Further discussion of the connection between VRL-PGM and the relational learning problem is provided in Appendix A.2. Having established VRL-PGM, our primary learning objective is to approximate the unknown true likelihood function pθ(b | a, z) and posterior pθ(z | a, b). Learning pθ(z | a, b) provides us with a way to infer (a^(i), b^(i))'s relational property z^(i); moreover, it serves as a basis for performing relational discrimination, where we compare relational properties between different pairs of data. Learning pθ(b | a, z) allows us to perform relational mapping, where we use the relational property of (a^(i), b^(i)) to map a^(j) to b^(j), i.e., b^(j) ∼ pθ(b | a^(j), z^(i)) where z^(i) ∼ pθ(z | a^(i), b^(i)). We estimate the parameters of pθ(b | a, z) by following the maximum-likelihood (ML) principle, and approximate the true posterior pθ(z | a, b) with a variational Bayesian approach. More specifically, we use a variational distribution qφ(z | a, b), parameterized by φ, to approximate the unknown (and often intractable) true posterior. Both θ and φ are learned by maximizing a variational lower bound, L(θ, φ; a^(i), b^(i)) (abbreviated as L^(i)), on the conditional log-likelihood log pθ(b^(i) | a^(i)) (the derivation is provided in Appendix C): L^(i) = E_{qφ(z | a^(i), b^(i))} [ log pθ(b^(i) | a^(i), z) + log pθ(z) − log qφ(z | a^(i), b^(i)) ]. (2) Recall that learning z independent of a is central to our relational learning goal. While this independence assumption is built into VRL-PGM, the learning objective L^(i) does not explicitly force z to be independent of a, nor does it penalize learning a dependent z. In practice, there may be numerous reasons that could break this independence assumption, e.g., insufficient training data, failure to reach the global optimum, non-identifiability of the model, etc.
, and it may be desirable to explicitly enforce independence between z and a. One way to achieve this is to introduce a non-positive function that measures the dependency between a and z, with its maximum attained when they are independent. For example, we can append the negative mutual information between z and a, −I(z; a) = −D_KL( pθ(z, a) ‖ pθ(z) pθ(a) ), to L^(i): L^(i) = E_{qφ(z | a^(i), b^(i))} [ log pθ(b^(i) | a^(i), z) + log pθ(z) − log qφ(z | a^(i), b^(i)) ] − I(z; a). (3) Since I(z; a) ≥ 0, and I(z; a) = 0 if and only if z and a are independent, the addition of −I(z; a) to L^(i) not only maintains the validity of the lower bound but also retains its quality (z and a are independent in VRL-PGM).
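The per-example bound in Eq. 2 can be estimated by Monte Carlo with reparametrized samples from qφ. The sketch below uses fixed Gaussian stand-ins for the decoder pθ(b | a, z) and encoder qφ(z | a, b); in VRL these would be trained neural networks, so every concrete form here (the means, variances, and names) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_gauss(x, mu, sigma):
    # Log-density of an isotropic Gaussian N(mu, sigma^2 I), summed over dims.
    return -0.5 * np.sum(((x - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

# Hypothetical stand-ins for the learned components of VRL:
def decoder_mean(a, z):
    return a + z                              # p(b | a, z) = N(a + z, I)

def encoder(a, b):
    return b - a, 0.5 * np.ones_like(a)       # q(z | a, b) = N(b - a, 0.25 I)

def elbo(a, b, n_samples=256):
    # Monte Carlo estimate of Eq. 2:
    #   E_q [ log p(b|a,z) + log p(z) - log q(z|a,b) ]
    mu, sigma = encoder(a, b)
    vals = []
    for _ in range(n_samples):
        z = mu + sigma * rng.normal(size=mu.shape)      # reparametrized sample
        vals.append(log_gauss(b, decoder_mean(a, z), 1.0)   # log p(b|a,z)
                    + log_gauss(z, 0.0, 1.0)                # log p(z)
                    - log_gauss(z, mu, sigma))              # - log q(z|a,b)
    return np.mean(vals)

a, b = rng.normal(size=2), rng.normal(size=2)
L = elbo(a, b)
```

The Eq. 3 variant would subtract an estimate of I(z; a) from this quantity; estimating mutual information requires its own machinery and is omitted from the sketch.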
This paper introduces a variational method for relational learning. It first frames relational learning as learning based on relational properties instead of absolute properties, and introduces the required conditions (Eq. 1). It then proposes VRL-PGM together with a variational lower bound. To eliminate the information shortcut, it introduces relation-preserving data augmentation (RPDA). Experiments show that the method is able to perform relational discrimination and relational mapping on a variant of MNIST and on Yale face datasets.
SP:e3a0b2cb1a7e2ed24eb413cbd4545cfcddc30a69
This paper proposes a relational learning method based on variational Bayes. The main idea is to learn relations among objects independently of each object's own properties. The model theory is discussed: the authors define the problem and discuss solutions for some limitations of the model that arise when a relation between objects A and B can be found by looking only at one of the objects' own properties. The paper is quite well written. Algorithms are provided, but not the code. As far as I understood, results are obtained on "benchmark" datasets, where synthetic versions of these datasets are used to create the relations between the instances of the original dataset and of the synthetic one.
SP:e3a0b2cb1a7e2ed24eb413cbd4545cfcddc30a69
The Evolution of Uncertainty of Learning in Games
1 INTRODUCTION. A primary goal of ML research is to understand the behaviors of learning algorithms in various settings. One standard approach is to determine, for each initial condition, whether a learning algorithm converges to a local optimum or stable state. Yet, in the context of online learning in games, and more generally in distributed multi-agent learning, it is natural to consider the evolution of a probability distribution over initial conditions instead. In these settings, each agent forms an initial belief on her own, and she typically does not reveal her belief to other agents or to external modelers/analysts. For a modeler to understand the behaviors of the system while being uncertain of the agents' beliefs, one natural approach is to infer the distribution of initializations from past observations. The modeler then uses this inference to predict the likelihoods of different outcomes in the future, either by simulation or by analysis. Random initialization can also arise from random external signals, e.g., weather, as well as from noisy measurements. For readers who want more intuition about, and the mathematical aspects of, such models, see Appendix A. In such cases, it is critical to understand whether the initial probability distribution evolves toward stability, or whether its uncertainty gets amplified. Such analysis provides insight into the learning system's robustness against random initialization and environment perturbations. If a coordinator of the system desires stability but the analysis shows the uncertainty gets amplified, she ought to coordinate with the agents to make changes, e.g., encourage them to use other online learning algorithms. To analyze how uncertainty evolves, we need a measure of it. A natural choice is entropy.
In his seminal 1948 work, Claude Shannon formulated an axiomatic foundation of information theory and introduced the definition of Shannon entropy (SE): given a discrete random variable with possible outcomes x_1, ..., x_n which occur with probabilities p_1, ..., p_n, its SE is −Σ_{i=1}^{n} p_i log p_i = E[log(1/p_i)]. Entropy is the canonical measure of uncertainty: when p_i = 1 for some i, the distribution is considered certain and its entropy is zero; the uniform distribution is viewed as the most uncertain, and it attains the maximum possible entropy of log n. For continuous random variables with probability density function g, Shannon proposed an extension of SE called the differential entropy (DE), defined as E[log(1/g(x))] = −∫_X g(x) log g(x) dx, where X is the support set of the random variable. We will analyze how DE evolves in multi-agent learning. Our Contributions. In our model, a learning-in-game system starts with a distribution over a set of initial conditions in the cumulative payoff space, a coordinate system which is inherent in the implementations of many online learning algorithms. The initial distribution evolves over time depending on the combination of the agents' learning algorithms and the underlying game. We focus on two popular learning algorithms, Multiplicative Weights Update (MWU) and its optimistic variant (OMWU).¹ The game settings include the standard two-player matrix games, and one-population games, which are fundamental in biological evolution models. We show that the DE of a broad range of learning-in-game systems increases linearly with time (up to a certain limit), formalizing their increased unpredictability. Such systems include MWU in zero-sum games, OMWU in coordination games, and MWU in certain one-population games which reward intra-species play more than inter-species play. Our results apply to any smooth initial distribution with bounded support.
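Both entropy notions are easy to compute directly. The sketch below checks the two endpoint cases for SE and the closed-form DE of a Gaussian; the function names are our own.

```python
import numpy as np

def shannon_entropy(p):
    # SE = -sum_i p_i log p_i, with the convention 0 log 0 = 0.
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz))

def gaussian_differential_entropy(sigma2, d=1):
    # Closed-form DE of N(mu, sigma2 * I_d): (d/2) log(2 pi e sigma2).
    return 0.5 * d * np.log(2 * np.pi * np.e * sigma2)

h_uniform = shannon_entropy(np.full(4, 0.25))
assert np.isclose(h_uniform, np.log(4))        # uniform attains the maximum log n
assert shannon_entropy([1.0, 0.0]) == 0.0      # a certain outcome has zero entropy

# Variance growing like Theta(T) yields only (1/2) log T growth of DE, the
# logarithmic benchmark against which linear DE growth is compared.
T = 100.0
growth = gaussian_differential_entropy(T) - gaussian_differential_entropy(1.0)
```

This is why a system whose DE grows linearly in T is qualitatively far more unpredictable than one that merely accumulates independent noise.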
At this point one may naturally wonder: What level of uncertainty does a linear increase of DE indicate? What are its implications? To answer these questions, it is best to compare against simple benchmarks. Consider the following simple learning system: in each round, the payoffs for each action are generated from a uniform distribution over [−1, 1]. In the cumulative payoff space, the distribution at time T converges to a multivariate Gaussian distribution with variance Θ(T), and hence the entropy grows at a rate of O(log T), much slower than linear growth. In Section 4.1, we present an implication of linear DE growth in learning-in-game systems. Briefly speaking, the DE cannot increase substantially for an indefinite amount of time in such systems. Thus, the distribution after a sufficiently long time must concentrate in a region that yields slower DE growth or decline; such a region does not cover any point in the cumulative payoff space that corresponds to an interior Nash equilibrium. We will refer to this phenomenon as "the grand escape". For the implications in information theory, we refer readers to Chapter 8 of Cover & Thomas (2006). The central tool in analyzing the changes of DE is the Jacobian matrix of our multi-agent dynamical system. This is also the key notion in volume analysis, which was used in a recent series of papers to demonstrate Lyapunov chaos in learning-in-game systems (Cheung & Piliouras (2019; 2020); Cheung & Tao (2020)). By showing that the determinant of the Jacobian matrix is strictly above 1 in a large domain of the cumulative payoff space, they showed that the volume (Lebesgue measure) of a small set of initial conditions blows up exponentially to become a set of large diameter, thus demonstrating a general Lyapunov chaos phenomenon in learning in games. Indeed, the same property of the Jacobian matrix guarantees a linear increase of DE.
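The instability of MWU around an interior Nash equilibrium in a zero-sum game can be seen in a minimal simulation of Matching Pennies in the cumulative payoff space. This is a sketch: the step size, horizon, and initial perturbation are illustrative choices, not the paper's exact experiment.

```python
import numpy as np

def softmax(y):
    e = np.exp(y - y.max())
    return e / e.sum()

# Matching Pennies: the row player wants to match, the column player to mismatch.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row payoffs; the column player gets -A

def mwu_step(y1, y2, eta=0.01):
    # One MWU step for both players, tracked in the cumulative-payoff space:
    # strategies are softmaxes of cumulative payoffs.
    x1, x2 = softmax(y1), softmax(y2)
    return y1 + eta * (A @ x2), y2 + eta * (-A.T @ x1)

def deviation(y1, y2):
    # Distance of the state from the equal-cumulative-payoff (Nash) line.
    return np.hypot(y1[0] - y1[1], y2[0] - y2[1])

# Start at a tiny perturbation of the interior Nash equilibrium (uniform play).
y1, y2 = np.array([0.01, 0.0]), np.array([0.0, 0.01])
d0 = deviation(y1, y2)
for _ in range(40000):
    y1, y2 = mwu_step(y1, y2)
d_final = deviation(y1, y2)   # grows: MWU spirals away from the equilibrium
```

Near the equilibrium, each step acts (to first order) as a rotation scaled by √(1 + η²), so the perturbation is amplified multiplicatively rather than damped, which is the outward spiral behind the vortex-shaped densities discussed below.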
Additionally , we present the first Jacobian analysis of online learning in one-population games . Finally , our DE analysis is robust against small perturbations . Consequently , our results extend to the setting where the game in each round can be different , as long as the payoffs are perturbations of each other . Such settings capture many games in the real world , where we know the “ reference values ” of the payoffs , but the accurate payoffs in each round differ slightly due to unknown external ( random or adversarial ) factors . Our model and analysis can be viewed as an a strengthening of volume analysis . Volume analysis focuses on whether it is possible to reach a state or not , whereas in our model we are also concerned with how probable/likely it is to reach a state . More explicitly , a state can be reached but only with a tiny chance . When studying chaos and volume , such states matter , but less so when studying uncertainty as their contributions to entropy will be low . To illuminate this , we simulate MWU ( with step-size 0.01 ) in Matching Pennies game , and present a few plots in Figure 2 . The top-left plot shows the initial set , which is a small square around the unique Nash equilibrium . The top-right plot shows the range of possibility after 40000 steps ( a similar figure was first presented by Cheung & Piliouras ( 2019 ) ) . In our model , we assume the initial distribution is uniform over the small square , and plot the heat-map of probability densities after 40000 steps ( bottom-left ) . We observe that the states in the boundary of the vortex are more probable to occur , while the densities around the Nash equilibrium decline . The bottom-right plot shows the densities that generate the heat-map . Further Related Work . 
Learning models that explicitly study the effects random initialization have received relatively little attention in game theory where static equilibrium concepts are typically 1As we shall point out in Section 3 , all our results about MWU also extend easily to the broad family of Follow-the-Regularized-Leader ( FTRL ) algorithms , which is a generalization of MWU . For clarity of our presentation , we choose to focus on MWU in this paper . the preferred solution . Lahkar & Seymour ( 2013 ) studies online learning under random initialization in population dynamics , where one population game is a common payoff model . In population dynamics , there exists a large population of agents who have different initial choices of mixed strategies , modeled as a distribution . The dynamics proceed by randomly pairing up agents in each round to play a game . Lahkar & Seymour ( 2014 ) extends this framework to other learning dynamics to describe the evolution of a distribution of mixed strategies in a population playing variants of RockPaper-Scissors games establishing local convergence to the interior equilibrium . Recently , further learning models inspired by Q-learning dynamics have been defined ( Hu et al. , 2019 ) . Very little is formally known about the behavior of such models particularly under instability . For zero-sum games with an interior Nash equilibrium , “ the grand escape ” phenomenon follows from Bailey & Piliouras ( 2018 ) ; Cheung ( 2018 ) , who showed that MWU diverges away from Nash . Our results are more general , since their analysis only works with those zero-sum game dynamics , while our technique applies to many other game dynamics as well . Such instability results in zerosum games are undesirable for ML applications such as GANs thus a lot of effort has been invested in developing algorithms that achieve convergence Daskalakis et al . ( 2017 ) ; Mertikopoulos et al . ( 2019 ) ; Daskalakis & Panageas ( 2018 ) ; Gidel et al . 
( 2019 ) ; Mokhtari et al . ( 2020 ) ; Perolat et al . ( 2021 ) . Our research direction here is in some sense orthogonal . We aim to characterize in a more detailed fashion the behavior of learning dynamics despite their instability . The instability of MWU , FTRL and other optimization-driven dynamics in games has attracted a lot of attention in recent years . One formal technique to establish the unpredictability of MWU and other dynamics is based on proving Li-Yorke chaos , which has been established robustly in different families of routing/potential/Cournot games ( Palaiopanos et al . ( 2017 ) ; Chotibut et al . ( 2020 ) ; Bielawski et al . ( 2021 ) ; Cheung et al . ( 2021 ) ) . The techniques in those papers are independent from ours as they require the identification of one dimensional invariant manifolds , and Li-Yorke chaos implies the existence of periodic orbits , which is not possible in our systems . In a very recent series of works , Flokas et al . ( 2020 ) ; Giannou et al . ( 2021 ) established that all ( even partially ) mixed equilibria in all games are not asymptotically stable for any choice for FTRL dynamics both in discrete as well as in the idealized continuous-time limit . In fact , it is possible to construct simple matrix games such that the orbits of the resulting dynamics can be arbitrarily complex Andrade et al . ( 2021 ) . If , inspired by ML applications , we allow for more complex differentiable games it becomes even simpler to establish strong instability results for effectively any type of training algorithms ( Letcher ( 2020 ) ; Balduzzi et al . ( 2020 ) ) . The sweeping nature of these strong negative results showcases the practical interest in uncovering more information about the nature of unstable dynamics of games . Our proposed framework introduces a novel , quantitative methodology for their study . The notion of differential entropy ( DE ) has been used as uncertainty measure in many works across multiple disciplines . 
We note that while the extended definition of DE seems natural , it is not considered as the perfect generalization of SE since it misses some favorable properties , and therefore , as quoted from the popular information theory text of Cover & Thomas ( 2006 ) , “ there is need for some care in using the concept ” .2 Nevertheless , DE remains as an effective measure of uncertainty3 , which is commonly used in physics , economics , control theory , engineering , biology , medical research and beyond . For a general discussion , we refer readers to the text of Cover & Thomas ( 2006 ) . For background information of evolutionary game theory , we recommend Hofbauer & Sigmund ( 1998 ) ; Sandholm ( 2010 ) ; Vega-Redondo ( 1996 ) .
This paper extends an existing line of research on the dynamics of multiplicative weights update and similar algorithms in games. It shows, for zero-sum two-player games and for population games satisfying certain conditions, that the entropy increases linearly over time (up to a limit); this implies that the strategies concentrate near the boundary in the long run, in a sense that the paper formalizes. Compared to previous work, this analysis applies to games that are not zero-sum, specifically population games that satisfy a certain condition on the payoffs.
SP:23a6cf043248b37fc8b792217c58a97697d56290
The Evolution of Uncertainty of Learning in Games
1 INTRODUCTION. A primary goal of ML research is to understand the behaviors of learning algorithms in various settings. One standard approach is to determine, from each initial condition, whether a learning algorithm converges to a local optimum or stable state. Yet, in the context of online learning in games, and more generally in distributed multi-agent learning, it is natural to consider the evolution of a probability distribution over initial conditions instead. In these settings, each agent forms an initial belief on her own, and she typically does not reveal her belief to other agents or to external modelers/analysts. For a modeler to understand the behaviors of the system while being uncertain of the agents' beliefs, one natural approach is to infer how the initializations are distributed using data of past observations. The modeler then uses this inference to predict the likelihoods of different outcomes in the future, either by simulation or by analysis. Random initialization can also arise from random external signals, e.g. weather, as well as from noisy measurements. For readers who want more intuition about the mathematical aspects of such models, see Appendix A. In such cases, it is critical to understand whether the initial probability distribution evolves toward stability, or whether its uncertainty gets amplified. Such analysis provides insight into the learning system's robustness against random initialization and environment perturbations. If a coordinator of the system desires stability but the analysis shows that the uncertainty gets amplified, she ought to coordinate with the agents to make changes, e.g. encourage them to use other online learning algorithms. To analyze how uncertainty evolves, we need a measure of it. A natural choice is entropy.
In his seminal 1948 work, Claude Shannon formulated an axiomatic foundation of information theory and introduced the definition of Shannon entropy (SE): given a discrete random variable with possible outcomes x_1, ..., x_n which occur with probabilities p_1, ..., p_n, its SE is −∑_{i=1}^n p_i log p_i = E[log(1/p_i)]. Entropy is the canonical measure of uncertainty: when p_i = 1 for some i, the distribution is considered certain and its entropy is zero; the uniform distribution is viewed as the most uncertain, and it attains the maximum possible entropy of log n. For continuous random variables with probability density function g, Shannon proposed an extension of SE called the differential entropy (DE), defined as E[log(1/g(x))] = −∫_X g(x) log g(x) dx, where X is the support set of the random variable. We will analyze how DE evolves in multi-agent learning. Our Contributions. In our model, a learning-in-game system starts with a distribution over a set of initial conditions in the cumulative payoff space, a coordinate system which is inherent in the implementations of many online learning algorithms. The initial distribution evolves over time depending on the combination of the agents' learning algorithms and the underlying game. We focus on two popular learning algorithms, Multiplicative Weights Update (MWU) and its optimistic variant (OMWU).¹ The game settings include standard two-player matrix games, and one-population games which are fundamental in biological evolution models. We show that the DE of a broad range of learning-in-game systems increases linearly with time (up to a certain limit), formalizing their increased unpredictability. Such systems include MWU in zero-sum games, OMWU in coordination games, and MWU in certain one-population games which reward intra-species play more than inter-species play. Our results apply to any smooth initial distribution with bounded support.
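As a concrete illustration of the two definitions above, the following minimal Python sketch (our own, not from the paper) computes SE directly from a probability vector and approximates DE by numerically integrating −∫ g(x) log g(x) dx with the midpoint rule:

```python
import math

def shannon_entropy(p):
    """SE of a discrete distribution: -sum_i p_i log p_i, with 0 log 0 := 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def differential_entropy(g, lo, hi, n=100000):
    """DE of a density g supported on [lo, hi]: -integral of g log g (midpoint rule)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        gx = g(lo + (i + 0.5) * h)
        if gx > 0:
            total -= gx * math.log(gx) * h
    return total

# A point mass is certain (zero entropy); the uniform distribution over n
# outcomes is the most uncertain, attaining the maximum log n.
assert shannon_entropy([1.0]) == 0.0
print(shannon_entropy([0.125] * 8))               # log 8 ≈ 2.079
# Unlike SE, DE can be negative; for Uniform[0, 2] it equals log 2 ≈ 0.693,
# but for Uniform[0, 1/2] it is log(1/2) < 0.
print(differential_entropy(lambda x: 0.5, 0, 2))
```

The possibility of negative values is one reason the extension to DE "needs some care", as discussed later.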
At this point one may naturally wonder: what level of uncertainty does a linear increase of DE indicate? What are its implications? To answer these questions, it is best to compare against other simple benchmarks. Consider the following simple learning system: in each round, the payoffs for each action are generated from a uniform distribution over [−1, 1]. In the cumulative payoff space, the distribution at time T converges to a multivariate Gaussian distribution with variance Θ(T), and hence the entropy grows at a rate of O(log T), much slower than linear growth. In Section 4.1, we present an implication of linear DE growth in learning-in-game systems. Briefly speaking, the DE cannot increase substantially for an indefinite amount of time in such systems. Thus, the distribution after a sufficiently long time must concentrate in a region that yields slower DE growth or decline; such a region does not cover any point in the cumulative payoff space that corresponds to an interior Nash equilibrium. We will refer to this phenomenon as "the grand escape". For the implications in information theory, we refer readers to Chapter 8 of Cover & Thomas (2006). The central tool in analyzing the changes of DE is the Jacobian matrix of our multi-agent dynamical system. This is also the key notion in volume analysis, which was used in a recent series of papers to demonstrate Lyapunov chaos in learning-in-game systems (Cheung & Piliouras (2019; 2020); Cheung & Tao (2020)). By showing that the determinant of the Jacobian matrix is strictly above 1 in a large domain of the cumulative payoff space, they showed that the volume (Lebesgue measure) of a small set of initial conditions blows up exponentially to become a set of large diameter, thus demonstrating a general Lyapunov chaos phenomenon in learning-in-game systems. Indeed, the same property of the Jacobian matrix guarantees a linear increase of DE.
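The outward spiral that drives this escape can be reproduced in a few lines of Python. The sketch below (our own illustration, not the paper's code; the small initial offset from the equilibrium is an arbitrary choice) runs MWU with step size 0.01 in Matching Pennies, in the cumulative payoff coordinates, and records how far the induced mixed strategies drift from the interior Nash equilibrium (1/2, 1/2):

```python
import math

def softmax(y):
    m = max(y)
    e = [math.exp(v - m) for v in y]
    s = sum(e)
    return [v / s for v in e]

# Matching Pennies: the row player's payoff matrix; the column player gets -A.
A = [[1.0, -1.0], [-1.0, 1.0]]
eps = 0.01                      # MWU step size

# Cumulative payoff vectors, started slightly off the Nash equilibrium.
y1, y2 = [0.05, 0.0], [0.0, 0.05]
max_dev = 0.0
for _ in range(40000):
    x1, x2 = softmax(y1), softmax(y2)
    max_dev = max(max_dev, abs(x1[0] - 0.5))
    # each action's expected payoff against the opponent's mixed strategy
    u1 = [A[0][0] * x2[0] + A[0][1] * x2[1], A[1][0] * x2[0] + A[1][1] * x2[1]]
    u2 = [-(A[0][0] * x1[0] + A[1][0] * x1[1]), -(A[0][1] * x1[0] + A[1][1] * x1[1])]
    y1 = [y1[0] + eps * u1[0], y1[1] + eps * u1[1]]
    y2 = [y2[0] + eps * u2[0], y2[1] + eps * u2[1]]

# The strategies spiral outward: the largest deviation from the Nash value 1/2
# grows far beyond the tiny initial offset (about 0.0125).
print(max_dev)
```

Linearizing MWU around the interior equilibrium gives a rotation with eigenvalue modulus roughly 1 + eps^2/2 > 1, which is the discrete-time analogue of the Jacobian determinant exceeding 1.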
Additionally, we present the first Jacobian analysis of online learning in one-population games. Finally, our DE analysis is robust against small perturbations. Consequently, our results extend to the setting where the game in each round can be different, as long as the payoffs are perturbations of each other. Such settings capture many games in the real world, where we know the "reference values" of the payoffs, but the exact payoffs in each round differ slightly due to unknown external (random or adversarial) factors. Our model and analysis can be viewed as a strengthening of volume analysis. Volume analysis focuses on whether it is possible to reach a state, whereas in our model we are also concerned with how probable it is to reach a state. More explicitly, a state can be reachable but only with a tiny chance. When studying chaos and volume, such states matter, but less so when studying uncertainty, as their contributions to entropy are low. To illustrate this, we simulate MWU (with step size 0.01) in the Matching Pennies game, and present a few plots in Figure 2. The top-left plot shows the initial set, which is a small square around the unique Nash equilibrium. The top-right plot shows the range of possibility after 40000 steps (a similar figure was first presented by Cheung & Piliouras (2019)). In our model, we assume the initial distribution is uniform over the small square, and plot the heat-map of probability densities after 40000 steps (bottom-left). We observe that states on the boundary of the vortex are more likely to occur, while the densities around the Nash equilibrium decline. The bottom-right plot shows the densities that generate the heat-map. Further Related Work.
Learning models that explicitly study the effects of random initialization have received relatively little attention in game theory, where static equilibrium concepts are typically the preferred solution. (¹As we shall point out in Section 3, all our results about MWU also extend easily to the broad family of Follow-the-Regularized-Leader (FTRL) algorithms, a generalization of MWU. For clarity of presentation, we choose to focus on MWU in this paper.) Lahkar & Seymour (2013) studies online learning under random initialization in population dynamics, where the one-population game is a common payoff model. In population dynamics, there exists a large population of agents who have different initial choices of mixed strategies, modeled as a distribution. The dynamics proceed by randomly pairing up agents in each round to play a game. Lahkar & Seymour (2014) extends this framework to other learning dynamics to describe the evolution of a distribution of mixed strategies in a population playing variants of Rock-Paper-Scissors games, establishing local convergence to the interior equilibrium. Recently, further learning models inspired by Q-learning dynamics have been defined (Hu et al., 2019). Very little is formally known about the behavior of such models, particularly under instability. For zero-sum games with an interior Nash equilibrium, "the grand escape" phenomenon follows from Bailey & Piliouras (2018); Cheung (2018), who showed that MWU diverges away from Nash. Our results are more general, since their analysis only works for those zero-sum game dynamics, while our technique applies to many other game dynamics as well. Such instability results in zero-sum games are undesirable for ML applications such as GANs, so a lot of effort has been invested in developing algorithms that achieve convergence: Daskalakis et al. (2017); Mertikopoulos et al. (2019); Daskalakis & Panageas (2018); Gidel et al.
(2019); Mokhtari et al. (2020); Perolat et al. (2021). Our research direction here is in some sense orthogonal: we aim to characterize in more detail the behavior of learning dynamics despite their instability. The instability of MWU, FTRL and other optimization-driven dynamics in games has attracted a lot of attention in recent years. One formal technique to establish the unpredictability of MWU and other dynamics is based on proving Li-Yorke chaos, which has been established robustly in different families of routing/potential/Cournot games (Palaiopanos et al. (2017); Chotibut et al. (2020); Bielawski et al. (2021); Cheung et al. (2021)). The techniques in those papers are independent from ours, as they require the identification of one-dimensional invariant manifolds, and Li-Yorke chaos implies the existence of periodic orbits, which is not possible in our systems. In a very recent series of works, Flokas et al. (2020); Giannou et al. (2021) established that all (even partially) mixed equilibria in all games are not asymptotically stable for any choice of FTRL dynamics, both in discrete time and in the idealized continuous-time limit. In fact, it is possible to construct simple matrix games such that the orbits of the resulting dynamics are arbitrarily complex (Andrade et al. (2021)). If, inspired by ML applications, we allow for more complex differentiable games, it becomes even simpler to establish strong instability results for effectively any type of training algorithm (Letcher (2020); Balduzzi et al. (2020)). The sweeping nature of these strong negative results showcases the practical interest in uncovering more information about the nature of unstable dynamics in games. Our proposed framework introduces a novel, quantitative methodology for their study. The notion of differential entropy (DE) has been used as an uncertainty measure in many works across multiple disciplines.
We note that while the extended definition of DE seems natural, it is not considered a perfect generalization of SE, since it misses some favorable properties; therefore, as quoted from the popular information theory text of Cover & Thomas (2006), "there is need for some care in using the concept". Nevertheless, DE remains an effective measure of uncertainty, and is commonly used in physics, economics, control theory, engineering, biology, medical research and beyond. For a general discussion, we refer readers to the text of Cover & Thomas (2006). For background information on evolutionary game theory, we recommend Hofbauer & Sigmund (1998); Sandholm (2010); Vega-Redondo (1996).
The paper studies the evolution of uncertainty in multi-agent game dynamics. More specifically, it studies how the probability distribution over the players' cumulative payoffs evolves as players use typical online learning algorithms to play the game. The uncertainty is quantified by the differential entropy (DE) of this distribution, a quantity related to the Jacobian of the game dynamics. The authors show that DE increases linearly with time for a set of games including two-player zero-sum games, coordination games, and population games, confirming the negative convergence results obtained in past works.
SP:23a6cf043248b37fc8b792217c58a97697d56290
Cell2State: Learning Cell State Representations From Barcoded Single-Cell Gene-Expression Transitions
1 Introduction. With the explosive amount of data from single-cell genomics studies, one remaining major challenge is the lack of ability to understand cell transitions at the individual level. Conventional methods for analyzing single-cell dynamics are mostly based on "ensemble" analysis (Kester & van Oudenaarden, 2018; Tanay & Regev, 2017; Bacher & Kendziorski, 2016; Stegle et al., 2015). Such analysis reveals population-level trends, but cannot reveal the behaviors of individual cells. Recent advances in genomic technology have enabled scientists to directly measure cell lineages and connect two cells that are far apart in the time course, as opposed to inferring lineages from snapshots at nearby time points. The gene barcoding approach works by inserting into each cell a DNA sequence, i.e., a barcode, that is randomized across cells so that no two cells bear the same sequence (Woodworth et al., 2017). As the cell divides and differentiates, its descendants can be identified by sequencing the label. This concept has been utilized recently in single-cell analyses of embryonic development (Yao et al., 2017; Wagner et al., 2018), stem cell reprogramming (Biddy et al., 2018), and fate determination in hematopoiesis (Weinreb et al., 2020). This genetic barcoding approach enables tracking of evolutionary trajectories across individual cell lineages. In this paper, we focus on the new type of single-cell data enabled by single-cell gene barcoding technology. The barcode can directly connect parent cells with their descendants over a long time span. Thus, gene barcoding coupled with RNA-seq generates pairs of gene-expression transitions {(X, X′)}, where each (X, X′) consists of the gene-expression profiles of a parent cell at an early time point and one of its descendants at the later time point. One may view the gene-expression profile X as the raw state of a cell, which is a high-dimensional vector.
Such state-transition data makes it possible to learn a cell's law of state transition. In other words, the new data type lets us decode the single-cell transition law in a way similar to system identification for dynamical systems. We wish to learn mathematical abstractions of cell gene-expression states, i.e., a map Ψ from a gene-expression profile X to a vector Ψ(X) of lower dimension. A good cell-state abstraction should be low-dimensional and predictive, compressing the predictive signals from gene-expression levels into a compact vector. In other words, we hope to maximize the mutual information I(Ψ(X); X′) between the embedded parent cell state and its descendant. Ideally, we hope to achieve a nearly lossless encoding of states, i.e., I(Ψ(X); X′) ≈ I(X; X′). To learn such a Ψ from noisy high-dimensional gene-expression data, we build on ideas from dynamical systems theory, kernel machine learning, and low-rank optimization. In the area of molecular dynamics, Schütte et al. (2011) showed that the leading spectrum of the transfer operator can be used to identify coresets for faster simulation. In reinforcement learning, computers need to figure out state abstractions of unknown transition systems in order to learn to control quickly (Sutton et al., 1998). Sun et al. (2019) developed a state embedding method to analyze game trajectories and significantly reduce the state dimension of one-player Atari games. See Appendix A for more discussion of related works. A summary of our work: • Building on spectral compression ideas from dynamical systems, we develop a cell-to-state (cell2state) representation learning method for analyzing the gene-expression transition data made available by single-cell barcoding technology. The cell2state algorithm is trained using gene-expression transition pairs; it finds a mapping that approximates and embeds the transition distributions in a low-dimensional space.
The embedding map is learned by first "lifting" the data's dimension by random feature projection, then estimating a large matrix embedding of the transition distributions, and finally "compressing" the dimension by low-rank optimization. • We provide an information-theoretic analysis of the learnt cell2state embedding Ψ̂. In particular, we show that, upon appropriate quantization, the embedding map can be used to encode raw gene-expression data into a small number of bits. We show that this encoding can be nearly lossless, and we establish sample complexity bounds for preserving the mutual information between parents and descendants up to 1 bit. • We apply cell2state to a published single-cell barcoded RNA-seq dataset for studying stem cells (Biddy et al., 2018). In this analysis, we used cell2state to map early-day cells to state vectors of dimension ≤ 8. Via the cell2state map, the cell populations demonstrated sharp polytope structures, where distinct vertices provide early signals that predict diverse dynamics and cell fates. To evaluate the learned cell state vectors, we test them in three downstream tasks: (i) finding dynamically stable cell clusters; (ii) early prediction of cell dynamics, such as cell descendants' activities or fates, based on low-dimensional state vectors; (iii) subpopulation analysis to identify marker genes that signal distinct cell dynamics. Across these tasks, we observe substantially improved performance using the learned cell states, as compared with baselines that use either the raw data or features that are not dynamics-aware. In particular, our results show that cell2state achieves a similar or better level of prediction accuracy using ≤ 7 dimensions, compared to neural networks that use raw gene expressions as input (up to 5000 dimensions).
• Further, we identify and examine subpopulations of cells that have the most representative low-dimensional states (in other words, subsets of cells that are close to likely meta-states, under the assumption of a latent-state model). These subpopulations are used to identify biologically relevant marker genes. The marker genes identified by cell2state are known to relate to stem cell reprogramming and epigenetic regulation. 2 Markov Branching Process Model for Single-Cell Dynamics. We model the time-course dynamics of gene expressions as a branching diffusion process (Edwards (1970); see Figure 1). Let X_t denote the gene-expression profile of a cell at a time point t, which is a high-dimensional vector. Each X_t has a random number of descendants X^(i)_{t+1}, i = 1, ..., N, with independent and identical distributions. Definition 1. Define p(X_{t+1}|X_t) as the transition function for the gene-expression profile X_t to evolve to a collection of descendants {X^(i)_{t+1}}_{i=1}^N in a fixed amount of time, i.e., for any measurable set S, p(S|X_t) = E[∑_{i=1}^N 1{X^(i)_{t+1} ∈ S} | X_t], where N is also a random variable. Note that p is not necessarily a probability density function because a cell can have multiple descendants. If the growth rate is such that E[N|X_t] = ∫ p(y|x) dy > 1, we say the cell is actively growing. Definition 2. Let P be the transition operator of the branching diffusion process with transition function p, given by Pf(X) = E[∑_{i=1}^N f(X^(i)_{t+1}) | X_t = X]. In single-cell analysis, the transition function p and operator P are infinite-dimensional, and thus estimating them is largely intractable from finite noisy data. Like many other dynamical systems, cell dynamics is often driven by a small set of marker genes, and thus it may admit an intrinsic low-dimensional structure. We make the following assumption: Assumption 1. Let H be a space of functions.
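To make Definitions 1 and 2 concrete, here is a toy Monte Carlo sketch in which Pf(x) is evaluated by sampling offspring. All modeling choices (a scalar cell state, offspring counts in {1, 2}, Gaussian drift) are illustrative assumptions, not from the paper:

```python
import random

random.seed(0)

def offspring(x):
    """One branching step: N ~ Uniform{1, 2} descendants (E[N] = 1.5 > 1,
    i.e. actively growing), each a noisy copy of the parent state."""
    n = random.choice([1, 2])
    return [x + random.gauss(0.0, 0.1) for _ in range(n)]

def apply_P(f, x, trials=20000):
    """Monte Carlo estimate of Pf(x) = E[ sum_i f(X_i') | X = x ]."""
    return sum(sum(f(c) for c in offspring(x)) for _ in range(trials)) / trials

# With f identically 1, Pf(x) recovers the expected number of descendants
# E[N | x], so the estimate should be close to 1.5:
growth = apply_P(lambda c: 1.0, 0.0)
print(growth)
```

Note that with E[N] > 1 the operator P is not a Markov transition kernel: it does not preserve total mass, which is exactly why p in Definition 1 need not be a probability density.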
There exists an r-dimensional embedding map Φ∗ with components in H such that Pf ∈ Span(Φ∗) for all f ∈ H. Here, H will be specified later. The low-rank structure of Assumption 1 naturally exists in dynamical processes that admit latent states. Suppose that each x can be represented as a mixture over meta-states {z} such that p(x′|x) = ∑_z p(x′|z) p_Z(z|x). This is a common latent-state model for stochastic processes; see Figure 2 for an illustration. In the single-cell context, a meta-state is often referred to as a "cell type", which has a distinct "pathway" (i.e., future dynamics). The "cell type" is defined as a function of the gene-expression profile, but the function is unknown and is to be learnt. In this latent-state model, let Φ∗(x) = p_Z(·|x). Then we can verify that Assumption 1 holds. In this case, finding the embedding map Φ∗ would make it possible to recover the set of meta-states {z} and the aggregation distributions p_Z(·|x). 3 Mapping Gene Expressions To Low-Dimensional Cell States. Recall that our goal is to find mathematical abstractions of the high-dimensional expression profiles {X_t}. We will estimate an embedding map from gene-expression profiles to a low-dimensional vector space: Ψ : x ∈ R^d ↦ Ψ(x) ∈ R^r. Ideally, we hope Ψ(X_t) to be low-dimensional while still containing as much information about X_{t+1} as possible. 3.1 Embedding the transition operator into a functional space. Consider a kernel mean embedding of the transition operator P: Q = Π_H P Π_H, where the projections are with respect to appropriate norms. By Assumption 1, we can verify that rank(Q) ≤ r. We seek to estimate Q from cell transition data, perform singular value truncation, and then find the low-dimensional embedding map by transforming the left singular functions of Q. To guarantee the function space H is sufficiently expressive, we adopt a kernel composition approach for "lifting" the dimensions.
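The rank consequence of this latent-state model is easy to verify numerically. In the sketch below (state-space size, meta-state count, and the random mixture weights are all arbitrary illustrative choices), a discretized transition matrix built as p(x′|x) = ∑_z p(x′|z) p_Z(z|x) with r = 3 meta-states has rank at most 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, r = 50, 3

# p_Z(z|x): each raw state is a mixture over r meta-states ("cell types").
H = rng.random((n_states, r))
H /= H.sum(axis=1, keepdims=True)

# p(x'|z): each meta-state has its own distribution over next states.
W = rng.random((r, n_states))
W /= W.sum(axis=1, keepdims=True)

P = H @ W   # p(x'|x): a 50x50 row-stochastic transition matrix
print(np.linalg.matrix_rank(P))   # at most 3, despite the 50-dim raw state
```

The factorization P = H W is exactly the structure that lets a rank-r singular value truncation recover the meta-state information without ever observing z directly.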
First, we construct an initial kernel K0 that best fits the dataset's topology and preserves neighborhood relations. To find such a K0, we can leverage existing dimension-reduction methods for single-cell data analysis, such as PCA and manifold-based and graph-based methods (see Kester & van Oudenaarden (2018); Tanay & Regev (2017); Bacher & Kendziorski (2016); Stegle et al. (2015) for reviews). Then, we construct the kernel function K = K0 ∘ K1 by taking the composition of K0 with another kernel function K1 (e.g., the Gaussian kernel); this step further lifts the problem's dimension and improves the function space's expressiveness. One can also take compositions of multiple kernels to mimic a multi-layer neural network (Cho & Saul, 2009). 3.2 Low-rank compression of cell states via random features. We propose a kernelized state embedding method based on random feature projection for computing an estimator Ψ̂ from transition data {(X_i, X′_i)}_{i=1}^N, which can be obtained from cell trajectories. For analyzing single-cell sequencing data and embedding transition distributions, we choose the function space H with the kernel function K tailored to the data's geometry. Suppose we have chosen a kernel function K (for example, a Gaussian kernel). We perform nonparametric estimation of Ψ∗ by generating a large number of random features to approximate the kernel space in large finite dimensions. Then, we downsize the estimator by using a spectral decomposition. In the case where each parent cell has a single descendant, the cell2state method works as follows (informally): (1) Generate random Fourier features φ(·) = (φ_1(·), ..., φ_D(·))^⊤ by randomized decomposition of the kernel function K to approximately span H (Rahimi & Recht, 2008).
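A minimal sketch of the composition idea, using PCA scores as the K0 feature map and a Gaussian K1 on top of them (the data, the number of PCA components, and the median-norm bandwidth heuristic are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))        # 100 cells, 50 genes (synthetic)

# K0: linear kernel on the top-10 PCA scores, preserving the data's geometry.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                    # K0's feature map

# K1: a Gaussian kernel applied on top of K0's feature space.
def gaussian_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

sigma = np.median(np.linalg.norm(Z, axis=1))   # a simple bandwidth heuristic
K = gaussian_kernel(Z, Z, sigma)      # the composed kernel K = K0 ∘ K1
print(K.shape)                        # (100, 100); K is symmetric, diag = 1
```

Composing a geometry-aware K0 with a universal K1 keeps the neighborhood structure of the reduced representation while making the induced function space much richer.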
(2) Estimate a finite matrix embedding of the scaled conditional probability distribution p(x′|x)/√(p(x′)) by P̂ = Σ_0^{−1/2} ( (1/N) ∑_{i=1}^N φ(X_i) φ(X′_i)^⊤ ) Σ_1^{−1/2}, where Σ_0, Σ_1 are the covariances at the two time points. (3) Let Ψ̂(·) = (Û_r Λ̂_r)^⊤ Σ_0^{−1} φ(·), where Û_r, Λ̂_r come from the top-r truncation of the SVD of P̂. See Algorithm 1 in the Appendix for the full description of the cell2state algorithm, which also handles the case where cells have multiple descendants. Given a cell's gene-expression profile x, the vector Ψ̂(x) can be viewed as a low-dimensional mean embedding of the transition function p(·|x); thus it should be predictive of this cell's future dynamics. Runtime Complexity of Algorithm 1. The algorithm uses random features and singular value truncation, both designed for maximal computational efficiency. The overall runtime for training is at most O(n + nD² + D³), where n is the number of cells and D is the number of Fourier features (≤ n). This is the same complexity as computing covariances and PCA in the random-feature space. After training, querying the embedding map Ψ̂ takes only O(rD) time. In our experiments, Algorithm 1 runs in seconds, while training an MLP (a deep neural network) for cell fate prediction takes 10-15 minutes.
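Steps (1)-(3) can be sketched numerically as follows. This is our own minimal illustration on synthetic data, not Algorithm 1 itself: the dimensions, the Gaussian-kernel bandwidth implicit in the Fourier weights, the small ridge term added for numerical stability, and the synthetic transition model are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D, r = 500, 10, 64, 4        # cells, genes, Fourier features, state dim

X = rng.normal(size=(n, d))                    # parent profiles
Xp = 0.8 * X + 0.2 * rng.normal(size=(n, d))   # descendant profiles

# (1) random Fourier features approximating a Gaussian kernel (Rahimi & Recht)
W = rng.normal(size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda Z: np.sqrt(2.0 / D) * np.cos(Z @ W + b)

F0, F1 = phi(X), phi(Xp)
reg = 1e-6 * np.eye(D)                         # small ridge for stability
S0 = F0.T @ F0 / n + reg                       # covariance at time t
S1 = F1.T @ F1 / n + reg                       # covariance at time t+1

def inv_sqrt(S):
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

# (2) finite matrix embedding of the scaled transition distribution
P_hat = inv_sqrt(S0) @ (F0.T @ F1 / n) @ inv_sqrt(S1)

# (3) rank-r truncation of the SVD and the embedding map Psi
U, s, _ = np.linalg.svd(P_hat)
Psi = lambda x: (U[:, :r] * s[:r]).T @ np.linalg.inv(S0) @ phi(x[None, :])[0]

print(Psi(X[0]).shape)   # (4,): an r-dimensional cell state vector
```

Everything here reduces to covariances, one eigendecomposition, and one SVD in the D-dimensional feature space, which is what gives the O(n + nD² + D³) training cost quoted above.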
The authors introduce cell2state, an algorithm that combines genetic barcoding with single-cell sequencing data to model explicit state transitions of cell dynamics over time. Single-cell gene expression profiles are mapped to low-dimensional state vectors that are predictive of cell dynamics. Cell2state is evaluated on a barcoded stem-cell dataset (Biddy et al., 2018), and simulation studies are also presented. The model demonstrates better results for cell state prediction and for finding dynamically stable clusters, and it reveals potential latent meta-states of the underlying cellular evolution process.
SP:a637f040207332aff43cb9d801e4a879ba1dc701
Cell2State: Learning Cell State Representations From Barcoded Single-Cell Gene-Expression Transitions
1 Introduction. With the explosive amount of data from single-cell genomics studies, one remaining major challenge is the lack of ability to understand cell transitions at the individual level. Conventional methods for analyzing single-cell dynamics are mostly based on “ensemble” analysis (Kester & van Oudenaarden, 2018; Tanay & Regev, 2017; Bacher & Kendziorski, 2016; Stegle et al., 2015). Such analysis reveals population-level trends but cannot reveal the behaviors of individual cells. Recent advances in genomic technology have enabled scientists to directly measure cell lineages and connect two cells that are far apart in the time course, as opposed to inferring lineages from snapshots at nearby time points. The gene barcoding approach works by inserting into each cell a DNA sequence, i.e., a barcode, that randomizes across cells so that no two cells bear the same sequence (Woodworth et al., 2017). As the cell divides and differentiates, its descendants can be identified by sequencing the label. This concept has recently been utilized in single-cell analysis of embryonic development (Yao et al., 2017; Wagner et al., 2018), stem cell reprogramming (Biddy et al., 2018), and fate determination in hematopoiesis (Weinreb et al., 2020). This genetic barcoding approach enables tracking of evolutionary trajectories across individual cell lineages. In this paper, we focus on the new type of single-cell data enabled by single-cell gene barcoding technology. The barcode can directly connect parent cells with their descendants over a long time span. Thus, gene barcoding coupled with RNA-seq generates pairs of gene-expression transitions {(X, X')}, where each (X, X') consists of the gene expression profiles of a parent cell at an early time point and one of its descendants at the later time point. One may view the gene-expression profile X as the raw state of a cell, which is a high-dimensional vector.
Such state-transition data makes it possible to learn a cell's law of state transition. In other words, the new data type lets us decode the single-cell transition law in a way similar to system identification for dynamical systems. We wish to learn mathematical abstractions of cell gene-expression states, i.e., a map Ψ from a gene-expression profile X to a vector Ψ(X) of lower dimension. A good cell state abstraction should be low-dimensional and predictive, compressing the predictive signals from gene-expression levels into a compact vector. In other words, we hope to maximize the mutual information I(Ψ(X), X') between the embedded parent cell state and its descendant. Ideally, we hope to achieve a nearly lossless encoding of states, i.e., I(Ψ(X), X') ≈ I(X, X'). To learn such a Ψ from noisy high-dimensional gene-expression data, we build on ideas from dynamical systems theory, kernel machine learning, and low-rank optimization. In the area of molecular dynamics, Schütte et al. (2011) showed that the leading spectrum of the transfer operator can be used to identify coresets for faster simulation. In reinforcement learning, computers need to figure out state abstractions of unknown transition systems in order to learn to control quickly (Sutton et al., 1998). Sun et al. (2019) developed a state embedding method that analyzes game trajectories to significantly reduce the state dimension of one-player Atari games. See Appendix A for more discussion of related works. A summary of our work: • Building on the spectral compression ideas from dynamical systems, we develop a cell-to-state (cell2state) representation learning method for analyzing gene-expression transition data made available by the single-cell barcoding technology. The cell2state algorithm is trained using gene-expression transition pairs; it finds a mapping that approximates and embeds the transition distributions in a low-dimensional space.
The embedding map is learned by first “lifting” the data's dimension by random feature projection, then estimating a large matrix embedding of the transition distributions, and finally “compressing” the dimension by low-rank optimization. • We provide an information-theoretic analysis of the learnt cell2state embedding Ψ̂. In particular, we show that, upon appropriate quantization, the embedding map can be used to encode raw gene-expression data into a small number of bits. We show that this encoding can be nearly lossless, and we establish sample complexity bounds for preserving the mutual information between parents and descendants up to 1 bit. • We apply cell2state to a published single-cell barcoded RNA-seq dataset for studying stem cells (Biddy et al., 2018). In this analysis, we used cell2state to map early-day cells to state vectors of dimension ≤ 8. Via the cell2state map, the cell populations demonstrated sharp polytope structures, where distinct vertices provide early signals that predict diverse dynamics and cell fates. To evaluate the learned cell state vectors, we test them in three downstream tasks: (i) finding dynamically stable cell clusters; (ii) early prediction of cell dynamics, such as cell descendants' activities or fates, based on low-dimensional state vectors; (iii) subpopulation analysis to identify marker genes that signal distinct cell dynamics. Across these tasks, we observe substantially improved performance using the learned cell states, compared with baselines that use either the raw data or features that are not dynamics-aware. In particular, our results show that cell2state achieves a similar or better level of prediction accuracy using ≤ 7 dimensions, compared to neural networks that use raw gene expressions as input (up to 5000 dimensions).
• Further, we identify and examine subpopulations of cells that have the most representative low-dimensional states (in other words, subsets of cells that are close to likely meta-states, under the assumption of a latent-state model). These subpopulations are used to identify biologically relevant marker genes. The marker genes identified by cell2state are known to relate to stem cell reprogramming and epigenetic regulation.

2 Markov Branching Process Model for Single-Cell Dynamics. We model the time-course dynamics of gene expressions as a branching diffusion process (Edwards (1970); see Figure 1). Let X_t denote the gene-expression profile of a cell at a time point t, which is a high-dimensional vector. Each X_t has a random number of descendants X^{(i)}_{t+1}, i = 1, ..., N, with independent and identical distributions.

Definition 1. Define p(X_{t+1} | X_t) as the transition function for the gene expression profile X_t to evolve to a collection of descendants {X^{(i)}_{t+1}}_{i=1}^N in a fixed amount of time, i.e., for any measurable set S, p(S | X_t) = E[ ∑_{i=1}^N 1{X^{(i)}_{t+1} ∈ S} | X_t ], where N is also a random variable. Note that p is not necessarily a probability density function because a cell could have multiple descendants. If the growth rate is such that E[N | X_t] = ∫ p(y | x) dy > 1, we say the cell is actively growing.

Definition 2. Let P be the transition operator of the branching diffusion process with transition function p, given by Pf(X) = E[ ∑_{i=1}^N f(X^{(i)}_{t+1}) | X_t = X ]. In single-cell analysis, the transition function p and the operator P are infinite-dimensional; thus estimating them is largely intractable from finite noisy data. Like many other dynamical systems, cell dynamics is often driven by a small set of marker genes, and thus it may admit an intrinsic low-dimensional structure. We make the following assumption:

Assumption 1. Let H be a space of functions.
There exists an r-dimensional embedding map Φ* ⊂ H such that Pf ∈ Span(Φ*) for all f ∈ H. Here, H will be specified later. The low-rank structure of Assumption 1 naturally exists in dynamical processes that admit latent states. Suppose that each x can be represented as a mixture over meta-states {z} such that p(x'|x) = ∑_z p(x'|z) p_Z(z|x). This is a common latent-state model for stochastic processes; see Figure 2 for an illustration. In the single-cell context, a meta-state is often referred to as a “cell type”, which has a distinct “pathway” (i.e., future dynamics). The “cell type” is defined as a function of the gene-expression profile, but the function is unknown and to be learnt. In this latent-state model, let Φ*(x) = p_Z(·|x); then we can verify that Assumption 1 holds. In this case, finding the embedding map Φ* would make it possible to recover the set of meta-states {z} and the aggregation distributions p_Z(·|x).

3 Mapping Gene Expressions To Low-Dimensional Cell States. Recall that our goal is to find mathematical abstractions of the high-dimensional expression profiles {X_t}. We will estimate an embedding map from gene-expression profiles to a low-dimensional vector space: Ψ : x ∈ R^d ↦ Ψ(x) ∈ R^r. Ideally, we hope Ψ(X_t) to be low-dimensional while still containing as much information about X_{t+1} as possible.

3.1 Embedding the transition operator into a functional space. Consider a kernel mean embedding of the transition operator P: Q = Π_H P Π_H, where the projections are with respect to appropriate norms. By assumption, we can verify that rank(Q) ≤ r. We seek to estimate Q from cell transition data, perform singular value truncation, and then find the low-dimensional embedding map by transforming the left singular functions of Q. To guarantee that the function space H is sufficiently expressive, we adopt a kernel composition approach for “lifting” the dimensions.
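Two quick numpy illustrations of the model in Section 2: a toy branching step realizing Definition 1's growth rate, and the discrete latent-state mixture showing why Assumption 1's low-rank structure holds. The Poisson offspring count, Gaussian diffusion, Dirichlet distributions, and all sizes are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Definition 1: one step of a toy branching diffusion ------------------
def branch_step(x, mean_offspring=1.5, noise=0.1):
    """A cell at state x leaves a Poisson-distributed number of descendants,
    each a noisy copy of the parent."""
    n = rng.poisson(mean_offspring)
    return [x + noise * rng.normal(size=x.shape) for _ in range(n)]

x0 = np.zeros(3)
avg_n = np.mean([len(branch_step(x0)) for _ in range(2000)])
# avg_n is a Monte-Carlo estimate of E[N | x]; > 1 means actively growing

# --- Assumption 1: latent meta-states force low rank ----------------------
n_states, r = 50, 3                             # observed states, meta-states
A = rng.dirichlet(np.ones(r), size=n_states)    # p_Z(z|x), rows sum to 1
B = rng.dirichlet(np.ones(n_states), size=r)    # p(x'|z),  rows sum to 1
P = A @ B                                       # p(x'|x) = sum_z p(x'|z) p_Z(z|x)
low_rank = np.linalg.matrix_rank(P) <= r        # rank bounded by #meta-states
```

The second half is the discrete analogue of the mixture argument above: the transition matrix factors through the r meta-states, so its rank cannot exceed r.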
First, we construct an initial kernel K0 that best fits the dataset's topology and preserves neighborhood relations. To find such a K0, we can leverage existing dimension reduction methods for single-cell data analysis, such as PCA, manifold-based, and graph-based methods (see Kester & van Oudenaarden (2018); Tanay & Regev (2017); Bacher & Kendziorski (2016); Stegle et al. (2015) for reviews). Then, we construct the kernel function K = K0 ∘ K1 by taking the composition between K0 and another kernel function K1 (e.g., the Gaussian kernel); this step would further lift the problem's dimension and improve the function space's expressibility. One can also take compositions of multiple kernels to mimic a multi-layer neural network (Cho & Saul, 2009).

3.2 Low-rank compression of cell states via random features. We propose a kernelized state embedding method based on random feature projection for computing an estimator Ψ̂ from transition data {(X_i, X'_i)}_{i=1}^N, which can be obtained from cell trajectories. For analyzing single-cell sequencing data and embedding transition distributions, we will choose the function space H with the kernel function K tailored to the data's geometry. Suppose we have chosen a kernel function K (for example, a Gaussian kernel). We perform nonparametric estimation of Ψ* by generating a large number of random features to approximate the kernel space in large finite dimensions. Then, we downsize the estimator by using spectral decomposition. In the case where each parent cell has a single descendant, the cell2state method works by (informally): (1) Generate random Fourier functions φ(·) = (φ_1(·), ..., φ_d(·))^⊤ by randomized decomposition of the kernel function K to approximately span H (Rahimi & Recht, 2008).
(2) Estimate a finite matrix embedding of the scaled conditional probability distribution (1/√p(x')) p(x'|x) by P̂ = Σ_0^{-1/2} ( (1/N) ∑_{i=1}^N φ(X_i) φ(X'_i)^⊤ ) Σ_1^{-1/2}, where Σ_0, Σ_1 are the covariances at the two time points. (3) Let Ψ̂(·) = (Û_r Λ̂_r)^⊤ Σ_0^{-1} Φ(·), where Û_r, Λ̂_r come from the rank-r truncation of the SVD of P̂. See Algorithm 1 in the Appendix for the full description of the cell2state algorithm, which also handles the case where cells have multiple descendants. Given a cell's gene expression profile x, the vector Ψ̂(x) can be viewed as a low-dimensional mean embedding of the transition function p(·|x); thus it should be predictive of this cell's future dynamics.

Runtime Complexity of Algorithm 1. The algorithm uses random features and singular value truncation, both designed for maximal computational efficiency. The overall runtime for training is at most O(n + nD² + D³), where n is the number of cells and D (≤ n) is the number of Fourier features. This is the same complexity as computing covariances and PCA in the random feature space. After training, querying the embedding map Ψ̂ takes only O(rD) time. In our experiments, Algorithm 1 runs in seconds, while training an MLP (a deep neural network) for cell fate prediction takes 10-15 minutes.
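The kernel-composition lifting of Section 3.1 can be sketched as follows: a PCA projection plays the role of K0, a Gaussian kernel plays K1, and random Fourier features approximate the composed kernel. The data sizes, the top-10 projection, and the bandwidth are illustrative assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))              # toy expression profiles

# K0: project onto the top-10 PCA coordinates (stand-in for any
# neighborhood-preserving single-cell embedding)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T
Z /= Z.std()                                # normalize scale for the bandwidth

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

K = gaussian_kernel(Z, Z)                   # composed kernel K = K1 ∘ K0

# random Fourier features of K1 on the lifted coordinates approximate K
D = 4000
W = rng.normal(size=(10, D))                # frequencies for sigma = 1
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
F = np.sqrt(2.0 / D) * np.cos(Z @ W + b)
err = np.abs(K - F @ F.T).mean()            # shrinks as D grows
```

The approximation error decays roughly like 1/√D, which is why the algorithm can safely work in the D-dimensional random-feature space instead of the infinite-dimensional H.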
The authors proposed cell2state, which embeds barcoded scRNA-seq trajectories into a low-dimensional representation. They provided a theoretical analysis of the embedding learnt by cell2state and demonstrated that it is almost lossless. They applied this embedding framework to one barcoded scRNA-seq dataset (Biddy et al., 2018) and demonstrated that the learnt embeddings clearly distinguish different cell states. Furthermore, the learnt embeddings substantially improved various downstream tasks beyond identifying cell subpopulations.
SP:a637f040207332aff43cb9d801e4a879ba1dc701
Text-Driven Image Manipulation via Semantic-Aware Knowledge Transfer
1 INTRODUCTION. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have revolutionized a variety of fields due to their powerful ability to generate realistic and meaningful outputs. Recent works (Jahanian et al., 2019; He et al., 2019; Goetschalckx et al., 2019) have shown that deep generative models can capture real-world data distributions and encode them into a semantically rich latent space. Inspired by this, many tasks have drawn attention to latent space manipulation, including image enhancement (Ledig et al., 2017; Yang et al., 2021b), editing (Shen et al., 2020; Härkönen et al., 2020; Patashnik et al., 2021), and discriminative tasks (Nitzan et al., 2021; Xu et al., 2021). With the tremendous success of deep generative models, facial attribute editing (Yeh et al., 2017; Liu et al., 2019; He et al., 2020; Dorta et al., 2020), which aims to edit specific attributes of a target facial image, has become topical. As a special case of facial attribute editing, facial attribute transfer (Xiao et al., 2018; Lin et al., 2018; Yin et al., 2019; Choi et al., 2018; 2020) uses knowledge from a reference image as a condition to edit the corresponding attribute of the target image. To ensure that the manipulated facial image meets the requirements and interests, the facial attribute transfer task tackles two challenges simultaneously: (1) editing relevance: the relevant attribute should be edited precisely according to the given condition; and (2) keeping irrelevance: the irrelevant parts (e.g., identity information, background, or other attributes) should not be modified during attribute transfer. Due to the strong entanglement of attributes, meeting both requirements is an intractable task. For example, without full disentanglement, transferring the “smile” attribute to the target facial image may cause that another irrelevant but coupled attribute, e.g.
the “cheek color” attribute, would be changed during editing. In view of the above issues, a variety of recent methods explore attribute disentanglement in two ways. Some methods (He et al., 2020; Kwak et al., 2020) resort to spatial attention detection, which disentangles an attribute by spatially searching for its specific support region and manipulating the image only in that confined area. Obviously, these methods totally ignore facial details beyond the support region when the edited attribute is global, such as “smile” or “age”. Meanwhile, other methods (Shen et al., 2020; Yang et al., 2021a; Patashnik et al., 2021) focus on latent space factorization through a pre-trained GAN. These methods employ high-level semantic information as guidance to manipulate images in latent spaces, which is more suitable for handling both global and local attribute editing. However, owing to over-coupled semantic features, these methods struggle to manipulate a specific attribute without powerful supervision. To overcome the problems mentioned above, we explore the latent semantic space for disentangled attribute editing and apply the discovered manipulation method to the facial attribute transfer task. For the attribute editing task, in order to disentangle and edit the attribute specified by the text prompt, we design the directional latent mapping network, which leverages a semantic direction consistency (SDC) loss to constrain the manipulation in the CLIP-space (Radford et al., 2021). The key idea of the SDC loss is to employ the change direction of semantic features to estimate the latent manipulation. Furthermore, in order to apply this effective editing method to the facial attribute transfer task, we propose a novel semantic-level facial attribute transfer method driven by text prompts, named the semantic directional decomposition network (SDD-Net).
The SDD-Net extracts and transfers the specific attribute without redundant information through attribute-manipulated semantic directional decomposition. Our contributions are summarized as follows: • We propose a novel method, namely the directional latent mapping network, for facial attribute editing, which utilizes semantic direction consistency regularization to ensure attribute disentanglement. • To further take advantage of the semantic direction constraint, we propose a text-driven semantic directional decomposition network (SDD-Net) for semantic-level attribute transfer, which transfers knowledge from a reference image to a target image. • Extensive experiments on the CelebA-HQ (Karras et al., 2017) dataset show that our method achieves significant improvements over state-of-the-art approaches.

2 RELATED WORK. Latent Space Manipulation. Recent studies (Bau et al., 2020; Goetschalckx et al., 2019; Shen et al., 2020) have shown that numerous GAN models encode rich crucial information in intermediate latent spaces, such as W, W+ (Abdal et al., 2019), or StyleSpace S (Wu et al., 2021). By learning to modify the intermediate latent code, generative models can transfer attributes from one face to another (Xiao et al., 2018; Choi et al., 2020). To find a latent code that allows for meaningful manipulation, some methods try to learn an effective encoder network that inverts a real image into the latent space: the encoder4editing (e4e) method (Tov et al., 2021) presents an encoder specifically designed to balance the distortion-editability and distortion-perception tradeoffs within the StyleGAN latent space; StyleSpace (Wu et al., 2021) proposes a space of channelwise style parameters that disentangles attributes by controlling attribute-related style channels; the pixel2style2pixel (pSp) (Richardson et al.
, 2021) method utilizes a novel encoder architecture that inverts a real image into W+ space without optimization. Other methods mainly focus on finding latent code modifications that traverse the latent space to produce the desired manipulation: AttGAN (He et al., 2019) applies an attribute classification constraint to model the relation between attributes and the latent representation; InterfaceGAN (Shen et al., 2020) decouples some entangled semantic features with subspace projection; GANSpace (Härkönen et al., 2020) leverages principal component analysis to identify the main directions in latent space; L2M-GAN (Yang et al., 2021a) imposes an orthogonality constraint to ensure disentanglement in latent space; StyleCLIP (Patashnik et al., 2021) leverages CLIP models to guide attribute manipulation by latent semantic matching. Different from the methods mentioned above, our method aligns the guidance direction, which is derived directly from the text prompt, with the change direction of semantic features extracted by CLIP to guide the manipulation in the latent space. By constraining the consistency of these directions, our method can achieve disentangled attribute editing.

Figure 1: Overview of our idea for the directional latent mapping network. By enforcing the change direction of the semantic feature to align with the desired direction in CLIP-space, our method can achieve impressive disentangled image manipulation.

Semantic-level Attribute Transfer. Semantic-level attribute transfer is a more challenging facial attribute editing task. Recent studies have attempted more detailed approaches for facial attribute transfer: StarGAN (Choi et al., 2018) applies cycle consistency to preserve identity and uses a classification loss to transfer between different domains. In addition, StarGANv2 (Choi et al.
, 2020) can also synthesize reference-guided images by leveraging multi-domain translation. Some works specialize in specific attributes: ExprGAN (Ding et al., 2018) proposes a model that explicitly learns disentangled identity and expression representations for facial expression transfer. Besides, ERGAN (Hu et al., 2020) proposes a dual learning scheme that simultaneously learns two inverse manipulations for attribute transfer. Different from these works, our proposed method focuses not only on one attribute but on various global or local attributes. Given an explicit text prompt, our method can automatically extract the specific attribute from the reference and transfer that knowledge to the target image.

3 METHODOLOGY. In the following section, we first provide the preliminaries and problem formulation. Then, we introduce our text-driven directional latent mapping network for disentangled facial attribute editing. Finally, we describe the text-driven semantic directional decomposition network (SDD-Net) for semantic-aware facial attribute transfer.

3.1 PRELIMINARIES. StyleGAN. The StyleGAN (Karras et al., 2019; 2020) generator consists of two main components: a mapping network and a synthesis network. The former translates a latent code z to a latent code w in the semantically rich latent space W, and the latter uses w to synthesize the final image through different layers. Due to the rich information inside, the latent code w can control multi-granularity semantic features of the synthetic image, which we utilize for effective semantic-level facial attribute transfer. In this paper, we leverage the pre-trained synthesis network as our image generator.

StyleCLIP. The StyleCLIP (Patashnik et al., 2021) method first combined StyleGAN and CLIP as a strong tool for text-driven image editing. The key idea of StyleCLIP is to leverage CLIP as latent manipulation guidance.
By mapping multi-modal inputs to the CLIP-space, StyleCLIP ensures that the synthetic image matches the text prompt at the semantic level. The objective is given by: L_styleclip = D_CLIP(G(w), text), (1) where D_CLIP is the cosine distance metric in CLIP-space and G is the pre-trained StyleGAN generator.

3.2 PROBLEM FORMULATION. Our method leverages two latent spaces, the W+ space and the CLIP-space, to transfer semantic-level facial attributes. For clarity, let X and Y denote the set of images and the set of semantic features in CLIP-space, respectively. An image x ∈ X is inverted into the corresponding latent code w ∈ W+. t denotes the input text prompt, with corresponding semantic feature y_t. Meanwhile, the parameters of the pre-trained StyleGANv2 generator G are frozen during training. Given the text prompt t, the goal of the facial attribute transfer model is to train a latent mapping network that translates w\y to ŵ\ŷ and synthesizes an edited image x̂ that meets the specified requirements. Our basic editing model can be formally defined as x̂ = G(M(x, t)), where M is the manipulation network. Hence, given a reference image x_ref ∈ X, our semantic-aware attribute transfer model can be formally defined as x̂ = G(M(x, x_ref, t)).

3.3 DIRECTIONAL LATENT MAPPING. Only focusing on matching ŷ with y_t may cause irrelevant attributes to change. Therefore, we enforce the mapping network to focus on the change direction of semantic features in CLIP-space, and illustrate this idea in Figure 1. The details of the proposed directional latent mapping network are as follows: Architecture. It has been shown that different layers of the synthesis network, which correspond to different parts of the latent code, control different granularities of semantic features. To better exploit this property, we design the directional latent mapping network, depicted in Figure 2.
To a great extent, the degree of attribute disentanglement depends on the disentanglement of latent features. Therefore, we split the layers into g groups, instead of three (coarse, medium, and fine), with g fully connected mapping networks, one for each group. The divided latent code can be denoted as w = [w_1, w_2, ..., w_g], so the mapping network is defined by: M(w) = [M_1(w_1), M_2(w_2), ..., M_g(w_g)], (2) where M_i is the i-th mapping network and [·, ·] denotes the concatenation operation. Then we use a skip-connection to obtain the final manipulated latent code ŵ = w + αM(w), and feed it to the pre-trained StyleGANv2 generator G to get the final manipulated facial image x̂ = G(ŵ).

Training Objective. The desired edited attribute is determined by the text prompt t. Without paired training data, the mapping network cannot correctly manipulate the latent code. To obtain extra powerful supervision, we use the CLIP model to extract the semantic features (attributes) y of the corresponding images and text: y_i, ŷ_i = E_I(x, x̂), y_t = E_T(t), (3) where E denotes the pre-trained multi-modal feature extractor integrated in CLIP, and y_i and ŷ_i denote the extracted semantic features of x and x̂, respectively. y_t is the desired semantic feature extracted from the text prompt t. In the latent CLIP-space, simply optimizing the matching degree between ŷ_i and y_t may cause irrelevant attributes to change. Therefore, rather than maximizing the matching degree, we enforce the change direction between y_i and ŷ_i to align with y_t, and propose the semantic direction consistency (SDC) loss: I = ŷ_i − y_i, T = y_t, (4) L_SDC = 1 − S(I, T), where S(·, ·) is the similarity measure. In this paper, we use the cosine similarity as the measure.
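Equation (2) and the skip connection can be sketched with linear stand-ins for the g fully connected mappers. The layer count, code width, number of groups, and α below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
n_layers, dim, g, alpha = 18, 512, 6, 0.1   # W+ layers, code width, groups

# stand-ins for the g fully connected mapping networks M_1, ..., M_g
mappers = [rng.normal(scale=0.01, size=(dim, dim)) for _ in range(g)]

def directional_mapping(w):
    """M(w) = [M_1(w_1), ..., M_g(w_g)], then skip connection w_hat = w + alpha*M(w)."""
    groups = np.array_split(w, g, axis=0)   # split the style layers into g groups
    delta = np.concatenate([wi @ Mi for wi, Mi in zip(groups, mappers)], axis=0)
    return w + alpha * delta

w = rng.normal(size=(n_layers, dim))        # an inverted W+ latent code
w_hat = directional_mapping(w)              # final manipulated latent code
```

Each group of layers gets its own mapper, so coarse and fine style layers can move in different directions while the skip connection keeps ŵ close to w.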
The above objective guides the mapping network to manipulate the latent code along the direction of the text prompt. To preserve the irrelevant parts, we use the following identity loss: L_ID = 1 − ⟨R(G(w)), R(G(ŵ))⟩, (5) where R is a pre-trained ArcFace (Deng et al., 2019) network for face recognition and ⟨·, ·⟩ computes the cosine similarity between its two arguments. Meanwhile, we use the L2 distance in W+ space to control the degree of manipulation. The whole training objective for the directional latent mapping network is then: argmin_{w ∈ W+} λ_SDC L_SDC(w, t) + λ_L2 ||M(w)||_2 + λ_ID L_ID(w). (6)
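A numpy sketch of the losses in equations (4)-(6). The λ weights and the scalar `id_sim` (standing in for the ArcFace cosine similarity of equation (5)) are our illustrative assumptions; in the real method the feature vectors come from CLIP encoders.

```python
import numpy as np

def cos_sim(a, b, eps=1e-8):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def sdc_loss(y_edit, y_orig, y_text):
    """L_SDC = 1 - S(I, T) with I = y_edit - y_orig and T = y_text, eq. (4)."""
    return 1.0 - cos_sim(y_edit - y_orig, y_text)

def total_loss(y_edit, y_orig, y_text, id_sim, m_w,
               lam_sdc=1.0, lam_l2=0.8, lam_id=0.2):
    # eq. (6): SDC term + L2 regularizer on M(w) + identity term from eq. (5)
    return (lam_sdc * sdc_loss(y_edit, y_orig, y_text)
            + lam_l2 * float(np.linalg.norm(m_w))
            + lam_id * (1.0 - id_sim))

# an edit whose CLIP-space change direction exactly matches the text
# direction incurs (numerically) zero SDC loss
y0, yt = np.array([1.0, 0.0]), np.array([0.0, 1.0])
zero = sdc_loss(y0 + 2.0 * yt, y0, yt)
```

Note that L_SDC only constrains the direction of the change, not its magnitude; the ||M(w)||_2 term is what keeps the edit small.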
The authors proposed a directional latent mapping network for facial attribute editing via text inputs. Trained with the semantic direction consistency (SDC) loss, the network can correctly edit relevant attributes while preserving irrelevant ones. This paved the way to a novel semantic directional decomposition network (SDD-Net) for text-driven facial attribute transfer: SDD-Net transfers semantic-aware attributes from reference images to a target, with the multi-modal approach guiding the process via text input descriptions. The CelebA-HQ dataset was used to compare results with recent state-of-the-art methods.
SP:caea798fb6dcc5623f6516a64c2ea94deac2ae02
Text-Driven Image Manipulation via Semantic-Aware Knowledge Transfer
1 INTRODUCTION. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have revolutionized a variety of fields due to their powerful ability to generate realistic and meaningful outputs. Recent works (Jahanian et al., 2019; He et al., 2019; Goetschalckx et al., 2019) have shown that deep generative models can capture real-world data distributions and encode them into a semantically rich latent space. Inspired by this, many tasks have drawn attention to latent space manipulation, including image enhancement (Ledig et al., 2017; Yang et al., 2021b), editing (Shen et al., 2020; Härkönen et al., 2020; Patashnik et al., 2021), and discriminative tasks (Nitzan et al., 2021; Xu et al., 2021). With the tremendous success of deep generative models, facial attribute editing (Yeh et al., 2017; Liu et al., 2019; He et al., 2020; Dorta et al., 2020), which aims to edit specific attributes of a target facial image, has become topical. As a special case of facial attribute editing, facial attribute transfer (Xiao et al., 2018; Lin et al., 2018; Yin et al., 2019; Choi et al., 2018; 2020) uses knowledge from a reference image as a condition to edit the corresponding attribute of the target image. To ensure that the manipulated facial image meets the requirements and interests, the facial attribute transfer task tackles two challenges simultaneously: (1) editing relevance: the relevant attribute should be edited precisely according to the given condition; and (2) keeping irrelevance: the irrelevant parts (e.g., identity information, background, or other attributes) should not be modified during attribute transfer. Due to the strong entanglement of attributes, meeting both requirements is an intractable task. For example, without full disentanglement, transferring the “smile” attribute to the target facial image may cause that another irrelevant but coupled attribute, e.g.
the “cheek color” attribute, would be changed during editing. In view of the above issues, a variety of recent methods explore attribute disentanglement in two ways. Some methods (He et al., 2020; Kwak et al., 2020) resort to spatial attention detection, which disentangles an attribute by spatially searching for its specific support region and manipulating the image only in that confined area. Obviously, these methods totally ignore facial details beyond the support region when the edited attribute is global, such as “smile” or “age”. Meanwhile, other methods (Shen et al., 2020; Yang et al., 2021a; Patashnik et al., 2021) focus on latent space factorization through a pre-trained GAN. These methods employ high-level semantic information as guidance to manipulate images in latent spaces, which is more suitable for handling both global and local attribute editing. However, owing to over-coupled semantic features, these methods struggle to manipulate a specific attribute without powerful supervision. To overcome the problems mentioned above, we explore the latent semantic space for disentangled attribute editing and apply the discovered manipulation method to the facial attribute transfer task. For the attribute editing task, in order to disentangle and edit the attribute specified by the text prompt, we design the directional latent mapping network, which leverages a semantic direction consistency (SDC) loss to constrain the manipulation in the CLIP-space (Radford et al., 2021). The key idea of the SDC loss is to employ the change direction of semantic features to estimate the latent manipulation. Furthermore, in order to apply this effective editing method to the facial attribute transfer task, we propose a novel semantic-level facial attribute transfer method driven by text prompts, named the semantic directional decomposition network (SDD-Net).
The SDD-Net extracts and transfers the specific attribute, without redundant information, through attribute-manipulated semantic directional decomposition. Our contributions are summarized as follows: • We propose a novel directional latent mapping network for facial attribute editing, which utilizes semantic direction consistency regularization to ensure attribute disentanglement. • To further exploit the semantic direction constraint, we propose a text-driven semantic directional decomposition network (SDD-Net) for semantic-level attribute transfer, which transfers knowledge from the reference image to the target image. • Extensive experiments on the CelebA-HQ (Karras et al., 2017) dataset show that our method achieves significant improvements over state-of-the-art approaches. 2 RELATED WORK . Latent Space Manipulation. Recent studies (Bau et al., 2020; Goetschalckx et al., 2019; Shen et al., 2020) have shown that numerous GAN models encode rich, crucial information in intermediate latent spaces, such as W, W+ (Abdal et al., 2019), or StyleSpace S (Wu et al., 2021). By learning to modify the intermediate latent code, generative models can transfer attributes from one face to another (Xiao et al., 2018; Choi et al., 2020). To find a latent code that allows for meaningful manipulation, some methods try to learn an effective encoder network that inverts a real image into latent space: the encoder4editing (e4e) (Tov et al., 2021) method presents an encoder specifically designed to balance a distortion-editability tradeoff and a distortion-perception tradeoff within the StyleGAN latent space; StyleSpace (Wu et al., 2021) proposes a space of channel-wise style parameters that disentangles attributes by controlling attribute-related style channels; the pixel2style2pixel (pSp) (Richardson et al.
, 2021) method utilizes a novel encoder architecture that inverts a real image into W+ space without optimization. Other methods focus on finding latent code modifications that traverse the latent space and yield the desired manipulation: AttGAN (He et al., 2019) applies an attribute classification constraint to model the relation between attributes and the latent representation; InterfaceGAN (Shen et al., 2020) decouples some entangled semantic features with subspace projection; GANSpace (Härkönen et al., 2020) leverages principal component analysis to identify the main directions in latent space; L2M-GAN (Yang et al., 2021a) imposes an orthogonality constraint to ensure disentanglement in latent space; StyleCLIP (Patashnik et al., 2021) leverages CLIP models to guide attribute manipulation by latent semantic matching. Different from the methods mentioned above, our method aligns the guidance direction, derived directly from the text prompt, with the change direction of semantic features extracted by CLIP to guide the manipulation in the latent space. By constraining the consistency of these directions, our method achieves disentangled attribute editing. Figure 1: Overview of our idea for the directional latent mapping network. By enforcing the change direction of the semantic feature to align with the desired direction in CLIP-space, our method can achieve impressive disentangled image manipulation. Semantic-level Attribute Transfer. Semantic-level attribute transfer is a more challenging facial attribute editing task. Recent studies have attempted more detailed approaches for facial attribute transfer: StarGAN (Choi et al., 2018) applies cycle consistency to preserve identity and uses a classification loss to transfer between different domains. In addition, StarGANv2 (Choi et al.
, 2020) can also synthesize reference-guided images by leveraging multi-domain translation. Some works specialize in specific attributes: ExprGAN (Ding et al., 2018) proposes a model that explicitly learns disentangled identity and expression representations for facial expression transfer. Besides, ERGAN (Hu et al., 2020) proposes a dual learning scheme to simultaneously learn two inverse manipulations for attribute transfer. Different from these works, our proposed method focuses not only on one attribute, but on various global or local attributes. Given an explicit text prompt, our method can automatically extract the specific attribute from the reference and transfer the knowledge to the target image. 3 METHODOLOGY . In the following section, we first provide the preliminaries and problem formulation. Then, we introduce our text-driven directional latent mapping network for disentangled facial attribute editing. Finally, we describe the text-driven semantic directional decomposition network (SDD-Net) for semantic-aware facial attribute transfer. 3.1 PRELIMINARIES . StyleGAN. The StyleGAN (Karras et al., 2019; 2020) generator consists of two main components: a mapping network and a synthesis network. The former translates a latent code z into a latent code w in the semantically rich latent space W, and the latter uses w to synthesize the final image through its different layers. Due to the rich information it carries, the latent code w can control multi-granularity semantic features of the synthetic image, which we exploit for effective semantic-level facial attribute transfer. In this paper, we leverage the pre-trained synthesis network as our image generator. StyleCLIP. The StyleCLIP (Patashnik et al., 2021) method first combines StyleGAN and CLIP as a strong tool for text-driven image editing. The key idea of StyleCLIP is to leverage CLIP as latent manipulation guidance.
By mapping multi-modal inputs into the CLIP-space, StyleCLIP ensures that the synthetic image matches the text prompt at the semantic level. The objective is given by: L_styleclip = D_CLIP(G(w), text), (1) where D_CLIP is the cosine distance in CLIP-space and G is the pre-trained StyleGAN generator. 3.2 PROBLEM FORMULATION . Our method leverages two latent spaces, the W+ space and the CLIP-space, to transfer semantic-level facial attributes. For clarity, let X and Y denote the set of images and the set of semantic features in CLIP-space, respectively. Each image x ∈ X is inverted into a corresponding latent code w ∈ W+. t denotes the input text prompt, with corresponding semantic feature y_t. The parameters of the pre-trained StyleGANv2 generator G are frozen during training. Given the text prompt t, the goal of the facial attribute transfer model is to train a latent mapping network that translates w (with semantic feature y) into ŵ (with semantic feature ŷ) and synthesizes an edited image x̂ that meets the specified requirements. Our basic editing model is formally defined as x̂ = G(M(x, t)), where M is the manipulation network. Given a reference image x_ref ∈ X, our semantic-aware attribute transfer model is then defined as x̂ = G(M(x, x_ref, t)). 3.3 DIRECTIONAL LATENT MAPPING . Focusing only on matching ŷ with y_t may cause irrelevant attributes to change. Therefore, we enforce the mapping network to focus on the change direction of semantic features in CLIP-space; this idea is illustrated in Figure 1. The details of the proposed directional latent mapping network are as follows: Architecture. It has been shown that different layers of the synthesis network, which correspond to different parts of the latent code, control different granularities of semantic features. To better exploit this property, we design the directional latent mapping network depicted in Figure 2.
To a great extent, the degree of attribute disentanglement depends on the disentanglement of the latent features. Therefore, we split the layers into g groups, instead of three (coarse, medium, and fine), with g fully connected mapping networks, one for each group. The divided latent code can be denoted as w = [w_1, w_2, ..., w_g], so the mapping network is defined by: M(w) = [M_1(w_1), M_2(w_2), ..., M_g(w_g)], (2) where M_i is the i-th mapping network and [·, ·] denotes concatenation. We then use a skip-connection to obtain the final manipulated latent code ŵ = w + αM(w), and feed it to the pre-trained StyleGANv2 generator G to get the final manipulated facial image x̂ = G(ŵ). Training Objective. The desired attribute to edit is determined by the text prompt t. Without paired training data, the mapping network cannot correctly manipulate the latent code. To obtain additional powerful supervision, we use the CLIP model to extract the semantic features (attributes) of the corresponding images and text: y_i = E_I(x), ŷ_i = E_I(x̂), y_t = E_T(t), (3) where E denotes the pre-trained multi-modal feature extractor integrated in CLIP, y_i and ŷ_i denote the extracted semantic features of x and x̂, respectively, and y_t is the desired semantic feature extracted from the text prompt t. In the latent CLIP-space, simply optimizing the matching degree between ŷ_i and y_t may cause irrelevant attributes to change. Therefore, rather than maximizing this matching degree, we enforce the change direction from y_i to ŷ_i to align with y_t, and propose the semantic direction consistency (SDC) loss: I = ŷ_i − y_i, T = y_t, (4) L_SDC = 1 − S(I, T), where S(·, ·) is a similarity measurement. In this paper, we use cosine similarity as the measurement.
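A minimal numeric sketch of the SDC loss in Eq. (4), assuming the three CLIP-space features are already extracted as vectors (the function and argument names are illustrative, not the paper's implementation):

```python
import numpy as np

def sdc_loss(y_i, y_i_hat, y_t):
    """L_SDC = 1 - S(I, T), with I = y_i_hat - y_i the CLIP-space change
    direction and T = y_t the text direction; S is cosine similarity."""
    change = np.asarray(y_i_hat, dtype=float) - np.asarray(y_i, dtype=float)
    target = np.asarray(y_t, dtype=float)
    cos = float(change @ target) / (np.linalg.norm(change) * np.linalg.norm(target))
    return 1.0 - cos
```

The loss depends only on the direction of the change, not its magnitude, which is what distinguishes it from directly matching ŷ_i to y_t.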
The above objective guides the mapping network to manipulate the latent code along the direction of the text prompt. To preserve the irrelevant parts, we use the following identity loss: L_ID = 1 − ⟨R(G(w)), R(G(ŵ))⟩, (5) where R is a pre-trained ArcFace (Deng et al., 2019) network for face recognition and ⟨·, ·⟩ computes the cosine similarity between its two arguments. Meanwhile, we use the L2 distance in W+ space to control the degree of manipulation. The whole training objective for the directional latent mapping network is then: argmin_{w∈W+} λ_SDC L_SDC(w, t) + λ_L2 ||M(w)||_2 + λ_ID L_ID(w). (6)
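The full objective in Eq. (6) is a weighted sum of three terms; a sketch in which the SDC and identity loss values are assumed precomputed and the lambda weights are placeholders, not the paper's settings:

```python
import numpy as np

def total_objective(l_sdc, l_id, m_w, lam_sdc=1.0, lam_l2=0.8, lam_id=0.1):
    """lam_sdc * L_SDC + lam_l2 * ||M(w)||_2 + lam_id * L_ID, as in Eq. (6)."""
    l2_term = float(np.linalg.norm(np.asarray(m_w, dtype=float)))
    return lam_sdc * l_sdc + lam_l2 * l2_term + lam_id * l_id
```

The L2 term penalizes the mapped residual M(w) itself, so larger lam_l2 yields more conservative edits around the original latent code.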
This paper proposes a new loss function for unsupervised facial attribute editing and transfer. Specifically, a latent mapping network is trained by optimizing the similarity/distance between the generated and desired image in the CLIP latent space. Experiments show some comparisons and ablations for the "smile" attribute.
SP:caea798fb6dcc5623f6516a64c2ea94deac2ae02
Divergence-aware Federated Self-Supervised Learning
1 INTRODUCTION . Self-supervised learning (SSL) has attracted extensive research interest for learning representations without relying on expensive data labels. In computer vision, the common practice is to design proxy tasks to facilitate visual representation learning from unlabeled images (Doersch et al., 2015; Noroozi & Favaro, 2016; Zhang et al., 2016; Gidaris et al., 2018). Among them, the state-of-the-art SSL methods employ contrastive learning, which uses Siamese networks to maximize the similarity of two augmented views of the same image (Wu et al., 2018; Chen et al., 2020a; He et al., 2020; Grill et al., 2020; Chen & He, 2021). All these methods rely heavily on the assumption that images are centrally available on cloud servers, such as public data on the Internet. However, the rapidly growing amount of decentralized images may not be centralized due to increasingly stringent privacy protection regulations (Custers et al., 2019). The increasing number of edge devices, such as street cameras and phones, generate a large number of unlabeled images, but these images may not be centralized as they could contain sensitive personal information like human faces. Besides, learning representations from these images could be more beneficial for downstream tasks deployed in the same scenarios (Yan et al., 2020). A straightforward method is to apply SSL methods on each edge device independently, but this results in poor performance (Zhuang et al., 2021a) as decentralized data are mostly non-independently and identically distributed (non-IID) (Li et al., 2020a). Federated learning (FL) has emerged as a popular privacy-preserving method to train models from decentralized data (McMahan et al., 2017), where clients send training updates to the server instead of raw data. The majority of FL methods, however, are not applicable to unsupervised representation learning because they require fully labeled data (Caldas et al.
, 2018), or partially labeled data in either the server or the clients (Jin et al., 2020a; Jeong et al., 2021). Recent studies implement FL with SSL methods based on Siamese networks, but each focuses on only a single SSL method. For example, FedCA (Zhang et al., 2020a) is based on SimCLR (Chen et al., 2020a) and FedU (Zhuang et al., 2021a) is based on BYOL (Grill et al., 2020). These efforts have not yet revealed deep insights into the fundamental building blocks of Siamese networks for federated self-supervised learning. In this paper, we investigate the effects of the fundamental components of federated self-supervised learning (FedSSL) via an in-depth empirical study. To facilitate fair comparison, we first introduce a generalized FedSSL framework that embraces existing SSL methods differing in the building blocks of their Siamese networks. The framework comprises a server and multiple clients: clients conduct SSL training using Siamese networks (an online network and a target network); the server aggregates the trained online networks to obtain a new global network and uses this global network to update the clients' online networks in the next round of training. FedSSL primarily focuses on cross-silo FL, where clients are stateful with high availability (Kairouz et al., 2019). We conduct empirical studies based on the FedSSL framework and discover important insights about FedSSL. Among four popular SSL methods (SimCLR (Chen et al., 2020a), MoCo (He et al., 2020), BYOL (Grill et al., 2020), and SimSiam (Chen & He, 2021)), FedBYOL achieves the best performance, whereas FedSimSiam yields the worst performance.
More detailed analysis uncovers the following unique insights: 1) the stop-gradient operation, essential for SimSiam and BYOL, is not always essential in FedSSL; 2) the target networks of clients are essential for gaining knowledge from the online networks; 3) keeping clients' local knowledge is beneficial for performance on non-IID data. Inspired by these insights, we propose a new approach, Federated Divergence-aware Exponential Moving Average update (FedEMA)1, to address the non-IID data problem. Specifically, instead of simply replacing clients' online networks with the global network, FedEMA updates them via an exponential moving average (EMA) of the global network, where the decay rate of the EMA is dynamically measured by the divergence between the global and online networks. Extensive experiments demonstrate that FedEMA outperforms existing methods in a wide range of settings. We believe the important insights from this study will shed light on future research. Our main contributions are threefold: • We introduce a new generalized FedSSL framework that embraces existing SSL methods based on Siamese networks and offers flexibility for future methods. • We conduct in-depth empirical studies of FedSSL based on the framework and uncover deep insights into the fundamental building blocks of Siamese networks for FedSSL. • Inspired by these insights, we propose a new model update approach, FedEMA, that adaptively updates clients' online networks with an EMA of the global network. Extensive experiments show that FedEMA outperforms existing methods in a wide range of settings. 2 RELATED WORK . Self-supervised Learning. In computer vision, self-supervised learning (SSL) aims to learn visual representations without any labels. Discriminative SSL methods facilitate learning with proxy tasks (Pathak et al., 2016; Noroozi & Favaro, 2016; Zhang et al., 2016; Gidaris et al., 2018). Among them, contrastive learning (Oord et al.
, 2018; Bachman et al., 2019) has become a promising principle. It uses Siamese networks to maximize the similarity of two augmented views of the same image (positive pairs) and minimize the similarity of two different images (negative pairs). These methods are either contrastive or non-contrastive: contrastive SSL methods require negative pairs (Chen et al., 2020a; He et al., 2020) to prevent training collapse; non-contrastive SSL methods (Grill et al., 2020; Chen & He, 2021) are generally more efficient, as they maintain remarkable performance using only positive pairs. However, these methods do not perform well on decentralized non-IID data (Zhuang et al., 2021a). We analyze their similarities and differences and propose a generalized FedSSL framework. Federated Learning. Federated learning (FL) is a distributed training technique for learning from decentralized parties without transmitting raw data to a central server (McMahan et al., 2017). 1Intuitively, if FedSSL is analogous to a superclass in object-oriented programming (OOP), then FedEMA is a subclass that inherits from FedSSL and overrides the model update method. Among many studies that address the non-IID data challenge (Zhao et al., 2018; Li et al., 2020b; Wang et al., 2020; Zhuang et al., 2021c), personalized FL (PFL) aims to learn personalized models for clients (Tan et al., 2021). Although some PFL methods interpolate global and local models (Hanzely et al., 2020; Mansour et al., 2020; yuyang deng et al., 2021), our proposed FedEMA differs in motivation, application scenario, and the measurement of the decay rate. Besides, the majority of existing works consider only supervised learning, where clients have fully labeled data. Although recent works propose federated semi-supervised learning (Jin et al., 2020b; Zhang et al., 2020b; Jeong et al., 2021) or federated domain adaptation (Peng et al., 2020; Zhuang et al.
, 2021b), they still need labels in either the server or the clients. This paper focuses on purely unlabeled decentralized data. Federated Unsupervised Learning. Learning representations from unlabeled decentralized data while preserving data privacy is still a nascent field. Federated unsupervised representation learning was first proposed by van Berlo et al. (2020) based on an autoencoder, but it neglects the non-IID data challenge. Zhang et al. (2020a) address the non-IID issue, but with a potential privacy risk from sharing features. Although Zhuang et al. (2020) address the issue based on BYOL, as our FedEMA does, they do not shed light on why BYOL works best. Since SSL methods are evolving rapidly and new methods keep emerging, we introduce a generalized FedSSL framework and deeply investigate its fundamental components to build practical guidelines for the generic FedSSL framework. 3 AN EMPIRICAL STUDY OF FEDERATED SELF-SUPERVISED LEARNING . This section first defines the problem and introduces the generalized FedSSL framework. Using the framework, we then conduct empirical studies to reveal deep insights into FedSSL. 3.1 PROBLEM DEFINITION . FedSSL aims to learn a generalized representation W from multiple decentralized parties for downstream tasks in the same scenarios. Each party k contains unlabeled data D_k = {X_k} that cannot be transferred to the server or other parties due to privacy constraints. Data is normally non-IID among decentralized parties (Li et al., 2020a); each party could contain only limited data categories (e.g., two out of ten CIFAR-10 classes) (Luo et al., 2019). As a result, each party alone is unable to obtain a good representation (Zhuang et al., 2021a). The global objective function for learning from multiple parties is min_w f(w) := Σ_{k=1}^{K} (n_k / n) f_k(w), where K is the number of clients and n = Σ_{k=1}^{K} n_k is the total amount of data.
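The weighted global objective above can be sketched numerically; the per-client loss values are assumed precomputed, and `global_objective` is an illustrative name rather than anything from the paper:

```python
def global_objective(client_losses, sample_counts):
    """f(w) = sum_k (n_k / n) f_k(w): a sample-count-weighted average of
    per-client expected losses, so clients with more data weigh more."""
    n = float(sum(sample_counts))
    return sum((n_k / n) * f_k for f_k, n_k in zip(client_losses, sample_counts))
```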
For client k, f_k(w) := E_{x_k∼P_k}[f̃_k(w; x_k)] is the expected loss over the data distribution P_k, where x_k is the unlabeled data and f̃_k(w; x_k) is the loss function. 3.2 GENERALIZED FRAMEWORK . We introduce a generalized FedSSL framework that empowers existing SSL methods based on Siamese networks to learn from decentralized data under privacy constraints. Figure 1 depicts the end-to-end training pipeline of the framework. It comprises three key operations: 1) local training in the clients; 2) model aggregation in the server; 3) model communication (upload and update) between the server and the clients. We implement and analyze four popular SSL methods: SimCLR (Chen et al., 2020a), MoCo (V1 (He et al., 2020) and V2 (Chen et al., 2020b)), SimSiam (Chen & He, 2021), and BYOL (Grill et al., 2020). Differences in the Siamese networks of these methods lead to differences in how these three operations are executed2. Local Training. First, each client k conducts self-supervised training on its unlabeled data D_k starting from the same global model W_g^o downloaded from the server. Regardless of the SSL method, clients train Siamese networks, an online network W_k^o and a target network W_k^t, for E local epochs using the corresponding loss function L. We classify these SSL methods by two major differences (Figure 8 in Appendix A): 1) only SimSiam and BYOL contain a predictor in the online network, so we denote their online network W_k^o = (W_k, W_k^p), where W_k is the online encoder and W_k^p is the predictor; for SimCLR and MoCo, W_k^o = W_k. 2) SimCLR and SimSiam share identical weights between the online encoder and the target encoder, so W_k^t = W_k. In contrast, MoCo and BYOL update the target encoder with an EMA of the online encoder in every mini-batch: W_k^t = m W_k + (1 − m) W_k^t, where m is the momentum value, normally set to 0.99.
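The per-mini-batch target update can be sketched directly from the formula as written in the text, treating each network as a list of parameters (names are illustrative):

```python
def momentum_update(w_online, w_target, m=0.99):
    """EMA target-encoder update per mini-batch, following the text:
    W_t <- m * W_o + (1 - m) * W_t."""
    return [m * wo + (1.0 - m) * wt for wo, wt in zip(w_online, w_target)]
```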
Model Communication. After local training, client k uploads the trained online network W_k^o to the server, and updates it with the global model W_g^o after aggregation. Considering the differences among SSL methods, we upload and update encoders and predictors separately: 1) we upload and update the predictor when it is present in local training; 2) we follow the communication protocol of Zhuang et al. (2021a) to upload and update only the online encoder W_k when the encoders differ. Model Aggregation. When the server receives the online networks from clients, it aggregates them to obtain a new global model W_g^o = Σ_{k=1}^{K} (n_k / n) W_k^o, with W_g^o = (W_g, W_g^p) if a predictor is present and W_g^o = W_g otherwise, where W_g is the global encoder. Then, the server sends W_g^o to the clients to update their online networks. Training iterates over these three operations until the stopping conditions are met. At the end of training, we use the parameters of W_g^o as the generic representation W for evaluation.
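The aggregation step and the divergence-aware client update (FedEMA, motivated in Section 1) can be sketched as follows, treating each network as a flat parameter vector. The clipped-norm mapping from divergence to decay rate and the `scale` parameter are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

def aggregate(online_nets, sample_counts):
    """Server step: W_g^o = sum_k (n_k / n) W_k^o over all clients."""
    n = float(sum(sample_counts))
    agg = np.zeros_like(np.asarray(online_nets[0], dtype=float))
    for w_k, n_k in zip(online_nets, sample_counts):
        agg += (n_k / n) * np.asarray(w_k, dtype=float)
    return agg

def fedema_client_update(w_online, w_global, scale=1.0):
    """FedEMA sketch: the more a client has diverged from the global
    model, the more local knowledge it keeps (decay rate is dynamic)."""
    w_o = np.asarray(w_online, dtype=float)
    w_g = np.asarray(w_global, dtype=float)
    mu = min(scale * float(np.linalg.norm(w_g - w_o)), 1.0)
    return mu * w_o + (1.0 - mu) * w_g
```

With zero divergence the client simply adopts the global model, recovering the plain FedSSL update; as divergence grows, the update interpolates toward keeping the local online network.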
This paper investigates a generic federated SSL recipe that applies FedAvg to a range of existing SSL works, including SimCLR, MoCo, BYOL, and SimSiam. Each of these SSL blocks comprises two encoding networks: an online net and a target net. The two nets are trained by optimizing a similarity loss such that the distance between encodings of different augmented versions (e.g., rotations) of the same input is small, and vice versa. Depending on the specific SSL block, the parameterization of the two nets may be identical (SimCLR, SimSiam) or not (MoCo, BYOL). At each iteration, the online net of each client is uploaded to a server to be averaged. Then, the (global) averaged version is sent back to each client. Each client then resets its online net as a weighted average of the global and local versions of the online net (to account for potential data heterogeneity), and continues updating the SSL block via gradient updates, and so on. The main contribution of the paper is a series of ablation studies (Table 1, Figure 2, Table 2, Figure 3) that measure the isolated impact of several fundamental components of the aforementioned SSL blocks (e.g., SimCLR, SimSiam, MoCo, and BYOL). This includes the predictor, the stop-gradient, the exponential moving average (EMA), and non-identical online and target encoding nets. The results indicate that BYOL contains all the essential elements (predictor, EMA, non-identical online & target nets, and no stop-gradient), so the federated SSL recipe is oriented towards BYOL as the model choice. Then, a customized, divergence-aware EMA is proposed for this recipe, which is shown to improve substantially over existing FL-SSL baselines (Tables 3-4; Tables 6-7). There is also a short ablation study measuring the isolated impact of the new EMA, which is also substantial, as shown in Figure 5.
SP:3bf64bcc780380921a57f996019861e87d24884e
Divergence-aware Federated Self-Supervised Learning
1 INTRODUCTION . Self-supervised learning ( SSL ) has attracted extensive research interest for learning representations without relying on expensive data labels . In computer vision , the common practice is to design proxy tasks to facilitate visual representation learning from unlabeled images ( Doersch et al. , 2015 ; Noroozi & Favaro , 2016 ; Zhang et al. , 2016 ; Gidaris et al. , 2018 ) . Among them , the state-of-the-art SSL methods employ contrastive learning that uses Siamese networks to minimize the similarity of two augmented views of images ( Wu et al. , 2018 ; Chen et al. , 2020a ; He et al. , 2020 ; Grill et al. , 2020 ; Chen & He , 2021 ) . All these methods heavily rely on the assumption that images are centrally available in cloud servers , such as public data on the Internet . However , the rapidly growing amount of decentralized images may not be centralized due to increasingly stringent privacy protection regulations ( Custers et al. , 2019 ) . The increasing number of edge devices , such as street cameras and phones , are generating a large number of unlabeled images , but these images may not be centralized as they could contain sensitive personal information like human faces . Besides , learning representations from these images could be more beneficial for downstream tasks deployed in the same scenarios ( Yan et al. , 2020 ) . A straightforward method is to adopt SSL methods for each edge , but it results in poor performance ( Zhuang et al. , 2021a ) as decentralized data are mostly non-independently and identically distributed ( non-IID ) ( Li et al. , 2020a ) . Federated learning ( FL ) has emerged as a popular privacy-preserving method to train models from decentralized data ( McMahan et al. , 2017 ) , where clients send training updates to the server instead of raw data . The majority of FL methods , however , are not applicable for unsupervised representation learning because they require fully labeled data ( Caldas et al. 
, 2018 ) , or partially labeled data in either the server or clients ( Jin et al. , 2020a ; Jeong et al. , 2021 ) . Recent studies implement FL with SSL methods that are based on Siamese networks , but they only focus on a single SSL method . For example , FedCA ( Zhang et al. , 2020a ) is based on SimCLR ( Chen et al. , 2020a ) and FedU ( Zhuang et al. , 2021a ) is based on BYOL ( Grill et al. , 2020 ) . These efforts have not yet revealed deep insights into the fundamental building blocks of Siamese networks for federated self-supervised learning . In this paper , we investigate the effects of fundamental components of federated self-supervised learning ( FedSSL ) via in-depth empirical study . To facilitate fair comparison , we first introduce a generalized FedSSL framework to embrace existing SSL methods that differ in building blocks of Siamese networks . The framework comprises of a server and multiple clients : clients conduct SSL training using Siamese networks — an online network and a target network ; the server aggregates the trained online networks to obtain a new global network and uses this global network to update the online networks of clients in the next round of training . FedSSL primarily focuses on the cross-silo FL where clients are stateful with high availability ( Kairouz et al. , 2019 ) . We conduct empirical studies based on the FedSSL framework and discover important insights of FedSSL . Among four popular SSL methods ( SimCLR ( Chen et al. , 2020a ) , MoCo ( He et al. , 2020 ) , BYOL ( Grill et al. , 2020 ) , and SimSiam ( Chen & He , 2021 ) , FedBYOL achieves the best performance , whereas FedSimSiam yields the worst performance . 
More detailed analysis uncover the following unique insights : 1 ) Stop-gradient operation , essential for SimSiam and BYOL , is not always essential in FedSSL ; 2 ) Target networks of clients are essential to gain knowledge from online networks ; 3 ) Keeping local knowledge of clients is beneficial for performance on non-IID data . Inspired by the insights , we propose a new approach , Federated Divergence-aware Exponential Moving Average update ( FedEMA ) 1 , to address the non-IID data problem . Specifically , instead of updating online networks of clients simply by the global network , FedEMA updates them via exponential moving average ( EMA ) of the global network , where the decay rate of EMA is measured by the divergence of global and online networks dynamically . Extensive experiments demonstrate that FedEMA outperforms existing methods in a wide range of settings . We believe that important insights from this study will shed light on future research . Our main contributions are threefold : • We introduce a new generalized FedSSL framework that embraces existing SSL methods based on Siamese networks and presents flexibility catering to future methods . • We conduct in-depth empirical studies of FedSSL based on the framework and discover deep insights of the fundamental building blocks of Siamese networks for FedSSL . • Inspired by the insights , we further propose a new model update approach , FedEMA , that adaptively updates online networks of clients with EMA of the global network . Extensive experiments show that FedEMA outperforms existing methods in a wide range of settings . 2 RELATED WORK . Self-supervised Learning In computer vision , self-supervised learning ( SSL ) aims to learn visual representations without any labels . Discriminative SSL methods facilitate learning with proxy tasks ( Pathak et al. , 2016 ; Noroozi & Favaro , 2016 ; Zhang et al. , 2016 ; Gidaris et al. , 2018 ) . Among them , contrastive learning ( Oord et al. 
, 2018 ; Bachman et al. , 2019 ) has become a promising principle . It uses Siamese networks to minimize the similarity of two augmented views ( positive pairs ) and maximize the similarity of two different images ( negative pairs ) . These methods are either contrastive or non-contrastive ones : contrastive SSL methods require negative pairs ( Chen et al. , 2020a ; He et al. , 2020 ) to prevent training collapse ; non-contrastive SSL methods ( Grill et al. , 2020 ; Chen & He , 2021 ) are generally more efficient as they maintain remarkable performances using only positive pairs . However , these methods do not perform well on decentralized non-IID data ( Zhuang et al. , 2021a ) . We analyze their similarities and variances and propose a generalized FedSSL framework . Federated Learning Federated learning ( FL ) is a distributed training technique for learning from decentralized parties without transmitting raw data to a central server ( McMahan et al. , 2017 ) . 1Intuitively , FedSSL is analogous to a SuperClass in object-oriented programming ( OOP ) , then FedEMA is a SubClass that inherits FedSSL and overrides the model update method . Among many studies that address the non-IID data challenge ( Zhao et al. , 2018 ; Li et al. , 2020b ; Wang et al. , 2020 ; Zhuang et al. , 2021c ) , Personalized FL ( PFL ) aims to learn personalized models for clients ( Tan et al. , 2021 ) . Although some PFL methods interpolate global and local models ( Hanzely et al. , 2020 ; Mansour et al. , 2020 ; yuyang deng et al. , 2021 ) , our proposed FedEMA differ in the motivation , application scenario , and measurement of the decay rate . Besides , the majority of existing works only consider supervised learning where clients have fully labeled data . Although recent works propose federated semi-supervised learning ( Jin et al. , 2020b ; Zhang et al. , 2020b ; Jeong et al. , 2021 ) or federated domain adaptation ( Peng et al. , 2020 ; Zhuang et al. 
, 2021b ) , they still need labels in either the server or the clients . This paper focuses on purely unlabeled decentralized data . Federated Unsupervised Learning Learning representations from unlabeled decentralized data while preserving data privacy is still a nascent field . Federated unsupervised representation learning was first proposed by van Berlo et al . ( 2020 ) based on autoencoders , but it neglects the non-IID data challenge . Zhang et al . ( 2020a ) address the non-IID issue with potential privacy risk from sharing features . Although Zhuang et al . ( 2020 ) address the issue based on BYOL , as our FedEMA does , they do not shed light on why BYOL works best . Since SSL methods are evolving rapidly and new methods are emerging , we introduce a generalized FedSSL framework and deeply investigate the fundamental components to build up practical guidelines for the generic FedSSL framework . 3 AN EMPIRICAL STUDY OF FEDERATED SELF-SUPERVISED LEARNING . This section first defines the problem and introduces the generalized FedSSL framework . Using the framework , we then conduct empirical studies to reveal deep insights into FedSSL . 3.1 PROBLEM DEFINITION . FedSSL aims to learn a generalized representation W from multiple decentralized parties for downstream tasks in the same scenarios . Each party k contains unlabeled data Dk = { Xk } that can not be transferred to the server or other parties due to privacy constraints . Data is normally non-IID among decentralized parties ( Li et al. , 2020a ) ; each party could contain only limited data categories ( e.g. , two out of ten CIFAR-10 classes ) ( Luo et al. , 2019 ) . As a result , each party alone is unable to obtain a good representation ( Zhuang et al. , 2021a ) . The global objective function to learn from multiple parties is $\min_w f(w) := \sum_{k=1}^{K} \frac{n_k}{n} f_k(w)$ , where K is the number of clients and $n = \sum_{k=1}^{K} n_k$ is the total amount of data .
For client k , $f_k(w) := \mathbb{E}_{x_k \sim P_k}[\tilde{f}_k(w ; x_k)]$ is the expected loss over the data distribution $P_k$ , where $x_k$ is the unlabeled data and $\tilde{f}_k(w ; x_k)$ is the loss function . 3.2 GENERALIZED FRAMEWORK . We introduce a generalized FedSSL framework that empowers existing SSL methods based on Siamese networks to learn from decentralized data under privacy constraints . Figure 1 depicts the end-to-end training pipeline of the framework . It comprises three key operations : 1 ) Local Training in clients ; 2 ) Model Aggregation in the server ; 3 ) Model Communication ( upload and update ) between the server and clients . We implement and analyze four popular SSL methods — SimCLR ( Chen et al. , 2020a ) , MoCo ( V1 ( He et al. , 2020 ) and V2 ( Chen et al. , 2020b ) ) , SimSiam ( Chen & He , 2021 ) , and BYOL ( Grill et al. , 2020 ) . Variances in the Siamese networks of these methods lead to differences in the executions of these three operations 2 . Local Training Firstly , each client k conducts self-supervised training on unlabeled data $D_k$ based on the same global model $W_g^o$ downloaded from the server . Regardless of the SSL method , clients train with Siamese networks — an online network $W_k^o$ and a target network $W_k^t$ — for E local epochs using the corresponding loss functions L . We classify these SSL methods with two major differences ( Figure 8 in Appendix A ) : 1 ) Only SimSiam and BYOL contain a predictor in the online network , so we denote their online network $W_k^o = ( W_k , W_k^p )$ , where $W_k$ is the online encoder and $W_k^p$ is the predictor ; for SimCLR and MoCo , $W_k^o = W_k$ . 2 ) SimCLR and SimSiam share identical weights between the online encoder and the target encoder , so $W_k^t = W_k$ . In contrast , MoCo and BYOL update the target encoder with an EMA of the online encoder in every mini-batch : $W_k^t = m W_k^t + ( 1 - m ) W_k$ , where m is the momentum value , normally set to 0.99 .
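The per-mini-batch target-network update used by MoCo and BYOL can be sketched as follows; this is a minimal illustration with flat numpy weight vectors, not the authors' exact implementation:

```python
import numpy as np

def momentum_update(w_target, w_online, m=0.99):
    """Per-mini-batch EMA update of the target encoder (MoCo/BYOL style):
    the target keeps a fraction m of its own weights and slowly tracks
    the online encoder."""
    return m * w_target + (1.0 - m) * w_online

# Over many mini-batches the target drifts toward the online weights.
w_t, w_o = np.zeros(2), np.ones(2)
for _ in range(500):
    w_t = momentum_update(w_t, w_o)
```

With m = 0.99, a single update moves the target only 1% of the way toward the online encoder, which is what keeps the target a slowly evolving, stable learning signal.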
Model Communication After local training , client k uploads the trained online network $W_k^o$ to the server and updates it with the global model $W_g^o$ after aggregation . Considering the differences among SSL methods , we upload and update encoders and predictors separately : 1 ) we upload and update the predictor when it is present in local training ; 2 ) we follow the communication protocol of Zhuang et al . ( 2021a ) to upload and update only the online encoder $W_k$ when the encoders are different . Model Aggregation When the server receives the online networks from clients , it aggregates them to obtain a new global model $W_g^o = \sum_{k=1}^{K} \frac{n_k}{n} W_k^o$ . $W_g^o = ( W_g , W_g^p )$ if a predictor is present , otherwise $W_g^o = W_g$ , where $W_g$ is the global encoder . Then , the server sends $W_g^o$ to clients to update their online networks . The training iterates these three operations until it meets the stopping conditions . At the end of training , we use the parameters of $W_g^o$ as the generic representation W for evaluation .
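The server-side aggregation above is the standard FedAvg-style weighted average; a minimal sketch over flat weight vectors (the function name is illustrative):

```python
import numpy as np

def aggregate(client_weights, client_sizes):
    """Weighted average of client online networks (FedAvg-style).

    client_weights: list of flat weight vectors, one per client.
    client_sizes:   number of local samples n_k for each client.
    """
    n = float(sum(client_sizes))
    global_w = np.zeros_like(client_weights[0])
    for w_k, n_k in zip(client_weights, client_sizes):
        # Each client contributes proportionally to its data amount n_k / n.
        global_w += (n_k / n) * w_k
    return global_w
```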
The authors propose a new approach called Federated Divergence-aware Exponential Moving Average update (FedEMA) to avoid the IID assumption. FedEMA is built on top of FedSSL, a framework for self-supervised learning in a federated learning context. The authors propose to fuse local and global knowledge effectively through an EMA update, where the decay rate of the EMA is dynamically determined by the divergence between the global and online networks.
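The divergence-aware EMA update summarized above can be sketched as follows; this is a minimal illustration with flat numpy weight vectors, where the scaling hyperparameter `lam` and the function name are assumptions rather than the authors' exact implementation:

```python
import numpy as np

def fedema_update(w_online, w_global, lam=1.0, mu_max=1.0):
    """Divergence-aware EMA update of a client's online network (sketch).

    Instead of overwriting the online weights with the global model,
    blend them: the larger the divergence between global and online
    weights, the more local knowledge is kept.
    """
    # Measure the divergence between global and online networks.
    divergence = np.linalg.norm(w_global - w_online)
    # Map the divergence to a decay rate mu in [0, mu_max];
    # lam is a hypothetical scaling hyperparameter.
    mu = min(lam * divergence, mu_max)
    # EMA: keep mu of the local (online) weights, take (1 - mu) from global.
    return mu * w_online + (1.0 - mu) * w_global
```

With `lam = 0` this reduces to the plain FedAvg behavior (the client simply adopts the global model); a larger divergence pushes `mu` toward 1, so clients with very non-IID data retain more of their local knowledge.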
SP:3bf64bcc780380921a57f996019861e87d24884e
Foreground-attention in neural decoding: Guiding Loop-Enc-Dec to reconstruct visual stimulus images from fMRI
1 INTRODUCTION . In recent years , reconstructing visual stimulus images from fMRI has gradually gained attention , which provides the possibility of “ mind reading ” in the future ( Fig . 1 ) . Existing work has shown that there is a certain mapping relationship between visual stimuli and brain activity ( Poldrack & Farah , 2015 ) , which provides us with the possibility and basis for reconstructing visual stimuli from fMRI data . fMRI data collected from the human brain record variations in the blood-oxygen-level-dependent ( BOLD ) signal and reflect neural activity in the human brain . Through the analysis of fMRI data , the correlation between brain activity and visual tasks can be explored , helping us to understand the human visual mechanism better . Prior studies . In recent years , there has been a lot of research in this field . The main methods can be roughly divided into the following two categories : linear regression models , which map fMRI data to image pixel values to achieve image reconstruction , and deep learning models , such as DCNN and GAN model frameworks . For the first type of method , Yoichi Miyawaki et al . applied a multi-scale linear weighting model to predict the pixel value of each image and obtained reconstructions of simple black-and-white images ( Miyawaki et al. , 2008 ) . Fujiwara et al . proposed to use Bayesian canonical correlation analysis ( BCCA ) to build a reconstruction model , which extracts image information from measured data and reconstructs images ( Fujiwara et al. , 2013 ) . Marcel A. J. van Gerven et al . adopted a hierarchical generative model composed of restricted Boltzmann machines , which explores the reconstruction of feature hierarchies based on learning ( Van Gerven et al. , 2010 ) . Yu et al . advanced a correlation network framework that could be flexibly combined with diverse pattern representation models .
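The first category above learns a linear map from voxel activity to pixel values. A minimal sketch with ordinary least squares (the variable and function names are illustrative; the cited works use more elaborate weighting and regularization):

```python
import numpy as np

def fit_linear_decoder(voxels, pixels):
    """Least-squares fit of a linear map W so that voxels @ W
    approximates flattened stimulus pixel values.

    voxels: (n_trials, n_voxels) fMRI responses.
    pixels: (n_trials, n_pixels) flattened stimulus images.
    """
    W, *_ = np.linalg.lstsq(voxels, pixels, rcond=None)
    return W

def reconstruct(voxels, W):
    """Predict pixel values for new fMRI responses."""
    return voxels @ W
```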
They revealed the effective connectivity of the human brain and reconstructed the visual stimulus images ( Yu & Zheng , 2017 ; Yu et al. , 2018 ) . For the second type of method , Guohua Shen et al . proposed an end-to-end deep convolution model to reconstruct visual stimulus images directly from fMRI data , and at the same time they used a GAN framework to optimize the training of the model ( Shen et al. , 2019a ) . In response to the current lack of data sets for reconstructing stimulus images from fMRI data , Roman Beliy et al . suggested an approach for self-supervised training on unlabeled fMRI data ( Beliy et al. , 2019 ) . Yunfeng Lin et al . proposed a model called DCNN-GAN by combining a reconstruction network and a GAN module . They utilized the CNN for hierarchical feature extraction and the DCNN-GAN to reconstruct more realistic images ( Lin et al. , 2019 ) . Tao Fang et al . defined a Shape-Semantic GAN , considering the functional differences of different visual cortex areas , and reconstructed visual stimulus images under the guidance of shape and semantic information ( Fang et al. , 2020 ) . The ideas of DCNN and GAN have also been applied to various other model frameworks to achieve the reconstruction of visual stimulus images ( Seeliger et al. , 2018 ; VanRullen & Reddy , 2019 ; Zhang et al. , 2020 ; Qiao et al. , 2020 ; Mozafari et al. , 2020 ) . Our contributions . Because of the complex neural information in the visual cortex , how to extract valuable information from high-dimensional fMRI data has been a challenging problem . Inspired by the human visual attention mechanism , we introduce visual attention to this work for the first time . We first successfully decode human visual attention from a small amount of fMRI data , and then use visual attention to guide the work of reconstructing visual stimulus images from fMRI .
Existing studies have shown that the human visual system is more inclined to focus on prominent objects , and the neural representation of these prominent objects in the brain is stronger ( Ungerleider & G , 2000 ; Braun et al. , 2001 ) . The distribution of attention leads to an information bottleneck : only the most prominent objects are allowed to appear in the inferior temporal cortex , especially the ventral visual stream that encodes the identity of the object . The visual attention mechanism is crucial for simulating the neural response of the higher visual system ( Poggio & Anselmi , 2016 ; Khosla et al. , 2020 ) . Based on the above conclusions , we propose Foreground-attention ( F-attention ) . We abstract the distribution of human visual attention as the distribution area of the most prominent foreground objects in the images . Through the constructed F-attention decoder , the fMRI corresponding to the natural stimulus images can be decoded into a visual attention distribution ( Fig . 2a ) . Because the human brain is strongly folded into sulci and gyri , some spatially adjacent voxels are not directly connected ( Tallinen et al. , 2014 ) , so the correlation between voxels is not determined by location alone . Therefore , we need to consider global information when decoding fMRI , so we are the first to introduce the self-attention module ( Vaswani et al. , 2017 ; Wang et al. , 2018 ; Zhang et al. , 2019 ) into the fMRI decoding process ; the self-attention module can capture global information of the input data and expand the receptive field ( Fig . 3c ) . At the same time , we propose a new encoding and decoding framework called Loop-Enc-Dec ( Fig . 2b , c ) . In the first step , we pre-train the fMRI encoder on the data set to realize the encoding process from images to fMRI .
In the second step , we train the end-to-end encoder-decoder model under the guidance of F-attention , and input the images decoded by the decoder into the fMRI encoder for re-encoding to add a re-encoding constraint to the loss function ; this is why the framework is called “ Loop-Enc-Dec ” . In the training process , natural pictures without fMRI data are also added for end-to-end self-enc-dec training , which increases the stability and generalization of the model . After experimental evaluation , the performance of Loop-Enc-Dec under the guidance of F-attention is better than that of mainstream methods . The main contributions of this article are as follows : • To the best of our knowledge , we are the first to introduce the visual attention mechanism into the work of reconstructing visual stimulus images from fMRI . According to neuroscience research , we abstract visual attention as the attention distribution of foreground objects ( F-attention ) , and build a visual attention decoder that successfully decodes the human visual attention distribution from fMRI . • To the best of our knowledge , we are the first to introduce the self-attention module into fMRI decoding to capture global information of fMRI data . • We propose a new enc-dec framework called Loop-Enc-Dec , guided by F-attention , which adds more loss constraints to the training of the image reconstruction model . Evaluating the quality of reconstructed images , our method outperforms previous works . 2 METHODS . In this section , we first introduce the data set used in the experiments , then we introduce the details of the F-attention and Loop-Enc-Dec framework proposed in this article , as well as the step-by-step training strategy for the model . Finally , our evaluation method is given . 2.1 FMRI DATA SET . We use a publicly available benchmark data set ( Horikawa & Kamitani , 2017 ) , which is widely used in the work of visual stimulus image reconstruction .
In the image presentation experiment , fMRI signals were measured while subjects viewed a sequence of object images from ImageNet ( Deng et al. , 2009 ) . The image presentation experiment consisted of two sessions : the training image session and the testing image session . In the training image session , 1200 images from 150 object categories ( 8 images from each category ) were each presented once . In the testing image session , 50 images from 50 object categories ( one image from each category ) were each presented 35 times . The data collectors performed their analysis for each combination of feature types/layers and brain regions of interest ( ROIs ; V1–V4 , the lateral occipital complex ( LOC ) , fusiform face area ( FFA ) , parahippocampal place area ( PPA ) , lower visual cortex ( LVC ; V1–V3 ) , higher visual cortex ( HVC ; covering regions around LOC , FFA and PPA ) and the entire visual cortex ( VC ; covering all of the visual subareas listed above ) ; the voxel size is 3×3×3 mm3 ) ( Horikawa & Kamitani , 2017 ) . 2.2 PROPOSED MODEL . Summarizing the proposed model : we first train the F-attention decoder on fMRI data ( Fig . 2a ) , then pre-train the image encoder ( Fig . 2b ) , and next we fix the weights of the encoder and train the decoder under the guidance of F-attention ( Fig . 2c ) . When testing , we first use fMRI as the input of the F-attention decoder to decode the distribution of F-attention , then input the fMRI into the image decoder to reconstruct the visual stimulus images guided by the corresponding F-attention . 2.2.1 F-ATTENTION . In order to get more useful information from fMRI data to guide the work of visual stimulus image reconstruction , we introduced the human visual attention mechanism into the work . We abstracted human visual attention as foreground attention ( F-attention ) based on neuroscience research ( Poggio & Anselmi , 2016 ; Khosla et al. , 2020 ) .
By constructing an F-attention decoder , we decode the human visual attention distribution from fMRI data . Visual attention labeling . For the purpose of obtaining the human visual attention distribution of the training set , we first label the training data ( Fig . 3b ) . According to the definition of F-attention , we first extract the prominent objects in the natural images , and then binarize the images . Through this transformation , the main distribution areas of F-attention are obtained . Then we assign attention weights to different regions in the images ( the bluer the color , the higher the attention weight ; in the training set , the weights of the foreground object areas are 1 and those of the background areas are 0 ) , and the final F-attention maps are obtained as the human visual attention labels of the training set . F-attention decoder . We then construct the F-attention decoder to realize the decoding process from fMRI data to human visual attention ( Fig . 3c ) . First , the preprocessed fMRI data is integrated into a vector , then the vector is sequentially passed through a fully connected module , convolution and upsampling modules , a self-attention module , and further convolution and upsampling modules , and we finally get the F-attention map . The self-attention module is computed as follows , where $d_k$ is 1 : $$\mathrm{SelfAttention}(Q , K , V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \quad (1)$$ In the training process , we use the mean square error ( MSE ) loss as the constraint . After 400 rounds of iterations , the error decreases and stabilizes . The loss function is as follows : $$\mathrm{Loss}_{F\text{-}attention}(FA_i , FA_i^*) \propto \sum (FA_i - FA_i^*)^2 \quad (2)$$ where $FA_i$ represents the visual attention distribution corresponding to the images in the training set , and $FA_i^*$ stands for the visual attention distribution decoded by the F-attention decoder .
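The scaled dot-product self-attention of Eq. 1 can be sketched as follows; for simplicity this uses Q = K = V = x directly, whereas the decoder would obtain Q, K, V from learned projections of the feature map:

```python
import numpy as np

def self_attention(x, d_k=1.0):
    """Scaled dot-product self-attention (Eq. 1), with Q = K = V = x.

    x: array of shape (n_tokens, d), e.g. flattened spatial positions.
    Every output position attends to (is a convex combination of) all
    input positions, which is what expands the receptive field.
    """
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d_k)             # (n_tokens, n_tokens)
    # Numerically stable row-wise softmax.
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v
```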
The paper proposes a Loop-Enc-Dec framework to perform image reconstruction in a neural decoding task. The solution is based on an end-to-end encoder-decoder model under the guidance of Foreground-attention to enhance the perceptual quality of reconstructed images. The experimental results show visible improvements in the quality of reconstructed images without requiring additional fMRI training data. The proposed method also shows improvements in the structural similarity of image reconstruction.
SP:f39d50648208f976167aeb0aac498effa4bd0a18
The authors propose a model to decode the fMRI signal from the human visual cortex by introducing Foreground-attention. They also propose an enc-dec training strategy called Loop-Enc-Dec, guided by the F-attention, to successfully reconstruct the visual images from the fMRI data. A higher score based on pairwise SSIM is achieved compared to previous works.
SP:f39d50648208f976167aeb0aac498effa4bd0a18
Certified Robustness for Free in Differentially Private Federated Learning
1 INTRODUCTION . Federated Learning ( FL ) , which aims to jointly train a global model with distributed local data , has been widely applied in different applications , such as finance ( Yang et al. , 2019b ) , medical analysis ( Brisimi et al. , 2018 ) , and user behavior prediction ( Hard et al. , 2018 ; Yang et al. , 2018 ; 2019a ) . However , the fact that the local data and the training process are entirely controlled by the local users who may be adversarial raises great concerns from both security and privacy perspectives . In particular , recent studies show that FL is vulnerable to different types of training-time attacks , such as model poisoning ( Bhagoji et al. , 2019 ) , backdoor attacks ( Bagdasaryan et al. , 2020 ; Xie et al. , 2019 ; Wang et al. , 2020 ) , and label-flipping attacks ( Fung et al. , 2020 ) . Further , privacy concerns have motivated the need to keep the raw data on local devices without sharing . However , sharing other indirect information such as gradients or model updates as part of the FL training process can also leak sensitive user information ( Zhu et al. , 2019 ; Geiping et al. , 2020 ; Bhowmick et al. , 2018 ; Melis et al. , 2019 ) . As a result , approaches based on differential privacy ( DP ) ( Dwork & Roth , 2014 ) , homomorphic encryption ( Bost et al. , 2015 ; Rouhani et al. , 2018 ; Gilad-Bachrach et al. , 2016 ) , and secure multiparty computation ( Ben-Or et al. , 1988 ; Bonawitz et al. , 2017 ) have been proposed to protect privacy of users in federated learning . In particular , differentially private federated learning ( DPFL ) provides strong information theoretic guarantees on user privacy , while causing relatively low performance overhead ( Li et al. , 2020b ) . Several defenses have been proposed to defend against poisoning attacks in FL . For instance , various robust aggregation methods ( Fung et al. , 2020 ; Pillutla et al. , 2019 ; Blanchard et al. , 2017 ; El Mhamdi et al. , 2018 ; Chen et al. 
, 2017b ; Yin et al. , 2018 ; Fu et al. , 2019 ; Li et al. , 2020a ) identify and down-weight the malicious updates during aggregation or estimate a true “ center ” of the received updates rather than taking a weighted average . Other methods include robust federated training protocols ( e.g. , clipping ( Sun et al. , 2019 ) , noisy perturbation ( Sun et al. , 2019 ) , and additional evaluation during training ( Andreina et al. , 2020 ) ) and post-training strategies ( e.g. , fine-tuning and pruning ( Wu et al. , 2020 ) ) that repair the poisoned global model . However , as these works mainly focus on providing empirical robustness for FL , they have been shown to be vulnerable to newly proposed strong adaptive attacks ( Wang et al. , 2020 ; Xie et al. , 2019 ; Baruch et al. , 2019 ; Fang et al. , 2020 ) . Hence , in this paper , we aim to develop certified robustness guarantees for FL against different poisoning attacks . Further , as differentially private federated learning ( DPFL ) is often used to protect user privacy , we also aim to ask : Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks for free ? Can we further improve the privacy of FL so as to improve its certified robustness ? Recent studies suggest that differential privacy ( DP ) is inherently related with robustness of ML models . Intuitively , DP is designed to protect the privacy of individual data , such that the output of an algorithm remains essentially unchanged when one individual input point is modified . Hence , the prediction of a DP model will be less impacted by a small amount of poisoned training data . Consequently , DP has been used to provide both theoretical and empirical defenses against evasion attacks ( Lecuyer et al. , 2019a ) and data poisoning attacks ( Ma et al. , 2019 ; Hong et al. , 2020 ) on centralized ML models . It has also been used as an empirical defense against backdoor attacks ( Gu et al. 
, 2019) in federated learning (Bagdasaryan et al., 2020; Sun et al., 2019), although no theoretical guarantee is provided. To the best of our knowledge, despite the wide application of DPFL, no existing work provides certified robustness for DPFL by leveraging its privacy property. In this paper, we aim to leverage the inherent privacy property of DPFL to provide robustness certification for FL against poisoning attacks for free. Our challenges include: (1) performing privacy analysis over training rounds in DPFL algorithms and (2) theoretically guaranteeing certified robustness based on DP properties under a given privacy budget. We propose two robustness certification criteria for FL: certified prediction and certified attack cost under different attack constraints. We consider both user-level DP (Agarwal et al., 2018; Geyer et al., 2017; McMahan et al., 2018; Asoodeh & Calmon, 2020; Liang et al., 2020), which is widely guaranteed in FL, and instance-level DP (Malekzadeh et al., 2021; Zhu et al., 2021), which is less explored in FL. We prove that an FL model satisfying user-level DP is certifiably robust against a bounded number of adversarial users. In addition, we propose the InsDP-FedAvg algorithm to improve instance-level DP in FL, and prove that instance-level DPFL is certifiably robust against a bounded number of adversarial instances. We also study the correlation between the privacy guarantee and the certified robustness of FL. While stronger privacy guarantees result in a greater attack cost, overly strong privacy can hurt the certified prediction by introducing too much noise into the training process. Thus, the optimal certified prediction is often achieved under a proper balance between privacy protection and utility loss. Key Contributions. Our work takes the first step toward providing certified robustness in DPFL for free against poisoning attacks. We make contributions on both theoretical and empirical fronts.
• We propose two criteria for certified robustness of FL against poisoning attacks (Section 4.2).
• Given an FL model satisfying user-level DP, we prove that it is certifiably robust against arbitrary poisoning attacks with a bounded number of adversarial users (Section 4.2).
• We propose the InsDP-FedAvg algorithm to improve the FL instance-level privacy guarantee (Section 5.1). We prove that instance-level DPFL is certifiably robust against the manipulation of a bounded number of instances during training (Section 5.2).
• We conduct extensive experiments on image classification (MNIST, CIFAR-10) and sentiment analysis of tweets to verify our proposed certifications of the two robustness criteria, and compare the certified results of different DPFL algorithms (Section 6).

2 RELATED WORK . Differentially Private Federated Learning. Different approaches have been proposed to guarantee user-level privacy for FL. (Geyer et al., 2017; McMahan et al., 2018) clip the norm of each local update, add Gaussian noise to the summed update, and characterize the privacy budget via the moments accountant (Abadi et al., 2016). (McMahan et al., 2018) extends (Geyer et al., 2017) to language models. In CpSGD (Agarwal et al., 2018), each user clips and quantizes the model update and adds noise drawn from a Binomial distribution, achieving both communication efficiency and DP. (Bhowmick et al., 2018) derive DP for FL via Rényi divergence (Mironov, 2017) and study its protection against data reconstruction attacks. (Liang et al., 2020) utilizes Laplacian smoothing for each local update to enhance model utility. Instead of using the moments accountant to track the privacy budget over FL rounds as in previous work, (Asoodeh & Calmon, 2020) derives the DP parameters by interpreting each round as a Markov kernel and quantifying its impact on the privacy parameters.
All these works focus only on providing user-level privacy, leaving the robustness property unexplored. In terms of instance-level privacy for FL, there are only a few works (Malekzadeh et al., 2021; Zhu et al., 2021). Dopamine (Malekzadeh et al., 2021) provides an instance-level privacy guarantee when each user performs only one step of DP-SGD (Abadi et al., 2016) at each FL round. However, it cannot be applied to multi-step SGD for each user, so it cannot be extended to the general FL setting of FedAvg (McMahan et al., 2017). (Zhu et al., 2021) privately aggregate the labels from users in a voting scheme and provide DP guarantees on both the user level and the instance level. However, it is also not applicable to standard FL, since it does not allow aggregating the gradients or updates. Differential Privacy and Robustness. In standard (centralized) learning, Pixel-DP (Lecuyer et al., 2019a) was proposed to certify model robustness against evasion attacks. However, it is unclear how to leverage it to certify against poisoning attacks. To certify robustness against poisoning attacks, (Ma et al., 2019) show that private learners are resistant to data poisoning and analyze the lower bound of attack cost against poisoning attacks for regression models. Here we certify robustness in the DPFL setting with such a lower bound as one of our certification criteria and additionally derive its upper bounds. (Hong et al., 2020) show that the off-the-shelf mechanism DP-SGD (Abadi et al., 2016), which clips per-sample gradients and adds Gaussian noise during training, can serve as an empirical defense against poisoning attacks. In federated learning, empirical works (Bagdasaryan et al., 2020; Sun et al., 2019) show that DPFL can mitigate backdoor attacks; however, none of these works provides certified robustness guarantees for DPFL against poisoning attacks. 3 PRELIMINARIES .
We start by providing some background on differential privacy (DP) and federated learning (FL). Differential Privacy (DP). DP is a formal, mathematically rigorous definition (and standard) of privacy that intuitively guarantees that a randomized algorithm behaves similarly on similar inputs and that the output of the algorithm is about the same whether or not an individual's data is included as part of the input (Dwork & Roth, 2014). Definition 1 ($(\epsilon, \delta)$-DP (Dwork et al., 2006)). A randomized mechanism $\mathcal{M} : \mathcal{D} \to \Theta$ with domain $\mathcal{D}$ and range $\Theta$ satisfies $(\epsilon, \delta)$-DP if for any pair of adjacent datasets $d, d' \in \mathcal{D}$, and for any possible (measurable) output set $E \subseteq \Theta$, it holds that $\Pr[\mathcal{M}(d) \in E] \le e^{\epsilon} \Pr[\mathcal{M}(d') \in E] + \delta$. In Definition 1, when $\mathcal{M}$ is a training algorithm for an ML model, the domain $\mathcal{D}$ and range $\Theta$ represent all possible training datasets and all possible trained models, respectively. Group DP for $(\epsilon, \delta)$-DP mechanisms follows immediately from Definition 1, where the privacy guarantee drops with the size of the group. Formally, it says: Lemma 1 (Group DP). For a mechanism $\mathcal{M}$ that satisfies $(\epsilon, \delta)$-DP, it satisfies $(k\epsilon, \frac{1-e^{k\epsilon}}{1-e^{\epsilon}}\delta)$-DP for groups of size $k$. That is, for any $d, d' \in \mathcal{D}$ that differ by $k$ individuals, and any $E \subseteq \Theta$, it holds that $\Pr[\mathcal{M}(d) \in E] \le e^{k\epsilon} \Pr[\mathcal{M}(d') \in E] + \frac{1-e^{k\epsilon}}{1-e^{\epsilon}}\delta$. Federated Learning. FedAvg was introduced by (McMahan et al., 2017) for FL to train a shared global model without direct access to users' training data. Specifically, given an FL system with $N$ users, at round $t$ the server sends the current global model $w_{t-1}$ to the users in the selected user set $U_t$, where $|U_t| = m = qN$ and $q$ is the user sampling probability. Each selected user $i \in U_t$ locally updates the model for $E$ local epochs with its dataset $D_i$ and learning rate $\eta$ to obtain a new local model. Then, the user sends the local model update $\Delta w_t^i$ to the server.
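The group-DP bound in Lemma 1 is easy to evaluate numerically. The sketch below is our own illustration (the function name is ours, not from the paper); it computes the group privacy parameters implied by a base $(\epsilon, \delta)$-DP guarantee, using the geometric-series identity $\frac{1-e^{k\epsilon}}{1-e^{\epsilon}}\delta = \sum_{j=0}^{k-1} e^{j\epsilon}\delta$, which is also well defined at $\epsilon = 0$:

```python
import math

def group_dp(eps: float, delta: float, k: int):
    """Lemma 1: an (eps, delta)-DP mechanism is (k*eps, group_delta)-DP
    for groups of size k, where group_delta = (1 - e^{k*eps})/(1 - e^{eps}) * delta.
    We use the equivalent geometric-series form, which avoids the 0/0 at eps = 0."""
    group_eps = k * eps
    group_delta = delta * sum(math.exp(j * eps) for j in range(k))
    return group_eps, group_delta

# k = 1 recovers the base guarantee; larger k pays an exponentially growing delta.
base = group_dp(0.5, 1e-5, 1)
group_of_three = group_dp(0.5, 1e-5, 3)
```

Note how the $\delta$ term grows roughly like $e^{k\epsilon}\delta$, which is why group privacy (and hence the certified radius against $k$ colluding adversaries) degrades quickly with $k$.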
Finally, the server aggregates over the updates from all selected users into the new global model $w_t = w_{t-1} + \frac{1}{m} \sum_{i \in U_t} \Delta w_t^i$.
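As a concrete illustration of this aggregation rule (the function name and array shapes are our own, not part of the paper), one FedAvg server round can be written as:

```python
import numpy as np

def fedavg_round(w_prev: np.ndarray, local_updates: list) -> np.ndarray:
    """Server-side FedAvg step: w_t = w_{t-1} + (1/m) * sum_{i in U_t} dw_i,
    where local_updates holds the m selected users' model updates dw_i."""
    m = len(local_updates)
    return w_prev + sum(local_updates) / m

# Two selected users (m = 2) send their updates; the server averages them.
w0 = np.zeros(3)
updates = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])]
w1 = fedavg_round(w0, updates)
```

In the DPFL variants discussed above, each $\Delta w_t^i$ would additionally be norm-clipped and Gaussian noise would be added before (or during) this averaging step.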
## Update after rebuttal and discussions I thank the authors for taking the time to discuss the issues pointed out in the reviews at length. Unfortunately, I am still not convinced that the paper is ready for publication. My main concerns: 1) There are now experiments in the updated paper claimed to be DP which are not (median clipping). 2) I continue to have doubts about the subsampling amplification. Simply stating that the sampling is random is not good enough, since the key issue is the added uncertainty due to the subsampling: if the sampling does not increase the adversary's uncertainty, there is no amplification. As an immediate remedy, I suggest the authors state the threat model more clearly. 3) I still think the paper can be improved a lot by taking the time to rewrite it focusing on the main contribution of certified robustness under DP and clarity of the presentation. --- The paper looks at the robustness properties of differentially private (DP) federated learning (FL), focusing on learning classification models from labeled data. The main idea is to turn DP privacy guarantees into certifiable robustness properties. The authors look at 2 certifiable properties, namely, certified prediction (data poisoning does not alter most likely label), and certified attack cost (there is a lower bound on the loss the given attack tries to minimize). They continue to show that DP models in general guarantee these on some level that depends on the privacy bounds. The paper also presents several DPFL learning algorithms for user and instance-level DP.
Certified Robustness for Free in Differentially Private Federated Learning
This paper studies differentially private federated learning and its intrinsic robustness against data poisoning attacks. Theoretically, the authors build two definitions for certified robustness against data poisoning attacks and draw the connection to user-level and instance-level differential privacy. The key proof is based on the definitions of individual privacy and group privacy. Empirically, the authors verify the correctness of the bounds by performing real attacks. I think the main contribution is establishing the robustness bound.
Simpler Calibration for Survival Analysis
1 INTRODUCTION . Survival analysis, also known as time-to-event analysis, is the problem of predicting the time of the occurrence of an event. In healthcare applications, the event typically corresponds to a death or the onset of disease in a patient. The time between a well-defined starting point and the occurrence of the event is called the survival time or failure time. In survival analysis, we usually estimate the distribution of the survival times of patients. Survival analysis has important applications in healthcare as well as various other fields (e.g., credit scoring (Dirick et al., 2017) and fraud detection (Zheng et al., 2019)). The recent progress of prediction models for survival analysis has been summarized in a survey paper (Wang et al., 2019). In survival analysis, datasets are often censored, which means that events of interest might not be observed for some instances. This may be due to either the limited observation time window or missing traces caused by other irrelevant events. The most typical form of censored data is right-censored data: data points whose exact event times are unknown; we know only that the events had not happened up to a certain time. In this paper, we focus on uncensored data and right-censored data, as shown in Figure 1. Here, the event for data point x1 is observed during the period of study, and hence this data point is categorized as uncensored. The data points x2 and x3 are categorized as right-censored because we did not observe their events during the period of study. The time between a well-defined starting point and the last observation time (e.g., the time of the end of the study) is called the censoring time. One of the classical methods to solve the survival analysis problem is the Kaplan-Meier estimator (Kaplan & Meier, 1958).
This is a non-parametric method to estimate the distribution of the survival times as a survival function $S(t)$, where the value $S(t^*)$ for a specific time $t^*$ represents the survival rate at time $t^*$ (i.e., the ratio of the patients who survived at time $t^*$). It is easy to estimate the survival function $S(t)$ if the dataset contains only uncensored data points, but the Kaplan-Meier estimator is designed to work for datasets that include censored data. Here, we briefly explain the algorithm of the Kaplan-Meier estimator. Let $\{t_i\}_{i=1}^{k}$ be the set of distinct times when at least one uncensored event was observed in the dataset. Let $d_i$ be the number of (uncensored) events that happened exactly at time $t_i$, and let $n_i$ be the number of data points that are known to have survived at time $t_i$. Then, the Kaplan-Meier estimator outputs the survival function $S(t) = \prod_{i : t_i \le t} \left(1 - \frac{d_i}{n_i}\right)$. Figure 2 shows an example of the survival function $S(t)$ estimated by the Kaplan-Meier estimator for the flchain dataset (Dispenzieri et al., 2012). Here, we can see that the survival rate at time $t = 2500$ is approximately 80%. Note that the true survival rate $S(t)$ at $t = \infty$ must be zero, but the Kaplan-Meier estimator outputs the survival function $S(t)$ only for time $t \in [0, t_{\max}]$, where $t_{\max}$ is the maximum survival time of the uncensored data points in a dataset. This is because we cannot estimate the survival rate $S(t)$ for time $t > t_{\max}$. A drawback of the Kaplan-Meier estimator is that it outputs a survival function $S(t)$ for the entire population and not for a specific patient. Therefore, many algorithms have been proposed to estimate the survival rate $S(t|x)$ for each patient $x$ so as to enable personalized medicine (Wang et al., 2019). In particular, many neural network models that predict the survival function $S(t|x)$ have been proposed (Lee et al., 2018; Ren et al., 2019; Zheng et al., 2019; Tjandra et al., 2021).
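The estimator described above is short enough to implement directly. The following sketch is our own illustrative code (not from the paper), using the $(z_i, \delta_i)$ encoding of censored data introduced later in Section 2:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate S(t) = prod_{i: t_i <= t} (1 - d_i / n_i).
    times: last-observation times z_i; events: 1 if uncensored, 0 if censored.
    Returns (event_times, survival_probs) at the distinct uncensored times."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    ts = np.unique(times[events == 1])            # distinct uncensored event times t_i
    surv, s = [], 1.0
    for t in ts:
        d = np.sum((times == t) & (events == 1))  # d_i: uncensored events exactly at t_i
        n = np.sum(times >= t)                    # n_i: still at risk at t_i
        s *= 1.0 - d / n
        surv.append(s)
    return ts, np.array(surv)
```

For example, with observations at times 1, 2, 2, 3 where the second time-2 point is censored, the survival curve drops to 0.75 after the first event, to 0.5 after the second, and to 0 at $t_{\max} = 3$; the censored point contributes only to the at-risk counts $n_i$, never to $d_i$.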
1.1 CALIBRATION . When we use a prediction model, it should be calibrated. In binary classification, this means that a prediction model that outputs a confidence of the true label for an input is expected to satisfy the condition that the average of the confidence values over all inputs is equal to the ratio of the data points with a true label in the dataset. For example, if a dataset contains 40% of data points with a true label, then we expect that a calibrated prediction model will output a confidence of 0.4 on average. Even though calibration is important for prediction models, and neural network models have been widely used as prediction models, (Guo et al., 2017) showed that neural network models are often miscalibrated. In regression analysis, quantile-based calibration is widely used as the definition of calibration (Kuleshov et al., 2018; Song et al., 2019; Cui et al., 2020; Zhao et al., 2020). In survival analysis, the definition of calibration is based on this definition for regression analysis. Goldstein et al. (2020) proposed a method to train a neural network model to achieve calibration by adding a new regularizer, X-CAL, to the loss function of the neural network during training. This method is in contrast to the widely used calibration methods for regression analysis such as Platt scaling and isotonic regression, which are post-training methods. Note that a calibrated model is not necessarily useful. For example, in binary classification we can construct a trivial calibrated model that always outputs the ratio of true-label data points in a dataset as its confidence value. Therefore, a useful prediction model also needs to be sharp, which means that it outputs a confidence value close to one or zero for each input.
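The average-confidence check described above fits in a few lines. This is an illustrative helper of ours (not from the paper); a gap near zero is necessary for calibration but, as the trivial-model example shows, not sufficient for usefulness:

```python
import numpy as np

def calibration_gap(confidences, labels):
    """Gap between the mean predicted confidence of the true label and the
    empirical positive rate: the coarsest binary-calibration check above."""
    confidences = np.asarray(confidences, dtype=float)
    labels = np.asarray(labels, dtype=int)
    return float(confidences.mean() - labels.mean())

# The trivial model that always predicts the base rate 0.4 has zero gap,
# yet it is not sharp: its confidences are never close to 0 or 1.
gap = calibration_gap([0.4] * 5, [1, 1, 0, 0, 0])
```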
To obtain a prediction model that is both calibrated and sharp, one approach is to train a neural network with a loss function consisting of one loss term aimed at sharp prediction and another aimed at calibration. X-CAL utilizes this approach in that it is a regularizer for calibration and is used in combination with another loss term that aims at sharp prediction. 1.2 OUR CONTRIBUTIONS . In this paper, we propose a Kaplan-Meier regularizer as a simpler alternative to X-CAL. An advantage of our regularizer is that the obtained prediction model is calibrated for any time $t \in [0, t_{\max}]$. Another advantage is that our regularizer does not require any hyperparameter, whereas X-CAL requires hyperparameters. The idea behind our new regularizer is simple: reduce the difference between the average predicted survival function and the Kaplan-Meier survival function (see Figure 3). Our new regularizer is based on a new definition of calibration for survival analysis, and we discuss its advantages in Section 4. 2 SURVIVAL ANALYSIS . We formally describe the problem settings of regression analysis and survival analysis. In regression analysis, we assume that there is an unknown probability distribution $P$ on the sample space $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ is the domain of features and $\mathcal{Y}$ is the interval $[-\infty, \infty]$. We refer to the associated random variables with capital letters (i.e., $X$ and $Y$) and realizations with lowercase letters, $(x, y) \sim P$. We assume that we can obtain independent samples $\{(x_i, y_i)\}_{i=1}^{n}$ from $\mathcal{X} \times \mathcal{Y}$ according to distribution $P$. In survival analysis, we assume that there is an unknown probability distribution $Q$ on the sample space $\mathcal{X} \times \mathcal{T} \times \mathcal{C}$, where $\mathcal{X}$ is the domain of features and $\mathcal{T}$ and $\mathcal{C}$ are time intervals $[0, \infty]$. We refer to the associated random variables with capital letters (i.e., $X$, $T$, and $C$) and realizations with lowercase letters, $(x, t, c) \sim Q$. The random variable $T$ corresponds to the survival time (i.e.
, the time of the occurrence of an event), which might not be observable due to censoring, and the random variable $C$ corresponds to the censoring time. We assume that the censoring time is $\infty$ for an uncensored event. The random variable defined by $Z = \min\{T, C\}$ corresponds to the time of the last observation (i.e., the survival time or the censoring time). Unlike in regression analysis, we cannot obtain samples $\{(x_i, t_i, c_i)\}_{i=1}^{n}$ from $\mathcal{X} \times \mathcal{T} \times \mathcal{C}$ according to distribution $Q$, due to the censoring in survival analysis. However, we assume that we can obtain independent samples $D = \{(x_i, z_i, \delta_i)\}_{i=1}^{n}$ instead, where $\delta_i$ is a binary value indicating whether the $i$-th data point is censored or not. Here an uncensored data point $(x_i, z_i, \delta_i) \in D$ satisfies $\delta_i = 1$ and $z_i = t_i$, and a censored data point $(x_i, z_i, \delta_i) \in D$ satisfies $\delta_i = 0$ and $z_i = c_i$. Hence $\delta_i = 0$ means that we know only the fact that $t_i \ge c_i$ and the exact survival time $t_i$ is unknown. The task of survival analysis is to predict the probability of an event of interest occurring at time $t$ for $x \in \mathcal{X}$ as a probability distribution function $f(t|x)$. This function $f(t|x)$ is often represented in other equivalent forms. For example, it can be represented as its cumulative distribution function (CDF) $F(t|x) = \int_0^t f(\tau|x)\, d\tau$ or as the survival function $S(t|x)$ defined by $S(t|x) = 1 - F(t|x)$. Intuitively, $F(t|x)$ represents the probability of observing the event by time $t$ for $x$, and the survival function $S(t|x)$ represents the probability of not observing the event until time $t$ for $x$. Many neural network models have been proposed for survival analysis (e.g., (Lee et al., 2018; Ren et al., 2019; Zheng et al., 2019; Tjandra et al., 2021)). They output the probability distribution in the form of $f_\theta(t|x)$, $F_\theta(t|x)$, $S_\theta(t|x)$, or something equivalent to these forms, where $\theta$ denotes the parameters of the neural network.
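The encoding of observations described above, i.e., $z = \min\{t, c\}$ with $\delta$ flagging uncensored events, can be sketched as follows (our own illustrative code; the tie case $t = c$ is treated as uncensored here, a convention not specified in the text):

```python
import numpy as np

def observe(t: np.ndarray, c: np.ndarray):
    """Map latent (survival time t, censoring time c) to the observed pair
    (z, delta): z = min(t, c), delta = 1 iff the event is uncensored (t <= c).
    Per the text, c = inf encodes an uncensored event."""
    z = np.minimum(t, c)
    delta = (t <= c).astype(int)
    return z, delta

# First point: event at t=2 with no censoring (c=inf) -> (z=2, delta=1).
# Second point: censored at c=3 before the event at t=5 -> (z=3, delta=0).
z, delta = observe(np.array([2.0, 5.0]), np.array([np.inf, 3.0]))
```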
3 QUANTILE-BASED CALIBRATION . In this section, we review the definitions of calibration for distribution regression and survival analysis in the literature. First, we consider distribution regression, whose task is to predict the distribution of the target variable as a CDF $F_\theta(y|x)$ for $x \in \mathcal{X}$, where $\theta$ denotes the parameters of the prediction model. In distribution regression, quantile-based calibration (Kuleshov et al., 2018) is widely used as the definition of calibration (e.g., (Song et al., 2019; Cui et al., 2020; Zhao et al., 2020)). Definition 3.1 (Quantile-based calibration.) A prediction model $F_\theta(y|x)$ for distribution regression is quantile-calibrated if the following equation holds for any quantile level $\tau \in [0, 1]$: $\Pr_{(X,Y) \sim P}(F_\theta(Y|X) \le \tau) = \tau$. (1) If we can compute the inverse of $F_\theta(y|x)$, we can rewrite Eq. (1) as $\Pr_{(X,Y) \sim P}(Y \le F_\theta^{-1}(\tau|X)) = \tau$. This equation means that, for a random sample $(x, y) \sim P$, $y$ must be at most the $\tau$-th quantile of the predicted CDF (i.e., $F_\theta^{-1}(\tau|x)$) exactly with probability $\tau$. We can rewrite Definition 3.1 in another equivalent formulation: the following equation holds for any subinterval $[\tau_1, \tau_2] \subseteq [0, 1]$: $\Pr_{(X,Y) \sim P}(F_\theta(Y|X) \in [\tau_1, \tau_2]) = \tau_2 - \tau_1$. This equation means that the quantile level $F_\theta(y|x)$ predicted for a random sample $(x, y) \sim P$ is contained in a subinterval $[\tau_1, \tau_2] \subseteq [0, 1]$ exactly with probability $\tau_2 - \tau_1$. On the basis of Definition 3.1, Goldstein et al. (2020) define calibration for survival analysis. Definition 3.2 (Quantile-based calibration for survival analysis.) A prediction model $F_\theta(t|x)$ for survival analysis is quantile-calibrated if the following equation holds for any quantile level $\tau \in [0, 1]$: $\Pr_{(X,T,C) \sim Q}(F_\theta(T|X) \le \tau) = \tau$.
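Eq. (1) suggests a simple empirical diagnostic: on held-out data, the fraction of samples with $F_\theta(y_i|x_i) \le \tau$ should be close to $\tau$ for every level $\tau$. A minimal sketch (our own code, assuming the per-sample CDF values have already been computed by the model):

```python
import numpy as np

def quantile_calibration_error(cdf_values, levels):
    """Empirical check of Eq. (1): for each quantile level tau, compare the
    fraction of samples with F_theta(y|x) <= tau against tau itself.
    cdf_values: F_theta(y_i|x_i) evaluated on held-out samples (x_i, y_i)."""
    cdf_values = np.asarray(cdf_values, dtype=float)
    return np.array([np.mean(cdf_values <= tau) - tau for tau in levels])

# For a perfectly calibrated model, F_theta(Y|X) is uniform on [0, 1];
# an evenly spaced grid of CDF values therefore yields near-zero errors.
errs = quantile_calibration_error((np.arange(10) + 0.5) / 10, [0.1, 0.5, 0.9])
```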
( 2 ) We can rewrite Definition 3.2 into another equivalent formulation : the following equation holds for any subinterval I = [ τ1 , τ2 ] ⊆ [ 0 , 1 ] : Pr ( X , T , C ) ∼Q ( Fθ ( T |X ) ∈ I ) = |I| = τ2 − τ1 . ( 3 ) A problem when using Definition 3.2 is that we can not get samples directly from T due to the censoring . As such , we can not verify Eq . ( 3 ) for datasets that include censored data . Goldstein et al . ( 2020 ) resolved this problem by showing how to estimate the probability Pr ( Fθ ( t|x ) ∈ I ) for any t ∈ [ c , ∞ ] from Pr ( Fθ ( c|x ) ∈ I ) for a randomly sampled censored data point ( x , c , 0 ) . Under the assumption that T and C are conditionally independent given X ( i.e. , T ⊥ C | X ) , they show Pr ( Fθ ( t|x ) ∈ I ) = ( τ2 − v ) 1 [ v ∈ I ] / ( 1 − v ) + ( τ2 − τ1 ) 1 [ v < τ1 ] / ( 1 − v ) , ( 4 ) where v = Fθ ( c|x ) and 1 [ · ] denotes the step ( indicator ) function . By using this estimation , we can compute the left-hand side of Eq . ( 3 ) from a dataset D that includes censored data points . On the basis of Eq . ( 3 ) and the approaches described in ( Andres et al. , 2018 ; Haider et al. , 2020 ) , Goldstein et al . ( 2020 ) proposed a metric called distributional calibration ( D-CAL ) , which is defined as ℓD−CAL ( θ ) = ∑I∈I ( E ( X , T , C ) ∼Q 1 [ Fθ ( T |X ) ∈ I ] − |I| )² , where the collection I is chosen to contain disjoint contiguous subintervals of [ 0 , 1 ] that cover the whole interval [ 0 , 1 ] . A prediction model Fθ ( t|x ) with a lower ℓD−CAL ( θ ) is said to be more calibrated , but we can not construct a neural network model that directly minimizes D-CAL because ℓD−CAL ( θ ) is not a differentiable function due to its step function . Therefore , Goldstein et al . ( 2020 ) defined the explicit calibration ( X-CAL ) , an approximation of D-CAL , by replacing the step function with a sigmoid function so that X-CAL becomes a differentiable function . 
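Restricting attention to uncensored samples (and so skipping the censored-data correction of Eq. (4)), the D-CAL quantity can be estimated by simple binning of the predicted quantile levels; a minimal sketch under the assumption of equal-width bins:

```python
import numpy as np

def d_cal_uncensored(quantile_levels, n_bins=10):
    """Empirical D-CAL with equal-width bins partitioning [0, 1],
    restricted to uncensored samples for brevity (the censored-data
    correction of Eq. (4) is omitted; equal-width bins are an assumption).

    quantile_levels: array of predicted values F_theta(t_i|x_i).
    """
    q = np.asarray(quantile_levels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    loss = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        frac_in_bin = ((q >= lo) & (q < hi)).mean()  # estimate of E 1[F in I]
        loss += (frac_in_bin - (hi - lo)) ** 2       # (... - |I|)^2
    return loss
```

A calibrated model pushes Fθ(T|X) toward the uniform distribution on [0, 1], so its D-CAL estimate approaches zero, while a model concentrating all its quantile levels in one bin scores far higher.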
Moreover , X-CAL is designed to handle a set of data points B = { ( xi , zi , δi ) } bi=1 as a mini-batch , which makes it possible to integrate X-CAL into the loss function of neural network models because most of those models use mini-batch training rather than full-batch training . Formally , X-CAL is defined as RX−CAL ( θ ) = EB∼Q ∑I∈I ( E ( X , T , C ) ∼B ζ ( Fθ ( T |X ) ; I , γ ) − |I| )² , where ζ ( z ; I , γ ) is a sigmoid-based function that approximates the step function in D-CAL . ( See ( Goldstein et al. , 2020 ) for the precise definition of the function ζ . ) Here , we abuse notation ( X , T , C ) ∼ B to indicate that we obtain sample data points from mini-batch B ( rather than the probability distribution Q ) . Note that Goldstein et al . ( 2020 ) proposed using X-CAL in the loss function of a neural network model as a regularizer , which means that it is intended to be combined with other loss functions . This is because a prediction model that aims only at calibration is useless ( as discussed in Section 1 ) and the balance between sharpness and calibration must be considered for obtaining a useful prediction model .
This paper first proposes a new definition of calibration which, in contrast to the one used in prior work, is confined to the maximum observed time in the data. The authors then propose a Kaplan-Meier (KM) regularizer to ensure that predicted survival curves are calibrated, in the sense that they stay close to KM curves. They also claim that their regularization, unlike prior work, does not require any hyperparameters and avoids the binning approach.
SP:5165c93ec3eefa99a32d00be8d59dd6894bb1d87
Simpler Calibration for Survival Analysis
1 INTRODUCTION . Survival analysis , also known as time-to-event analysis , is the problem of predicting the time of the occurrence of an event . In healthcare applications , the event typically corresponds to a death or the onset of disease in a patient . The time between a well-defined starting point and the occurrence of the event is called the survival time or failure time . In survival analysis , we usually estimate the distribution of the survival times of patients . Survival analysis has important applications in healthcare as well as various other fields ( e.g. , credit scoring ( Dirick et al. , 2017 ) and fraud detection ( Zheng et al. , 2019 ) ) . The recent progress of prediction models for survival analysis has been summarized in a survey paper ( Wang et al. , 2019 ) . In survival analysis , datasets are often censored , which means that events of interest might not be observed for some instances . This may be due to either the limited observation time window or missing traces caused by other irrelevant events . The typical form of censored data is right censored data . These are the data points whose exact times of the events are unknown ; we know only that the events had not happened up to a certain time . In this paper , we focus on the uncensored data and the right censored data , as shown in Figure 1 . Here , the event for data point x1 is observed during the period of study and hence this data point is categorized as uncensored data . The data points x2 and x3 are categorized as right censored data because we did not observe the events during the period of study . The time between a well-defined starting point and the last observation time ( e.g. , the time of the end of study ) is called the censoring time . One of the classical methods to solve the survival analysis problem is the Kaplan-Meier estimator ( Kaplan & Meier , 1958 ) . 
This is a non-parametric method to estimate the distribution of the survival times as a survival function S ( t ) , where the value S ( t∗ ) for a specific time t∗ represents the survival rate at time t∗ ( i.e. , the ratio of the patients who survived at time t∗ ) . It is easy to estimate the survival function S ( t ) if the dataset contains only uncensored data points , but the Kaplan-Meier estimator is designed to work for datasets that include censored data . Here , we briefly explain the algorithm of the Kaplan-Meier estimator . Let { ti } ki=1 be the set of distinct times when at least one uncensored event was observed in the dataset . Let di be the number of ( uncensored ) events that happened exactly at time ti , and let ni be the number of data points that are known to have survived at time ti . Then , the Kaplan-Meier estimator outputs the survival function S ( t ) = ∏i : ti≤t ( 1 − di / ni ) . Figure 2 shows an example of the survival function S ( t ) estimated by the Kaplan-Meier estimator for the flchain dataset ( Dispenzieri et al. , 2012 ) . Here , we can see that the survival rate at time t = 2500 is approximately 80 % . Note that the true survival rate S ( t ) at t = ∞ must be zero , but the Kaplan-Meier estimator outputs the survival function S ( t ) only for time t ∈ [ 0 , tmax ] , where tmax is the maximum survival time of the uncensored data points in a dataset . This is because we can not estimate the survival rate S ( t ) for the time t > tmax . A drawback of the Kaplan-Meier estimator is that it outputs a survival function S ( t ) for the entire population and not for a specific patient . Therefore , there have been many algorithms to estimate the survival rate S ( t|x ) for each patient x so as to enable personalized medicine ( Wang et al. , 2019 ) . In particular , many neural network models that predict the survival function S ( t|x ) have been proposed ( Lee et al. , 2018 ; Ren et al. , 2019 ; Zheng et al. , 2019 ; Tjandra et al. , 2021 ) . 
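The product-limit computation described above can be written down directly; a minimal sketch (not the paper's code):

```python
import numpy as np

def kaplan_meier(z, delta):
    """Kaplan-Meier estimator S(t) = prod_{i: t_i <= t} (1 - d_i / n_i).

    z: last-observation times, delta: 1 for uncensored events, 0 for
    censored ones. Returns the distinct uncensored event times t_i and
    the estimated survival function evaluated at each of them.
    """
    z = np.asarray(z, dtype=float)
    delta = np.asarray(delta, dtype=int)
    event_times = np.unique(z[delta == 1])
    s, surv = 1.0, []
    for t in event_times:
        d = np.sum((z == t) & (delta == 1))  # d_i: uncensored events exactly at t
        n = np.sum(z >= t)                   # n_i: known to be at risk at t
        s *= 1.0 - d / n
        surv.append(s)
    return event_times, np.array(surv)
```

For example, with z = [1, 2, 3] and delta = [1, 0, 1], the event at t = 1 has d = 1 and n = 3 (so S = 2/3), and the event at t = 3 has d = 1 and n = 1 (so S = 0); the censored point at t = 2 only shrinks the risk set.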
1.1 CALIBRATION . When we use a prediction model , it should be calibrated . In binary classification , this means that a prediction model that outputs a confidence of a true label for an input is expected to satisfy the condition that the average of the confidence values over all inputs is equal to the ratio of the data points with a true label in the dataset . For example , if a dataset contains 40 % of the data points with a true label , then we expect that a calibrated prediction model will output a confidence of 0.4 on average . Even though calibration is important for prediction models , and neural network models have been widely used for prediction models , Guo et al . ( 2017 ) showed that neural network models are often miscalibrated . In regression analysis , quantile-based calibration is widely used as the definition of calibration ( Kuleshov et al. , 2018 ; Song et al. , 2019 ; Cui et al. , 2020 ; Zhao et al. , 2020 ) . In survival analysis , the definition of calibration is based on this definition for regression analysis . Goldstein et al . ( 2020 ) showed a method to train a neural network model to achieve calibration by adding a new regularizer , X-CAL , to the loss function of the neural network to achieve calibration during the training . This method is in contrast to the widely-used calibration methods for regression analysis such as Platt scaling and isotonic regression , which are post-training methods . Note that a calibrated model is not necessarily useful . For example , in binary classification we can construct a trivial calibrated model that always outputs the ratio of true label data points in a dataset as a confidence value . Therefore , a useful prediction model also needs to be sharp , which means that it outputs a confidence value close to one or zero for each input . 
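The average-confidence condition for binary classification described above amounts to a one-line check; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def binary_calibration_gap(confidences, labels):
    """For a calibrated binary classifier, the mean predicted confidence
    of the true label equals the ratio of positive data points; this
    returns the absolute gap between the two."""
    confidences = np.asarray(confidences, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return abs(confidences.mean() - labels.mean())

# 40% positives and a constant confidence of 0.4 give a gap of zero,
# matching the trivially calibrated (but not sharp) model from the text.
gap = binary_calibration_gap([0.4] * 10, [1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
```

This also illustrates why calibration alone is not enough: the constant-0.4 model has zero gap yet no sharpness.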
To obtain a prediction model that is both calibrated and sharp , one approach is to train a neural network with a loss function consisting of one loss term aimed at sharp prediction and another aimed at calibration . X-CAL utilizes this approach in that it is a regularizer for calibration and is used in combination with another loss term that aims at sharp prediction . 1.2 OUR CONTRIBUTIONS . In this paper , we propose a Kaplan-Meier regularizer as a simpler alternative to X-CAL . An advantage of our regularizer is that the obtained prediction model is calibrated for any time t ∈ [ 0 , tmax ] . Another advantage is that our regularizer does not require any hyperparameter , whereas X-CAL requires hyperparameters . The idea behind our new regularizer is simple : just to reduce the difference between the average predicted survival function and the Kaplan-Meier survival function ( see Figure 3 ) . Our new regularizer is based on a new definition of calibration for survival analysis , and we discuss its advantages in Section 4 . 2 SURVIVAL ANALYSIS . We formally describe the problem settings of regression analysis and survival analysis . In regression analysis , we assume that there is an unknown probability distribution P on the sample space X ×Y , where X is the domain of features and Y is the interval [ −∞ , ∞ ] . We refer to the associated random variables with capital letters ( i.e. , X and Y ) and realizations with lower case letters ( x , y ) ∼ P . We assume that we can obtain independent samples { ( xi , yi ) } ni=1 from X ×Y according to distribution P . In survival analysis , we assume that there is an unknown probability distribution Q on the sample space X × T × C , where X is the domain of features and T and C are time intervals [ 0 , ∞ ] . We refer to the associated random variables with capital letters ( i.e. , X , T , and C ) and realizations with lower case letters ( x , t , c ) ∼ Q . The random variable T corresponds to the survival time ( i.e. 
, the time of the occurrence of an event ) , which might not be observable due to the censoring , and the random variable C corresponds to the censoring time . We assume that the censoring time is ∞ for an uncensored event . The random variable defined by Z = min { T , C } corresponds to the time of the last observation ( i.e. , the survival time or the censoring time ) . Different from the regression analysis , we can not obtain samples { ( xi , ti , ci ) } ni=1 from X × T × C according to distribution Q due to the censoring in survival analysis . However , we assume that we can obtain independent samples D = { ( xi , zi , δi ) } ni=1 instead , where δi is a binary value indicating if the i-th data point is censored or not . Here an uncensored data point ( xi , zi , δi ) ∈ D satisfies δi = 1 and zi = ti , and a censored data point ( xi , zi , δi ) ∈ D satisfies δi = 0 and zi = ci . Hence δi = 0 means that we know only the fact that ti ≥ ci and the exact survival time ti is unknown . The task of survival analysis is to predict the probability of an event of interest occurring at time t for x ∈ X as a probability distribution function f ( t|x ) . This function f ( t|x ) is often represented in other equivalent forms . For example , it can be represented as its cumulative distribution function ( CDF ) F ( t|x ) = ∫₀ᵗ f ( τ |x ) dτ or as the survival function S ( t|x ) defined by S ( t|x ) = 1 − F ( t|x ) . Intuitively , F ( t|x ) represents the probability of observing the event by time t for x , and the survival function S ( t|x ) represents the probability of not observing the event until time t for x . Many neural network models have been proposed for survival analysis ( e.g. , Lee et al. , 2018 ; Ren et al. , 2019 ; Zheng et al. , 2019 ; Tjandra et al. , 2021 ) . They output the probability distribution in the form of fθ ( t|x ) , Fθ ( t|x ) , Sθ ( t|x ) , or something equivalent to these forms , where θ is the parameters of the neural network . 
3 QUANTILE-BASED CALIBRATION . In this section , we review the definitions of calibration for distribution regression and survival analysis in the literature . First , we consider distribution regression , whose task is to predict the distribution of the target variable as a CDF Fθ ( y|x ) for x ∈ X , where θ is the parameters of the prediction model . In distribution regression , quantile-based calibration ( Kuleshov et al. , 2018 ) is widely used as the definition of calibration ( e.g. , ( Song et al. , 2019 ; Cui et al. , 2020 ; Zhao et al. , 2020 ) ) . Definition 3.1 ( Quantile-based calibration . ) A prediction model Fθ ( y|x ) for distribution regression is quantile-calibrated if this equation holds for any quantile level τ ∈ [ 0 , 1 ] : Pr ( X , Y ) ∼P ( Fθ ( Y |X ) ≤ τ ) = τ . ( 1 ) If we can compute the inverse of Fθ ( y|x ) , we can rewrite Eq . ( 1 ) as Pr ( X , Y ) ∼P ( Y ≤ Fθ−1 ( τ |X ) ) = τ . This equation means that , for a random sample ( x , y ) ∼ P , y must be at most the τ -th quantile of the predicted CDF ( i.e. , Fθ−1 ( τ |x ) ) exactly with probability τ . We can rewrite Definition 3.1 in another equivalent formulation : the following equation holds for any subinterval [ τ1 , τ2 ] ⊆ [ 0 , 1 ] : Pr ( X , Y ) ∼P ( Fθ ( Y |X ) ∈ [ τ1 , τ2 ] ) = τ2 − τ1 . This equation means that the quantile level Fθ ( y|x ) predicted for a random sample ( x , y ) ∼ P is contained in a subinterval [ τ1 , τ2 ] ⊆ [ 0 , 1 ] exactly with probability τ2 − τ1 . On the basis of Definition 3.1 , Goldstein et al . ( 2020 ) define calibration for survival analysis . Definition 3.2 ( Quantile-based calibration for survival analysis . ) A prediction model Fθ ( t|x ) for survival analysis is quantile-calibrated if this equation holds for any quantile level τ ∈ [ 0 , 1 ] : Pr ( X , T , C ) ∼Q ( Fθ ( T |X ) ≤ τ ) = τ . 
( 2 ) We can rewrite Definition 3.2 into another equivalent formulation : the following equation holds for any subinterval I = [ τ1 , τ2 ] ⊆ [ 0 , 1 ] : Pr ( X , T , C ) ∼Q ( Fθ ( T |X ) ∈ I ) = |I| = τ2 − τ1 . ( 3 ) A problem when using Definition 3.2 is that we can not get samples directly from T due to the censoring . As such , we can not verify Eq . ( 3 ) for datasets that include censored data . Goldstein et al . ( 2020 ) resolved this problem by showing how to estimate the probability Pr ( Fθ ( t|x ) ∈ I ) for any t ∈ [ c , ∞ ] from Pr ( Fθ ( c|x ) ∈ I ) for a randomly sampled censored data point ( x , c , 0 ) . Under the assumption that T and C are conditionally independent given X ( i.e. , T ⊥ C | X ) , they show Pr ( Fθ ( t|x ) ∈ I ) = ( τ2 − v ) 1 [ v ∈ I ] / ( 1 − v ) + ( τ2 − τ1 ) 1 [ v < τ1 ] / ( 1 − v ) , ( 4 ) where v = Fθ ( c|x ) and 1 [ · ] denotes the step ( indicator ) function . By using this estimation , we can compute the left-hand side of Eq . ( 3 ) from a dataset D that includes censored data points . On the basis of Eq . ( 3 ) and the approaches described in ( Andres et al. , 2018 ; Haider et al. , 2020 ) , Goldstein et al . ( 2020 ) proposed a metric called distributional calibration ( D-CAL ) , which is defined as ℓD−CAL ( θ ) = ∑I∈I ( E ( X , T , C ) ∼Q 1 [ Fθ ( T |X ) ∈ I ] − |I| )² , where the collection I is chosen to contain disjoint contiguous subintervals of [ 0 , 1 ] that cover the whole interval [ 0 , 1 ] . A prediction model Fθ ( t|x ) with a lower ℓD−CAL ( θ ) is said to be more calibrated , but we can not construct a neural network model that directly minimizes D-CAL because ℓD−CAL ( θ ) is not a differentiable function due to its step function . Therefore , Goldstein et al . ( 2020 ) defined the explicit calibration ( X-CAL ) , an approximation of D-CAL , by replacing the step function with a sigmoid function so that X-CAL becomes a differentiable function . 
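Setting the censored-data correction aside, the quantile condition of Definitions 3.1 and 3.2 can be checked empirically on held-out predictions; a minimal sketch (the function name and the uniform sanity check are assumptions for illustration):

```python
import numpy as np

def quantile_calibration_errors(quantile_levels, taus):
    """For each quantile level tau, the fraction of held-out samples with
    F_theta(y_i|x_i) <= tau should equal tau; this returns the absolute
    deviations at each requested tau.

    quantile_levels: array of F_theta(y_i|x_i) on held-out pairs (x_i, y_i).
    """
    q = np.asarray(quantile_levels, dtype=float)
    taus = np.asarray(taus, dtype=float)
    empirical = np.array([(q <= tau).mean() for tau in taus])
    return np.abs(empirical - taus)

# A calibrated model produces uniformly distributed quantile levels,
# so the errors shrink toward zero as the sample grows.
rng = np.random.default_rng(0)
errors = quantile_calibration_errors(rng.uniform(size=100_000),
                                     np.linspace(0.1, 0.9, 9))
```

A badly miscalibrated model is easy to detect this way: if every predicted quantile level is 0, the deviation at tau = 0.5 is exactly 0.5.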
Moreover , X-CAL is designed to handle a set of data points B = { ( xi , zi , δi ) } bi=1 as a mini-batch , which makes it possible to integrate X-CAL into the loss function of neural network models because most of those models use mini-batch training rather than full-batch training . Formally , X-CAL is defined as RX−CAL ( θ ) = EB∼Q ∑I∈I ( E ( X , T , C ) ∼B ζ ( Fθ ( T |X ) ; I , γ ) − |I| )² , where ζ ( z ; I , γ ) is a sigmoid-based function that approximates the step function in D-CAL . ( See ( Goldstein et al. , 2020 ) for the precise definition of the function ζ . ) Here , we abuse notation ( X , T , C ) ∼ B to indicate that we obtain sample data points from mini-batch B ( rather than the probability distribution Q ) . Note that Goldstein et al . ( 2020 ) proposed using X-CAL in the loss function of a neural network model as a regularizer , which means that it is intended to be combined with other loss functions . This is because a prediction model that aims only at calibration is useless ( as discussed in Section 1 ) and the balance between sharpness and calibration must be considered for obtaining a useful prediction model .
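A soft binned loss in the spirit of X-CAL can be sketched by replacing each interval indicator with a product of sigmoids; note that Goldstein et al. (2020) define the exact ζ, so the form below is an illustrative assumption, and censored points are again omitted for brevity:

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def soft_in_interval(u, lo, hi, gamma):
    """Smooth surrogate for the indicator 1[u in [lo, hi)] built from two
    sigmoids; it tends to the hard indicator as gamma grows. This
    product-of-sigmoids form is an assumption, not the paper's zeta."""
    return expit(gamma * (u - lo)) * expit(gamma * (hi - u))

def x_cal_minibatch(quantile_levels, n_bins=10, gamma=1e4):
    """X-CAL-style differentiable loss on one mini-batch of predicted
    quantile levels F_theta(t_i|x_i) (uncensored points only)."""
    q = np.asarray(quantile_levels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    loss = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        loss += (soft_in_interval(q, lo, hi, gamma).mean() - (hi - lo)) ** 2
    return loss
```

With a moderate γ the bin memberships stay smooth (useful for gradient-based training); as γ → ∞ the loss recovers the hard binned D-CAL estimate.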
In this work, the authors propose a novel approach for learning calibrated predictions for survival analysis and similar tasks. The intuition of the approach is that the average probability that a prediction has a value less than or equal to $t$ should approximately equal the fraction of observations with value less than or equal to $t$. The authors then derive a differentiable regularization term using this idea and Kaplan-Meier curves. A small set of empirical evaluations suggests the proposed approach modestly outperforms another recent survival analysis calibration method according to some metrics.
SP:5165c93ec3eefa99a32d00be8d59dd6894bb1d87
Generating High-Fidelity Privacy-Conscious Synthetic Patient Data for Causal Effect Estimation with Multiple Treatments
1 INTRODUCTION . In health care , studying the causal effect of treatments on patients is critical to advance personalized medicine . Observing an association between a drug ( exposure or treatment ) and subsequent adverse or beneficial event ( outcome ) is not enough to claim the treatment is indeed the cause of the observed outcome due to the existence of confounding variables , defined as factors that affect both the treatments and outcomes . Randomized clinical trials ( RCTs ) have been the gold standard for estimating causal relationships between intervention and outcome . However , RCTs are sometimes not feasible due to logistical , ethical , or financial considerations . Further , randomized experiments may not always be generalizable , due to the restricted population used in the experiments . In the past decade , observational data has become a viable alternative to RCTs to infer causal treatment effects due to both the increasingly available patient data captured in Electronic Health Records ( EHRs ) ( Henry et al . ( 2016 ) ) and the remarkable advances of machine learning techniques and capabilities . Typically , EHRs capture potential confounding factors such as race , gender , geographic location , eventual proxies of social determinants of health , as well as medical characteristics such as comorbidities and laboratory results . Many causal inference models have been proposed to estimate treatment effects from observational data . Validation of these models with realistic benchmarks , however , remains a fundamental challenge for three reasons . First , the ground truth of treatment effects in a realistic setting is unknown . 
In the real world , we can not compute the treatment effect by directly comparing the potential outcomes of different treatments because of the fundamental problem of causal inference : for a given patient and treatment , we can only observe the factual , defined as the patient outcome for the given treatment , but not the counterfactual , defined as the patient outcome if the treatment had been different . Second , legal and ethical issues around un-consented patient data and privacy have created a significant barrier to routine access to EHRs by the machine learning community . In order to mitigate the legal and ethical risks of sharing sensitive information , de-identification of patient records is a commonly used practice . However , previous work has shown that de-identification is not sufficient for avoiding re-identification through linkage with other identifiable datasets ( Sweeney ( 1997 ) ; Emam et al . ( 2011 ) ; Malin & Sweeney ( 2004 ) ) . Third , most publicly available datasets support either binary or very few treatments , while there has been a growing literature developing techniques with multiple treatments in recent years ( Lopez & Gutman ( 2017 ) ) . To address these challenges , in this work we generated a large-scale and realistic patient dataset that mimics real patient data distributions , supports multiple treatments , and provides ground truth for the effects of these treatments . The datasets we generated contain synthetic patients with hypertension modeled on the histories of diagnoses , medications , and laboratory values of a massive nationwide patient cohort . We designed a data generation process by adapting an Anonymization Through Data Synthesis Using Generative Adversarial Networks ( ADS-GAN by Yoon et al . ( 2020 ) ) model for fictitious patient information generation and using a neural network for treatment outcome generation . The synthetic dataset demonstrates strong similarity to the original dataset as measured by the Wasserstein distance . 
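One way to quantify a similarity claim like the one above is to average one-dimensional Wasserstein distances over variables; a sketch (the paper does not specify this exact aggregation, so per-column averaging over marginals is an assumption):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def mean_marginal_wasserstein(real, synthetic):
    """Average 1-D Wasserstein distance between real and synthetic
    marginals, one simple fidelity score in the spirit of the paper's
    similarity check; rows are samples, columns are patient variables."""
    real = np.asarray(real, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    return float(np.mean([wasserstein_distance(real[:, j], synthetic[:, j])
                          for j in range(real.shape[1])]))
```

A score of zero means identical empirical marginals; shifting every synthetic column by a constant c raises the score to exactly |c|, which makes the units directly interpretable in the original feature scale.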
In addition , we ensure that the original patients ’ privacy is preserved so that our dataset can be made available to the research community to evaluate causal inference models . We demonstrated the use of the synthetic data by applying our dataset to evaluate five models : one inverse probability treatment weighting ( IPTW ) model , one propensity matching model , one propensity score stratification model all introduced by Rosenbaum & Rubin ( 1983 ) , and two models in the doubly robust family ( Foster & Syrgkanis ( 2020 ) ) . To our knowledge , this is the first large scale dataset that mimics real data joint distributions with multiple treatments and known causal effects . The approach we used can be readily extended to other types of diseases in the clinical domain , and to datasets in other domains as well . The rest of this paper is organized as follows . In Section 2 , we discuss related works . The details of our method are presented in Section 4 . In Section 5 , we discuss the evaluation metrics for the quality of the data and results of evaluating several established causal inference models with the data . We discuss the limitations of the work in Section 6 and conclude the paper in Section 7 . 2 RELATED WORK . Our work is related to several existing works on publicly available databases , fictitious patient record creations , and data generation processes . 2.1 PUBLICLY AVAILABLE DATASETS . First used in ( Hill ( 2011 ) ) , the Infant Health and Development Program ( IHDP ) is a randomized controlled study designed to evaluate the effect of home visit from specialist doctors on the cognitive test scores of premature infants . It contains 747 subjects and 25 variables that describe the characteristics of the infants and their mothers . The Jobs dataset by LaLonde ( 1986 ) is a benchmark used by the causal inference community , where the treatment is job training and the outcomes are income and employment status after training . 
The Twins dataset , originally used for evaluating causal inference in ( Louizos et al . ( 2017 ) ; Yao et al . ( 2018 ) ) , consists of samples from twin births in the U.S. between the years 1989 and 1991 provided in ( Almond et al . ( 2005 ) ) . The Annual Atlantic Causal Inference Conference ( ACIC ) data challenge provides an opportunity to compare causal inference methodologies across a variety of data generating processes . Some of the above-mentioned datasets do not provide true causal effects . Others are small in size , so the models validated on such datasets may perform very differently in a more general real-world setting . All the datasets created for the ACIC challenge were designed specifically for competitions . The covariates in the data are either drawn from publicly available databases , or simulated . In the former case , the datasets are limited by small populations ( Du et al . ( 2021 ) ) and arbitrarily designed data generation processes , which did not aim to capture any real-world causal relationships ( Karavani et al . ( 2019 ) ) . In the latter case , the distribution of the data is not realistic , i.e. , dissimilar to the distribution of any real dataset . 2.2 EHR DATA GENERATION . Walonoski et al . ( 2018 ) generated synthetic EHRs based on publicly available information . The focus of their work is on generating the life cycle of a patient and how a disease evolves over time . Goncalves et al . ( 2020 ) evaluated three synthetic data generation models ( probabilistic models , classification-based imputation models , and generative adversarial neural networks ) in generating realistic EHR data . Tucker et al . ( 2020 ) used a Bayesian network model to generate synthetic data based on the Clinical Practice Research Datalink ( CPRD ) in the UK . Benaim et al . ( 2020 ) evaluated synthetic data produced from 5 contemporary studies using MDClone . Wang et al . 
( 2021 ) proposed a framework to generate and evaluate synthetic health care data and identified the key requirements of synthetic data for multiple purposes . All of these works focus on EHR data generation producing patient variables but without ground truth for causal effects . In contrast , the focus of our work is not only on generating patient variables , but producing ground truth for causal effects as well . 2.3 POTENTIAL OUTCOME GENERATION . To validate their models , many researchers such as Schuler & Rose ( 2017 ) created synthetic covariates and produced potential outcomes with a designed data generation process . Such datasets are not designed to approximate any real data distributions . Franklin et al . ( 2014 ) ; Neal et al . ( 2020 ) generated potential outcomes from covariates with known causal effects , but without any regard to patient privacy . We addressed the critical issue of patient privacy so our data can be made available to the research community to evaluate their models . 3 PATIENT CLAIM DATA AND INCLUSION EXCLUSION CRITERIA . To make our synthetic data realistic , we generate the data based on a real-world patient claim database from a large insurance company in the United States . This database contains 5 billion insurance claims ( diagnoses , procedures , and drug prescriptions or refills ) and lab test results from 56.4 million patients who subscribed to the company ' s service within a 5-year time period between December 2014 and December 2020 . From this database , we extracted a subset of patients affected by hypertension . We chose hypertension because there are a large number of related claims , making it easier to learn the data distribution . In addition , since it is a condition affecting nearly half of adults in the United States ( 116 million , or 47 % ) , our generated dataset can be directly used for clinical researchers to develop and evaluate their models for this important disease . 
Patients are included in the dataset if they have a medical claim indicating hypertension ( ICD code I10 , I11.9 , I12.9 , and I13.10 ) or currently treated with anti-hypertensive medications . We exclude patients from the dataset if they are age < 18 or age > 85 , affected by white coat hypertension , secondary hypertension , malignant cancers , dementia , or are pregnant . After applying the above mentioned inclusion and exclusion criteria , we have about 1.6 million patients included in this study . We further exclude patients treated with a combination of drugs rather than a single drug . We then rank the drugs by the number of patients treated with each drug , and only keep patients treated with one of the 28 most popular drugs . These filtering steps produce about 262 , 000 patients in the study . The distribution of this dataset is then learned and used to generate synthetic patients , viewed as samples drawn from the learned distribution . The patients ’ diagnoses and treatment history and how their conditions evolve over time are captured by trajectory data consisting of labs , diagnoses and their corresponding dates . For the convenience of data processing and analysis , we convert the trajectory data into tabular data with rows representing different patients ( samples ) and columns representing patient features ( variables ) including patient demographics , diagnoses , medications and labs . In Table . 
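The inclusion/exclusion cascade above can be expressed as a sequence of boolean masks over a claims-derived table; a sketch in which every column name is an illustrative assumption rather than the paper's schema:

```python
import pandas as pd

# ICD codes for hypertension listed in the text.
HTN_CODES = {"I10", "I11.9", "I12.9", "I13.10"}

def apply_cohort_filters(df):
    """Sketch of the inclusion/exclusion logic described above. Column
    names (icd_code, on_antihypertensive, age, excluded_condition,
    n_current_drugs) are assumptions, not the paper's actual schema."""
    included = df["icd_code"].isin(HTN_CODES) | df["on_antihypertensive"]
    adult = df["age"].between(18, 85)          # drop age < 18 or age > 85
    clean = ~df["excluded_condition"]          # pregnancy, cancers, dementia, ...
    monotherapy = df["n_current_drugs"] == 1   # drop drug combinations
    return df[included & adult & clean & monotherapy]
```

In the paper this cascade takes the cohort from roughly 1.6 million hypertension patients down to about 262,000 monotherapy patients on the 28 most popular drugs.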
1 , we list and briefly describe these 60 patient variables : 2 variables ( F1 ) describing the systolic blood pressure before the treatment and the date it was measured , 2 variables ( F2 ) describing the same metric but after the treatment , 3 variables ( F3 ) indicating current and prior drug usage and refill information , 4 variables ( F4 ) describing patient basic information ( age , gender , ethnicity ) , 30 variables ( F5 ) indicating laboratory measurements , 2 variables ( F6 ) indicating the presence or absence of comorbid conditions defined by the Charlson Comorbidity Index ( Charlson et al . ( 1987 ) ) , 15 variables ( F7 ) describing the patient ’ s zip code , the racial makeup and income levels in the patient ’ s zip code tabulation area ( ZCTA ) , 2 variables ( F8 ) indicating meta information . In this study , we are interested in the causal effects of anti-hypertensive drugs ( current drugs of F3 ) on patient outcomes , measured as the difference between the first ( F1 ) and second lab results ( F2 ) .
The authors propose a method for using real-world patient data to generate (semi-)synthetic privacy-preserving data on which to evaluate methods for causal effect estimation. For this purpose, they adapt the ADS-GAN (Anonymization through Data Synthesis using Generative Adversarial Networks) model introduced by Yoon et al. (2020) to produce a synthetic dataset that is identical in distribution to the original dataset, yet cannot be used to identify any of the original patients. To ensure the former desideratum, the authors calculate the Wasserstein distance between $P_{\hat{X}}$ and $P_X$; to ensure the latter, they use $\epsilon$-identifiability (Yoon et al., 2020). The authors show empirically that the synthetic dataset satisfies these desiderata, and then evaluate a few causal effect estimators on the synthetic data using the generated ground truth.
SP:355d95d502cd5d5de8ce41d9792253ee06454986
Generating High-Fidelity Privacy-Conscious Synthetic Patient Data for Causal Effect Estimation with Multiple Treatments
1 INTRODUCTION . In health care , studying the causal effect of treatments on patients is critical to advance personalized medicine . Observing an association between a drug ( exposure or treatment ) and subsequent adverse or beneficial event ( outcome ) is not enough to claim the treatment is indeed the cause of the observed outcome due to the existence of confounding variables , defined as factors that affect both the treatments and outcomes . Randomized clinical trials ( RCTs ) have been the gold standard for estimating causal relationships between intervention and outcome . However , RCTs are sometimes not feasible due to logistical , ethical , or financial considerations . Further , randomized experiments may not always be generalizable , due to the restricted population used in the experiments . In the past decade , observational data has become a viable alternative to RCTs to infer causal treatment effects due to both the increasingly available patient data captured in Electronic Health Records ( EHRs ) ( Henry et al . ( 2016 ) ) and the remarkable advances of machine learning techniques and capabilities . Typically , EHRs capture potential confounding factors such as race , gender , geographic location , eventual proxies of social determinants of health , as well as medical characteristics such as comorbidities and laboratory results . Many causal inference models have been proposed to estimate treatment effects from observational data . Validation of these models with realistic benchmarks , however , remains a fundamental challenge for three reasons . First , the ground truth of treatment effects in a realistic setting is unknown . 
In the real world, we cannot compute the treatment effect by directly comparing the potential outcomes of different treatments because of the fundamental problem of causal inference: for a given patient and treatment, we can only observe the factual, defined as the patient outcome for the given treatment, but not the counterfactual, defined as the patient outcome if the treatment had been different. Second, legal and ethical issues around un-consented patient data and privacy have created a significant barrier to routine access to EHRs by the machine learning community. In order to mitigate the legal and ethical risks of sharing sensitive information, de-identification of patient records is a commonly used practice. However, previous work has shown that de-identification is not sufficient for avoiding re-identification through linkage with other identifiable datasets (Sweeney (1997); Emam et al. (2011); Malin & Sweeney (2004)). Third, most publicly available datasets support either binary or very few treatments, while there has been a growing literature developing techniques for multiple treatments in recent years (Lopez & Gutman (2017)). To address these challenges, in this work we generated a large-scale and realistic patient dataset that mimics real patient data distributions, supports multiple treatments, and provides ground truth for the effects of these treatments. The dataset we generated consists of synthetic patients with hypertension, modeled on a massive nationwide cohort of patients' history of diagnoses, medications, and laboratory values. We designed a data generation process by adapting an Anonymization Through Data Synthesis Using Generative Adversarial Networks (ADS-GAN, by Yoon et al. (2020)) model for fictitious patient information generation and using a neural network for treatment outcome generation. The synthetic dataset demonstrates strong similarity to the original dataset as measured by the Wasserstein distance.
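To make the fidelity check concrete: in one dimension, the empirical Wasserstein-1 distance between two equal-size samples is the mean absolute difference of their sorted values. The following is a minimal numpy sketch of such a per-feature comparison, not the paper's actual multivariate evaluation; all variable names are illustrative.

```python
import numpy as np

def wasserstein_1d(x, y):
    # Empirical 1-D Wasserstein-1 distance between two equal-size samples:
    # the mean absolute difference between sorted values (in 1-D, the
    # optimal transport coupling matches quantiles).
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    return float(np.mean(np.abs(x - y)))

rng = np.random.default_rng(0)
original = rng.normal(0.0, 1.0, 10_000)   # stand-in for a real lab value
faithful = rng.normal(0.0, 1.0, 10_000)   # generator matched the distribution
shifted = rng.normal(2.0, 1.0, 10_000)    # generator missed the mean

d_good = wasserstein_1d(original, faithful)   # near 0 for a faithful generator
d_bad = wasserstein_1d(original, shifted)     # near 2 for the shifted one
```

A small distance per feature is necessary but not sufficient for the paper's goal, since joint-distribution structure must also be preserved.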
In addition, we ensure that the original patients' privacy is preserved so that our dataset can be made available to the research community to evaluate causal inference models. We demonstrated the use of the synthetic data by applying our dataset to evaluate five models: one inverse probability of treatment weighting (IPTW) model, one propensity matching model, and one propensity score stratification model, all introduced by Rosenbaum & Rubin (1983), and two models in the doubly robust family (Foster & Syrgkanis (2020)). To our knowledge, this is the first large-scale dataset that mimics real data joint distributions with multiple treatments and known causal effects. The approach we used can be readily extended to other types of diseases in the clinical domain, and to datasets in other domains as well. The rest of this paper is organized as follows. In Section 2, we discuss related work. The details of our method are presented in Section 4. In Section 5, we discuss the evaluation metrics for the quality of the data and the results of evaluating several established causal inference models with the data. We discuss the limitations of the work in Section 6 and conclude the paper in Section 7. 2 RELATED WORK. Our work is related to several existing works on publicly available databases, fictitious patient record creation, and data generation processes. 2.1 PUBLICLY AVAILABLE DATASETS. First used in (Hill (2011)), the Infant Health and Development Program (IHDP) is a randomized controlled study designed to evaluate the effect of home visits from specialist doctors on the cognitive test scores of premature infants. It contains 747 subjects and 25 variables that describe the characteristics of the infants and their mothers. The Jobs dataset by LaLonde (1986) is a benchmark used by the causal inference community, where the treatment is job training and the outcomes are income and employment status after training.
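Of the evaluated estimators, IPTW is the simplest to sketch. Below is a minimal, self-contained illustration for a binary treatment (the paper handles multiple treatments); the logistic propensity model, the simulated data, and all names are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def iptw_ate(x, t, y):
    # Fit a logistic propensity model e(x) = P(t=1 | x) by full-batch
    # gradient ascent (a stand-in for any calibrated classifier).
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        g = t - p                      # gradient of the log-likelihood
        w += 0.1 * (x.T @ g) / len(t)
        b += 0.1 * g.mean()
    p = np.clip(1.0 / (1.0 + np.exp(-(x @ w + b))), 1e-3, 1.0 - 1e-3)
    # Self-normalized (Hajek) IPTW estimate of the average treatment effect.
    mu1 = np.sum(t * y / p) / np.sum(t / p)
    mu0 = np.sum((1 - t) * y / (1 - p)) / np.sum((1 - t) / (1 - p))
    return mu1 - mu0

rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 2))
# Confounded assignment: x[:, 0] drives both treatment and outcome.
t = (rng.uniform(size=5000) < 1.0 / (1.0 + np.exp(-x[:, 0]))).astype(float)
y = 2.0 * t + x[:, 0] + rng.normal(scale=0.5, size=5000)  # true ATE = 2

ate = iptw_ate(x, t, y)                       # ~2.0 after adjustment
naive = y[t == 1].mean() - y[t == 0].mean()   # biased upward by confounding
```

The contrast between `ate` and `naive` is exactly what a benchmark with known ground truth makes measurable.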
The Twins dataset, originally used for evaluating causal inference in (Louizos et al. (2017); Yao et al. (2018)), consists of samples from twin births in the U.S. between the years 1989 and 1991, provided in (Almond et al. (2005)). The Annual Atlantic Causal Inference Conference (ACIC) data challenge provides an opportunity to compare causal inference methodologies across a variety of data generating processes. Some of the above-mentioned datasets do not provide true causal effects. Others are small in size, so models validated on such datasets may perform very differently in a more general real-world setting. All the datasets created for the ACIC challenge were designed specifically for competitions. The covariates in the data are either drawn from publicly available databases or simulated. In the former case, the datasets are limited by small populations (Du et al. (2021)) and arbitrarily designed data generation processes, which did not aim to capture any real-world causal relationships (Karavani et al. (2019)). In the latter case, the distribution of the data is not realistic, i.e., dissimilar to the distribution of any real dataset. 2.2 EHR DATA GENERATION. Walonoski et al. (2018) generated synthetic EHRs based on publicly available information. The focus of their work is on generating the life cycle of a patient and how a disease evolves over time. Goncalves et al. (2020) evaluated three synthetic data generation models (probabilistic models, classification-based imputation models, and generative adversarial neural networks) in generating realistic EHR data. Tucker et al. (2020) used a Bayesian network model to generate synthetic data based on the Clinical Practice Research Datalink (CPRD) in the UK. Benaim et al. (2020) evaluated synthetic data produced from 5 contemporary studies using MDClone. Wang et al.
(2021) proposed a framework to generate and evaluate synthetic health care data, and the key requirements of synthetic data for multiple purposes. All of these works focus on EHR data generation producing patient variables, but without ground truth for causal effects. In contrast, the focus of our work is not only on generating patient variables, but on producing ground truth for causal effects as well. 2.3 POTENTIAL OUTCOME GENERATION. To validate their models, many researchers, such as (Schuler & Rose (2017)), created synthetic covariates and produced potential outcomes with a designed data generation process. Such datasets are not designed to approximate any real data distributions. Franklin et al. (2014); Neal et al. (2020) generated potential outcomes from covariates with known causal effects, but without any regard to patient privacy. We addressed the critical issue of patient privacy so our data can be made available for the research community to evaluate their models. 3 PATIENT CLAIM DATA AND INCLUSION EXCLUSION CRITERIA. To make our synthetic data realistic, we generate the data based on a real-world patient claim database from a large insurance company in the United States. This database contains 5 billion insurance claims (diagnoses, procedures, and drug prescriptions or refills) and lab test results from 56.4 million patients who subscribed to the company's service within a 5-year time period between December 2014 and December 2020. From this database, we extracted a subset of patients affected by hypertension. We chose hypertension because there are a large number of related claims, making it easier to learn the data distribution. In addition, since it is a condition affecting nearly half of adults in the United States (116 million, or 47%), our generated dataset can be directly used by clinical researchers to develop and evaluate their models for this important disease.
Patients are included in the dataset if they have a medical claim indicating hypertension (ICD codes I10, I11.9, I12.9, and I13.10) or are currently treated with anti-hypertensive medications. We exclude patients from the dataset if they are aged < 18 or > 85, affected by white coat hypertension, secondary hypertension, malignant cancers, or dementia, or are pregnant. After applying the above-mentioned inclusion and exclusion criteria, we have about 1.6 million patients included in this study. We further exclude patients treated with a combination of drugs rather than a single drug. We then rank the drugs by the number of patients treated with each drug, and only keep patients treated with one of the 28 most popular drugs. These filtering steps produce about 262,000 patients in the study. The distribution of this dataset is then learned and used to generate synthetic patients, viewed as samples drawn from the learned distribution. The patients' diagnoses and treatment history and how their conditions evolve over time are captured by trajectory data consisting of labs, diagnoses, and their corresponding dates. For the convenience of data processing and analysis, we convert the trajectory data into tabular data with rows representing different patients (samples) and columns representing patient features (variables), including patient demographics, diagnoses, medications, and labs. In Table 1, we list and briefly describe these 60 patient variables: 2 variables (F1) describing the systolic blood pressure before the treatment and the date it was measured, 2 variables (F2) describing the same metric but after the treatment, 3 variables (F3) indicating current and prior drug usage and refill information, 4 variables (F4) describing patient basic information (age, gender, ethnicity), 30 variables (F5) indicating laboratory measurements, 2 variables (F6) indicating the presence or absence of comorbid conditions defined by the Charlson Comorbidity Index (Charlson et al. (1987)), 15 variables (F7) describing the patient's zip code and the racial makeup and income levels in the patient's zip code tabulation area (ZCTA), and 2 variables (F8) indicating meta information. In this study, we are interested in the causal effects of anti-hypertensive drugs (current drugs of F3) on patient outcomes, measured as the difference between the first (F1) and second lab results (F2).
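The outcome definition above can be sketched on a toy table. The records and drug names below are hypothetical stand-ins for the paper's F1 (pre-treatment systolic BP), F2 (post-treatment systolic BP), and F3 (current drug) variable groups; the paper's actual 28 drugs and 60 variables are not reproduced here.

```python
from collections import defaultdict

# Hypothetical mini-table of patients.
patients = [
    {"id": 1, "sbp_pre": 152.0, "sbp_post": 131.0, "drug": "lisinopril"},
    {"id": 2, "sbp_pre": 148.0, "sbp_post": 140.0, "drug": "amlodipine"},
    {"id": 3, "sbp_pre": 160.0, "sbp_post": 135.0, "drug": "lisinopril"},
]

# Observed outcome per patient: change in systolic blood pressure
# (second lab result minus first), as defined in the text.
for p in patients:
    p["outcome"] = p["sbp_post"] - p["sbp_pre"]

# Mean observed outcome per drug -- a descriptive statistic only; causal
# effect estimation must still adjust for confounding.
by_drug = defaultdict(list)
for p in patients:
    by_drug[p["drug"]].append(p["outcome"])
mean_outcome = {d: sum(v) / len(v) for d, v in by_drug.items()}
```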
The paper studies the problem of generating synthetic patient data for the evaluation of causal inference models. The generated patient data is expected to highly mimic the distribution of the original dataset while also taking patient privacy into consideration. Experiments are conducted on the synthetic dataset with three well-established causal inference models.
SP:355d95d502cd5d5de8ce41d9792253ee06454986
Adaptive Activation-based Structured Pruning
1 INTRODUCTION. Deep neural networks (DNNs) have substantial compute and memory requirements. As deep learning becomes pervasive and moves towards edge devices, DNN deployment becomes harder because of the mismatch between resource-hungry DNNs and resource-constrained edge devices. DNN pruning is a promising approach (Li et al. (2016); Han et al. (2015); Molchanov et al. (2016); Theis et al. (2018); Renda et al. (2020)), which identifies the parameters (or weight elements) that do not contribute significantly to the accuracy and prunes them from the network. Recently, works based on the Lottery Ticket Hypothesis (LTH) have achieved great success in creating smaller and more accurate models through iterative pruning with rewinding (Frankle & Carbin (2018)). However, LTH has only been shown to work successfully with unstructured pruning, which unfortunately leads to models with irregular sparsity that are difficult to accelerate on commodity hardware such as CPUs and GPUs (e.g., Hill et al. (2017) show that directly applying NVIDIA cuSPARSE on unstructured pruned models can lead to a 60× slowdown on GPU compared to dense kernels). Moreover, most pruning methods require users to explore and adjust multiple hyper-parameters; e.g., with LTH-based iterative pruning, users need to determine how many parameters to prune in each round. Tuning the pruning process is time consuming and often leads to sub-optimal results. We propose activation-based, adaptive, iterative structured pruning to find "winning ticket" models that are at the same time hardware-efficient and that automatically meet the users' model accuracy, size, and speed requirements. First, we propose an activation-based structured pruning method to identify and remove unimportant filters in an LTH-based iterative pruning (with rewinding) process.
Specifically, we define an attention mapping function that takes the 2D activation feature map of a filter as input and outputs a 1D value used to indicate the importance of the filter. This approach is more effective than weight-value-based filter pruning because activation-based attention values not only capture the features of inputs but also contain the information of convolution layers that act as feature detectors for prediction tasks. We then integrate this attention-based method into the LTH-based iterative pruning framework to prune the filters in each round and find the winning ticket that is small, accurate, and hardware-efficient. Second, we propose adaptive pruning that automatically optimizes the pruning process according to different user objectives. For latency-sensitive scenarios like interactive virtual assistants, we propose FLOPs-guaranteed pruning to achieve the best accuracy given a maximum amount of compute FLOPs; for memory-limited environments like embedded systems, we propose model-size-guaranteed pruning to achieve the best accuracy given a maximum memory footprint; for accuracy-critical applications such as those on self-driving cars, we propose accuracy-guaranteed pruning to create the most resource-efficient model given the acceptable accuracy loss. Aiming for different targets, our method adaptively controls the pruning aggressiveness by adjusting the global threshold used to prune filters. Moreover, it considers the difference in each layer's contribution to the model's size and computational complexity and uses a per-layer threshold, calculated by dividing each layer's remaining parameters or FLOPs by the entire model's remaining parameters or FLOPs, to prune each layer with a differentiated level of aggressiveness. Our results outperform the related works significantly in all cases targeting accuracy loss, parameter reduction, and FLOPs reduction.
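One plausible reading of the per-layer threshold rule is sketched below: each layer's share of the model's remaining parameters weights the global threshold, so layers that dominate the model's size are pruned more aggressively. The exact combination with the global threshold is our assumption, and the layer names and counts are hypothetical.

```python
def per_layer_thresholds(global_threshold, remaining_params):
    # Scale the global pruning threshold by each layer's share of the
    # model's remaining parameters (the same idea applies to FLOPs).
    total = sum(remaining_params.values())
    return {name: global_threshold * n / total
            for name, n in remaining_params.items()}

# Hypothetical layer sizes: shares are 0.1, 0.3, and 0.6 of the model.
remaining = {"conv1": 1_000, "conv2": 3_000, "fc": 6_000}
th = per_layer_thresholds(0.02, remaining)
```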
For example, on ResNet-56 with the CIFAR-10 dataset, without accuracy drop, our method achieves the largest parameter reduction (79.11%), outperforming the related works by 22.81% to 66.07%, and the largest FLOPs reduction (70.13%), outperforming the related works by 14.13% to 26.53%. In addition, our method enables a pruned model to reach 0.6% or 1.08% higher accuracy than the original model with only 30% or 50% of the original's parameters. On ResNet-50 on ImageNet, for the same level of parameter and FLOPs reduction, our method achieves the smallest accuracy loss, lower than the related works by 0.08% to 3.21%; and for the same level of accuracy loss, our method reduces significantly more parameters (6.45% to 29.61% more than related works) and more FLOPs (0.82% to 17.2% more than related works). 2 BACKGROUND AND RELATED WORKS. Unstructured vs. Structured Pruning. Unstructured pruning (LeCun et al. (1990); Han et al. (2015); Molchanov et al. (2017)) is a fine-grained approach that prunes individual unimportant elements in weight tensors. It has less impact on model accuracy compared to structured pruning, but unstructured pruned models are hard to accelerate on commodity hardware. Structured pruning is a coarse-grained approach that prunes entire regular regions of weight tensors according to rule-based heuristics, such as the L1-norm (Li et al. (2016)), the average percentage of zeros (Molchanov et al. (2016)), and other information considering the relationship between neighboring layers (Theis et al. (2018); Lee et al. (2018)). Compared to unstructured pruning, it is more difficult to prune a model without causing accuracy loss using structured pruning, because by removing entire regions, it might remove weight elements that are important to the final accuracy (Li et al. (2016)).
However, structured pruned models can be mapped easily to general-purpose hardware and accelerated directly with off-the-shelf hardware and libraries (He et al. (2018b)). One-Shot vs. Iterative Pruning. One-shot pruning prunes a pre-trained model and then retrains it once, whereas iterative pruning prunes and retrains the model in multiple rounds. Both techniques can use either structured or unstructured pruning. Recently, works based on the Lottery Ticket Hypothesis (LTH) have achieved great success in creating smaller and more accurate models through iterative pruning with rewinding (Frankle & Carbin (2018)). LTH posits that a dense randomly initialized network has a sub-network, termed a winning ticket, which can achieve an accuracy comparable to the original network. At the beginning of each pruning round, it rewinds the weights and/or learning rate of the sub-network to some early epoch of the training phase of the original model, to reduce the distance between the sub-network and the original model and increase the chance of finding the winning ticket. However, most LTH-based works considered only unstructured pruning, e.g., Iterative Magnitude Pruning (IMP) (Frankle & Carbin (2018); Frankle et al. (2019)), which, as discussed above, is hardware-inefficient. It is non-trivial to design an iterative pruning method with structured pruning. To understand the state of iterative structured pruning, we experimented with IMP's structured pruning counterpart, L1-norm structured pruning (ILP) (Li et al. (2016)), which removes entire filters depending on their L1-norm value. We observed that ILP cannot effectively prune a model while maintaining its accuracy; e.g., ILP can prune ResNet-50 by at most 11.5% of parameters when the maximum accuracy loss is limited to 1% on ImageNet.
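The ILP baseline just described scores each filter by the L1 norm of its weights and removes the lowest-scoring filters. A minimal numpy sketch of that criterion (shapes and the toy weights are illustrative, not the original implementation):

```python
import numpy as np

def l1_filter_scores(conv_weight):
    # L1-norm structured pruning criterion (Li et al., 2016): score each
    # output filter by the L1 norm of its weights.
    # conv_weight shape: (out_channels, in_channels, k, k)
    return np.abs(conv_weight).sum(axis=(1, 2, 3))

rng = np.random.default_rng(4)
w = rng.normal(size=(8, 3, 3, 3))  # toy conv layer with 8 filters
w[5] *= 0.01                       # one low-magnitude filter

scores = l1_filter_scores(w)
prune = scores.argsort()[:2]       # remove the 2 lowest-norm filters
```

Note the criterion looks only at weight magnitudes; the paper's argument is precisely that this can mis-rank filters whose small weights still produce large activations.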
Therefore, directly applying iterative pruning with existing weight-magnitude-based structured pruning methods does not produce accurate pruned models. In this paper, we study how to produce small, accurate, and hardware-efficient models based on iterative structured pruning with rewinding. Automatic Pruning. For pruning to be useful in practice, it is important to automatically meet the pruning objectives for diverse ML applications and devices. Most pruning methods require users to explore and adjust multiple hyper-parameters; e.g., with LTH-based iterative pruning, users need to determine how many parameters to prune in each round. Tuning the pruning process is time consuming and often leads to sub-optimal results. Therefore, we study how to automatically adapt the pruning process in order to meet the pruning objectives without user intervention. Some works (Zoph et al. (2018); Cai et al.; Ashok et al. (2017)) use reinforcement-learning algorithms to find computationally efficient architectures. These works are in the area of Neural Architecture Search (NAS), among which the most recent is AutoML for Model Compression (AMC) (He et al. (2018b)). AMC enables the model to arrive at the target speedup by limiting the action space (the sparsity ratio for each layer), and it finds the limit of compression that offers no loss of accuracy by tweaking the reward function. But this method has to explore a large search space of all available layer-wise sparsities, which is time consuming when neural networks are large and datasets are complicated. 3 METHODOLOGY. Algorithm 1 illustrates the overall flow of the proposed adaptive activation-based structured pruning. To represent pruning of weights, we use a mask $M^r \in \{0, 1\}^d$ for each weight tensor $W^r_t \in \mathbb{R}^d$, where $r$ is the pruning round number and $t$ is the training epoch.
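The mask formulation just introduced (a binary mask $M$ applied element-wise to a weight tensor $W$) can be sketched directly; the shapes and the toy magnitude criterion below are illustrative only.

```python
import numpy as np

# Minimal sketch of masked pruning: a binary mask M (0 = pruned weight)
# is applied element-wise to the weight tensor W.
rng = np.random.default_rng(2)
W = rng.normal(size=(4, 4))
M = (np.abs(W) > 0.5).astype(W.dtype)   # toy criterion: keep |w| > 0.5

W_pruned = M * W                         # element-wise product M ⊙ W
sparsity = 1.0 - M.mean()                # fraction of weights pruned
```

Storing the mask separately is what lets iterative methods rewind the surviving weights to an early-epoch snapshot while keeping the pruning decision fixed.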
Therefore, the pruned network at the end of training epoch $T$ is represented by the element-wise product $M^r \odot W^r_T$. The first three steps are to train the original model to completion, while saving the weights at epoch $k$. Steps 4–10 represent a pruning round. Step 4 prunes the model (discussed in Section 3.1). Step 5 (optional) and Step 6 perform rewinding. Step 7 retrains the pruned model for the remaining $T - k$ epochs. Step 8 evaluates the pruned model according to the pruning target. If the target is not met, Step 9 resets the weights to an earlier round. Step 10 calculates the pruning threshold for the next pruning round following the adaptive pruning policy (discussed in Section 3.2). 3.1 ACTIVATION-BASED FILTER PRUNING. To realize iterative structured pruning, one can start with a state-of-the-art structured pruning method and apply it iteratively. The widely used L1-norm based structured pruning (Renda et al. (2020)) removes filters with the lowest L1-norm values, whereas the most recent polarization-based structured pruning (Zhuang et al. (2020)) improves it by using a regularizer on the scaling factors of filters and pruning filters whose scaling factors are below a threshold. These two methods both assume that the weight values of a filter can be used as an indicator of the importance of that filter, much like how LTH uses weight values in unstructured pruning. However, we observe that weight-based structured pruning methods cannot produce accurate pruned models. For example, to prune ResNet-56 on CIFAR-10 with no loss in top-1 accuracy, L1-norm based structured pruning can achieve at most only 1.15× model compression, and polarization-based structured pruning can achieve at most only 1.89× inference speedup.
The reason is that some filters, even though their weight values are small, can still produce useful non-zero activation values that are important for learning features during backpropagation. That is, filters with small weight values may have large activations. We propose that the activation values of filters are more effective in finding unimportant filters to prune. Activations like ReLU enable non-linear operations and allow convolutional layers to act as feature detectors. If an activation value is small, then its corresponding feature detector is not important for prediction tasks. So activation values, i.e., the intermediate output tensors after the non-linear activation, not only detect features of the training dataset, but also contain the information of convolution layers that act as feature detectors for prediction tasks. We present a visual motivation in Figure 1(a). The figure shows the activation outputs of 16 filters of a convolution layer on one input image. The first image on the left is the original image, and the second image is the input features after data augmentation. We observe that some filters extract image features with high activation patterns, e.g., the 6th and 12th filters. In comparison, the activation outputs of some filters are close to zero, such as the 2nd, 14th, and 16th. Therefore, from visual inspection, removing filters with weak activation patterns is likely to have low impact on the final accuracy of the pruned model. There is a natural connection between our activation-based pruning approach and the related attention-based knowledge transfer works (Zagoruyko & Komodakis (2016)). Attention is a mapping function used to calculate the statistics of each element in the activations over the channel dimension.
By minimizing the difference between the activation-based attention maps from intermediate layers of a teacher model and a student model, attention transfer enables the student to imitate the behaviour of the teacher. Our proposed pruning method builds upon the same theory, namely that activation-based attention is a good indicator of a filter's ability to capture features, and it addresses the new challenges in using activations to guide automatic structured pruning. In the following, we describe how to prune filters based on their activation feature maps in each round of the iterative structured pruning process. Figure 1(b) shows the inputs and outputs of a 2D convolution layer (referred to as conv2d), followed by an activation layer. For the $i$th conv2d layer, let $X_i \in \mathbb{R}^{n_{i-1} \times h_{i-1} \times w_{i-1}}$ denote the input features, and $F_{i,j} \in \mathbb{R}^{n_{i-1} \times k_i \times k_i}$ be the $j$th filter, where $h_{i-1}$ and $w_{i-1}$ are the height and width of the input features, respectively, $n_{i-1}$ is the number of input channels, $n_i$ is the number of output channels, and $k_i$ is the kernel size of the filter. The activation of the $j$th filter $F_{i,j}$ after ReLU mapping is therefore denoted by $A^i_j \in \mathbb{R}^{h_i \times w_i}$. The attention mapping function takes a 2D activation $A^i_j$ of filter $F_{i,j}$ as input and outputs a 1D value which will be used as an indicator of the importance of the filter. We consider three forms of activation-based attention maps, where $p \ge 1$ and $a_{k,l}$ denotes an element of $A^i_j$: 1) mean of attention values raised to the power of $p$: $F_{mean}(A^i_j) = \frac{1}{h_i \times w_i} \sum_{k=1}^{h_i} \sum_{l=1}^{w_i} |a_{k,l}|^p$; 2) max of attention values raised to the power of $p$: $F_{max}(A^i_j) = \max_{k,l} |a_{k,l}|^p$; and 3) sum of attention values raised to the power of $p$: $F_{sum}(A^i_j) = \sum_{k=1}^{h_i} \sum_{l=1}^{w_i} |a_{k,l}|^p$. We choose $F_{mean}(A^i_j)$ with $p$ equal to 1 as the indicator to identify and prune unimportant filters, and our method removes the filters whose attention value is lower than the pruning threshold. See Section 5 for an ablation study on these choices.

Algorithm 2: Accuracy-guaranteed Adaptive Pruning
1: Input: target accuracy loss AccLossTarget
2: Output: a small pruned model with an acceptable accuracy
3: Initialize: T = 0.0, λ = 0.005
4: for pruning round r (r ≥ 1) do
5:   Prune the model using T[r] (refer to Algorithm 3)
6:   Train the pruned model, evaluate its accuracy Acc[r]
7:   Calculate the accuracy loss AccLoss[r] = Acc[0] − Acc[r]
8:   if AccLoss[r] < AccLossTarget then
9:     if the changes of model size are within 0.1% for several rounds then
10:      Terminate
11:    else
12:      λ[r+1] = λ[r]
13:      T[r+1] = T[r] + λ[r+1]
14:    end if
15:  else
16:    Find the last acceptable round k
17:    if k has been used to roll back several times then
18:      Mark k as unacceptable
19:      Go to Step 15
20:    else
21:      Roll back model weights to round k
22:      λ[r+1] = λ[r] / (2.0 (N+1))  (N is the number of times of rolling back to round k)
23:      T[r+1] = T[k] + λ[r+1]
24:    end if
25:  end if
26: end for
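The three attention mapping functions above can be sketched in a few lines. The toy activations, the 25% pruning quantile, and all names below are illustrative, not the authors' implementation.

```python
import numpy as np

def attention_scores(acts, p=1, mode="mean"):
    # acts: post-ReLU activations, shape (filters, H, W).
    # Returns one importance value per filter, following the F_mean,
    # F_max, and F_sum attention mapping functions described in the text.
    a = np.abs(acts) ** p
    if mode == "mean":
        return a.mean(axis=(1, 2))
    if mode == "max":
        return a.max(axis=(1, 2))
    return a.sum(axis=(1, 2))  # mode == "sum"

rng = np.random.default_rng(3)
acts = np.maximum(rng.normal(size=(8, 6, 6)), 0.0)  # toy ReLU outputs
acts[2] *= 0.01                                     # a weak feature detector

scores = attention_scores(acts, p=1, mode="mean")   # the paper's choice
keep = scores >= np.quantile(scores, 0.25)          # prune the weakest filters
```

With `p=1` and `mode="mean"`, the near-zero filter receives the lowest score and falls below the threshold, matching the intuition from Figure 1(a).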
This paper proposes an iterative structured pruning method using activation-based attention feature maps and an adaptive threshold selection strategy. Inspired by attention transfer, activation-based attention feature maps are used to evaluate the importance of the filters in each layer. The adaptive threshold selection strategy decides the number of removed filters, satisfying one of three adaptive pruning policies. Experimental results on CIFAR-10 and ImageNet with ResNet architectures show performance gains over several state-of-the-art methods.
SP:075aa882a64acdb6d7c9486c235ef657b7afb104
Adaptive Activation-based Structured Pruning
1 INTRODUCTION . Deep neural networks ( DNNs ) have substantial compute and memory requirements . As deep learning becomes pervasive and moves towards edge devices , DNN deployment becomes harder because of the mistmatch between resource-hungry DNNs and resource-constrained edge devices . DNN pruning is a promising approach ( Li et al . ( 2016 ) ; Han et al . ( 2015 ) ; Molchanov et al . ( 2016 ) ; Theis et al . ( 2018 ) ; Renda et al . ( 2020 ) ) , which identifies the parameters ( or weight elements ) that do not contribute significantly to the accuracy and prunes them from the network . Recently , works based on the Lottery Ticket Hypothesis ( LTH ) have achieved great successes in creating smaller and more accurate models through iterative pruning with rewinding ( Frankle & Carbin ( 2018 ) ) . However , LTH has only been shown to work successfully with unstructured pruning which , unfortunately leads to models with low sparsity and difficult to accelerate on commodity hardware such as CPUs and GPUs ( e.g. , Hill et al . ( 2017 ) shows directly applying NVDIA cuSPARSE on unstructured pruned models can lead to 60ˆ slowdown on GPU compared to dense kernels . ) Moreover , most pruning methods require users to explore and adjust multiple hyper-parameters , e.g. , with LTH-based iterative pruning , users need to determine how many parameters to prune in each round . Tuning the pruning process is time consuming and often leads to sub-optimal results . We propose activation-based , adaptive , iterative structured pruning to find the “ winning ticket ” models that are at the same time hardware efficient and to automatically meet the users ’ model accuracy , size , and speed requirements . First , we propose an activation-based structured pruning method to identify and remove unimportant filters in an LTH-based iterative pruning ( with rewinding ) process . 
Specifically , we properly define an attention mapping function that takes a 2D activation feature maps of a filter as input , and outputs a 1D value used to indicate the importance of the filter . This approach is more effective than weight-value based filter pruning because activationbased attention values not only capture the features of inputs but also contain the information of convolution layers that act as feature detectors for prediction tasks . We then integrate this attention- based method into the LTH-based iterative pruning framework to prune the filters in each round and find the winning ticket that is small , accurate , and hardware-efficient . Second , we propose adaptive pruning that automatically optimizes the pruning process according to different user objectives . For latency-sensitive scenarios like interactive virtual assistants , we propose FLOPs-guaranteed pruning to achieve the best accuracy given the maximum amount of compute FLOPs ; For memory-limited environments like embedded systems , we propose modelsize-guaranteed pruning to achieve the best accuracy given the maximum amount of memory footprint ; For accuracy-critical applications such as those on self-driving cars , we propose accuracyguaranteed pruning to create the most resource-efficient model given the acceptable accuracy loss . Aiming for different targets , our method adaptively controls the pruning aggressiveness by adjusting the global threshold used to prune filters . Moreover , it considers the difference in each layer ’ s contributions to the model ’ s size and computational complexity and uses a per-layer threshold , calculated by dividing each layer ’ s remaining parameters or FLOPs by the entire model ’ s remaining parameters or FLOPs , to prune each layer with differentiated level of aggressiveness . Our results outperform the related works significantly in all cases targeting accuracy loss , parameters reduction , and FLOPs reduction . 
For example , on ResNet-56 with CIFAR-10 dataset , without accuracy drop , our method achieves the largest parameter reduction ( 79.11 % ) , outperforming the related works by 22.81 % to 66.07 % , and the largest FLOPs reduction ( 70.13 % ) , outperforming the related works by 14.13 % to 26.53 % . In addition , our method enables a pruned model the reach 0.6 % or 1.08 % higher accuracy than the original model but with only 30 % or 50 % of the original ’ s parameters . On ResNet-50 on ImageNet , for the same level of parameters and FLOPs reduction , our method achieves the smallest accuracy loss , lower than the related works by 0.08 % to 3.21 % ; and for the same level of accuracy loss , our method reduces significantly more parameters ( 6.45 % to 29.61 % higher than related works ) and more FLOPs ( 0.82 % to 17.2 % higher than related works ) . 2 BACKGROUND AND RELATED WORKS . Unstructured vs . Structured Pruning . Unstructured pruning ( LeCun et al . ( 1990 ) ; Han et al . ( 2015 ) ; Molchanov et al . ( 2017 ) ) is a fine-grained approach that prunes individual unimportant elements in weight tensors . It has less impact to model accuracy , compared to structured pruning , but unstructured pruned models are hard to accelerate on commodity hardware . Structured pruning is a coarse-grained approach that prunes entire regular regions of weight tensors according to some rule-based heuristics , such as L1-norm ( Li et al . ( 2016 ) ) , average percentage of zero ( Molchanov et al . ( 2016 ) ) , and other information considering the relationship between neighboring layers ( Theis et al . ( 2018 ) ; Lee et al . ( 2018 ) ) . Compared to unstructured pruning , it is more difficult to prune a model without causing accuracy loss using structured pruning , because by removing entire regions , it might remove weight elements that are important to the final accuracy ( Li et al . ( 2016 ) ) . 
However , structured pruned models can be mapped easily to general-purpose hardware and accelerated directly with off-the-shelf hardware and libraries ( He et al . ( 2018b ) ) . One-Shot vs . Iterative Pruning . One-shot pruning prunes a pre-trained model and then retrains it once , whereas iterative pruning prunes and retrains the model in multiple rounds . Both techniques can use either structured or unstructured pruning . Recently , works based on the Lottery Ticket Hypothesis ( LTH ) have achieved great successes in creating smaller and more accurate models through iterative pruning with rewinding ( Frankle & Carbin ( 2018 ) ) . LTH posits that a dense randomly initialized network contains a sub-network , termed a winning ticket , which can achieve an accuracy comparable to the original network . At the beginning of each pruning round , it rewinds the weights and/or learning rate of the sub-network to some early epoch of the training phase of the original model , to reduce the distance between the sub-network and the original model and increase the chance of finding the winning ticket . However , most LTH-based works considered only unstructured pruning , e.g. , Iterative Magnitude Pruning ( IMP ) ( Frankle & Carbin ( 2018 ) ; Frankle et al . ( 2019 ) ) , which , as discussed above , is hardware-inefficient . It is non-trivial to design an iterative pruning method with structured pruning . To understand the state of iterative structured pruning , we experimented with IMP ’ s structured pruning counterpart , L1-norm structured pruning ( ILP ) ( Li et al . ( 2016 ) ) , which removes entire filters depending on their L1-norm values . We observed that ILP can not effectively prune a model while maintaining its accuracy ; e.g. , ILP can prune ResNet-50 by at most 11.5 % of parameters when the maximum accuracy loss is limited to 1 % on ImageNet . ( Algorithm 1 : Adaptive Iterative Structured Pruning Algorithm . ) 
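The L1-norm criterion used by ILP can be sketched in a few lines; this is a minimal illustration (not the paper's implementation): rank convolution filters by the L1 norm of their weights and keep only the highest-ranked fraction.

```python
import numpy as np

# Minimal sketch of L1-norm structured (filter) pruning: score each filter
# by the L1 norm of its weights, then keep the top fraction of filters.
def l1_prune_filters(weights, keep_ratio):
    """weights: (n_filters, in_ch, k, k). Returns indices of filters to keep."""
    scores = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    return np.sort(np.argsort(scores)[-n_keep:])

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))   # 8 filters, 3 input channels, 3x3 kernels
print(l1_prune_filters(w, 0.5))     # indices of the 4 largest-L1 filters
```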
Therefore , directly applying iterative pruning with existing weight-magnitude based structured pruning methods does not produce accurate pruned models . In this paper , we study how to produce small , accurate , and hardware-efficient models based on iterative structured pruning with rewinding . Automatic Pruning . For pruning to be useful in practice , it is important to automatically meet the pruning objectives of diverse ML applications and devices . Most pruning methods require users to explore and adjust multiple hyper-parameters ; e.g. , with LTH-based iterative pruning , users need to determine how many parameters to prune in each round . Tuning the pruning process is time-consuming and often leads to sub-optimal results . Therefore , we study how to automatically adapt the pruning process in order to meet the pruning objectives without user intervention . Some works ( Zoph et al . ( 2018 ) ; Cai et al . ; Ashok et al . ( 2017 ) ) use reinforcement-learning algorithms to find computationally efficient architectures . These works are in the area of Neural Architecture Search ( NAS ) , among which the most recent is AutoML for Model Compression ( AMC ) ( He et al . ( 2018b ) ) . AMC enables the model to arrive at the target speedup by limiting the action space ( the sparsity ratio for each layer ) , and it finds the limit of compression that causes no loss of accuracy by tweaking the reward function . But this method has to explore a large search space of all available layer-wise sparsities , which is time-consuming when neural networks are large and datasets are complicated . 3 METHODOLOGY . Algorithm 1 illustrates the overall flow of the proposed adaptive activation-based structured pruning . To represent the pruning of weights , we use a mask M^r ∈ { 0 , 1 }^d for each weight tensor W^r_t ∈ R^d , where r is the pruning round number and t is the training epoch . 
Therefore , the pruned network at the end of training epoch T is represented by the element-wise product M^r ⊙ W^r_T . The first three steps are to train the original model to completion , while saving the weights at epoch k . Steps 4–10 represent a pruning round . Step 4 prunes the model ( discussed in Section 3.1 ) . Steps 5 ( optional ) and 6 perform rewinding . Step 7 retrains the pruned model for the remaining T − k epochs . Step 8 evaluates the pruned model according to the pruning target . If the target is not met , Step 9 resets the weights to an earlier round . Step 10 calculates the pruning threshold for the next pruning round following the adaptive pruning policy ( discussed in Section 3.2 ) . 3.1 ACTIVATION-BASED FILTER PRUNING . To realize iterative structured pruning , one can start with state-of-the-art structured pruning methods and apply them iteratively . The widely used L1-norm based structured pruning ( Renda et al . ( 2020 ) ) removes filters with the lowest L1-norm values , whereas the most recent polarization-based structured pruning ( Zhuang et al . ( 2020 ) ) improves on it by applying a regularizer to the scaling factors of filters and pruning the filters whose scaling factors fall below a threshold . Both methods assume that the weight values of a filter can be used as an indicator of the importance of that filter , much like how LTH uses weight values in unstructured pruning . However , we observe that weight-based structured pruning methods can not produce accurate pruned models . For example , to prune ResNet-56 on CIFAR-10 with no loss in top-1 accuracy , L1-norm based structured pruning achieves at most 1.15× model compression , and polarization-based structured pruning achieves at most 1.89× inference speedup . 
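The mask formulation above (pruned network = M^r ⊙ W^r_T, with the mask shrinking each round) can be illustrated with a toy example. For simplicity this sketch uses weight magnitude to update the mask; the paper's method selects filters by activation-based attention instead.

```python
import numpy as np

# Toy illustration of the mask formulation: the pruned network is the
# element-wise product M ⊙ W, and each pruning round zeroes out more of M.
def update_mask(weights, mask, prune_frac):
    """Zero out the smallest-magnitude weights that are still alive."""
    alive = np.abs(weights[mask == 1])
    k = int(prune_frac * alive.size)
    if k == 0:
        return mask
    threshold = np.sort(alive)[k - 1]
    new_mask = mask.copy()
    new_mask[(np.abs(weights) <= threshold) & (mask == 1)] = 0
    return new_mask

w = np.array([0.1, -0.5, 0.05, 0.9, -0.2, 0.3])
m = np.ones_like(w)
m = update_mask(w, m, 0.5)   # prune half of the remaining weights
print(m, w * m)              # the pruned network is m ⊙ w
```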
The reason is that some filters , even though their weight values are small , can still produce useful non-zero activation values that are important for learning features during backpropagation . That is , filters with small weights may have large activations . We propose that the activation values of filters are more effective for finding unimportant filters to prune . Activation functions like ReLU introduce non-linearity and enable convolutional layers to act as feature detectors . If an activation value is small , its corresponding feature detector is not important for prediction tasks . So activation values , i.e. , the intermediate output tensors after the non-linear activation , not only detect features of the training dataset but also contain the information of convolution layers that act as feature detectors for prediction tasks . We present a visual motivation in Figure 1 ( a ) . The figure shows the activation outputs of 16 filters of a convolution layer on one input image . The first image on the left is the original image , and the second image is the input features after data augmentation . We observe that some filters extract image features with high activation patterns , e.g. , the 6th and 12th filters . In comparison , the activation outputs of some filters are close to zero , such as the 2nd , 14th , and 16th . Therefore , from visual inspection , removing filters with weak activation patterns is likely to have low impact on the final accuracy of the pruned model . There is a natural connection between our activation-based pruning approach and the related attention-based knowledge transfer works ( Zagoruyko & Komodakis ( 2016 ) ) . Attention is a mapping function used to calculate the statistics of each element in the activations over the channel dimension . 
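The observation that weight norms and activations can rank filters differently can be made concrete with a deliberately constructed toy example (the input and weight values here are contrived for illustration, not drawn from the paper): a filter with tiny but aligned weights produces a large activation, while a filter with large but sign-cancelling weights produces none.

```python
import numpy as np

# Toy contrast: small weights can yield a large post-ReLU activation,
# and large weights can cancel to zero, so L1 norm and activation
# statistics can disagree about which filter matters.
relu = lambda z: np.maximum(z, 0.0)
x = np.ones(16)                      # a uniformly bright input patch

w_small = np.full(16, 0.05)          # tiny but aligned weights
w_large = np.tile([0.8, -0.8], 8)    # big weights that cancel out

print("small filter: L1 =", np.abs(w_small).sum(), "act =", relu(w_small @ x))
print("large filter: L1 =", np.abs(w_large).sum(), "act =", relu(w_large @ x))
```

Here the "large" filter has a 16× bigger L1 norm yet a zero activation on this input, which is the situation the paragraph above describes.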
By minimizing the difference between the activation-based attention maps of intermediate layers of a teacher model and a student model , attention transfer enables the student to imitate the behaviour of the teacher . Our proposed pruning method builds upon the same theory that activation-based attention is a good indicator of a filter ’ s ability to capture features , and it addresses the new challenges in using activations to guide automatic structured pruning . In the following , we describe how to prune filters based on their activation feature maps in each round of the iterative structured pruning process . Figure 1 ( b ) shows the inputs and outputs of a 2D convolution layer ( referred to as conv2d ) , followed by an activation layer . For the i-th conv2d layer , let X_i ∈ R^{ n_{i−1} × h_{i−1} × w_{i−1} } denote the input features , and F_{i,j} ∈ R^{ n_{i−1} × k_i × k_i } be the j-th filter , where h_{i−1} and w_{i−1} are the height and width of the input features , respectively , n_{i−1} is the number of input channels , n_i is the number of output channels , and k_i is the kernel size of the filter . The activation of the j-th filter F_{i,j} after ReLU is therefore denoted by A_{i,j} ∈ R^{ h_i × w_i } . The attention mapping function takes a 2D activation A_{i,j} ∈ R^{ h_i × w_i } of filter F_{i,j} as input and outputs a 1D value which is used as an indicator of the importance of the filter . We consider three forms of activation-based attention maps , where p ≥ 1 and a^i_{k,l} denotes each element of A_{i,j} : 1 ) mean of attention values raised to the power of p : F_mean( A_{i,j} ) = ( 1 / ( h_i × w_i ) ) Σ_{k=1}^{h_i} Σ_{l=1}^{w_i} | a^i_{k,l} |^p ; 2 ) max of attention values raised to the power of p : F_max( A_{i,j} ) = max_{k,l} | a^i_{k,l} |^p ; and 3 ) sum of attention values raised to the power of p : F_sum( A_{i,j} ) = Σ_{k=1}^{h_i} Σ_{l=1}^{w_i} | a^i_{k,l} |^p . We choose F_mean( A_{i,j} ) with p = 1 as the indicator to identify and prune unimportant filters , and our method removes the filters whose attention value is lower than the pruning threshold . See Section 5 for an ablation study on these choices .
Algorithm 2 Accuracy-guaranteed Adaptive Pruning
1 : Input : target accuracy loss AccLossTarget
2 : Output : a small pruned model with an acceptable accuracy
3 : Initialize : T = 0.0 , λ = 0.005
4 : for pruning round r ( r ≥ 1 ) do
5 :     Prune the model using T[ r ] ( refer to Algorithm 3 )
6 :     Train the pruned model , evaluate its accuracy Acc[ r ]
7 :     Calculate the accuracy loss AccLoss[ r ] = Acc[ 0 ] − Acc[ r ]
8 :     if AccLoss[ r ] < AccLossTarget then
9 :         if the changes of model size are within 0.1 % for several rounds then
10 :            Terminate
11 :        else
12 :            λ[ r + 1 ] = λ[ r ]
13 :            T[ r + 1 ] = T[ r ] + λ[ r + 1 ]
14 :        end if
15 :    else
16 :        Find the last acceptable round k
17 :        if k has been used to roll back several times then
18 :            Mark k as unacceptable
19 :            Go to Step 15
20 :        else
21 :            Roll back the model weights to round k
22 :            λ[ r + 1 ] = λ[ r ] / ( 2.0 ( N + 1 ) ) ( N is the number of times of rolling back to round k )
23 :            T[ r + 1 ] = T[ k ] + λ[ r + 1 ]
24 :        end if
25 :    end if
26 : end for
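The three attention mapping functions can be sketched directly; each collapses a 2D post-ReLU activation map into a single importance score.

```python
import numpy as np

# Sketch of the three activation-based attention mapping functions:
# mean, max, and sum of |a|^p over the activation map A.
def f_mean(A, p=1):
    return float((np.abs(A) ** p).mean())

def f_max(A, p=1):
    return float((np.abs(A) ** p).max())

def f_sum(A, p=1):
    return float((np.abs(A) ** p).sum())

A = np.array([[0.0, 2.0],
              [1.0, 3.0]])
print(f_mean(A), f_max(A), f_sum(A))   # 1.5 3.0 6.0
```

With p = 1, F_mean is simply the average absolute activation; filters whose score falls below the pruning threshold are removed.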
This work proposes a technique for iterative structured pruning that requires little manual human intervention. Two parts of this paper are important: 1. It argues that we should prune channels based on the activation maps they generate, rather than on the weights of the channels. 2. It proposes an iterative procedure that automatically backtracks if a poor pruning decision has been made.
SP:075aa882a64acdb6d7c9486c235ef657b7afb104
EViT: Expediting Vision Transformers via Token Reorganizations
1 INTRODUCTION . Computer vision research has evolved toward Transformers since ViTs ( Dosovitskiy et al. , 2021 ) . Equipped with global self-attention , ViTs have shown impressive capability over local convolution ( i.e. , CNNs ) on prevalent visual recognition scenarios , including image classification ( Dosovitskiy et al. , 2021 ; Touvron et al. , 2021a ; Jiang et al. , 2021 ; Graham et al. , 2021 ) , object detection ( Carion et al. , 2020 ) , and semantic segmentation ( Xie et al. , 2021 ; Liu et al. , 2021 ; Wang et al. , 2021a ; c ) , with both supervised and unsupervised ( self-supervised ) training ( Pan et al. , 2021 ; Ge et al. , 2021 ) configurations . Based on the main spirit of ViTs ( i.e. , MHSA ) , there are wide investigations ( Liu et al. , 2021 ; Chu et al. , 2021 ; Wang et al. , 2021a ) exploring the network structure of ViT models for continuous recognition performance improvement . Along with the development of ViT models , the computation burden is becoming an issue . The global self-attention between image tokens and the long-range dependency make the model converge slowly compared to CNNs . As illustrated in ( Dosovitskiy et al. , 2021 ) , training a ViT from scratch typically requires larger datasets ( e.g. , ImageNet-21k ( Deng et al. , 2009 ) and JFT-300M ( Sun et al. , 2017 ) ) than those of CNNs ( e.g. , CIFAR-10/100 ( Krizhevsky et al. , 2009 ) and ImageNet-1k ) . Also , using more training iterations is a necessity for network convergence ( Dosovitskiy et al. , 2021 ; Touvron et al. , 2021a ; Wang et al. , 2022 ) . Without such large-scale training , the ViT models are not fully exploited and perform worse on visual recognition scenarios . These issues motivate us to expedite ViTs for practical usage . The model acceleration of ViTs is important to reduce computational complexity . However , there are few studies focused on ViT acceleration . ( ∗Major work done during an internship at Tencent AI Lab . †Corresponding author . ) 
This is because the significant model difference between CNNs and ViTs prevents CNN model acceleration techniques ( e.g. , pruning and distillation ) from being applied directly to ViTs . Nevertheless , we analyze ViT from another perspective . We observe that not all image tokens in ViTs contribute positively to the final predictions . Fig . 1 shows some examples where part of the input image tokens are randomly dropped out . In Fig . 1b , removing image tokens related to the visual content of the corresponding category makes the ViT predict incorrectly . In comparison , removing unrelated image tokens does not affect ViT predictions , as shown in Fig . 1a . On the other hand , we notice that ViTs divide images into non-overlapping tokens and perform self-attention ( Vaswani et al. , 2017 ) computation on these tokens . A notable characteristic of self-attention is that it can process a varying number of tokens . These observations motivate us to reorganize image tokens for ViT model acceleration . In this work , we propose a token reorganization method to identify and fuse image tokens . Given all the image tokens as input , we compute token attentiveness between these tokens and the class token for identification . Then , we preserve the attentive image tokens and fuse the inattentive tokens into one token , allowing the gradient to back-propagate through the inattentive tokens for better attentive token identification . In this way , we gradually reduce the number of image tokens as the network goes deeper to decrease the computation cost . Also , the capacity of the ViT backbone can be flexibly controlled via the identification process , where no additional parameters are introduced . We adopt our token reorganization method on representative ViT models ( i.e. , DeiT ( Touvron et al. , 2021a ) and LV-ViT ( Jiang et al. , 2021 ) ) for ImageNet classification evaluation . The experimental results demonstrate our advantages . 
For instance , with the same number of input image tokens , our method speeds up the DeiT-S model by 50 % while only sacrificing 0.3 % recognition accuracy on ImageNet classification . On the other hand , we extend our method to boost ViT model recognition performance under the same computational cost . By increasing the input image resolution , our method facilitates Vision Transformers in taking more image tokens to achieve higher classification accuracy . Numerically , we improve the ImageNet classification accuracy of the DeiT-S model by 1 % under the same computational cost . Moreover , by using an oracle ViT to guide the token reorganization process , our method can increase the accuracy of the original DeiT-S from 79.8 % to 80.7 % while reducing its computation cost by 36 % under the multiply-accumulate computation ( MAC ) metric . 2 RELATED WORK . 2.1 VISION TRANSFORMERS . Transformers ( Vaswani et al. , 2017 ) have drawn much attention in computer vision recently due to their strong capability of modeling long-range relations . A few attempts have been made to add self-attention layers or Transformers on top of CNNs in image classification ( Hu et al. , 2019 ) , object detection ( Carion et al. , 2020 ) , segmentation ( Wang et al. , 2021c ) , image retrieval ( Lu et al. , 2019 ) and even video understanding ( Sun et al. , 2019 ; Girdhar et al. , 2019 ) . Vision Transformer ( ViT ) ( Dosovitskiy et al. , 2021 ) first introduced a set of pure Transformer backbones for image classification , and its follow-ups modify the ViT architecture for not only better visual recognition ( Touvron et al. , 2021a ; Yuan et al. , 2021 ; Zhou et al. , 2021 ) but also many other high-level vision tasks , such as object detection ( Carion et al. , 2020 ; Zhu et al. , 2020 ; Liu et al. , 2021 ) , semantic segmentation ( Wang et al. , 2021a ; Xie et al. , 2021 ; Chu et al. , 2021 ) , and video understanding ( Bertasius et al. , 2021 ; Fan et al. , 2021 ) . 
Vision Transformers have shown their strong potential as an alternative to the previously dominant CNNs . 2.2 MODEL ACCELERATION . Neural networks are typically overparameterized ( Allen-Zhu et al. , 2019 ) , which results in significant redundancy of computation in deep learning models . To deploy deep neural networks on mobile devices , we must reduce the storage and computational overhead of the networks . Many adaptive computation methods have been explored ( Bengio et al. , 2015 ; Wang et al. , 2018 ; Graves , 2016 ; Hu et al. , 2020 ; Wang et al. , 2020b ; Han et al. , 2021b ) to alleviate the computation burden . Parameter pruning ( Srinivas & Babu , 2015 ; Han et al. , 2015 ; Chen et al. , 2015b ) removes redundant parameters that are not sensitive to the final performance . Some other methods leverage knowledge distillation ( Hinton et al. , 2015 ; Romero et al. , 2014 ; Luo et al. , 2016 ; Chen et al. , 2015a ) to obtain a small and compact model with the distilled knowledge of a larger one . These model acceleration strategies are limited to convolutional neural networks . There are also some attempts to accelerate the computation of the Transformer model , including proposing more efficient attention mechanisms ( Wang et al. , 2020a ; Kitaev et al. , 2020 ; Choromanski et al. , 2020 ) and compressed Transformer structures ( Liu et al. , 2021 ; Heo et al. , 2021 ; Wang et al. , 2021a ) . These methods mainly focus on reducing the complexity of the network architecture through manually designed modules . Another approach to ViT acceleration is reducing the number of tokens involved in the inference of ViTs . Notably , Wang et al . ( 2021b ) proposed a method to dynamically determine the number of patches into which an image is divided . The ViT stops inference for an input image once it has sufficient confidence in the prediction from the intermediate outputs . Remarkably , Ryoo et al . 
( 2021 ) proposed TokenLearner to expedite ViTs , where a relatively small number of tokens are learned by aggregating the entire feature map , weighted by a dynamic attention map conditioned on the feature map . This can be seen as a sophisticated method for tokenizing the input images . Different from TokenLearner , our work focuses on the progressive selection of informative tokens during training . Another related work is DynamicViT ( Rao et al. , 2021 ) , which introduces a method to reduce tokens for a fully trained ViT , where an extra learnable neural network is added to the ViT to select a subset of tokens . Our work provides a novel perspective for reducing the computational overhead of inference by proposing a token reorganization method to progressively reduce and reorganize image tokens . Unlike DynamicViT , our method does not need a fully trained ViT to help the training and brings no additional parameters into the ViT . 3 TOKEN REORGANIZATIONS . Our method EViT is built upon ViT ( Dosovitskiy et al. , 2021 ) and its variants for visual recognition . We first review ViT and then present how to incorporate our method into the ViT training procedure . Each component of EViT , including attentive token identification and inattentive token fusion , will be elaborated . Furthermore , we analyze the effectiveness of our method by visualizing the attentive tokens at different layers and discuss training on higher-resolution images with EViT . 3.1 VIT OVERVIEW . Vision Transformers ( ViTs ) were first introduced by Dosovitskiy et al . ( 2021 ) into visual recognition . They perform tokenization by dividing an input image into patches and projecting each patch to a token embedding . An extra class token [ CLS ] is added to the set of image tokens and is responsible for aggregating global image information and final classification . All of the tokens are summed with a learnable vector ( i.e. 
, positional encoding ) and fed into the sequentially stacked Transformer encoders , each consisting of a multi-head self-attention ( MHSA ) layer and a feed-forward network ( FFN ) . In MHSA , the tokens are linearly mapped and further packed into three matrices , namely Q , K , and V . The attention operation is conducted as follows : Attention ( Q , K , V ) = Softmax ( QKᵀ / √d ) V , ( 1 ) where d is the length of the query vector . The result of Softmax ( QKᵀ / √d ) is a square matrix called the attention map . The first row of the attention map represents the attention from [ CLS ] to all tokens and will be used to determine the attentiveness ( importance ) of each token ( detailed in the next subsection ) . The output tokens of MHSA are sent to the FFN , consisting of two fully connected layers with a GELU activation layer ( Hendrycks & Gimpel , 2016 ) in between . At the final Transformer encoder layer , the [ CLS ] token is extracted and utilized for object category prediction . More details of Transformers can be found in Vaswani et al . ( 2017 ) . 3.2 ATTENTIVE TOKEN IDENTIFICATION . Let n denote the number of input tokens to a ViT encoder . In the last encoder of the ViT , the [ CLS ] token is taken out for classification . The interactions between [ CLS ] and the other tokens are performed via the attention mechanism ( Vaswani et al. , 2017 ) in the ViT encoders : x_class = Softmax ( q_class · Kᵀ / √d ) V = a · V , ( 2 ) where q_class , K , and V denote the query vector of [ CLS ] , the key matrix , and the value matrix , respectively , in an attention head . In other words , the output of the [ CLS ] token x_class is a linear combination of the value vectors V = [ v1 , v2 , ... , vn ]ᵀ , with the combination coefficients ( denoted by a in Eq . 2 ) being the attention values of [ CLS ] with respect to all tokens . Since v_i comes from the i-th token , the attention value a_i ( i.e. 
, the i-th entry in a ) determines how much information of the i-th token is fused into the output of [ CLS ] ( i.e. , x_class ) through the linear combination . It is thus natural to assume that the attention value a_i indicates the importance of the i-th token . Moreover , Caron et al . ( 2021 ) also showed that the [ CLS ] token in ViTs pays more attention ( i.e. , has a larger attention value ) to class-specific tokens than to the tokens on non-object regions . To this end , we propose to use the attentiveness of the [ CLS ] token with respect to the other tokens to identify the most important tokens . Based on these arguments , a simple method to reduce computation in ViT is to remove the tokens with the smallest attention values . However , we find that directly removing those tokens severely deteriorates the classification accuracy , as shown in Table 1 . Therefore , we propose to incorporate image token reorganization during the ViT training process . In a multi-head self-attention layer , there are multiple heads performing the computation of Eq . 1 in parallel . Thus , there are multiple [ CLS ] attention vectors a^(h) , h = 1 , ... , H , with H being the total number of attention heads ( Vaswani et al. , 2017 ) . We compute the average attentiveness value over all heads by ā = ( 1 / H ) Σ_{h=1}^{H} a^(h) . As shown in Figure 2 , we identify and preserve the tokens corresponding to the k largest ( top-k ) elements in ā ( k is a hyperparameter ) , which we call the attentive tokens , and further fuse the other tokens ( which we call the inattentive tokens ) into a new token . The fusion of tokens is detailed in the following paragraph . We define the token keeping rate as κ = k/n . 
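The token reorganization step above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the [CLS] attention rows per head are given, and the inattentive tokens are fused by an attention-weighted average (the paper's exact fusion rule is detailed in its Section 3); it assumes a keep rate below 1 so the inattentive set is non-empty.

```python
import numpy as np

# Sketch of EViT's reorganization: average the [CLS] attention over heads,
# keep the top-k attentive tokens, and fuse the rest into one new token.
def reorganize_tokens(tokens, cls_attn_per_head, keep_rate):
    """tokens: (n, d) image tokens; cls_attn_per_head: (H, n) [CLS] rows."""
    a_bar = cls_attn_per_head.mean(axis=0)          # average over H heads
    k = max(1, int(keep_rate * tokens.shape[0]))    # kappa = k / n
    keep = np.sort(np.argsort(a_bar)[-k:])          # top-k attentive tokens
    drop = np.setdiff1d(np.arange(tokens.shape[0]), keep)
    w = a_bar[drop] / a_bar[drop].sum()             # attention-weighted fusion
    fused = (w[:, None] * tokens[drop]).sum(axis=0, keepdims=True)
    return np.concatenate([tokens[keep], fused], axis=0)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))         # 8 image tokens of dimension 4
attn = rng.uniform(size=(3, 8))          # [CLS] attention rows for 3 heads
out = reorganize_tokens(tokens, attn, keep_rate=0.5)
print(out.shape)                         # (5, 4): 4 kept tokens + 1 fused
```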
This paper aims to expedite vision transformers by reducing the number of tokens. The main contribution is the attentive token identification, which is based on calculating the attentiveness of the class token with respect to each image token. Experiments on DeiT and LV-ViT show that the proposed approach is able to successfully reduce the computational cost of the baseline models with nearly no performance drop.
SP:f2f42c4a7163bf5a94e00d0d0ab05c8ea5e44727
EViT: Expediting Vision Transformers via Token Reorganizations
1 INTRODUCTION . Computer vision research has evolved into Transformers since ViTs ( Dosovitskiy et al. , 2021 ) . Equipped with global self-attention , ViTs have shown impressive capability upon local convolution ( i.e. , CNNs ) on prevalent visual recognition scenarios , including image classification ( Dosovitskiy et al. , 2021 ; Touvron et al. , 2021a ; Jiang et al. , 2021 ; Graham et al. , 2021 ) , object detection ( Carion et al. , 2020 ) , and semantic segmentation ( Xie et al. , 2021 ; Liu et al. , 2021 ; Wang et al. , 2021a ; c ) , with both supervised and unsupervised ( self-supervised ) training ( Pan et al. , 2021 ; Ge et al. , 2021 ) configurations . Based on the main spirit of ViTs ( i.e. , MHSA ) , there are wide investigations ( Liu et al. , 2021 ; Chu et al. , 2021 ; Wang et al. , 2021a ) to explore the network structure of ViT models for continuous recognition performance improvement . Along with the development of ViT models , the computation burden is becoming an issue . The global self-attention between image tokens and long-range dependency make the model converge slow compared to CNNs . As illustrated in ( Dosovitskiy et al. , 2021 ) , training a ViT from scratch ∗Major work done during an internship at Tencent AI Lab . †Corresponding author . typically requires larger datasets ( e.g. , ImageNet-21k ( Deng et al. , 2009 ) and JFT-300M ( Sun et al. , 2017 ) ) than those of CNNs ( e.g. , CIFAR-10/100 ( Krizhevsky et al. , 2009 ) and ImageNet-1k ) . Also , using more training iterations is a necessity for network convergence ( Dosovitskiy et al. , 2021 ; Touvron et al. , 2021a ; Wang et al. , 2022 ) . Without such large scale training , the ViT models are not fully exploited and perform inferior on visual recognition scenarios . These issues motivate us to expedite ViTs for practice usage . The model acceleration of ViTs is important to reduce computational complexity . However , there are few studies focused on ViT acceleration . 
This is because the significant model difference between CNNs and ViTs prevents CNN model acceleration ( e.g. , pruning and distillation ) from applying on ViTs . Nevertheless , we analyze ViT from another perspective . We observe that not all image tokens in ViTs contribute positively to the final predictions . Fig . 1 shows some examples where part of the input image tokens are randomly dropped out . In Fig . 1b , removing image tokens related to the visual content of the corresponding category makes the ViT predict incorrectly . In comparison , removing unrelated image tokens does not affect ViT predictions , as shown in Fig . 1a . On the other hand , we notice that ViTs divide images into non-overlapping tokens and perform self-attention ( Vaswani et al. , 2017 ) computation on these tokens . A notable characteristic of self-attention is that it can process a varying number of tokens . These observations motivate us to reorganize image tokens for ViT model accelerations . In this work , we propose a token reorganization method to identify and fuse image tokens . Given all the image tokens as input , we compute token attentiveness between these tokens and the class token for identification . Then , we preserve the attentive image tokens and fuse the inattentive tokens into one token to allow the gradient back-propagate through the inattentive tokens for better attentive token identification . In this way , we gradually reduce the number of image tokens as the network goes deeper to decrease computation cost . Also , the capacity of the ViT backbone can be flexibly controlled via the identification process where no additional parameters are introduced . We adopt our token reorganization method on representative ViT models ( i.e. , DeiT ( Touvron et al. , 2021a ) and LV-ViT ( Jiang et al. , 2021 ) ) for ImageNet classification evaluation . The experimental results show our advantages . 
For instance , with the same amount of input image tokens , our method speeds up the DeiT-S model by 50 % , while only sacrificing 0.3 % recognition accuracy on ImageNet classification . On the other hand , we extend our methods to boost the ViT model recognition performance under the same computational cost . By increasing the input image resolution , our method facilitates Vision Transformers in taking more image tokens to achieve higher classification accuracy . Numerically , we improve the ImageNet classification accuracy of the DeiT-S model by 1 % under the same computational cost . Moreover , by using an oracle ViT to guide the token reorganization process , our method can increase the accuracy of the original DeiT-S from 79.8 % to 80.7 % while reducing its computation cost by 36 % under the multiply-accumulate computation ( MAC ) metric . 2 RELATED WORK . 2.1 VISION TRANSFORMERS . Transformers ( Vaswani et al. , 2017 ) have drawn much attention to computer vision recently due to its strong capability of modeling long-range relation . A few attempts have been made to add self- attention layers or Transformers on top of CNNs in image classification ( Hu et al. , 2019 ) , object detection ( Carion et al. , 2020 ) , segmentation ( Wang et al. , 2021c ) , image retrieval ( Lu et al. , 2019 ) and even video understanding ( Sun et al. , 2019 ; Girdhar et al. , 2019 ) . Vision Transformer ( ViT ) ( Dosovitskiy et al. , 2021 ) first introduced a set of pure Transformer backbones for image classification and its follow-ups modify the ViT architecture for not only better visual recognition ( Touvron et al. , 2021a ; Yuan et al. , 2021 ; Zhou et al. , 2021 ) but many other high-level vision tasks , such as object detection ( Carion et al. , 2020 ; Zhu et al. , 2020 ; Liu et al. , 2021 ) , semantic segmentation ( Wang et al. , 2021a ; Xie et al. , 2021 ; Chu et al. , 2021 ) , and video understanding ( Bertasius et al. , 2021 ; Fan et al. , 2021 ) . 
Vision Transformers have shown its strong potential as an alternative to the previously dominant CNNs . 2.2 MODEL ACCELERATION . Neural networks are typically overparameterized ( Allen-Zhu et al. , 2019 ) , which results in significant redundancy in computation in deep learning models . To deploy deep neural networks on mobile devices , we must reduce the storage and computational overhead of the networks . Many adaptive computation methods are explored ( Bengio et al. , 2015 ; Wang et al. , 2018 ; Graves , 2016 ; Hu et al. , 2020 ; Wang et al. , 2020b ; Han et al. , 2021b ) to alleviate the computation burden . Parameter pruning ( Srinivas & Babu , 2015 ; Han et al. , 2015 ; Chen et al. , 2015b ) reduces redundant parameters which are not sensitive to the final performance . Some other methods leverage knowledge distillation ( Hinton et al. , 2015 ; Romero et al. , 2014 ; Luo et al. , 2016 ; Chen et al. , 2015a ) to obtain a small and compact model with distilled knowledge of a larger one . These model acceleration strategies are limited to convolutional neural networks . There are also some attempts to accelerate the computation of the Transformer model , including proposing more efficient attention mechanisms ( Wang et al. , 2020a ; Kitaev et al. , 2020 ; Choromanski et al. , 2020 ) and the compressed Transformer structures ( Liu et al. , 2021 ; Heo et al. , 2021 ; Wang et al. , 2021a ) . These methods mainly focus on reducing the complexity of the network architecture through artificially designed modules . Another approach to ViT acceleration is reducing the number of tokens involved in the inference of ViTs . Notably , Wang et al . ( 2021b ) proposed a method to dynamically determine the number of patches to divide on an image . The ViT will stop inference for an input image if it has sufficient confidence in the prediction of the intermediate outputs . Remarkably , Ryoo et al . 
(2021) proposed TokenLearner to expedite ViTs, where a relatively small number of tokens are learned by aggregating the entire feature map, weighted by a dynamic attention map conditioned on the feature map. This can be seen as a sophisticated method for tokenizing the input images. Different from TokenLearner, our work focuses on the progressive selection of informative tokens during training. Another related work is DynamicViT (Rao et al., 2021), which introduces a method to reduce tokens for a fully trained ViT, where an extra learnable neural network is added to the ViT to select a subset of tokens. Our work provides a novel perspective on reducing the computational overhead of inference by proposing a token reorganization method that progressively reduces and reorganizes image tokens. Unlike DynamicViT, our method does not need a fully trained ViT to aid training and brings no additional parameters into the ViT. 3 TOKEN REORGANIZATIONS. Our method, EViT, is built upon ViT (Dosovitskiy et al., 2021) and its variants for visual recognition. We first review ViT and then present how to incorporate our method into the ViT training procedure. Each component of EViT, including attentive token identification and inattentive token fusion, is elaborated. Furthermore, we analyze the effectiveness of our method by visualizing the attentive tokens at different layers and discuss training on higher-resolution images with EViT. 3.1 VIT OVERVIEW. Vision Transformers (ViTs) were first introduced into visual recognition by Dosovitskiy et al. (2021). They perform tokenization by dividing an input image into patches and projecting each patch to a token embedding. An extra class token [CLS] is added to the set of image tokens and is responsible for aggregating global image information and for the final classification. All of the tokens are summed with a learnable vector (i.e.
, positional encoding) and fed into the sequentially stacked Transformer encoders, each consisting of a multi-head self-attention (MHSA) layer and a feed-forward network (FFN). In MHSA, the tokens are linearly mapped and packed into three matrices, namely Q, K, and V. The attention operation is conducted as follows: Attention(Q, K, V) = Softmax(QK^T / √d) V, (1) where d is the length of the query vector. The result of Softmax(QK^T / √d) is a square matrix called the attention map. The first row of the attention map represents the attention from [CLS] to all tokens and is used to determine the attentiveness (importance) of each token (detailed in the next subsection). The output tokens of MHSA are sent to the FFN, which consists of two fully connected layers with a GELU activation (Hendrycks & Gimpel, 2016) in between. At the final Transformer encoder layer, the [CLS] token is extracted and used for object category prediction. More details on Transformers can be found in Vaswani et al. (2017). 3.2 ATTENTIVE TOKEN IDENTIFICATION. Let n denote the number of input tokens to a ViT encoder. In the last encoder of ViT, the [CLS] token is taken out for classification. The interactions between [CLS] and the other tokens are performed via the attention mechanism (Vaswani et al., 2017) in the ViT encoders: xclass = Softmax(qclass · K^T / √d) V = a · V, (2) where qclass, K, and V denote the query vector of [CLS], the key matrix, and the value matrix, respectively, in an attention head. In other words, the output of the [CLS] token, xclass, is a linear combination of the value vectors V = [v1, v2, ..., vn]^T, with the combination coefficients (denoted by a in Eq. 2) being the attention values from [CLS] with respect to all tokens. Since vi comes from the i-th token, the attention value ai (i.e.
, the i-th entry in a) determines how much information from the i-th token is fused into the output of [CLS] (i.e., xclass) through the linear combination. It is thus natural to assume that the attention value ai indicates the importance of the i-th token. Moreover, Caron et al. (2021) also showed that the [CLS] token in ViTs pays more attention (i.e., has a larger attention value) to class-specific tokens than to tokens in non-object regions. To this end, we propose to use the attentiveness of the [CLS] token with respect to the other tokens to identify the most important tokens. Based on these arguments, a simple way to reduce computation in ViT is to remove the tokens with the smallest attention values. However, we find that directly removing those tokens severely deteriorates the classification accuracy, as shown in Table 1. Therefore, we propose to incorporate image token reorganization into the ViT training process. In a multi-head self-attention layer, multiple heads perform the computation of Eq. 1 in parallel. Thus, there are multiple [CLS] attention vectors a^(h), h = 1, ..., H, with H being the total number of attention heads (Vaswani et al., 2017). We compute the average attentiveness over all heads as ā = (1/H) ∑_{h=1}^{H} a^(h). As shown in Figure 2, we identify and preserve the tokens corresponding to the k largest (top-k) elements in ā (k is a hyperparameter), which we call the attentive tokens, and fuse the remaining tokens (which we call the inattentive tokens) into a new token. The fusion of tokens is detailed in the following paragraph. We define the token keeping rate as κ = k/n.
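The token selection and fusion steps above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the authors' implementation: the function name is invented, and the attention-weighted averaging used to fuse the inattentive tokens is an assumption for the sketch.

```python
import numpy as np

def reorganize_tokens(tokens, attn_cls, keep_rate=0.7):
    """Hypothetical sketch of EViT-style token reorganization:
    keep the top-k tokens ranked by the [CLS] attention averaged
    over heads, and fuse the rest into one token by an
    attention-weighted average (assumed fusion rule).

    tokens:   (n, d) image token embeddings (excluding [CLS])
    attn_cls: (H, n) attention from [CLS] to each token, per head
    """
    n, d = tokens.shape
    k = int(keep_rate * n)                 # token keeping rate kappa = k / n
    a_bar = attn_cls.mean(axis=0)          # average attentiveness over heads
    order = np.argsort(-a_bar)             # indices by descending attentiveness
    keep_idx, fuse_idx = order[:k], order[k:]
    attentive = tokens[keep_idx]           # the k attentive tokens
    # Fuse the inattentive tokens into a single new token,
    # weighted by their (renormalized) attentiveness values.
    w = a_bar[fuse_idx]
    w = w / w.sum()
    fused = (w[:, None] * tokens[fuse_idx]).sum(axis=0, keepdims=True)
    return np.concatenate([attentive, fused], axis=0)   # (k + 1, d)
```

With n = 196 patch tokens and keep rate κ = 0.7, the output has 137 attentive tokens plus one fused token, so subsequent encoder layers operate on far fewer tokens.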
In this paper, an EViT method is proposed for Vision Transformer speedup. It reduces image tokens based on token attentiveness, which is measured via the class token. The inattentive tokens are reorganized into a single token to support the attentive tokens. Experiments on visual recognition benchmarks demonstrate the effectiveness of the method.
A Study of Face Obfuscation in ImageNet
1 INTRODUCTION. Visual data is being generated at an unprecedented scale. People share billions of photos daily on social media (Meeker, 2014). There is one security camera for every 4 people in China and the United States (Lin & Purnell, 2019). Even your home can be watched by smart devices taking photos (Butler et al., 2015; Dai et al., 2015). Learning from visual data has led to computer vision applications that promote the common good, e.g., better traffic management (Malhi et al., 2011) and law enforcement (Sajjad et al., 2020). However, it also raises privacy concerns, as images may capture sensitive information such as faces, addresses, and credit cards (Orekondy et al., 2018). Extensive prior research has focused on preventing unauthorized access to sensitive information in private datasets (Fredrikson et al., 2015; Shokri et al., 2017). However, are publicly available datasets free of privacy concerns? Taking the popular ImageNet dataset (Deng et al., 2009) as an example, there are only 3 people categories among the 1000 categories of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2015); nevertheless, the dataset exposes many people co-occurring with other objects in images (Prabhu & Birhane, 2021), e.g., people sitting on chairs, walking dogs, or drinking beer (Fig. 1). This is concerning since ILSVRC is freely available for academic use and widely used by the research community. In this paper, we attempt to mitigate ILSVRC's privacy issues. Specifically, we construct a privacy-enhanced version of ILSVRC and gauge its utility as a benchmark for image classification and as a dataset for transfer learning. Face annotation. As an initial step, we focus on a prominent type of private information: faces. To examine and mitigate their privacy issues, we first annotate faces in ImageNet using face detectors and crowdsourcing.
We use Amazon Rekognition to detect faces automatically and then refine the results through crowdsourcing on Amazon Mechanical Turk to obtain accurate annotations. We have annotated 1,431,093 images in ILSVRC, resulting in 562,626 faces from 243,198 images (17% of all images have at least one face). Many categories have more than 90% of images with faces even though they are not people categories, e.g., volleyball and military uniform. (The three people categories are scuba diver, bridegroom, and baseball player; ILSVRC is available for academic use at https://image-net.org/request.) Our annotations confirm that faces are ubiquitous in ILSVRC and pose a privacy issue. We release the face annotations to facilitate subsequent research on privacy-aware visual recognition on ILSVRC. Effects of face obfuscation on classification accuracy. Obfuscating sensitive image areas is widely used for preserving privacy (McPherson et al., 2016). We focus on two simple obfuscation methods: blurring and overlaying (Fig. 1), whose privacy effects have been analyzed in prior work (Oh et al., 2016; Li et al., 2017; Hasan et al., 2018). Using our face annotations, we construct face-obfuscated versions of ILSVRC. What are the effects of using them for image classification? At first glance, it seems inconsequential: one should still recognize a car even when the people inside have their faces blurred. Indeed, we verify that validation accuracy drops only slightly (0.1%–0.7% for blurring, 0.3%–1.0% for overlaying) when using face-obfuscated images for training and evaluation. We analyze this drop in detail (identifying categories that are particularly affected), but this key result demonstrates that we can train privacy-aware visual classifiers on ILSVRC that remain highly competitive, with less than a 1% accuracy drop. Effects on feature transferability.
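The two obfuscation operations can be illustrated on a grayscale image array. This is a hedged sketch: `overlay_box` and `blur_box` are hypothetical helpers, not the paper's pipeline, and a simple box blur stands in for whatever blur kernel is used in practice.

```python
import numpy as np

def overlay_box(img, box):
    """Overlaying: remove all information inside the face bounding
    box by filling it with a constant (here, the image mean)."""
    x0, y0, x1, y1 = box
    out = img.copy()
    out[y0:y1, x0:x1] = img.mean()
    return out

def blur_box(img, box, k=5):
    """Blurring: replace the box region with a k x k box blur,
    which removes only partial information about the face."""
    x0, y0, x1, y1 = box
    out = img.copy()
    region = img[y0:y1, x0:x1].astype(float)
    pad = k // 2
    padded = np.pad(region, pad, mode='edge')   # replicate edges
    blurred = np.zeros_like(region)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + region.shape[0],
                              dx:dx + region.shape[1]]
    out[y0:y1, x0:x1] = blurred / (k * k)
    return out
```

Overlaying destroys all pixel values in the box, while blurring spreads them locally, which matches the paper's observation that overlaying removes more information (and costs slightly more accuracy) than blurring.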
Besides serving as a classification benchmark, ILSVRC also provides pretraining data for transfer to domains where labeled images are scarce (Girshick, 2015; Liu et al., 2015a). So a further question is: does face obfuscation hurt the transferability of visual features learned from ILSVRC? We investigate by pretraining models on the original/obfuscated images and finetuning them on 4 downstream tasks: object recognition on CIFAR-10 (Krizhevsky et al., 2009), scene recognition on SUN (Xiao et al., 2010), object detection on PASCAL VOC (Everingham et al., 2010), and face attribute classification on CelebA (Liu et al., 2015b). These tasks include both classification and spatial localization, as well as both face-centric and face-agnostic recognition. In all 4 tasks, models pretrained on face-obfuscated images perform on par with models pretrained on the original images. We do not see a statistically significant difference between them, suggesting that visual features learned from face-obfuscated pretraining are equally transferable. Again, this encourages adopting face obfuscation as an additional protection on visual recognition datasets without worrying about detrimental effects on the dataset's utility. Contributions. Our contributions are twofold. First, we obtain accurate face annotations in ILSVRC, facilitating subsequent research on privacy protection. We will release the code and the annotations. Second, to the best of our knowledge, we are the first to investigate the effects of privacy-aware face obfuscation on large-scale visual recognition. Through extensive experiments, we demonstrate that training on face-obfuscated images does not significantly compromise accuracy on image classification or downstream tasks, while providing some privacy protection. Therefore, we advocate for face obfuscation to be included in ImageNet and to become a standard step in future dataset creation efforts. 2 RELATED WORK.
Privacy-preserving machine learning (PPML). Machine learning frequently uses private datasets (Chen et al., 2019b). Research in PPML is concerned with an adversary trying to infer the private data. The privacy breach can happen to the trained model. For example, a model inversion attack recovers sensitive attributes (e.g., gender, genotype) of an individual given the model's output (Fredrikson et al., 2014; 2015; Hamm, 2017; Li et al., 2019; Wu et al., 2019). A membership inference attack infers whether an individual was included in training (Shokri et al., 2017; Nasr et al., 2019; Hisamoto et al., 2020). A training data extraction attack extracts verbatim training data from the model (Carlini et al., 2019; 2020). For defending against these attacks, differential privacy is a general framework (Abadi et al., 2016; Chaudhuri & Monteleoni, 2008; McMahan et al., 2018; Jayaraman & Evans, 2019; Jagielski et al., 2020). It requires the model to behave similarly whether or not an individual is in the training data. Privacy breaches can also happen during training/inference. To address hardware/software vulnerabilities, researchers have used enclaves, a hardware mechanism for protecting a memory region from unauthorized access, to execute machine learning workloads (Ohrimenko et al., 2016; Tramer & Boneh, 2018). Machine learning service providers can run their models on users' private data encrypted with homomorphic encryption (Gilad-Bachrach et al., 2016; Brutzkus et al., 2019; Juvekar et al., 2018; Bian et al., 2020; Yonetani et al., 2017). It is also possible for multiple data owners to train a model collectively without sharing their private data, using federated learning (McMahan et al., 2017; Bonawitz et al., 2017; Li et al., 2020) or secure multi-party computation (Shokri & Shmatikov, 2015; Melis et al., 2019; Hamm et al., 2016; Pathak et al., 2010).
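The differential-privacy requirement mentioned above can be made concrete with the simplest mechanism: for a counting query, which has sensitivity 1, adding Laplace(1/ε) noise makes the released value ε-differentially private. This is an illustrative sketch of the general idea, not part of the paper or of any particular system.

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Minimal Laplace mechanism for a counting query: a count
    changes by at most 1 when one individual is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for the released count."""
    return true_count + rng.laplace(scale=1.0 / epsilon)
```

On two neighboring datasets whose counts differ by 1, the output distributions of `laplace_count` differ by at most a factor of exp(ε) in density, which is exactly the "behave similarly" guarantee described above; smaller ε means stronger privacy and noisier answers.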
There is a fundamental difference between our work and PPML. PPML focuses on private datasets, whereas we focus on public datasets containing private information. ImageNet, like other academic datasets, is publicly available to researchers. There is no point in preventing an adversary from inferring the data. However, public datasets can also expose private information about individuals, who may not even be aware of their presence in the data. It is their privacy we are protecting. Privacy in visual data. To mitigate privacy issues with public visual datasets, researchers have attempted to obfuscate private information before publishing the data. Frome et al. (2009) and Uittenbogaard et al. (2019) use blurring and inpainting to obfuscate faces and license plates in Google Street View. nuScenes (Caesar et al., 2020) is an autonomous driving dataset where faces and license plates are detected and then blurred. A similar method is also used for the action dataset AViD (Piergiovanni & Ryoo, 2020). We follow this line of work to obfuscate faces in ImageNet but differ in two critical ways. First, to the best of our knowledge, we are the first to thoroughly analyze the effects of face obfuscation on visual recognition. Second, prior works use only automatic methods such as face detectors, whereas we additionally employ crowdsourcing. Human annotations are more accurate and thus more useful for subsequent research on privacy preservation in ImageNet. Most importantly, automated face recognition methods are known to contain racial and gender biases (Buolamwini & Gebru, 2018); thus, using these methods alone is likely to afford more privacy protection to members of majority groups. Including a manual verification step helps partially mitigate these issues. Finally, we note that face obfuscation alone is not sufficient for privacy protection. Orekondy et al.
(2018) constructed Visual Redactions, annotating images with 42 privacy attributes, including faces, names, and addresses. Ideally, we should obfuscate all such information; however, this may not be immediately feasible. Obfuscating faces, which are omnipresent in visual datasets, is an important first step. Privacy guarantees of face obfuscation. Unfortunately, face obfuscation does not provide any formal guarantee of privacy. Both humans and machines may be able to infer an individual's identity from face-obfuscated images, presumably relying on cues outside faces, such as height and clothing (Chang et al., 2006; Oh et al., 2016). Researchers have tried to protect sensitive image regions against attacks, e.g., by perturbing the image adversarially to reduce the performance of a recognizer (Oh et al., 2017; Ren et al., 2018; Sun et al., 2018; Wu et al., 2018; Xiao et al., 2020). However, these methods are tuned for a particular model and provide no privacy guarantee either. Further, privacy guarantees may reduce dataset utility, as shown for example by Cheng et al. (2021). Therefore, we choose two simple local methods, blurring and overlaying, instead of more sophisticated alternatives. Overlaying removes all information in a face bounding box, whereas blurring removes only partial information. Their effectiveness for privacy protection can be ascertained only empirically, which has been the focus of prior work (Oh et al., 2016; Li et al., 2017; Hasan et al., 2018) but is beyond the scope of this paper. Visual recognition from degraded data. Researchers have studied visual recognition in the presence of various image degradations, including blurring (Vasiljevic et al., 2016), lens distortions (Pei et al., 2018), and low resolution (Ryoo et al., 2016). These undesirable artifacts are due to imperfect sensors rather than privacy concerns.
In contrast, we intentionally obfuscate faces for privacy's sake. Ethical issues with datasets. Datasets are important in machine learning and computer vision, but recently they have been called out for scrutiny (Paullada et al., 2020), especially regarding the presence of people. A prominent issue is imbalanced representation, e.g., underrepresentation of certain demographic groups in data for face recognition (Buolamwini & Gebru, 2018), activity recognition (Zhao et al., 2017), and image captioning (Hendricks et al., 2018). For ImageNet, researchers have examined and attempted to mitigate issues such as geographic diversity, the category vocabulary, and imbalanced representation (Shankar et al., 2017; Stock & Cisse, 2018; Dulhanty & Wong, 2019; Yang et al., 2020). We focus on an orthogonal issue: the privacy of people in the images. Prabhu & Birhane (2021) also discussed ImageNet's privacy issues and suggested face obfuscation as one potential solution. Our face annotations enable face obfuscation to be implemented, and our experiments support its effectiveness. Concurrent work (Asano et al., 2021) addresses the privacy issue by collecting a dataset of unlabeled images without people. Potential negative impacts. The main concern we see is giving the impression of privacy guarantees when in fact face obfuscation is an imperfect technique for privacy protection. We hope that the detailed discussion above and this clarification will help mitigate this issue. Another important concern is disparate impact on people of different demographics as a result of using automated face detection methods; as mentioned above, we hope that incorporating a manual annotation step will help partially alleviate this issue so that similar privacy preservation is afforded to all.
This paper presents an empirical study of the effect of face obfuscation on the ImageNet dataset. The main conclusion is that face obfuscation does not decrease the utility of the dataset. Specifically, the authors show that various networks trained on the obfuscated dataset experience only a small accuracy drop on the image classification task. The authors also discuss the impact on different categories, showing that face obfuscation hurts more the object categories that are more closely related to faces (i.e., those whose bounding boxes overlap more with faces). Last but not least, experiments have been conducted to show that face obfuscation also does not have a significant impact on the transferability of the features learned from the new dataset. All these conclusions are in line with intuition, since ImageNet is not primarily focused on human activities/faces.
SP:4df661cd71eb3a7947d890ff84e25f48b6b38012
A Study of Face Obfuscation in ImageNet
1 INTRODUCTION . Visual data is being generated at an unprecedented scale . People share billions of photos daily on social media ( Meeker , 2014 ) . There is one security camera for every 4 people in China and the United States ( Lin & Purnell , 2019 ) . Even your home can be watched by smart devices taking photos ( Butler et al. , 2015 ; Dai et al. , 2015 ) . Learning from the visual data has led to computer vision applications that promote the common good , e.g. , better traffic management ( Malhi et al. , 2011 ) and law enforcement ( Sajjad et al. , 2020 ) . However , it also raises privacy concerns , as images may capture sensitive information such as faces , addresses , and credit cards ( Orekondy et al. , 2018 ) . Extensive prior research has focused on preventing unauthorized access to sensitive information in private datasets ( Fredrikson et al. , 2015 ; Shokri et al. , 2017 ) . However , are publicly available datasets free of privacy concerns ? Taking the popular ImageNet dataset ( Deng et al. , 2009 ) as an example , there are only 3 people categories1 in the 1000 categories of the ImageNet Large Scale Visual Recognition Challenge ( ILSVRC ) ( Russakovsky et al. , 2015 ) ; nevertheless , the dataset exposes many people co-occurring with other objects in images ( Prabhu & Birhane , 2021 ) , e.g. , people sitting on chairs , walking dogs , or drinking beer ( Fig . 1 ) . It is concerning since ILSVRC is freely available for academic use2 and widely used by the research community . In this paper , we attempt to mitigate ILSVRC ’ s privacy issues . Specifically , we construct a privacyenhanced version of ILSVRC and gauge its utility as a benchmark for image classification and as a dataset for transfer learning . Face annotation . As an initial step , we focus on a prominent type of private information—faces . To examine and mitigate their privacy issues , we first annotate faces in ImageNet using face detectors and crowdsourcing . 
We use Amazon Rekognition to detect faces automatically , and then refine the results through crowdsourcing on Amazon Mechanical Turk to obtain accurate annotations . We have annotated 1,431,093 images in ILSVRC , resulting in 562,626 faces from 243,198 images ( 17 % of all images have at least one face ) . Many categories have more than 90 % images with faces , 1scuba diver , bridegroom , and baseball player 2https : //image-net.org/request even though they are not people categories , e.g. , volleyball and military uniform . Our annotations confirm that faces are ubiquitous in ILSVRC and pose a privacy issue . We release the face annotations to facilitate subsequent research in privacy-aware visual recognition on ILSVRC . Effects of face obfuscation on classification accuracy . Obfuscating sensitive image areas is widely used for preserving privacy ( McPherson et al. , 2016 ) . We focus on two simple obfuscation methods : blurring and overlaying ( Fig . 1 ) , whose privacy effects have been analyzed in prior work ( Oh et al. , 2016 ; Li et al. , 2017 ; Hasan et al. , 2018 ) . Using our face annotations , we construct face-obfuscated versions of ILSVRC . What are the effects of using them for image classification ? At first glance , it seems inconsequential—one should still recognize a car even when the people inside have their faces blurred . Indeed , we verify that validation accuracy drops only slightly ( 0.1 % –0.7 % for blurring , 0.3 % –1.0 % for overlaying ) when using face-obfuscated images to train and evaluate . We analyze this drop in detail ( identifying categories which are particularly affected ) , but this key result demonstrates that we can train privacy-aware visual classifiers on ILSVRC which remain highly competitive , with less than a 1 % accuracy drop . Effects on feature transferability . 
Besides a classification benchmark , ILSVRC also serves as pretraining data for transferring to domains where labeled images are scarce ( Girshick , 2015 ; Liu et al. , 2015a ) . So a further question is : Does face obfuscation hurt the transferability of visual features learned from ILSVRC ? We investigate by pretraining models on the original/obfuscated images and finetuning on 4 downstream tasks : object recognition on CIFAR-10 ( Krizhevsky et al. , 2009 ) , scene recognition on SUN ( Xiao et al. , 2010 ) , object detection on PASCAL VOC ( Everingham et al. , 2010 ) , and face attribute classification on CelebA ( Liu et al. , 2015b ) . They include both classification and spatial localization , as well as both face-centric and face-agnostic recognition . In all of the 4 tasks , models pretrained on face-obfuscated images perform closely with models pretrained on original images . We do not see a statistically significant difference between them , suggesting that visual features learned from face-obfuscated pretraining are equally transferable . Again , this encourages us to adopt face obfuscation as an additional protection on visual recognition datasets without worrying about detrimental effects on the dataset ’ s utility . Contributions . Our contributions are twofold . First , we obtain accurate face annotations in ILSVRC , facilitating subsequent research on privacy protection . We will release the code and the annotations . Second , to the best of our knowledge , we are the first to investigate the effects of privacy-aware face obfuscation on large-scale visual recognition . Through extensive experiments , we demonstrate that training on face-obfuscated images does not significantly compromise accuracy on both image classification and downstream tasks , while providing some privacy protection . Therefore , we advocate for face obfuscation to be included in ImageNet and to become a standard step in future dataset creation efforts . 2 RELATED WORK . 
Privacy-preserving machine learning ( PPML ) . Machine learning frequently uses private datasets ( Chen et al. , 2019b ) . Research in PPML is concerned with an adversary trying to infer the private data . The privacy breach can happen to the trained model . For example , model inversion attack recovers sensitive attributes ( e.g. , gender , genotype ) of an individual given the model ’ s output ( Fredrikson et al. , 2014 ; 2015 ; Hamm , 2017 ; Li et al. , 2019 ; Wu et al. , 2019 ) . Membership inference attack infers whether an individual was included in training ( Shokri et al. , 2017 ; Nasr et al. , 2019 ; Hisamoto et al. , 2020 ) . Training data extraction attack extracts verbatim training data from the model ( Carlini et al. , 2019 ; 2020 ) . For defending against these attacks , differential privacy is a general framework ( Abadi et al. , 2016 ; Chaudhuri & Monteleoni , 2008 ; McMahan et al. , 2018 ; Jayaraman & Evans , 2019 ; Jagielski et al. , 2020 ) . It requires the model to behave similarly whether or not an individual is in the training data . Privacy breaches can also happen in training/inference . To address hardware/software vulnerabilities , researchers have used enclaves—a hardware mechanism for protecting a memory region from unauthorized access—to execute machine learning workloads ( Ohrimenko et al. , 2016 ; Tramer & Boneh , 2018 ) . Machine learning service providers can run their models on users ’ private data encrypted using homomorphic encryption ( Gilad-Bachrach et al. , 2016 ; Brutzkus et al. , 2019 ; Juvekar et al. , 2018 ; Bian et al. , 2020 ; Yonetani et al. , 2017 ) . It is also possible for multiple data owners to train a model collectively without sharing their private data using federated learning ( McMahan et al. , 2017 ; Bonawitz et al. , 2017 ; Li et al. , 2020 ) or secure multi-party computation ( Shokri & Shmatikov , 2015 ; Melis et al. , 2019 ; Hamm et al. , 2016 ; Pathak et al. , 2010 ; Hamm et al. , 2016 ) . 
There is a fundamental difference between our work and PPML . PPML focuses on private datasets , whereas we focus on public datasets with private information . ImageNet , like other academic datasets , is publicly available to researchers . There is no point preventing an adversary from inferring the data . However , public datasets can also expose private information about individuals , who may not even be aware of their presence in the data . It is their privacy we are protecting . Privacy in visual data . To mitigate privacy issues with public visual datasets , researchers have attempted to obfuscate private information before publishing the data . Frome et al . ( 2009 ) and Uittenbogaard et al . ( 2019 ) use blurring and inpainting to obfuscate faces and license plates in Google Street View . nuScenes ( Caesar et al. , 2020 ) is an autonomous driving dataset where faces and license plates are detected and then blurred . Similar method is also used for the action dataset AViD ( Piergiovanni & Ryoo , 2020 ) . We follow this line of work to obfuscate faces in ImageNet but differ in two critical ways . First , to the best of our knowledge , we are the first to thoroughly analyze the effects of face obfuscation on visual recognition . Second , prior works use only automatic methods such as face detectors , whereas we additionally employ crowdsourcing . Human annotations are more accurate and thus more useful for following research on privacy preservation in ImageNet . Most importantly though , automated face recognition methods are known to contain racial and gender biases ( Buolamwini & Gebru , 2018 ) ; thus using these methods alone is likely to result in more privacy protection to members of majority groups . Including a manual verification step helps partially mitigate these issues . Finally , we note that face obfuscation alone is not sufficient for privacy protection . Orekondy et al . 
( 2018 ) constructed Visual Redactions , annotating images with 42 privacy attributes , including faces , names , and addresses . Ideally , we should obfuscate all such information ; however , this may not be immediately feasible . Obfuscating faces ( omnipresent in visual datasets ) is an important first step . Privacy guarantees of face obfuscation . Unfortunately , face obfuscation does not provide any formal guarantee of privacy . Both humans and machines may be able to infer an individual ’ s identity from face-obfuscated images , presumably relying on cues outside faces such as height and clothing ( Chang et al. , 2006 ; Oh et al. , 2016 ) . Researchers have tried to protect sensitive image regions against attacks , e.g. , by perturbing the image adversarially to reduce the performance of a recognizer ( Oh et al. , 2017 ; Ren et al. , 2018 ; Sun et al. , 2018 ; Wu et al. , 2018 ; Xiao et al. , 2020 ) . However , these methods are tuned for a particular model and provide no privacy guarantee either . Further , guarantees in privacy may reduce dataset utility as shown for example by Cheng et al . ( 2021 ) . Therefore , we choose two simple local methods—blurring and overlaying—instead of more sophisticated alternatives . Overlaying removes all information in a face bounding box , whereas blurring removes only partial information . Their effectiveness for privacy protection can be ascertained only empirically , which has been the focus of prior work ( Oh et al. , 2016 ; Li et al. , 2017 ; Hasan et al. , 2018 ) but is beyond the scope of this paper . Visual recognition from degraded data . Researchers have studied visual recognition in the presence of various image degradation , including blurring ( Vasiljevic et al. , 2016 ) , lens distortions ( Pei et al. , 2018 ) , and low resolution ( Ryoo et al. , 2016 ) . These undesirable artifacts are due to imperfect sensors rather than privacy concerns . 
In contrast, we intentionally obfuscate faces for privacy's sake. Ethical issues with datasets. Datasets are important in machine learning and computer vision, but they have recently come under scrutiny (Paullada et al., 2020), especially regarding the presence of people. A prominent issue is imbalanced representation, e.g., underrepresentation of certain demographic groups in data for face recognition (Buolamwini & Gebru, 2018), activity recognition (Zhao et al., 2017), and image captioning (Hendricks et al., 2018). For ImageNet, researchers have examined and attempted to mitigate issues such as geographic diversity, the category vocabulary, and imbalanced representation (Shankar et al., 2017; Stock & Cisse, 2018; Dulhanty & Wong, 2019; Yang et al., 2020). We focus on an orthogonal issue: the privacy of people in the images. Prabhu & Birhane (2021) also discussed ImageNet's privacy issues and suggested face obfuscation as one potential solution. Our face annotations make face obfuscation implementable, and our experiments support its effectiveness. Concurrent work (Asano et al., 2021) addresses the privacy issue by collecting a dataset of unlabeled images without people. Potential negative impacts. The main concern we see is giving the impression of privacy guarantees when face obfuscation is in fact an imperfect technique for privacy protection. We hope that the detailed discussion above and this clarification help mitigate this issue. Another important concern is disparate impact on people of different demographics as a result of using automated face detection methods; as mentioned above, we hope that incorporating a manual annotation step will help ensure that similar privacy preservation is afforded to all.
This paper addresses the privacy problem posed by the many incidental faces appearing in ImageNet images. The authors propose a two-stage face annotation pipeline: faces are first detected automatically with Amazon Rekognition, and the detector output is then refined via the crowdsourcing platform Amazon Mechanical Turk (AMT) to reduce the false positives and false negatives of automated detection. The detected faces are obfuscated with two distinct methods, blurring and overlaying, whose effects are evaluated separately across different models. On the ILSVRC classification challenge, accuracy drops by only 0.9% on average compared with the original database, and models pretrained on the face-obfuscated database retain the transferability of the original database on downstream tasks. Contribution: 1. The authors carry out the very time-consuming and labor-intensive task of accurately labeling and filtering faces in the ImageNet database, and statistically analyze which classes contain faces. 2. The authors present experiments on classification tasks and pretrained-model training using a database with blurred or overlaid faces, showing that the approach is feasible and that the accuracy drop is acceptable. 3. Ethically, training on blurred or overlaid face data reduces privacy concerns, and this privacy-focused study of ImageNet provides an important reference for subsequent databases.
SP:4df661cd71eb3a7947d890ff84e25f48b6b38012
Privacy Protected Multi-Domain Collaborative Learning
1 INTRODUCTION. Unsupervised domain adaptation (UDA) (Tang et al., 2020; Jiang et al., 2020; Zhang et al., 2020) attempts to transfer knowledge from well-labeled source domains to annotate unlabeled target samples, which exhibit significant domain discrepancy with the source domains owing to varying data collection procedures and devices. Recent explorations (Na et al., 2021; Dong et al., 2020) suppose the model to be trained has access to both source and target data during the training stage. Under this assumption, it becomes possible to measure the domain discrepancy and adopt metric-based solutions (Kang et al., 2020) or domain confusion (Cui et al., 2020; Tang & Jia, 2020) to generate domain-invariant features. However, this assumption conflicts with the privacy-protection concerns of practical applications and prevents deployment on small devices with limited storage. This requirement motivates source-free domain adaptation (SFDA), where a source-supervised model is available to assist the target domain without any source data (Liang et al., 2020; Li et al., 2020; Kundu et al., 2020). Generally, SFDA either adapts target samples to source-like ones (Liang et al., 2020) or generates fake source samples from the source model and then applies UDA strategies (Kurmi et al., 2021). To improve training efficiency, FADA (Peng et al., 2020) employs a federated learning paradigm (Karimireddy et al., 2020; Chen et al., 2020) by allocating the target domain on a centralized server while keeping the multiple source domains as clients. However, this approach is vulnerable to attacks because source features are transmitted to the target domain. Further, these domain adaptation works ignore improving model generalization on the source domain, which is inconsistent with real-world requirements.
For example, long-standing hospitals already have well-annotated patient data, while newly built hospitals have only collected unannotated data; given the huge labeling cost, the latter need help from the former. Besides, owing to geographical restrictions, each hospital records only its local patients' data, resulting in differing population statistics and hence model bias for the long-standing hospitals. Inspired by the above observation, we introduce a more practical scenario called Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) (shown in Figure 1). Specifically, P2MDCL assumes that the well-annotated source and unlabeled target domains are distributed across different clients, and that there exists a global server that merely communicates with each client and integrates the model parameters received from the clients. Finally, the server broadcasts the consensus model to all clients for their use, reaching a win-win deal. The key challenge of P2MDCL is to learn a more generic model by solving two core issues: 1) how to achieve domain alignment during iterative communication; and 2) how to enhance discriminative feature learning. In this paper, we propose a novel Mask-Driven Federated Network (MDFNet) to address P2MDCL. First, our MDFNet introduces two orthogonal masks over the high-level features in each client to activate domain-invariant and domain-specific semantics, respectively. In practice, we minimize the confusion of these two masks to achieve high-quality feature separation and semantic complementarity. Second, each unlabeled target client adopts adaptive self-supervised optimization to learn more discriminative representations via pseudo-label generation.
Finally, MDFNet adopts a progressive weighting scheme to balance the effect of each client in model integration on the server: it exploits more knowledge from the labeled clients to adjust the models of the unlabeled clients during the initial communication rounds, after which the matured unlabeled-client models in turn benefit the feature learning of the labeled clients. The main contributions of our work are summarized as: • First, we are the first to take both the “win-win” and privacy requirements into account under unsupervised domain adaptation scenarios by introducing Privacy Protected Multi-Domain Collaborative Learning (P2MDCL). • Second, we propose an effective algorithm, MDFNet, to combat domain shift in a federated training mechanism, reaching a win-win deal for all involved domains. • Finally, we derive a generalization error bound for our method, which theoretically verifies the soundness of MDFNet. Moreover, extensive experimental results and analysis empirically illustrate the effectiveness of our method in solving P2MDCL. 2 RELATED WORK. Domain Adaptation. Unsupervised domain adaptation (Cui et al., 2020) attempts to build a model with well-labeled source and unlabeled target data at hand by mitigating the domain mismatch. Along this line, recent explorations mainly adopt discrepancy-metric-based methods (Yan et al., 2017; Tzeng et al., 2014) and adversarial training schemes (Zhang et al., 2019; Tzeng et al., 2017) to learn domain-invariant features. Although these solutions effectively reduce the influence of domain discrepancy, practical applications rarely permit the co-existence of source and target data, due to the limited storage of small devices and data privacy. This demand stimulates the development of source-free domain adaptation (Liang et al., 2020; Kurmi et al., 2021), which merely provides a well-trained source model for knowledge adaptation on the target domain.
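As an illustration of the progressive weighting idea, the following sketch shifts aggregation weight from labeled toward unlabeled clients over communication rounds; the linear schedule `alpha` and the flat within-group averaging are our assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def aggregate(labeled_params, unlabeled_params, round_t, total_rounds):
    """Progressive weighting sketch: early rounds weight the labeled
    (source) clients heavily; as training matures, the unlabeled (target)
    clients gain influence, down to an equal 0.5/0.5 split."""
    alpha = max(0.5, 1.0 - round_t / total_rounds)  # labeled-client weight
    lab = np.mean(labeled_params, axis=0)           # average labeled clients
    unlab = np.mean(unlabeled_params, axis=0)       # average unlabeled clients
    return alpha * lab + (1.0 - alpha) * unlab      # consensus model
```

The server would broadcast the returned consensus parameters back to all clients at the end of each round.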
In addition, Peng et al. (2020) consider the target domain as the centralized server and the multiple source domains as clients, adopting a federated learning fashion to achieve domain adaptation with multiple discriminators; this is vulnerable to attack because source and target features are transmitted to the discriminators in the centralized target domain. Even though these strategies achieve transfer ability comparable to UDA solutions, empirical studies illustrate that current domain adaptation techniques fail to learn a model generalized over both source and target domains. Instead, they focus only on improving target performance, neglecting any benefit to the source domain. To this end, this paper poses a novel and practical scenario, privacy protected multi-domain collaborative learning (P2MDCL), where source and target domains are both regarded as clients independently communicating with the server, which produces and broadcasts the consensus model to the clients for their use. Federated Learning (FL). FL allows multiple clients to collaboratively complete the same task without data exchange across clients (Yang et al., 2019). Along with this concept, recent works mainly focus on the semi-supervised scenario (FSSL), where FedMatch (Jeong et al., 2021) allocates unlabeled data on the client side and labeled data in the server, while FedIRM (Liu et al., 2021) deploys both only on the clients. But both assume the instances across all clients are sampled from an identical distribution. Moreover, Smith et al. (2017); Liu et al. (2020) explore FSSL with non-i.i.d. data by supposing each client contains several well-annotated instances for training. Differently, our P2MDCL closely approximates reality: it involves several clients without any annotations and exhibits significant domain discrepancy across all clients. 3 THE PROPOSED METHOD.
3.1 PROBLEM DEFINITION AND MOTIVATION. The P2MDCL scenario assumes there are $L$ well-annotated source clients $\mathcal{D}^l_i = \{(x^{l(i)}_j, y^{l(i)}_j)\}_{j=1}^{n^l_i}$ $(i \in \{1, \cdots, L\})$ and $U$ unlabeled target clients $\mathcal{D}^u_k = \{x^{u(k)}_j\}_{j=1}^{n^u_k}$ $(k \in \{L+1, \cdots, L+U\})$, where $x$ and $y$ denote an input sample and its ground-truth label, respectively. The instances of these clients come from different distributions but share an identical category space, and clients are not allowed to exchange private data with each other. Akin to federated learning, the additional global server in P2MDCL collects and assembles all clients' network parameters to form the consensus model. The main motivation of P2MDCL is to address the negative effect of insufficient training samples in $\mathcal{D}^l_i$ and the label shortage in $\mathcal{D}^u_k$, reaching a “win-win” deal across all clients. We face two challenges in solving P2MDCL: 1) how to reduce the significant distribution discrepancy while protecting data privacy, and 2) how to learn more generic and discriminative representations for the unlabeled target clients. To this end, this work proposes an effective Mask-Driven Federated Network (MDFNet), which deploys mask-driven disentanglement to locally seek domain-specific/invariant features and explores adaptive self-supervised optimization to promote the discriminative ability of the unlabeled target clients. 3.2 MASK-DRIVEN DISENTANGLEMENT. Feature separation is a commonly used strategy in domain adaptation to disentangle the latent representation into domain-specific and domain-invariant features (Bousmalis et al., 2016; Peng et al., 2019). However, such methods typically develop two separate networks to extract the corresponding features, which increases the storage burden for local devices with insufficient computational resources. Peng et al. (2019) point out that the high-level neurons of a feature extractor actually involve both domain-specific and domain-invariant knowledge.
Inspired by Chattopadhyay et al. (2020), we explore binary masks to achieve feature disentanglement by activating the neurons of interest. For brevity, we omit the symbols $l/u$ and $(k)$ in the following illustration. As Figure 2 shows, each client of our MDFNet contains a basic feature encoder parameterized by $\theta_e$, mapping the raw input into the hidden space via $g_i = \theta_e(x_i) \in \mathbb{R}^d$. Subsequently, two additional parameters $\hat{m}^s, \hat{m}^I \in \mathbb{R}^d$ are introduced into the local network and activated to form the mask probabilities via the sigmoid function $\sigma(\cdot)$: $m^s = \sigma(\hat{m}^s)$ and $m^I = \sigma(\hat{m}^I)$. For each feature $g_i$, based on the mask probabilities, we sample the binary domain-specific and domain-invariant masks $(m^s_i, m^I_i \in \{0, 1\}^d)$ from Bernoulli distributions. We then obtain the domain-specific and domain-invariant features via element-wise multiplication $\otimes$ of the binary masks and features, i.e., $g^s_i = m^s_i \otimes g_i$ and $g^I_i = m^I_i \otimes g_i$. Moreover, we adopt three strategies to achieve high-quality feature separation. Concretely, each client first minimizes the semantic overlap between $g^I_i$ and $g^s_i$ so that they store complementary information. Motivated by Rahman & Wang (2016), we design the following soft-interactive loss:
$$L_s = \sum_i \frac{\langle g^s_i, g^I_i \rangle}{\mathrm{sum}(g^s_i + g^I_i - g^s_i \otimes g^I_i)}, \qquad (1)$$
where $\langle \cdot, \cdot \rangle$ denotes the inner product of two feature vectors and $\mathrm{sum}(\cdot)$ the sum of all elements of a vector. This approximately reflects the information overlap of the two mask distributions. Minimizing the soft-interactive loss gradually increases the difference between $m^s_i$ and $m^I_i$, which activate different neurons. Similar to DSN (Bousmalis et al., 2016), each client also develops an individual classifier $\theta_c(\cdot)$ that takes the domain-invariant features as input and outputs the category probability distribution $\theta_c(g^I_i)$.
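A minimal numerical sketch of the mask sampling and of the soft-interactive loss of Eq. (1); the encoder is omitted and the feature values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_masks(m_hat_s, m_hat_I):
    """Sigmoid-activate the mask parameters and sample binary
    domain-specific / domain-invariant masks from Bernoulli distributions."""
    ps = 1.0 / (1.0 + np.exp(-m_hat_s))
    pI = 1.0 / (1.0 + np.exp(-m_hat_I))
    ms = (rng.random(ps.shape) < ps).astype(float)
    mI = (rng.random(pI.shape) < pI).astype(float)
    return ms, mI

def soft_interactive_loss(gs, gI):
    """Eq. (1): inner product of the two feature views over the 'soft
    union' of their activations; minimizing it pushes the two masks to
    activate disjoint neurons."""
    num = float(np.dot(gs, gI))
    den = float(np.sum(gs + gI - gs * gI))
    return num / den

g = np.array([1.0, 2.0, 3.0, 4.0])   # illustrative encoder output
ms = np.array([1.0, 1.0, 0.0, 0.0])  # domain-specific mask
mI = np.array([0.0, 0.0, 1.0, 1.0])  # domain-invariant mask
gs, gI = ms * g, mI * g
# Perfectly disjoint masks give zero inner product, hence zero loss.
```

Note that the numerator vanishes exactly when the two masks share no active neuron, which is the separation the loss is driving toward.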
The cross-entropy loss between ground truth and prediction strengthens the discriminative ability of the domain-invariant features. On the other hand, we also feed the combination of $g^s_i$ and $g^I_i$ into a decoder $\theta_d(\cdot)$ to reconstruct the original input, with $L_r = \sum_i \|\theta_d(g^s_i, g^I_i) - x_i\|_2^2$. Thus, the overall loss function of mask-driven disentanglement for labeled clients is formulated as:
$$\min_{\theta_c, \theta_e, \theta_d, \hat{m}^s, \hat{m}^I} L_{lo} = \sum_i -y_i \log(\theta_c(g^I_i)) + L_r + L_s, \qquad (2)$$
where we adopt the straight-through estimator (Bengio et al., 2013) to progressively optimize $\hat{m}^s$ and $\hat{m}^I$, since back-propagating through the discrete binary masks directly is invalid.
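The straight-through trick can be sketched as follows: the forward pass uses a hard threshold on the mask probability, while the backward pass pretends the threshold is the identity and routes the gradient through the sigmoid alone. This is a simplified rendering of Bengio et al. (2013); thresholding at 0.5 instead of Bernoulli sampling is an assumption made here for determinism:

```python
import numpy as np

def ste_forward_backward(m_hat, upstream_grad):
    """Straight-through estimator sketch: the forward pass produces a
    hard binary mask, while the backward pass ignores the
    non-differentiable step and returns the gradient w.r.t. m_hat as if
    the step were the identity (only the sigmoid derivative remains)."""
    p = 1.0 / (1.0 + np.exp(-m_hat))            # mask probability
    hard = (p > 0.5).astype(float)              # non-differentiable forward value
    grad_m_hat = upstream_grad * p * (1.0 - p)  # sigmoid'; step treated as identity
    return hard, grad_m_hat
```

In an autodiff framework the same effect is commonly obtained with an expression like `hard + p - stop_gradient(p)`.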
This paper studies the problem of privacy-protected multi-domain collaborative learning, in which a "win-win" deal for source and target domains can be achieved. The proposed framework, MDFNet, contains multiple local clients and one global server. In each client, the encoder achieves feature separation, and the decoder reconstructs the original data from the separated features. Experiments on benchmark datasets demonstrate the best performance of the proposed MDFNet compared with baselines.
SP:bb43cfb5f54986cf5d310aa141514739712c2fd4
This paper proposes a Mask-Driven Federated Network (MDFNet) that solves the Privacy Protected Multi-Domain Collaborative Learning (P2MDCL) problem, reaching a “win-win” deal for multiple domains while keeping their data protected. First, each domain is equipped with an individual local model that learns domain-invariant semantics via a mask-driven disentanglement mechanism. Second, the centralized server refines the global invariant model by integrating and exchanging local knowledge across all domains. Moreover, adaptive self-supervised optimization is deployed to learn discriminative features for unlabeled domains.
Convergent Boosted Smoothing for Modeling Graph Data with Tabular Node Features
1 INTRODUCTION. Tabular data consists of observations stored as rows of a table, where multiple numeric/categorical features are recorded for each observation, one per column. Models for tabular data must learn to output accurate predictions solely from (potentially high-dimensional or sparse) sets of heterogeneous feature values. For learning from tabular data, ensembles of decision trees frequently rank at the top of model leaderboards, as they have proven to be highly performant when trained via multi-round boosting algorithms that progressively encourage the learner to focus more on “difficult” examples predicted inaccurately in earlier rounds (Bansal, 2018; Elsayed et al., 2021; Fakoor et al., 2020; Huang et al., 2020b; Ke et al., 2017; Prokhorenkova et al., 2018). Boosted trees thus form the cornerstone of supervised learning when the rows of a table are independent and identically distributed (iid). However, the iid assumption is severely violated for graph-structured data (Chami et al., 2020), in which nodes store features/labels that are highly correlated with those of their neighbors. Many methods have been proposed to account for the graph structure during learning, with graph neural networks (GNNs) becoming immensely popular in recent years (Scarselli et al., 2008; Wang et al., 2019; Zhou et al., 2020a). Despite their success, modern GNNs suffer from various issues (Alon & Yahav, 2021; Oono & Suzuki, 2019): they are complex in both their implementation and the amount of computation they require (Faber et al., 2021; Huang et al., 2020a; Shchur et al., 2018), and only limited theoretical guarantees exist regarding their performance or even the convergence of their training (Garg et al., 2020; Li & Cheng, 2021; Zhou et al., 2020b).
Furthermore, the sample complexity of sophisticated GNN models and their ability to handle rich node features may be worse than that of simpler models like those used for tabular data (Faber et al., 2021; Huang et al., 2020a; Shchur et al., 2018). Graph datasets store tabular feature vectors along with additional edge information. Despite the close relationship between graph and tabular data, there has been little work on how to adapt powerful boosting methods – our sharpest tool for dissecting tabular data – to account for edges. Here we consider how to best leverage tabular models for non-iid node classification/regression tasks on graphs. We focus on settings where each node is associated with tabular (numeric/categorical) features $x$ and labels $y$, and the goal is to predict the labels of certain nodes. A straightforward application of tabular modeling (in which nodes are treated as iid examples) will produce a learned mapping whose training and inference are uninformed by how nodes are arranged in the graph. To account for the graph structure, we introduce simple graph propagation operations into the definition of a modified, non-iid boosting loss function, such that edge information can be exploited to improve model accuracy relative to classical alternatives (Friedman, 2001). We refer to the resulting algorithm as EBBS, which stands for efficient bilevel boosted smoothing: the end-to-end training of a bilevel loss is such that the values of the boosted base model $f$ ebb and flow across the input graph, producing a smoothed predictor $\tilde{f}$. And unlike an existing adaptation of boosting for graph data (Ivanov & Prokhorenkova, 2021), our approach is simpler (no GNN or other auxiliary model required), produces more accurate predictions on benchmark datasets, and enjoys reliable convergence guarantees. 2 BACKGROUND. Consider a graph $\mathcal{G}$ with a set of vertices (i.e., nodes) $\mathcal{V} \triangleq \{v_1, v_2, \ldots, v_n\}$, whose connectivity is described by the edge set $\mathcal{E} \triangleq \{(v_i, v_j) \in \mathcal{V} \times \mathcal{V}\}$, where the tuple of nodes $(v_i, v_j)$ implies a relationship between $v_i$ and $v_j$. The edge set is represented by the adjacency matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$, whose entry $a_{ij}$ is nonzero if $(v_i, v_j) \in \mathcal{E}$. In node prediction tasks, each node $v_i$ is associated with a feature vector $x_i \in \mathbb{R}^{d_0}$ as well as a label $y_i \in \mathbb{R}^c$ that may hold discrete or continuous values (corresponding to classification or regression tasks). In certain cases we have access to the labels of only a subset of nodes $\{y_i\}_{i \in \mathcal{L}}$, with $\mathcal{L} \subset \mathcal{V}$. Given $\{y_i\}_{i \in \mathcal{L}}$, the connectivity of the graph $\mathcal{E}$, and the feature values of all nodes $\{x_i\}_{i \in \mathcal{V}}$, our task is to predict the labels of the unlabeled nodes $\{y_i\}_{i \in \mathcal{U}}$, with $\mathcal{U} = \mathcal{V} \setminus \mathcal{L}$. In the most straightforward application, a tabular model may simply be fit to the dataset $\{(x_i, y_i)\}_{i \in \mathcal{L}}$, disregarding that the observations are not iid. For prediction, the learned model is then applied independently to each $x_i$ for $i \in \mathcal{U}$. Of course this naive approach may fare poorly, as it fails to leverage $\mathcal{E}$, but we note that it empirically performs better than one might expect on certain graph datasets (where perhaps each $x_i$ contains sufficiently rich information to infer the corresponding $y_i$, or nodes connected in $\mathcal{E}$ exhibit only weak correlation in their features or labels) (Faber et al., 2021; Huang et al., 2020a; Ivanov & Prokhorenkova, 2021). 2.1 GRADIENT BOOSTED DECISION TREES (GBDT). GBDT is a popular model for iid (non-graph) tabular data (Friedman, 2001), whereby remarkably accurate predictions are produced by an ensemble of weak learners.
Formally, at each iteration $t$ of boosting, the current ensemble model $f^{(t)}(x)$ is updated in an additive manner via $f^{(t+1)}(x) = f^{(t)}(x) + \eta^{(t)} h^{(t)}(x)$, where $h^{(t)}(x)$ is a weak learner selected from some candidate function space $\mathcal{H}$ (typically decision trees), and $\eta^{(t)}$ is the learning rate calculated with the aid of line search. The weak learner $h^{(t)} \in \mathcal{H}$ is chosen to approximate the pseudo-residuals given by the negative gradient of some loss function $\ell$ w.r.t. the current model's predictions. This involves solving
$$h^{(t)} = \arg\min_{h \in \mathcal{H}} \sum_{i=1}^{m} \left\| -\frac{\partial \ell(y_i, f^{(t)}(x_i))}{\partial f^{(t)}(x_i)} - h(x_i) \right\|_2^2, \quad (1)$$
where $\{x_i, y_i\}_{i=1}^{m}$ is a set of $m$ training points. Decision trees are constructed by a recursive partition of feature space into $J_t$ disjoint regions $R_{1t}, \ldots, R_{J_t t}$ to minimize the loss function (1) and predict a constant value $b_{jt}$ in each region $R_{jt}$. The output of $h^{(t)}(x)$ can be written as the sum $h^{(t)}(x) = \sum_{j=1}^{J_t} b_{jt} \mathbf{1}_{R_{jt}}(x)$, where $\mathbf{1}$ denotes the indicator function. Many efficient GBDT implementations are available today (Ke et al., 2017; Prokhorenkova et al., 2018; Wen et al., 2019), and these models can easily be trained for nonstandard prediction tasks via custom loss functions (Elsayed et al., 2021; Li et al., 2007; Velthoen et al., 2021). 2.2 PRIOR ATTEMPTS TO COMBINE BOOSTING WITH GRAPH NEURAL NETWORKS. The AdaGCN model from Sun et al. (2019) proposes a GNN architecture motivated by the structure of AdaBoost iterations; however, this approach is not actually designed to handle tabular data. More related to our work is the boosted graph neural network (BGNN) approach from Ivanov & Prokhorenkova (2021), which proposes a novel architecture that jointly trains GBDT and GNN models.
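The boosting update in equation (1) above can be made concrete with a minimal NumPy sketch: for squared-error loss the negative gradient is simply the residual $y_i - f^{(t)}(x_i)$, and each round fits a new weak learner to those residuals. The depth-1 regression stump used here as the weak learner, and all function names, are illustrative stand-ins for the full decision trees and general losses used by real GBDT libraries.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to pseudo-residuals r."""
    best = None
    for thr in np.unique(x):
        left, right = r[x <= thr], r[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= thr, left.mean(), right.mean())
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, left.mean(), right.mean())
    _, thr, bl, br = best
    return lambda xq: np.where(xq <= thr, bl, br)

def gbdt_1d(x, y, rounds=100, eta=0.1):
    """Gradient boosting for squared loss: residuals equal the negative gradient."""
    f = np.full_like(y, y.mean(), dtype=float)
    for _ in range(rounds):
        r = y - f                       # pseudo-residuals for squared error
        h = fit_stump(x, r)             # weak learner approximating -grad
        f = f + eta * h(x)              # additive update with shrinkage eta
    return f

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x)
f = gbdt_1d(x, y)
print(float(np.mean((y - f) ** 2)))     # training MSE shrinks with more rounds
```

With more rounds and a smaller shrinkage `eta`, the ensemble fits the training set progressively more closely, which is the behavior the bilevel formulation later exploits.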
Within this framework, GBDT extracts predictive information from node features while the GNN accounts for the graph structure, achieving a significant performance increase on various graph datasets with tabular features. In the first iteration, BGNN builds a GBDT model $f^{(1)}(x)$ over the training node features, which are treated essentially as iid tabular data. BGNN then concatenates the GBDT predictions with the original node features and uses these as inputs to a GNN model. Next, the GNN is trained with $l$ steps of gradient descent to optimize the network parameters as well as the input node features themselves, by backpropagating gradients entirely through the GNN into its input space. BGNN uses the difference between the optimized node features and the input node features as the new target value for the next decision tree in the subsequent boosting round. After GBDT has been updated with the addition of the new tree, it makes predictions that are used to augment the node features for subsequent use in the GNN, and the process repeats until some stopping criterion is met. While BGNN has thus far demonstrated promising empirical results, there is no guarantee of convergence or even of cost function descent. Our EBBS model addresses this issue through a fully integrated alternative described in Section 3, which enjoys the convergence guarantees provided in Section 4. Besides its favorable analytical properties, EBBS naturally supports a suite of graph-based regularizers promoting different properties described in Section 3.4, empirically achieves higher accuracy, and is less prone to overfitting without careful hyperparameter tuning, since the graph is accounted for via a simple regularizer instead of an additional parameterized GNN model. Appendix B provides further discussion of the connection between EBBS and BGNN, highlighting key technical differences. 3 END-TO-END INTEGRATION OF GRAPH PROPAGATION AND BOOSTING.
In this section we describe our method for combining GBDT with graph-aware propagation layers. We first describe a general family of propagation layers that mimic gradient descent steps along a principled graph-regularized loss, followed by the derivation of a bilevel optimization algorithm that exploits these layers. In terms of notation, we will henceforth use $m_i$ to reference the $i$-th row of an arbitrary matrix $M$. 3.1 GRAPH-AWARE PROPAGATION LAYERS INSPIRED BY GRADIENT DESCENT. Recently there has been a surge of interest in GNN architectures with layers defined with respect to the minimization of a principled class of graph-regularized energy functions (Klicpera et al., 2018; Ma et al., 2020; Pan et al., 2021; Yang et al., 2021; Zhang et al., 2020; Zhu et al., 2021). In this context, the basic idea is to associate each descent step along an optimization trajectory (e.g., a gradient descent step, power iteration, or related) with a GNN layer, such that in aggregate the forward GNN pass can be viewed as minimization of the original energy. Hence GNN training can benefit from the inductive bias afforded by energy function minimizers (or close approximations thereof) whose specific form can be controlled by trainable parameters. The most common instantiation of this framework, with roots in Zhou et al. (2004), begins with the energy
$$\ell_Z(Z) \triangleq \|Z - f(X;\theta)\|_F^2 + \lambda \, \mathrm{tr}\left[Z^\top L Z\right], \quad (2)$$
where $\lambda$ is a trade-off parameter, $Z \in \mathbb{R}^{n \times d}$ is a learnable embedding of $d$-dimensional features across $n$ nodes, and $f(X;\theta)$ denotes a base model (parameterized by $\theta$) that computes an initial target embedding based on the $d_0$-dimensional node features $X \in \mathbb{R}^{n \times d_0}$. We also define the graph Laplacian of $\mathcal{G}$ as $L \in \mathbb{R}^{n \times n}$, meaning $L = D - A$, where $D$ represents the degree matrix.
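The energy in (2) is easy to evaluate directly on a toy graph. The sketch below builds $L = D - A$ for a 4-node path, draws random embeddings $Z$ and a random stand-in for the base-model output $f(X;\theta)$, and numerically confirms that the trace term equals the sum of squared embedding differences across edges; the graph, $\lambda$, and all values are arbitrary illustrations.

```python
import numpy as np

# Toy graph: 4 nodes on a path. A is the adjacency matrix, L = D - A.
edges = [(0, 1), (1, 2), (2, 3)]
A = np.zeros((4, 4))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 2))          # node embeddings Z
F = rng.normal(size=(4, 2))          # stand-in for f(X; theta)

lam = 2.0
energy = np.sum((Z - F) ** 2) + lam * np.trace(Z.T @ L @ Z)

# The trace term equals the sum of squared differences across edges:
edge_sum = sum(np.sum((Z[i] - Z[j]) ** 2) for i, j in edges)
assert np.isclose(np.trace(Z.T @ L @ Z), edge_sum)
print(energy)
```

Minimizing this energy trades off fidelity to the base model against smoothness of $Z$ over the graph, exactly the balance described in the next paragraph.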
Intuitively, solutions of (2) reflect a balance between proximity to $f(X;\theta)$ and minimal quadratic differences across graph edges as enforced by the trace term $\mathrm{tr}[Z^\top L Z] = \sum_{\{i,j\} \in \mathcal{E}} \|z_i - z_j\|_2^2$; more general regularization factors are considered in Section 3.4. On the positive side, (2) can be solved in closed form via
$$\tilde{f}^*(X;\theta) \triangleq \arg\min_Z \ell_Z(Z) = P^* f(X;\theta), \quad \text{with} \quad P^* \triangleq (I + \lambda L)^{-1}. \quad (3)$$
However, for large graphs the requisite inverse is not practically feasible to compute, and instead iterative approximations are preferable. To this end, we may initialize as $Z^{(0)} = f(X;\theta)$, and then proceed to iteratively descend in the direction of the negative gradient. Given that
$$\frac{\partial \ell_Z(Z)}{\partial Z} = 2\lambda L Z + 2Z - 2f(X;\theta), \quad (4)$$
the $k$-th iteration of gradient descent becomes
$$Z^{(k)} = Z^{(k-1)} - \alpha \left[ (\lambda L + I) Z^{(k-1)} - f(X;\theta) \right], \quad (5)$$
where $\alpha/2$ serves as the effective step size. Given that $L$ is generally sparse, computation of (5) can leverage efficient sparse matrix multiplications, and we may also introduce modifications such as Jacobi preconditioning to speed convergence (Axelsson, 1996; Yang et al., 2021). Furthermore, based on well-known properties of gradient descent, if $k$ is sufficiently large and $\alpha$ is small enough, then
$$\tilde{f}^*(X;\theta) \approx \tilde{f}^{(k)}(X;\theta) \triangleq P^{(k)}[f(X;\theta)], \quad (6)$$
where the operator $P^{(k)}(\cdot)$ computes $k$ gradient steps via (5). The structure of these propagation steps, as well as related variants based on normalized modifications of gradient descent, equates to principled GNN layers, such as those used by GCN (Kipf & Welling, 2016), APPNP (Klicpera et al., 2018), and many others, which can be trained within a broader bilevel optimization framework as described next.
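The equivalence between the closed-form solution (3) and the iterative propagation (5) can be verified numerically on a small graph. In this sketch the 5-node cycle, $\lambda$, $\alpha$, and the iteration count are arbitrary choices; the step size only needs to be small enough for the iteration to converge.

```python
import numpy as np

# Laplacian of a 5-node cycle: L = D - A.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
A = np.zeros((5, 5))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A

rng = np.random.default_rng(1)
F = rng.normal(size=(5, 3))          # stand-in for f(X; theta)
lam, alpha = 1.0, 0.2                # alpha small enough for convergence

# Closed form (3): Z* = (I + lam*L)^{-1} F, computed via a linear solve.
Z_star = np.linalg.solve(np.eye(5) + lam * L, F)

# Iterative propagation (5), initialized at Z^(0) = F.
Z = F.copy()
for _ in range(500):
    Z = Z - alpha * ((lam * L + np.eye(5)) @ Z - F)

print(np.max(np.abs(Z - Z_star)))    # gap shrinks toward zero
```

For large sparse graphs only the iterative form is practical, since each step is a sparse matrix-vector product rather than a dense inverse.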
The authors propose a new method for integrating graph-based models with boosting. It follows the typical scheme of residuals and weak learners, but adds a step in which information is propagated across the graph. The approach is also simple, as no GNNs or other auxiliary models are required. The authors further show that the meta-loss they introduce guarantees convergence under moderate assumptions. According to the reported experiments, the proposed model outperforms the current state of the art in the considered domain.
Convergent Boosted Smoothing for Modeling Graph Data with Tabular Node Features
In this paper, the authors present a new approach that combines boosted decision tree classifiers with a graph propagation model, which is important for handling tabular input data. The approach casts graph propagation as an optimization problem in which the input node features are generated by boosted decision trees. The gradient can be taken in function space to learn decision trees that minimize a unified loss. The final algorithm is shown to minimize the unified loss in a principled manner. Superior performance is demonstrated over the existing BGNN model.
Softmax Gradient Tampering: Decoupling the Backward Pass for Improved Fitting
1 INTRODUCTION. Smooth gradient flow is key to the successful convergence of deep neural networks. Batch Normalization (Ioffe & Szegedy, 2015), Weight Standardization (Qiao et al., 2019), and Group Normalization (Wu & He, 2018) are all normalization techniques that smooth the gradient landscape in the backward pass. Batch normalization smooths the loss and gradient surfaces by limiting their Lipschitzness (Santurkar et al., 2018). Methods that smooth gradients by manipulating the label distribution include label smoothing (Szegedy et al., 2016) and mixup (Zhang et al., 2017). In this paper, we manipulate the gradients in an unconventional way to impose smooth gradients in the backward pass. This technique belongs to a family of Gradient Tampering methods and is called Softmax Gradient Tampering, since it is applied at the softmax stage. We demonstrate that Softmax Gradient Tampering improves convergence fit and therefore improves training and testing accuracy. Our main contributions in this paper are the following: 1. We propose Softmax Gradient Tampering, which improves the learning capacity of neural networks and allows for a much better fit on the training dataset. This is reflected in both training and validation accuracy metrics. 2. We theoretically analyze how softmax gradient tampering works from the perspective of gradient smoothness. 3. We provide comprehensive findings across different models and datasets. 2 METHOD: SOFTMAX GRADIENT TAMPERING. The softmax gradient tampering technique is described in this section. Softmax gradient tampering modifies the gradient in the backward pass; it does not alter the behaviour of the forward pass. Before we describe our method, we briefly explain the forward pass itself. A neural network with parameters $W$ generates $C$ logits denoted by $z$ for every input vector $x$, where $z = Wx$.
Then a set of probability values $p_i$ is generated from the logits using the softmax function, defined as $p_i = \frac{e^{z_i}}{\sum_{j=1}^{C} e^{z_j}}$, where $p_i$ and $z_i$ represent the predicted probability and the logit for the $i$-th class, respectively. Following this step, the loss function is invoked and the loss between the predicted probability values and the ground truth labels (which also form a probability distribution) is calculated. If the loss function is cross-entropy loss, then the value of the loss is given as $L = -\sum_{i=1}^{C} q_i \log(p_i)$, where $q_i$ is the ground truth label of the $i$-th class for a particular training example. By the backpropagation rule, the gradient of the loss with respect to the logits takes the form $\frac{\partial L}{\partial z_i} = p_i - q_i$. The softmax gradient tampering technique is now described. We introduce a hyperparameter $\alpha$, which takes a value in $[0, 1]$ and regulates the degree of tampering. The softmax gradient tampering method modifies the predicted probability values in the backward pass as follows:
$$p'_i = \frac{p_i^{\alpha}}{\sum_{j=1}^{C} p_j^{\alpha}}, \quad i = 1, \ldots, C, \quad 0 \le \alpha \le 1. \quad (1)$$
The above transformation changes the gradient of the loss with respect to the logits as follows:
$$\frac{\partial L}{\partial z_i} = p'_i - q_i. \quad (2)$$
The rest of the backward pass proceeds as usual. We denote the original probability distribution as $P$ (with values $p_i$ at the $i$-th index) and the transformed distribution as $P'$ (with values $p'_i$ at the $i$-th index). A code snippet of the algorithm implemented using PyTorch (Paszke et al., 2019) is given in the appendix. In PyTorch, the available method to modify gradients in the backward pass is the register_hook function, which is called in the forward function. The backward-pass gradient profile is smoothed by the transformation of the probability distribution in equation 1.
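A minimal NumPy sketch of equations (1)-(2) is given below: it first verifies by finite differences that the untampered cross-entropy gradient is indeed $p_i - q_i$, then shows how the transform flattens a near-delta prediction. The example values and helper names are illustrative; the paper's actual implementation applies the transform through a PyTorch backward hook.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())              # shifted for numerical stability
    return e / e.sum()

def tamper(p, alpha):
    """Equation (1): p'_i = p_i**alpha / sum_j p_j**alpha."""
    pa = p ** alpha
    return pa / pa.sum()

# Untampered case: check dL/dz_i = p_i - q_i by finite differences.
rng = np.random.default_rng(0)
z = rng.normal(size=4)                   # logits, C = 4 classes
q = np.array([0.0, 1.0, 0.0, 0.0])       # one-hot ground truth
xent = lambda zz: -np.sum(q * np.log(softmax(zz)))
p = softmax(z)
eps = 1e-6
numeric = np.array([(xent(z + eps * np.eye(4)[i]) - xent(z - eps * np.eye(4)[i]))
                    / (2 * eps) for i in range(4)])
assert np.allclose(p - q, numeric, atol=1e-5)

# Tampered case: a confident (near-delta) prediction is flattened, and the
# tampered gradient p' - q replaces p - q in the backward pass.
p_conf = np.array([0.90, 0.06, 0.03, 0.01])
for alpha in (1.0, 0.5, 0.25):
    p_t = tamper(p_conf, alpha)
    print(alpha, np.round(p_t, 3))       # mass moves away from the top class
```

At $\alpha = 1$ the transform is the identity, and as $\alpha \to 0$ the distribution $P'$ approaches uniform, which is the redistribution effect discussed in section 3.5.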
Because the gradient at the softmax layer's input ($\partial L / \partial z_i$) is directly proportional to the predicted probabilities $p_i$, smoothing out the probability distribution also smooths out the gradient profile. We explore the effect of this change from a theoretical standpoint in section 5. In section 3, we offer empirical data and experiments to support the proposed method. 3 EXPERIMENTS. 3.1 GRID SEARCH EXPERIMENTS. We do a thorough hyperparameter search on the ResNet-18 (He et al., 2016) architecture and the ImageNet-1K dataset, varying the value of $\alpha$ from 0.1 to 0.9. We begin with a coarse grid search in 0.1 increments. The optimum values of $\alpha$ are found to be between 0.2 and 0.4. We then do a fine grid search in 0.05 increments between 0.2 and 0.4. We find that the optimum performance for ResNet-18 is achieved at $\alpha = 0.25$. By repeating the grid search for ResNet-50 (He et al., 2016) in the range 0.2 to 0.4 with increments of 0.05, we discover that its optimum value of $\alpha$ is 0.3. We utilise a batch size of 1024 over four V100 GPUs for each grid search experiment. Each experiment is carried out with a 50 epoch budget. In the first 2 epochs, we raise the learning rate linearly from $4 \times 10^{-4}$ to 0.4. The learning rate is then decayed using a cosine scheduler (a single schedule with no restarts) (Loshchilov & Hutter, 2016) from 0.4 at the start of the 2nd epoch to $4 \times 10^{-4}$ at the end of the 46th epoch. We continue training for the next 4 epochs at a constant learning rate of $4 \times 10^{-4}$ (cooldown phase). The plots are shown in Figure 2. We also do a grid search on the ResNet-20 (He et al., 2016) architecture and the CIFAR-10 dataset, with an epoch budget of 360. However, we get no advantage for any value of $\alpha$. This is covered in more detail in section 3.3.
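The warmup/cosine/cooldown schedule described above can be sketched as a small function of the epoch index. The epoch granularity and exact boundary handling here are assumptions for illustration; the paper's runs use a standard cosine scheduler without restarts.

```python
import math

def lr_at(epoch, budget=50, warmup=2, cooldown=4, lr_min=4e-4, lr_max=0.4):
    """Linear warmup -> single cosine decay -> constant cooldown."""
    decay_end = budget - cooldown
    if epoch < warmup:                       # linear warmup from lr_min to lr_max
        return lr_min + (lr_max - lr_min) * epoch / warmup
    if epoch < decay_end:                    # cosine decay, no restarts
        t = (epoch - warmup) / (decay_end - warmup)
        return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))
    return lr_min                            # constant cooldown phase

print(lr_at(0), lr_at(2), lr_at(46), lr_at(49))
```

The schedule peaks at 0.4 when warmup ends (epoch 2), reaches $4 \times 10^{-4}$ by epoch 46, and holds that value through the 4-epoch cooldown.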
Figure 2(a, b) shows a pattern in which training accuracy rises as $\alpha$ is lowered from 1 to 0, indicating the method's potential as an algorithm that drives the network towards better learnt feature representations and therefore a better fit on the training dataset. This is due to the nature of the transform, which smooths the gradients. This line of reasoning is expanded upon in section 5. The generalization gap is lowest when $\alpha = 0.2$, but this comes at the expense of test accuracy, so we disregard it. Values of $\alpha < 0.2$ cause issues with half-precision training, therefore we choose $\alpha = 0.25$. In our trials with ResNet-18 we find an optimal value of $\alpha = 0.25$, while in our experiments with ResNet-50 we find an optimal value of $\alpha = 0.3$. We port these values to all 100 epoch schedules, as well as to the Squeeze-and-Excitation versions of the ResNets. We do find a peak at $\alpha = 0.1$, but such low $\alpha$ values are typically unstable in most subsequent tests, especially with mixed precision or half-precision arithmetic. According to our observations in section 3.5, low values of $\alpha$ frequently result in high logit norms, which directly cause floating point problems (Micikevicius et al., 2017). 3.2 EXPERIMENT ON THE IMAGENET DATASET. We perform experiments on the ImageNet-1K dataset (Deng et al., 2009), which has 1.28 million images divided into 1000 classes. ImageNet-1K's test set consists of 5000 images per class. We train models based on several ResNets, including the Squeeze-and-Excitation versions. We observe a consistent 0.1% to 0.15% rise in ResNet-18's test accuracy. What is more interesting is that we see comparable improvements in training accuracy. This shows that softmax gradient tampering is not a regularization technique; it simply allows the network to better fit the dataset.
In section 4, we show how combining softmax gradient tampering with regularization techniques such as label smoothing may improve network performance and reduce the generalization gap. We find significantly larger improvements in experiments with ResNet-50: a 0.5% increase over the step scheduling baseline and a 0.3% increase over the cosine scheduling baseline. In experiments with Squeeze-and-Excitation networks, we use the cosine scheduler. In these experiments, we find consistent improvements across the board, with SEResNet-18 (Hu et al., 2018) gaining 0.4% and SEResNet-50 (Hu et al., 2018) gaining 0.7%. As with ResNet-18, we find continuous improvements in training performance because the networks are simply training better and arriving at better optima at convergence. ResNet-101 training provides a modest advantage of 0.35%. All models are trained on four V100 GPUs with a batch size of 1024. We utilise the same set of hyperparameters in each experiment, which are as follows: a 100 epoch budget, with a 5 epoch linear warmup phase beginning with a learning rate of $4 \times 10^{-4}$ and ending with a peak learning rate of 0.4. In our studies, we employ either a step scheduler (dividing the learning rate by 10 at the 30th, 60th, and 90th epochs) or a cosine decay scheduler. We observe a consistent improvement in test metrics when utilising cosine scheduling across all of our experiments. Finally, towards the end of training, we impose a 10 epoch cooldown phase in which the network is trained with a constant learning rate of $4 \times 10^{-4}$. Other hyperparameters include a weight decay of $5 \times 10^{-4}$ and a momentum of 0.9 for the SGD Nesterov optimizer. All of our models are trained using mixed floating point precision. The findings are shown in Table 1. 3.3 EXPERIMENT ON CIFAR DATASETS. The CIFAR-10 dataset Krizhevsky et al.
( 2014 ) is divided into 10 classes , each with 5000 datapoints for training and 1000 datapoints for testing . In comparison to ImageNet-1K , it is a much smaller dataset . The most frequent situation seen when training models on CIFAR-10 is that the models fully overfit the training dataset , achieving almost 100 % accuracy . Given that the proposed method ( section 2 ) is not intended as a regularization technique , no improvements in the training of such datasets are anticipated . We see the same thing in Figure 2c , where we show the results of a hyperparameter grid search . Different values of α do not impair ResNet-20 ’ s performance on CIFAR-10 , but neither do they improve on the baseline . 3.4 TOWARDS REMOVING NORMALIZATION . Experiments are carried out on non-Batch Normalized networks . We specifically utilize the ResNet18 model and train it on ImageNet-1K without any of the BatchNorm layers . We see a substantial increase in test accuracy of about 1 % . Similarly to prior findings , we get a 1.1 % increase in training accuracy , leading us to infer that when softmax gradient tampering is used , the network ’ s convergence optima is significantly superior . Table 2 shows the findings . 3.5 EFFECT ON LOSS , LOGITS AND OTHER METRICS . As seen in Figure ( 3a ) , the logit norm increases as α falls from 1 to 0 . In the initial training phase , softmax gradient tampering causes the gradient in both the valid and wrong classes to rise as the network misclassifies majority of the examples . Due to the non-linear nature of the softmax function , the predicted probability distribution may frequently be close to a delta distribution , even in instances of misclassifications . But softmax gradient tampering reshapes the predicted probability distribution . It redistributes probability from the confident classes to the less confident ones . Gradient rises in all classes . 
We also see that the final value of the loss rises as α drops , and that the logit norm and the final value of the loss are linearly correlated Figure ( 3c ) . ResNet-18 and ResNet-50 per epoch training and test accuracies with and without softmax gradient manipulation are shown in Figure ( 4a , b ) . In all instances , we observe a gain . This means that softmax gradient tampering forces the network to learn better representations .
In this paper, the authors propose a technique called Softmax Gradient Tampering, which transforms the predicted output class probabilities to improve training performance of neural networks. The authors show that the proposed technique results in a smoother output probability distribution for lower values of the hyperparameter $\alpha$. On standard benchmarks on the ImageNet and CIFAR-10 datasets, the authors show that the proposed method results in better training accuracy performance as well as improved generalization performance.
Softmax Gradient Tampering: Decoupling the Backward Pass for Improved Fitting
1 INTRODUCTION. Smooth gradient flow is key to the successful convergence of deep neural networks. Batch Normalization Ioffe & Szegedy (2015), Weight Standardization Qiao et al. (2019), and Group Normalization Wu & He (2018) are all normalization techniques that smooth the gradient landscape in the backward pass. Batch normalization smoothes the loss and gradient surfaces by limiting their Lipschitzness Santurkar et al. (2018). Methods that smooth gradients by manipulating the label distribution include label smoothing Szegedy et al. (2016) and mixup Zhang et al. (2017). In this paper, we manipulate the gradients in an unconventional way to impose smooth gradients in the backward pass. The technique belongs to a family of Gradient Tampering methods and is called Softmax Gradient Tampering since it is applied at the softmax stage. We demonstrate that Softmax Gradient Tampering improves the fit at convergence and therefore improves training and testing accuracy. Our main contributions in this paper are the following: 1. We propose Softmax Gradient Tampering, which improves the learning capacity of neural networks and allows for a much better fit on the training dataset. This is reflected in both training and validation accuracy metrics. 2. We theoretically analyze how softmax gradient tampering works from the perspective of gradient smoothness. 3. We provide comprehensive findings across different models and datasets. 2 METHOD: SOFTMAX GRADIENT TAMPERING. This section describes the softmax gradient tampering technique. Softmax gradient tampering modifies the gradient in the backward pass; it does not alter the behaviour of the forward pass. Before we describe our method, we briefly explain the forward pass itself. A neural network with parameters W generates C logits, denoted z, for every input vector x, where z = Wx.
A set of probability values p_i is then generated from the logits using the softmax function, defined as

p_i = \frac{e^{z_i}}{\sum_{j=1}^{C} e^{z_j}},

where p_i and z_i are the predicted probability and the logit for the i-th class, respectively. Next, the loss between the predicted probability values and the ground-truth labels (which also form a probability distribution) is calculated. For the cross-entropy loss, L = -\sum_{i=1}^{C} q_i \log(p_i), where q_i is the ground-truth label of the i-th class for a particular training example. By the backpropagation rule, the gradient of the loss with respect to the logits takes the form \partial L / \partial z_i = p_i - q_i. We now describe the softmax gradient tampering technique. We introduce a hyperparameter α, which takes a value in [0, 1] and regulates the degree of tampering. The softmax gradient tampering method modifies the predicted probability values in the backward pass as follows:

p'_i = \frac{p_i^{\alpha}}{\sum_{j=1}^{C} p_j^{\alpha}}, \quad i = 1, \ldots, C, \quad 0 \le \alpha \le 1. \quad (1)

This transformation changes the gradient of the loss with respect to the logits as follows:

\frac{\partial L}{\partial z_i} = p'_i - q_i. \quad (2)

The rest of the backward pass proceeds as usual. We denote the original probability distribution as P (with values p_i at the i-th index) and the transformed distribution as P' (with values p'_i at the i-th index). A code snippet of the algorithm implemented using PyTorch Paszke et al. (2019) is given in the appendix. In PyTorch, gradients can be modified in the backward pass using the register_hook function, which is called in the forward function. The transformation of the probability distribution in equation 1 smooths the gradient profile of the backward pass.
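The appendix snippet is not reproduced here, but equations (1) and (2) can be sketched in plain Python as follows; this is a minimal illustration rather than the authors' implementation, and the function names are ours:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def tamper(p, alpha):
    """Equation (1): p'_i = p_i**alpha / sum_j p_j**alpha."""
    pa = [v ** alpha for v in p]
    s = sum(pa)
    return [v / s for v in pa]

def grad_wrt_logits(p, q, alpha):
    """Equation (2): dL/dz_i = p'_i - q_i, with q a one-hot label."""
    pp = tamper(p, alpha)
    return [pi - qi for pi, qi in zip(pp, q)]
```

With α = 1 the transform is the identity and the standard softmax gradient is recovered; smaller α flattens the distribution used in the backward pass while the forward pass is untouched.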
Because the gradient at the softmax layer's input (\partial L / \partial z_i) is directly proportional to the predicted probabilities p_i, smoothing the probability distribution also smooths the gradient profile. We explore the effect of this change from a theoretical standpoint in section 5. In section 3, we offer empirical data and experiments in support of the proposed method. 3 EXPERIMENTS. 3.1 GRID SEARCH EXPERIMENTS. We perform a thorough hyperparameter search on the ResNet-18 He et al. (2016) architecture and the ImageNet-1K dataset, varying the value of α from 0.1 to 0.9. We begin with a coarse grid search in 0.1 increments and find the optimal values of α to lie between 0.2 and 0.4. We then run a fine grid search in 0.05 increments between 0.2 and 0.4 and find that ResNet-18 performs best at α = 0.25. Repeating the fine grid search for ResNet-50 He et al. (2016) in the range 0.2 to 0.4, we find an optimal value of α = 0.3. We use a batch size of 1024 over four V100 GPUs for each grid search experiment, with a 50-epoch budget. In the first 2 epochs, we raise the learning rate linearly from 4×10⁻⁴ to 0.4. The learning rate is then decayed with a cosine scheduler (a single schedule with no restarts) Loshchilov & Hutter (2016) from 0.4 at the start of the 2nd epoch to 4×10⁻⁴ at the end of the 46th epoch. We continue training for the remaining 4 epochs at a constant learning rate of 4×10⁻⁴ (cooldown phase). The plots are shown in Figure 2. We also run a grid search on the ResNet-20 He et al. (2016) architecture and the CIFAR-10 dataset with an epoch budget of 360, but observe no advantage for any value of α. This is covered in more detail in section 3.3.
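The warmup/cosine/cooldown schedule used in the grid search can be sketched as a single function of the epoch index; the epoch boundaries come from the setup above, while the function name and signature are our own:

```python
import math

def lr_at(epoch, warmup=2, decay_end=46, total=50, lr_min=4e-4, lr_max=0.4):
    """Piecewise schedule: linear warmup, single cosine decay (no restarts),
    then a constant cooldown at lr_min until the epoch budget runs out."""
    if epoch < warmup:  # linear warmup from lr_min to lr_max
        return lr_min + (lr_max - lr_min) * epoch / warmup
    if epoch < decay_end:  # cosine decay from lr_max back down to lr_min
        t = (epoch - warmup) / (decay_end - warmup)
        return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))
    return lr_min  # cooldown phase
```

In practice a framework scheduler (e.g. a cosine-annealing scheduler) would be used; this sketch only makes the three phases explicit.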
Figure (2a, b) shows a pattern in which training accuracy rises as α is lowered from 1 to 0, indicating the method's potential to drive the network towards better learned feature representations and therefore a better fit on the training dataset. This is due to the nature of the transform, which smooths the gradients; this line of reasoning is expanded upon in section 5. The generalization gap is lowest at α = 0.2, but this comes at the expense of test accuracy, so we disregard it. Values of α < 0.2 cause issues with half-precision training, so we choose α = 0.25. In our trials with ResNet-18 we find an optimal value of α = 0.25, while in our experiments with ResNet-50 the optimal value is α = 0.3. We carry these values over to all 100-epoch schedules, as well as to the Squeeze-and-Excitation versions of the ResNets. We do find a peak at α = 0.1, but such low α values are typically unstable in most subsequent tests, especially with mixed- or half-precision arithmetic. According to our observations in section 3.5, low values of α frequently result in high logit norms, which directly cause floating-point problems Micikevicius et al. (2017). 3.2 EXPERIMENT ON THE IMAGENET DATASET. We perform experiments on the ImageNet-1K dataset Deng et al. (2009), which has 1.28 million training images divided into 1000 classes; the held-out evaluation set contains 50 images per class. We train models based on several ResNets, including the Squeeze-and-Excitation versions. We observe a consistent 0.1% to 0.15% rise in ResNet-18's test accuracy. More interestingly, we see comparable improvements in training accuracy, which shows that softmax gradient tampering is not a regularization technique: it simply allows the network to fit the dataset better.
In section 4, we show how combining softmax gradient tampering with regularization techniques such as label smoothing can improve network performance and reduce the generalization gap. We find significantly larger improvements in experiments with ResNet-50: a 0.5% increase over the step-scheduling baseline and a 0.3% increase over the cosine-scheduling baseline. In experiments with Squeeze-and-Excitation networks, we use the cosine scheduler and find consistent improvements across the board, with SEResNet-18 Hu et al. (2018) gaining 0.4% and SEResNet-50 Hu et al. (2018) gaining 0.7%. As with ResNet-18, we find continuous improvements in training performance because the networks simply train better and arrive at better optima at convergence. ResNet-101 training yields a modest advantage of 0.35%. All models are trained on four V100 GPUs with a batch size of 1024. We use the same set of hyperparameters in every experiment: a 100-epoch budget and a 5-epoch linear warmup phase beginning at a learning rate of 4×10⁻⁴ and ending at a peak learning rate of 0.4. We employ either a step scheduler (dividing the learning rate by 10 at the 30th, 60th, and 90th epochs) or a cosine decay scheduler, and observe a consistent improvement in test metrics with cosine scheduling across all of our experiments. Finally, towards the end of training, we add a 10-epoch cooldown phase in which the network is trained at a constant learning rate of 4×10⁻⁴. Other hyperparameters include a weight decay of 5×10⁻⁴ and a momentum of 0.9 for the SGD Nesterov optimizer. All of our models are trained with mixed floating-point precision. The findings are shown in Table 1. 3.3 EXPERIMENT ON CIFAR DATASETS. The CIFAR-10 dataset Krizhevsky et al.
(2014) is divided into 10 classes, each with 5000 datapoints for training and 1000 for testing; it is a much smaller dataset than ImageNet-1K. The most frequent situation when training models on CIFAR-10 is that the models fully overfit the training dataset, achieving almost 100% training accuracy. Given that the proposed method (section 2) is not intended as a regularization technique, no improvements on such datasets are anticipated. We see exactly this in Figure 2c, where we show the results of a hyperparameter grid search: different values of α do not impair ResNet-20's performance on CIFAR-10, but neither do they improve on the baseline. 3.4 TOWARDS REMOVING NORMALIZATION. We also carry out experiments on networks without Batch Normalization. Specifically, we train a ResNet-18 model on ImageNet-1K without any BatchNorm layers and see a substantial increase in test accuracy of about 1%. Similarly to prior findings, we get a 1.1% increase in training accuracy, leading us to infer that with softmax gradient tampering the network converges to a significantly better optimum. Table 2 shows the findings. 3.5 EFFECT ON LOSS, LOGITS AND OTHER METRICS. As seen in Figure (3a), the logit norm increases as α falls from 1 to 0. In the initial training phase, softmax gradient tampering raises the gradient for both the correct and the wrong classes, since the network misclassifies the majority of the examples. Due to the non-linear nature of the softmax function, the predicted probability distribution is frequently close to a delta distribution, even in cases of misclassification. Softmax gradient tampering reshapes the predicted probability distribution: it redistributes probability mass from the confident classes to the less confident ones, so the gradient rises across the under-confident classes.
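The redistribution claim can be checked numerically with the transform of equation (1); the toy numbers below are ours, chosen to mimic a near-delta, confidently wrong prediction:

```python
def tamper(p, alpha):
    """Equation (1): redistribute probability mass, p'_i proportional to p_i**alpha."""
    pa = [v ** alpha for v in p]
    s = sum(pa)
    return [v / s for v in pa]

# A confident misclassification: true class is index 1, but nearly all
# probability mass sits on class 0 (close to a delta distribution).
p = [0.98, 0.01, 0.01]
q = [0, 1, 0]  # one-hot label

p_t = tamper(p, 0.25)

# Gradient dL/dz_i = p'_i - q_i: tampering shrinks the dominant entry and
# enlarges the small ones, spreading learning signal across classes.
grad_plain = [pi - qi for pi, qi in zip(p, q)]
grad_tampered = [pi - qi for pi, qi in zip(p_t, q)]
```

With α = 0.25, the dominant probability drops (roughly 0.98 to about 0.61 here) while the under-confident entries grow, so the positive gradient on the under-confident wrong class increases.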
We also see that the final value of the loss rises as α drops, and that the logit norm and the final loss value are linearly correlated, Figure (3c). Per-epoch training and test accuracies for ResNet-18 and ResNet-50, with and without softmax gradient tampering, are shown in Figure (4a, b). In all instances we observe a gain, suggesting that softmax gradient tampering pushes the network to learn better representations.
This paper proposes softmax gradient tampering, which modifies the gradients in the backward pass to enhance accuracy. The predicted probability values are transformed using a power-based probability transformation, which smooths the gradient profile. The experimental results show a slight increase in accuracy.
Calibration Regularized Training of Deep Neural Networks using Kernel Density Estimation
1 INTRODUCTION. Deep neural networks have shown tremendous success in classification tasks, regularly being the best performing models in terms of accuracy. However, they are also known to make overconfident predictions (Guo et al., 2017), which is particularly problematic in safety-critical applications such as medical diagnosis or autonomous driving. In many real-world applications we therefore care not only about predictive performance but also about the trustworthiness of the prediction; that is, we are interested in accurate predictions with robust uncertainty estimates. To this end, we want our models to be uncertainty calibrated, which means, for instance, that among all cells predicted to be cancerous with probability 0.8, a fraction of 80% in fact belong to a malignant tumor. Being calibrated, however, does not imply that the classifier achieves good accuracy: a classifier that always predicts the marginal distribution of the target class is calibrated, but will not be very useful in practice. Likewise, good predictive performance does not ensure calibration. In particular, for a broad class of loss functions, risk minimization leads to asymptotically Bayes-optimal classifiers (Bartlett et al., 2006), but there is no guarantee that these are calibrated, even in the asymptotic limit. Therefore, we consider minimizing the risk plus a term that penalizes miscalibration, i.e., Risk + λ · CalibrationError. For parameter values λ > 0, this pushes the classifier towards a calibrated model while maintaining similar accuracy. The existence of such a λ > 0 is suggested by the fact that there always exists at least one Bayes-optimal classifier that is calibrated, namely P(y|x). To optimize the risk and the calibration error jointly, we propose a differentiable and consistent estimator of the expected Lp calibration error based on kernel density estimates (KDEs).
In particular, we use a Beta kernel in binary classification tasks and a Dirichlet kernel in the multiclass setting, as these kernels are the natural choices for density estimation over a probability simplex. Our Dirichlet-kernel-based estimator allows for the estimation of canonical calibration, which is the strongest notion of multiclass calibration as it implies calibration of the whole probability vector (Bröcker, 2009; Appice et al., 2015; Vaicenavicius et al., 2019). By contrast, most other state-of-the-art methods only achieve weaker versions of multiclass calibration, namely top-label (Guo et al., 2017) and marginal or class-wise calibration (Kull et al., 2019). Top-label calibration only considers the score for the predicted class, while for marginal calibration the multiclass problem is split into K one-vs-all binary ones, each of which is required to be calibrated according to the definition of binary calibration. In many applications marginal and canonical calibration are preferable to top-label calibration, since we often care about having reliable uncertainty estimates for more than just one class per prediction. For instance, in medical diagnosis we do not just care about the most likely disease a patient might have, but also about the probabilities of other diseases. Our contributions can be summarized as follows: 1. We develop a trainable calibration error objective using Dirichlet kernel density estimates, which can be minimized alongside any loss function in the existing batch stochastic gradient descent framework. 2. We propose to use our estimator to evaluate canonical calibration. Due to the scaling properties of Dirichlet kernel density estimation, and the tendency for probability mass to concentrate in a relatively small number of classes, this becomes feasible in cases that cannot be handled with a binned estimator. 3.
We show on a variety of network architectures and two datasets that DNNs trained alongside an estimator of the calibration error achieve competitive results both on existing metrics and on the proposed measure of canonical calibration. 2 RELATED WORK. Calibration of probabilistic predictors has long been studied in many fields. The topic gained attention in the deep learning community following the observation in Guo et al. (2017) that modern neural networks are poorly calibrated and tend to give overconfident predictions due to overfitting on the NLL loss. The surge of interest resulted in many calibration strategies, which fall into two general categories discussed below. Post-hoc calibration strategies learn a calibration map for the predictions of an already trained predictor. For instance, Platt scaling (Platt, 1999) fits a logistic regression model on top of the logit outputs of the model. A special case of Platt scaling that fits a single scalar, called temperature, has been popularized by Guo et al. (2017) as an accuracy-preserving, easy-to-implement and effective method to improve calibration. However, it has the undesired consequence of clamping the high confidence scores of accurate predictions (Kumar et al., 2018). Other approaches for post-hoc calibration include histogram binning (Zadrozny & Elkan, 2001), isotonic regression (Zadrozny & Elkan, 2002), and Bayesian binning into quantiles (Naeini & Cooper, 2015). Trainable calibration strategies integrate a differentiable calibration measure into the training objective. One of the earliest approaches is regularization by penalizing low-entropy predictions (Pereyra et al., 2017). Similarly to temperature scaling, entropy regularization has been shown to needlessly suppress the high confidence scores of correct predictions (Kumar et al., 2018).
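Temperature scaling, as referenced above, is a one-parameter calibration map; a minimal sketch follows, assuming a fixed toy temperature (in practice T would be fit on a held-out validation set by minimizing NLL):

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def temperature_scale(logits, T):
    """Post-hoc calibration map: divide logits by a scalar T > 0.
    T > 1 softens overconfident predictions; T = 1 is a no-op."""
    return softmax([z / T for z in logits])
```

Because dividing all logits by the same positive scalar does not change their ordering, the predicted class is unchanged, which is why the method is accuracy-preserving.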
Another popular strategy is MMCE (Maximum Mean Calibration Error) (Kumar et al., 2018), where the entropy regularizer is replaced by a kernel-based surrogate for the calibration error that can be optimized alongside NLL. Label smoothing (Szegedy et al., 2015; Müller et al., 2020), i.e., training models with a weighted mixture of the labels instead of one-hot vectors, has also been shown to improve model calibration. Liang et al. (2020) propose to add the difference between predicted confidence and accuracy as an auxiliary term to the cross-entropy loss. Focal loss (Mukhoti et al., 2020; Lin et al., 2018) has recently been empirically shown to produce better calibrated models than many of the alternatives, but does not estimate a clear quantity related to the calibration error. Kernel density estimation (Parzen, 1962; Rosenblatt, 1956) is a non-parametric method to estimate a probability density function from a finite sample. Zhang et al. (2020) propose a KDE-based estimator of the calibration error for measuring calibration performance. However, they use the triweight kernel, which has a limited support interval and is therefore applicable to binary classification, but has no natural extension to higher-dimensional simplexes, in contrast to the Dirichlet kernel that we consider here. As a result, they consider an unnatural proxy to the marginal calibration error, which does not yield a consistent estimator. 3 METHODS. The most commonly used loss functions are designed to achieve consistency in the sense of Bayes optimality under risk minimization; however, they do not guarantee calibration, neither for finite samples nor in the asymptotic limit. Since we are interested in models f that are both accurate and calibrated, we consider the following optimization problem bounding the calibration error CE(f):

f = \arg\min_{f \in \mathcal{F}} \mathrm{Risk}(f), \quad \text{s.t.} \quad \mathrm{CE}(f) \le B \quad (1)

for some B > 0, and its associated Lagrangian

f = \arg\min_{f \in \mathcal{F}} \big( \mathrm{Risk}(f) + \lambda \cdot \mathrm{CE}(f) \big). \quad (2)

We measure (mis-)calibration in terms of the Lp calibration error. To this end, let (Ω, A, P) be a probability space, and let X = R^d and Y = {0, 1, ..., K}. Let x : Ω → X and y : Ω → Y be random variables, with realizations denoted by subscripts. Furthermore, let f : X → Δ^K be a decision function, where Δ^K denotes the K-dimensional simplex, as obtained e.g. from the output of a final softmax layer in a neural network. Definition 3.1 (Calibration error, (Naeini et al., 2015; Kumar et al., 2019; Wenger et al., 2020)). The Lp calibration error of f is:

\mathrm{CE}_p(f) = \Big( \mathbb{E}\big[ \big\| \mathbb{E}[y \mid f(x)] - f(x) \big\|_p^p \big] \Big)^{1/p}. \quad (3)

We note that we consider multiclass calibration, and that f(x) and the conditional expectation in Equation 3 therefore map to points on a probability simplex. We say that a classifier f is perfectly calibrated if CE_p(f) = 0. Kumar et al. (2018) have also considered a minimization problem similar to Equation 2. Instead of CE_p they use a metric called maximum mean calibration error (MMCE) that is 0 if and only if CE_p = 0. However, it is unclear how MMCE relates to the canonical multiclass setting or to the norm parameter p for non-zero CE_p. In order to optimize Definition 3.1 directly, we need to perform density estimation over the probability simplex to empirically compute the conditional expectation. In the binary setting this has traditionally been done with binned estimates (Naeini et al., 2015; Guo et al., 2017; Kumar et al., 2019). However, binning is not differentiable w.r.t. the function f and cannot be incorporated into a gradient-based training procedure. Furthermore, binned estimates suffer from the curse of dimensionality and do not have a practical extension to multiclass settings.
A natural choice for a differentiable kernel density estimator in the binary case is a kernel based on the Beta distribution, and the extension to the multiclass case is given by the Dirichlet distribution. Hence, we consider an estimator of CE_p based on Beta and Dirichlet kernel density estimates in the binary and multiclass settings, respectively. We require this estimator to be consistent and differentiable so that we can train according to Equation 2. The estimator is given by:

\widehat{\mathrm{CE}_p}(f)^p = \frac{1}{n} \sum_{h=1}^{n} \Big\| \widehat{\mathbb{E}}[y \mid f(x)]\big|_{f(x_h)} - f(x_h) \Big\|_p^p, \quad (4)

where \widehat{\mathbb{E}}[y \mid f(x)]\big|_{f(x_h)} denotes \widehat{\mathbb{E}}[y \mid f(x)] evaluated at f(x) = f(x_h). If P_{x,y} has a probability density p_{x,y} with respect to the product of the Lebesgue and counting measures, we can write p_{x,y}(x_i, y_i) = p_{y|x=x_i}(y_i) \, p_x(x_i). We then define the estimator of the conditional expectation as follows:

\mathbb{E}[y \mid f(x)] = \sum_{y_k \in Y} y_k \, p_{y|x=f(x)}(y_k) = \sum_{y_k \in Y} y_k \, \frac{p_{x,y}(f(x), y_k)}{p_x(f(x))} \quad (5)

\approx \frac{\sum_{i=1}^{n} k(f(x); f(x_i)) \, y_i}{\sum_{i=1}^{n} k(f(x); f(x_i))} =: \widehat{\mathbb{E}}[y \mid f(x)], \quad (6)

where k is the kernel of a kernel density estimate evaluated at point x_i. Proposition 3.2. \widehat{\mathbb{E}}[y \mid f(x)] is a pointwise consistent estimator of \mathbb{E}[y \mid f(x)], that is:

\lim_{n \to \infty} \frac{\sum_{i=1}^{n} k(f(x); f(x_i)) \, y_i}{\sum_{i=1}^{n} k(f(x); f(x_i))} = \sum_{y_k \in Y} y_k \, \frac{p_{x,y}(f(x), y_k)}{p_x(f(x))}. \quad (7)

Proof. By the consistency of kernel density estimators (Silverman, 1986; Chen, 1999; Ouimet & Tolosana-Delgado, 2021), for all f(x) ∈ (0, 1), \frac{1}{n} \sum_{i=1}^{n} k(f(x); f(x_i)) \, y_i \to \sum_{y_k \in Y} y_k \, p_{x,y}(f(x), y_k) and \frac{1}{n} \sum_{i=1}^{n} k(f(x); f(x_i)) \to p_x(f(x)) as n → ∞. The fact that the ratio of two convergent sequences converges to the ratio of their limits shows the result.
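The ratio in equation (6) is a kernel-weighted average of the labels. The sketch below illustrates it for the binary case with a Gaussian kernel chosen purely for simplicity (the paper's actual kernels are Beta and Dirichlet); all names are ours:

```python
import math

def gauss_kernel(u, v, h=0.05):
    """Illustrative Gaussian kernel; the paper uses Beta/Dirichlet kernels."""
    return math.exp(-((u - v) ** 2) / (2 * h * h))

def cond_expectation(t, scores, labels, h=0.05):
    """Equation (6): kernel-weighted average of the labels,
    estimating E[y | f(x) = t] in a binary setting."""
    w = [gauss_kernel(t, s, h) for s in scores]
    return sum(wi * yi for wi, yi in zip(w, labels)) / sum(w)
```

For example, if the scores clustered near 0.8 carry mostly positive labels, the estimate at t = 0.8 lands near the local fraction of positives, which is exactly the quantity compared against f(x) in the calibration error.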
Mean squared error in binary classification. As a first instantiation of our framework we consider a binary classification setting, with the mean squared error MSE(f) = E[(f(x) − y)²] as the risk function, jointly optimized with the L2 calibration error CE₂. Following Murphy (1973); Degroot & Fienberg (1983); Kuleshov & Liang (2015); Nguyen & O'Connor (2015), we decompose the MSE (full derivation in Appendix A) as:

\mathrm{MSE}(f) - \mathrm{CE}_2(f)^2 = \mathbb{E}\big[ (1 - \mathbb{E}[y \mid f(x)]) \, \mathbb{E}[y \mid f(x)] \big] \ge 0. \quad (8)

Similar to Equation 2, we consider the following optimization problem for some λ > 0:

f = \arg\min_{f \in \mathcal{F}} \big( \mathrm{MSE}(f) + \lambda \, \mathrm{CE}_2(f)^2 \big). \quad (9)

Using Equation 8 we rewrite:

\mathrm{MSE}(f) + \lambda \, \mathrm{CE}_2(f)^2 = (1 + \lambda)\,\mathrm{MSE}(f) - \lambda \big( \mathrm{MSE}(f) - \mathrm{CE}_2(f)^2 \big) \quad (10)

= (1 + \lambda)\,\mathrm{MSE}(f) - \lambda \, \mathbb{E}\big[ (1 - \mathbb{E}[y \mid f(x)]) \, \mathbb{E}[y \mid f(x)] \big]. \quad (11)

Rescaling Equation 11 by a factor of (1 + λ)⁻¹ and substituting γ = λ/(1 + λ) ∈ [0, 1):

f = \arg\min_{f \in \mathcal{F}} \big( \mathrm{MSE}(f) + \lambda \, \mathrm{CE}_2(f)^2 \big) = \arg\min_{f \in \mathcal{F}} \big( \mathrm{MSE}(f) - \gamma \, \mathbb{E}[ (1 - \mathbb{E}[y \mid f(x)]) \, \mathbb{E}[y \mid f(x)] ] \big) \quad (12)

= \arg\min_{f \in \mathcal{F}} \big( \mathrm{MSE}(f) + \gamma \, \mathbb{E}\big[ \mathbb{E}[y \mid f(x)]^2 \big] \big). \quad (13)

For optimization we wish to find an estimator of \mathbb{E}\big[\mathbb{E}[y \mid f(x)]^2\big]. Building upon Equation 6, a partially debiased estimator can be written as:¹

\widehat{\mathbb{E}\big[\mathbb{E}[y \mid f(x)]^2\big]} \approx \frac{1}{n} \sum_{h=1}^{n} \frac{ \big( \sum_{i \ne h} k(f(x_h); f(x_i)) \, y_i \big)^2 - \sum_{i \ne h} \big( k(f(x_h); f(x_i)) \, y_i \big)^2 }{ \big( \sum_{i \ne h} k(f(x_h); f(x_i)) \big)^2 - \sum_{i \ne h} \big( k(f(x_h); f(x_i)) \big)^2 }. \quad (14)

In the binary setting, the kernels k(·,·) are Beta distributions; denoting z_i := f(x_i) for short:

k_{\mathrm{Beta}}(z, z_i) := z^{\alpha_i - 1} (1 - z)^{\beta_i - 1} \, \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)\,\Gamma(\beta_i)}, \quad (15)

with α_i = z_i/h + 1 and β_i = (1 − z_i)/h + 1 (Chen, 1999; Bouezmarni & Rolin, 2003; Zhang & Karunamuni, 2010), where h is a bandwidth parameter of the kernel density estimate that goes to 0 as n → ∞.
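The Beta kernel of equation (15) is straightforward to write down with log-gamma for numerical stability; this is an illustrative sketch with an arbitrary bandwidth, not the paper's code:

```python
import math

def beta_kernel(z, z_i, h=0.1):
    """Equation (15): Beta(alpha_i, beta_i) density evaluated at z in (0, 1),
    with alpha_i = z_i/h + 1 and beta_i = (1 - z_i)/h + 1."""
    a = z_i / h + 1.0
    b = (1.0 - z_i) / h + 1.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1.0) * math.log(z)
                    + (b - 1.0) * math.log(1.0 - z))
```

As a proper probability density on (0, 1) it integrates to 1, which a midpoint Riemann sum confirms; the density also concentrates around z_i, which is what makes it a useful smoothing kernel on the unit interval.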
We note that the computational complexity of this estimator is O(n²). Within the gradient descent training procedure, the density is estimated on a mini-batch, so the O(n²) complexity is with respect to a mini-batch, not the entire dataset. The estimator in Equation 14 is a ratio of two second-order U-statistics that converge as n^{−1/2} (Ferguson, 2005); therefore the overall convergence is n^{−1/2}. Empirical convergence rates are calculated in Appendix D.3 and shown to be close to the theoretically expected value. ¹We have debiased the numerator and denominator individually (Ferguson, 2005, Section 2), but for simplicity have not corrected for the fact that we are estimating a ratio (Scott & Wu, 1981). Multiclass calibration with Dirichlet kernel density estimates. There are multiple definitions of multiclass calibration that differ in how strictly they require the probability vector f(x) to be calibrated. The weakest notion is top-label calibration, which, as the name suggests, only calibrates the entry with the highest predicted probability, reducing the problem to a binary calibration problem again (Guo et al., 2017). Marginal or class-wise calibration (Kull et al., 2019) is the most commonly used definition of multiclass calibration and a stronger version of top-label calibration. Here, the problem is split into K one-vs-all binary calibration settings, such that each class has to be calibrated against the other K − 1 classes:

\mathrm{MCE}_p(f)^p = \sum_{k=1}^{K} \mathbb{E}\Big[ \big| \mathbb{E}[y = k \mid f(x)_k] - f(x)_k \big|^p \Big]. \quad (16)

An estimator for this calibration error is:

\widehat{\mathrm{MCE}_p}(f)^p = \sum_{k=1}^{K} \frac{1}{n} \sum_{j=1}^{n} \left| \frac{\sum_{i \ne j} k_{\mathrm{Beta}}(f(x_j)_k; f(x_i)_k)\,[y_i]_k}{\sum_{i \ne j} k_{\mathrm{Beta}}(f(x_j)_k; f(x_i)_k)} - f(x_j)_k \right|^p. \quad (17)

The strongest notion of multiclass calibration, and the one that we consider in this paper, is called canonical calibration (Bröcker, 2009; Appice et al.
, 2015; Vaicenavicius et al., 2019). Here the whole probability vector f(x) is required to be calibrated; the definition is exactly that of Definition 3.1. Its estimator is:

\widehat{\mathrm{CE}_p}(f)^p = \frac{1}{n} \sum_{j=1}^{n} \left\| \frac{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))\, y_i}{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))} - f(x_j) \right\|_p^p, \quad (18)

where k_{\mathrm{Dir}} is a Dirichlet kernel defined as:

k_{\mathrm{Dir}}(z, z_i) := \frac{\Gamma\big(\sum_{j=1}^{K} \alpha_{ij}\big)}{\prod_{j=1}^{K} \Gamma(\alpha_{ij})} \prod_{j=1}^{K} z_j^{\alpha_{ij} - 1}, \quad (19)

with α_{ij} = z_{ij}/h + 1 (Ouimet & Tolosana-Delgado, 2021). As before, the computational complexity is O(n²), irrespective of p. This estimator is differentiable, and furthermore the following proposition holds: Proposition 3.3. The Dirichlet-kernel-based CE estimator is consistent, that is,

\lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^{n} \left\| \frac{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))\, y_i}{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))} - f(x_j) \right\|_p^p = \mathbb{E}\Big[ \big\| \mathbb{E}[y \mid f(x)] - f(x) \big\|_p^p \Big]. \quad (20)

Proof. Dirichlet kernel estimators are consistent (Ouimet & Tolosana-Delgado, 2021); consequently, by Proposition 3.2, the term inside the norm is consistent for any fixed f(x_j) (note that summing over i ≠ j ensures that the ratio of the KDEs does not depend on the outer summation). Moreover, for any convergent sequence, the norm of that sequence converges to the norm of its limit. Ultimately, the outer sum is merely the sample mean of consistent summands, which again is consistent.
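Equations (18) and (19) can be sketched directly in plain Python for p = 2; the bandwidth and toy data below are our own choices, and the inputs must lie strictly inside the simplex so the logarithms are defined:

```python
import math

def dirichlet_kernel(z, z_i, h=0.1):
    """Equation (19): Dirichlet density at z with alpha_j = z_i[j]/h + 1,
    computed via log-gamma for numerical stability."""
    alpha = [v / h + 1.0 for v in z_i]
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    log_dens = log_norm + sum((a - 1.0) * math.log(zj)
                              for a, zj in zip(alpha, z))
    return math.exp(log_dens)

def canonical_ce(probs, labels, p=2, h=0.1):
    """Equation (18): leave-one-out KDE estimate of the canonical CE_p^p.
    probs: predicted probability vectors (strictly inside the simplex);
    labels: matching one-hot label vectors."""
    n, total = len(probs), 0.0
    for j in range(n):
        w = [dirichlet_kernel(probs[j], probs[i], h)
             for i in range(n) if i != j]
        ys = [labels[i] for i in range(n) if i != j]
        denom = sum(w)
        est = [sum(wi * yi[k] for wi, yi in zip(w, ys)) / denom
               for k in range(len(probs[j]))]
        total += sum(abs(e - f) ** p for e, f in zip(est, probs[j]))
    return total / n
```

The double loop makes the O(n²) cost explicit; in training this would run on a mini-batch, matching the complexity discussion above.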
The paper proposes a new approach for calibrating neural network outputs. The idea is to train the neural network with a regularized loss function that is a linear combination of prediction and calibration errors. The calibration error is measured as the Lp norm of the difference between the predicted class probabilities and the expected true class probabilities given the predicted class probabilities, where the latter term is computed using a kernel density estimate with a Dirichlet kernel. The approach is evaluated in terms of accuracy and expected calibration error on CIFAR-10 and CIFAR-100.
Calibration Regularized Training of Deep Neural Networks using Kernel Density Estimation
1 INTRODUCTION

Deep neural networks have shown tremendous success in classification tasks, being regularly the best performing models in terms of accuracy. However, they are also known to make overconfident predictions (Guo et al., 2017), which is particularly problematic in safety-critical applications such as medical diagnosis or autonomous driving. Therefore, in many real-world applications we do not just care about the predictive performance, but also about the trustworthiness of that prediction; that is, we are interested in accurate predictions with robust uncertainty estimates. To this end, we want our models to be uncertainty calibrated, which means that, for instance, among all cells that have been predicted with a probability of 0.8 to be cancerous, in fact a fraction of 80% belong to a malignant tumor. Being calibrated, however, does not imply that the classifier achieves good accuracy. For instance, a classifier that always predicts the marginal distribution of the target class is calibrated, but will not be very useful in practice. Likewise, good predictive performance does not ensure calibration. In particular, for a broad class of loss functions, risk minimization leads to asymptotically Bayes optimal classifiers (Bartlett et al., 2006). However, there is no guarantee that they are calibrated, even in the asymptotic limit. Therefore, we consider minimizing the risk plus a term that penalizes miscalibration, i.e., $\mathrm{Risk} + \lambda \cdot \mathrm{CalibrationError}$. For parameter values $\lambda > 0$, this pushes the classifier towards a calibrated model while maintaining similar accuracy. The existence of such a $\lambda > 0$ is suggested by the fact that there always exists at least one Bayes optimal classifier that is calibrated, namely $P(y \mid x)$. To optimize the risk and the calibration error jointly, we propose a differentiable and consistent estimator of the expected Lp calibration error based on kernel density estimates (KDEs).
In particular, we use a Beta kernel in binary classification tasks and a Dirichlet kernel in the multiclass setting, as these kernels are the natural choices for density estimation over a probability simplex. Our Dirichlet kernel based estimator allows for the estimation of canonical calibration, which is the strongest notion of multiclass calibration as it implies the calibration of the whole probability vector (Bröcker, 2009; Appice et al., 2015; Vaicenavicius et al., 2019). By contrast, most other state-of-the-art methods only achieve weaker versions of multiclass calibration, namely top-label (Guo et al., 2017) and marginal or class-wise calibration (Kull et al., 2019). Top-label calibration only considers the scores for the predicted class, while for marginal calibration the multiclass problem is split up into K one-vs-all binary ones, each of which is required to be calibrated according to the definition of binary calibration. In many applications marginal and canonical calibration are preferable to top-label calibration, since we often care about having reliable uncertainty estimates for more than just one class per prediction. For instance, in medical diagnosis we do not just care about the most likely disease a certain patient might have but also about the probabilities of other diseases. Our contributions can be summarized as follows:

1. We develop a trainable calibration error objective using Dirichlet kernel density estimates, which can be minimized alongside any loss function in the existing batch stochastic gradient descent framework.

2. We propose to use our estimator to evaluate canonical calibration. Due to the scaling properties of Dirichlet kernel density estimation, and the tendency for probabilities to be concentrated in a relatively small number of classes, this becomes feasible in cases that cannot be estimated using a binned estimator.

3.
We show on a variety of network architectures and two datasets that DNNs trained alongside an estimator of the calibration error achieve competitive results both on existing metrics and on the proposed measure of canonical calibration.

2 RELATED WORK

Calibration of probabilistic predictors has long been studied in many fields. The topic gained attention in the deep learning community following the observation in Guo et al. (2017) that modern neural networks are poorly calibrated and tend to give overconfident predictions due to overfitting on the NLL loss. The surge of interest resulted in many calibration strategies that can be split into two general categories, which we discuss subsequently. Post-hoc calibration strategies learn a calibration map of the predictions from a trained predictor in a post-hoc manner. For instance, Platt scaling (Platt, 1999) fits a logistic regression model on top of the logit outputs of the model. A special case of Platt scaling that fits a single scalar, called temperature, has been popularized by Guo et al. (2017) as an accuracy-preserving, easy to implement and effective method to improve calibration. However, it has the undesired consequence that it clamps the high confidence scores of accurate predictions (Kumar et al., 2018). Other approaches for post-hoc calibration include histogram binning (Zadrozny & Elkan, 2001), isotonic regression (Zadrozny & Elkan, 2002), and Bayesian binning into quantiles (Naeini & Cooper, 2015). Trainable calibration strategies integrate a differentiable calibration measure into the training objective. One of the earliest approaches is regularization by penalizing low-entropy predictions (Pereyra et al., 2017). Similarly to temperature scaling, it has been shown that entropy regularization needlessly suppresses high confidence scores of correct predictions (Kumar et al., 2018).
Another popular strategy is MMCE (Maximum Mean Calibration Error) (Kumar et al., 2018), where the entropy regularizer is replaced by a kernel-based surrogate for the calibration error that can be optimized alongside NLL. It has been shown that label smoothing (Szegedy et al., 2015; Müller et al., 2020), i.e., training models with a weighted mixture of the labels instead of one-hot vectors, also improves model calibration. Liang et al. (2020) propose to add the difference between predicted confidence and accuracy as an auxiliary term to the cross-entropy loss. Focal loss (Mukhoti et al., 2020; Lin et al., 2018) has recently been shown empirically to produce better calibrated models than many of the alternatives, but does not estimate a clear quantity related to calibration error. Kernel density estimation (Parzen, 1962; Rosenblatt, 1956) is a non-parametric method to estimate a probability density function from a finite sample. Zhang et al. (2020) propose a KDE-based estimator of the calibration error for measuring calibration performance. However, they use the triweight kernel, which has a limited support interval and is therefore applicable to binary classification, but does not have a natural extension to higher dimensional simplexes, in contrast to the Dirichlet kernel that we consider here. As a result, they consider an unnatural proxy to the marginal calibration error, which does not result in a consistent estimator.

3 METHODS

The most commonly used loss functions are designed to achieve consistency in the sense of Bayes optimality under risk minimization; however, they do not guarantee calibration, neither for finite samples nor in the asymptotic limit. Since we are interested in models f that are both accurate and calibrated, we consider the following optimization problem bounding the calibration error CE(f):

$f = \arg\min_{f \in \mathcal{F}} \mathrm{Risk}(f)$, s.t.
$\mathrm{CE}(f) \le B$ (1)

for some $B > 0$, and its associated Lagrangian

$f = \arg\min_{f \in \mathcal{F}} \big(\mathrm{Risk}(f) + \lambda \cdot \mathrm{CE}(f)\big)$. (2)

We measure the (mis-)calibration in terms of the $L_p$ calibration error. To this end, let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, and let $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{Y} = \{0, 1, \dots, K\}$. Let $x : \Omega \to \mathcal{X}$ and $y : \Omega \to \mathcal{Y}$ be random variables, whose realizations are denoted with subscripts. Furthermore, let $f : \mathcal{X} \to \Delta^K$ be a decision function, where $\Delta^K$ denotes the $K$-dimensional simplex, as obtained e.g. from the output of a final softmax layer in a neural network.

Definition 3.1 (Calibration error (Naeini et al., 2015; Kumar et al., 2019; Wenger et al., 2020)). The $L_p$ calibration error of $f$ is:

$\mathrm{CE}_p(f) = \Big(\mathbb{E}\big[\,\|\mathbb{E}[y \mid f(x)] - f(x)\|_p^p\,\big]\Big)^{1/p}$. (3)

We note that we consider multiclass calibration, and that $f(x)$ and the conditional expectation in Equation 3 therefore map to points on a probability simplex. We say that a classifier $f$ is perfectly calibrated if $\mathrm{CE}_p(f) = 0$. Kumar et al. (2018) have also considered a minimization problem similar to Equation 2. Instead of using $\mathrm{CE}_p$, they use a metric called maximum mean calibration error (MMCE) that is 0 if and only if $\mathrm{CE}_p = 0$. However, it is unclear how MMCE relates to the canonical multiclass setting or to the norm parameter $p$ for non-zero $\mathrm{CE}_p$. In order to optimize Definition 3.1 directly, we need to perform density estimation over the probability simplex to empirically compute the conditional expectation. In a binary setting, this has traditionally been done with binned estimates (Naeini et al., 2015; Guo et al., 2017; Kumar et al., 2019). However, binning is not differentiable w.r.t. the function $f$ and cannot be incorporated into a gradient-based training procedure. Furthermore, binned estimates suffer from the curse of dimensionality and do not have a practical extension to multiclass settings.
A natural choice for a differentiable kernel density estimator in the binary case is a kernel based on the Beta distribution, and the extension to the multiclass case is given by the Dirichlet distribution. Hence, we consider an estimator of the $\mathrm{CE}_p$ based on Beta and Dirichlet kernel density estimates in the binary and multiclass settings, respectively. We require that this estimator is consistent and differentiable such that we can train it according to Equation 2. This estimator is given by:

$\widehat{\mathrm{CE}}_p(f)^p = \frac{1}{n}\sum_{h=1}^{n} \Big\|\,\widehat{\mathbb{E}}[y \mid f(x)]\big|_{f(x_h)} - f(x_h)\Big\|_p^p$, (4)

where $\widehat{\mathbb{E}}[y \mid f(x)]\big|_{f(x_h)}$ denotes $\widehat{\mathbb{E}}[y \mid f(x)]$ evaluated at $f(x) = f(x_h)$. If $P_{x,y}$ has a probability density $p_{x,y}$ with respect to the product of the Lebesgue and counting measures, we can write $p_{x,y}(x_i, y_i) = p_{y|x=x_i}(y_i)\, p_x(x_i)$. Then we define the estimator of the conditional expectation as follows:

$\mathbb{E}[y \mid f(x)] = \sum_{y_k \in \mathcal{Y}} y_k\, p_{y|x=f(x)}(y_k) = \sum_{y_k \in \mathcal{Y}} y_k\, \frac{p_{x,y}(f(x), y_k)}{p_x(f(x))}$ (5)

$\approx \frac{\sum_{i=1}^{n} k(f(x); f(x_i))\, y_i}{\sum_{i=1}^{n} k(f(x); f(x_i))} =: \widehat{\mathbb{E}}[y \mid f(x)]$, (6)

where $k$ is the kernel of a kernel density estimate evaluated at point $x_i$.

Proposition 3.2. $\widehat{\mathbb{E}}[y \mid f(x)]$ is a pointwise consistent estimator of $\mathbb{E}[y \mid f(x)]$, that is:

$\lim_{n\to\infty} \frac{\sum_{i=1}^{n} k(f(x); f(x_i))\, y_i}{\sum_{i=1}^{n} k(f(x); f(x_i))} = \sum_{y_k \in \mathcal{Y}} y_k\, \frac{p_{x,y}(f(x), y_k)}{p_x(f(x))}$. (7)

Proof. By the consistency of kernel density estimators (Silverman, 1986; Chen, 1999; Ouimet & Tolosana-Delgado, 2021), for all $f(x) \in (0, 1)$, $\frac{1}{n}\sum_{i=1}^{n} k(f(x); f(x_i))\, y_i \to \sum_{y_k \in \mathcal{Y}} y_k\, p_{x,y}(f(x), y_k)$ and $\frac{1}{n}\sum_{i=1}^{n} k(f(x); f(x_i)) \to p_x(f(x))$ as $n \to \infty$. The fact that the ratio of two convergent sequences converges to the ratio of their limits shows the result.
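To make Equation 6 concrete, the following is a minimal NumPy sketch of the Beta-kernel estimate of the conditional expectation; it is our own illustration, not the authors' code, and the function names and bandwidth value are illustrative assumptions:

```python
import numpy as np
from math import lgamma

def beta_kernel(z, zi, h):
    # k_Beta(z; z_i): Beta density at z with alpha_i = z_i/h + 1, beta_i = (1 - z_i)/h + 1
    a, b = zi / h + 1.0, (1.0 - zi) / h + 1.0
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(log_norm + (a - 1.0) * np.log(z) + (b - 1.0) * np.log1p(-z))

def cond_expectation(z_query, z_sample, y_sample, h=0.05):
    # Estimate E[y | f(x) = z_query] as a kernel-weighted average of the labels (Eq. 6)
    w = np.array([beta_kernel(z_query, zi, h) for zi in z_sample])
    return float(w @ np.asarray(y_sample, dtype=float) / w.sum())
```

Because the Beta kernel's parameters are set by the sample point $z_i$, the estimate is well defined on the open interval (0, 1) without the boundary bias of fixed-width kernels.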
Mean squared error in binary classification. As a first instantiation of our framework we consider a binary classification setting, with the mean squared error $\mathrm{MSE}(f) = \mathbb{E}[(f(x) - y)^2]$ as the risk function, jointly optimized with the $L_2$ calibration error $\mathrm{CE}_2$. Following Murphy (1973); Degroot & Fienberg (1983); Kuleshov & Liang (2015); Nguyen & O'Connor (2015), we decompose the MSE (full derivation in Appendix A) as:

$\mathrm{MSE}(f) - \mathrm{CE}_2(f)^2 = \mathbb{E}\big[(1 - \mathbb{E}[y \mid f(x)])\,\mathbb{E}[y \mid f(x)]\big] \ge 0$. (8)

Similar to Equation 2, we consider the optimization problem for some $\lambda > 0$:

$f = \arg\min_{f \in \mathcal{F}} \big(\mathrm{MSE}(f) + \lambda\,\mathrm{CE}_2(f)^2\big)$. (9)

Using Equation 8 we rewrite:

$\mathrm{MSE}(f) + \lambda\,\mathrm{CE}_2(f)^2 = (1+\lambda)\,\mathrm{MSE}(f) - \lambda\big(\mathrm{MSE}(f) - \mathrm{CE}_2(f)^2\big)$ (10)

$= (1+\lambda)\,\mathrm{MSE}(f) - \lambda\,\mathbb{E}\big[(1 - \mathbb{E}[y \mid f(x)])\,\mathbb{E}[y \mid f(x)]\big]$. (11)

Rescaling Equation 11 by a factor of $(1+\lambda)^{-1}$ and applying the variable substitution $\gamma = \frac{\lambda}{1+\lambda} \in [0, 1)$:

$f = \arg\min_{f \in \mathcal{F}} \big(\mathrm{MSE}(f) + \lambda\,\mathrm{CE}_2(f)^2\big) = \arg\min_{f \in \mathcal{F}} \big(\mathrm{MSE}(f) - \gamma\,\mathbb{E}\big[(1 - \mathbb{E}[y \mid f(x)])\,\mathbb{E}[y \mid f(x)]\big]\big)$ (12)

$= \arg\min_{f \in \mathcal{F}} \big(\mathrm{MSE}(f) + \gamma\,\mathbb{E}\big[\mathbb{E}[y \mid f(x)]^2\big]\big)$. (13)

For optimization we wish to find an estimator of $\mathbb{E}\big[\mathbb{E}[y \mid f(x)]^2\big]$. Building upon Equation 6, a partially debiased estimator can be written as:¹

$\widehat{\mathbb{E}\big[\mathbb{E}[y \mid f(x)]^2\big]} \approx \frac{1}{n}\sum_{h=1}^{n} \frac{\big(\sum_{i \ne h} k(f(x_h); f(x_i))\, y_i\big)^2 - \sum_{i \ne h} \big(k(f(x_h); f(x_i))\, y_i\big)^2}{\big(\sum_{i \ne h} k(f(x_h); f(x_i))\big)^2 - \sum_{i \ne h} \big(k(f(x_h); f(x_i))\big)^2}$. (14)

In the binary setting, the kernels $k(\cdot,\cdot)$ are Beta distributions, i.e., denoting $z_i := f(x_i)$ for short:

$k_{\mathrm{Beta}}(z, z_i) := z^{\alpha_i - 1} (1-z)^{\beta_i - 1}\, \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)\,\Gamma(\beta_i)}$, (15)

with $\alpha_i = \frac{z_i}{h} + 1$ and $\beta_i = \frac{1 - z_i}{h} + 1$ (Chen, 1999; Bouezmarni & Rolin, 2003; Zhang & Karunamuni, 2010), where $h$ is a bandwidth parameter in the kernel density estimate that goes to 0 as $n \to \infty$.
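The partially debiased estimator in Equation 14 can be sketched in a few vectorized lines. Everything below, including the helper names and the bandwidth, is our own illustration under the stated formulas rather than the authors' implementation:

```python
import numpy as np
from math import lgamma

def beta_kernel_matrix(z, h):
    # K[j, i] = k_Beta(z_j; z_i): Beta density at z_j with parameters set by z_i (Eq. 15)
    a = z / h + 1.0
    b = (1.0 - z) / h + 1.0
    lg = np.vectorize(lgamma)
    log_norm = lg(a + b) - lg(a) - lg(b)
    return np.exp(log_norm[None, :]
                  + (a[None, :] - 1.0) * np.log(z)[:, None]
                  + (b[None, :] - 1.0) * np.log(1.0 - z)[:, None])

def debiased_second_moment(z, y, h=0.05):
    # Partially debiased estimator of E[E[y | f(x)]^2] (Eq. 14):
    # drop the i = h terms and subtract the squared per-term contributions.
    K = beta_kernel_matrix(z, h)
    np.fill_diagonal(K, 0.0)            # enforces i != h in every sum
    ky = K * y[None, :]
    num = ky.sum(axis=1) ** 2 - (ky ** 2).sum(axis=1)
    den = K.sum(axis=1) ** 2 - (K ** 2).sum(axis=1)
    return float(np.mean(num / den))
```

As a sanity check, with all labels equal to 1 the numerator and denominator coincide for every $h$, so the estimate is exactly 1.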
We note that the computational complexity of this estimator is $O(n^2)$. Within the gradient descent training procedure, the density is estimated on a mini-batch, and the $O(n^2)$ complexity is therefore w.r.t. a mini-batch, not the entire dataset. The estimator in Equation 14 is a ratio of two second-order U-statistics that converge as $n^{-1/2}$ (Ferguson, 2005). Therefore, the overall convergence will be $n^{-1/2}$. Empirical convergence rates are calculated in Appendix D.3 and shown to be close to the theoretically expected value.

¹We have debiased the numerator and denominator individually (Ferguson, 2005, Section 2), but for simplicity have not corrected for the fact that we are estimating a ratio (Scott & Wu, 1981).

Multiclass calibration with Dirichlet kernel density estimates. There are multiple definitions of multiclass calibration that differ in the strictness regarding the calibration of the probability vector $f(x)$. The weakest notion is top-label calibration, which, as the name suggests, only cares about calibrating the entry with the highest predicted probability, which reduces to a binary calibration problem again (Guo et al., 2017). Marginal or class-wise calibration (Kull et al., 2019) is the most commonly used definition of multiclass calibration and a stronger version of top-label calibration. Here, the problem is split into $K$ one-vs-all binary calibration settings, such that each class has to be calibrated against the other $K - 1$ classes:

$\mathrm{MCE}_p(f)^p = \sum_{k=1}^{K} \mathbb{E}\big[\,\big|\mathbb{E}[y = k \mid f(x)_k] - f(x)_k\big|^p\,\big]$. (16)

An estimator for this calibration error is:

$\widehat{\mathrm{MCE}}_p(f)^p = \sum_{k=1}^{K} \frac{1}{n} \sum_{j=1}^{n} \left|\frac{\sum_{i \ne j} k_{\mathrm{Beta}}(f(x_j)_k; f(x_i)_k)\,[y_i]_k}{\sum_{i \ne j} k_{\mathrm{Beta}}(f(x_j)_k; f(x_i)_k)} - f(x_j)_k\right|^p$. (17)

The strongest notion of multiclass calibration, and the one that we want to consider in this paper, is called canonical calibration (Bröcker, 2009; Appice et al., 2015; Vaicenavicius et al., 2019). Here it is required that the whole probability vector $f(x)$ is calibrated. The definition is exactly the one from Definition 3.1. Its estimator is:

$\widehat{\mathrm{CE}}_p(f)^p = \frac{1}{n} \sum_{j=1}^{n} \left\|\frac{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))\, y_i}{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))} - f(x_j)\right\|_p^p$, (18)

where $k_{\mathrm{Dir}}$ is a Dirichlet kernel defined as:

$k_{\mathrm{Dir}}(z, z_i) := \frac{\Gamma\big(\sum_{j=1}^{K} \alpha_{ij}\big)}{\prod_{j=1}^{K} \Gamma(\alpha_{ij})} \prod_{j=1}^{K} z_j^{\alpha_{ij} - 1}$, (19)

with $\alpha_i = z_i/h + 1$ (Ouimet & Tolosana-Delgado, 2021). As before, the computational complexity is $O(n^2)$ irrespective of $p$. This estimator is differentiable and, furthermore, the following proposition holds:

Proposition 3.3. The Dirichlet kernel based CE estimator is consistent, that is,

$\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{n} \left\|\frac{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))\, y_i}{\sum_{i \ne j} k_{\mathrm{Dir}}(f(x_j); f(x_i))} - f(x_j)\right\|_p^p = \mathbb{E}\big[\,\|\mathbb{E}[y \mid f(x)] - f(x)\|_p^p\,\big] = \mathrm{CE}_p(f)^p$. (20)

Proof. Dirichlet kernel estimators are consistent (Ouimet & Tolosana-Delgado, 2021); consequently, by Proposition 3.2, the term inside the norm is consistent for any fixed $f(x_j)$ (note that summing over $i \ne j$ ensures that the ratio of the KDEs does not depend on the outer summation). Moreover, for any convergent sequence, the norm of that sequence converges to the norm of its limit. Ultimately, the outer sum is merely the sample mean of consistent summands, which again is consistent.
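A direct $O(n^2)$ transcription of the canonical estimator in Equation 18, with log-space kernel weights for numerical stability, can look as follows; this is our own sketch, and the function names, bandwidth, and norm parameter are illustrative assumptions:

```python
import numpy as np
from math import lgamma

def log_dirichlet_kernel(z, zi, h):
    # log k_Dir(z; z_i) with alpha_i = z_i / h + 1 (Eq. 19)
    alpha = zi / h + 1.0
    log_norm = lgamma(alpha.sum()) - sum(lgamma(a) for a in alpha)
    return log_norm + float(((alpha - 1.0) * np.log(z)).sum())

def canonical_ce(probs, labels, h=0.02, p=2):
    # Leave-one-out estimator CE_p(f)^p from Eq. 18; `probs` are rows strictly
    # inside the simplex, `labels` are one-hot rows.
    probs, labels = np.asarray(probs, float), np.asarray(labels, float)
    n = probs.shape[0]
    total = 0.0
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        logw = np.array([log_dirichlet_kernel(probs[j], probs[i], h) for i in idx])
        w = np.exp(logw - logw.max())            # stabilized kernel weights
        cond = (w[:, None] * labels[idx]).sum(axis=0) / w.sum()
        total += float((np.abs(cond - probs[j]) ** p).sum())
    return total / n
```

On predictions whose neighborhoods carry matching labels the estimate is small, while systematically mislabeled neighborhoods drive it up, which is the behavior the regularizer exploits during training.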
The paper proposes a regularization term to augment loss functions, where the regularization term effectively minimizes the calibration error of the model. The term itself is a kernel density estimator over the K-simplex (hence the Dirichlet kernel is the natural choice). The authors claim the estimator is consistent (but not unbiased, though they partially debias it) and empirically verify that their method yields a tradeoff between accuracy and calibration that is near Pareto-optimal.
A global convergence theory for deep ReLU implicit networks via over-parameterization
1 INTRODUCTION

1) Background and Motivation: In the last decade, implicit deep learning (El Ghaoui et al., 2019) has attracted more and more attention. Its popularity stems mainly from the fact that it generalizes the recursive rules of many widely used neural network architectures. A line of recent works (Bai et al., 2019; El Ghaoui et al., 2019; Bai et al., 2020) has shown that the implicit neural network architecture is a wider class that includes most current neural network architectures as special cases, such as feed-forward neural networks, convolutional neural networks, residual networks, and recurrent neural networks. Moreover, implicit deep learning is also well known for its competitive performance compared to regular deep neural networks while using significantly fewer computational resources (Dabre & Fujita, 2019; Dehghani et al., 2018; Bai et al., 2018). Although a line of literature has demonstrated the superior performance of implicit neural networks experimentally, the theoretical understanding is still limited. To date, it is still unknown whether a simple first-order optimization method such as (stochastic) gradient descent can converge on an implicit neural network activated by a nonlinear function. Unlike a regular deep neural network, an implicit neural network can have infinitely many layers, resulting in the possibility of divergence of the forward propagation (El Ghaoui et al., 2019; Kawaguchi, 2021). The main challenge in establishing the convergence of implicit neural network training lies in the fact that, in general, the equilibrium equation of implicit neural networks cannot be solved in closed form. What exacerbates the problem is the well-posedness of the forward propagation. In other words, the equilibrium equation may have zero or multiple solutions.
A line of recent studies has suggested a number of strategies to handle this well-posedness challenge, but they all involve reformulations or solving subproblems in each iteration. For example, El Ghaoui et al. (2019) suggested reformulating the training as a Fenchel divergence formulation and solving the reformulated optimization problem by a projected gradient descent method. However, this requires solving a projection subproblem in each iteration, and convergence was only demonstrated numerically. By using an extra softmax layer, Kawaguchi (2021) established a global convergence result of gradient descent for a linear implicit neural network. Unfortunately, that result cannot be extended to nonlinear activations, which are critical to the learnability of deep neural networks. This paper proposes a global convergence theory of gradient descent for implicit neural networks activated by the nonlinear Rectified Linear Unit (ReLU) activation function by using over-parameterization. Specifically, we show that randomly initialized gradient descent with a fixed stepsize converges to a global minimum of a ReLU implicit neural network at a linear rate as long as the implicit neural network is over-parameterized. Recently, over-parameterization has been shown to be effective in optimizing finite-depth neural networks (Zou et al., 2020; Nguyen & Mondelli, 2020; Arora et al., 2019; Oymak & Soltanolkotabi, 2020). Although the objective function in the training is nonsmooth and non-convex, it can be shown that GD or SGD converges to a global minimum linearly if the width m of each layer is polynomial in the number of training samples n and the number of layers h, i.e., m = poly(n, h). However, these results cannot be directly applied to implicit neural networks, since implicit neural networks have infinitely many hidden layers, i.e., h → ∞, and the well-posedness problem surfaces during the training process. In fact, Chen et al.
(2018); Bai et al. (2019; 2020) have all observed that the time and number of iterations spent on forward propagation gradually increase with the training epochs. Thus, we have to ensure that the unique equilibrium point always exists throughout the training, given that the width m is only polynomial in n.

2) Preliminaries of Implicit Deep Learning: In this work, we consider an implicit neural network with the transition at the $\ell$-th layer in the following form (El Ghaoui et al., 2019; Bai et al., 2019):

$z^{\ell} = \sigma\Big(\frac{\gamma}{\sqrt{m}} A z^{\ell-1} + \phi(x)\Big)$, (1)

where $\phi : \mathbb{R}^d \to \mathbb{R}^m$ is a feature mapping function that transforms an input vector $x \in \mathbb{R}^d$ to a desired feature vector $\phi \triangleq \phi(x)$, $z^{\ell} \in \mathbb{R}^m$ is the output of the $\ell$-th layer, $A \in \mathbb{R}^{m \times m}$ is a trainable weight matrix, $\sigma(u) = \max\{0, u\}$ is the ReLU activation function, and $\gamma \in (0, 1)$ is a fixed scalar used to scale $A$. As will be shown later in Section 2.1, $\gamma$ plays the role of ensuring the existence of the limit $z^* = \lim_{\ell\to\infty} z^{\ell}$. In general, the feature mapping function $\phi$ is a nonlinear function, which extracts features from the low-dimensional input vector $x$. In this paper, we consider a simple nonlinear feature mapping function $\phi$ given by

$\phi(x) \triangleq \frac{1}{\sqrt{m}} \sigma(Wx)$, (2)

where $W \in \mathbb{R}^{m \times d}$ is a trainable parameter matrix. As $\ell \to \infty$, an implicit neural network can be considered an infinitely deep neural network. Consequently, $z^*$ is not only the limit of the sequence $\{z^{\ell}\}_{\ell=0}^{\infty}$ with $z^0 = 0$, but also the equilibrium point (or fixed point) of the equilibrium equation:

$z^* = \sigma(\tilde{\gamma} A z^* + \phi)$, (3)

where $\tilde{\gamma} \triangleq \gamma/\sqrt{m}$. In implicit neural networks, the prediction $\hat{y}$ for the input vector $x$ is a combination of the fixed point $z^*$ and the feature vector $\phi$, i.e.,

$\hat{y} = u^T z^* + v^T \phi$, (4)

where $u, v \in \mathbb{R}^m$ are trainable weight vectors. For simplicity, we use $\theta \triangleq \mathrm{vec}(A, W, u, v)$ to group all training parameters.
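The forward pass defined by Equations 1-4 can be sketched as a plain fixed-point iteration. This is our own NumPy illustration, not the authors' code; the function name, tolerance, iteration cap, and the choice γ = 0.3 are assumptions for the example:

```python
import numpy as np

def implicit_forward(A, W, x, u, v, gamma=0.3, tol=1e-10, max_iter=1000):
    # Iterate z <- relu((gamma/sqrt(m)) A z + phi) until the fixed point z* of
    # Eq. (3) is reached, with phi = relu(W x)/sqrt(m) (Eq. 2); then predict
    # y_hat = u^T z* + v^T phi (Eq. 4). Converges when (gamma/sqrt(m)) A is a
    # contraction, since ReLU is 1-Lipschitz.
    m = A.shape[0]
    phi = np.maximum(W @ x, 0.0) / np.sqrt(m)
    z = np.zeros(m)
    for _ in range(max_iter):
        z_next = np.maximum((gamma / np.sqrt(m)) * (A @ z) + phi, 0.0)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z_next, float(u @ z_next + v @ phi)
```

With standard Gaussian $A$, the operator norm of $(\gamma/\sqrt{m})A$ is roughly $2\gamma$, so a small $\gamma$ makes the iteration a contraction and the loop terminates after a few dozen steps.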
Given a training data set $\{(x_i, y_i)\}_{i=1}^{n}$, we want to minimize

$L(\theta) = \sum_{i=1}^{n} \frac{1}{2}(\hat{y}_i - y_i)^2 = \frac{1}{2}\|\hat{y} - y\|^2$, (5)

where $\hat{y}$ and $y$ are the vectors formed by stacking all the predictions and labels.

3) Main Results: Our results are based on the following observations. We first analyze the forward propagation and find that the unique equilibrium point always exists if the scaled matrix $\tilde{\gamma} A$ in Eq. (3) has an operator norm less than one. Thus, the well-posedness problem reduces to finding a sequence of scalars $\{\gamma_k\}_{k=1}^{\infty}$ such that $\tilde{\gamma}_k A^{(k)}$ is appropriately scaled. To achieve this goal, we show that the operator norm of $A^{(k)}$ is uniformly upper bounded by a constant over all iterations. Consequently, a fixed scalar $\gamma$ is enough to ensure the well-posedness of Eq. (3). Our second observation comes from the analysis of the gradient descent method with infinitesimal step-size (gradient flow). By applying the chain rule with the gradient flow, we derive the dynamics of the prediction $\hat{y}(t)$, which are governed by the spectral properties of a Gram matrix. In particular, if the smallest eigenvalue of the Gram matrix is lower bounded throughout the training, the gradient descent method enjoys a linear convergence rate. Along with some basic functional analysis results, it can be shown that the smallest eigenvalue of the Gram matrix at initialization is lower bounded if no two data samples are parallel. Although the Gram matrix varies in each iteration, the spectral property is preserved if the Gram matrix stays close to its initialization. Thus, the convergence problem reduces to showing that the Gram matrix in later iterations is close to its initialization. Our third observation is that random initialization, over-parameterization, and linear convergence jointly force the (operator) norms of the parameters to be upper bounded by constants and to remain close to their initialization.
Accordingly, we can use this property to show that the operator norm of $A$ is upper bounded and the spectral property of the Gram matrix is preserved throughout the training. Combining all these insights, we conclude that the randomly initialized gradient descent method with a constant step-size converges to a global minimum of the implicit neural network with ReLU activation. The main contributions of this paper are summarized as follows: (i) By scaling the weight matrix $A$ with a fixed scalar $\gamma$, we show that the unique equilibrium point $z^*$ for each $x$ always exists during the training if the parameters are randomly initialized, even for the nonlinear ReLU activation function. (ii) We analyze the gradient flow of implicit neural networks. Despite the non-smoothness and non-convexity of the objective function, convergence to a global minimum at a linear rate is guaranteed if the implicit neural network is over-parameterized and the data is non-degenerate. (iii) Since gradient descent is a discretized version of gradient flow, we can show that gradient descent with a fixed stepsize converges to a global minimum of implicit neural networks at a linear rate under the same assumptions made in the gradient flow analysis, as long as the stepsize is chosen small enough.

Notation: For a vector $x$, $\|x\|$ is the Euclidean norm of $x$. For a matrix $A$, $\|A\|$ is the operator norm of $A$. If $A$ is a square matrix, then $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of $A$, respectively, and $\lambda_{\max}(A) \le \|A\|$. We denote $[n] \triangleq \{1, 2, \dots, n\}$.

2 WELL-POSEDNESS AND GRADIENT COMPUTATION

In this section, we provide a simple condition for the equilibrium equation (3) to be well-posed in the sense that a unique equilibrium point exists. Instead of backpropagating through all the intermediate iterations of a forward pass, we derive the gradients of the trainable parameters by using the implicit function theorem.
In this work, we make the following assumption on parameter initialization.

Assumption 1 (Random Initialization). The entries $A_{ij}$ and $W_{ij}$ are randomly initialized from the standard Gaussian distribution $N(0, 1)$, and $u_i$ and $v_i$ are randomly initialized from the symmetric Bernoulli (Rademacher) distribution.

Remark 2.1. This initialization is similar to the approaches widely used in practice (Glorot & Bengio, 2010; He et al., 2015). The results obtained in this work can be easily extended to the case where the distributions for $A_{ij}$, $W_{ij}$, $u_i$, and $v_i$ are replaced by sub-Gaussian random variables.

2.1 FORWARD PROPAGATION AND WELL-POSEDNESS

In a general implicit neural network, Eq. (3) is not necessarily well-posed, since it may admit zero or multiple solutions. In this work, we show that scaling the matrix $A$ by $\tilde{\gamma} = \gamma/\sqrt{m}$ guarantees the existence and uniqueness of the equilibrium point $z^*$ under random initialization. This follows from a foundational result in random matrix theory, restated in the following lemma.

Lemma 2.1 (Vershynin (2018), Theorem 4.4.5). Let $A$ be an $m \times n$ random matrix whose entries $A_{ij}$ are independent, zero-mean, sub-Gaussian random variables. Then, for any $t > 0$, we have $\|A\| \le CK(\sqrt{m} + \sqrt{n} + t)$ with probability at least $1 - 2e^{-t^2}$. Here $C > 0$ is a fixed constant, and $K = \max_{i,j} \|A_{ij}\|_{\psi_2}$.

Under Assumption 1, Lemma 2.1 implies that, with exponentially high probability, $\|A\| \le c\sqrt{m}$ for some constant $c > 0$. By scaling $A$ by a positive scalar $\tilde{\gamma}$, we show that the transition Eq. (1) is a contraction mapping; thus, the unique equilibrium point exists, with a detailed proof in Appendix A.1.

Lemma 2.2. If $\|A\| \le c\sqrt{m}$ for some $c > 0$, then for any $\gamma_0 \in (0, 1)$, the scalar $\gamma \triangleq \min\{\gamma_0, \gamma_0/c\}$ guarantees the existence of a unique equilibrium $z^*$ for every $x$, and $\|z^{\ell}\| \le \frac{1}{1-\gamma_0}\|\phi\|$ for all $\ell$.
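As a quick numerical illustration of Lemma 2.1 (our own check, not part of the paper): for standard Gaussian square matrices, the ratio $\|A\|/\sqrt{m}$ concentrates near 2 as $m$ grows, consistent with the bound $\|A\| \le c\sqrt{m}$ for a modest constant $c$:

```python
import numpy as np

# Empirically estimate ||A|| / sqrt(m) for m x m standard Gaussian matrices;
# random matrix theory predicts this ratio concentrates near 2.
rng = np.random.default_rng(1)
ratios = []
for m in (64, 256, 1024):
    A = rng.standard_normal((m, m))
    # np.linalg.norm(A, 2) is the largest singular value, i.e. the operator norm
    ratios.append(np.linalg.norm(A, 2) / np.sqrt(m))
```

This is why a fixed $\gamma$ of order one suffices in Lemma 2.2: the constant $c$ in $\|A\| \le c\sqrt{m}$ does not grow with the width $m$.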
Lemma 2.2 indicates that equilibria always exist if we can keep the operator norm of the scaled matrix $(\gamma/\sqrt{m})A$ below 1 during the training. However, the operator norm of $A$ changes with every gradient descent update, so it is hard to use a fixed scalar $\gamma$ to scale the matrix $A$ over all iterations unless the operator norm of $A$ is bounded. In Section 3, we will show that $\|A\| \le 2c\sqrt{m}$ always holds throughout the training, provided that $\|A^{(0)}\| \le c\sqrt{m}$ at initialization and the width $m$ is sufficiently large. Thus, by using the scalar $\gamma = \min\{\gamma_0, \gamma_0/(2c)\}$ for any $\gamma_0 \in (0, 1)$, equilibria always exist and the equilibrium equation Eq. (3) is well-posed.
In this paper, the authors theoretically analyze the convergence of gradient descent for an implicit neural network with infinitely many ReLU-activated layers. The authors show that the infinite-layered mapping has a unique fixed point when the weight matrix $\boldsymbol{A}$ has a properly bounded spectral norm. Using implicit differentiation, the authors derive the partial gradients at the fixed point. Furthermore, the authors show a linear convergence rate by proving the strict positive-definiteness of the Gram matrix $\boldsymbol{G}(t)$ (and $\boldsymbol{H}(t)$).
A global convergence theory for deep ReLU implicit networks via over-parameterization
1 INTRODUCTION . 1 ) Background and Motivation : In the last decade , implicit deep learning ( El Ghaoui et al. , 2019 ) have attracted more and more attention . Its popularity is mainly because it generalizes the recursive rules of many widely used neural network architectures . A line of recent works ( Bai et al. , 2019 ; El Ghaoui et al. , 2019 ; Bai et al. , 2020 ) have shown that the implicit neural network architecture is a wider class that includes most current neural network architectures as special cases , such as feed-forward neural networks , convolution neural networks , residual networks , and recurrent neural networks . Moreover , implicit deep learning is also well known for its competitive performance compared to other regular deep neural networks but using significantly fewer computational resources ( Dabre & Fujita , 2019 ; Dehghani et al. , 2018 ; Bai et al. , 2018 ) . Although a line of literature has been shown the superior performance of implicit neural networks experimentally , the theoretical understanding is still limited . To date , it is still unknown if a simple first-order optimization method such as ( stochastic ) gradient descent can converge on an implicit neural network activated by a nonlinear function . Unlike a regular deep neural network , an implicit neural network could have infinitely many layers , resulting in the possibility of divergence of the forward propagation ( El Ghaoui et al. , 2019 ; Kawaguchi , 2021 ) . The main challenge in establishing the convergence of implicit neural network training lies in the fact that , in general , the equilibrium equation of implicit neural networks can not be solved in closed-form . What exacerbates the problem is the well-posedness of the forward propagation . In other words , the equilibrium equation may have zero or multiple solutions . 
A line of recent studies has suggested a number of strategies to handle this well-posedness challenge, but they all involve reformulation or solving subproblems in each iteration. For example, El Ghaoui et al. (2019) suggested reformulating the training as a Fenchel divergence formulation and solving the reformulated optimization problem by a projected gradient descent method. However, this requires solving a projection subproblem in each iteration, and convergence was only demonstrated numerically. By using an extra softmax layer, Kawaguchi (2021) established a global convergence result of gradient descent for a linear implicit neural network. Unfortunately, that result cannot be extended to nonlinear activations, which are critical to the learnability of deep neural networks. This paper proposes a global convergence theory of gradient descent for implicit neural networks activated by the nonlinear Rectified Linear Unit (ReLU) activation function by using over-parameterization. Specifically, we show that randomly initialized gradient descent with a fixed step size converges to a global minimum of a ReLU implicit neural network at a linear rate as long as the implicit neural network is over-parameterized. Recently, over-parameterization has been shown to be effective in optimizing finite-depth neural networks (Zou et al., 2020; Nguyen & Mondelli, 2020; Arora et al., 2019; Oymak & Soltanolkotabi, 2020). Although the objective function in the training is non-smooth and non-convex, it can be shown that GD or SGD converges to a global minimum linearly if the width $m$ of each layer is polynomial in the number of training samples $n$ and the number of layers $h$, i.e., $m = \mathrm{poly}(n, h)$. However, these results cannot be directly applied to implicit neural networks, since implicit neural networks have infinitely many hidden layers, i.e., $h \to \infty$, and the well-posedness problem surfaces during the training process. In fact, Chen et al.
(2018); Bai et al. (2019; 2020) have all observed that the time and number of iterations spent on forward propagation gradually increase with the training epochs. Thus, we have to ensure that the unique equilibrium point always exists throughout the training, given that the width $m$ is only polynomial in $n$. 2) Preliminaries of Implicit Deep Learning: In this work, we consider an implicit neural network with the transition at the $\ell$-th layer in the following form (El Ghaoui et al., 2019; Bai et al., 2019): $z^{(\ell)} = \sigma\big(\frac{\gamma}{\sqrt{m}} A z^{(\ell-1)} + \phi(x)\big)$, (1) where $\phi: \mathbb{R}^d \to \mathbb{R}^m$ is a feature mapping function that transforms an input vector $x \in \mathbb{R}^d$ to a desired feature vector $\phi \triangleq \phi(x)$, $z^{(\ell)} \in \mathbb{R}^m$ is the output of the $\ell$-th layer, $A \in \mathbb{R}^{m \times m}$ is a trainable weight matrix, $\sigma(u) = \max\{0, u\}$ is the ReLU activation function, and $\gamma \in (0, 1)$ is a fixed scalar used to scale $A$. As will be shown later in Section 2.1, $\gamma$ plays the role of ensuring the existence of the limit $z^* = \lim_{\ell \to \infty} z^{(\ell)}$. In general, the feature mapping function $\phi$ is a nonlinear function, which extracts features from the low-dimensional input vector $x$. In this paper, we consider a simple nonlinear feature mapping function $\phi$ given by $\phi(x) \triangleq \frac{1}{\sqrt{m}} \sigma(Wx)$, (2) where $W \in \mathbb{R}^{m \times d}$ is a trainable parameter matrix. As $\ell \to \infty$, an implicit neural network can be considered an infinitely deep neural network. Consequently, $z^*$ is not only the limit of the sequence $\{z^{(\ell)}\}_{\ell=0}^{\infty}$ with $z^{(0)} = 0$, but also the equilibrium point (or fixed point) of the equilibrium equation: $z^* = \sigma(\tilde{\gamma} A z^* + \phi)$, (3) where $\tilde{\gamma} \triangleq \gamma / \sqrt{m}$. In implicit neural networks, the prediction $\hat{y}$ for the input vector $x$ is a combination of the fixed point $z^*$ and the feature vector $\phi$, i.e., $\hat{y} = u^T z^* + v^T \phi$, (4) where $u, v \in \mathbb{R}^m$ are trainable weight vectors. For simplicity, we use $\theta \triangleq \mathrm{vec}(A, W, u, v)$ to group all training parameters.
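As a concrete sketch, the forward pass of Eqs. (1)–(3) can be simulated in a few lines of NumPy. The sizes and seed are arbitrary, and for simplicity we normalize by the computed spectral norm of $A$ instead of relying on the $\|A\| \le c\sqrt{m}$ concentration bound; both choices enforce the same contraction condition:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, gamma = 64, 8, 0.5

# Random Gaussian parameters, in the spirit of Assumption 1.
A = rng.standard_normal((m, m))
W = rng.standard_normal((m, d))
x = rng.standard_normal(d)

relu = lambda u: np.maximum(0.0, u)
phi = relu(W @ x) / np.sqrt(m)            # feature map, Eq. (2)

# Scale A so that ||gamma_tilde * A|| = gamma < 1 (a contraction).
gamma_tilde = gamma / np.linalg.norm(A, 2)

# Forward propagation z^{(l)} -> z^{(l+1)} from z^{(0)} = 0, Eq. (1).
z = np.zeros(m)
for _ in range(200):
    z_new = relu(gamma_tilde * (A @ z) + phi)
    if np.linalg.norm(z_new - z) < 1e-12:
        z = z_new
        break
    z = z_new

# z now (numerically) satisfies the equilibrium equation, Eq. (3).
residual = np.linalg.norm(z - relu(gamma_tilde * (A @ z) + phi))
```

Because ReLU is 1-Lipschitz, the iteration contracts geometrically with factor $\gamma$, so a few dozen iterations suffice for machine-precision convergence.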
Given a training data set $\{(x_i, y_i)\}_{i=1}^n$, we want to minimize $L(\theta) = \sum_{i=1}^n \frac{1}{2}(\hat{y}_i - y_i)^2 = \frac{1}{2}\|\hat{y} - y\|^2$, (5) where $\hat{y}$ and $y$ are the vectors formed by stacking all the predictions and labels. 3) Main Results: Our results are based on the following observations. We first analyze the forward propagation and find that the unique equilibrium point always exists if the scaled matrix $\tilde{\gamma} A$ in Eq. (3) has an operator norm less than one. Thus, the well-posedness problem is reduced to finding a sequence of scalars $\{\gamma_k\}_{k=1}^{\infty}$ such that $\tilde{\gamma}_k A(k)$ is appropriately scaled. To achieve this goal, we show that the operator norm of $A(k)$ is uniformly upper bounded by a constant over all iterations. Consequently, a fixed scalar $\gamma$ is enough to ensure the well-posedness of Eq. (3). Our second observation comes from the analysis of the gradient descent method with infinitesimal step size (gradient flow). By applying the chain rule to the gradient flow, we derive the dynamics of the prediction $\hat{y}(t)$, which are governed by the spectral properties of a Gram matrix. In particular, if the smallest eigenvalue of the Gram matrix is lower bounded throughout the training, the gradient descent method enjoys a linear convergence rate. Together with some basic functional analysis results, it can be shown that the smallest eigenvalue of the Gram matrix at initialization is lower bounded if no two data samples are parallel. Although the Gram matrix varies in each iteration, the spectral property is preserved if the Gram matrix stays close to its initialization. Thus, the convergence problem is reduced to showing that the Gram matrix in later iterations is close to its initialization. Our third observation is that random initialization, over-parameterization, and linear convergence jointly keep the (operator) norms of the parameters upper bounded by constants and close to their initialization.
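The second observation — that the prediction dynamics contract at a rate set by the smallest eigenvalue of the Gram matrix — can be simulated with a fixed positive-definite surrogate for the Gram matrix (a toy stand-in we construct here, not the paper's $\boldsymbol{G}(t)$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Fixed positive-definite surrogate Gram matrix with lambda_min >= 0.5.
B = rng.standard_normal((n, n))
G = B @ B.T + 0.5 * np.eye(n)
lam = np.linalg.eigvalsh(G).min()

y = rng.standard_normal(n)          # labels
yhat = rng.standard_normal(n)       # initial predictions

# Euler discretization of the gradient-flow dynamics dyhat/dt = -G (yhat - y):
# the residual shrinks by at least a factor (1 - eta * lam) per step,
# i.e., linear (geometric) convergence of ||yhat - y||.
eta = 0.01
res0 = np.linalg.norm(yhat - y)
for _ in range(1000):
    yhat = yhat - eta * G @ (yhat - y)
res = np.linalg.norm(yhat - y)
```

With `lam >= 0.5` the residual after 1000 steps is below $(1 - 0.005)^{1000} \approx e^{-5}$ of its initial value, mirroring the linear rate claimed for the true dynamics.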
Accordingly, we can use this property to show that the operator norm of $A$ is upper bounded and that the spectral property of the Gram matrix is preserved throughout the training. Combining all these insights, we conclude that the randomly initialized gradient descent method with a constant step size converges to a global minimum of the implicit neural network with ReLU activation. The main contributions of this paper are summarized as follows: (i) By scaling the weight matrix $A$ with a fixed scalar $\gamma$, we show that the unique equilibrium point $z^*$ for each $x$ always exists during the training if the parameters are randomly initialized, even for the nonlinear ReLU activation function. (ii) We analyze the gradient flow of implicit neural networks. Despite the non-smoothness and non-convexity of the objective function, convergence to a global minimum at a linear rate is guaranteed if the implicit neural network is over-parameterized and the data is non-degenerate. (iii) Since gradient descent is a discretized version of gradient flow, we can show that gradient descent with a fixed step size converges to a global minimum of implicit neural networks at a linear rate under the same assumptions made in the gradient flow analysis, as long as the step size is chosen small enough. Notation: For a vector $x$, $\|x\|$ is the Euclidean norm of $x$. For a matrix $A$, $\|A\|$ is the operator norm of $A$. If $A$ is a square matrix, then $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the smallest and largest eigenvalues of $A$, respectively, and $\lambda_{\max}(A) \le \|A\|$. We denote $[n] \triangleq \{1, 2, \cdots, n\}$. 2 WELL-POSEDNESS AND GRADIENT COMPUTATION. In this section, we provide a simple condition for the equilibrium equation (3) to be well-posed in the sense that a unique equilibrium point exists. Instead of backpropagating through all the intermediate iterations of a forward pass, we derive the gradients of the trainable parameters by using the implicit function theorem.
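The implicit-function-theorem route to gradients can be illustrated on a linear implicit layer (identity activation), where the fixed point has a closed form; the linearization is our simplification for illustration, not the paper's ReLU setting:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 20
A = rng.standard_normal((m, m))
gt = 0.5 / np.linalg.norm(A, 2)     # gamma_tilde, ensuring a contraction
phi = rng.standard_normal(m)
u = rng.standard_normal(m)
y = 1.0

# Linear implicit layer: the equilibrium solves z = gt*A z + phi,
# i.e. z* = (I - gt*A)^{-1} phi.
def fixed_point(p):
    return np.linalg.solve(np.eye(m) - gt * A, p)

def loss(p):
    return 0.5 * (u @ fixed_point(p) - y) ** 2

# Implicit differentiation: dL/dphi = (I - gt*A)^{-T} ((u.z* - y) u),
# with no backpropagation through the forward iterations.
z = fixed_point(phi)
grad = np.linalg.solve((np.eye(m) - gt * A).T, (u @ z - y) * u)

# Finite-difference check of the first coordinate.
h = 1e-6
e0 = np.zeros(m); e0[0] = h
fd = (loss(phi + e0) - loss(phi - e0)) / (2 * h)
```

The gradient costs one extra linear solve at the equilibrium, independent of how many forward iterations were needed to reach it — the practical appeal of implicit differentiation.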
In this work, we make the following assumption on parameter initialization. Assumption 1 (Random Initialization). The entries $A_{ij}$ and $W_{ij}$ are randomly initialized from the standard Gaussian distribution $\mathcal{N}(0, 1)$, and $u_i$ and $v_i$ are randomly initialized from the symmetric Bernoulli (Rademacher) distribution. Remark 2.1. This initialization is similar to approaches widely used in practice (Glorot & Bengio, 2010; He et al., 2015). The results obtained in this work can be easily extended to the case where the distributions of $A_{ij}$, $W_{ij}$, $u_i$, and $v_i$ are replaced by sub-Gaussian random variables. 2.1 FORWARD PROPAGATION AND WELL-POSEDNESS. In a general implicit neural network, Eq. (3) is not necessarily well-posed, since it may admit zero or multiple solutions. In this work, we show that scaling the matrix $A$ with $\tilde{\gamma} = \gamma/\sqrt{m}$ guarantees the existence and uniqueness of the equilibrium point $z^*$ under random initialization. This follows from a foundational result in random matrix theory, restated in the following lemma. Lemma 2.1 (Vershynin (2018), Theorem 4.4.5). Let $A$ be an $m \times n$ random matrix whose entries $A_{ij}$ are independent, zero-mean, sub-Gaussian random variables. Then, for any $t > 0$, we have $\|A\| \le CK(\sqrt{m} + \sqrt{n} + t)$ with probability at least $1 - 2e^{-t^2}$, where $C > 0$ is a fixed constant and $K = \max_{i,j} \|A_{ij}\|_{\psi_2}$. Under Assumption 1, Lemma 2.1 implies that, with exponentially high probability, $\|A\| \le c\sqrt{m}$ for some constant $c > 0$. By scaling $A$ by a positive scalar $\tilde{\gamma}$, we show that the transition Eq. (1) is a contraction mapping; thus, the unique equilibrium point exists, with a detailed proof in Appendix A.1. Lemma 2.2. If $\|A\| \le c\sqrt{m}$ for some $c > 0$, then for any $\gamma_0 \in (0, 1)$, the scalar $\gamma \triangleq \min\{\gamma_0, \gamma_0/c\}$ guarantees the existence of a unique equilibrium $z^*$ for every $x$, and $\|z^{(\ell)}\| \le \frac{1}{1-\gamma_0}\|\phi\|$ for all $\ell$.
Lemma 2.2 indicates that an equilibrium always exists if we can keep the operator norm of the scaled matrix $(\gamma/\sqrt{m})A$ less than 1 during the training. However, the operator norm of $A$ changes with each gradient descent update. It is hard to use a fixed scalar $\gamma$ to scale the matrix $A$ over all iterations unless the operator norm of $A$ is bounded. In Section 3, we will show that $\|A\| \le 2c\sqrt{m}$ always holds throughout the training, provided that $\|A(0)\| \le c\sqrt{m}$ at initialization and the width $m$ is sufficiently large. Thus, by using the scalar $\gamma = \min\{\gamma_0, \gamma_0/(2c)\}$ for any $\gamma_0 \in (0, 1)$, equilibria always exist and the equilibrium equation Eq. (3) is well-posed.
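Lemmas 2.1–2.2 lend themselves to a quick numerical sanity check: for a square Gaussian $A$ the spectral norm concentrates near $2\sqrt{m}$, and with $\gamma = \min\{\gamma_0, \gamma_0/c\}$ every forward iterate stays inside the stated ball (sizes and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
m, d, gamma0 = 128, 16, 0.9

A = rng.standard_normal((m, m))
W = rng.standard_normal((m, d))
x = rng.standard_normal(d)
relu = lambda u: np.maximum(0.0, u)
phi = relu(W @ x) / np.sqrt(m)

# Lemma 2.1: ||A|| <= c*sqrt(m); for square Gaussian A, c is close to 2
# (the sqrt(m) + sqrt(n) = 2*sqrt(m) concentration).
c = np.linalg.norm(A, 2) / np.sqrt(m)

# Lemma 2.2's scaling makes ||(gamma/sqrt(m)) A|| <= gamma0 < 1.
gamma = min(gamma0, gamma0 / c)

# Every iterate obeys ||z^{(l)}|| <= ||phi|| / (1 - gamma0).
z = np.zeros(m)
bound = np.linalg.norm(phi) / (1 - gamma0)
within_bound = True
for _ in range(100):
    z = relu((gamma / np.sqrt(m)) * (A @ z) + phi)
    within_bound = within_bound and np.linalg.norm(z) <= bound + 1e-9
```

The bound follows from ReLU being 1-Lipschitz: each step gives $\|z^{(\ell)}\| \le \gamma_0\|z^{(\ell-1)}\| + \|\phi\|$, whose geometric sum is $\|\phi\|/(1-\gamma_0)$.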
The paper presents a proof of exponential convergence to global optimality in the over-parameterization setting for an implicit model with scaled weight parameters. Although existing work has established similar proofs for explicit feedforward neural networks, such methods do not work with nonlinearly activated implicit models, where the well-posedness issue poses challenges to the training process. The authors show that by scaling the weights, well-posedness can be ensured. The convergence result is obtained first in the continuous setting and is then extended to the discrete setting. Numerical experiments on real datasets confirm the findings.
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available. However, the high computational cost (e.g., 100 times more than that of the projected gradient descent attack) makes AA infeasible for practitioners with limited computational resources, and also hinders applications of AA in adversarial training (AT). In this paper, we propose a novel method, the minimum-margin (MM) attack, to quickly and reliably evaluate adversarial robustness. Compared with AA, our method achieves comparable performance but costs only 3% of the computational time in extensive experiments. The reliability of our method lies in evaluating the quality of adversarial examples using the margin between two targets, which precisely identifies the most adversarial example. The computational efficiency of our method lies in an effective Sequential TArget Ranking Selection (STARS) method, ensuring that the cost of the MM attack is independent of the number of classes. The MM attack opens a new way of evaluating adversarial robustness and contributes a feasible and reliable method for generating high-quality adversarial examples in AT. 1 INTRODUCTION. The deep neural network (DNN) has attracted a large number of researchers from different disciplines such as computer science (Goodfellow et al., 2016; Castelvecchi, 2016; Vaswani et al., 2017), physics (DeVries et al., 2018; Huang et al., 2019; Levine et al., 2019), biology (Maxmen, 2018a;b; Webb, 2018), and medicine (Hao et al., 2015; Esteva et al., 2017). The success of DNNs mainly lies in their ability to learn useful high-level features from abundant data (Deng & Yu, 2014; LeCun et al., 2015). These learned features have been successfully used to address many difficult tasks. For example, DNNs can recognize images with accuracy comparable to human beings (LeCun et al., 1998; Krizhevsky et al.
, 2012). In addition, DNNs are also widely used for speech recognition (Hinton et al., 2012), natural language processing (Andor et al., 2016), and playing games (Mnih et al., 2013; Silver et al., 2016). As the impact of DNNs grows rapidly, their reliability has become key to deploying them in real-world applications (Huang et al., 2011; Kurakin et al., 2017). Recently, a growing body of research shows that DNNs are vulnerable to adversarial examples, i.e., test inputs that are modified slightly yet strategically to cause misclassification (Szegedy et al., 2014; Nguyen et al., 2015; Kurakin et al., 2017; Carlini & Wagner, 2017a; Finlayson et al., 2019; Wang et al., 2019; Zhang et al., 2020b; Gao et al., 2021). The existence of such adversarial examples undoubtedly lowers the reliability of DNNs. Meanwhile, researchers have also been seeking a reliable way to evaluate the adversarial robustness of a DNN before deploying it in the real world. The high-level idea of evaluating the adversarial robustness of a DNN is quite straightforward: generate adversarial examples and calculate the accuracy of the DNN on these examples (this kind of accuracy is also known as adversarial robust accuracy). Szegedy et al. (2014) first pointed out the existence of adversarial examples and used a less powerful box-constrained L-BFGS method to generate them. Building on the studies in (Szegedy et al., 2014), Goodfellow et al. (2015) put forward the fast gradient sign method (FGSM). One common loss function they used is the cross-entropy (CE) loss; to maximize this loss function, FGSM uses its gradient to determine in which direction each pixel's intensity should be increased or decreased. Madry et al.
(2018) introduced a simple refinement of FGSM: the projected gradient descent attack (PGD), where instead of taking a single step of size $\epsilon$ in the direction of the gradient sign, multiple smaller steps are taken. Existing Evaluation Methods. PGD was an effective method to evaluate the adversarial robustness of a standard-trained DNN (Madry et al., 2018), since the adversarial robust accuracy of a standard-trained DNN is always very low after applying PGD. Nevertheless, the existence of adversarial examples has already inspired research on training robust DNNs to defend against them, which means that a standard-trained DNN is not the only DNN we might meet, and we need to evaluate the adversarial robustness of robust DNNs as well. Unfortunately, PGD fails to reliably evaluate the adversarial robustness of a robust DNN (Carlini & Wagner, 2017b; Croce & Hein, 2020). Carlini & Wagner (2017b) identified the phenomenon of gradient vanishing in the widely used CE loss as a potential cause of failure for L-BFGS, FGSM, and PGD, and replaced the CE loss with several possible alternatives. Croce & Hein (2020) claimed that the fixed step size and the use of a single attack are the causes of poor evaluations, and they put forward an ensemble of diverse attacks (consisting of APGD-CE, APGD-DLR, FAB, and Square) called AutoAttack (AA) to test adversarial robustness. Until now, AA has been the most reliable method to evaluate adversarial robustness (Croce & Hein, 2020). However, though AA performs well in reliability, it needs a large amount of computational time. As shown in Figure 1(a), for attacking a ResNet-18 model on CIFAR-10 (following the adversarial training in (Madry et al., 2018)), the computational cost of AA (or T-AA) is 65 times (or 100 times) more than the PGD used in (Madry et al., 2018), where T-AA is more time-consuming since it considers each target for APGD-DLR and FAB in AA.
Worse still, in the worst case, as analyzed in Appendix A, the computational cost of AA (or T-AA) is even 109 times (or 440 times) more than that of PGD. A Dilemma Between Reliability and Computational Efficiency. The high computational cost makes AA infeasible when considerable computational resources are unavailable. Unfortunately, such scenarios are common in the real world; e.g., for practitioners who need real-time evaluation at each epoch of the training process of a robust model, such a high computational cost is unacceptable. Similarly, since a large number of adversarial examples need to be generated at each epoch during adversarial training (AT), such a high computational cost hinders applications of AA in AT. In consideration of the high reliability but low computational efficiency of AA, and the high computational efficiency but low reliability of PGD, we seem to face a dilemma: we have to give up one factor (reliability or computational efficiency) when evaluating adversarial robustness. Our Reliable and Fast Solution. In this paper, we are dedicated to achieving reliability and computational efficiency simultaneously. For reliability, we evaluate the quality of adversarial examples using the margin between two targets, precisely identifying the most adversarial example. For computational efficiency, we put forward an effective Sequential TArget Ranking Selection (STARS) method to ensure that the cost of the MM attack is independent of the number of classes. Reliability. To achieve reliability, we investigate the reasons behind the failure of PGD. We identify that the CE loss, which is based on the probability of the true label $p_y$, is not an appropriate measure of the quality of adversarial examples. In Figure 2, we provide a simple demonstration of this issue, in which we consider one targeted false label $t$.
As we can see, it is much more reasonable to measure the quality of adversarial examples in terms of the margin of probability $p_y - p_t$. The most adversarial example in Figure 2 then corresponds to the one with the minimum margin of probability instead of the minimum probability $p_y$. A detailed study of the rationality of the minimum margin is provided in Section 3.1. Since the search space $\mathcal{B}$ of high-dimensional images is large (grey area), previous studies use gradient descent methods to generate the adversarial example that maximizes the loss function (Goodfellow et al., 2015; Carlini & Wagner, 2017b; Madry et al., 2018). Though it looks promising to generate adversarial examples via minimizing the margin of probability, we find that there are still two issues: (a) The probability $p$ is a kind of rescaling of the logits $z$. Croce & Hein (2020) heuristically rescaled the logits $z$ using their proposed DLR loss (defined in Eq. (6)). We investigate the performance of different rescaling methods in Section 3.1 and numerically find that using the natural logit margin $z_y - z_t$ with no rescaling performs the best. (b) For multi-class, untargeted attacks, the margin is $z_y - \max_{i \ne y} z_i$. However, $-(z_y - \max_{i \ne y} z_i)$ is not an appropriate loss function, because the max function only considers one target at the current step, while the unconsidered targets may lead to more adversarial examples. Hence, the reliable method is to minimize $z_y - z_t$ for each $t \ne y$ and take the most adversarial one, which is a widely used solution (Croce & Hein, 2020). Computational Efficiency. Although running the attack for each false target is reliable, the computational cost then depends on the number of classes. For datasets with a large number of classes, e.g., CIFAR-100 (100 classes) and ImageNet (1,000 classes), the computational cost increases accordingly.
To achieve computational efficiency, we propose a Sequential TArget Ranking Selection (STARS) method in Section 3.2 to make the computational time independent of the number of classes. According to the ranking of predicted probabilities, the STARS method selects only a few of the highest-ranked targets and runs a sequential attack. Experiments show that, benefiting from the STARS method, the MM attack can save 76.36% of the computational time on CIFAR-10, 98.51% on CIFAR-100, and 77.78% on SVHN. Taking all the above factors into consideration, we propose a novel method, the minimum-margin (MM) attack. Its detailed realization is provided in Section 3. We present extensive experimental results in Section 4, which verify that our MM attack can quickly and reliably evaluate adversarial robustness. In particular, the MM attack achieves comparable performance but costs only 3% of the computational time compared with the current benchmark AA. Hence, our proposed MM attack provides a new direction for evaluating adversarial robustness and contributes a feasible and reliable method to generate high-quality adversarial examples in AT. 2 PRELIMINARY. Neural Networks. A neural network is a function $f_\theta: \mathbb{R}^n \to [0, 1]^K$, where $\theta$ denotes the parameters of $f_\theta$ and $K$ is normally the number of classes. The output of the network is computed using the softmax function, which ensures that the output is a valid probability vector. Namely, given an input $x \in \mathbb{R}^n$, $f_\theta(x) = [p_1, \dots, p_K] = p$, where $\sum_{i=1}^K p_i = 1$ and $p_i$ is the probability that input $x$ belongs to class $i$. Before the softmax function, the output of the network is called the logits $z$, i.e., $p = \mathrm{softmax}(z)$. The classifier assigns the label $y = \arg\max_i f(x)_i$. Projected Gradient Descent Attack (PGD). Madry et al. (2018) proposed the projected gradient descent (PGD) attack to generate adversarial examples that mislead a well-trained classifier $f_\theta$.
Specifically, they start by setting $x^{(0)} = x$, and then in each iteration: $x'^{(t+1)} = \Pi_{\mathcal{B}_\epsilon[x^{(0)}]}\big(x'^{(t)} + \alpha\,\mathrm{sign}(\nabla_{x'^{(t)}} \ell(f_\theta(x'^{(t)}), y))\big), \; t = 0, 1, 2, \dots$, (1) where $\mathcal{B}_\epsilon[x] = \{x' \mid d_\infty(x, x') \le \epsilon\}$ (2) is the closed ball of radius $\epsilon > 0$ centered at $x$; $x^{(0)}$ is the starting point, which corresponds to the natural example (or the natural example perturbed by a small Gaussian or uniformly random noise); $x^{(t)}$ is the adversarial example at step $t$; $\Pi_{\mathcal{B}_\epsilon[x^{(0)}]}(\cdot)$ is the projection function that projects the adversarial variant back onto the $\epsilon$-ball centered at $x^{(0)}$ if necessary; the $\ell_\infty$ distance metric is $d_\infty(x, x') = \|x - x'\|_\infty$; and $\ell$ is the cross-entropy (CE) loss: $\mathrm{CE}(f, x, y) = -\log(p_y) = -z_y + \log\sum_{j=1}^K e^{z_j}$, (3) where $p_i = e^{z_i}/\sum_{j=1}^K e^{z_j}$, $i = 1, \dots, K$, and $z$ is the logits of the model outputs. Carlini and Wagner attack (CW). Carlini & Wagner (2017b) observed the phenomenon of gradient vanishing in the widely used CE loss as a potential cause of failure. The gradient w.r.t. $x$ in Eq. (3) is given by $\nabla_x \mathrm{CE}(f, x, y) = (-1 + p_y)\nabla_x z_y + \sum_{i \ne y} p_i \nabla_x z_i$. (4) If $p_y \approx 1$ and consequently $p_i \approx 0$ for $i \ne y$, then $\nabla_x \mathrm{CE}(x, y) \approx 0$ in Eq. (4). This gradient vanishing issue can lead to the failure of attacks. Motivated by this phenomenon, Carlini & Wagner (2017b) replaced the CE loss with several possible choices. Among these, the widely used one for the untargeted attack is $\mathrm{CW}(f, x, y) = -z_y(x') + \max_{i \ne y} z_i(x')$. (5) AutoAttack (AA). Croce & Hein (2020) claimed that the fixed step size and the lack of diversity in attack methods are the main reasons for the limitations of previous studies. Motivated by the line search method (Grippo et al., 1986), they put forward the adaptive projected gradient descent attack (APGD).
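A minimal NumPy sketch of the PGD update in Eqs. (1)–(2); the loss gradient here is a fixed linear placeholder (our assumption for illustration), not a real network's CE gradient:

```python
import numpy as np

def pgd_linf(grad_fn, x0, eps=0.3, alpha=0.05, steps=20):
    """PGD: repeatedly step by alpha in the gradient-sign direction and
    project back onto the l_inf ball of radius eps around x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)   # projection onto B_eps[x0]
    return x

# Placeholder loss gradient: for a linear score w.x, the gradient w.r.t. x
# is the constant w, so each coordinate saturates at eps * sign(w_i).
w = np.array([1.0, -2.0, 0.5])
x_adv = pgd_linf(lambda x: w, np.zeros(3))
```

With `alpha * steps = 1.0 > eps = 0.3`, the projection is what keeps the perturbation inside the $\epsilon$-ball, which is exactly the role of $\Pi_{\mathcal{B}_\epsilon[x^{(0)}]}$ in Eq. (1).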
They showed that using an adaptive step size significantly improves the evaluation of adversarial robustness compared with using a fixed step size. For the loss function in Eq. (5), they claim that scale invariance w.r.t. $z$ is necessary, and they proposed an alternative loss function: $\mathrm{DLR}(f, x, y) = -\frac{z_y(x') - \max_{i \ne y} z_i(x')}{z_{\pi_1}(x') - z_{\pi_3}(x')}$, (6) where $\pi$ is the permutation sorting the components of $z$ in decreasing order. For the targeted attack, they propose another alternative loss function: $\mathrm{Targeted\text{-}DLR}(f, x, y, t) = -\frac{z_y(x') - z_t(x')}{z_{\pi_1}(x') - \frac{1}{2}(z_{\pi_3}(x') + z_{\pi_4}(x'))}$. (7) To address the lack of diversity, they claimed that diverse attacks are beneficial for reliability, and they put forward an ensemble of various parameter-free attacks called AutoAttack (AA) to test adversarial robustness, where AA contains APGD-CE, APGD-DLR, FAB, and Square. Targeted AutoAttack (T-AA) replaces APGD-DLR with the targeted APGD-DLR, and replaces FAB with targeted FAB.
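The untargeted margin $z_y - \max_{i \ne y} z_i$ and the DLR rescaling of Eq. (6) can be computed directly from the definitions; the toy logits below are fabricated for illustration:

```python
import numpy as np

def margin(z, y):
    """Untargeted margin z_y - max_{i != y} z_i (natural logits, no rescaling)."""
    return z[y] - np.delete(z, y).max()

def dlr_loss(z, y):
    """DLR loss, Eq. (6): the margin rescaled by the gap between the
    largest and third-largest logits, making it scale-invariant in z."""
    pi = np.sort(z)[::-1]                 # logits in decreasing order
    return -margin(z, y) / (pi[0] - pi[2])

z = np.array([3.0, 1.0, 0.0, -1.0])
m_val = margin(z, y=0)        # 3 - 1 = 2
d_val = dlr_loss(z, y=0)      # -(3 - 1) / (3 - 0)
# Scale invariance: doubling the logits leaves DLR unchanged.
d_val2 = dlr_loss(2 * z, y=0)
```

The scale invariance shown on the last line is precisely the property Croce & Hein (2020) argue the plain CW margin of Eq. (5) lacks.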
The paper proposes a strong adversarial attack, i.e., an attack that can generate strong adversarial examples and thus better evaluate the adversarial robustness of given deep learning models. Compared with the SOTA attack, the proposed attack is much faster and thus easier to apply in practice. The idea is novel and the results are solid.
SP:291bb805fd27e09408f36ac44529a4e399838004
Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack
The AutoAttack ( AA ) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available . However , the high computational cost ( e.g. , 100 times more than that of the project gradient descent attack ) makes AA infeasible for practitioners with limited computational resources , and also hinders applications of AA in the adversarial training ( AT ) . In this paper , we propose a novel method , minimum-margin ( MM ) attack , to fast and reliably evaluate adversarial robustness . Compared with AA , our method achieves comparable performance but only costs 3 % of the computational time in extensive experiments . The reliability of our method lies in that we evaluate the quality of adversarial examples using the margin between two targets that can precisely identify the most adversarial example . The computational efficiency of our method lies in an effective Sequential TArget Ranking Selection ( STARS ) method , ensuring that the cost of the MM attack is independent of the number of classes . The MM attack opens a new way for evaluating adversarial robustness and contributes a feasible and reliable method to generate high-quality adversarial examples in AT . 1 INTRODUCTION . The deep neural network ( DNN ) has attracted a large number of researchers from different disciplines such as computer science ( Goodfellow et al. , 2016 ; Castelvecchi , 2016 ; Vaswani et al. , 2017 ) , physics ( DeVries et al. , 2018 ; Huang et al. , 2019 ; Levine et al. , 2019 ) , biology ( Maxmen , 2018a ; b ; Webb , 2018 ) and medicine ( Hao et al. , 2015 ; Esteva et al. , 2017 ) . The success of DNN mainly lies in its ability to learn useful high-level features from abundant data ( Deng & Yu , 2014 ; LeCun et al. , 2015 ) . These learned features have been successfully used to address many difficult tasks . For example , DNNs can recognize images with high accuracy comparable to human beings ( LeCun et al. , 1998 ; Krizhevsky et al. 
, 2012 ) . In addition , DNNs are also widely used for speech recognition ( Hinton et al. , 2012 ) , natural language processing ( Andor et al. , 2016 ) , and playing games ( Mnih et al. , 2013 ; Silver et al. , 2016 ) . As the impacts of DNN increase fast , its reliability has been a key to deploy it in real-world applications ( Huang et al. , 2011 ; Kurakin et al. , 2017 ) . Recently , a growing body of research shows that DNNs are vulnerable to adversarial examples , i.e. , test inputs that are modified slightly yet strategically to cause misclassification ( Szegedy et al. , 2014 ; Nguyen et al. , 2015 ; Kurakin et al. , 2017 ; Carlini & Wagner , 2017a ; Finlayson et al. , 2019 ; Wang et al. , 2019 ; Zhang et al. , 2020b ; Gao et al. , 2021 ) . The existence of such adversarial examples undoubtedly lowers the reliability of DNNs . Meanwhile , researchers have also been considering finding a reliable way to evaluate adversarial robustness of a DNN before deploying it in the real world . The high-level idea of evaluating adversarial robustness of a DNN is quite straightforward , i.e. , generating adversarial examples and calculating the accuracy of the DNN on these examples ( this kind of accuracy is also known as adversarial robust accuracy ) . Szegedy et al . ( 2014 ) first pointed out the existence of adversarial examples and used a less powerful box-constrained L-BFGS method to generate them . Based on the studies in ( Szegedy et al. , 2014 ) , Goodfellow et al . ( 2015 ) put forward the fast gradient sign method ( FGSM ) . One common loss function they used is cross-entropy ( CE ) loss , and to maximize the loss function , FGSM uses its gradient to determine in which direction the pixel ’ s intensity should be increased or decreased . Madry et al . 
( 2018 ) introduced a simple refinement of the FGSM : projected gradient descent attack ( PGD ) , where instead of taking a single step of size in the direction of the gradient sign , multiple smaller steps are taken . Existing Evaluation Methods . PGD was an effective method to evaluate adversarial robustness of a standard-trained DNN ( Madry et al. , 2018 ) since adversarial robust accuracy of a standard-trained DNN is always very low after using PGD . Nevertheless , the existence of adversarial examples has already inspired research on training a robust DNN to defend against them , which means that a standard-trained DNN is not the only DNN we might meet and we need to evaluate adversarial robustness of a robust DNN as well . Unfortunately , PGD fails to reliably evaluate adversarial robustness of a robust DNN ( Carlini & Wagner , 2017b ; Croce & Hein , 2020 ) . Carlini & Wagner ( 2017b ) observed the phenomenon of gradient vanishing in the widely used CE loss for the potential failure of L-BFGS , FGSM and PGD , and replaced the CE loss with many possible choices . Croce & Hein ( 2020 ) claimed that the fixed step size and the single attack used are the causes of poor evaluations , and they put forward an ensemble of diverse attacks ( consisting of APGD-CE , APGD-DLR , FAB and Square ) called AutoAttack ( AA ) to test adversarial robustness . Until now , AA has been the most reliable method to evaluate adversarial robustness ( Croce & Hein , 2020 ) . However , though AA performs well in reliability , it needs a large amount of computational time . As shown in Figure 1 ( a ) , for attacking a ResNet-18 model on CIFAR-10 ( following the adversarial training in ( Madry et al. , 2018 ) ) , the computational cost of AA ( or T-AA ) is 65 times ( or 100 times ) more than PGD used in ( Madry et al. , 2018 ) , where T-AA is more time-consuming since it considers each target for APGD-DLR and FAB in AA . 
Worse still , in the worst case as analyzed in Appendix A , the computational cost of AA ( or T-AA ) is even 109 times ( or 440 times ) more than PGD . A Dilemma Between Reliability and Computational Efficiency . The high computational cost makes AA infeasible when considerable computational resources are unavailable . Unfortunately , such scenarios are common in the real world , e.g. , for practitioners who need real-time evaluation at each epoch of the training process of a robust model , such high computational cost is unacceptable . Similarly , since a large number of adversarial examples need to be generated at each epoch during adversarial training ( AT ) , such high computational cost hinders applications of AA in AT . In consideration of the high reliability but low computational efficiency of AA , and the high computational efficiency but low reliability of PGD , we seem to encounter a dilemma : we have to consider giving up one factor ( reliability or computational efficiency ) when evaluating the adversarial robustness . Our Reliable and Fast Solution . In this paper , we are dedicated to achieving reliability and computational efficiency simultaneously . For reliability , we evaluate the quality of adversarial examples using the margin between two targets for precisely identifying the most adversarial example . For computational efficiency , we put forward an effective Sequential TArget Ranking Selection ( STARS ) method to ensure that the cost of the MM attack is independent of the number of classes . Reliability . To achieve reliability , we investigate the reasons behind the failure of PGD . We identify that CE loss , which is based on the probability of the true label py , is not an appropriate measure to the quality of adversarial examples . In Figure 2 , we provide a simple demonstration to this issue , in which we consider one targeted false label t. 
As we can see, it is much more reasonable to measure the quality of adversarial examples in terms of the margin of probability p_y − p_t. The most adversarial example in Figure 2 then corresponds to the one with the minimum margin of probability instead of the minimum probability p_y. A detailed study of the rationality of the minimum margin is provided in Section 3.1. Since the search space B of high-dimensional images is large (grey area), previous studies use gradient descent methods to generate the adversarial example that maximizes the loss function (Goodfellow et al., 2015; Carlini & Wagner, 2017b; Madry et al., 2018). Though it looks promising to generate adversarial examples via minimizing the margin of probability, we find that there are still two issues: (a) The probability p is a kind of rescaling of the logits z. Croce & Hein (2020) heuristically rescaled the logits z using their proposed DLR loss (defined in Eq. (6)). We investigate the performance of different rescaling methods in Section 3.1, and numerically find that using the natural logit margin z_y − z_t with no rescaling performs the best; (b) For multi-class, untargeted attacks, the margin is z_y − max_{i≠y} z_i. However, −(z_y − max_{i≠y} z_i) is not an appropriate loss function, because the max function only considers one target at the current step, while the unconsidered targets may lead to more adversarial examples. Hence, the reliable method is to minimize z_y − z_t for each t ≠ y and take the most adversarial one, which is a widely used solution (Croce & Hein, 2020). Computational Efficiency. Although running the attack for each false target is reliable, the computational cost depends on the number of classes. For datasets with a large number of classes, e.g., CIFAR-100 (100 classes) and ImageNet (1,000 classes), the computational cost increases accordingly.
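To make the margin criterion concrete, the following minimal sketch (our illustration, not the authors' code; the logit values are made up) contrasts the CE loss with the logit margin z_y − max_{i≠y} z_i on two candidate adversarial examples. CE, driven only by p_y, would rank the first candidate as more adversarial, while the margin correctly picks the second, whose runner-up class is nearly tied with the true label:

```python
import math

def ce_loss(logits, y):
    # cross-entropy as in Eq. (3): -z_y + log(sum_j e^{z_j})
    return -logits[y] + math.log(sum(math.exp(z) for z in logits))

def margin(logits, y):
    # untargeted logit margin: z_y - max_{i != y} z_i
    # (a negative margin means the example is already misclassified)
    return logits[y] - max(z for i, z in enumerate(logits) if i != y)

y = 0  # true label for a 10-class problem
cand_a = [1.0] + [-0.2] * 9        # p_y is low (mass spread over 9 runners-up)
cand_b = [1.0, 0.9] + [-5.0] * 8   # p_y is high, but one runner-up is very close

assert ce_loss(cand_a, y) > ce_loss(cand_b, y)  # CE would pick A as more adversarial
assert margin(cand_b, y) < margin(cand_a, y)    # the margin correctly picks B
```

Here candidate B is one small perturbation away from flipping the prediction (margin 0.1 vs. 1.2), even though its CE loss is the smaller of the two.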
To achieve computational efficiency, we propose a Sequential TArget Ranking Selection (STARS) method in Section 3.2 to make the computational time independent of the number of classes. According to the ranking of predicted probability, the STARS method selects only a few of the highest-ranked targets and runs a sequential attack. Experiments show that, benefiting from the STARS method, the MM attack saves 76.36% of the computational time on CIFAR-10, 98.51% on CIFAR-100 and 77.78% on SVHN. Taking all the above factors into consideration, we propose a novel method, the minimum-margin (MM) attack. Its detailed realization is provided in Section 3. We present extensive experimental results in Section 4, which verify that our MM attack can fast and reliably evaluate adversarial robustness. In particular, the MM attack achieves comparable performance but costs only 3% of the computational time compared with the current benchmark AA. Hence, our proposed MM attack provides a new direction for evaluating adversarial robustness and contributes a feasible and reliable method for generating high-quality adversarial examples in AT. 2 PRELIMINARY. Neural Networks. A neural network is a function f_θ : R^n → [0, 1]^K, where θ denotes the parameters of f_θ and K is normally the number of classes. The output of the network is computed using the softmax function, which ensures that the output is a valid probability vector. Namely, given an input x ∈ R^n, f_θ(x) = [p_1, ..., p_K] = p, where ∑_{i=1}^K p_i = 1 and p_i is the probability that input x has on class i. Before the softmax function, the output of the network is called the logits z, i.e., p = softmax(z). The classifier assigns the label y = arg max_i f(x)_i. Projected Gradient Descent Attack (PGD). Madry et al. (2018) proposed the projected gradient descent (PGD) attack to generate adversarial examples that mislead a well-trained classifier f_θ.
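The target-selection idea behind STARS can be sketched as follows (our illustrative simplification, not the authors' implementation): rank the false classes by predicted probability and attack only the top k of them, so the number of per-example attack runs stays constant as the number of classes grows.

```python
def stars_select_targets(probs, y, k=3):
    # rank false classes by predicted probability (descending) and keep
    # only the k most promising targets, so the per-example attack cost
    # no longer grows with the number of classes
    ranked = sorted((i for i in range(len(probs)) if i != y),
                    key=lambda i: probs[i], reverse=True)
    return ranked[:k]

# 10-class toy example: class 0 is the true label
probs = [0.40, 0.02, 0.25, 0.01, 0.15, 0.05, 0.04, 0.03, 0.03, 0.02]
targets = stars_select_targets(probs, y=0, k=3)
assert targets == [2, 4, 5]  # 3 sequential attacks instead of 9
```

The margin attack of the previous paragraph would then be run sequentially against each target in `targets`, keeping the most adversarial result.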
Specifically, they start by setting x'^{(0)} = x, and then in each iteration:

x'^{(t+1)} = Π_{B_ε[x^{(0)}]} ( x'^{(t)} + α sign(∇_{x'^{(t)}} ℓ(f_θ(x'^{(t)}), y)) ),  t = 0, 1, 2, ...,  (1)

where

B_ε[x] = { x' | d_∞(x, x') ≤ ε }  (2)

is the closed ball of radius ε > 0 centered at x; x^{(0)} refers to the starting point, which corresponds to the natural example (or the natural example perturbed by a small Gaussian or uniformly random noise); x'^{(t)} is the adversarial example at step t; Π_{B_ε[x^{(0)}]}(·) is the projection function that projects the adversarial variant back onto the ε-ball centered at x^{(0)} if necessary; the ℓ∞ distance metric is d_∞(x, x') = ‖x − x'‖_∞; and ℓ is the cross-entropy (CE) loss:

CE(f, x, y) = −log(p_y) = −z_y + log ∑_{j=1}^K e^{z_j},  (3)

where p_i = e^{z_i} / ∑_{j=1}^K e^{z_j}, i = 1, ..., K, and z is the logits of the model outputs. Carlini and Wagner attack (CW). Carlini & Wagner (2017b) attributed the potential failure of attacks to the phenomenon of gradient vanishing in the widely used CE loss. The gradient w.r.t. x in Eq. (3) is given by

∇_x CE(f, x, y) = (−1 + p_y) ∇_x z_y + ∑_{i≠y} p_i ∇_x z_i.  (4)

If p_y ≈ 1 and consequently p_i ≈ 0 for i ≠ y, then ∇_x CE(f, x, y) ≈ 0 in Eq. (4). This gradient vanishing issue can lead to the failure of attacks. Motivated by this phenomenon, Carlini & Wagner (2017b) replaced the CE loss with several possible choices. Among these choices, the widely used one for the untargeted attack is

CW(f, x, y) = −z_y(x') + max_{i≠y} z_i(x').  (5)

AutoAttack (AA). Croce & Hein (2020) claimed that the fixed step size and the lack of diversity in attack methods are the main reasons for the limitations of previous studies. Motivated by the line search method (Grippo et al., 1986), they put forward the adaptive projected gradient descent attack (APGD).
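A minimal, framework-free sketch of the PGD update in Eq. (1) (our toy version; a hand-supplied gradient function stands in for backpropagation through a network, and the ε-ball projection reduces to elementwise clipping):

```python
def pgd_linf(x0, grad_fn, eps=0.3, alpha=0.1, steps=10):
    # ascend the loss by the sign of its gradient (Eq. (1)), then project
    # back into the eps-ball B_eps[x0] via elementwise clipping
    x = list(x0)
    for _ in range(steps):
        g = grad_fn(x)
        x = [xi + alpha * (1 if gi > 0 else -1 if gi < 0 else 0)
             for xi, gi in zip(x, g)]
        # projection onto B_eps[x0]: clip each coordinate to [x0_i - eps, x0_i + eps]
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

# toy loss l(x) = sum(x), whose gradient is all ones: PGD should push
# every coordinate to the upper boundary of the eps-ball
x0 = [0.0, 0.5, -0.5]
x_adv = pgd_linf(x0, grad_fn=lambda x: [1.0] * len(x))
assert all(abs(a - (b + 0.3)) < 1e-9 for a, b in zip(x_adv, x0))
```

With α = 0.1 and ε = 0.3, the boundary is reached after three steps and the projection keeps the iterate there for the remaining steps.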
They showed that using an adaptive step size significantly strengthens the attack compared with using a fixed step size. For the loss function in Eq. (5), they claim that scale invariance w.r.t. z is necessary, and they proposed an alternative loss function:

DLR(f, x, y) = − (z_y(x') − max_{i≠y} z_i(x')) / (z_{π_1}(x') − z_{π_3}(x')),  (6)

where π is the permutation sorting the components of z in decreasing order. For the targeted attack, they propose another alternative loss function:

Targeted-DLR(f, x, y, t) = − (z_y(x') − z_t(x')) / (z_{π_1}(x') − (1/2)(z_{π_3}(x') + z_{π_4}(x'))).  (7)

Regarding the lack of diversity, they claimed that diverse attacks are beneficial for reliability, and they put forward an ensemble of various parameter-free attacks called AutoAttack (AA) to test adversarial robustness, where AA contains APGD-CE, APGD-DLR, FAB and Square. Targeted AutoAttack (T-AA) replaces APGD-DLR with the targeted APGD-DLR, and replaces FAB with targeted FAB.
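The DLR loss of Eq. (6) can be sketched as follows (our illustrative code, not the official implementation); the assertion checks the scale-invariance property the loss is designed for:

```python
def dlr_loss(logits, y):
    # DLR of Eq. (6): the margin z_y - max_{i != y} z_i, rescaled by the gap
    # between the largest and third-largest logits (z_pi1 - z_pi3), which
    # makes the loss invariant to a positive rescaling of the logits
    z_sorted = sorted(logits, reverse=True)  # z_pi1 >= z_pi2 >= ...
    runner_up = max(z for i, z in enumerate(logits) if i != y)
    return -(logits[y] - runner_up) / (z_sorted[0] - z_sorted[2])

z = [4.0, 3.0, 1.0, 0.0]
# scale invariance: multiplying all logits by a constant leaves DLR unchanged
assert abs(dlr_loss(z, 0) - dlr_loss([2 * v for v in z], 0)) < 1e-9
```

For z above, DLR = −(4 − 3)/(4 − 1) = −1/3, unchanged when every logit is doubled, whereas the raw CW margin of Eq. (5) doubles.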
This paper proposes a minimum-margin (MM) attack to evaluate defenses. The authors report detailed results on the effects of different loss functions. Experiments are done on CIFAR-10/100 and SVHN, against the adversarially trained models.
SP:291bb805fd27e09408f36ac44529a4e399838004
Learning to Remember Patterns: Pattern Matching Memory Networks for Traffic Forecasting
1 INTRODUCTION. Traffic forecasting is a challenging problem due to complex road networks, varying patterns in the data, and intertwined dependencies among roads. This implies that prediction methods should not only find intrinsic spatio-temporal dependencies among many roads, but also quickly respond to irregular congestion and various traffic patterns (Lee et al., 2020) caused by external factors, such as accidents or weather conditions (Vlahogianni et al., 2014; Li & Shahabi, 2018; Xie et al., 2020; Jiang & Luo, 2021). To resolve these challenges and successfully predict traffic conditions, many deep learning models have been proposed. Examples include models with graph convolutional neural networks (GCNs) (Bruna et al., 2014) and recurrent neural networks (RNNs) (Siegelmann & Sontag, 1991), which outperform conventional statistical methods such as the autoregressive integrated moving average (ARIMA) (Vlahogianni et al., 2014; Li et al., 2018). Attention-based models, such as GMAN (Zheng et al., 2020), have also been explored to better handle the complex spatio-temporal dependency of traffic data. Graph WaveNet (Wu et al., 2019) adopts a diffusion process with a self-learning adjacency matrix and dilated convolutional neural networks (CNNs), achieving state-of-the-art performance. Although effective, existing models have a weakness in that they do not forecast accurately when conditions change abruptly (e.g., rush hours and accidents). In this work, we aim to design a novel method for modeling the spatio-temporal dependencies of roads and to improve forecasting performance. To achieve this goal, we first extract representative traffic patterns from historical traffic data, as we find that there are similar traffic patterns among roads, and a set of traffic patterns can be generalized for roads with similar spatio-temporal features.
Figure 1 shows example speed patterns (left, 90-minute window) that we extract from many different roads and a representative traffic pattern (right time series). With the representative patterns, we transform the conventional forecasting problem into a pattern-matching task: finding which pattern best matches the given spatio-temporal features in order to predict future traffic conditions. Drawing on the huge success of neural memory networks in natural language processing and machine translation (Weston et al., 2015; Sukhbaatar et al., 2015; Kaiser et al., 2017; Madotto et al., 2018), we design graph convolutional memory networks, called GCMem, to manage representative patterns from a spatio-temporal perspective. Lastly, we design PM-MemNet, which utilizes the representative patterns from GCMem for traffic forecasting. PM-MemNet consists of an encoder and a decoder. The encoder consists of a temporal embedding with stacked GCMem, which generates meaningful representations via memorization, and the decoder is composed of a gated recurrent unit (GRU) with GCMem. We compare PM-MemNet to existing state-of-the-art models and find that PM-MemNet outperforms them. We also present a qualitative analysis in which we further investigate the strengths of PM-MemNet in managing traffic patterns where high responsiveness of a model to abrupt speed changes is desired for accurate forecasting. The experimental results indicate that PM-MemNet achieves state-of-the-art performance, especially in long-term prediction, compared to existing deep learning models. To further investigate the characteristics of PM-MemNet, we conduct an ablation study with various decoder architectures and find that PM-MemNet demonstrates the best performance. We also investigate how the number of representative patterns affects model performance.
Finally, we discuss the limitations of this work and future directions for neural memory networks in the traffic forecasting domain. The contributions of this work include: (1) computing representative traffic patterns of roads, (2) the design of GCMem to manage the representative patterns, (3) the design of PM-MemNet, which matches and uses the most appropriate patterns from GCMem for traffic forecasting, (4) an evaluation of PM-MemNet against state-of-the-art models, (5) a qualitative analysis identifying the strengths of PM-MemNet, and (6) a discussion of limitations and future research directions. 2 RELATED WORK. 2.1 TRAFFIC FORECASTING. Deep learning models achieve huge success by effectively capturing spatio-temporal features in traffic forecasting tasks. Past studies have shown that RNN-based models outperform conventional temporal modeling approaches, such as ARIMA and support vector regression (SVR) (Vlahogianni et al., 2014; Li et al., 2018). More recently, many studies have demonstrated that attention-based models (Zheng et al., 2020; Park et al., 2020) and CNNs (Yu et al., 2018; Wu et al., 2019) record better performance than RNN-based models in long-term prediction tasks. In terms of spatial modeling, Zhang et al. (2016) propose a CNN-based spatial modeling method for Euclidean space. Another line of modeling methods, such as GCNs, which use graph structures to manage complex road networks, has also become popular. However, there are difficulties in using GCNs in the modeling process, such as the need to build an adjacency matrix and the dependence of GCNs on invariant connectivity in the adjacency matrix. To overcome these difficulties, a set of approaches, such as graph attention networks (GATs), have been proposed to dynamically calculate edge importance (Park et al., 2020). GWNet (Wu et al.
, 2019) adopts a self-adaptive adjacency matrix to capture hidden spatial dependencies during training. Although effective, forecasting models still suffer from inaccurate predictions due to abruptly changing road speeds and instability, with lagging patterns over long-term periods. To address these challenges, we build, save, and retrieve representative traffic patterns for predicting speed, rather than forecasting directly from an input sequence. 2.2 NEURAL MEMORY NETWORKS. Neural memory networks are widely used for sequence-to-sequence modeling in the natural language processing and machine translation domains. Memory networks were first proposed by Weston et al. (2015) to answer queries more precisely, even for large datasets, with long-term memory. Memory networks perform read and write operations for given input queries. Sukhbaatar et al. (2015) introduce end-to-end memory networks that can update memory in an end-to-end manner. Through end-to-end memory learning, the models can be easily applied to realistic settings. Furthermore, by using adjacent weight tying, they can achieve recurrent characteristics that enhance generalization. Kaiser et al. (2017) propose novel memory networks that can be utilized in various domains where life-long one-shot learning is needed. Madotto et al. (2018) also introduce Mem2Seq, which integrates the multi-hop attention mechanism with memory networks. In our work, we utilize memory networks for traffic pattern modeling due to the similarity of the tasks, and we develop novel graph convolutional memory networks, called GCMem, to better model the spatio-temporal correlation of the given traffic patterns. 3 PROPOSED APPROACH. In this section, we define the traffic forecasting problem, describe how we extract key patterns in the traffic data that serve as keys, and introduce our model, PM-MemNet. 3.1 PROBLEM SETTING. To handle the spatial relationships of roads, we utilize a road network graph.
We define a road network graph as G = (V, E, A), where V is the set of nodes with |V| = N, E is the set of edges representing the connectivity between nodes, and A ∈ R^{N×N} is a weighted adjacency matrix that contains the connectivity and edge-weight information. An edge weight is calculated based on the distance and direction of the edge between two connected nodes. As in previous approaches (Li et al., 2018; Wu et al., 2019; Zheng et al., 2020; Park et al., 2020), we calculate edge weights via the Gaussian kernel as follows:

A_{i,j} = exp(−dist_{ij}² / σ²),

where dist_{ij} is the distance between node i and node j, and σ is the standard deviation of the distances. Prior research has formulated traffic forecasting as a simple spatio-temporal data prediction problem (Li et al., 2018; Wu et al., 2019; Zheng et al., 2020; Park et al., 2020), aiming to predict the values of the next T time steps using the previous T′ historical observations and an adjacency matrix. Traffic data at time t is represented by a graph signal matrix X_G^{(t)} ∈ R^{N×d_in}, where d_in is the number of features, such as speed, flow, and time of day. In summary, the goal of the previous work is to learn a mapping function f(·) that directly predicts the future T graph signals from the T′ historical input graph signals:

[X_G^{(t−T′+1)}, ..., X_G^{(t)}] −f(·)→ [X_G^{(t+1)}, ..., X_G^{(t+T)}].

The goal of this study differs from previous work in that we aim to predict future traffic speeds from patterned data, instead of utilizing the input X_G directly. We denote by P ⊂ R^{T′} a set of representative traffic patterns, by p ∈ P one traffic pattern in P, and by d : X × P → [0, ∞) a distance function for pattern matching. Detailed information about traffic pattern extraction is discussed in the next subsection.
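The Gaussian-kernel edge weighting can be sketched as follows (our toy code; representing a missing edge with `None` is an assumption of this example, and real pipelines typically also threshold small weights to zero):

```python
import math

def gaussian_kernel_weights(dist, sigma):
    # A_{i,j} = exp(-dist_{ij}^2 / sigma^2); unconnected pairs get weight 0
    n = len(dist)
    return [[math.exp(-(dist[i][j] ** 2) / sigma ** 2)
             if dist[i][j] is not None else 0.0
             for j in range(n)] for i in range(n)]

# 3-node toy road network; None marks a missing edge
dist = [[0.0, 1.0, None],
        [1.0, 0.0, 2.0],
        [None, 2.0, 0.0]]
A = gaussian_kernel_weights(dist, sigma=2.0)
assert A[0][0] == 1.0 and A[0][2] == 0.0
assert abs(A[0][1] - math.exp(-0.25)) < 1e-9
```

Closer node pairs receive weights near 1, distant pairs decay toward 0, and σ (the standard deviation of the distances) controls how quickly the decay happens.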
Our problem is to train the mapping function f(·) as follows:

[X_G^{(t−T′+1)}, ..., X_G^{(t)}] −d(·), k-NN→ [P_1^t, ..., P_N^t] −f(·)→ [X_G^{(t+1)}, ..., X_G^{(t+T)}],

where P_i^t = {p_1, ..., p_k} is the set of k-nearest-neighbor traffic patterns of node i at time t under the distance function d. Note that p_j is the j-th nearest-neighbor pattern. 3.2 KEY EXTRACTION FROM TRAFFIC PATTERNS. Analyzing the traffic data, we find that the data has repeating patterns. In traffic data, the leading and trailing patterns have a high correlation, even over short-term periods. To take advantage of these findings, our model builds a representative pattern set P. First, from historical data, we compute an average daily pattern, which consists of 288 speed data points (24 hours at 5-minute intervals) for each vertex v ∈ V. We then extract patterns p by slicing the daily patterns with a given window size T′, as shown in Figure 2(a). At this stage, |P| = N × ⌊288/T′⌋. After we collect the patterns, we investigate the similarity distribution of the extracted pattern set P via cosine similarity (Figure 2(b)) and find that the pattern set P has a biased distribution with too many similar patterns (i.e., class imbalance). Since such class imbalance hinders accurate memory retrieval and biases training, we use clustering-based undersampling (Lin et al., 2017) with cosine similarity. For example, if pattern p and pattern p′ have a cosine similarity larger than δ, they are placed in the same cluster. We use the center of each cluster as the representative pattern of that cluster. After undersampling by clustering, we have a balanced and representative pattern set P, as shown in Figure 2(c), which we use as the keys for memory access. Table 2 presents the effect of different δ and |P| on forecasting performance.
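The pattern extraction and clustering-based undersampling above can be sketched as follows (our simplified greedy variant: the first pattern of a cluster stands in for the cluster center, and a toy 6-point "day" replaces the 288-point daily profile):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def slice_patterns(daily, window):
    # slice a daily profile into floor(len(daily)/window) non-overlapping windows
    return [daily[i:i + window] for i in range(0, len(daily) - window + 1, window)]

def undersample(patterns, delta):
    # greedy clustering: a pattern joins an existing cluster if its cosine
    # similarity to that cluster's representative exceeds delta,
    # otherwise it starts a new cluster
    reps = []
    for p in patterns:
        if not any(cosine(p, r) > delta for r in reps):
            reps.append(p)
    return reps

daily = [30.0, 31.0, 60.0, 62.0, 30.5, 30.8]  # toy 6-point "day", window T' = 2
pats = slice_patterns(daily, 2)
assert len(pats) == 3
reps = undersample(pats, delta=0.999)
assert len(reps) == 1  # all three windows point in nearly the same direction
```

In the real setting the retained representatives become the memory keys, and at inference time each node's recent window is matched to its k nearest keys under the distance function d.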
This paper studies the traffic forecasting problem and proposes to conduct prediction by pattern matching. The authors first extract key patterns from the historical data in an offline manner and then fetch the patterns for each time series with a distance function (e.g., cosine similarity). Then, the patterns of different nodes interact through a GCN to obtain node representations.
SP:4a83a8ba8190703c509ecc17fbdc70e82e67d6c8